Martin Blumenstingl [Mon, 1 Jul 2019 22:42:25 +0000 (00:42 +0200)]
net: stmmac: make "snps,reset-delays-us" optional again
Commit 760f1dc2958022 ("net: stmmac: add sanity check to device_property_read_u32_array call") introduced error checking of the
device_property_read_u32_array() call in stmmac_mdio_reset().
This results in the following error when the "snps,reset-delays-us"
property is not defined in devicetree:
invalid property snps,reset-delays-us
This sanity check made sense until commit 84ce4d0f9f55b4 ("net: stmmac: initialize the reset delay array") ensured that there are fallback
values for the reset delay if the "snps,reset-delays-us" property is
absent. That was at the cost of making that property mandatory though.
Drop the sanity check for device_property_read_u32_array() and thus make
the "snps,reset-delays-us" property optional again (avoiding the error
message while loading the stmmac driver with a .dtb where the property
is absent).
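A minimal sketch of the resulting pattern, assuming illustrative fallback values (not the actual stmmac defaults): the delays array is pre-populated, so the property lookup's return value can simply be ignored.
#include <linux/property.h>

/* Hypothetical excerpt: delays[] already holds sane defaults, so a
 * missing "snps,reset-delays-us" property leaves them untouched and
 * the return value does not need checking.
 */
static void example_read_reset_delays(struct device *dev, u32 delays[3])
{
	delays[0] = 0;		/* pre-reset delay (us), assumed fallback */
	delays[1] = 10000;	/* reset pulse width (us), assumed fallback */
	delays[2] = 10000;	/* post-reset delay (us), assumed fallback */

	/* No error check: the property is optional again. */
	device_property_read_u32_array(dev, "snps,reset-delays-us",
				       delays, 3);
}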
Fixes: 760f1dc2958022 ("net: stmmac: add sanity check to device_property_read_u32_array call")
Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 1 Jul 2019 17:48:51 +0000 (10:48 -0700)]
bonding/main: fix NULL dereference in bond_select_active_slave()
A bonding master can be up while best_slave is NULL.
[12105.636318] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[12105.638204] mlx4_en: eth1: Linkstate event 1 -> 1
[12105.648984] IP: bond_select_active_slave+0x125/0x250
[12105.653977] PGD 0 P4D 0
[12105.656572] Oops: 0000 [#1] SMP PTI
[12105.660487] gsmi: Log Shutdown Reason 0x03
[12105.664620] Modules linked in: kvm_intel loop act_mirred uhaul vfat fat stg_standard_ftl stg_megablocks stg_idt stg_hdi stg elephant_dev_num stg_idt_eeprom w1_therm wire i2c_mux_pca954x i2c_mux mlx4_i2c i2c_usb cdc_acm ehci_pci ehci_hcd i2c_iimc mlx4_en mlx4_ib ib_uverbs ib_core mlx4_core [last unloaded: kvm_intel]
[12105.685686] mlx4_core 0000:03:00.0: dispatching link up event for port 2
[12105.685700] mlx4_en: eth2: Linkstate event 2 -> 1
[12105.685700] mlx4_en: eth2: Link Up (linkstate)
[12105.724452] Workqueue: bond0 bond_mii_monitor
[12105.728854] RIP: 0010:bond_select_active_slave+0x125/0x250
[12105.734355] RSP: 0018:ffffaf146a81fd88 EFLAGS: 00010246
[12105.739637] RAX: 0000000000000003 RBX: ffff8c62b03c6900 RCX: 0000000000000000
[12105.746838] RDX: 0000000000000000 RSI: ffffaf146a81fd08 RDI: ffff8c62b03c6000
[12105.754054] RBP: ffffaf146a81fdb8 R08: 0000000000000001 R09: ffff8c517d387600
[12105.761299] R10: 00000000001075d9 R11: ffffffffaceba92f R12: 0000000000000000
[12105.768553] R13: ffff8c8240ae4800 R14: 0000000000000000 R15: 0000000000000000
[12105.775748] FS:  0000000000000000(0000) GS:ffff8c62bfa40000(0000) knlGS:0000000000000000
[12105.783892] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[12105.789716] CR2: 0000000000000000 CR3: 0000000d0520e001 CR4: 00000000001626f0
[12105.796976] Call Trace:
[12105.799446] [<ffffffffac31d387>] bond_mii_monitor+0x497/0x6f0
[12105.805317] [<ffffffffabd42643>] process_one_work+0x143/0x370
[12105.811225] [<ffffffffabd42c7a>] worker_thread+0x4a/0x360
[12105.816761] [<ffffffffabd48bc5>] kthread+0x105/0x140
[12105.821865] [<ffffffffabd42c30>] ? rescuer_thread+0x380/0x380
[12105.827757] [<ffffffffabd48ac0>] ? kthread_associate_blkcg+0xc0/0xc0
[12105.834266] [<ffffffffac600241>] ret_from_fork+0x51/0x60
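A hedged sketch of the kind of guard this implies (not necessarily the exact patch): the logging path must not hand a NULL best_slave to the slave_*() printk macros.
#include <net/bonding.h>

/* Illustrative only: fall back to netdev_info() when there is no
 * best slave, instead of dereferencing a NULL slave for logging.
 */
static void example_report_new_active(struct bonding *bond,
				      struct slave *best_slave)
{
	if (best_slave)
		slave_info(bond->dev, best_slave->dev,
			   "active interface changed\n");
	else
		netdev_info(bond->dev,
			    "now running without any active interface!\n");
}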
Fixes: e2a7420df2e0 ("bonding/main: convert to using slave printk macros")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: John Sperbeck <jsperbeck@google.com>
Cc: Jarod Wilson <jarod@redhat.com>
CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Mon, 1 Jul 2019 16:57:19 +0000 (00:57 +0800)]
tipc: remove ub->ubsock checks
Both tipc_udp_enable and tipc_udp_disable are called under rtnl_lock,
so ub->ubsock can never be NULL in tipc_udp_disable and cleanup_bearer;
remove the check.
Also remove the one in tipc_udp_enable by adding a "free" label.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stefano Brivio [Sat, 29 Jun 2019 17:55:08 +0000 (19:55 +0200)]
ipv4: Fix off-by-one in route dump counter without netlink strict checking
In commit ee28906fd7a1 ("ipv4: Dump route exceptions if requested") I
added a counter of per-node dumped routes (including actual routes and
exceptions), analogous to the existing counter for dumped nodes. Dumping
exceptions means we need to also keep track of how many routes are dumped
for each node: this would be just one route per node, without exceptions.
When netlink strict checking is not enabled, we dump both routes and
exceptions at the same time: the RTM_F_CLONED flag is not used as a
filter. In this case, the per-node counter 'i_fa' is incremented by one
to track the single dumped route, then also incremented by one for each
exception dumped, and then stored as netlink callback argument as skip
counter, 's_fa', to be used when a partial dump operation restarts.
The per-node counter needs to be increased by one also when we skip a
route (exception) due to a previous non-zero skip counter, because it
needs to match the existing skip counter, if we are dumping both routes
and exceptions. I missed this and, for regular routes, only incremented
the counter if the previous skip counter was zero. This means that,
in case of a mixed dump, partial dump operations after the first one
will start with a mismatching skip counter value, one less than expected.
This means in turn that the first exception for a given node is skipped
every time a partial dump operation restarts, if netlink strict checking
is not enabled (iproute < 5.0).
It turns out I didn't repeat the test in its final version, commit
de755a85130e ("selftests: pmtu: Introduce list_flush_ipv4_exception test
case"), which also counts the number of route exceptions returned, with
iproute2 versions < 5.0 -- I was instead using the equivalent of the IPv6
test as it was before commit b964641e9925 ("selftests: pmtu: Make
list_flush_ipv6_exception test more demanding").
Always increment the per-node counter by one if we previously dumped
a regular route, so that it matches the current skip counter.
Fixes: ee28906fd7a1 ("ipv4: Dump route exceptions if requested")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
René van Dorst [Sat, 29 Jun 2019 12:24:51 +0000 (14:24 +0200)]
net: ethernet: mediatek: Allow non TRGMII mode with MT7621 DDR2 devices
There is no reason to error out on an MT7621 device with DDR2 memory when
non-TRGMII mode is selected.
Only the MT7621 DDR2 clock setup for TRGMII mode is unsupported; non-TRGMII
mode doesn't need any special clock setup.
Signed-off-by: René van Dorst <opensource@vdorst.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Tue, 2 Jul 2019 14:55:28 +0000 (15:55 +0100)]
rxrpc: Fix uninitialized error code in rxrpc_send_data_packet()
With gcc 4.1:
net/rxrpc/output.c: In function ‘rxrpc_send_data_packet’:
net/rxrpc/output.c:338: warning: ‘ret’ may be used uninitialized in this function
Indeed, if the first jump to the send_fragmentable label is made, and
the address family is not handled in the switch() statement, ret will be
used uninitialized.
Fix this by BUG()'ing as is done in other places in rxrpc where internal
support for future address families will need adding. It should not be
possible to reach this normally as the address families are checked
up-front.
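A hedged sketch of the pattern described, not the rxrpc code verbatim: every supported address family assigns ret, anything else hits BUG(), so ret can never be read uninitialized.
#include <linux/bug.h>
#include <linux/socket.h>

static int example_send_fragmentable(int family)
{
	int ret;

	switch (family) {
	case AF_INET:
		ret = 0;	/* ... kernel_sendmsg() over IPv4 ... */
		break;
	case AF_INET6:
		ret = 0;	/* ... kernel_sendmsg() over IPv6 ... */
		break;
	default:
		BUG();		/* unsupported families are rejected up-front */
	}

	return ret;
}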
Fixes: 5a924b8951f835b5 ("rxrpc: Don't store the rxrpc header in the Tx queue sk_buffs")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Tue, 2 Jul 2019 13:16:42 +0000 (14:16 +0100)]
nfc: st-nci: remove redundant assignment to variable r
The variable r is being initialized with a value that is never
read and it is being updated later with a new value. The
initialization is redundant and can be removed.
Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xue Chaojing [Mon, 1 Jul 2019 23:40:00 +0000 (23:40 +0000)]
hinic: remove standard netdev stats
This patch removes standard netdev stats in ethtool -S.
Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Xue Chaojing <xuechaojing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Tue, 2 Jul 2019 09:12:10 +0000 (11:12 +0200)]
net: stmmac: Re-word Kconfig entry
We support many speeds and it doesn't make much sense to list them all
in the Kconfig. Let's just call it Multi-Gigabit.
Suggested-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 02:36:35 +0000 (19:36 -0700)]
Merge branch 'Add-gve-driver'
Catherine Sullivan says:
====================
Add gve driver
This patch series adds the gve driver which will support the
Compute Engine Virtual NIC that will be available in the future.
v2:
- Patch 1:
- Remove gve_size_assert.h and use static_assert instead.
- Loop forever instead of bugging if the device won't reset
- Use module_pci_driver
- Patch 2:
- Use be16_to_cpu in the RX Seq No define
- Remove unneeded ndo_change_mtu
- Patch 3:
- No Changes
- Patch 4:
- Instead of checking netif_carrier_ok in ethtool stats, just make sure
v3:
- Patch 1:
- Remove X86 dep
- Patch 2:
- No changes
- Patch 3:
- No changes
- Patch 4:
- Remove unneeded memsets in ethtool stats
v4:
- Patch 1:
- Use io[read|write]32be instead of [read|write]l(cpu_to_be32())
- Explicitly add padding to gve_adminq_set_driver_parameter
- Use static where appropriate
- Patch 2:
- Use u64_stats_sync
- Explicitly add padding to gve_adminq_create_rx_queue
- Fix some endianness typing issues found by kbuild
- Use static where appropriate
- Remove unused variables
- Patch 3:
- Use io[read|write]32be instead of [read|write]l(cpu_to_be32())
- Patch 4:
- Use u64_stats_sync
- Use static where appropriate
Warnings reported by:
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Catherine Sullivan [Mon, 1 Jul 2019 22:57:55 +0000 (15:57 -0700)]
gve: Add ethtool support
Add support for the following ethtool commands:
ethtool -s|--change devname [msglvl N] [msglevel type on|off]
ethtool -S|--statistics devname
ethtool -i|--driver devname
ethtool -l|--show-channels devname
ethtool -L|--set-channels devname
ethtool -g|--show-ring devname
ethtool --reset devname
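A hedged sketch of how such a command set typically maps onto struct ethtool_ops; the handler names here are assumptions for illustration, not necessarily the ones used by gve.
#include <linux/ethtool.h>

/* Hypothetical mapping of the supported ethtool commands to ops. */
static const struct ethtool_ops gve_ethtool_ops_sketch = {
	.get_drvinfo		= gve_get_drvinfo,	/* ethtool -i */
	.get_msglevel		= gve_get_msglevel,	/* ethtool -s msglvl */
	.set_msglevel		= gve_set_msglevel,
	.get_strings		= gve_get_strings,	/* ethtool -S */
	.get_sset_count		= gve_get_sset_count,
	.get_ethtool_stats	= gve_get_ethtool_stats,
	.get_channels		= gve_get_channels,	/* ethtool -l */
	.set_channels		= gve_set_channels,	/* ethtool -L */
	.get_ringparam		= gve_get_ringparam,	/* ethtool -g */
	.reset			= gve_reset,		/* ethtool --reset */
};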
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: Sagi Shahar <sagis@google.com>
Signed-off-by: Jon Olson <jonolson@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Catherine Sullivan [Mon, 1 Jul 2019 22:57:54 +0000 (15:57 -0700)]
gve: Add workqueue and reset support
Add support for the workqueue to handle management interrupts and
support for resets.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: Sagi Shahar <sagis@google.com>
Signed-off-by: Jon Olson <jonolson@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Catherine Sullivan [Mon, 1 Jul 2019 22:57:53 +0000 (15:57 -0700)]
gve: Add transmit and receive support
Add support for passing traffic.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: Sagi Shahar <sagis@google.com>
Signed-off-by: Jon Olson <jonolson@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Catherine Sullivan [Mon, 1 Jul 2019 22:57:52 +0000 (15:57 -0700)]
gve: Add basic driver framework for Compute Engine Virtual NIC
Add a driver framework for the Compute Engine Virtual NIC that will be
available in the future.
At this point the only functionality is loading the driver.
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: Sagi Shahar <sagis@google.com>
Signed-off-by: Jon Olson <jonolson@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 02:34:46 +0000 (19:34 -0700)]
Merge branch 'blackhole-device-to-invalidate-dst'
Mahesh Bandewar says:
====================
blackhole device to invalidate dst
When we invalidate a dst or mark it "dead", we assign 'lo' to
dst->dev. First of all this assignment is racy and, moreover,
it has MTU implications.
The standard device MTU is 1500 while the loopback MTU is 64k. The TCP
code doesn't check whether the dst is valid when dereferencing it, so
TCP, when dereferencing a dead dst while negotiating a new connection,
may use 'lo' as the dst device instead of
using the correct device. Consider the following scenario:
A SYN arrives on an interface, and the TCP layer, while processing the
SYNACK, finds a dst and associates it with the SYNACK skb. Now, before
the skb gets passed to L3 for processing, if that dst becomes "dead"
(because the virtual device disappeared and then reappeared),
'lo' gets assigned to that dst (lo MTU = 64k). Let's assume
the SYN has ADV_MSS set to 9k while the output device through
which this SYNACK is going to go out has a standard MTU of 1500.
The MTU check during the route check passes, since MIN(9k, 64k)
is 9k, and TCP successfully negotiates a 9k MSS. The subsequent,
larger data packet gets passed to the device and it
won't be marked as GSO since the assumed MTU of the device is
9k.
This can crash the NIC, and we have seen fixes go
into drivers to handle this scenario:
8914a595110a ('bnx2x: disable GSO where gso_size is too big for hardware') and
2b16f048729b ('net: create skb_gso_validate_mac_len()'), and
with those fixes TCP eventually recovers, but not before a
few dropped segments.
Well, I'm not a TCP expert and though we have experienced
these corner cases in our environment, I could not reproduce
this case reliably in my test setup to try this fix myself.
However, Michael Chan <michael.chan@broadcom.com> had a setup
where these fixes helped him mitigate the issue and not cause
the crash.
The idea here is to not alter the data path with additional
locks or memory barriers to avoid the racy assignment, but
to create a new device that has a really low MTU and whose
.ndo_start_xmit is essentially a kfree_skb(). Make use of this
device instead of 'lo' when marking the dst dead.
The first patch implements the blackhole device, the second
patch uses it in the IPv4 and IPv6 stacks, and the third patch
is the self-test that ensures the sanity of this device.
v1->v2
fixed the self-test patch to handle the conflict
v2 -> v3
fixed Kconfig text/string.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 1 Jul 2019 21:39:01 +0000 (14:39 -0700)]
blackhole_dev: add a selftest
Since this is not really a device with all capabilities, this test
ensures that it has *enough* to make it through the data path
without causing unwanted side-effects (read crash!).
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 1 Jul 2019 21:38:57 +0000 (14:38 -0700)]
blackhole_netdev: use blackhole_netdev to invalidate dst entries
Use blackhole_netdev instead of 'lo' device with lower MTU when marking
dst "dead".
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Tested-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 1 Jul 2019 21:38:49 +0000 (14:38 -0700)]
loopback: create blackhole net device similar to loopback.
Create a blackhole net device that can be used for "dead"
dst entries instead of the loopback device. This blackhole device differs
from loopback in a few aspects: (a) it's not per-ns, (b) the MTU on this
device is ETH_MIN_MTU, (c) the xmit function is essentially kfree_skb(),
and (d) since it's not registered it won't have an ifindex.
The lower MTU effectively makes the device fail the MTU check during
the route check when a dst associated with the skb is dead.
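A hedged sketch of the core of such a device; the names and setup details are illustrative, not the exact implementation.
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Sketch: every transmitted skb is simply freed. */
static netdev_tx_t blackhole_netdev_xmit_sketch(struct sk_buff *skb,
						struct net_device *dev)
{
	kfree_skb(skb);
	net_warn_ratelimited("%s(): dropping skb.\n", __func__);
	return NETDEV_TX_OK;
}

static const struct net_device_ops blackhole_netdev_ops_sketch = {
	.ndo_start_xmit = blackhole_netdev_xmit_sketch,
};

static void blackhole_netdev_setup_sketch(struct net_device *dev)
{
	dev->netdev_ops = &blackhole_netdev_ops_sketch;
	dev->mtu = ETH_MIN_MTU;	/* fails the MTU check for real traffic */
	dev->flags |= IFF_NOARP;
	/* Never registered, so no ifindex and not per-netns. */
}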
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Kelam [Sun, 30 Jun 2019 14:29:49 +0000 (19:59 +0530)]
net: ethernet: broadcom: bcm63xx_enet: Remove unneeded memset
Remove the unneeded memset, as alloc_etherdev() uses kvzalloc(), which uses
the __GFP_ZERO flag.
Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 02:27:08 +0000 (19:27 -0700)]
Merge branch 'net-netsec-Add-XDP-Support'
Ilias Apalodimas says:
====================
net: netsec: Add XDP Support
This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
Since page_pool API fixes are merged into net-next we can now safely use
its DMA mapping capabilities.
The first patch changes the buffer allocation from napi/netdev_alloc_frag()
to the page_pool API. Although this will lead to slightly reduced performance
(on raw packet drops only), we can use the API for XDP buffer recycling.
Another side effect is a slight increase in memory usage, due to using a
single page per packet.
The second patch adds XDP support to the driver.
There's a bunch of interesting options that come up due to the single
Tx queue.
Locking is needed (to avoid messing up the Tx queue, since ndo_xdp_xmit
and the normal stack can co-exist). We also need to track the
'buffer type' for TX and properly free or recycle the packet depending
on its nature.
Changes since RFC:
- Bug fixes from Jesper and Maciej
- Added page pool API to retrieve the DMA direction
Changes since v1:
- Use page_pool_free correctly if xdp_rxq_info_reg() failed
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Sat, 29 Jun 2019 05:23:25 +0000 (08:23 +0300)]
net: netsec: add XDP support
The interface only supports one Tx queue, so locking is introduced on
the Tx queue if XDP is enabled, to make sure .ndo_start_xmit and
.ndo_xdp_xmit won't corrupt the Tx ring.
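A hedged illustration of that locking scheme; the structure, field, and function names are assumptions, not the netsec driver's internals.
#include <linux/netdevice.h>
#include <linux/spinlock.h>

/* Sketch: a single Tx ring shared by the stack and ndo_xdp_xmit is
 * protected by a spinlock taken only when an XDP program is attached.
 */
struct example_tx_ring {
	spinlock_t lock;
	bool xdp_enabled;
};

static void example_tx_enqueue(struct example_tx_ring *ring,
			       struct sk_buff *skb)
{
	if (ring->xdp_enabled)
		spin_lock_bh(&ring->lock);

	/* ... place skb (or XDP frame) on the descriptor ring ... */

	if (ring->xdp_enabled)
		spin_unlock_bh(&ring->lock);
}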
- Performance (SMMU off)
Benchmark XDP_SKB XDP_DRV
xdp1 291kpps 344kpps
rxdrop 282kpps 342kpps
- Performance (SMMU on)
Benchmark XDP_SKB XDP_DRV
xdp1 167kpps 324kpps
rxdrop 164kpps 323kpps
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Sat, 29 Jun 2019 05:23:24 +0000 (08:23 +0300)]
net: page_pool: add helper function for retrieving dma direction
Since the dma direction is stored in the page pool params, offer an API
helper for drivers that choose not to keep track of it locally.
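The helper is essentially an accessor over the pool's creation parameters; a hedged usage sketch (the surrounding driver function is hypothetical):
#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Example use in a driver that does not track the direction itself. */
static void example_sync_rx_buffer(struct page_pool *pool,
				   struct device *dev,
				   dma_addr_t dma, size_t len)
{
	enum dma_data_direction dir = page_pool_get_dma_dir(pool);

	dma_sync_single_for_cpu(dev, dma, len, dir);
}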
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Sat, 29 Jun 2019 05:23:23 +0000 (08:23 +0300)]
net: netsec: Use page_pool API
Use page_pool and its DMA mapping capabilities for Rx buffers instead
of netdev/napi_alloc_frag().
Although this will result in a slight performance penalty on small sized
packets (~10%), the use of the API will allow us to easily add XDP support.
The penalty won't be visible in network testing (e.g. iperf/netperf); it
only happens during raw packet drops.
Furthermore we intend to add recycling capabilities to the API
in the future. Once the recycling is added the performance penalty will
go away.
The only 'real' penalty is the slightly increased memory usage, since we
now allocate a page per packet instead of the amount of bytes we need +
skb metadata (the difference is roughly 2kb per packet).
With a minimum of 4GB of RAM on the only SoC that has this NIC, the
extra memory usage is negligible (a bit more with 64K pages).
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Fri, 28 Jun 2019 21:32:01 +0000 (17:32 -0400)]
tc-testing: added tdc tests for prio qdisc
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 02:18:04 +0000 (19:18 -0700)]
Merge branch 'mirred-batch-fixes'
Roman Mashak says:
====================
Fix batched event generation for mirred action
When adding or deleting a batch of entries, the kernel sends up to
TCA_ACT_MAX_PRIO entries in an event to user space. However it does not
consider that the action sizes may vary and require different skb sizes.
For example :
% cat tc-batch.sh
TC="sudo /mnt/iproute2.git/tc/tc"
$TC actions flush action mirred
for i in `seq 1 $1`;
do
cmd="action mirred egress redirect dev lo index $i "
args=$args$cmd
done
$TC actions add $args
%
% ./tc-batch.sh 32
Error: Failed to fill netlink attributes while adding TC action.
We have an error talking to the kernel
%
Patch 1 adds a callback to the mirred action's tc_action_ops which calculates
the action size, and passes the size to tcf_add_notify()/tcf_del_notify().
Patch 2 updates the TDC test suite with relevant test cases.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Fri, 28 Jun 2019 18:30:18 +0000 (14:30 -0400)]
tc-testing: updated mirred action tests with batch create/delete
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Fri, 28 Jun 2019 18:30:17 +0000 (14:30 -0400)]
net sched: update mirred action for batched events operations
Add get_fill_size() routine used to calculate the action size
when building a batch of events.
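A hedged sketch of the kind of callback added; the exact size accounting in the real patch may differ.
#include <net/act_api.h>
#include <net/netlink.h>
#include <linux/tc_act/tc_mirred.h>

/* Report how many attribute bytes this action needs in a netlink dump,
 * so batched add/delete notifications can size their skb correctly.
 */
static size_t example_mirred_get_fill_size(const struct tc_action *act)
{
	return nla_total_size(sizeof(struct tc_mirred));
}

/* Wired up via tc_action_ops:
 *	.get_fill_size = example_mirred_get_fill_size,
 */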
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason A. Donenfeld [Fri, 28 Jun 2019 14:40:21 +0000 (16:40 +0200)]
netlink: use 48 byte ctx instead of 6 signed longs for callback
People are inclined to stuff random things into cb->args[n] because it
looks like an array of integers. Sometimes people even put u64s in there
with comments noting that a certain member takes up two slots. The
horror! Really this should mirror the usage of skb->cb, which is just
48 opaque bytes suitable for casting a struct over. Then people can create
their usual casting macros for accessing strongly typed members of a
struct.
As a plus, this also gives us the same amount of space on 32-bit and 64-bit.
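A hedged sketch of the layout change being described (see include/linux/netlink.h for the authoritative struct netlink_callback definition; names here are illustrative):
#include <linux/types.h>

#define EXAMPLE_NETLINK_CTX_SIZE 48

struct example_netlink_callback {
	/* ... other members elided ... */
	union {
		u8	ctx[EXAMPLE_NETLINK_CTX_SIZE];
		/* args is deprecated: cast a struct over ctx instead
		 * for proper type safety.
		 */
		long	args[6];
	};
};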
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Fri, 28 Jun 2019 15:06:20 +0000 (17:06 +0200)]
tipc: embed jiffies in macro TIPC_BC_RETR_LIM
The macro TIPC_BC_RETR_LIM is always used in combination with 'jiffies',
so we can just as well perform the addition in the macro itself. This
way, we get a few shorter code lines and one less line break.
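A hedged before/after illustration of the idea; the 10 ms window is an assumption, not necessarily TIPC's actual value.
#include <linux/jiffies.h>

/* before: call sites added jiffies themselves */
#define EXAMPLE_TIPC_BC_RETR_LIM_OLD	msecs_to_jiffies(10)
/* after: the addition of jiffies is embedded in the macro */
#define EXAMPLE_TIPC_BC_RETR_LIM	(jiffies + msecs_to_jiffies(10))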
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eiichi Tsukata [Fri, 28 Jun 2019 02:37:14 +0000 (11:37 +0900)]
net/ipv6: Fix misuse of proc_dointvec "flowlabel_reflect"
/proc/sys/net/ipv6/flowlabel_reflect assumes the written value is in the
range 0 to 3. Use proc_dointvec_minmax instead of proc_dointvec.
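A hedged sketch of the resulting sysctl table entry shape; the bound variable names are assumptions and the .data pointer (the per-netns flowlabel_reflect field) is omitted here.
#include <linux/sysctl.h>

static int flowlabel_reflect_min = 0;
static int flowlabel_reflect_max = 3;	/* valid values: 0..3 at this point */

/* Illustrative ctl_table entry: out-of-range writes are rejected. */
static struct ctl_table flowlabel_reflect_table_sketch[] = {
	{
		.procname	= "flowlabel_reflect",
		/* .data points at the per-netns sysctl field */
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &flowlabel_reflect_min,
		.extra2		= &flowlabel_reflect_max,
	},
	{ }
};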
Fixes: 323a53c41292 ("ipv6: tcp: enable flowlabel reflection in some RST packets")
Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 28 Jun 2019 01:13:19 +0000 (09:13 +0800)]
net: link_watch: prevent starvation when processing linkwatch wq
When a user has configured a large number of virtual netdevs, such
as 4K VLANs, a carrier on/off operation on the real netdev
will also cause its virtual netdevs' link states to be processed
in linkwatch. Currently, the processing is done in a work queue,
which may cause an rtnl lock starvation problem and starve other
work queues, such as the irqfd_inject wq.
This patch releases the CPU when the link watch worker has processed
a fixed number of netdev link watch events, and schedules the
work queue again when there are still link watch events remaining.
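A hedged sketch of the budget-and-reschedule pattern described; the constant and all helper names are hypothetical, not the linkwatch internals.
#include <linux/workqueue.h>

static bool example_have_pending_events(void);	/* hypothetical */
static void example_handle_one_event(void);	/* hypothetical */
static void example_linkwatch_run(struct work_struct *work);

static DECLARE_DELAYED_WORK(example_linkwatch_work, example_linkwatch_run);

#define EXAMPLE_LW_EVENT_BUDGET 50	/* assumed per-run budget */

/* Process at most a fixed number of queued link-watch events per run
 * and reschedule the work if anything is left, so one burst of events
 * cannot monopolize the worker (and the rtnl lock) indefinitely.
 */
static void example_linkwatch_run(struct work_struct *work)
{
	int budget = EXAMPLE_LW_EVENT_BUDGET;

	while (budget-- > 0 && example_have_pending_events())
		example_handle_one_event();

	if (example_have_pending_events())
		schedule_delayed_work(&example_linkwatch_work, 0);
}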
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 2 Jul 2019 01:58:35 +0000 (18:58 -0700)]
Merge branch 'mlxsw-PTP-timestamping-support'
Ido Schimmel says:
====================
mlxsw: PTP timestamping support
This is the second patchset adding PTP support in mlxsw. Next patchset
will add PTP shapers which are required to maintain accuracy under rates
lower than 40Gb/s, while subsequent patchsets will add tracepoints and
selftests.
Petr says:
This patch set introduces support for retrieving and processing hardware
timestamps for PTP packets.
The way PTP timestamping works on Spectrum-1 is that there are two queues
associated with each front panel port. When a packet is timestamped, the
timestamp is put to one of the queues: timestamps for transmitted packets
to one and for received packets to the other. Activity on these queues is
signaled through the events PTP_ING_FIFO and PTP_EGR_FIFO.
Packets themselves arrive through two traps: PTP0 and PTP1. It is possible
to configure which PTP messages should be trapped under which PTP trap. On
Spectrum systems, mlxsw will use PTP0 for event messages (which need
timestamping), and PTP1 for general messages (which do not).
There are therefore four relevant traps: receipt of a PTP event message,
receipt of a PTP general message, receipt of a timestamp for a transmitted
PTP packet, and receipt of a timestamp for a received PTP packet. The
obvious place to put the new logic is a custom listener for the mentioned traps.
Besides handling ingress traffic (be it packets or timestamps), the driver
also needs to handle timestamping of transmitted packets. One option would
be to invoke the relevant logic from mlxsw_core_ptp_transmitted(). However
on Spectrum-2, the timestamps are actually delivered through the completion
queue, and for that reason this patchset opts to invoke the logic from the
PCI code, via core and the driver, to a chip-specific operation. That way
the invocation will be done in a place where a Spectrum-2 implementation
will have an opportunity to extract the timestamp.
As indicated above, the PTP FIFO signaling happens independently from
packet delivery. A packet corresponding to any given timestamp could be
delivered sooner or later than the timestamp itself. Additionally, the
queues are only four elements deep, and it is therefore possible that the
timestamp for a delivered packet never arrives at all. Similarly a PTP
packet might be dropped due to CPU traffic pressure, and never be delivered
even if the corresponding timestamp was.
The driver thus needs to hold a cache of as-yet-unmatched SKBs and
timestamps. The first piece to arrive (be it timestamp or SKB) is put to
this cache. When the other piece arrives, the timestamp is attached to the
SKB and that is passed on. A delayed work is run at regular intervals to
prune the old unmatched entries.
As mentioned above, the mechanism for timestamp delivery changes on
Spectrum-2, where timestamps are part of completion queue elements, and all
packets are timestamped. All this bookkeeping is therefore unnecessary on
Spectrum-2. For this reason, this patchset spends some time introducing
Spectrum-1 specific artifacts such as a possibility to register a given
trap only on Spectrum-1.
Patches #1-#4 describe new registers.
Patches #5 and #6 introduce the possibility to register certain traps
only on some systems. The list of Spectrum-1 specific traps is left empty
at this point.
Patch #7 hooks into packet receive path by registering PTP traps
and appropriate handlers (that however do nothing of substance yet).
Patch #8 adds a helper to allow storing custom data to SKB->cb.
Patch #9 adds a call into the PCI completion queue handler that invokes,
via core and spectrum code, a PTP transmit handler. (Which also does not do
anything interesting yet.)
Patch #10 introduces code to invoke PTP initialization and adds data types
for the cache of unmatched entries.
Patches #11 and #12 implement the timestamping itself. In #11, the PHC
spin_locks are converted to _bh variants, because unlike the normal PHC path,
which runs in process context, timestamp processing runs in soft interrupt context.
Then #12 introduces the code for saving and retrieval of unmatched entries,
invokes PTP classifier to identify packets of interest, registers timestamp
FIFO events, and handles decoding and attaching timestamps to packets.
Patch #13 introduces a garbage collector for left-behind entries that have
not been matched for about a second.
In patch #14, PTP message types are configured to arrive as PTP0
(events) or PTP1 (everything else) as appropriate. At this point, the PTP
packets start arriving through the traps, but because PTP is disabled and
there is no way to enable it yet, they are always just passed to the usual
receive path right away.
Finally patches #15 and #16 add the plumbing to actually make it possible
to enable this code through SIOCSHWTSTAMP ioctl, and to advertise the
hardware timestamping capabilities through ethtool.
v2:
- Patch #12:
- In mlxsw_sp1_ptp_fifo_event_func(), post-increment when iterating over PTP
FIFO records.
- Patch #14:
- Change namespace of message type enumerators from MLXSW_ to MLXSW_SP_.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:05:00 +0000 (09:05 +0300)]
mlxsw: spectrum: PTP: Support ethtool get_ts_info
The get_ts_info callback is used for obtaining information about
timestamping capabilities of a network device. On Spectrum-1, implement
it to advertise the PHC and the capability to do HW timestamping, and
the supported RX and TX filters.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:59 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Support SIOCGHWTSTAMP, SIOCSHWTSTAMP ioctls
The SIOCSHWTSTAMP ioctl configures HW timestamping on a given port.
Dispatch the ioctls to a per-chip handler (added to ptp_ops). Find
which PTP messages need to be timestamped and configure MTPPPC
accordingly.
The SIOCGHWTSTAMP ioctl is a getter for the current configuration.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:58 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Configure PTP traps and FIFO events
Configure MTPTPT to set which message types should arrive under which
PTP trap, and MOGCR to clear the timestamp queue after its contents are
reported through PTP_ING_FIFO or PTP_EGR_FIFO.
With this configuration, PTP packets start arriving through the PTP
traps. However since timestamping is disabled by default and there is
currently no way to enable it, they will not be timestamped.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:57 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Garbage-collect unmatched entries
On Spectrum-1, timestamped PTP packets and the corresponding timestamps
need to be kept in caches until both are available, at which point they are
matched up and packets forwarded as appropriate. However, not all packets
will ever see their timestamp, and not all timestamps will ever see their
packet. It is therefore necessary to dispose of such abandoned entries.
To that end, introduce a garbage collector to collect entries that have
not had their counterpart turn up within about a second. The GC
maintains a monotonically increasing GC cycle counter. Every entry that
is put into the hash table is annotated with the GC cycle at which it
should be collected. When the GC runs, it walks the hash table, and
collects the objects according to their GC cycle annotation.
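A hedged sketch of the GC-cycle annotation idea; the types and helper names are illustrative, not the mlxsw structures (EAGAIN handling during the walk is omitted for brevity).
#include <linux/rhashtable.h>

static void example_dispose(void *entry);	/* hypothetical helper */

/* Each unmatched entry records the GC cycle at which it expires. */
struct example_unmatched {
	struct rhash_head node;
	u64 gc_cycle;	/* collect when the current cycle reaches this */
	/* ... key, cached skb or timestamp ... */
};

static void example_gc_walk(struct rhashtable *ht, u64 current_cycle)
{
	struct rhashtable_iter iter;
	struct example_unmatched *e;

	rhashtable_walk_enter(ht, &iter);
	rhashtable_walk_start(&iter);
	while ((e = rhashtable_walk_next(&iter)) && !IS_ERR(e)) {
		if (e->gc_cycle <= current_cycle)
			example_dispose(e);	/* pass skb on without a
						 * timestamp, or drop the
						 * orphaned timestamp */
	}
	rhashtable_walk_stop(&iter);
	rhashtable_walk_exit(&iter);
}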
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:56 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Support timestamping on Spectrum-1
On Spectrum-1, timestamps arrive through a pair of dedicated events:
MLXSW_TRAP_ID_PTP_ING_FIFO and _EGR_FIFO. The payload delivered with
those traps is contents of the timestamp FIFO at a given port in a given
direction. Add a Spectrum-1-specific handler for these two events which
decodes the timestamps and forwards them to the PTP module.
Add a function that parses a packet, dispatching to ptp_classify_raw(),
and decodes PTP message type, domain number, and sequence ID. Add a new
mlxsw dependency on the PTP classifier.
Add helpers that can store and retrieve unmatched timestamps and SKBs to
the hash table added in a preceding patch.
Add the matching code itself: upon arrival of a timestamp or a packet,
look up the corresponding unmatched entry, and match it up. If there is
none, add a new unmatched entry. This logic is the same on ingress as on
egress.
Packets and timestamps that never matched need to be eventually disposed
of. A garbage collector added in a follow-up patch will take care of
that. Since currently all this code is turned off, no crud will
accumulate in the hash table.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:55 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Disable BH when working with PHC
Up until now, the PTP hardware clock code was only invoked in the process
context (SYS_clock_adjtime -> do_clock_adjtime -> k_clock::clock_adj ->
pc_clock_adjtime -> posix_clock_operations::clock_adjtime ->
ptp_clock_info::adjtime -> mlxsw_spectrum).
In order to enable HW timestamping, which is tied into trap handling, it
will be necessary to take the clock lock from the PCI queue handler
tasklets as well.
Therefore use the _bh variants when handling the clock lock. Incidentally,
Documentation/ptp/ptp.txt recommends _irqsave variants, but that's
unnecessarily strong for our needs.
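The conversion itself is mechanical; a minimal illustration under assumed names (the actual lock and fields differ in the driver):
#include <linux/spinlock.h>

/* Before: only process context touched the clock, plain spin_lock()
 * was enough. After: timestamp processing also runs in softirq
 * context, so BH must be disabled while holding the clock lock.
 */
static void example_phc_adjtime(spinlock_t *clock_lock, s64 delta)
{
	spin_lock_bh(clock_lock);
	/* ... adjust the PHC by delta ... */
	spin_unlock_bh(clock_lock);
}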
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:54 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Add PTP initialization / finalization
Add two ptp_ops: init and fini, to initialize and finalize the PTP
subsystem. Call as appropriate from mlxsw_sp_init() and _fini().
Lay the groundwork for Spectrum-1 support. On Spectrum-1, the received
timestamped packets and their corresponding timestamps arrive
independently, and need to be matched up. Introduce the related data types
and add to struct mlxsw_sp_ptp_state the hash table that will keep the
unmatched entries.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:53 +0000 (09:04 +0300)]
mlxsw: pci: PTP: Hook into packet transmit path
On Spectrum-1, timestamps are delivered separately from the packets, and
need to be paired up. Therefore, at some point after mlxsw_sp_port_xmit()
is invoked, it is necessary to involve the chip-specific driver code to
allow it to do the necessary bookkeeping and matching.
On Spectrum-2, timestamps are delivered in CQE. For that reason,
position the point of driver involvement into mlxsw_pci_cqe_sdq_handle()
to make it hopefully easier to extend for Spectrum-2 in the future.
To tell the driver what port the packet was sent on, keep tx_info
in SKB control buffer.
Introduce a new driver core interface mlxsw_core_ptp_transmitted(), a
driver callback ptp_transmitted, and a PTP op transmitted. The callee is
responsible for taking care of releasing the SKB passed to the new
interfaces, and correspondingly have the new stub callbacks just call
dev_kfree_skb_any().
Follow-up patches will introduce the actual content into
mlxsw_sp1_ptp_transmitted() in particular.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:52 +0000 (09:04 +0300)]
mlxsw: core: Add support for using SKB control buffer
The SKB control buffer is useful (and used) for bookkeeping of information
related to that SKB. Add helpers so that the mlxsw driver(s) can safely use
the buffer as well. The structure is currently empty, individual users will
add members to it as necessary.
Note that SKB allocation functions already clear the buffer, so the cleanup
is only necessary when ndo_start_xmit is called.
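A hedged sketch of what such SKB control-buffer helpers typically look like; the struct contents and names are illustrative.
#include <linux/build_bug.h>
#include <linux/skbuff.h>

/* Per-SKB bookkeeping carried in skb->cb while the SKB is owned by the
 * driver. Members are added by individual users; kept empty here.
 */
struct example_mlxsw_skb_cb {
	/* e.g. struct mlxsw_tx_info tx_info; */
};

static inline struct example_mlxsw_skb_cb *
example_mlxsw_skb_cb(struct sk_buff *skb)
{
	/* Must never outgrow the 48-byte control buffer. */
	BUILD_BUG_ON(sizeof(struct example_mlxsw_skb_cb) > sizeof(skb->cb));
	return (struct example_mlxsw_skb_cb *)skb->cb;
}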
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:51 +0000 (09:04 +0300)]
mlxsw: spectrum: PTP: Hook into packet receive path
When configured, the Spectrum hardware can recognize PTP packets and
trap them to the CPU using dedicated traps, PTP0 and PTP1.
One reason to get PTP packets under dedicated traps is to have a
separate policer suitable for the amount of PTP traffic expected when
switch is operated as a boundary clock. For this, add two new trap
groups, MLXSW_REG_HTGT_TRAP_GROUP_SP_PTP0 and _PTP1, and associate the
two PTP traps with these two groups.
In the driver, specifically for Spectrum-1, event PTP packets will need
to be paired up with their timestamps. Those arrive through a different
set of traps, added later in the patch set. To support this future use,
introduce a new PTP op, ptp_receive.
It is possible to configure which PTP messages should be trapped under
which PTP trap. On Spectrum systems, we will use PTP0 for event
packets (which need timestamping), and PTP1 for control packets (which
do not). Thus configure PTP0 trap with a custom callback that defers to
the ptp_receive op.
Additionally, L2 PTP packets are actually trapped through the LLDP trap,
not through any of the PTP traps. So treat the LLDP trap the same way as
the PTP0 trap. Unlike PTP traps, which are currently still disabled,
LLDP trap is active. Correspondingly, have all the implementations of
the ptp_receive op return true, which the handler treats as a signal to
forward the packet immediately.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:50 +0000 (09:04 +0300)]
mlxsw: spectrum: Add support for traps specific to Spectrum-1
On Spectrum-1, timestamps for PTP packets are delivered through queues
of ingress and egress timestamps. There are two event traps
corresponding to activity on each of those queues. This mechanism is
absent on Spectrum-2, and therefore the traps should only be registered
on Spectrum-1.
Carry a chip-specific listener array in mlxsw_sp->listeners and
listeners_count. Register listeners from that array in
mlxsw_sp_traps_init(). Add a new listener array for Spectrum-1 traps and
configure the newly-added mlxsw_sp->listeners with this array.
The listener array is empty for now, the events will be added in a later
patch.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:49 +0000 (09:04 +0300)]
mlxsw: spectrum: Extract a helper for trap registration
On Spectrum-1, timestamps for PTP packets are delivered through queues
of ingress and egress timestamps. There are two event traps
corresponding to activity on each of those queues. This mechanism is
absent on Spectrum-2, and therefore the traps should only be registered
on Spectrum-1.
Extract out of mlxsw_sp_traps_init() a generic helper,
mlxsw_sp_traps_register(), and likewise with _unregister(). The new helpers
will later be called with Spectrum-1-specific traps.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:48 +0000 (09:04 +0300)]
mlxsw: reg: Add Monitoring Global Configuration Register
This register serves to configure global parameters of certain
monitoring operations. The following patches will use it to configure
that when PTP timestamps are delivered through the PTP FIFO traps, the
FIFO in question is cleared as well.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:47 +0000 (09:04 +0300)]
mlxsw: reg: Add Time Precision Packet Timestamping Reading
The MTPPTR is used for reading the per port PTP timestamp FIFO.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:46 +0000 (09:04 +0300)]
mlxsw: reg: Add Monitoring Precision Time Protocol Trap Register
This register is used for configuring under which trap to deliver PTP
packets depending on type of the packet.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 30 Jun 2019 06:04:45 +0000 (09:04 +0300)]
mlxsw: reg: Add Monitoring Time Precision Packet Port Configuration Register
This register serves to configure which PTP messages should be
timestamped. This is a global configuration, despite the register name.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel T. Lee [Sat, 29 Jun 2019 13:33:58 +0000 (22:33 +0900)]
samples: pktgen: allow to specify destination port
Currently, kernel pktgen can specify the UDP destination port for
sending packets (e.g. pgset "udp_dst_min 9"), but none of the sample
scripts has an option to achieve this.
This commit adds the DST_PORT option to specify the target port(s) in the scripts.
-p : ($DST_PORT) destination PORT range (e.g. 433-444) is also allowed
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel T. Lee [Sat, 29 Jun 2019 13:33:57 +0000 (22:33 +0900)]
samples: pktgen: add some helper functions for port parsing
This commit adds port-parsing and port-validation helper functions to parse
a single port or a range of ports from a given string (e.g. 1234, 443-444).
The helpers will be used to set the target port(s) in samples/pktgen.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 1 Jul 2019 13:39:36 +0000 (06:39 -0700)]
ipv6: icmp: allow flowlabel reflection in echo replies
Extend the flowlabel_reflect bitmask to allow conditional
reflection of incoming flowlabels in echo replies.
Note this takes precedence over auto flowlabels.
Add a flowlabel_reflect enum to replace hard-coded
values.
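A hedged sketch of what such an enum of bit flags looks like; the exact flag names in the kernel header may differ.
/* Bitmask values accepted by /proc/sys/net/ipv6/flowlabel_reflect. */
enum example_flowlabel_reflect {
	EXAMPLE_FLOWLABEL_REFLECT_ESTABLISHED		= 1,	/* established sockets */
	EXAMPLE_FLOWLABEL_REFLECT_TCP_RESET		= 2,	/* some RST packets */
	EXAMPLE_FLOWLABEL_REFLECT_ICMPV6_ECHO_REPLIES	= 4,	/* this patch */
};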
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 1 Jul 2019 01:41:13 +0000 (18:41 -0700)]
Merge tag 'mlx5e-updates-2019-06-28' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5e-updates-2019-06-28
This series adds some misc updates for mlx5e driver
1) Allow adding the same mac more than once in MPFS table
2) Move to HW checksumming advertising
3) Report netdevice MPLS features
4) Correct physical port name of the PF representor
5) Reduce stack usage in mlx5_eswitch_termtbl_create
6) Refresh TIR improvement for representors
7) Expose same physical switch_id for all representors
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 30 Jun 2019 23:03:35 +0000 (16:03 -0700)]
Merge branch '10GbE' of git://git./linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2019-06-28
This series contains a smorgasbord of updates to many of the Intel
drivers.
Gustavo A. R. Silva updates the ice and iavf drivers to use the
struct_size() helper where possible.
Miguel increases the pause and refresh time for flow control in the
e1000e driver during reset for certain devices.
Dann Frazier fixes a potential NULL pointer dereference in ixgbe driver
when using non-IPSec enabled devices.
Colin Ian King fixes a potential overflow during a shift in the ixgbe
driver. Also fixes a potential NULL pointer dereference in the iavf
driver by adding a check.
Venkatesh Srinivas converts the e1000 driver to use dma_wmb() instead of
wmb() for doorbell writes to avoid SFENCEs in the transmit and receive
paths.
Arjan updates the e1000e driver to improve boot time by over 100 msec by
reducing the usleep ranges during system startup.
Artem updates the igb driver register dump in ethtool: first he prepares
the register dump for future additions of registers, and then adds the
RR2DCDELAY register to the dump. When dealing with
time-sensitive networks, this register is helpful in determining your
latency from the device to the ring.
Alex fixes the ixgbevf driver to use the current cached link state,
rather than trying to re-check the value from the PF.
Harshitha adds support for MACVLAN offloads in i40e by using channels as
MACVLAN interfaces.
Detlev Casanova updates the e1000e driver to use delayed work instead of
timers to run the watchdog.
Vitaly fixes an issue in e1000e, where when disconnecting and
reconnecting the physical cable connection, the NIC enters a DMoff
state. This state causes a mismatch in link and duplexing, so check the
PCIm function state and perform a PHY reset when in this state to
resolve the issue.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 27 Jun 2019 21:19:09 +0000 (23:19 +0200)]
r8169: remove not needed call to dma_sync_single_for_device
DMA_API_HOWTO.txt includes an example explaining when
dma_sync_single_for_device() is not needed, and that example matches
our use case. The buffer isn't changed by the CPU and direction is
DMA_FROM_DEVICE, so we can remove the call to
dma_sync_single_for_device().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 27 Jun 2019 21:12:39 +0000 (23:12 +0200)]
r8169: consider that 32 Bit DMA is the default
Documentation/DMA-API-HOWTO.txt states:
By default, the kernel assumes that your device can address 32-bits of
DMA addressing. For a 64-bit capable device, this needs to be increased,
and for a device with limitations, it needs to be decreased.
Therefore we don't need the 32-bit DMA fallback configuration and can
remove it.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Thu, 27 Jun 2019 21:06:33 +0000 (23:06 +0200)]
r8169: improve handling VLAN tag
The VLAN tag is stored in the descriptor in network byte order.
Using swab16 works on little endian host systems only. Better play safe
and use ntohs or htons respectively.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Thu, 27 Jun 2019 15:12:42 +0000 (17:12 +0200)]
selftests: rtnetlink: skip ipsec offload tests if netdevsim isn't present
running the script on systems without netdevsim now prints:
SKIP: ipsec_offload can't load netdevsim
instead of error message & failed status.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 29 Jun 2019 18:15:12 +0000 (11:15 -0700)]
Merge branch 'em_ipt-add-support-for-addrtype'
Nikolay Aleksandrov says:
====================
em_ipt: add support for addrtype
We would like to be able to use the addrtype from tc for ACL rules and
em_ipt seems the best place to add support for the already existing xt
match. The biggest issue is that addrtype revision 1 (with ipv6 support)
is NFPROTO_UNSPEC and currently em_ipt can't differentiate between v4/v6
if such xt match is used because it passes the match's family instead of
the packet one. The first 3 patches make em_ipt match only on IP
traffic (currently both policy and addrtype recognize such traffic
only) and make it pass the actual packet's protocol instead of the xt
match family when it's unspecified. They also add support for NFPROTO_UNSPEC
xt matches. The last patch allows to add addrtype rules via em_ipt.
We need to keep the user-specified nfproto for dumping in order to be
compatible with libxtables, we cannot dump NFPROTO_UNSPEC as the nfproto
or we'll get an error from libxtables, thus the nfproto is limited to
ipv4/ipv6 in patch 03 and is recorded.
v3: don't use the user nfproto for matching, only for dumping, more
information is available in the commit message in patch 03
v2: change patch 02 to set the nfproto only when unspecified and drop
patch 04 from v1 (Eyal Birger)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:47 +0000 (11:10 +0300)]
net: sched: em_ipt: add support for addrtype matching
Allow em_ipt to use addrtype for matching. Restrict the use only to
revision 1 which has IPv6 support. Since it's a NFPROTO_UNSPEC xt match
we use the user-specified nfproto for matching, in case it's unspecified
both v4/v6 will be matched by the rule.
v2: no changes, was patch 5 in v1
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:46 +0000 (11:10 +0300)]
net: sched: em_ipt: keep the user-specified nfproto and dump it
If we dump NFPROTO_UNSPEC as the nfproto, user-space libxtables can't handle
it and would exit with an error like:
"libxtables: unhandled NFPROTO in xtables_set_nfproto"
In order to avoid the error, return the user-specified nfproto. If we
don't record it, then the match family is used, which can be
NFPROTO_UNSPEC. Even if we add support to mask NFPROTO_UNSPEC in
iproute2, we have to be compatible with older versions, which would
also be allowed to add NFPROTO_UNSPEC matches (e.g. addrtype after the
last patch).
v3: don't use the user nfproto for matching, only for dumping the rule,
also don't allow the nfproto to be unspecified (explained above)
v2: adjust changes to missing patch, was patch 04 in v1
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:45 +0000 (11:10 +0300)]
net: sched: em_ipt: set the family based on the packet if it's unspecified
Set the family based on the packet if it's unspecified otherwise
protocol-neutral matches will have wrong information (e.g. NFPROTO_UNSPEC).
In preparation for using NFPROTO_UNSPEC xt matches.
v2: set the nfproto only when unspecified
Suggested-by: Eyal Birger <eyal.birger@gmail.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 27 Jun 2019 08:10:44 +0000 (11:10 +0300)]
net: sched: em_ipt: match only on ip/ipv6 traffic
Restrict matching only to ip/ipv6 traffic and make sure we can use the
headers; otherwise matches will be attempted on any protocol, which can
be unexpected for the xt matches. Currently, policy supports only ipv4/ipv6.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xue Chaojing [Sat, 29 Jun 2019 02:26:27 +0000 (02:26 +0000)]
hinic: add vlan offload support
This patch adds vlan offload support for the HINIC driver.
Signed-off-by: Xue Chaojing <xuechaojing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paul Blakey [Mon, 24 Jun 2019 12:04:58 +0000 (15:04 +0300)]
net/mlx5e: Disallow tc redirect offload cases we don't support
After changing the parent_id to be the same for both NICs of the same
hardware device, netdev_port_same_parent_id now returns true for
more cases (all the lower devices in the hierarchy are on the same
hardware device).
If merged eswitch isn't enabled, these cases aren't supported, so disallow
them.
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Paul Blakey [Thu, 16 May 2019 12:27:17 +0000 (15:27 +0300)]
net/mlx5e: Expose same physical switch_id for all representors
Report system_image_guid as the E-Switch switch_id, this ensures
that when a NIC contains multiple PCI functions and which
has merged eswitch capability, all representors from
multiple PFs publish same switch_id.
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Gavi Teitz [Thu, 23 May 2019 06:58:56 +0000 (09:58 +0300)]
net/mlx5e: Don't refresh TIRs when updating representor SQs
Refreshing TIRs is done in order to update the TIRs with the current
state of SQs in the transport domain, so that the TIRs can filter out
undesired self-loopback packets based on the source SQ of the packet.
Representor TIRs will only receive packets that originate from their
associated vport, due to dedicated steering, and therefore will never
receive self-loopback packets, whose source vport will be the vport of
the E-Switch manager, and therefore not the vport associated with the
representor. As such, it is not necessary to refresh the representors'
TIRs, since self-loopback packets can't reach them.
Since representors only exist in switchdev mode, and there is no
scenario in which a representor will exist in the transport domain
alongside a non-representor, it is not necessary to refresh the
transport domain's TIRs upon changing the state of a representor's
queues. Therefore, do not refresh TIRs upon such a change. Achieve
this by adding an update_rx callback to the mlx5e_profile, which
refreshes TIRs for non-representors and does nothing for representors,
and replace instances of mlx5e_refresh_tirs() upon changing the state
of the queues with update_rx().
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Arnd Bergmann [Tue, 18 Jun 2019 11:15:06 +0000 (13:15 +0200)]
net/mlx5e: reduce stack usage in mlx5_eswitch_termtbl_create
Putting an empty 'mlx5_flow_spec' structure on the stack is a bit
wasteful and causes a warning on 32-bit architectures when building
with clang -fsanitize-coverage:
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c: In function 'mlx5_eswitch_termtbl_create':
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads_termtbl.c:90:1: error: the frame size of 1032 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Since the structure is never written to, we can statically allocate
it to avoid the stack usage. To be on the safe side, mark all
subsequent function arguments that we pass it into as 'const'
as well.
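A hedged before/after illustration of the pattern (not the exact function; the callee name is hypothetical):
#include <linux/mlx5/fs.h>

static void example_create_rule(const struct mlx5_flow_spec *spec);

/* Before: ~1 KiB zeroed on the stack in the termtbl create path:
 *	struct mlx5_flow_spec spec = {};
 * After: a single shared, read-only instance.
 */
static const struct mlx5_flow_spec empty_spec;

static void example_use_spec(void)
{
	/* Callees take 'const struct mlx5_flow_spec *' and never write it. */
	example_create_rule(&empty_spec);
}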
Fixes: 10caabdaad5a ("net/mlx5e: Use termination table for VLAN push actions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Mon, 27 May 2019 09:47:10 +0000 (04:47 -0500)]
net/mlx5e: Set drvinfo in generic manner
Consider both PCI and non-PCI device types when setting the device name
in the get_drvinfo() callback, using the existing generic device.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Parav Pandit [Wed, 5 Jun 2019 06:29:05 +0000 (01:29 -0500)]
net/mlx5e: Correct phys_port_name for PF port
Currently the PF's phys_port_name is reported as pfNvf-1, because the
vport number for the PF vport is 65535.
Correct the PF's phys_port_name to the agreed-upon format, pfN.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Vu Pham <vuhuong@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Ariel Levkovich [Wed, 5 Jun 2019 17:01:08 +0000 (20:01 +0300)]
net/mlx5e: Report netdevice MPLS features
Set supported device features in the netdevice MPLS features mask.
This will enable HW checksumming and TSO for MPLS tagged traffic.
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Ariel Levkovich [Wed, 5 Jun 2019 16:40:09 +0000 (19:40 +0300)]
net/mlx5e: Move to HW checksumming advertising
This patch changes the way the driver advertises its checksum offload
capabilities within the net device features bit mask.
Instead of advertising protocol-specific checksumming capabilities,
which are limited today to IPv4 and IPv6, we move to reporting
generic HW checksumming capabilities.
This will allow the network stack to let the mlx5 device offload checksum
for cases where the IP header is encapsulated within another protocol
and skb->protocol doesn't indicate one of the IP protocol versions,
specifically in the case of an MPLS label encapsulating the IP header,
where skb->protocol indicates the MPLS ethertype rather than IP.
Reporting HW_CSUM is required in the basic net device hw features
mask and also in the extensions (vlan and encapsulation features),
since the extensions are always combined with the basic feature set
during the packet's traversal through the stack's tx flow.
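In netdev feature terms this roughly amounts to (a sketch, not the exact mlx5e feature-setup code):
netdev->hw_features     |= NETIF_F_HW_CSUM;	/* instead of NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM */
netdev->vlan_features   |= NETIF_F_HW_CSUM;	/* keep it after VLAN feature masking */
netdev->hw_enc_features |= NETIF_F_HW_CSUM;	/* and for encapsulated traffic */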
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Gavi Teitz [Tue, 11 Jun 2019 09:29:41 +0000 (12:29 +0300)]
net/mlx5: MPFS, Allow adding the same MAC more than once
Remove the limitation preventing adding a vport's MAC address to the
Multi-Physical Function Switch (MPFS) more than once per E-switch, as
there is no difference in the MPFS if an address is being used by an
E-switch more than once.
This allows the E-switch to have multiple vports with the same MAC
address, allowing vports to be classified by VLAN id instead of by MAC
if desired.
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Gavi Teitz [Tue, 11 Jun 2019 08:54:36 +0000 (11:54 +0300)]
net/mlx5: MPFS, Cleanup add MAC flow
Unify and isolate the error handling flow in mlx5_mpfs_add_mac(),
removing code duplication.
Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Saeed Mahameed [Fri, 28 Jun 2019 22:49:59 +0000 (15:49 -0700)]
Merge branch 'mlx5-next' of git://git./linux/kernel/git/mellanox/linux
Misc updates from mlx5-next branch:
1) E-Switch vport metadata support for source vport matching
2) Convert mkey_table to XArray
3) Shared IRQs, and use of a single IRQ for all async EQs
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Vitaly Lifshits [Tue, 25 Jun 2019 14:39:11 +0000 (17:39 +0300)]
e1000e: PCIm function state support
Due to commit 5d8682588605 ("[misc] mei: me: allow runtime pm for
platform with D0i3"), the NIC enters the DMoff state when the cable is
disconnected and then reconnected. This causes a wrong link indication
and a duplex mismatch. The bug is described in:
https://bugzilla.redhat.com/show_bug.cgi?id=1689436
Checking the PCIm function state and performing a PHY reset after a
timeout in the watchdog task solves this issue.
Signed-off-by: Vitaly Lifshits <vitaly.lifshits@intel.com>
Acked-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Detlev Casanova [Sun, 23 Jun 2019 03:14:37 +0000 (23:14 -0400)]
e1000e: Make watchdog use delayed work
Use delayed work instead of timers to run the watchdog of the e1000e
driver.
This also simplifies the code by removing one intermediate function.
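The conversion follows the usual timer-to-delayed-work pattern; a hedged sketch (names are illustrative of e1000e, not copied from it):
#include <linux/workqueue.h>
/* init path: replaces timer_setup() for the watchdog timer */
INIT_DELAYED_WORK(&adapter->watchdog_task, e1000_watchdog_task);
/* re-arm roughly every 2 seconds, where mod_timer() used to be called */
schedule_delayed_work(&adapter->watchdog_task, 2 * HZ);
/* teardown: make sure the work is not running before freeing the adapter */
cancel_delayed_work_sync(&adapter->watchdog_task);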
Signed-off-by: Detlev Casanova <detlev.casanova@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Harshitha Ramamurthy [Wed, 19 Jun 2019 18:45:40 +0000 (14:45 -0400)]
i40e: Add macvlan support on i40e
This patch enables macvlan offloads for i40e. The idea is to use
channels as macvlan interfaces. The channels are VSIs of
type VMDQ. When the first macvlan is created, the maximum number of
channels possible are created. From then on, as a macvlan interface
is created, a macvlan filter is added to these already created
channels (VSIs).
This patch utilizes subordinate device traffic classes to make queue
groups (channels) available for an upper device like a macvlan.
Steps to configure macvlan offloads:
1. ethtool -K ethx l2-fwd-offload on
2. ip link add link ethx name macvlan1 type macvlan
3. ip addr add <address> dev macvlan1
4. ip link set macvlan1 up
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Wed, 19 Jun 2019 16:58:53 +0000 (09:58 -0700)]
ixgbevf: Use cached link state instead of re-reading the value for ethtool
Change the ethtool link settings call to just read the cached state out of
the adapter structure instead of trying to recheck the value from the PF.
Doing this should prevent excessive reading of the mailbox.
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Colin Ian King [Wed, 19 Jun 2019 14:30:44 +0000 (15:30 +0100)]
iavf: fix dereference of null rx_buffer pointer
A recent commit efa14c3985828d ("iavf: allow null RX descriptors") added
a null pointer sanity check on rx_buffer; however, rx_buffer is
dereferenced before that check, which means a null pointer dereference
can potentially occur. Fix this by not dereferencing rx_buffer until
after the null pointer check.
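The reordering pattern of the fix, sketched (surrounding code is illustrative, not the exact iavf source):
if (!rx_buffer)			/* null RX descriptor: nothing to clean */
	return NULL;
prefetchw(rx_buffer->page);	/* first dereference now happens after the check */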
Addresses-Coverity: ("Dereference before null check")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Artem Bityutskiy [Tue, 18 Jun 2019 11:55:13 +0000 (14:55 +0300)]
igb: add RR2DCDELAY to ethtool registers dump
This patch adds the RR2DCDELAY register to the ethtool registers dump.
RR2DCDELAY exists on I210 and I211 Intel Gigabit Ethernet chips and it stands
for "Read Request To Data Completion Delay". Here is how this register is
described in the I210 datasheet:
"This field captures the maximum PCIe split time in 16 ns units, which is the
maximum delay between the read request to the first data completion. This is
giving an estimation of the PCIe round trip time."
In other words, whenever I210 reads from the host memory (e.g., fetches a
descriptor from the ring), the chip measures every PCI DMA read transaction and
captures the maximum value. So it ends up containing the longest DMA
transaction time.
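Since the field counts 16 ns units, converting a dumped value to time is simple; for example (illustrative values):
u32 raw = 128;				/* RR2DCDELAY value read from the register dump */
u64 max_delay_ns = (u64)raw * 16;	/* 128 * 16 ns = 2048 ns worst-case completion delay */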
This register is very useful for troubleshooting and research purposes. If
you are dealing with time-sensitive networks, it can help you get an idea
of your "I210-to-ring" latency, which helps answer questions like "should
I have PCIe ASPM enabled?" or "should I enable deep C-states?" on a given
system.
It is safe to read this register at any point; reading it has no effect on
the I210 chip's functionality.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Artem Bityutskiy [Tue, 18 Jun 2019 11:55:12 +0000 (14:55 +0300)]
igb: minor ethtool regdump amendment
This patch has no functional impact and it is just a preparation
for the following patch. It removes an early return from the
'igb_get_regs()' function by moving the 82576-only registers
dump into an "if" block. With this preparation, we can dump more
non-82576 registers at the end of this function.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jeff Kirsher [Mon, 17 Jun 2019 23:10:58 +0000 (16:10 -0700)]
iavf: Fix up debug print macro
This aligns the iavf_debug() macro with the other Intel drivers.
Add the bus number (bus_id field) to i40e_bus_info so the output shows
each physical port (i.e. function) in the following format:
[[[[<domain>]:]<bus>]:][<slot>][.[<func>]]
Domains are numbered 0 to ffff, buses 0 to ff, slots 0 to 1f, and
functions 0 to 7.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Arjan van de Ven [Sat, 15 Jun 2019 00:29:35 +0000 (17:29 -0700)]
e1000e: Reduce boot time by tightening sleep ranges
The e1000e driver is a great user of the usleep_range() API,
and has nice ranges that in principle help power management.
However, the ranges that are used only during system startup are very
long (and can easily add 100 msec to the boot time), while the power
savings of such long ranges are irrelevant due to the one-off, boot-only
nature of these functions.
This patch shrinks some of the longest ranges (while still using a
power-friendly 1 msec range); this saves 100+ msec of boot time on my
BDW NUCs.
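The kind of change this means in practice (values are illustrative, not taken from a specific call site):
/* before: up to 20 ms per call on one-off init paths */
usleep_range(10000, 20000);
/* after: still a timer-coalescing-friendly range, but about 1 ms */
usleep_range(1000, 2000);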
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Gustavo A. R. Silva [Fri, 14 Jun 2019 23:23:20 +0000 (16:23 -0700)]
iavf: use struct_size() helper
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.
So, replace code of the following form:
sizeof(struct virtchnl_ether_addr_list) + (count * sizeof(struct virtchnl_ether_addr))
with:
struct_size(veal, list, count)
and so on...
This code was detected with the help of Coccinelle.
Signed-off-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Venkatesh Srinivas [Mon, 10 Jun 2019 21:27:50 +0000 (14:27 -0700)]
e1000: Use dma_wmb() instead of wmb() before doorbell writes
e1000 writes to doorbells to post transmit descriptors and fill the
receive ring. After writing descriptors to memory but before
writing to doorbells, use dma_wmb() rather than wmb(). wmb() is more
heavyweight than necessary for a device to see descriptor writes.
On x86, this avoids SFENCEs before doorbell writes in both the
Tx and Rx paths. On ARM, this converts DSB ST -> DMB OSHST.
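The ordering pattern, as a hedged sketch (descriptor fields and the tail register access are illustrative):
/* 1. write the descriptor into coherent DMA memory */
tx_desc->buffer_addr = cpu_to_le64(dma);
tx_desc->lower.data  = cpu_to_le32(cmd_and_len);
/* 2. order descriptor writes before the doorbell; dma_wmb() suffices
 *    for device-visible ordering, no full wmb()/SFENCE needed
 */
dma_wmb();
/* 3. doorbell: tell the NIC about the new tail */
writel(next_to_use, hw->hw_addr + tx_ring->tdt);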
Tested:
82576EB / x86; QEMU (qemu emulates an 8257x)
Signed-off-by: Venkatesh Srinivas <venkateshs@google.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Colin Ian King [Fri, 7 Jun 2019 18:19:20 +0000 (19:19 +0100)]
ixgbe: fix potential u32 overflow on shift
The u32 variable rem is shifted using u32 arithmetic; however, it is
passed to div_u64(), which expects the expression to be a u64. The
32-bit shift may potentially overflow, so cast rem to a u64 before
shifting to avoid this. Also remove the comment about overflow.
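The essence of the fix, sketched (variable names are illustrative):
u32 rem, shift, divisor;
u64 result;
/* before: 'rem << shift' is evaluated in 32-bit arithmetic and can wrap */
result = div_u64(rem << shift, divisor);
/* after: widen first so the shift happens in 64 bits */
result = div_u64((u64)rem << shift, divisor);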
Addresses-Coverity: ("Unintentional integer overflow")
Fixes: cd4583206990 ("ixgbe: implement support for SDP/PPS output on X550 hardware")
Fixes: 68d9676fc04e ("ixgbe: fix PTP SDP pin setup on X540 hardware")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Dann Frazier [Wed, 22 May 2019 23:22:58 +0000 (17:22 -0600)]
ixgbe: Avoid NULL pointer dereference with VF on non-IPsec hw
An ipsec structure will not be allocated if the hardware does not support
offload. Fixes the following Oops:
[ 191.045452] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
[ 191.054232] Mem abort info:
[ 191.057014] ESR = 0x96000004
[ 191.060057] Exception class = DABT (current EL), IL = 32 bits
[ 191.065963] SET = 0, FnV = 0
[ 191.069004] EA = 0, S1PTW = 0
[ 191.072132] Data abort info:
[ 191.074999] ISV = 0, ISS = 0x00000004
[ 191.078822] CM = 0, WnR = 0
[ 191.081780] user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000043d9e467
[ 191.088382] [0000000000000000] pgd=0000000000000000
[ 191.093252] Internal error: Oops: 96000004 [#1] SMP
[ 191.098119] Modules linked in: vhost_net vhost tap vfio_pci vfio_virqfd vfio_iommu_type1 vfio xt_CHECKSUM iptable_mangle ipt_MASQUERADE iptable_nat nf_nat_ipv4 nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter devlink ebtables ip6table_filter ip6_tables iptable_filter bpfilter ipmi_ssif nls_iso8859_1 input_leds joydev ipmi_si hns_roce_hw_v2 ipmi_devintf hns_roce ipmi_msghandler cppc_cpufreq sch_fq_codel ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables autofs4 ses enclosure btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_generic usbhid hid raid6_pq libcrc32c raid1 raid0 multipath linear ixgbevf hibmc_drm ttm
[ 191.168607] drm_kms_helper aes_ce_blk aes_ce_cipher syscopyarea crct10dif_ce sysfillrect ghash_ce qla2xxx sysimgblt sha2_ce sha256_arm64 hisi_sas_v3_hw fb_sys_fops sha1_ce uas nvme_fc mpt3sas ixgbe drm hisi_sas_main nvme_fabrics usb_storage hclge scsi_transport_fc ahci libsas hnae3 raid_class libahci xfrm_algo scsi_transport_sas mdio aes_neon_bs aes_neon_blk crypto_simd cryptd aes_arm64
[ 191.202952] CPU: 94 PID: 0 Comm: swapper/94 Not tainted 4.19.0-rc1+ #11
[ 191.209553] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI RC0 - V1.20.01 04/26/2019
[ 191.218064] pstate: 20400089 (nzCv daIf +PAN -UAO)
[ 191.222873] pc : ixgbe_ipsec_vf_clear+0x60/0xd0 [ixgbe]
[ 191.228093] lr : ixgbe_msg_task+0x2d0/0x1088 [ixgbe]
[ 191.233044] sp : ffff000009b3bcd0
[ 191.236346] x29: ffff000009b3bcd0 x28: 0000000000000000
[ 191.241647] x27: ffff000009628000 x26: 0000000000000000
[ 191.246946] x25: ffff803f652d7600 x24: 0000000000000004
[ 191.252246] x23: ffff803f6a718900 x22: 0000000000000000
[ 191.257546] x21: 0000000000000000 x20: 0000000000000000
[ 191.262845] x19: 0000000000000000 x18: 0000000000000000
[ 191.268144] x17: 0000000000000000 x16: 0000000000000000
[ 191.273443] x15: 0000000000000000 x14: 0000000100000026
[ 191.278742] x13: 0000000100000025 x12: ffff8a5f7fbe0df0
[ 191.284042] x11: 000000010000000b x10: 0000000000000040
[ 191.289341] x9 : 0000000000001100 x8 : ffff803f6a824fd8
[ 191.294640] x7 : ffff803f6a825098 x6 : 0000000000000001
[ 191.299939] x5 : ffff000000f0ffc0 x4 : 0000000000000000
[ 191.305238] x3 : ffff000028c00000 x2 : ffff803f652d7600
[ 191.310538] x1 : 0000000000000000 x0 : ffff000000f205f0
[ 191.315838] Process swapper/94 (pid: 0, stack limit = 0x00000000addfed5a)
[ 191.322613] Call trace:
[ 191.325055] ixgbe_ipsec_vf_clear+0x60/0xd0 [ixgbe]
[ 191.329927] ixgbe_msg_task+0x2d0/0x1088 [ixgbe]
[ 191.334536] ixgbe_msix_other+0x274/0x330 [ixgbe]
[ 191.339233] __handle_irq_event_percpu+0x78/0x270
[ 191.343924] handle_irq_event_percpu+0x40/0x98
[ 191.348355] handle_irq_event+0x50/0xa8
[ 191.352180] handle_fasteoi_irq+0xbc/0x148
[ 191.356263] generic_handle_irq+0x34/0x50
[ 191.360259] __handle_domain_irq+0x68/0xc0
[ 191.364343] gic_handle_irq+0x84/0x180
[ 191.368079] el1_irq+0xe8/0x180
[ 191.371208] arch_cpu_idle+0x30/0x1a8
[ 191.374860] do_idle+0x1dc/0x2a0
[ 191.378077] cpu_startup_entry+0x2c/0x30
[ 191.381988] secondary_start_kernel+0x150/0x1e0
[ 191.386506] Code: 6b15003f 54000320 f1404a9f 54000060 (79400260)
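The fix is essentially an early bail-out when the ipsec context was never allocated; a hedged sketch (function signature assumed from the trace above, not copied from the driver):
void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
{
	struct ixgbe_ipsec *ipsec = adapter->ipsec;
	if (!ipsec)	/* hardware without IPsec offload never allocates this */
		return;
	/* existing per-VF SA cleanup continues here */
}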
Fixes: eda0333ac2930 ("ixgbe: add VF IPsec management")
Signed-off-by: Dann Frazier <dann.frazier@canonical.com>
Acked-by: Shannon Nelson <snelson@pensando.io>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Miguel Bernal Marin [Mon, 27 Mar 2017 22:01:56 +0000 (16:01 -0600)]
e1000e: Increase pause and refresh time
Suggested-by: Tim Pepper <timothy.c.pepper@linux.intel.com>
Signed-off-by: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de>
Acked-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Gustavo A. R. Silva [Fri, 29 Mar 2019 23:38:47 +0000 (16:38 -0700)]
ice: Use struct_size() helper
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
int stuff;
struct boo entry[];
};
size = sizeof(struct foo) + count * sizeof(struct boo);
instance = alloc(size, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
size = struct_size(instance, entry, count);
This code was detected with the help of Coccinelle.
Signed-off-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
David S. Miller [Fri, 28 Jun 2019 21:45:34 +0000 (14:45 -0700)]
Merge branch 'net-sched-Add-txtime-assist-support-for-taprio'
Vedang Patel says:
====================
net/sched: Add txtime-assist support for taprio.
Changes in v6:
- Use _BITUL() instead of BIT() in UAPI for etf. (patch #1)
- Fix a bug reported by kbuild test bot in length_to_duration(). (patch #6)
- Remove an unused function (get_cycle_start()). (Patch #6)
Changes in v5:
- Commit message improved for the igb patch (patch #1).
- Fixed typo in commit message for etf patch (patch #2).
Changes in v4:
- Remove inline directive from functions in foo.c.
- Fix spacing in pkt_sched.h (for etf patch).
Changes in v3:
- Simplify implementation for taprio flags.
- txtime_delay can only be set if txtime-assist mode is enabled.
- txtime_delay and flags will only be visible in tc output if set by user.
- Minor changes in error reporting.
Changes in v2:
- Txtime-offload has now been renamed to txtime-assist mode.
- Renamed the offload parameter to flags.
- Removed the code which introduced the hardware offloading functionality.
Original Cover letter (with above changes included)
--------------------------------------------------
Currently, we are seeing packets being transmitted outside their
timeslices. We can confirm that the packets are being dequeued at the right
time. So, the delay is induced after the packet is dequeued, because
taprio, without any offloading, has no control of when a packet is actually
transmitted.
In order to solve this, we are making use of the txtime feature provided by
ETF qdisc. Hardware offloading needs to be supported by the ETF qdisc in
order to take advantage of this feature. The taprio qdisc will assign
txtime (in skb->tstamp) for all the packets which do not have the txtime
allocated via the SO_TXTIME socket option. For the packets which already
have SO_TXTIME set, taprio will validate whether the packet will be
transmitted in the correct interval.
In order to support this, the following parameters have been added:
- flags (taprio): This is added in order to support different offloading
modes which will be added in the future.
- txtime-delay (taprio): This indicates the minimum time it will take for
the packet to hit the wire after it reaches taprio_enqueue(). This is
useful in determining whether we can transmit the packet in the remaining
time if the gate corresponding to the packet is currently open.
- skip_sock_check (ETF): ETF currently drops any packet which does not have
the SO_TXTIME socket option set. This check can be skipped by specifying
this option.
Following is an example configuration:
tc qdisc replace dev $IFACE parent root handle 100 taprio \\
num_tc 3 \\
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \\
queues 1@0 1@0 1@0 \\
base-time $BASE_TIME \\
sched-entry S 01 300000 \\
sched-entry S 02 300000 \\
sched-entry S 04 400000 \\
flags 0x1 \\
txtime-delay 200000 \\
clockid CLOCK_TAI
tc qdisc replace dev $IFACE parent 100:1 etf \\
offload delta 200000 clockid CLOCK_TAI skip_sock_check
Here, the "flags" parameter is indicating that the txtime-assist mode is
enabled. Also, all the traffic classes have been assigned the same queue.
This is to prevent the traffic classes in the lower priority queues from
getting starved. Note that this configuration is specific to the i210
ethernet card. Other network cards where the hardware queues are given the
same priority, might be able to utilize more than one queue.
Following are some of the other highlights of the series:
- Fix a bug where hardware timestamping and SO_TXTIME options cannot be
used together. (Patch 1)
- Introduces the skip_skb_check option. (Patch 2)
- Make TxTime assist mode work with TCP packets (Patch 7).
The following changes are recommended to be done in order to get the best
performance from taprio in this mode:
ip link set dev enp1s0 mtu 1514
ethtool -K eth0 gso off
ethtool -K eth0 tso off
ethtool --set-eee eth0 eee off
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:19 +0000 (15:07 -0700)]
taprio: Adjust timestamps for TCP packets
When the taprio qdisc is running in txtime-assist mode, it will set the
launch time (in skb->tstamp) for all the packets which do not have the
SO_TXTIME socket option set. But TCP packets already have this value set,
and there it indicates the earliest departure time, expressed in the
CLOCK_MONOTONIC clock.
We need to respect the timestamp set by the TCP subsystem. So, convert
this time to the clock which taprio is using and ensure that the packet
is not transmitted before the deadline set by TCP.
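A hedged sketch of that conversion (variable names here are illustrative, not the exact taprio code):
/* TCP's earliest departure time lives in skb->tstamp as CLOCK_MONOTONIC;
 * taprio typically runs on CLOCK_TAI, so convert before comparing.
 * 'txtime' is the launch time taprio would otherwise pick.
 */
ktime_t edt_tai = ktime_mono_to_any(skb->tstamp, TK_OFFS_TAI);
skb->tstamp = max_t(ktime_t, txtime, edt_tai);	/* never launch before TCP's deadline */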
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:18 +0000 (15:07 -0700)]
taprio: make clock reference conversions easier
Later in this series we will need to transform from
CLOCK_MONOTONIC (used in TCP) to the clock reference used in TAPRIO.
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:17 +0000 (15:07 -0700)]
taprio: Add support for txtime-assist mode
Currently, we are seeing non-critical packets being transmitted outside of
their timeslice. We can confirm that the packets are being dequeued at the
right time, so the delay is induced on the hardware side. The most likely
reason is that the hardware queues are starving the lower priority queues.
In order to improve the performance of taprio, we make use of the txtime
feature provided by the ETF qdisc. In this mode, for all the packets which
do not have the SO_TXTIME option set, taprio will set the transmit
timestamp (in skb->tstamp), ensuring that the transmit time for the packet
is set to when the gate is open. If SO_TXTIME is set, the taprio qdisc
will validate whether the timestamp (in skb->tstamp) occurs when the gate
corresponding to the skb's traffic class is open.
The following two parameters are added to support this mode:
- flags: used to enable txtime-assist mode. Will also be used to enable
other modes (like hardware offloading) later.
- txtime-delay: This indicates the minimum time it will take for the packet
to hit the wire. This is useful in determining whether we can transmit
the packet in the remaining time if the gate corresponding to the packet is
currently open.
An example configuration for enabling txtime-assist:
tc qdisc replace dev eth0 parent root handle 100 taprio \\
num_tc 3 \\
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \\
queues 1@0 1@0 1@0 \\
base-time 1558653424279842568 \\
sched-entry S 01 300000 \\
sched-entry S 02 300000 \\
sched-entry S 04 400000 \\
flags 0x1 \\
txtime-delay 40000 \\
clockid CLOCK_TAI
tc qdisc replace dev $IFACE parent 100:1 etf skip_sock_check \\
offload delta 200000 clockid CLOCK_TAI
Note that all the traffic classes are mapped to the same queue. This is
only possible in taprio when txtime-assist is enabled. Also, note that the
ETF qdisc is enabled with offload mode set.
In this mode, if the gate for the packet's traffic class is open and the
complete packet can be transmitted, taprio will try to transmit the packet
immediately. This is done by setting skb->tstamp to current_time plus the
time delta indicated in the txtime-delay parameter. This parameter
indicates the time taken (in software) for the packet to reach the network
adapter.
If the packet cannot be transmitted in the current interval, or if the
packet's traffic class is not currently transmitting, skb->tstamp is set
to the next available timestamp value. This is tracked in the
next_launchtime parameter in struct sched_entry.
The behaviour w.r.t. admin and oper schedules is not changed from what is
present in software mode.
The transmit time is already known in advance, so we do not need the HR
timers to advance the schedule and wake up the dequeue side of taprio;
the HR timer is not run when this mode is enabled.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:16 +0000 (15:07 -0700)]
taprio: Remove inline directive
Remove the inline directive from length_to_duration(). We will let the
compiler make that decision.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:15 +0000 (15:07 -0700)]
taprio: calculate cycle_time when schedule is installed
The cycle time for a particular schedule is calculated only when it is
first installed, so it makes sense to calculate it once, right after the
'cycle_time' parameter has been parsed, and store it in cycle_time.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:14 +0000 (15:07 -0700)]
etf: Add skip_sock_check
Currently, etf expects a socket with the SO_TXTIME option set for each
packet it encounters, so it will drop all other packets. But in future
commits we are planning to add functionality where the tstamp value will
be set by another qdisc. Also, some packets which are generated from
within the kernel (e.g. ICMP packets) do not have any socket associated
with them.
So, this commit adds support for skip_sock_check. When this option is set,
etf will skip checking for a socket and the other associated options for
all skbs.
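Roughly what the bypass looks like inside the packet validity check (a hedged sketch; the real qdisc performs additional timestamp checks afterwards):
static bool is_packet_valid(struct Qdisc *sch, struct sk_buff *nskb)
{
	struct etf_sched_data *q = qdisc_priv(sch);
	if (q->skip_sock_check)
		goto skip;	/* trust the tstamp set by another qdisc or the kernel */
	if (!nskb->sk || !sock_flag(nskb->sk, SOCK_TXTIME))
		return false;	/* previous behaviour: no SO_TXTIME socket, drop */
skip:
	/* timestamp / deadline sanity checks continue here */
	return true;
}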
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:13 +0000 (15:07 -0700)]
etf: Don't use BIT() in UAPI headers.
The BIT() macro isn't exported as part of the UAPI interface, so the
compile test that ensures UAPI headers are self-contained fails. Use
_BITUL() instead.
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vedang Patel [Tue, 25 Jun 2019 22:07:12 +0000 (15:07 -0700)]
igb: clear out skb->tstamp after reading the txtime
If a packet which is utilizing the launchtime feature (via the SO_TXTIME
socket option) also requests a hardware transmit timestamp, the hardware
timestamp is not delivered to userspace. This is because the value in
skb->tstamp is mistaken for a software timestamp.
Applications, like ptp4l, request a hardware timestamp by setting the
SOF_TIMESTAMPING_TX_HARDWARE socket option. Whenever a new timestamp is
detected by the driver (this work is done in igb_ptp_tx_work(), which calls
igb_ptp_tx_hwtstamps() in igb_ptp.c[1]), it will queue the timestamp in the
ERR_QUEUE for userspace to read. When userspace is ready, it will issue a
recvmsg() call to collect this timestamp. The problem is in this recvmsg()
call: if skb->tstamp is not cleared out, it will be interpreted as a
software timestamp and the hardware tx timestamp will not be successfully
delivered to userspace. Look at skb_is_swtx_tstamp() and the callee
function __sock_recv_timestamp() in net/socket.c for more details.
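The fix itself is small; a hedged sketch of the idea in the Tx path (exact placement in igb is assumed):
/* the launch time has been consumed (copied into the Tx context
 * descriptor), so clear skb->tstamp; otherwise __sock_recv_timestamp()
 * would treat it as a software timestamp and suppress the hardware one
 */
skb->tstamp = ktime_set(0, 0);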
Signed-off-by: Vedang Patel <vedang.patel@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 28 Jun 2019 21:36:25 +0000 (14:36 -0700)]
Merge branch 'mirred-recurse'
John Hurley says:
====================
Track recursive calls in TC act_mirred
These patches aim to prevent act_mirred causing stack overflow events from
recursively calling packet xmit or receive functions. Such events can
occur with poor TC configuration that causes packets to travel in loops
within the system.
Florian Westphal advises that a recursion crash and packet loops are
separate issues and should be treated as such. David Miller further points
out that pcpu counters cannot track the precise skb context required to
detect loops. Hence these patches are not aimed at detecting packet loops
but rather at preventing stack overflows arising from such loops.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Mon, 24 Jun 2019 22:13:36 +0000 (23:13 +0100)]
net: sched: protect against stack overflow in TC act_mirred
TC hooks allow the application of filters and actions to packets at both
ingress and egress of the network stack. It is possible, with poor
configuration, that this can produce loops whereby an ingress hook calls
a mirred egress action that has an egress hook that redirects back to
the first ingress, etc. The TC core classifier protects against loops when
doing reclassification, but there is no protection against a packet looping
between multiple hooks and recursively calling act_mirred. This can lead
to stack overflow panics.
Add a per-CPU counter to act_mirred that is incremented for each recursive
call of the action function when processing a packet. If a limit is
exceeded, the packet is dropped and the CPU counter is reset.
Note that this patch does not protect against loops in TC datapaths. Its
aim is to prevent stack overflow kernel panics that can be a consequence
of such loops.
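A hedged sketch of the guard (the limit value and names are assumptions, not the exact act_mirred code):
#define MIRRED_RECURSION_LIMIT	4
static DEFINE_PER_CPU(unsigned int, mirred_rec_level);
/* in the act() handler, around the nested xmit/receive call */
unsigned int rec_level = __this_cpu_inc_return(mirred_rec_level);
if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
	net_warn_ratelimited("mirred: recursion limit hit, dropping packet\n");
	__this_cpu_dec(mirred_rec_level);
	return TC_ACT_SHOT;	/* drop rather than risk a stack overflow */
}
/* ... mirror / redirect the skb ... */
__this_cpu_dec(mirred_rec_level);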
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>