openwrt/staging/blogic.git
6 years ago sfc: protect list of RSS contexts under a mutex
Edward Cree [Tue, 27 Mar 2018 16:44:36 +0000 (17:44 +0100)]
sfc: protect list of RSS contexts under a mutex

Otherwise races are possible between ethtool ops and
 efx_ef10_rx_restore_rss_contexts().
Also, don't try to perform the restore on every reset, only after an MC
 reboot, otherwise we'll leak RSS contexts on the NIC.
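
A minimal sketch of the locking pattern described above (the structure and field names here are illustrative, not necessarily the driver's):

/* Illustrative only: serialise ethtool ops and the MC-reboot restore path
 * with a mutex around the RSS-context list.
 */
#include <linux/list.h>
#include <linux/mutex.h>

struct example_rss_ctx_list {
    struct mutex lock;           /* protects 'contexts' */
    struct list_head contexts;   /* RSS context entries live here */
};

static void example_restore_rss_contexts(struct example_rss_ctx_list *l)
{
    struct list_head *pos;

    mutex_lock(&l->lock);        /* sleepable context, so a mutex is fine */
    list_for_each(pos, &l->contexts) {
        /* re-program this context on the NIC after an MC reboot */
    }
    mutex_unlock(&l->lock);
}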

Fixes: 42356d9a137b ("sfc: support RSS spreading of ethtool ntuple filters")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago sfc: return a better error if filter insertion collides with MC reboot
Edward Cree [Tue, 27 Mar 2018 16:44:21 +0000 (17:44 +0100)]
sfc: return a better error if filter insertion collides with MC reboot

If some other operation gets the MCDI lock ahead of us and performs an MC
 reboot, then our attempt to insert the filter will fail with EINVAL,
 because the destination VI (spec->dmaq_id, MC_CMD_FILTER_OP_IN_RX_QUEUE) does
 not exist.  But the caller's request (which might e.g. be an ethtool ntuple
 request from userland) isn't invalid, it just got unlucky; so return EAGAIN.
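
A sketch of the resulting error remapping (the helper names below are hypothetical, used only to illustrate the idea):

/* Illustrative only: a spurious EINVAL caused by a concurrent MC reboot is
 * reported as EAGAIN so the caller (e.g. an ethtool ntuple request) retries.
 */
static int example_filter_insert_checked(struct efx_nic *efx,
                                         struct efx_filter_spec *spec)
{
    int rc = example_filter_insert_hw(efx, spec);   /* hypothetical helper */

    if (rc == -EINVAL && example_mc_rebooted(efx))  /* hypothetical check */
        rc = -EAGAIN;   /* the request was valid, it just got unlucky */
    return rc;
}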

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago sfc: use a semaphore to lock farch filters too
Edward Cree [Tue, 27 Mar 2018 16:42:57 +0000 (17:42 +0100)]
sfc: use a semaphore to lock farch filters too

With this change, the spinlock efx->filter_lock is no longer used and is
 thus removed.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago sfc: give ef10 its own rwsem in the filter table instead of filter_lock
Edward Cree [Tue, 27 Mar 2018 16:42:28 +0000 (17:42 +0100)]
sfc: give ef10 its own rwsem in the filter table instead of filter_lock

efx->filter_lock remains in place for use on farch, but EF10 now ignores it.
EFX_EF10_FILTER_FLAG_BUSY is no longer needed, hence it is removed.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago sfc: replace asynchronous filter operations
Edward Cree [Tue, 27 Mar 2018 16:41:59 +0000 (17:41 +0100)]
sfc: replace asynchronous filter operations

Instead of having an efx->type->filter_rfs_insert() method, just use
 work items with a worker function that calls efx->type->filter_insert().
The only user of this is efx_filter_rfs(), which now queues a call to
 efx_filter_rfs_work().
Similarly, efx_filter_rfs_expire() is now a worker function called on a
 new channel->filter_work work_struct, so the method
 efx->type->filter_rfs_expire_one() is no longer called in atomic context.
 We also add a new mutex efx->rps_mutex to protect the RPS state (efx->
 rps_expire_channel, efx->rps_expire_index, and channel->rps_flow_id) so
 that the taking of efx->filter_lock can be moved to
 efx->type->filter_rfs_expire_one().
Thus, all filter table functions are now called in a sleepable context,
 allowing them to use sleeping locks in a future patch.
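
A rough sketch of that work-item pattern (simplified; the request structure and its fields are illustrative rather than the exact driver code):

#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_rfs_request {
    struct work_struct work;
    struct efx_filter_spec spec;   /* copied while still in atomic context */
    struct efx_nic *efx;
};

/* Runs in process context, so the filter table code may take sleeping locks. */
static void example_filter_rfs_work(struct work_struct *work)
{
    struct example_rfs_request *req =
        container_of(work, struct example_rfs_request, work);

    req->efx->type->filter_insert(req->efx, &req->spec, true);
    kfree(req);
}

/* Called from the atomic RFS path: queue the insertion instead of doing it. */
static int example_filter_rfs(struct efx_nic *efx,
                              const struct efx_filter_spec *spec)
{
    struct example_rfs_request *req = kzalloc(sizeof(*req), GFP_ATOMIC);

    if (!req)
        return -ENOMEM;
    req->efx = efx;
    req->spec = *spec;
    INIT_WORK(&req->work, example_filter_rfs_work);
    schedule_work(&req->work);
    return 0;
}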

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago Merge branch 'pernet-all-async'
David S. Miller [Tue, 27 Mar 2018 17:18:10 +0000 (13:18 -0400)]
Merge branch 'pernet-all-async'

Kirill Tkhai says:

====================
Make pernet_operations always read locked

All the pernet_operations are converted, and the last one
is in this patchset (nfsd_net_ops acked by J. Bruce Fields).
So, it's time to kill the pernet_operations::async field
and make setup_net() and cleanup_net() always require
the rwsem to be taken only for read.

All further pernet_operations have to be developed to fit
this rule. Some of previous patches added a comment to
struct pernet_operations about that.

Also, this patchset renames net_sem to pernet_ops_rwsem
to make the area the rwsem covers more clearly visible,
and adds more comments.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: Add more comments
Kirill Tkhai [Tue, 27 Mar 2018 15:02:32 +0000 (18:02 +0300)]
net: Add more comments

This adds comments to different places to improve
readability.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: Rename net_sem to pernet_ops_rwsem
Kirill Tkhai [Tue, 27 Mar 2018 15:02:23 +0000 (18:02 +0300)]
net: Rename net_sem to pernet_ops_rwsem

net_sem is a vague name that does not say which area the semaphore
protects, so it is better to make that more explicit.

Rename it to pernet_ops_rwsem for better readability and
intelligibility.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: Drop pernet_operations::async
Kirill Tkhai [Tue, 27 Mar 2018 15:02:13 +0000 (18:02 +0300)]
net: Drop pernet_operations::async

Synchronous pernet_operations are not allowed anymore.
All are asynchronous. So, drop the structure member.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: Reflect all pernet_operations are converted
Kirill Tkhai [Tue, 27 Mar 2018 15:02:01 +0000 (18:02 +0300)]
net: Reflect all pernet_operations are converted

All pernet_operations are reviewed and converted, hooray!
Reflect this in the core code: setup_net() and cleanup_net()
now always take down_read().
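
A minimal sketch of the resulting locking pattern (function names are illustrative; the real code lives in net/core/net_namespace.c):

#include <linux/rwsem.h>

static DECLARE_RWSEM(pernet_ops_rwsem);

/* Net setup/teardown only needs the rwsem for read, so several network
 * namespaces can be created or destroyed in parallel.
 */
static void example_cleanup_net(void)
{
    down_read(&pernet_ops_rwsem);
    /* ... run ->exit()/->exit_batch() of every registered pernet_operations ... */
    up_read(&pernet_ops_rwsem);
}

/* Registration changes the ops list itself, so it takes the rwsem for write. */
static int example_register_pernet_subsys(struct pernet_operations *ops)
{
    int err = 0;

    down_write(&pernet_ops_rwsem);
    /* ... link 'ops' into the list and call ->init() for existing nets ... */
    up_write(&pernet_ops_rwsem);
    return err;
}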

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: Convert nfsd_net_ops
Kirill Tkhai [Tue, 27 Mar 2018 15:01:51 +0000 (18:01 +0300)]
net: Convert nfsd_net_ops

These pernet_operations look similar to rpcsec_gss_net_ops;
they just create and destroy more caches. So, they also
can be async.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: mvpp2: Use relaxed I/O in data path
Yan Markman [Tue, 27 Mar 2018 14:49:05 +0000 (16:49 +0200)]
net: mvpp2: Use relaxed I/O in data path

Use relaxed I/O on the hot path. This achieves significant performance
improvements. On a 10G link, this makes a basic iperf TCP test go from
an average of 4.5 Gbits/sec to about 9.40 Gbits/sec.
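
Schematically, the change is of this shape (offsets and helper names are made up; the point is readl_relaxed()/writel_relaxed() instead of readl()/writel() on per-packet accesses):

#include <linux/io.h>
#include <linux/types.h>

/* The non-relaxed accessors imply memory barriers that are unnecessary for
 * many per-packet register reads/writes; the relaxed variants skip them.
 */
static inline u32 example_rx_desc_read(void __iomem *base, unsigned int off)
{
    return readl_relaxed(base + off);
}

static inline void example_tx_desc_write(void __iomem *base, unsigned int off,
                                         u32 val)
{
    writel_relaxed(val, base + off);
}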

Signed-off-by: Yan Markman <ymarkman@marvell.com>
[Maxime: Commit message, cosmetic changes]
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago Merge tag 'mlx5-updates-2018-03-22' of git://git.kernel.org/pub/scm/linux/kernel...
David S. Miller [Tue, 27 Mar 2018 15:05:23 +0000 (11:05 -0400)]
Merge tag 'mlx5-updates-2018-03-22' of git://git./linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2018-03-22 (Misc updates)

This series includes misc updates for the mlx5 core and netdev driver.

Highlights:

From Inbar, three patches to add support for PFC stall prevention
statistics and to enable/disable it through a new ethtool tunable, as requested
in a previous submission.

From Moshe, four patches adding more drop counters:
- drop counter for netdev steering miss
- drop counter for when the VF logical link is down
- drop counter for when the netdev logical link is down.

From Or, three patches to support vlan push/pop offload via a tc HW action
for newer HW (ConnectX-5 and onward), using HW steering flow actions rather
than the emulated path used for older HW brands.

And five more small trivial patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago liquidio: Removed duplicate Tx queue status check
Intiyaz Basha [Mon, 26 Mar 2018 20:40:27 +0000 (13:40 -0700)]
liquidio: Removed duplicate Tx queue status check

NAPI checks the Tx queue status and wakes the Tx queue if required.
The same operation is done while freeing every Tx buffer,
so remove the duplicate Tx queue status check from the Tx
buffer free functions.

Signed-off-by: Intiyaz Basha <intiyaz.basha@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago ipv6: addrconf: Use normal debugging style
Joe Perches [Mon, 26 Mar 2018 15:35:01 +0000 (08:35 -0700)]
ipv6: addrconf: Use normal debugging style

Remove local ADBG macro and use netdev_dbg/pr_debug

Miscellanea:

o Remove unnecessary debug message after allocation failure as there
  already is a dump_stack() on the failure paths
o Leave the allocation failure message on snmp6_alloc_dev as there
  is one code path that does not do a dump_stack()

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago tc-testing: Correct compound statements for namespace execution
Lucas Bates [Mon, 26 Mar 2018 14:46:14 +0000 (10:46 -0400)]
tc-testing: Correct compound statements for namespace execution

If tdc is executing test cases inside a namespace, only the
first command in a compound statement will be executed inside
the namespace by tdc. As a result, the subsequent commands
are not executed inside the namespace and the test will fail.

Example:

for i in {x..y}; do args="foo"; done && tc actions add $args

The namespace execution feature will prepend 'ip netns exec'
to the command:

ip netns exec tcut for i in {x..y}; do args="foo"; done && \
  tc actions add $args

So the actual tc command is not parsed by the shell as being
part of the namespace execution.

Enclosing these compound statements inside a bash invocation
with proper escape characters resolves the problem by creating
a subshell inside the namespace.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago tipc: tipc_node_create() can be static
Wei Yongjun [Mon, 26 Mar 2018 14:33:13 +0000 (14:33 +0000)]
tipc: tipc_node_create() can be static

Fixes the following sparse warning:

net/tipc/node.c:336:18: warning:
 symbol 'tipc_node_create' was not declared. Should it be static?

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago tipc: fix error handling in tipc_udp_enable()
Wei Yongjun [Mon, 26 Mar 2018 14:32:44 +0000 (14:32 +0000)]
tipc: fix error handling in tipc_udp_enable()

Release the allocated resource before returning from the error handling
case in tipc_udp_enable(), otherwise it will cause a memory leak.

Fixes: 52dfae5c85a4 ("tipc: obtain node identity from interface by default")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: aquantia: Make function hw_atl_utils_mpi_set_speed() static
Wei Yongjun [Mon, 26 Mar 2018 14:32:27 +0000 (14:32 +0000)]
net: aquantia: Make function hw_atl_utils_mpi_set_speed() static

Fixes the following sparse warning:

drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c:508:5: warning:
 symbol 'hw_atl_utils_mpi_set_speed' was not declared. Should it be static?

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago Merge branch 'net-mvpp2-Remove-unnecessary-dynamic-allocs'
David S. Miller [Tue, 27 Mar 2018 14:47:23 +0000 (10:47 -0400)]
Merge branch 'net-mvpp2-Remove-unnecessary-dynamic-allocs'

Maxime Chevallier says:

====================
net: mvpp2: Remove unnecessary dynamic allocs

Some utility functions in mvpp2 make use of dynamic alloc to exchange temporary
objects representing Parser Entries (which are generic filtering entries in the
PPv2 controller).

These objects are small (44 bytes each), we can use the stack to exchange them.

Some previous discussion on this topic showed that mvpp2_prs_hw_read, which
initializes a struct mvpp2_prs_entry based on one of its fields, can easily lead
to erroneous code if we don't zero out the struct beforehand:

https://lkml.org/lkml/2018/3/21/739

To fix this, I propose to rename mvpp2_prs_hw_read into mvpp2_prs_init_from_hw,
make it zero-out the struct and take the index as a parameter. That's what's
done in the first patch of the series.

The second patch is the V3 of
("net: mvpp2: Don't use dynamic allocs for local variables"), making use of
mvpp2_prs_init_from_hw and taking previous comments into account.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: mvpp2: Don't use dynamic allocs for local variables
Maxime Chevallier [Mon, 26 Mar 2018 13:34:23 +0000 (15:34 +0200)]
net: mvpp2: Don't use dynamic allocs for local variables

Some helper functions that search for given entries in the TCAM filter
on the PPv2 controller make use of dynamically allocated temporary variables,
allocated with GFP_KERNEL. These functions can be called in atomic
context, and dynamic allocation is not really needed in these cases anyway.

This commit gets rid of dynamic allocs and use stack allocation in the
following functions, and where they're used :
 - mvpp2_prs_flow_find
 - mvpp2_prs_vlan_find
 - mvpp2_prs_double_vlan_find
 - mvpp2_prs_mac_da_range_find

For all these functions, instead of returning a temporary object
representing the TCAM entry, we simply return the TCAM id that matches
the requested entry.
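
Schematically, the conversion in each of these helpers looks like the following sketch (simplified; the match check is hypothetical and the helper name carries an _example suffix):

/* Before: a temporary entry was kzalloc'd with GFP_KERNEL, which is not
 * allowed in atomic context. After: the ~44-byte entry lives on the stack
 * and the function returns a TCAM id instead of a heap-allocated object.
 */
static int mvpp2_prs_vlan_find_example(struct mvpp2 *priv, unsigned short tpid)
{
    struct mvpp2_prs_entry pe;   /* stack allocation, no kzalloc()/kfree() */
    int tid;

    for (tid = 0; tid < MVPP2_PRS_TCAM_SRAM_SIZE; tid++) {
        mvpp2_prs_init_from_hw(priv, &pe, tid);     /* see the patch below */
        if (example_entry_matches_vlan(&pe, tpid))  /* hypothetical check */
            return tid;
    }
    return -ENOENT;
}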

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net: mvpp2: Make mvpp2_prs_hw_read a parser entry init function
Maxime Chevallier [Mon, 26 Mar 2018 13:34:22 +0000 (15:34 +0200)]
net: mvpp2: Make mvpp2_prs_hw_read a parser entry init function

The mvpp2_prs_hw_read function uses the 'index' field of the struct
mvpp2_prs_entry to initialize the rest of the fields. This is confusing
from the caller's perspective, as the caller has to manipulate a struct
that is not entirely initialized.

This commit makes it an init function for prs_entry, by passing it the
index as a parameter. The function now zeroes the entry, and sets the
index field before doing all other init from HW.

The function is renamed 'mvpp2_prs_init_from_hw' to make that clear.
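
Based on that description, the init helper is roughly of this shape (signature simplified):

#include <linux/string.h>

/* Zero the entry first so no stale fields leak to the caller, record the
 * TCAM index, then read the rest of the entry back from the hardware.
 */
static void mvpp2_prs_init_from_hw_example(struct mvpp2 *priv,
                                           struct mvpp2_prs_entry *pe, int tid)
{
    memset(pe, 0, sizeof(*pe));
    pe->index = tid;
    /* ... read the TCAM and SRAM words for 'tid' into pe ... */
}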

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago net/ncsi: check for null return from call to nla_nest_start
Colin Ian King [Mon, 26 Mar 2018 11:27:12 +0000 (12:27 +0100)]
net/ncsi: check for null return from call to nla_nest_start

The call to nla_nest_start calls nla_put, which can fail and lead to a NULL
return, so it's possible for attr to be NULL and we can potentially
get a NULL pointer dereference on attr.  Fix this by checking for
a NULL return.
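
The fix is essentially a fragment of this shape (the attribute constant and the unwinding shown here are illustrative):

    attr = nla_nest_start(skb, NCSI_ATTR_PACKAGE_LIST);
    if (!attr) {                      /* nla_nest_start() can return NULL */
        genlmsg_cancel(skb, hdr);     /* unwind whatever was already added */
        return -ENOMEM;               /* error code chosen for illustration */
    }
    /* ... fill the nested attributes ... */
    nla_nest_end(skb, attr);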

Detected by CoverityScan, CID#1466125 ("Dereference null return")

Fixes: 955dc68cb9b2 ("net/ncsi: Add generic netlink family")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago sctp: remove unnecessary asoc in sctp_has_association
Xin Long [Mon, 26 Mar 2018 08:55:00 +0000 (16:55 +0800)]
sctp: remove unnecessary asoc in sctp_has_association

After commit dae399d7fdee ("sctp: hold transport instead of assoc
when lookup assoc in rx path"), sctp_has_association puts the transport
instead of the asoc, so the variable 'asoc' is not used any more.

So this patch removes it and, while at it, also changes the
return type of sctp_has_association to bool, and does the same
for its caller sctp_endpoint_is_peeled_off.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Tue, 27 Mar 2018 13:56:13 +0000 (09:56 -0400)]
Merge branch '40GbE' of git://git./linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2018-03-26

This series contains updates to i40e only.

Jake provides several patches which remove the need for cmpxchg64(),
starting with moving I40E_FLAG_[UDP]_FILTER_SYNC from pf->flags to pf->state,
since they are modified during run time, possibly when the RTNL lock is not
held, so they should be state bits and not flags.  Additional
"flags" which should really be state fields were moved into pf->state.  Ensure we hold
the RTNL lock for the entire sequence of preparing for reset and when
resuming, which will protect the flags related to the interrupt scheme under
the RTNL lock so that their modification is properly threaded.  Finally,
clean up the use of cmpxchg64() since it is no longer needed, and clean up
the holes in the feature flags created by moving some flags to the state
field.

Björn Töpel adds XDP_REDIRECT support as well as tweaking the page
counting for XDP_REDIRECT so that it will function properly.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Mon, 26 Mar 2018 22:20:02 +0000 (18:20 -0400)]
Merge branch '100GbE' of git://git./linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
100GbE Intel Wired LAN Driver Updates 2018-03-26

This patch series adds the ice driver, which will support the Intel(R)
E800 Series of network devices.

This is the first phase in the release of this driver where we implement
basic transmit and receive. The idea behind the multi-phase release is to
aid in code review as well as testing. Subsequent phases will implement
advanced features (like SR-IOV, tunnelling, flow director, QoS, etc.) that
build upon the previous phase(s). Each phase will be submitted as a patch
series.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago i40e: add support for XDP_REDIRECT
Björn Töpel [Thu, 22 Mar 2018 15:14:34 +0000 (16:14 +0100)]
i40e: add support for XDP_REDIRECT

The driver now acts upon the XDP_REDIRECT return action. Two new ndos
are implemented, ndo_xdp_xmit and ndo_xdp_flush.

The XDP_REDIRECT action enables an XDP program to redirect frames to other
netdevs.
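
In the Rx clean-up path this amounts to one more verdict case, roughly as in the sketch below (built around the generic xdp_do_redirect()/xdp_do_flush_map() helpers; the surrounding code is elided and simplified):

    switch (act) {
    case XDP_PASS:
        break;
    case XDP_TX:
        result = i40e_xmit_xdp_ring(xdp, xdp_ring);
        break;
    case XDP_REDIRECT:
        /* Hand the frame to whatever netdev the BPF program chose. */
        err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
        result = !err ? I40E_XDP_TX : I40E_XDP_CONSUMED;
        break;
    default:
        bpf_warn_invalid_xdp_action(act);
        /* fall through */
    case XDP_ABORTED:
    case XDP_DROP:
        result = I40E_XDP_CONSUMED;
        break;
    }
    /* ... and once per NAPI poll, after the loop: xdp_do_flush_map(); */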

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: tweak page counting for XDP_REDIRECT
Björn Töpel [Thu, 22 Mar 2018 15:14:33 +0000 (16:14 +0100)]
i40e: tweak page counting for XDP_REDIRECT

This commit tweaks the page counting for XDP_REDIRECT to function
properly. XDP_REDIRECT support will be added in a future commit.

The current page counting scheme assumes that the reference count
cannot decrease until the received frame is sent to the upper layers
of the networking stack. This assumption does not hold for the
XDP_REDIRECT action, since a page (pointed to by the xdp_buff) can have
its reference count decreased via the xdp_do_redirect call.

To work around that, we now start off with a large page count and then
don't allow a refcount less than two.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: re-number feature flags to remove gaps
Jacob Keller [Fri, 16 Mar 2018 08:26:37 +0000 (01:26 -0700)]
i40e: re-number feature flags to remove gaps

Remove the gaps created by the recent refactor of various feature flags
that have moved to the state field. Use only a u32 now that we have
fewer than 32 flags in the field.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: stop using cmpxchg flow in i40e_set_priv_flags()
Jacob Keller [Fri, 16 Mar 2018 08:26:36 +0000 (01:26 -0700)]
i40e: stop using cmpxchg flow in i40e_set_priv_flags()

Now that the only places which modify flags are either (a) during
initialization prior to creating a netdevice, or (b) while holding the
rtnl lock, we no longer need the cmpxchg64 call in i40e_set_priv_flags.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: hold the RTNL lock while changing interrupt schemes
Jacob Keller [Fri, 16 Mar 2018 08:26:35 +0000 (01:26 -0700)]
i40e: hold the RTNL lock while changing interrupt schemes

When we suspend and resume, we need to clear and re-enable the interrupt
scheme. This was previously not done while holding the RTNL lock, which
could be problematic, because we are actually destroying and re-creating
queues.

Hold the RTNL lock for the entire sequence of preparing for reset, and
when resuming. This additionally protects the flags related to interrupt
scheme under RTNL lock so that their modification is properly threaded.

This is part of a larger effort to remove the need for cmpxchg64 in
i40e_set_priv_flags().

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: move client flags into state bits
Jacob Keller [Fri, 16 Mar 2018 08:26:34 +0000 (01:26 -0700)]
i40e: move client flags into state bits

The iWarp client flags are all potentially changed when the RTNL lock is
not held, so they should not be part of the pf->flags variable. Instead,
move them into the state field so that we can use atomic bit operations.

This is part of a larger effort to remove cmpxchg64 in
i40e_set_priv_flags()

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: move I40E_FLAG_TEMP_LINK_POLLING to state field
Jacob Keller [Fri, 16 Mar 2018 08:26:33 +0000 (01:26 -0700)]
i40e: move I40E_FLAG_TEMP_LINK_POLLING to state field

This flag is modified outside of the RTNL lock and thus should not be
part of the pf->flags variable.

Use a state bit instead, so that we can use atomic bit operations.

This is part of a larger effort to remove cmpxchg64 in
i40e_set_priv_flags()

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago net/mlx5e: Add VLAN offload features to hw_enc_features
Aviv Heller [Thu, 17 Aug 2017 13:44:16 +0000 (16:44 +0300)]
net/mlx5e: Add VLAN offload features to hw_enc_features

We support outer VLAN offload in driver and HW regardless of whether
an encapsulation is present in the next headers.

Exposing this in hw_enc_features will allow us to offload outer VLANs
in cases where encapsulation protocols like VXLAN and IPsec are used.
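
Concretely, this boils down to ORing the VLAN offload bits into hw_enc_features during netdev setup, e.g. (a sketch, not the exact driver hunk):

    netdev->hw_enc_features |= NETIF_F_HW_VLAN_CTAG_TX |
                               NETIF_F_HW_VLAN_CTAG_RX;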

Signed-off-by: Aviv Heller <avivh@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: Add a helper macro in set features ndo
Gal Pressman [Thu, 11 Jan 2018 16:46:20 +0000 (18:46 +0200)]
net/mlx5e: Add a helper macro in set features ndo

Add a new macro to prevent copy-pasting the same code for each new
feature.
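
The idea is a thin wrapper so that each feature toggle becomes a single line in the ndo; a sketch of such a macro and helper (names carry an _example suffix because they are illustrative):

#define MLX5E_HANDLE_FEATURE_EXAMPLE(feature, handler) \
    mlx5e_handle_feature_example(netdev, features, feature, handler)

/* Call 'handler' only when 'feature' actually changed in this request. */
static int mlx5e_handle_feature_example(struct net_device *netdev,
                                        netdev_features_t features,
                                        netdev_features_t feature,
                                        int (*handler)(struct net_device *, bool))
{
    bool enable = !!(features & feature);

    if (!((netdev->features ^ features) & feature))
        return 0;   /* no change requested for this feature bit */

    return handler(netdev, enable);
}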

Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: Make choose LRO timeout function static
Gal Pressman [Wed, 31 Jan 2018 12:45:40 +0000 (14:45 +0200)]
net/mlx5e: Make choose LRO timeout function static

The function is used in en_main.c only, so we can make it static and remove
its declaration from en.h.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: Remove redundant check in get ethtool stats
Gal Pressman [Thu, 11 Jan 2018 14:24:06 +0000 (16:24 +0200)]
net/mlx5e: Remove redundant check in get ethtool stats

The ethtool core code makes sure data isn't NULL before calling
get_ethtool_stats; testing it again in the driver is redundant.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5: Protect from command bit overflow
Leon Romanovsky [Tue, 2 Jan 2018 14:49:56 +0000 (16:49 +0200)]
net/mlx5: Protect from command bit overflow

A system with CONFIG_UBSAN enabled produces the following error
during driver initialization. The reason is that max_reg_cmds can be
large enough to cause "1 << max_reg_cmds" to overflow the unsigned long.

================================================================================
UBSAN: Undefined behaviour in drivers/net/ethernet/mellanox/mlx5/core/cmd.c:1805:42
signed integer overflow:
-2147483648 - 1 cannot be represented in type 'int'
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2-00032-g06cda2358d9b-dirty #724
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
Call Trace:
 dump_stack+0xe9/0x18f
 ? dma_virt_alloc+0x81/0x81
 ubsan_epilogue+0xe/0x4e
 handle_overflow+0x187/0x20c
 mlx5_cmd_init+0x73a/0x12b0
 mlx5_load_one+0x1c3d/0x1d30
 init_one+0xd02/0xf10
 pci_device_probe+0x26c/0x3b0
 driver_probe_device+0x622/0xb40
 __driver_attach+0x175/0x1b0
 bus_for_each_dev+0xef/0x190
 bus_add_driver+0x2db/0x490
 driver_register+0x16b/0x1e0
 __pci_register_driver+0x177/0x1b0
 init+0x6d/0x92
 do_one_initcall+0x15b/0x270
 kernel_init_freeable+0x2d8/0x3d0
 kernel_init+0x14/0x190
 ret_from_fork+0x24/0x30
================================================================================
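
The overflow itself, and one safe way to express the mask, look roughly like this (a sketch; the field width and the actual upstream fix may differ):

    /* With a signed 32-bit shift, max_reg_cmds close to 31 overflows:
     *
     *     cmd->bitmask = (1 << cmd->max_reg_cmds) - 1;    // UB for >= 31
     *
     * Computing the mask at the width of the bitmask field avoids that:
     */
    cmd->bitmask = (1UL << cmd->max_reg_cmds) - 1;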

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: Offload tc vlan push/pop using HW action
Or Gerlitz [Wed, 31 Jan 2018 16:36:03 +0000 (18:36 +0200)]
net/mlx5e: Offload tc vlan push/pop using HW action

Currently, we are emulating the offload of vlan push/pop actions using
global setup as done by commit f5f82476090f ("net/mlx5: E-Switch, Support
VLAN actions in the offloads mode"). With newer NICs, we can apply a flow
action for that purpose; do so while keeping the emulated path for the
older HW brands.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5: Add core support for vlan push/pop steering action
Or Gerlitz [Sun, 28 Jan 2018 18:14:20 +0000 (20:14 +0200)]
net/mlx5: Add core support for vlan push/pop steering action

Newer NICs (ConnectX-5 and onward) can apply vlan pop or push as an
action taking place during flow steering. Add the core bits for that.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5: E-Switch, Use same source for offloaded actions check
Or Gerlitz [Tue, 30 Jan 2018 14:13:28 +0000 (14:13 +0000)]
net/mlx5: E-Switch, Use same source for offloaded actions check

Align the checks for modify header and encap actions with the
rest of the code.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: Add interface down dropped packets statistics
Moshe Shemesh [Thu, 8 Feb 2018 13:09:57 +0000 (15:09 +0200)]
net/mlx5e: Add interface down dropped packets statistics

Added the following packet drop counter:
Rx interface down dropped packets - counts packets which were received
while the ETH interface was down.
This counter will be shown on ethtool as a new counter called
rx_if_down_packets.

The implementation allocates a q_counter for drop rq which gets all the
received traffic while the interface is down.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5: Add packet dropped while vport down statistics
Moshe Shemesh [Sat, 13 Jan 2018 22:56:25 +0000 (00:56 +0200)]
net/mlx5: Add packet dropped while vport down statistics

Added the following 'packets dropped while vport down' statistics:

Rx dropped while vport down - counts packets which were steered by
e-switch to a vport, but dropped since the vport was down. This counter
will be shown on ip link tool as part of the vport rx_dropped counter.

Tx dropped while vport down - counts packets which were transmitted by
a vport, but dropped due to vport logical link down. This counter
will be shown on ip link tool as part of the vport tx_dropped counter.

The counters are read from FW by command QUERY_VNIC_ENV.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: Add vnic steering drop statistics
Moshe Shemesh [Tue, 26 Dec 2017 14:46:29 +0000 (16:46 +0200)]
net/mlx5e: Add vnic steering drop statistics

Added the following packet drop counter:
Rx steering missed dropped packets - counts packets which were dropped
due to a miss on the NIC rx steering rules.
This counter will be shown on ethtool as a new counter called
rx_steer_missed_packets.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5: Add support for QUERY_VNIC_ENV command
Moshe Shemesh [Sun, 7 Jan 2018 14:45:27 +0000 (16:45 +0200)]
net/mlx5: Add support for QUERY_VNIC_ENV command

Add support for new FW command QUERY_VNIC_ENV.
The command is used by the driver to query vnic diagnostic statistics
from FW.

Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago net/mlx5e: PFC stall prevention support
Inbar Karmy [Mon, 20 Nov 2017 16:06:20 +0000 (18:06 +0200)]
net/mlx5e: PFC stall prevention support

Implement set/get functions to configure the PFC stall prevention
timeout via the tunables API through ethtool.
By default the stall prevention timeout is configured to 8 sec.
The timeout range is 80-8000 msec.

Enabling stall prevention with the auto timeout will set
the timeout to 100 msec.

Signed-off-by: Inbar Karmy <inbark@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago ethtool: Add support for configuring PFC stall prevention in ethtool
Inbar Karmy [Mon, 20 Nov 2017 14:14:30 +0000 (16:14 +0200)]
ethtool: Add support for configuring PFC stall prevention in ethtool

In the event where the device unexpectedly becomes unresponsive
for a long period of time, the flow control mechanism may propagate
pause frames, which will cause congestion to spread across the entire
network.
To prevent this scenario, when the device is stalled for a period
longer than a pre-configured timeout, flow control mechanisms are
automatically disabled.

This patch adds support for ETHTOOL_PFC_STALL_PREVENTION
as a tunable.
This API provides support for configuring flow control storm prevention
timeout (msec).

Signed-off-by: Inbar Karmy <inbark@mellanox.com>
Cc: Michal Kubecek <mkubecek@suse.cz>
Cc: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago i40e: move AUTO_DISABLED flags into the state field
Jacob Keller [Fri, 16 Mar 2018 08:26:32 +0000 (01:26 -0700)]
i40e: move AUTO_DISABLED flags into the state field

The two Flow Director auto-disable flags are used at run time to mark
when the flow director features need to be disabled. Thus the flags
could change even when the RTNL lock is not held.

They also have some code constructions which really should be
test_and_set or test_and_clear using atomic bit operations.

Create new state fields to mark this, and stop including them in
pf->flags.

This is part of a larger effort to remove the need for cmpxchg64 in
i40e_set_priv_flags().

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago i40e: move I40E_FLAG_UDP_FILTER_SYNC to the state field
Jacob Keller [Fri, 16 Mar 2018 08:26:31 +0000 (01:26 -0700)]
i40e: move I40E_FLAG_UDP_FILTER_SYNC to the state field

This flag is modified during run time, possibly even when the RTNL lock
is not held. Additionally it has a few places which should be using
test_and_set or test_and_clear atomic bit operations.

Create a new state bit __I40E_UDP_SYNC_PENDING and use it instead of the
old I40E_FLAG_UDP_FILTER_SYNC flag.

This is part of a larger effort to remove the need for using cmpxchg64
in i40e_set_priv_flags.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago net/mlx5e: Expose PFC stall prevention counters
Inbar Karmy [Thu, 17 Aug 2017 13:39:47 +0000 (16:39 +0300)]
net/mlx5e: Expose PFC stall prevention counters

Add the needed capability bit and counters to the device spec description.
Expose the following two counters in ethtool:

tx_pause_storm_warning_events: when the device is stalled for a period
longer than a pre-configured watermark, the counter increases, allowing
the debug utility an insight into the current device status.

tx_pause_storm_error_events: when the device is stalled for a period
longer than a pre-configured timeout, pause transmission is disabled,
and the counter increases.

Signed-off-by: Inbar Karmy <inbark@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
6 years ago i40e: move I40E_FLAG_FILTER_SYNC to a state bit
Jacob Keller [Fri, 16 Mar 2018 08:26:30 +0000 (01:26 -0700)]
i40e: move I40E_FLAG_FILTER_SYNC to a state bit

The I40E_FLAG_FILTER_SYNC flag is modified during run time possibly when
the RTNL lock is not held. Thus, it should not be part of pf->flags, and
instead should be using atomic bit operations in the pf->state field.

Create a __I40E_MACVLAN_SYNC_PENDING state bit, and use it instead of
the old I40E_FLAG_FILTER_SYNC flag.

This is part of a larger effort to remove the need for cmpxchg64 in
i40e_set_priv_flags().
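
The resulting pattern for such run-time flags is the usual atomic state-bit idiom, roughly (the consumer helper name is illustrative):

    /* Request a MAC/VLAN filter sync; safe without the RTNL lock. */
    set_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state);

    /* Service task: consume the request atomically, no cmpxchg64() needed. */
    if (test_and_clear_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state))
        i40e_sync_filters_example(pf);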

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Implement filter sync, NDO operations and bump version
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:19 +0000 (07:58 -0700)]
ice: Implement filter sync, NDO operations and bump version

This patch implements multiple pieces of functionality:

1. Added ice_vsi_sync_filters, which is called through the service task
   to push filter updates to the hardware.

2. Add support to enable/disable promiscuous mode on an interface.
   Enabling/disabling promiscuous mode on an interface results in
   addition/removal of a promisc filter rule through ice_vsi_sync_filters.

3. Implement handlers for ndo_set_mac_address, ndo_change_mtu,
   ndo_poll_controller and ndo_set_rx_mode.

This patch also marks the end of the driver addition by bumping up the
driver version.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Support link events, reset and rebuild
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:18 +0000 (07:58 -0700)]
ice: Support link events, reset and rebuild

Link events are posted to a PF's admin receive queue (ARQ). This patch
adds the ability to detect and process link events.

This patch also adds the ability to process resets.

The driver can process the following resets:
    1) EMP Reset (EMPR)
    2) Global Reset (GLOBR)
    3) Core Reset (CORER)
    4) Physical Function Reset (PFR)

EMPR is the largest level of reset that the driver can handle. An EMPR
resets the manageability block and also the data path, including PHY and
link for all the PFs. The affected PFs are notified of this event through
a miscellaneous interrupt.

GLOBR is a subset of EMPR. It does everything EMPR does except that it
doesn't reset the manageability block.

CORER is a subset of GLOBR. It does everything GLOBR does but doesn't
reset PHY and link.

PFR is a subset of CORER and affects only the given physical function.
In other words, PFR can be thought of as a CORER for a single PF. Since
only the issuing PF is affected, a PFR doesn't result in the miscellaneous
interrupt being triggered.

All the resets have the following in common:
1) Tx/Rx is halted and all queues are stopped.
2) All the VSIs and filters programmed for the PF are lost and have to be
   reprogrammed.
3) Control queue interfaces are reset and have to be reprogrammed.

In the rebuild flow, control queues are reinitialized, VSIs are reallocated
and filters are restored.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Update Tx scheduler tree for VSI multi-Tx queue support
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:17 +0000 (07:58 -0700)]
ice: Update Tx scheduler tree for VSI multi-Tx queue support

This patch adds the ability for a VSI to use multiple Tx queues. More
specifically, the patch
    1) Provides the ability to update the Tx scheduler tree in the
       firmware. The driver can configure the Tx scheduler tree by
       adding/removing multiple Tx queues per TC per VSI.

    2) Allows a VSI to reconfigure its Tx queues during runtime.

    3) Synchronizes the Tx scheduler update operations using locks.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Add stats and ethtool support
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:16 +0000 (07:58 -0700)]
ice: Add stats and ethtool support

This patch implements a watchdog task to get packet statistics from
the device.

This patch also adds support for the following ethtool operations:

ethtool devname
ethtool -s devname [msglvl N] [msglevel type on|off]
ethtool -g|--show-ring devname
ethtool -G|--set-ring devname [rx N] [tx N]
ethtool -i|--driver devname
ethtool -d|--register-dump devname [raw on|off] [hex on|off] [file name]
ethtool -k|--show-features|--show-offload devname
ethtool -K|--features|--offload devname feature on|off
ethtool -P|--show-permaddr devname
ethtool -S|--statistics devname
ethtool -a|--show-pause devname
ethtool -A|--pause devname [autoneg on|off] [rx on|off] [tx on|off]
ethtool -r|--negotiate devname

CC: Andrew Lunn <andrew@lunn.ch>
CC: Jakub Kicinski <kubakici@wp.pl>
CC: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Add support for VLANs and offloads
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:15 +0000 (07:58 -0700)]
ice: Add support for VLANs and offloads

This patch adds support for VLANs. When a VLAN is created a switch filter
is added to direct the VLAN traffic to the corresponding VSI. When a VLAN
is deleted, the filter is deleted as well.

This patch also adds support for the following hardware offloads.
    1) VLAN tag insertion/stripping
    2) Receive Side Scaling (RSS)
    3) Tx checksum and TCP segmentation
    4) Rx checksum

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Implement transmit and NAPI support
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:14 +0000 (07:58 -0700)]
ice: Implement transmit and NAPI support

This patch implements ice_start_xmit (the handler for ndo_start_xmit) and
related functions. ice_start_xmit ultimately calls ice_tx_map, where the
Tx descriptor is built and posted to the hardware by bumping the ring tail.

This patch also implements ice_napi_poll, which is invoked when there's an
interrupt on the VSI's queues. The interrupt can be due to either a
completed Tx or an Rx event. In case of a completed Tx/Rx event, resources
are reclaimed. Additionally, in case of an Rx event, the skb is fetched
and passed up to the network stack.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Configure VSIs for Tx/Rx
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:13 +0000 (07:58 -0700)]
ice: Configure VSIs for Tx/Rx

This patch configures the VSIs to be able to send and receive
packets by doing the following:

1) Initialize flexible parser to extract and include certain
   fields in the Rx descriptor.

2) Add Tx queues by programming the Tx queue context (implemented in
   ice_vsi_cfg_txqs). Note that adding the queues also enables (starts)
   the queues.

3) Add Rx queues by programming Rx queue context (implemented in
   ice_vsi_cfg_rxqs). Note that this only adds queues but doesn't start
   them. The rings will be started by calling ice_vsi_start_rx_rings on
   interface up.

4) Configure interrupts for VSI queues.

5) Implement ice_open and ice_stop.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Add support for switch filter programming
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:12 +0000 (07:58 -0700)]
ice: Add support for switch filter programming

A VSI needs traffic directed towards it. This is done by programming
filter rules on the switch (embedded vSwitch) element in the hardware,
which connects the VSI to the ingress/egress port.

This patch introduces data structures and functions necessary to add,
remove or update switch rules on the switch element. This is pretty low
level functionality that is generic enough to add a whole range of filters.

This patch also introduces two top level functions ice_add_mac and
ice_remove_mac which, through a series of intermediate helper functions,
eventually call ice_aq_sw_rules to add/delete simple MAC based filters.
It's worth noting that one invocation of ice_add_mac/ice_remove_mac
is capable of adding/deleting multiple MAC filters.

Also worth noting is the fact that the driver maintains a list of currently
active filters, so every filter addition/removal causes an update to this
list. This is done for a couple of reasons:

1) If two VSIs try to add the same filters, we need to detect it and do
   things a little differently (i.e. use VSI lists, described below) as
   the same filter can't be added more than once.

2) In the event of a hardware reset we can simply walk through this list
   and restore the filters.

VSI Lists:
In a multi-VSI situation, it's possible that multiple VSIs want to add the
same filter rule. For example, two VSIs that want to receive broadcast
traffic would both add a filter for destination MAC ff:ff:ff:ff:ff:ff.
This can become cumbersome to maintain and so this is handled using a
VSI list.

A VSI list is a resource that can be allocated in the hardware using the
ice_aq_alloc_free_res admin queue command. Simply put, a VSI list can
be thought of as a subscription list containing a set of VSIs to which
the packet should be forwarded, should the filter match.

For example, if VSI-0 has already added a broadcast filter, and VSI-1
wants to do the same thing, the filter creation flow will detect this,
allocate a VSI list and update the switch rule so that broadcast traffic
will now be forwarded to the VSI list which contains VSI-0 and VSI-1.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Add support for VSI allocation and deallocation
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:11 +0000 (07:58 -0700)]
ice: Add support for VSI allocation and deallocation

This patch introduces data structures and functions to alloc/free
VSIs. The driver represents a VSI using the ice_vsi structure.

Some noteworthy points about VSI allocation:

1) A VSI is allocated in the firmware using the "add VSI" admin queue
   command (implemented as ice_aq_add_vsi). The firmware returns an
   identifier for the allocated VSI. The VSI context is used to program
   certain aspects (loopback, queue map, etc.) of the VSI's configuration.

2) A VSI is deleted using the "free VSI" admin queue command (implemented
   as ice_aq_free_vsi).

3) The driver represents a VSI using struct ice_vsi. This is allocated
   and initialized as part of the ice_vsi_alloc flow, and deallocated
   as part of the ice_vsi_delete flow.

4) Once the VSI is created, a netdev is allocated and associated with it.
   The VSI's ring and vector related data structures are also allocated
   and initialized.

5) A VSI's queues can either be contiguous or scattered. To do this, the
   driver maintains a bitmap (vsi->avail_txqs) which is kept in sync with
   the firmware's VSI queue allocation map. If the VSI can't get a
   contiguous queue allocation, it will fall back to scatter. This is
   implemented in ice_vsi_get_qs which is called as part of the VSI setup
   flow. In the release flow, the VSI's queues are released and the bitmap
   is updated to reflect this by ice_vsi_put_qs.

CC: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Initialize PF and setup miscellaneous interrupt
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:10 +0000 (07:58 -0700)]
ice: Initialize PF and setup miscellaneous interrupt

This patch continues the initialization flow as follows:

1) Allocate and initialize necessary fields (like vsi, num_alloc_vsi,
   irq_tracker, etc) in the ice_pf instance.

2) Set up the miscellaneous interrupt handler. This is also known as the
   "other interrupt causes" (OIC) handler and is used to handle non
   hotpath interrupts (like control queue events, link events,
   exceptions, etc.).

3) Implement a background task to process admin queue receive (ARQ)
   events received by the driver.

CC: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Get MAC/PHY/link info and scheduler topology
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:09 +0000 (07:58 -0700)]
ice: Get MAC/PHY/link info and scheduler topology

This patch adds code to continue the initialization flow as follows:

1) Get PHY/link information and store it
2) Get default scheduler tree topology and store it
3) Get the MAC address associated with the port and store it

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago ice: Get switch config, scheduler config and device capabilities
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:08 +0000 (07:58 -0700)]
ice: Get switch config, scheduler config and device capabilities

This patch adds to the initialization flow by getting switch
configuration, scheduler configuration and device capabilities.

Switch configuration:
On boot, an L2 switch element is created in the firmware per physical
function. Each physical function is also mapped to a port, to which its
switch element is connected. In other words, this switch can be visualized
as an embedded vSwitch that can connect a physical function's virtual
station interfaces (VSIs) to the egress/ingress port. Egress/ingress
filters will be eventually created and applied on this switch element.
As part of the initialization flow, the driver gets configuration data
from this switch element and stores it.

Scheduler configuration:
The Tx scheduler is a subsystem responsible for setting and enforcing QoS.
As part of the initialization flow, the driver queries and stores the
default scheduler configuration for the given physical function.

Device capabilities:
As part of initialization, the driver has to determine what the device is
capable of (ex. max queues, VSIs, etc). This information is obtained from
the firmware and stored by the driver.

CC: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years ago Merge branch 'mlxsw-Offload-IPv6-multicast-routes'
David S. Miller [Mon, 26 Mar 2018 17:14:45 +0000 (13:14 -0400)]
Merge branch 'mlxsw-Offload-IPv6-multicast-routes'

Ido Schimmel says:

====================
mlxsw: Offload IPv6 multicast routes

Yuval says:

The series is intended to allow offloading IPv6 multicast routes
and is split into two parts:

  - First half of the patches continue extending ip6mr [& refactor ipmr]
    with missing bits necessary for the offloading - fib-notifications,
    mfc refcounting and default rule identification.

  - Second half of the patches extend functionality inside mlxsw,
    beginning with extending lower-parts to support IPv6 mroutes
    to host and later extending the router/mr internal APIs within
    the driver to accommodate support in ipv6 configurations.
    Lastly it adds support for the RTNL_FAMILY_IP6MR notifications,
    allowing the driver to react and offload related routes.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum: Add multicast router trap for PIMv6
Yuval Mintz [Mon, 26 Mar 2018 12:01:45 +0000 (15:01 +0300)]
mlxsw: spectrum: Add multicast router trap for PIMv6

Add a new trap for PIMv6 packets. As PIM already has a designated trap
group [ & rate limiter], simply use the same for PIMv6 as well.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum_router: Process IP6MR fib notification
Yuval Mintz [Mon, 26 Mar 2018 12:01:44 +0000 (15:01 +0300)]
mlxsw: spectrum_router: Process IP6MR fib notification

Following the previous patches, the driver is ready to handle notifications
arriving from ip6mr - start processing them when they arrive, in
the same manner as is currently done for ipmr.

This should enable the driver to start offloading IPv6 multicast routes.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum_mr: Add ipv6 specific operations
Yuval Mintz [Mon, 26 Mar 2018 12:01:43 +0000 (15:01 +0300)]
mlxsw: spectrum_mr: Add ipv6 specific operations

Populate the various operation structures meant for IPv6 with logic
unique to that protocol suite.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum_router: Make IPMR-related APIs family agnostic
Yuval Mintz [Mon, 26 Mar 2018 12:01:42 +0000 (15:01 +0300)]
mlxsw: spectrum_router: Make IPMR-related APIs family agnostic

spectrum_router and spectrum_mr have several APIs that are used to
manipulate configurations originating from ipmr fib notifications.
Following the previous patches, all the protocol specifics that are necessary
for the configuration are hidden within spectrum_mr. This allows us to
clean up the API and make sure that, other than choosing the mr_table based
on the fib notification family, spectrum_router wouldn't care about the
source of the notification when passing it onward to spectrum_mr.

This would later allow us to leverage the same code for fib
notifications originating from ip6mr.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum_mr: Convert into using mr_mfc
Yuval Mintz [Mon, 26 Mar 2018 12:01:41 +0000 (15:01 +0300)]
mlxsw: spectrum_mr: Convert into using mr_mfc

The current multicast routing logic in the driver assumes it's always meant to
deal with IPv4 multicast routes, leaving several placeholders for
later IPv6 support [currently usually WARN()].

This patch changes the driver's internal multicast route struct into
holding a common mr_mfc instead of the IPv4 mfc_cache.
The various placeholders are grouped into two:
  - Functions that require only the common bits; these remain and the
    IPv4-only restriction is lifted.
  - Functions that require IPv4 specifics; to handle these functions
    we add sets of operations that encapsulate the protocol differences.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum_router: Support IPv6 multicast to host CPU
Yuval Mintz [Mon, 26 Mar 2018 12:01:40 +0000 (15:01 +0300)]
mlxsw: spectrum_router: Support IPv6 multicast to host CPU

A step toward offloading IPv6 routing, this adds an additional
multicast routing table meant for IPv6 [with its underlying TCAM
region] and populates the default rule for IPv6 multicast packets.

Following this, ingress IPv6 multicast packets would be trapped and
delivered to the host CPU.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: spectrum_mr: Pass protocol as part of catchall route params
Yuval Mintz [Mon, 26 Mar 2018 12:01:39 +0000 (15:01 +0300)]
mlxsw: spectrum_mr: Pass protocol as part of catchall route params

Since commit c011ec1bbfd6 ("mlxsw: spectrum: Add the multicast routing
offloading logic") spectrum_mr did not populate the protocol portion of
the catchall_route_params; mr-tcam logic worked correctly for ipv4
since the enum value for MLXSW_SP_L3_PROTO_IPV4 is '0'.

Explicitly fill the protocol as we'll soon need to differentiate between
ipv4 and ipv6.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: reg: Add register settings for IPv6 multicast routing
Yuval Mintz [Mon, 26 Mar 2018 12:01:38 +0000 (15:01 +0300)]
mlxsw: reg: Add register settings for IPv6 multicast routing

Add new fields for the rmft register necessary for setting the IPv6
multicast FIB table. Add a matching wrapper function for filling
the register in the IPv6 scenario.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago mlxsw: reg: Configure RIF to forward IPv6 multicast packets
Yuval Mintz [Mon, 26 Mar 2018 12:01:37 +0000 (15:01 +0300)]
mlxsw: reg: Configure RIF to forward IPv6 multicast packets

Similarly to what was done in commit 4af5964e5888 ("mlxsw: reg:
Configure RIF to forward IPv4 multicast packets by default") we now set
two additional bits to allow IPv6 multicast forwarding.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago ip6mr: Add refcounting to mfc
Yuval Mintz [Mon, 26 Mar 2018 12:01:36 +0000 (15:01 +0300)]
ip6mr: Add refcounting to mfc

Since ipmr and ip6mr are using the same mr_mfc struct at their core, we
can now refactor the ipmr_cache_{hold,put} logic and apply refcounting
to both ipmr and ip6mr.
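
With the shared struct, the hold/put helpers reduce to plain refcount operations, along these lines (field placement and the destructor hook shown here are illustrative):

#include <linux/refcount.h>

static inline void mr_cache_hold_example(struct mr_mfc *c)
{
    refcount_inc(&c->refcount);     /* placement of the count is illustrative */
}

static inline void mr_cache_put_example(struct mr_mfc *c,
                                        void (*destroy)(struct mr_mfc *))
{
    if (refcount_dec_and_test(&c->refcount))
        destroy(c);                 /* protocol-specific destructor */
}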

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago ip6mr: Add API for default_rule fib
Yuval Mintz [Mon, 26 Mar 2018 12:01:35 +0000 (15:01 +0300)]
ip6mr: Add API for default_rule fib

Add the ability to discern whether a given FIB rule notification relates
to the default rule inserted when registering ip6mr or a different one.

Would later be used by drivers wishing to offload ipv6 multicast routes
but unable to offload rules other than the default one.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years ago ip6mr: Support fib notifications
Yuval Mintz [Mon, 26 Mar 2018 12:01:34 +0000 (15:01 +0300)]
ip6mr: Support fib notifications

In similar fashion to ipmr, support fib notifications for ip6mr mfc and
vif related events. This would later allow drivers to react to said
notifications and offload the IPv6 mroutes.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoipmr: Make ipmr_dump() common
Yuval Mintz [Mon, 26 Mar 2018 12:01:33 +0000 (15:01 +0300)]
ipmr: Make ipmr_dump() common

Since all the primitive elements used for the notifications done by ipmr
are now common [mr_table, mr_mfc, vif_device], we can refactor the logic
for dumping them into a common file.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoipmr: Make MFC fib notifiers common
Yuval Mintz [Mon, 26 Mar 2018 12:01:32 +0000 (15:01 +0300)]
ipmr: Make MFC fib notifiers common

Like the vif notifications, move the notifier struct for MFC as well as its
helpers into a common file; currently they're only used by ipmr.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoipmr: Make vif fib notifiers common
Yuval Mintz [Mon, 26 Mar 2018 12:01:31 +0000 (15:01 +0300)]
ipmr: Make vif fib notifiers common

The fib notifiers are tightly coupled with the vif_device, which is
already common. Move the notifier struct definition and helpers to the
common file; currently they're only used by ipmr.

Signed-off-by: Yuval Mintz <yuvalm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'pernet-convert-part7.1'
David S. Miller [Mon, 26 Mar 2018 17:03:27 +0000 (13:03 -0400)]
Merge branch 'pernet-convert-part7.1'

Kirill Tkhai says:

====================
Converting pernet_operations (part #7.1)

this is a resend of the 4 patches from part #7.

Anna kindly reviewed them and suggested taking the patches
through the net tree, since pernet_operations::async exists only
in net-next.git.

Anna's acks are on every header; the rest of the patches
have no changes.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: Convert nfs4blocklayout_net_ops
Kirill Tkhai [Mon, 26 Mar 2018 09:29:13 +0000 (12:29 +0300)]
net: Convert nfs4blocklayout_net_ops

These pernet_operations create and destroy per-net pipe
and dentry, and they seem safe to be marked as async.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: Convert nfs4_dns_resolver_ops
Kirill Tkhai [Mon, 26 Mar 2018 09:29:04 +0000 (12:29 +0300)]
net: Convert nfs4_dns_resolver_ops

These pernet_operations look similar to rpcsec_gss_net_ops; they just create
and destroy another cache. They also create and destroy a directory. So,
they also look safe to be async.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: Convert sunrpc_net_ops
Kirill Tkhai [Mon, 26 Mar 2018 09:28:55 +0000 (12:28 +0300)]
net: Convert sunrpc_net_ops

These pernet_operations look similar to rpcsec_gss_net_ops; they just
create and destroy other caches. So, they can also be async.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: Convert rpcsec_gss_net_ops
Kirill Tkhai [Mon, 26 Mar 2018 09:28:47 +0000 (12:28 +0300)]
net: Convert rpcsec_gss_net_ops

These pernet_operations initialize and destroy the per-net items
referred to by sunrpc_net_id. The only global list used is cache_list,
and accesses to it are already serialized.

sunrpc_destroy_cache_detail() checks for list_empty() without holding
cache_list_lock, but when it's called from unregister_pernet_subsys()
there can't be callers in parallel, so we won't miss list_empty()
in this case.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'nfp-flower-add-ip-fragmentation-offloading-support'
David S. Miller [Mon, 26 Mar 2018 17:01:10 +0000 (13:01 -0400)]
Merge branch 'nfp-flower-add-ip-fragmentation-offloading-support'

Pieter Jansen van Vuuren says:

====================
nfp: flower: add ip fragmentation offloading support

This set allows offloading IP fragmentation classification. It implements
IP fragmentation match offloading for both IPv4 and IPv6 and offloads
frag, nofrag, first and nofirstfrag classification.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonfp: flower: implement ip fragmentation match offload
Pieter Jansen van Vuuren [Mon, 26 Mar 2018 08:16:38 +0000 (10:16 +0200)]
nfp: flower: implement ip fragmentation match offload

Implement IP fragmentation match offloading for both IPv4 and IPv6. This
allows offloading frag, nofrag, first and nofirstfrag classification.

Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonfp: flower: refactor shared ip header in match offload
Pieter Jansen van Vuuren [Mon, 26 Mar 2018 08:16:37 +0000 (10:16 +0200)]
nfp: flower: refactor shared ip header in match offload

Refactor the shared IP header code for IPv4 and IPv6 in match offload.

Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoice: Start hardware initialization
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:07 +0000 (07:58 -0700)]
ice: Start hardware initialization

This patch implements multiple pieces of the initialization flow
as follows (a rough sketch of the flow appears after the list):

1) A reset is issued to ensure a clean device state, followed
   by initialization of the admin queue interface.

2) Once the admin queue interface is up, clear the PF config
   and transition the device to non-PXE mode.

3) Get the NVM configuration stored in the device's non-volatile
   memory (NVM) using ice_init_nvm.
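
A rough sketch of that ordering (apart from ice_init_nvm(), which is named
above, the function names are illustrative and return types are simplified):

	static int example_hw_init(struct ice_hw *hw)
	{
		int err;

		err = example_reset(hw);		/* 1) clean device state */
		if (err)
			return err;

		err = example_init_adminq(hw);		/*    bring up the admin queue */
		if (err)
			return err;

		err = example_clear_pf_cfg(hw);		/* 2) clear the PF config */
		if (err)
			goto err_adminq;

		example_clear_pxe_mode(hw);		/*    leave PXE mode */

		err = ice_init_nvm(hw);			/* 3) read the NVM configuration */
		if (err)
			goto err_adminq;

		return 0;

	err_adminq:
		example_shutdown_adminq(hw);
		return err;
	}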

CC: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoMerge branch 'net-driver-barriers'
David S. Miller [Mon, 26 Mar 2018 16:47:57 +0000 (12:47 -0400)]
Merge branch 'net-driver-barriers'

Sinan Kaya says:

====================
netdev: Eliminate duplicate barriers on weakly-ordered archs

The code includes wmb() followed by writel() in multiple places. writel()
already has a barrier on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Since the code already has an explicit barrier call, change writel() to
writel_relaxed(); a sketch of the before/after pattern follows this
cover letter.

I did a regex search for wmb() followed by writel() in each drivers
directory.
I scrubbed the ones I care about in this series.

I considered "ease of change", "popular usage" and "performance critical
path" as the determining criteria for my filtering.

We have used the relaxed API heavily on ARM for a long time, but it did not
exist on other architectures. For this reason, weakly-ordered architectures
have been paying a double-barrier penalty in order to use the common
drivers.

Now that relaxed API is present on all architectures, we can go and scrub
all drivers to see what needs to change and what can remain.

We start with mostly used ones and hope to increase the coverage over time.
It will take a while to cover all drivers.

Feel free to apply patches individually.

Changes since v6:
- bring back amazon ena and add mmiowb, remove
  ena_com_write_sq_doorbell_rel().
- remove extra mmiowb in bnx2x
- correct spelling mistake in "bnx2x: Replace doorbell barrier() with wmb()"
====================
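
A sketch of the before/after pattern described in this series (the ring and
doorbell names are illustrative, not taken from any particular driver):

	/* before: writel() adds a second barrier on arm64 and friends */
	wmb();					/* make the descriptor visible */
	writel(val, ring->doorbell);		/* carries its own barrier */

	/* after: keep the explicit wmb(), demote the MMIO write */
	wmb();					/* still orders the descriptor write */
	writel_relaxed(val, ring->doorbell);	/* no redundant second barrier */
	mmiowb();				/* keep MMIO ordered vs. spinlock release */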

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: ena: Eliminate duplicate barriers on weakly-ordered archs
Sinan Kaya [Sun, 25 Mar 2018 14:39:21 +0000 (10:39 -0400)]
net: ena: Eliminate duplicate barriers on weakly-ordered archs

The code includes barrier() followed by writel(). writel() already has a
barrier on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Create a new wrapper function with the relaxed write operator. Use the new
wrapper when a write follows a barrier().

Since the code already has an explicit barrier call, change writel() to
writel_relaxed() and add mmiowb() for ordering protection.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agobnxt_en: Eliminate duplicate barriers on weakly-ordered archs
Sinan Kaya [Sun, 25 Mar 2018 14:39:20 +0000 (10:39 -0400)]
bnxt_en: Eliminate duplicate barriers on weakly-ordered archs

The code includes wmb() followed by writel(). writel() already has a barrier
on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Create a new wrapper function with the relaxed write operator. Use the new
wrapper when a write follows a wmb().

Since the code already has an explicit barrier call, change writel() to
writel_relaxed().

Also add mmiowb() so that the write doesn't move outside of its intended
scope.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: qlge: Eliminate duplicate barriers on weakly-ordered archs
Sinan Kaya [Sun, 25 Mar 2018 14:39:19 +0000 (10:39 -0400)]
net: qlge: Eliminate duplicate barriers on weakly-ordered archs

The code includes wmb() followed by writel(). writel() already has a barrier
on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Create a new wrapper function with the relaxed write operator. Use the new
wrapper when a write follows a wmb().

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agobnx2x: Eliminate duplicate barriers on weakly-ordered archs
Sinan Kaya [Sun, 25 Mar 2018 14:39:18 +0000 (10:39 -0400)]
bnx2x: Eliminate duplicate barriers on weakly-ordered archs

The code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Since the code already has an explicit barrier call, change writel() to
writel_relaxed().

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agobnx2x: Replace doorbell barrier() with wmb()
Sinan Kaya [Sun, 25 Mar 2018 14:39:17 +0000 (10:39 -0400)]
bnx2x: Replace doorbell barrier() with wmb()

barrier() doesn't guarantee that memory writes are observed by the hardware
on all architectures. barrier() only tells the compiler not to reorder this
code with respect to other reads/writes.

If a memory write needs to be observed by the HW, wmb() is the right choice.
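
A minimal sketch of the difference (variable names illustrative):

	/* before: compiler-only barrier; doesn't guarantee the device sees
	 * the memory update before the doorbell write */
	barrier();
	writel(prod, db_addr);

	/* after: wmb() orders the memory write against the doorbell */
	wmb();
	writel(prod, db_addr);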

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoqlcnic: Eliminate duplicate barriers on weakly-ordered archs
Sinan Kaya [Sun, 25 Mar 2018 14:39:16 +0000 (10:39 -0400)]
qlcnic: Eliminate duplicate barriers on weakly-ordered archs

The code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Since the code already has an explicit barrier call, change writel() to
writel_relaxed().

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Manish Chopra <manish.chopra@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: qla3xxx: Eliminate duplicate barriers on weakly-ordered archs
Sinan Kaya [Sun, 25 Mar 2018 14:39:15 +0000 (10:39 -0400)]
net: qla3xxx: Eliminate duplicate barriers on weakly-ordered archs

The code includes wmb() followed by writel(). writel() already has a
barrier on some architectures like arm64.

This ends up with the CPU observing two barriers back to back before
executing the register write.

Since the code already has an explicit barrier call, change the code to

wmb()
writel_relaxed()
mmiowb()

for multi-arch support.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoice: Add support for control queues
Anirudh Venkataramanan [Tue, 20 Mar 2018 14:58:06 +0000 (07:58 -0700)]
ice: Add support for control queues

A control queue is a hardware interface which is used by the driver
to interact with other subsystems (like firmware, PHY, etc.). It is
implemented as a producer-consumer ring. More specifically, an
"admin queue" is a type of control queue used to interact with the
firmware.

This patch introduces data structures and functions to initialize
and tear down control/admin queues. Once the admin queue is initialized,
the driver uses it to get the firmware version.
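
A minimal sketch of the producer/consumer bookkeeping such a ring typically
needs (names and layout are illustrative, not the driver's actual structs):

	struct example_ctl_ring {
		void *desc;		/* DMA-able array of descriptors */
		dma_addr_t desc_dma;	/* bus address of the array */
		u16 count;		/* number of descriptors in the ring */
		u16 next_to_use;	/* producer index: driver posts here */
		u16 next_to_clean;	/* consumer index: completions drained here */
	};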

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoMerge branch 'sh_eth-unify-the-SoC-feature-checks'
David S. Miller [Mon, 26 Mar 2018 16:34:20 +0000 (12:34 -0400)]
Merge branch 'sh_eth-unify-the-SoC-feature-checks'

Sergei Shtylyov says:

====================
sh_eth: unify the SoC feature checks

Here's a set of 5 patches against DaveM's 'net-next.git' repo.

The Ether driver sometimes uses the bit fields in 'struct sh_eth_cpu_data'
to check which Ether registers exist in a certain SoC, and sometimes it uses
sh_eth_is_{gether|rz_fast_ether}(), which basically compares 2 pointers (1 of
them being constant) -- the latter is definitely not the strongest feature of
the RISC CPUs (be it SH or ARM), so I decided to get rid of this type of
feature check in favour of the bit fields (I've also made use of a 32-bit
value and a method pointer where appropriate)...

[1/5] sh_eth: add sh_eth_cpu_data::soft_reset() method
[2/5] sh_eth: add sh_eth_cpu_data::edtrr_trns value
[3/5] sh_eth: add sh_eth_cpu_data::xdfar_rw flag
[4/5] sh_eth: add sh_eth_cpu_data::no_tx_cntrs flag
[5/5] sh_eth: add sh_eth_cpu_data::cexcr flag
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agosh_eth: add sh_eth_cpu_data::cexcr flag
Sergei Shtylyov [Sat, 24 Mar 2018 20:12:54 +0000 (23:12 +0300)]
sh_eth: add sh_eth_cpu_data::cexcr flag

GEther controllers have CERCR/CEECR instead of the CNDCR found on the others.
Currently we are calling sh_eth_is_gether() in order to check for this;
however, it would be simpler to check the new 'cexcr' bitfield in
'struct sh_eth_cpu_data'; then we'd be able to remove sh_eth_is_gether()
as there would be no callers left...
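
A minimal sketch of the feature-bit check replacing the pointer comparison
(struct layout abridged, surrounding stats code illustrative):

	struct sh_eth_cpu_data {
		/* ... */
		unsigned cexcr:1;	/* SoC has CERCR/CEECR instead of CNDCR */
	};

	/* was: if (sh_eth_is_gether(mdp)) */
	if (mdp->cd->cexcr) {
		/* accumulate from CERCR/CEECR */
	} else {
		/* accumulate from CNDCR */
	}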

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agosh_eth: add sh_eth_cpu_data::no_tx_cntrs flag
Sergei Shtylyov [Sat, 24 Mar 2018 20:11:19 +0000 (23:11 +0300)]
sh_eth: add sh_eth_cpu_data::no_tx_cntrs flag

The RZ/A1H (R7S72100) Ether controller doesn't seem to have the TX counter
registers like TROCR/CDCR/LCCR (or at least they are still undocumented,
like some TSU registers), so we bail out of sh_eth_get_stats() early in
this case. Currently we are calling sh_eth_is_rz_fast_ether() in order
to check for this, but it would be simpler to check the new 'no_tx_cntrs'
bitfield in 'struct sh_eth_cpu_data'; then we'd be able to remove
sh_eth_is_rz_fast_ether() as there would be no callers left...

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>