openwrt/staging/blogic.git
Yisen.Zhuang (Zhuangyuzeng) [Sat, 23 Apr 2016 09:05:07 +0000 (17:05 +0800)]
net: hns: add attribute port-idx-in-ae in enet node.

This patch parses port-idx-in-ae in the enet node. In NIC mode of DSAF, all 6
PHYs of the service DSAF are taken as Ethernet ports to the CPU. The
port-idx-in-ae can be 0 to 5. Here is the diagram:
            +-----+---------------+
            |            CPU      |
            +-+-+-+---+-+-+-+-+-+-+
              |    |   | | | | | |
           debug debug   service
           port  port     port
           (0)   (0)     (0-5)

In Switch mode of DSAF, all 6 PHYs of the service DSAF are taken as physical
ports connected to a LAN switch, while the CPU side assumes it has one
single NIC connected to this switch. In this case, the port-idx-in-ae will
be 0 only.
            +-----+-----+------+------+
            |                CPU      |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+
              |    |     service| port(0)
            debug debug  +------------+
            port  port   |   switch   |
            (0)   (0)    +-+-+-+-+-+-++
                          | | | | | |
                         external port

When port-idx-in-ae does not exist, the old attribute port-id will be used
(kept only for compatibility; using port-id in new code is not recommended).
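
As an illustration only (not the actual hns driver code; the helper name and
the default value are assumptions), a parse-with-fallback of this kind
typically looks like:

#include <linux/of.h>

/* Illustrative sketch: prefer the new "port-idx-in-ae" property and fall
 * back to the legacy "port-id" property when it is absent.
 */
static u32 example_parse_port_idx(struct device_node *np)
{
	u32 port_idx;

	if (of_property_read_u32(np, "port-idx-in-ae", &port_idx) &&
	    of_property_read_u32(np, "port-id", &port_idx))
		port_idx = 0;	/* neither property present (assumed default) */

	return port_idx;
}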

Signed-off-by: Daode Huang <huangdaode@hisilicon.com>
Signed-off-by: Yisen Zhuang <yisen.zhuang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daode Huang [Sat, 23 Apr 2016 09:05:06 +0000 (17:05 +0800)]
net: hns: set debug port irq index to 0

As the debug ports are moved from the service dsaf to the debug dsaf,
the interrupt offsets should start from 0, so this patch
re-defines the offset index of the debug ports.

Signed-off-by: Daode Huang <huangdaode@hisilicon.com>
Signed-off-by: Yisen Zhuang <yisen.zhuang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yisen.Zhuang (Zhuangyuzeng) [Sat, 23 Apr 2016 09:05:05 +0000 (17:05 +0800)]
net: hns: add a new dsaf mode for debug port

This patch adds a new dsaf mode named "single-port" mode for the debug port.
This mode contains only one debug port. This patch also changes the
method of distinguishing the port type.

Signed-off-by: Daode Huang <huangdaode@hisilicon.com>
Signed-off-by: Yisen Zhuang <yisen.zhuang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 25 Apr 2016 20:54:15 +0000 (16:54 -0400)]
Merge branch 'pskb_extract'

Sowmini Varadhan says:

====================
pskb_extract() helper function.

This patchset follows up on the discussion in
 https://www.mail-archive.com/netdev@vger.kernel.org/msg105090.html

For RDS-TCP, we have to deal with the full gamut of
nonlinear sk_buffs, including all the frag_list variants.
Also, the parent skb has to remain unchanged, while the clone
is queued for Rx on the PF_RDS socket.

Patch 1 of this patchset adds a pskb_extract() function that
does all this without the redundant memcpy's in pskb_expand_head()
and __pskb_pull_tail().

v2: Marcelo Leitner review comments
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Sat, 23 Apr 2016 01:36:36 +0000 (18:36 -0700)]
RDS: TCP: Call pskb_extract() helper function

rds-stress experiments with request size 256 bytes, 8K acks,
using 16 threads show a 40% improvement when pskb_extract()
replaces the {skb_clone(..); pskb_pull(..); pskb_trim(..);}
pattern in the Rx path, so we leverage the perf gain with
this commit.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Sat, 23 Apr 2016 01:36:35 +0000 (18:36 -0700)]
skbuff: Add pskb_extract() helper function

A pattern of skb usage seen in modules such as RDS-TCP is to
extract `to_copy' bytes from the received TCP segment, starting
at some offset `off' into a new skb `clone'. This is done in
the ->data_ready callback, where the clone skb is queued up for rx on
the PF_RDS socket, while the parent TCP segment is returned unchanged
back to the TCP engine.

The existing code uses the sequence
clone = skb_clone(..);
pskb_pull(clone, off, ..);
pskb_trim(clone, to_copy, ..);
with the intention of discarding the first `off' bytes. However,
skb_clone() + pskb_pull() implies pskb_expand_head(), which ends
up doing a redundant memcpy of bytes that will then get discarded
in __pskb_pull_tail().

To avoid this inefficiency, this commit adds pskb_extract() that
creates the clone, and memcpy's only the relevant header/frag/frag_list
to the start of `clone'. pskb_trim() is then invoked to trim clone
down to the requested to_copy bytes.
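
A minimal usage sketch of the new helper in such an rx path (hedged: the
argument order follows the description above - source skb, offset, bytes to
copy, allocation flags):

struct sk_buff *clone;

clone = pskb_extract(skb, off, to_copy, GFP_ATOMIC);
if (!clone)
	return -ENOMEM;
/* clone starts at `off' and is already trimmed to `to_copy' bytes;
 * the parent skb is left untouched for the TCP engine.
 */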

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Kazior [Fri, 22 Apr 2016 12:20:13 +0000 (14:20 +0200)]
fq: add fair queuing framework

This works on the same implementation principle as
codel*.h, i.e. there's a generic header with
structures and macros and an implementation header
carrying function definitions to be included in a
given driver or module.

The fairness logic comes from
net/sched/sch_fq_codel.c but is generalized so it
is more flexible and easier to re-use.

Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 25 Apr 2016 20:44:28 +0000 (16:44 -0400)]
Merge branch 'reusable-codel'

Michal Kazior says:

====================
codel: make it reusable beyond qdiscs

There's an ongoing effort in fixing wireless
bufferbloat. As part of that fq_codel is being
ported into mac80211. To prevent code duplication
codel.h needs to be slightly modified before it
can be used in mac80211 (or other drivers FWIW).

For more background please see:

  https://www.spinics.net/lists/linux-wireless/msg149976.html
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Kazior [Fri, 22 Apr 2016 12:15:59 +0000 (14:15 +0200)]
codel: split into multiple files

It was impossible to include codel.h for the
purpose of having access to codel_params or
codel_vars structure definitions and using them
for embedding in other more complex structures.

This split allows codel.h itself to be treated
like any other header file, while codel_qdisc.h and
codel_impl.h contain function definitions with
logic that was previously in codel.h.

This copies over copyrights and doesn't involve
code changes other than adding a few additional
include directives to net/sched/sch*codel.c.
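
A rough sketch of what this split enables for a future user such as mac80211
(header and structure names as introduced by this series; the embedding
struct is hypothetical):

#include <net/codel.h>		/* struct codel_params, codel_vars, codel_stats */

struct example_txq {		/* hypothetical driver-side structure */
	struct codel_params cparams;
	struct codel_vars cvars;
	struct codel_stats cstats;
};

#include <net/codel_impl.h>	/* static function bodies (codel_dequeue() etc.),
				 * included once by the .c file that needs them
				 */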

Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Kazior [Fri, 22 Apr 2016 12:15:58 +0000 (14:15 +0200)]
codel: generalize the implementation

This strips out qdisc specific bits from the code
and makes it slightly more reusable. Codel will be
used by wireless/mac80211 in the future.

Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Phil Sutter [Fri, 22 Apr 2016 12:02:42 +0000 (14:02 +0200)]
macsec: Convert to using IFF_NO_QUEUE

Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Benc [Fri, 22 Apr 2016 10:40:02 +0000 (12:40 +0200)]
route: move lwtunnel state to a single place

Commit 751a587ac9f9 ("route: fix breakage after moving lwtunnel state")
moved lwtstate to the end of dst_entry for 32bit archs. This makes it share
the cacheline with __refcnt, which had an unknown effect on performance. For
this reason, the pointer was kept in place for 64bit archs.

However, later performance measurements showed this is of no concern. It
turns out that every performance-sensitive path that accesses lwtstate
also accesses struct rtable or struct rt6_info, which share the same cache
line.

Thus, to get rid of a few #ifdefs, move the field to the end of the struct
also for 64bit.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 25 Apr 2016 19:59:17 +0000 (15:59 -0400)]
Merge branch 'qed-next'

Yuval Mintz says:

====================
qed*: driver updates

[This was previously termed 'eeprom access et al.', but that seemed a bit
inappropriate given we've dropped the eeprom patch for now.
Still waiting for some input on that one, BTW]

This patch series contains some ethtool-related enhancements.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Sudarsana Reddy Kalluru [Fri, 22 Apr 2016 05:41:04 +0000 (08:41 +0300)]
qed: add support for link pause configuration.

The APIs for making this sort of configuration [e.g., via ethtool] are
already present in qede, but the current configuration flow in qed doesn't
respect it.

Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Fri, 22 Apr 2016 05:41:03 +0000 (08:41 +0300)]
qed*: Conditions for changing link

There's some inconsistency in the current logic determining whether the
link settings of a given interface can be changed; i.e., in all modes
other than the so-called `default' mode the interfaces are forbidden from
changing the configuration - but even this rule is not applied to all
user APIs that may change the configuration.

Instead, let the core-module [qed] decide whether an interface can change
the configuration by supporting a new API function. We also revise the
current rule, allowing all interfaces to change their configurations while
laying the infrastructure for future modes where an interface would be
blocked from making such a configuration.

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Fri, 22 Apr 2016 05:41:02 +0000 (08:41 +0300)]
qede: Add support for ethtool private flags

Adds a getter for the interface's private flags.
The only parameter currently supported is whether the interface is a
coupled function [required for supporting 100g].
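
A hedged sketch of what such a getter looks like through the standard
ethtool_ops hook; the flag index and the coupled-function test are
placeholders, not the actual qede code:

enum { QEDE_EXAMPLE_PRIV_FLAG_COUPLED = 0 };	/* placeholder flag bit */

static u32 qede_example_get_priv_flags(struct net_device *dev)
{
	struct qede_dev *edev = netdev_priv(dev);

	/* report bit 0 when the interface is part of a coupled (100g)
	 * function; the helper below is hypothetical
	 */
	return qede_example_is_coupled(edev) ?
	       BIT(QEDE_EXAMPLE_PRIV_FLAG_COUPLED) : 0;
}

/* wired up via:  .get_priv_flags = qede_example_get_priv_flags  */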

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Fri, 22 Apr 2016 05:41:01 +0000 (08:41 +0300)]
qed*: Align statistics names

There's a difference in statistics' names starting at qed and
propagating to qede, where egress counters indicate ranges while ingress
counters indicate the high end.
Align all statistics to follow the same convention - the name indicates a range.

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 22 Apr 2016 05:27:32 +0000 (22:27 -0700)]
net: better drop monitoring in ip{6}_recv_error()

We should call consume_skb(skb) when skb is properly consumed,
or kfree_skb(skb) when skb must be dropped in error case.
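
A small illustrative sketch of that rule (the error variable is a
placeholder, not the exact code):

if (err)
	kfree_skb(skb);		/* real drop: visible to the drop monitor */
else
	consume_skb(skb);	/* normal completion: not counted as a drop */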

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 22 Apr 2016 05:13:01 +0000 (22:13 -0700)]
tcp: SYN packets are now simply consumed

We now have proper per-listener but also per network namespace counters
for SYN packets that might be dropped.

We replace the kfree_skb() by consume_skb() to be drop monitor [1]
friendly, and remove an obsolete comment.
FastOpen SYN packets can carry payload in them just fine.

[1] perf record -a -g -e skb:kfree_skb sleep 1; perf report

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 25 Apr 2016 19:12:06 +0000 (15:12 -0400)]
Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
10GbE Intel Wired LAN Driver Updates 2016-04-25

This series contains updates to ixgbe and ixgbevf.

Emil provides several patches, starting with the consolidation of the
logic behind configuring spoof checking.  Fixed an issue which was
causing link issues for backplane devices because x550em_a/x devices
did not have a default value for mac->ops.setup_link.  Refactored the
ethtool stats to bring the logic closer to how ixgbe handles stats and
sets up per-queue stats for ixgbevf.

Mark adds a new register to wait for previous register writes to complete
before issuing a register read, which is needed when slower links are
in use.  Fixed the flow control setup for x550em_a, the incorrect
fc_setup function was being used.

Don added a workaround for empty SFP+ cage crosstalk, since on some
systems the crosstalk could lead to link flap on empty SFP+ cages.

Jake converts ixgbe and ixgbevf to use the BIT() macro.

Alex Duyck adds support for partial GSO segmentation in the case of
tunnels for ixgbe and ixgbevf.  Then preps for HyperV by moving the API
negotiation into mac_ops.

Arnd Bergmann provides a fix for the ARM compile warnings in linux-next
by converting the use of a udelay() to msleep().
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 25 Apr 2016 19:09:12 +0000 (15:09 -0400)]
Merge branch 'nla_align-set-2'

Nicolas Dichtel says:

====================
netlink: align attributes when needed (patchset #2)

This is the continuation (series #2) of the work done to align netlink
attributes when these attributes contain some 64-bit fields.

In patch #3, I didn't modify the function ila_encap_nlsize(). I was waiting
for feedback on this patch: http://patchwork.ozlabs.org/patch/613766/
If it's approved, there will be an update to switch nla_total_size() to
nla_total_size_64bit() after the merge of net in net-next.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:22 +0000 (10:25 +0200)]
wireless: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:21 +0000 (10:25 +0200)]
netfilter/ipvs: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:20 +0000 (10:25 +0200)]
ieee802154: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:19 +0000 (10:25 +0200)]
l2tp: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:18 +0000 (10:25 +0200)]
bridge: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:17 +0000 (10:25 +0200)]
ovs: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:16 +0000 (10:25 +0200)]
ipv6: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:15 +0000 (10:25 +0200)]
sched: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Mon, 25 Apr 2016 08:25:14 +0000 (10:25 +0200)]
rtnl: use nla_put_u64_64bit()

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Mon, 25 Apr 2016 14:42:12 +0000 (10:42 -0400)]
soreuseport: Resolve merge conflict for v4/v6 ordering fix

d894ba18d4e4 ("soreuseport: fix ordering for mixed v4/v6 sockets")
was merged as a bug fix to the net tree.  Two conflicting changes
were committed to net-next before the above fix was merged back to
net-next:
ca065d0cf80f ("udp: no longer use SLAB_DESTROY_BY_RCU")
3b24d854cb35 ("tcp/dccp: do not touch listener sk_refcnt under synflood")

These changes switched the datastructure used for TCP and UDP sockets
from hlist_nulls to hlist.  This patch applies the necessary parts
of the net tree fix to net-next which were not automatic as part of the
merge.

Fixes: 1602f49b58ab ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net")
Signed-off-by: Craig Gallek <kraig@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 25 Apr 2016 13:34:09 +0000 (06:34 -0700)]
sock: relax WARN_ON() in sock_owned_by_user()

Valdis reported tons of stack dumps caused by WARN_ON() in
sock_owned_by_user()

This test needs to be relaxed if/when lockdep disables itself.

Note that other lockdep_sock_is_held() callers are all from
rcu_dereference_protected() sections which already are disabled
if/when lockdep has been disabled.

Fixes: fafc4e1ea1a4 ("sock: tigthen lockdep checks for sock_owned_by_user")
Reported-by: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Sat, 16 Apr 2016 20:35:08 +0000 (22:35 +0200)]
ixgbe: use msleep for long delays

The newly added x550em_a support causes a link failure on ARM because of
an overly long time passed into udelay():

ERROR: "__bad_udelay" [drivers/net/ethernet/intel/ixgbe/ixgbe.ko] undefined!

There are multiple variants of the ixgbe_acquire_swfw_sync_*() function,
and the other ones all use msleep(), so we can safely assume that all
callers are allowed to sleep, which makes msleep() a better replacement
than mdelay().

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 49425dfc7451 ("ixgbe: Add support for x550em_a 10G MAC type")
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Fri, 15 Apr 2016 00:37:15 +0000 (20:37 -0400)]
ixgbevf: Move API negotiation function into mac_ops

This patch moves API negotiation into mac_ops.  The general idea here is
that, with HyperV on the way, anything that will have different versions
between HyperV and a standard VF needs to be abstracted enough that we can
have a separate function for the two, so changes in one don't break
something in the other.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Thu, 14 Apr 2016 21:19:31 +0000 (17:19 -0400)]
ixgbe/ixgbevf: Add support for GSO partial

This patch adds support for partial GSO segmentation in the case of
tunnels.  Specifically, with this change the driver can perform segmentation
as long as the frame either has IPv6 inner headers, or we are allowed to
mangle the IP IDs on the inner header.  This is needed because we will not
be modifying any fields from the start of the outer transport
header to the start of the inner transport header, as we are treating them
like they are just a block of IP options.
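
A hedged sketch of how a driver typically advertises this (the exact feature
set ixgbe/ixgbevf use may differ; 'netdev' is the driver's net_device):

netdev->gso_partial_features = NETIF_F_GSO_GRE_CSUM |
			       NETIF_F_GSO_UDP_TUNNEL_CSUM;
netdev->features |= NETIF_F_GSO_PARTIAL | netdev->gso_partial_features;
netdev->hw_features |= NETIF_F_GSO_PARTIAL;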

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 13 Apr 2016 23:08:24 +0000 (16:08 -0700)]
ixgbevf: make use of BIT() macro to avoid shift of signed values

Also clean up a case where we're bit shifting a value into place, and use
an unsigned constant. Make use of the unsigned postfix in places where
the BIT() macro is not appropriate.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 13 Apr 2016 23:08:23 +0000 (16:08 -0700)]
ixgbe: resolve shift of negative value warning

Make use of GENMASK instead of open coding the equivalent operation
incorrectly.
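
Illustrative only (not the actual ixgbe register masks): ~0 is a signed -1,
so shifting it trips the new shift-of-negative-value warning, while GENMASK()
builds the same contiguous mask without the signed shift:

#define EXAMPLE_MASK_OLD	(~0 << 10)		/* shift of negative value */
#define EXAMPLE_MASK_NEW	GENMASK(31, 10)		/* 0xfffffc00, warning-free */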

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 13 Apr 2016 23:08:22 +0000 (16:08 -0700)]
ixgbe: use BIT() macro

Several areas of ixgbe were written before widespread usage of the
BIT(n) macro. With the impending release of GCC 6 and its associated new
warnings, some usages such as (1 << 31) have been noted within the ixgbe
driver source. Fix these wholesale and prevent future issues by simply
using the BIT() macro instead of hand-coded bit shifts.

Also fix a few shifts that are shifting values into place by using the
'u' suffix to indicate unsigned. It doesn't strictly matter in these
cases because we're not shifting by too large a value, but these are all
unsigned values and should be indicated as such.
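
Illustrative only, to show the shape of the conversions described above:

#define EXAMPLE_FLAG_OLD	(1 << 31)	/* shifts into the sign bit, warns */
#define EXAMPLE_FLAG_NEW	BIT(31)		/* expands to 1UL << 31 */
#define EXAMPLE_FIELD		(0x3u << 14)	/* value shifted into place, unsigned */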

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don Skidmore [Tue, 12 Apr 2016 23:25:10 +0000 (19:25 -0400)]
ixgbe: Add work around for empty SFP+ cage crosstalk

It is possible on some systems that crosstalk could lead to link flap
on empty SFP+ cages.  A new NVM bit was defined to let SW know it
needs to implement the workaround, which consists of verifying that
there is a module in the cage before acting on the LSC.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Mark Rustad [Fri, 8 Apr 2016 23:19:29 +0000 (16:19 -0700)]
ixgbe: Use correct FC setup function for x550em_a

Somehow the wrong fc_setup function was used for x550em_a, so
correct that. Also set setup_link to NULL as its value is
determined later, just like it is with X550EM_x.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Thu, 7 Apr 2016 22:58:44 +0000 (15:58 -0700)]
ixgbevf: add support for per-queue ethtool stats

Implement per-queue statistics for packets, bytes and busy poll
specific counters.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Thu, 7 Apr 2016 22:58:39 +0000 (15:58 -0700)]
ixgbevf: refactor ethtool stats handling

This brings the logic closer to how we handle the stats in ixgbe and it
sets us up for introducing per-queue stats.

Use IXGBEVF_STAT and IXGBEVF_NETDEV_STAT for accessing the driver and
netdev stats respectively. This way we don't have to calculate the
stats based on register values which could lead to the counters not
being initialized properly when the interface is down.

IXGBEVF_QUEUE_STATS_LEN is set to include the number of queues.

Also some defines were renamed to use the IXGBEVF prefix.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Mark Rustad [Thu, 7 Apr 2016 17:43:50 +0000 (10:43 -0700)]
ixgbe: Add register wait for slow links

Use a new register to wait for previous register writes to complete
before issuing a register read. This is needed when slower links
are in use.

Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Haiyang Zhang [Thu, 21 Apr 2016 23:13:01 +0000 (16:13 -0700)]
hv_netvsc: Fix the list processing for network change event

RNDIS_STATUS_NETWORK_CHANGE event is handled as two "half events" --
media disconnect & connect. The second half should be added to the list
head, not to the tail. So all events are processed in normal order.
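
A hedged sketch of that queuing decision (the condition, list and context
names are placeholders for the hv_netvsc reconfig work list, not the exact
code):

if (second_half_of_network_change)	/* placeholder condition */
	list_add(&event->list, &ndev_ctx->reconfig_events);	/* head: runs next */
else
	list_add_tail(&event->list, &ndev_ctx->reconfig_events);	/* tail: normal order */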

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sridhar Samudrala [Fri, 1 Apr 2016 17:34:38 +0000 (10:34 -0700)]
ixgbe: make 'action' field in struct ixgbe_fdir_filter a u64 value

This field is used to record the RX queue index for a redirect action
passed via ring_cookie field in struct ethtool_rx_flow_spec which is
a u64 value.

For ex: after adding a filter rule to redirect to a VF using ethtool
  # echo 4 > /sys/class/net/p4p1/device/sriov_numvfs
  # ethtool -N p4p1 flow-type ip4 src-ip 192.168.0.1 action 0x100000000

querying for the rule shows the Action as 'Direct to queue 0'

  # ethtool -n p4p1
  4 RX rings available
  Total 1 rules

  Filter: 2045
  Rule Type: Raw IPv4
Src IP addr: 192.168.0.1 mask: 0.0.0.0
Dest IP addr: 0.0.0.0 mask: 255.255.255.255
TOS: 0x0 mask: 0xff
Protocol: 0 mask: 0xff
L4 bytes: 0x0 mask: 0xffffffff
VLAN EtherType: 0x0 mask: 0xffff
VLAN: 0x0 mask: 0xffff
User-defined: 0x0 mask: 0xffffffffffffffff
Action: Direct to queue 0

With this fix, ethtool will report the right queue index even for VFs.
Action: Direct to queue 4294967296

Here 4294967296 corresponds to 0x100000000.
We need to update 'ethtool' to report the queue index as a hex value so
that it is more user friendly and matches the 'action' value that
is passed when adding the rule.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Thu, 24 Mar 2016 16:58:40 +0000 (09:58 -0700)]
ixgbe: fix default mac->ops.setup_link for X550EM

X550EM_a/x did not have a default value for mac->ops.setup_link which
was causing link issues for backplane devices.

This patch sets mac->ops.setup_link to ixgbe_setup_mac_link_X540 for
X550EM_a/x which is also default for X550. This will result in
mac->ops.setup_link calling the link setup function for the respective
PHY type in case we do not need a special function to deal with it.

Reported-by: Ken Cox <jkc@redhat.com>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Fri, 18 Mar 2016 23:11:19 +0000 (16:11 -0700)]
ixgbe: set VLAN spoof checking unconditionally

Previously the PF driver would only set VLAN spoof checking if
the VF had created VLANs. This was done by setting and checking
a counter (vlan_count) whenever a VLAN was created by the VF.
However it is possible for the vlan_count to be !=0 while there are
no VLANs assigned to the VF due to the count incrementing every
time a VLAN 0 is added on ifdown/up, which resulted in VLAN spoofing
always being set for those VFs.

This patch cleans up the logic by unconditionally setting VLAN spoof
checking based on how the VF is configured (via ip link set ethX vf Y
spoofchk on/off). This change also resolves an issue where VLAN spoofing
could remain set even after being disabled by the user, because the driver
enabled VLAN spoof checking every time a VLAN was added to the VF, but
would only allow changes to the setting if vlan_count != 0.

Also default_vf_vlan_id and vlans_enabled were removed from the
vf_data_storage structure since they are not being used in the driver.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Fri, 18 Mar 2016 23:11:14 +0000 (16:11 -0700)]
ixgbe: consolidate the configuration of spoof checking

Consolidate the logic behind configuring spoof checking:

Move the setting of the MAC, VLAN and Ethertype spoof checking into
ixgbe_ndo_set_vf_spoofchk().

Change ixgbe_set_mac_anti_spoofing() to set MAC spoofing per VF similar
to the VLAN and Ethertype functions - this allows us to call the helper
functions in ixgbe_ndo_set_vf_spoofchk() for all spoof check types and
only disable MAC spoof checking when creating MACVLAN.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Eric Dumazet [Thu, 21 Apr 2016 17:55:23 +0000 (10:55 -0700)]
tcp-tso: do not split TSO packets at retransmit time

Linux TCP stack painfully segments all TSO/GSO packets before retransmits.

This was fine back in the days when TSO/GSO were emerging, with their
bugs, but we believe the dark age is over.

Keeping big packets in write queues, but also through stack traversal,
has a lot of benefits.
 - Less memory overhead, because write queues have fewer skbs
 - Less cpu overhead at ACK processing.
 - Better SACK processing, as a lot of studies mentioned how
   awful linux was at this ;)
 - Less cpu overhead to send the rtx packets
   (IP stack traversal, netfilter traversal, drivers...)
 - Better latencies in presence of losses.
 - Smaller spikes in fq like packet schedulers, as retransmits
   are not constrained by TCP Small Queues.

1% packet losses are common today, and at 100Gbit speeds, this
translates to ~80,000 losses per second.
Losses are often correlated, and we see many retransmit events
leading to 1-MSS trains of packets, at a time when hosts are already
under stress.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parthasarathy Bhuvaragan [Thu, 21 Apr 2016 13:51:13 +0000 (15:51 +0200)]
tipc: fix stale links after re-enabling bearer

Commit 42b18f605fea ("tipc: refactor function tipc_link_timeout()")
introduced a bug which prevents the sending of probe messages during
the link synchronization phase. This leads to hanging links if the
bearer is disabled/enabled after links are up.

In this commit, we send the probe messages correctly.

Fixes: 42b18f605fea ("tipc: refactor function tipc_link_timeout()")
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Apr 2016 18:06:56 +0000 (14:06 -0400)]
Merge branch 'tcp-tcstamp_ack-frag-coalesce'

Martin KaFai Lau says:

====================
tcp: Handle txstamp_ack when fragmenting/coalescing skbs

This patchset is to handle the txstamp-ack bit when
fragmenting/coalescing skbs.

The second patch depends on the recently posted series
for the net branch:
"tcp: Merge timestamp info when coalescing skbs"

A BPF prog is used to kprobe sock_queue_err_skb()
and print out the value of serr->ee.ee_data.  The BPF
prog (runnable from bcc) is attached here:

BPF prog used for testing:
~~~~~

from __future__ import print_function
from bcc import BPF

bpf_text = """
/* headers needed for struct sk_buff, struct sock, struct sock_exterr_skb
 * and the SKB_EXT_ERR() accessor used below
 */
#include <uapi/linux/ptrace.h>
#include <net/sock.h>
#include <linux/skbuff.h>
#include <linux/errqueue.h>

int trace_err_skb(struct pt_regs *ctx)
{
struct sk_buff *skb = (struct sk_buff *)ctx->si;
struct sock *sk = (struct sock *)ctx->di;
struct sock_exterr_skb *serr;
u32 ee_data = 0;

if (!sk || !skb)
return 0;

serr = SKB_EXT_ERR(skb);
bpf_probe_read(&ee_data, sizeof(ee_data), &serr->ee.ee_data);
bpf_trace_printk("ee_data:%u\\n", ee_data);

return 0;
};
"""

b = BPF(text=bpf_text)
b.attach_kprobe(event="sock_queue_err_skb", fn_name="trace_err_skb")
print("Attached to kprobe")
b.trace_print()
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Wed, 20 Apr 2016 05:50:48 +0000 (22:50 -0700)]
tcp: Merge txstamp_ack in tcp_skb_collapse_tstamp

When collapsing skbs, txstamp_ack also needs to be merged.

Retrans Collapse Test:
~~~~~~
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0

0.200 write(4, ..., 730) = 730
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 730) = 730
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0
0.200 write(4, ..., 11680) = 11680

0.200 > P. 1:731(730) ack 1
0.200 > P. 731:1461(730) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:13141(4380) ack 1

0.300 < . 1:1(0) ack 1 win 257 <sack 1461:2921,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:4381,nop,nop>
0.300 < . 1:1(0) ack 1 win 257 <sack 1461:5841,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 13141 win 257

BPF Output Before:
~~~~~
<No output due to missing SCM_TSTAMP_ACK timestamp>

BPF Output After:
~~~~~
<...>-2027  [007] d.s.    79.765921: : ee_data:1459

Sacks Collapse Test:
~~~~~
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0

0.200 write(4, ..., 1460) = 1460
+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 13140) = 13140
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0

0.200 > P. 1:1461(1460) ack 1
0.200 > . 1461:8761(7300) ack 1
0.200 > P. 8761:14601(5840) ack 1

0.300 < . 1:1(0) ack 1 win 257 <sack 1461:14601,nop,nop>
0.300 > P. 1:1461(1460) ack 1
0.400 < . 1:1(0) ack 14601 win 257

BPF Output Before:
~~~~~
<No output due to missing SCM_TSTAMP_ACK timestamp>

BPF Output After:
~~~~~
<...>-2049  [007] d.s.    89.185538: : ee_data:14599

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Wed, 20 Apr 2016 05:50:47 +0000 (22:50 -0700)]
tcp: Carry txstamp_ack in tcp_fragment_tstamp

When a tcp skb is sliced into two smaller skbs (e.g. in
tcp_fragment() and tso_fragment()), it does not carry
the txstamp_ack bit over to the newly created skb even when needed.
The end result is that a timestamping event (SCM_TSTAMP_ACK) will
be missing from the sk->sk_error_queue.

This patch carries this bit to the new skb2
in tcp_fragment_tstamp().
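
A hedged sketch of the carried bit (the surrounding condition in
tcp_fragment_tstamp() is abbreviated to a placeholder, not the exact hunk):

if (tstamp_key_moves_to_skb2)	/* placeholder for the existing check */
	TCP_SKB_CB(skb2)->txstamp_ack = TCP_SKB_CB(skb)->txstamp_ack;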

BPF Output Before:
~~~~~~
<No output due to missing SCM_TSTAMP_ACK timestamp>

BPF Output After:
~~~~~~
<...>-2050  [000] d.s.   100.928763: : ee_data:14599

Packetdrill Script:
~~~~~~
+0 `sysctl -q -w net.ipv4.tcp_min_tso_segs=10`
+0 `sysctl -q -w net.ipv4.tcp_no_metrics_save=1`
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0

0.100 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 7>
0.100 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 7>
0.200 < . 1:1(0) ack 1 win 257
0.200 accept(3, ..., ...) = 4
+0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0

+0 setsockopt(4, SOL_SOCKET, 37, [2688], 4) = 0
0.200 write(4, ..., 14600) = 14600
+0 setsockopt(4, SOL_SOCKET, 37, [2176], 4) = 0

0.200 > . 1:7301(7300) ack 1
0.200 > P. 7301:14601(7300) ack 1

0.300 < . 1:1(0) ack 14601 win 257

0.300 close(4) = 0
0.300 > F. 14601:14601(0) ack 1
0.400 < F. 1:1(0) ack 16062 win 257
0.400 > . 14602:14602(0) ack 2

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Apr 2016 04:12:08 +0000 (00:12 -0400)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next

Pablo Neira Ayuso says:

====================
Netfilter updates for net-next

The following patchset contains Netfilter updates for your net-next
tree, mostly from Florian Westphal to sort out the lack of sufficient
validation in x_tables and connlabel preparation patches to add
nf_tables support. They are:

1) Ensure we don't go over the ruleset blob boundaries in
   mark_source_chains().

2) Validate that target jumps land on an existing xt_entry. This extra
   sanitization comes with a performance penalty when loading the ruleset.

3) Introduce xt_check_entry_offsets() and use it from {arp,ip,ip6}tables.

4) Get rid of the smallish check_entry() functions in {arp,ip,ip6}tables.

5) Make sure targets have at least the minimal possible size in x_tables.

6) Similar to #3, add xt_compat_check_entry_offsets() for compat code.

7) Check that standard target size is valid.

8) More sanitization to ensure that the target_offset field is correct.

9) Add xt_check_entry_match() to validate that matches are well-formed.

10-12) Three patches to reduce the number of parameters in
    translate_compat_table() for {arp,ip,ip6}tables by using a container
    structure.

13) No need to return value from xt_compat_match_from_user(), so make
    it void.

14) Consolidate translate_table() so it can be used by compat code too.

15) Remove obsolete check for compat code, so we keep consistent with
    what was already removed in the native layout code (back in 2007).

16) Get rid of target jump validation from mark_source_chains(),
    obsoleted by #2.

17) Introduce xt_copy_counters_from_user() to consolidate counter
    copying, and use it from {arp,ip,ip6}tables.

18,22) Get rid of unnecessary explicit inlining in ctnetlink for dump
    functions.

19) Move nf_connlabel_match() to xt_connlabel.

20) Skip event notification if connlabel did not change.

21) Update of nf_connlabels_get() to make the upcoming nft connlabel
    support easier.

23) Remove spinlock to read protocol state field in conntrack.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Apr 2016 00:13:29 +0000 (20:13 -0400)]
Merge branch 'nla_align-more'

Nicolas Dichtel says:

====================
netlink: align attributes when needed (patchset #1)

This is the continuation of the work done to align netlink attributes
when these attributes contain some 64-bit fields.

David, if the third patch is too big (or maybe the series), I can split it.
Just tell me what you prefer.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:24 +0000 (17:31 +0200)]
taskstats: use the libnl API to align nlattr on 64-bit

The goal of this patch is to use the new libnl API to align netlink
attributes when needed.
The layout of the netlink message will be a bit different after the patch,
because the padattr (TASKSTATS_TYPE_STATS) will be inside the nested
attribute instead of before it.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:23 +0000 (17:31 +0200)]
xfrm: align nlattr properly when needed

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:22 +0000 (17:31 +0200)]
libnl: add nla_put_u64_64bit() helper

With this function, nla_data() is aligned on a 64-bit area.
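
A minimal sketch of what such a helper amounts to, assuming the
nla_put_64bit() infrastructure from the earlier "libnl: add more helpers to
align attributes on 64-bit" work referenced further down in this log:

static inline int nla_put_u64_64bit(struct sk_buff *skb, int attrtype,
				    u64 value, int padattr)
{
	/* nla_put_64bit() inserts a padattr-typed pad attribute first
	 * whenever the 8-byte payload would otherwise be misaligned
	 */
	return nla_put_64bit(skb, attrtype, sizeof(u64), &value, padattr);
}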

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:21 +0000 (17:31 +0200)]
libnl: nla_put_msecs(): align on a 64-bit area

nla_data() is now aligned on a 64-bit area.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:20 +0000 (17:31 +0200)]
libnl: nla_put_s64(): align on a 64-bit area

nla_data() is now aligned on a 64-bit area.
In fact, there is no user of this function.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:19 +0000 (17:31 +0200)]
libnl: nla_put_net64(): align on a 64-bit area

nla_data() is now aligned on a 64-bit area.

The temporary function nla_put_be64_32bit() is removed in this patch.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:18 +0000 (17:31 +0200)]
libnl: nla_put_be64(): align on a 64-bit area

nla_data() is now aligned on a 64-bit area.

A temporary version (nla_put_be64_32bit()) is added for nla_put_net64().
This function is removed in the next patch.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:17 +0000 (17:31 +0200)]
libnl: nla_put_le64(): align on a 64-bit area

nla_data() is now aligned on a 64-bit area.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Fri, 22 Apr 2016 15:31:16 +0000 (17:31 +0200)]
libnl: fix help of _64bit functions

Fix typo and describe 'padattr'.

Fixes: 089bf1a6a924 ("libnl: add more helpers to align attributes on 64-bit")
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 23 Apr 2016 22:26:24 +0000 (18:26 -0400)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts were two cases of simple overlapping changes,
nothing serious.

In the UDP case, we need to add a hlist_add_tail_rcu()
to linux/rculist.h, because we've moved UDP socket handling
away from using nulls lists.
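
A rough sketch of what such a helper looks like (hedged: not necessarily the
exact version that was added): walk to the last node on the writer side and
publish the new node with rcu_assign_pointer(), falling back to
hlist_add_head_rcu() for an empty list.

static inline void hlist_add_tail_rcu(struct hlist_node *n,
				      struct hlist_head *h)
{
	struct hlist_node *i, *last = NULL;

	for (i = h->first; i; i = i->next)	/* writer side, lock held */
		last = i;

	if (last) {
		n->next = last->next;
		n->pprev = &last->next;
		rcu_assign_pointer(hlist_next_rcu(last), n);
	} else {
		hlist_add_head_rcu(n, h);
	}
}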

Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Thu, 21 Apr 2016 22:41:13 +0000 (15:41 -0700)]
Merge tag 'rtc-4.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux

Pull RTC fixes from Alexandre Belloni:
 "A few fixes for the RTC subsystem.  The documentation fix already
  missed 4.5 so I think it is worth taking it now:

  A documentation fix for s3c and two fixes for the ds1307"

* tag 'rtc-4.6-3' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux:
  rtc: ds1307: Use irq when available for wakeup-source device
  rtc: ds1307: ds3231 temperature s16 overflow
  rtc: s3c: Document in binding that only s3c6410 needs a src clk

Linus Torvalds [Thu, 21 Apr 2016 21:29:34 +0000 (14:29 -0700)]
Merge tag 'pm+acpi-4.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management fixes from Rafael Wysocki:
 "Two fixes for issues introduced recently, one for an intel_pstate
  driver problem uncovered by the recent switch over from using timers
  and the other one for a potential cpufreq core problem related to
  system suspend/resume.

  Specifics:

   - Fix an intel_pstate driver problem causing CPUs to get stuck in the
     highest P-state when completely idle uncovered by the recent switch
     over from using timers (Rafael Wysocki).

   - Avoid attempts to get the current CPU frequency when all devices
(like I2C controllers that may be needed for that purpose) have
     been suspended during system suspend/resume (Rafael Wysocki)"

* tag 'pm+acpi-4.6-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpufreq: Abort cpufreq_update_current_freq() for cpufreq_suspended set
  intel_pstate: Avoid getting stuck in high P-states when idle

Nishanth Menon [Tue, 19 Apr 2016 16:23:54 +0000 (11:23 -0500)]
rtc: ds1307: Use irq when available for wakeup-source device

With commit 8bc2a40730ec ("rtc: ds1307: add support for the
DT property 'wakeup-source'") we lost rtc irq functionality for
devices that are actually hooked up to a real IRQ line and are
capable of wakeup as well. This is not expected behavior. So,
instead of just not requesting the IRQ, skip the IRQ requirement
only if no interrupts are defined for the device.

Fixes: 8bc2a40730ec ("rtc: ds1307: add support for the DT property 'wakeup-source'")
Reported-by: Tony Lindgren <tony@atomide.com>
Cc: Michael Lange <linuxstuff@milaw.biz>
Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Zhuang Yuyao [Mon, 18 Apr 2016 00:21:42 +0000 (09:21 +0900)]
rtc: ds1307: ds3231 temperature s16 overflow

While retrieving the temperature from the ds3231, the result may overflow,
since s16 is too small for a multiplication by 250.

I.e. if temp_buf[0] == 0x2d, the result (s16 temp) will be negative.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Tested-by: Michael Tatarinov <kukabu@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Linus Torvalds [Thu, 21 Apr 2016 19:57:34 +0000 (12:57 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

 1) Fix memory leak in iwlwifi, from Matti Gottlieb.

 2) Add missing registration of netfilter arp_tables into initial
    namespace, from Florian Westphal.

 3) Fix potential NULL deref in DECnet routing code.

 4) Restrict NETLINK_URELEASE to truly bound sockets only, from Dmitry
    Ivanov.

 5) Fix dst ref counting in VRF, from David Ahern.

 6) Fix TSO segmenting limits in i40e driver, from Alexander Duyck.

 7) Fix heap leak in PACKET_DIAG_MCLIST, from Mathias Krause.

 8) Revalidate IPv6 datagram socket cached routes properly, particularly
    with UDP, from Martin KaFai Lau.

 9) Fix endian bug in RDS dp_ack_seq handling, from Qing Huang.

10) Fix stats typing in bcmgenet driver, from Eric Dumazet.

11) Openvswitch needs to orphan SKBs before ipv6 fragmentation handling,
    from Joe Stringer.

12) SPI device reference leak in spi_ks8895 PHY driver, from Mark Brown.

13) atl2 doesn't actually support scatter-gather, so don't advertise the
    feature.  From Ben Hutchings.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (72 commits)
  openvswitch: use flow protocol when recalculating ipv6 checksums
  Driver: Vmxnet3: set CHECKSUM_UNNECESSARY for IPv6 packets
  atl2: Disable unimplemented scatter/gather feature
  net/mlx4_en: Split SW RX dropped counter per RX ring
  net/mlx4_core: Don't allow to VF change global pause settings
  net/mlx4_core: Avoid repeated calls to pci enable/disable
  net/mlx4_core: Implement pci_resume callback
  net: phy: spi_ks8895: Don't leak references to SPI devices
  net: ethernet: davinci_emac: Fix platform_data overwrite
  net: ethernet: davinci_emac: Fix Unbalanced pm_runtime_enable
  qede: Fix single MTU sized packet from firmware GRO flow
  qede: Fix setting Skb network header
  qede: Fix various memory allocation error flows for fastpath
  tcp: Merge tx_flags and tskey in tcp_shifted_skb
  tcp: Merge tx_flags and tskey in tcp_collapse_retrans
  drivers: net: cpsw: fix wrong regs access in cpsw_ndo_open
  tcp: Fix SOF_TIMESTAMPING_TX_ACK when handling dup acks
  openvswitch: Orphan skbs before IPv6 defrag
  Revert "Prevent NUll pointer dereference with two PHYs on cpsw"
  VSOCK: Only check error on skb_recv_datagram when skb is NULL
  ...

David S. Miller [Thu, 21 Apr 2016 19:36:04 +0000 (15:36 -0400)]
Merge branch 'geneve-vxlan-deps'

Hannes Frederic Sowa says:

====================
net: network drivers should not depend on geneve/vxlan

This patchset removes the dependency of network drivers on vxlan or
geneve, so those don't get autoloaded when the NIC driver is loaded.

We also audited the code to ensure that vxlan_get_rx_port and
geneve_get_rx_port are not called without the rtnl lock held.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:48 +0000 (21:19 +0200)]
geneve: break dependency with netdev drivers

Equivalent to "vxlan: break dependency with netdev drivers": don't
autoload the geneve module just because a NIC driver is loaded. Instead,
make the coupling weaker by using netdevice notifiers as a proxy.

Cc: Jesse Gross <jesse@kernel.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:47 +0000 (21:19 +0200)]
vxlan: break dependency with netdev drivers

Currently all drivers depend on and autoload the vxlan module because of
how vxlan_get_rx_port is linked into them. Remove this dependency:

By using a new event type in the netdevice notifier call chain we proxy
the request from the drivers to flush and resetup the vxlan ports not
directly via function call but by the already existing netdevice
notifier call chain.

I added a separate new event type, NETDEV_OFFLOAD_PUSH_VXLAN, to do so.
We don't need to save those ids, as the event type field is an unsigned
long and using specialized event types for this purpose seemed to be the
more elegant way. This also comes in handy if in the future we want to
add offloading knobs for vxlan.
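
A hedged sketch of the proxy on the vxlan side (the push helper name is
illustrative; the event type is the one named above). A driver-side request
then amounts to firing that notifier event, e.g. via
call_netdevice_notifiers(), rather than calling into the vxlan module
directly:

static int vxlan_netdevice_event(struct notifier_block *unused,
				 unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	if (event == NETDEV_OFFLOAD_PUSH_VXLAN)
		vxlan_push_rx_ports(dev);	/* illustrative helper: replay
						 * known UDP ports to 'dev'
						 */
	return NOTIFY_DONE;
}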

Cc: Jesse Gross <jesse@kernel.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:46 +0000 (21:19 +0200)]
qlcnic: protect qlcnic_attach_func with rtnl_lock

qlcnic_attach_func requires rtnl_lock to be held.

Cc: Dept-GELinuxNICDev@qlogic.com
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:45 +0000 (21:19 +0200)]
ixgbe: protect vxlan_get_rx_port in ixgbe_service_task with rtnl_lock

vxlan_get_rx_port requires rtnl_lock to be held.
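
The pattern these "protect ... with rtnl_lock" patches apply, sketched here
for the ixgbe service task ('adapter' is a placeholder for the driver
context):

rtnl_lock();
vxlan_get_rx_port(adapter->netdev);
rtnl_unlock();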

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Cc: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:44 +0000 (21:19 +0200)]
mlx4: protect mlx4_en_start_port in mlx4_en_restart with rtnl_lock

mlx4_en_start_port requires rtnl_lock to be held.

Cc: Eugenia Emantayev <eugenia@mellanox.com>
Cc: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:43 +0000 (21:19 +0200)]
fm10k: protect fm10k_open in fm10k_io_resume with rtnl_lock

fm10k_open requires rtnl_lock to be held.

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Shannon Nelson <shannon.nelson@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Cc: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Mon, 18 Apr 2016 19:19:42 +0000 (21:19 +0200)]
benet: be_resume needs to protect be_open with rtnl_lock

be_open calls down to functions which expect the rtnl lock to be held.

Cc: Sathya Perla <sathya.perla@broadcom.com>
Cc: Ajit Khaparde <ajit.khaparde@broadcom.com>
Cc: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
Cc: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Cc: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Simon Horman [Thu, 21 Apr 2016 01:49:15 +0000 (11:49 +1000)]
openvswitch: use flow protocol when recalculating ipv6 checksums

When using masked actions, the ipv6_proto field of an action
to set IPv6 fields may be zero rather than the prevailing protocol,
which will result in skipping checksum recalculation.

This patch resolves the problem by relying on the protocol
in the flow key rather than that in the set field action.

Fixes: 83d2b9ba1abc ("net: openvswitch: Support masked set actions.")
Cc: Jarno Rajahalme <jrajahalme@nicira.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shrikrishna Khare [Thu, 21 Apr 2016 01:12:29 +0000 (18:12 -0700)]
Driver: Vmxnet3: set CHECKSUM_UNNECESSARY for IPv6 packets

For IPv6, if the device indicates that the checksum is correct, set
CHECKSUM_UNNECESSARY.
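
A hedged sketch of the described change (the completion-descriptor field
names are placeholders, not the actual vmxnet3 ones):

if (rcd_is_ipv6 && rcd_csum_ok)
	skb->ip_summed = CHECKSUM_UNNECESSARY;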

Reported-by: Subbarao Narahari <snarahari@vmware.com>
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Signed-off-by: Jin Heo <heoj@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ben Hutchings [Wed, 20 Apr 2016 22:23:08 +0000 (23:23 +0100)]
atl2: Disable unimplemented scatter/gather feature

atl2 includes NETIF_F_SG in hw_features even though it has no support
for non-linear skbs.  This bug was originally harmless since the
driver does not claim to implement checksum offload and that used to
be a requirement for SG.

Now that SG and checksum offload are independent features, if you
explicitly enable SG *and* use one of the rare protocols that can use
SG without checksum offload, this potentially leaks sensitive
information (before you notice that it just isn't working).  Therefore
this obscure bug has been designated CVE-2016-2117.

Reported-by: Justin Yackoski <jyackoski@crypto-nite.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Fixes: ec5f06156423 ("net: Kill link between CSUM and SG features.")
Signed-off-by: David S. Miller <davem@davemloft.net>
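
A rough sketch of what the fix boils down to (not the literal atl2 diff):
stop advertising NETIF_F_SG so it can no longer be switched on via
ethtool -K:

    #include <linux/netdevice.h>

    /* Illustrative only: a driver with no non-linear skb support must not
     * expose NETIF_F_SG as a user-toggleable feature. */
    static void example_drop_unimplemented_sg(struct net_device *netdev)
    {
            netdev->hw_features &= ~NETIF_F_SG;
            netdev->features    &= ~NETIF_F_SG;
    }
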
8 years agonet: Add support for IP ID mangling TSO in cases that require encapsulation
Alexander Duyck [Wed, 20 Apr 2016 20:51:00 +0000 (16:51 -0400)]
net: Add support for IP ID mangling TSO in cases that require encapsulation

This patch adds support for NETIF_F_TSO_MANGLEID if a given tunnel supports
NETIF_F_TSO.  This way, if needed, a device can later enable TSO with
IP ID mangling, and the tunnels on top of that device can then make use
of IP ID mangling as well.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'mlx5-next'
David S. Miller [Thu, 21 Apr 2016 19:09:06 +0000 (15:09 -0400)]
Merge branch 'mlx5-next'

Saeed Mahameed says:

====================
Mellanox 100G mlx5 driver receive path optimizations

Changes from V2:
- Rebased to 46e7b8d8d53b ("net: dsa: kill circular reference with slave priv")
- Updated: ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
  * Per Eric Dumazet's comment, we changed the driver memory handling scheme
    to work with order-0 pages rather than order-5, via split_page().
  * This means that an mlx5e RX skb can now hold one skb frag (or more, in
    case of HW LRO), each pointing to a 4K order-0 page, rather than one
    frag with an order-5 page.
- Updated: ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
  * Code refactoring and code reuse due to the split_page() mechanism;
    the MPWQE and fragmented MPWQE handling now look almost the same
    and share most of the code.
- In some cases we see a 2%-3% packet rate degradation compared to the
  order-5 pages approach, due to split_page() CPU consumption, but we still
  see a 3%-10% improvement compared to the current linear SKB approach.
- We believe the driver memory scheme is now significantly less vulnerable
  to the memory DoS attack Eric pointed at.

Changes from V1:
- Rebased to efde611b0afa ("Merge branch 'nfp-next'")
- Dropped: ("net/mlx5: Refactor mlx5_core_mr to mkey")
  Already merged into 4.6 from the rdma tree.
- Dropped: ("net/mlx5_core: Add ConnectX-5 to list of supported devices")
  Will be pushed to net as we want it in the 4.6 release.
- Dropped: ("net/mlx5e: Change RX moderation period to be based on CQE")
  Will be pushed in a later series with full software-based adaptive moderation.
- Added: ("net/mlx5e: Delay skb->data access")
  Small trivial optimization.
- Updated: ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
  Changed Striding RQ defaults to:
  * NUM WQEs = 16
  * Strides Per WQE = 1024
  * Stride Size = 128
- Updated: ("net/mlx5e: Use napi_alloc_skb for RX SKB allocations")
  Consider the IP packet alignment already done in napi_alloc_skb.

Changes from V0:
- Fixed a typo in commit message reported by Sergei
- Align SKB fragments truesize to stride size
- Use skb_add_rx_frag and remove the use of SKB_TRUESIZE
- Fix: # MTTs alignment on PowerPC
- Fix: Free original (unaligned) pointer of MTT array
- Use dev_alloc_pages and dev_alloc_page
- Extend the stats.buff_alloc_err counter
- Reform the copying of packet header into skb linear data
- Add compiler hints for conditional statements
- Prefetch skb->data prior to copying packet header into it
- Rework: mlx5e_complete_rx_fragmented_mpwqe
- Handle SKB fragments before linear data
- Dropped ("net/mlx5e: Prefetch next RX CQE") for now
- Added a small patch that Adds ConnectX-5 devices to the list of supported devices
- Rebased to 1cdba5505555 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next")

This series includes some RX modifications and optimizations for
the mlx5 Ethernet driver.

From Rana, we have one patch that adds support for ConnectX-4
queue counters.

From Tariq, several patches centered around improving RX path
message rate and CPU and memory utilization; in each patch's
commit message you will find the performance improvement numbers
related to that specific patch.

In the 2nd patch we use a queue counter to report the "out of buffer"
dropped packet count, i.e. "Dropped packets due to lack of software
resources".

The 3rd patch modifies the driver's default RSS table to be spread over
the close NUMA node cores only, for a better out-of-the-box experience.

In the 4th and 5th patches we use RX multi-packet WQEs (Striding RQ) for
better memory utilization, especially when hardware LRO is enabled, and
for a better message rate for small packets.

In the 6th and 7th patches we add a fallback mechanism to use fragmented
memory when the allocation of large WQE strides fails, using UMR
(User Memory Registration) and ICO (Internal Control Operations) SQs.

In the 8th to 11th patches we make some small modifications which show
some extra improvements.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx5e: Add ethtool counter for RX buffer allocation failures
Tariq Toukan [Wed, 20 Apr 2016 19:02:19 +0000 (22:02 +0300)]
net/mlx5e: Add ethtool counter for RX buffer allocation failures

Counts the number of RX buffer allocation failures and shows it
in ethtool statistics.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
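
A minimal sketch of the counting side (hypothetical stats struct, not the
mlx5e layout): the RX buffer allocation failure path bumps a per-ring
counter that is later copied into the ethtool statistics:

    #include <linux/skbuff.h>
    #include <linux/types.h>

    struct example_rq_stats {
            u64 buff_alloc_err;     /* exported via ethtool -S */
    };

    /* Illustrative only: count every failed RX page allocation. */
    static struct page *example_alloc_rx_page(struct example_rq_stats *stats)
    {
            struct page *page = dev_alloc_page();

            if (unlikely(!page))
                    stats->buff_alloc_err++;

            return page;
    }
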
8 years agonet/mlx5e: Delay skb->data access
Saeed Mahameed [Wed, 20 Apr 2016 19:02:18 +0000 (22:02 +0300)]
net/mlx5e: Delay skb->data access

Move mlx5e_handle_csum and eth_type_trans to the end of
mlx5e_build_rx_skb to gain some more time before accessing
skb->data, to reduce cache misses.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
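
A simplified sketch of the reordering (not the mlx5e code; the checksum
handling is reduced to a placeholder): everything that only touches skb
metadata runs first, and the calls that dereference skb->data run last:

    #include <linux/etherdevice.h>
    #include <linux/skbuff.h>

    static void example_build_rx_skb(struct sk_buff *skb,
                                     struct net_device *netdev,
                                     unsigned int byte_cnt)
    {
            /* metadata only: no skb->data dereference here */
            skb_put(skb, byte_cnt);
            skb_record_rx_queue(skb, 0);

            /* these read the packet headers, so keep them last to give the
             * earlier prefetch of skb->data more time to complete */
            skb->ip_summed = CHECKSUM_UNNECESSARY;  /* csum handling stand-in */
            skb->protocol = eth_type_trans(skb, netdev);
    }
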
8 years agonet/mlx5e: Remove redundant barrier
Tariq Toukan [Wed, 20 Apr 2016 19:02:17 +0000 (22:02 +0300)]
net/mlx5e: Remove redundant barrier

The bit-op operation one line before is an explicit barrier
by itself.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx5e: Use napi_alloc_skb for RX SKB allocations
Tariq Toukan [Wed, 20 Apr 2016 19:02:16 +0000 (22:02 +0300)]
net/mlx5e: Use napi_alloc_skb for RX SKB allocations

Instead of netdev_alloc_skb, we use the napi_alloc_skb function,
which is designed to allocate skbuffs for RX in a
channel-specific NAPI instance and already applies the IP packet alignment.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
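
A before/after sketch of the swap (simplified; per the message above,
napi_alloc_skb() already takes care of the IP packet alignment):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static struct sk_buff *example_rx_alloc(struct napi_struct *napi,
                                            unsigned int len)
    {
            /* before: netdev_alloc_skb(netdev, len) plus manual alignment */
            return napi_alloc_skb(napi, len);  /* NAPI-context allocation */
    }
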
8 years agonet/mlx5e: Add fragmented memory support for RX multi packet WQE
Tariq Toukan [Wed, 20 Apr 2016 19:02:15 +0000 (22:02 +0300)]
net/mlx5e: Add fragmented memory support for RX multi packet WQE

If the allocation of a linear (physically contiguous) MPWQE fails,
we allocate a fragmented MPWQE.

This is implemented via the device's UMR (User Memory Registration),
which allows registering multiple memory fragments into the ConnectX
hardware as one contiguous buffer.
UMR registration is an asynchronous operation and is done via
ICO SQs.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx5e: Added ICO SQs
Tariq Toukan [Wed, 20 Apr 2016 19:02:14 +0000 (22:02 +0300)]
net/mlx5e: Added ICO SQs

Added an ICO (Internal Control Operations) SQ per channel, to be used
for driver-internal operations such as memory registration for
fragmented memory and NOP requests upon ifconfig up.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx5e: Support RX multi-packet WQE (Striding RQ)
Tariq Toukan [Wed, 20 Apr 2016 19:02:13 +0000 (22:02 +0300)]
net/mlx5e: Support RX multi-packet WQE (Striding RQ)

Introduce the feature of multi-packet WQE (RX Work Queue Element)
referred to as (MPWQE or Striding RQ), in which WQEs are larger
and serve multiple packets each.

Every WQE consists of many strides of the same size, every received
packet is aligned to a beginning of a stride and is written to
consecutive strides within a WQE.

In the regular approach, each WQE is big enough to serve one received
packet of any size up to the MTU, or 64K when device LRO is enabled,
which is very wasteful when dealing with small packets or when device
LRO is enabled.

Thanks to its flexibility, MPWQE allows better memory utilization
(implying improvements in CPU utilization and packet rate), as packets
consume strides according to their size, preserving the rest of
the WQE to be available for other packets.

MPWQE default configuration:
Num of WQEs = 16
Strides Per WQE = 2048
Stride Size = 64 byte

The default WQE memory footprint went from 1024*MTU (~1.5MB) to
16 * 2048 * 64 = 2MB per ring.
However, HW LRO can now be supported at no additional cost in memory
footprint, and hence we turn it on by default and get even better
performance.
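
For reference, a standalone arithmetic check of those footprint numbers
(plain C, no driver code involved; an MTU of 1500 is assumed for the legacy
figure):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long legacy = 1024ULL * 1500;    /* 1024 MTU-sized buffers */
            unsigned long long mpwqe  = 16ULL * 2048 * 64; /* 16 WQEs x 2048 strides x 64B */

            printf("legacy: %llu bytes (~1.5MB), striding RQ: %llu bytes (2MB)\n",
                   legacy, mpwqe);
            return 0;
    }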

Performance tested on ConnectX4-Lx 50G.
To isolate the feature under test, the numbers below were measured with
HW LRO turned off. We verified that the performance just improves when
LRO is turned back on.

* Netperf single TCP stream:
- BW raised by 10-15% for representative packet sizes:
  default, 64B, 1024B, 1478B, 65536B.

* Netperf multi TCP stream:
- No degradation, line rate reached.

* Pktgen: packet rate raised by 2-10% for traffic of different message
sizes: 64B, 128B, 256B, 1024B, and 1500B.

* Pktgen: packet loss in bursts of small messages (64 byte),
single stream:
- | num packets | packets loss before | packets loss after
  |     2K      |       ~ 1K          |       0
  |     8K      |       ~ 6K          |       0
  |     16K     |       ~13K          |       0
  |     32K     |       ~28K          |       0
  |     64K     |       ~57K          |     ~24K

This is expected, as the driver can now receive as many small packets
(<=64B) as the total number of strides in the ring (default = 2048 * 16),
vs. 1024 (the default ring size, regardless of packet size) before this
feature.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx5e: Use function pointers for RX data path handling
Tariq Toukan [Wed, 20 Apr 2016 19:02:12 +0000 (22:02 +0300)]
net/mlx5e: Use function pointers for RX data path handling

In preparation for the Striding RQ feature, which will need its own
RX handlers.
This patch does not change any functionality.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Achiad Shochat <achiad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
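
A compact sketch of the dispatch idea (hypothetical type and function
names): each RQ carries a handler pointer chosen at setup time, so the
completion loop does not branch on the RQ type:

    #include <linux/types.h>

    struct example_rq {
            void (*handle_rx_cqe)(struct example_rq *rq, void *cqe);
            /* ... ring state ... */
    };

    static void handle_rx_cqe_linear(struct example_rq *rq, void *cqe)
    {
            /* legacy (one packet per WQE) path */
    }

    static void handle_rx_cqe_mpwqe(struct example_rq *rq, void *cqe)
    {
            /* striding RQ path */
    }

    static void example_rq_set_handler(struct example_rq *rq, bool striding)
    {
            rq->handle_rx_cqe = striding ? handle_rx_cqe_mpwqe
                                         : handle_rx_cqe_linear;
    }

    static void example_handle_cqe(struct example_rq *rq, void *cqe)
    {
            rq->handle_rx_cqe(rq, cqe);     /* single dispatch point */
    }
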
8 years agonet/mlx5e: Use only close NUMA node for default RSS
Tariq Toukan [Wed, 20 Apr 2016 19:02:11 +0000 (22:02 +0300)]
net/mlx5e: Use only close NUMA node for default RSS

Distribute the default RSS table uniformly over the rings of the
close NUMA node, instead of over all available channels.
This way we enforce a preference for close rings over far ones.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
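
A minimal sketch of the idea (hypothetical helper, not the mlx5e code): the
default indirection table is filled round-robin over the close-node rings
only:

    #include <linux/types.h>

    static void example_fill_default_indir(u32 *indir, int indir_sz,
                                           int num_close_node_rings)
    {
            int i;

            /* spread entries only across rings on the close NUMA node */
            for (i = 0; i < indir_sz; i++)
                    indir[i] = i % num_close_node_rings;
    }
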
8 years agonet/mlx5e: Allocate set of queue counters per netdev
Rana Shahout [Wed, 20 Apr 2016 19:02:10 +0000 (22:02 +0300)]
net/mlx5e: Allocate set of queue counters per netdev

Connect all netdev RQs to this set of queue counters.
Also, add an "rx_out_of_buffer" counter to ethtool,
which indicates RX packet drops due to lack of receive
buffers.

Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx5: Introduce device queue counters
Tariq Toukan [Wed, 20 Apr 2016 19:02:09 +0000 (22:02 +0300)]
net/mlx5: Introduce device queue counters

A queue counter can collect several statistics for one or more
hardware queues (QPs, RQs, etc.) that the counter is attached to.

For Ethernet it will provide an "out of buffer" counter, which
counts the packets that are dropped due to lack of software
buffers.

Here we add device commands to alloc/query/dealloc queue counters.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Rana Shahout <ranas@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'bcmsysport-napi-updates'
David S. Miller [Thu, 21 Apr 2016 19:06:05 +0000 (15:06 -0400)]
Merge branch 'bcmsysport-napi-updates'

Florian Fainelli says:

====================
net: bcmsysport: utilize newer NAPI APIs

These two patches are very analogous to what was already submitted for
BCMGENET and switch the SYSTEMPORT driver to using __napi_schedule_irqoff()
and napi_complete_done() for the RX NAPI context.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: bcmsysport: use napi_complete_done()
Florian Fainelli [Wed, 20 Apr 2016 18:37:09 +0000 (11:37 -0700)]
net: bcmsysport: use napi_complete_done()

By using napi_complete_done(), we allow fine tuning of
/sys/class/net/ethX/gro_flush_timeout for higher GRO aggregation
efficiency for a Gbit NIC.

Check commit 24d2e4a50737 ("tg3: use napi_complete_done()") for details.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
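
The canonical poll-side pattern, sketched with generic names (not the
bcmsysport code): the amount of work done is passed to napi_complete_done(),
which is what lets gro_flush_timeout defer the GRO flush:

    #include <linux/netdevice.h>

    static int example_rx_poll(struct napi_struct *napi, int budget)
    {
            int work_done = 0;

            /* ... process up to 'budget' RX packets, bumping work_done ... */

            if (work_done < budget) {
                    napi_complete_done(napi, work_done);
                    /* re-enable RX interrupts here */
            }

            return work_done;
    }
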
8 years agonet: bcmsysport: use __napi_schedule_irqoff()
Florian Fainelli [Wed, 20 Apr 2016 18:37:08 +0000 (11:37 -0700)]
net: bcmsysport: use __napi_schedule_irqoff()

Both bcm_sysport_tx_isr() and bcm_sysport_rx_isr() run in hard IRQ
context, so we do not need to block IRQs again.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
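
A sketch of the hard-IRQ side (generic names): interrupts are already off in
the handler, so __napi_schedule_irqoff() avoids the redundant
local_irq_save()/restore() that plain napi_schedule() performs:

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    static irqreturn_t example_rx_isr(int irq, void *dev_id)
    {
            struct napi_struct *napi = dev_id;

            /* hard-IRQ context: IRQs are already disabled here */
            if (napi_schedule_prep(napi))
                    __napi_schedule_irqoff(napi);

            return IRQ_HANDLED;
    }
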
8 years agoMerge branch 'mlx4-fixes'
David S. Miller [Thu, 21 Apr 2016 19:02:41 +0000 (15:02 -0400)]
Merge branch 'mlx4-fixes'

Or Gerlitz says:

====================
Mellanox 40G driver fixes for 4.6-rc

With the fix for the ARM bug still in the works, these are a
few other fixes for mlx4 we have ready to go.

Eran addressed the problematic/wrong reporting of dropped packets, Daniel
fixed some matters related to PPC EEH, and Jenny's patch makes sure
VFs can't change the port's pause settings.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet/mlx4_en: Split SW RX dropped counter per RX ring
Eran Ben Elisha [Wed, 20 Apr 2016 13:01:18 +0000 (16:01 +0300)]
net/mlx4_en: Split SW RX dropped counter per RX ring

Count SW packet drops per RX ring instead of in a global counter. This
will allow monitoring the number of RX drops per ring.

In addition, the SW rx_dropped counter was being overwritten by the HW
rx_dropped counter; sum both of them instead to show the accurate value.

Fixes: a3333b35da16 ('net/mlx4_en: Moderate ethtool callback to [...] ')
Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
Reported-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reported-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
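
A rough sketch of the accounting after the fix (hypothetical field names,
not the mlx4 structures): the per-ring SW drops are summed and added to the
HW counter rather than being overwritten by it:

    #include <linux/types.h>

    struct example_rx_ring {
            u64 dropped;            /* SW drops on this ring */
    };

    static u64 example_total_rx_dropped(const struct example_rx_ring *rings,
                                        int num_rings, u64 hw_rx_dropped)
    {
            u64 sum = hw_rx_dropped;
            int i;

            for (i = 0; i < num_rings; i++)
                    sum += rings[i].dropped;

            return sum;
    }
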
8 years agonet/mlx4_core: Don't allow VF to change global pause settings
Eugenia Emantayev [Wed, 20 Apr 2016 13:01:17 +0000 (16:01 +0300)]
net/mlx4_core: Don't allow VF to change global pause settings

Currently, changing global pause settings is done via the SET_PORT
command with input modifier GENERAL. This command is allowed
for each VF, since MTU setting is done via the same command.

Change the above to the following scheme: before passing the
request to the FW, the PF will check whether it was issued
by a slave. If so, don't change the global pause settings and warn;
otherwise, change to the requested value and store it for
further reference.

Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
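
A hedged sketch of the scheme described above (hypothetical names, not the
mlx4 SET_PORT wrapper): a request coming from a slave leaves the stored PF
pause configuration untouched, while a PF request updates and stores it:

    #include <linux/types.h>

    struct example_port_pause {
            bool tx_pause;
            bool rx_pause;          /* last values set by the PF */
    };

    static void example_set_pause(struct example_port_pause *port,
                                  bool from_slave, bool req_tx, bool req_rx)
    {
            if (from_slave) {
                    /* VFs may not change global pause settings; a warning
                     * would be logged here and the PF values kept */
                    return;
            }

            port->tx_pause = req_tx;        /* store for further reference */
            port->rx_pause = req_rx;
    }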