openwrt/staging/blogic.git
7 years agoigb: Add support for padding packet
Alexander Duyck [Tue, 7 Feb 2017 02:27:14 +0000 (18:27 -0800)]
igb: Add support for padding packet

With the size of the frame limited we can now write to an offset within the
buffer instead of having to write at the very start of the buffer.  The
advantage to this is that it allows us to leave padding room for things
like supporting XDP in the future.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
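
A rough sketch of the idea (helper names here are illustrative, not taken from
the patch): the frame is DMA'd to an offset inside the buffer, leaving headroom
free in front of the data.

    /* Illustrative only: reserve headroom so received data starts at an
     * offset within the buffer instead of at its very start. */
    #define IGB_SKB_PAD     (NET_SKB_PAD + NET_IP_ALIGN)

    static inline unsigned int igb_rx_offset(void)
    {
            return IGB_SKB_PAD;     /* previously the offset was effectively 0 */
    }

    /* When posting a buffer, the descriptor address points past the pad. */
    static inline __le64 igb_rx_desc_addr(dma_addr_t dma, unsigned int page_offset)
    {
            return cpu_to_le64(dma + page_offset + igb_rx_offset());
    }
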
7 years agoigb: Add support for using order 1 pages to receive large frames
Alexander Duyck [Tue, 7 Feb 2017 02:27:03 +0000 (18:27 -0800)]
igb: Add support for using order 1 pages to receive large frames

This patch adds support for using 3K buffers in order 1 pages the same way
we were using 2K buffers in 4K pages.  We are reserving 1K of room for now
to have space available for future headroom and tailroom when we enable
build_skb support.

One side effect of this patch is that we can end up using a larger buffer
when jumbo frames are enabled.  The impact shouldn't be too great, but it
could hurt small-packet performance for UDP workloads, as the truesize of
frames will be larger.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
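
As a rough illustration (helper names and the boolean switch are assumptions,
not lifted from the patch), the buffer size and page order could be picked
like this:

    /* Illustrative: 2K buffers carved out of order-0 (4K) pages by default,
     * 3K buffers carved out of order-1 (8K) pages for large frames, leaving
     * about 1K per buffer for future headroom/tailroom. */
    #define IGB_RXBUFFER_2048       2048
    #define IGB_RXBUFFER_3072       3072

    static inline unsigned int igb_rx_pg_order(bool large_frames)
    {
            return large_frames ? 1 : 0;    /* order-1 page is 8 KiB */
    }

    static inline unsigned int igb_rx_bufsz(bool large_frames)
    {
            return large_frames ? IGB_RXBUFFER_3072 : IGB_RXBUFFER_2048;
    }
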
7 years agoigb: Add support for ethtool private flag to allow use of legacy Rx
Alexander Duyck [Tue, 7 Feb 2017 02:26:52 +0000 (18:26 -0800)]
igb: Add support for ethtool private flag to allow use of legacy Rx

Since there are potential drawbacks to the new Rx allocation approach, I
thought it best to add a "chicken bit" so that we can turn the feature off
in the event that a problem is found.

It also provides a means of validating the legacy Rx path in the event that
we are forced to fall back.  At some point in the future when we are
convinced we don't need it anymore we might be able to drop the legacy-rx
flag.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
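
For illustration, an ethtool private flag of this kind is usually exposed as a
single named bit; the sketch below assumes the flag string is "legacy-rx" as in
the commit title, while the macro names are hypothetical:

    /* Illustrative "chicken bit" exposed as an ethtool private flag. */
    #define IGB_PRIV_FLAGS_LEGACY_RX        BIT(0)
    #define IGB_PRIV_FLAGS_STR_LEN          1

    static const char igb_priv_flags_strings[][ETH_GSTRING_LEN] = {
            "legacy-rx",
    };

Once wired up through the get_priv_flags/set_priv_flags ethtool ops, the flag
can be toggled at runtime with "ethtool --set-priv-flags <iface> legacy-rx on".
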
7 years agoigb: Use page_address offset from page instead of masking virtual address
Alexander Duyck [Tue, 7 Feb 2017 02:26:40 +0000 (18:26 -0800)]
igb: Use page_address offset from page instead of masking virtual address

Update the handling of page addresses so that we always refer to them using
a void pointer, and try to use the consistent name 'va' to indicate that we
are working with a virtual address.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoigb: Only sync size of expected frame in ethtool testing
Alexander Duyck [Tue, 7 Feb 2017 02:26:26 +0000 (18:26 -0800)]
igb: Only sync size of expected frame in ethtool testing

We only need to sync the size of the frame that is read to test.  We don't
need to sync the entire Rx buffer.  This way the testing is more consistent
with how we handle things in the receive path.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoigb: Limit maximum frame Rx based on MTU
Alexander Duyck [Tue, 7 Feb 2017 02:26:15 +0000 (18:26 -0800)]
igb: Limit maximum frame Rx based on MTU

In order to support the use of build_skb going forward, it will be necessary
to place a maximum limit on the amount of data we can receive when jumbo
frames are not enabled.  To do this I am adding a new upper limit for receive
based on the size of a 2K buffer minus padding.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoigb: Don't bother clearing Tx buffer_info in igb_clean_tx_ring
Alexander Duyck [Tue, 7 Feb 2017 02:26:02 +0000 (18:26 -0800)]
igb: Don't bother clearing Tx buffer_info in igb_clean_tx_ring

In the case of the Tx rings we need to only clear the Tx buffer_info when
we are resetting the rings.  Ideally we do this when we configure the ring
to bring it back up instead of when we are taking it down in order to avoid
dirtying pages we don't need to.

In addition we don't need to clear the Tx descriptor ring since we will
fully repopulate it when we begin transmitting frames and next_to_watch can
be cleared to prevent the ring from being cleaned beyond that point instead
of needing to touch anything in the Tx descriptor ring.

Finally with these changes we can avoid having to reset the skb member of
the Tx buffer_info structure in the cleanup path since the skb will always
be associated with the first buffer which has next_to_watch set.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoigb: Clear Rx buffer_info in configure instead of clean
Alexander Duyck [Tue, 7 Feb 2017 02:25:50 +0000 (18:25 -0800)]
igb: Clear Rx buffer_info in configure instead of clean

This change makes it so that instead of going through the entire ring on Rx
cleanup we only go through the region that was designated to be cleaned up
and stop when we reach the region where new allocations should start.

In addition we can avoid having to perform a memset on the Rx buffer_info
structures until we are about to start using the ring again.  By deferring
this we can avoid dirtying the cache any more than we have to which can
help to improve the time needed to bring the interface down and then back
up again in a reset or suspend/resume cycle.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoigb: Use length to determine if descriptor is done
Alexander Duyck [Tue, 7 Feb 2017 02:25:41 +0000 (18:25 -0800)]
igb: Use length to determine if descriptor is done

This change makes it so that we use the length of the packet instead of the
DD status bit to determine if a new descriptor is ready to be processed.
The obvious advantage is that it cuts down on reads, since we don't really
even need the DD bit if the size going from zero to a non-zero value is
enough to inform us that the packet has been completed.

In addition I have updated the code so that we only reset the Rx descriptor
length for descriptor zero when resetting a ring instead of having to do a
memset with 0 over the entire ring.  By doing this we can save some time on
initialization.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
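
A hedged sketch of the check being described (the descriptor fields follow the
igb advanced Rx descriptor layout; treat the helper itself as illustrative):

    /* Illustrative: a descriptor counts as "done" once its written-back
     * length is non-zero, so the DD status bit need not be read separately.
     * Pairs with only resetting the length of descriptor zero on ring init. */
    static bool igb_rx_desc_done(const union e1000_adv_rx_desc *rx_desc)
    {
            if (!rx_desc->wb.upper.length)
                    return false;   /* hardware has not written it back yet */

            /* Ensure later descriptor reads happen after the length check. */
            dma_rmb();
            return true;
    }
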
7 years agoigb: Add support for DMA_ATTR_WEAK_ORDERING
Alexander Duyck [Tue, 7 Feb 2017 02:25:26 +0000 (18:25 -0800)]
igb: Add support for DMA_ATTR_WEAK_ORDERING

Since we are already using DMA attributes in igb for Rx there is no reason
why we can't also apply DMA_ATTR_WEAK_ORDERING which is needed on some
platforms to improve performance.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
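
For reference, a hedged sketch of OR-ing the attribute into an Rx page mapping
(the exact attribute set and macro name are assumptions):

    #define IGB_RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

    /* Illustrative: map an Rx page with the weak-ordering hint; it is a
     * no-op where unsupported and a win on platforms that honour it. */
    static dma_addr_t igb_map_rx_page(struct device *dev, struct page *page)
    {
            return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                                      DMA_FROM_DEVICE, IGB_RX_DMA_ATTR);
    }
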
7 years agoliquidio: fix wrong information about link modes reported to ethtool
Manish Awasthi [Thu, 16 Mar 2017 23:16:17 +0000 (16:16 -0700)]
liquidio: fix wrong information about link modes reported to ethtool

Information reported to ethtool about link modes is wrong for 25G NIC.  Fix
it by checking for presence of 25G NIC, checking the link speed reported by
NIC firmware, and then assigning proper values to the
ethtool_link_ksettings struct.

Signed-off-by: Manish Awasthi <manish.awasthi@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'netvsc-small-changes'
David S. Miller [Fri, 17 Mar 2017 04:39:51 +0000 (21:39 -0700)]
Merge branch 'netvsc-small-changes'

Stephen Hemminger says:

====================
netvsc: small changes for net-next

One bugfix, and two non-code patches
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonetvsc: remove unused #define
stephen hemminger [Thu, 16 Mar 2017 23:12:39 +0000 (16:12 -0700)]
netvsc: remove unused #define

Not used anywhere.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonetvsc: add comments about callback's and NAPI
stephen hemminger [Thu, 16 Mar 2017 23:12:38 +0000 (16:12 -0700)]
netvsc: add comments about callback's and NAPI

Add some short description of how callback's and NAPI interoperate.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonetvsc: avoid race with callback
stephen hemminger [Thu, 16 Mar 2017 23:12:37 +0000 (16:12 -0700)]
netvsc: avoid race with callback

Change the argument to channel callback from the channel pointer
to the internal data structure containing per-channel info.
This avoids any possible races when callback happens during
initialization and makes IRQ code simpler.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'bpf-inline-lookups'
David S. Miller [Fri, 17 Mar 2017 03:44:12 +0000 (20:44 -0700)]
Merge branch 'bpf-inline-lookups'

Alexei Starovoitov says:

====================
bpf: inline bpf_map_lookup_elem()

bpf_map_lookup_elem() is one of the most frequently used helper functions.
Improve JITed program performance by inlining this helper.

bpf_map_type    before    after
hash            58M       74M
array           174M      280M

The values are number of lookups per second in ideal conditions
measured by micro-benchmark in patch 6.

The 'perf report' for HASH map type:
before:
    54.23%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
    14.24%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
     8.84%  map_perf_test  [kernel.kallsyms]  [k] htab_map_lookup_elem
     5.93%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
     2.30%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     1.49%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
    60.03%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
    18.07%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
     2.91%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     1.94%  map_perf_test  [kernel.kallsyms]  [k] _einittext
     1.90%  map_perf_test  [kernel.kallsyms]  [k] __audit_syscall_exit
     1.72%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

so the cost of htab_map_lookup_elem() and bpf_map_lookup_elem()
is gone after inlining.

'per-cpu' and 'lru' map types can be optimized similarly in the future.

Note that sparse will complain that bpf is addictive ;)
kernel/bpf/hashtab.c:438:19: sparse: subtraction of functions? Share your drugs
kernel/bpf/verifier.c:3342:38: sparse: subtraction of functions? Share your drugs
These are not new warnings, they just appear in new places.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agosamples/bpf: add map_lookup microbenchmark
Alexei Starovoitov [Thu, 16 Mar 2017 01:26:44 +0000 (18:26 -0700)]
samples/bpf: add map_lookup microbenchmark

$ map_perf_test 128
speed of HASH bpf_map_lookup_elem() in lookups per second
         w/o JIT    w/JIT
before   46M        58M
after    42M        74M

perf report
before:
    54.23%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
    14.24%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
     8.84%  map_perf_test  [kernel.kallsyms]  [k] htab_map_lookup_elem
     5.93%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
     2.30%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     1.49%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
    60.03%  map_perf_test  [kernel.kallsyms]  [k] __htab_map_lookup_elem
    18.07%  map_perf_test  [kernel.kallsyms]  [k] lookup_elem_raw
     2.91%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     1.94%  map_perf_test  [kernel.kallsyms]  [k] _einittext
     1.90%  map_perf_test  [kernel.kallsyms]  [k] __audit_syscall_exit
     1.72%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

Notice that bpf_map_lookup_elem() and htab_map_lookup_elem() are trivial
functions, yet they take a sizeable amount of cpu time.
htab_map_gen_lookup() removes bpf_map_lookup_elem() and converts
htab_map_lookup_elem() into three BPF insns, which causes the cpu time
for bpf_prog_da4fc6a3f41761a2() to increase slightly.

$ map_perf_test 256
speed of ARRAY bpf_map_lookup_elem() in lookups per second
         w/o JIT    w/JIT
before   97M        174M
after    64M        280M

before:
    37.33%  map_perf_test  [kernel.kallsyms]  [k] array_map_lookup_elem
    13.95%  map_perf_test  [kernel.kallsyms]  [k] bpf_map_lookup_elem
     6.54%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     4.57%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

after:
    32.86%  map_perf_test  [kernel.kallsyms]  [k] bpf_prog_da4fc6a3f41761a2
     6.54%  map_perf_test  [kernel.kallsyms]  [k] kprobe_ftrace_handler

array_map_gen_lookup() removes calls to array_map_lookup_elem()
and bpf_map_lookup_elem() and replaces them with 7 bpf insns.

The performance without JIT is slower, since executing extra insns
in the interpreter is slower than running native C code,
but with JIT the performance gains are obvious,
since native C->x86 code is replaced with fewer bpf->x86 instructions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobpf: inline htab_map_lookup_elem()
Alexei Starovoitov [Thu, 16 Mar 2017 01:26:43 +0000 (18:26 -0700)]
bpf: inline htab_map_lookup_elem()

Optimize:
bpf_call
  bpf_map_lookup_elem
    map->ops->map_lookup_elem
      htab_map_lookup_elem
        __htab_map_lookup_elem
into:
bpf_call
  __htab_map_lookup_elem

to improve performance of JITed programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobpf: add helper inlining infra and optimize map_array lookup
Alexei Starovoitov [Thu, 16 Mar 2017 01:26:42 +0000 (18:26 -0700)]
bpf: add helper inlining infra and optimize map_array lookup

Optimize bpf_call -> bpf_map_lookup_elem() -> array_map_lookup_elem()
into a sequence of bpf instructions.
When the JIT is on, this sequence of bpf instructions becomes a sequence
of native cpu instructions with significantly faster performance than an
indirect call plus two functions' prologues/epilogues.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
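
For context, the emitted instruction sequence is equivalent to the C body of
the array lookup; a simplified sketch of that logic (struct and field names as
in the kernel's bpf_array, the helper name itself is made up):

    /* Illustrative C equivalent of the inlined instructions: a bounds check
     * plus pointer arithmetic into the array's value area. */
    static void *array_lookup_inline_equiv(struct bpf_array *array, u32 index)
    {
            if (index >= array->map.max_entries)
                    return NULL;

            return array->value + array->elem_size * index;
    }
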
7 years agobpf: adjust insn_aux_data when patching insns
Alexei Starovoitov [Thu, 16 Mar 2017 01:26:41 +0000 (18:26 -0700)]
bpf: adjust insn_aux_data when patching insns

convert_ctx_accesses() replaces a single bpf instruction with a set of
instructions. Adjust the corresponding insn_aux_data while patching.
This is needed to make sure subsequent 'for(all insn)' loops
have matching insn and insn_aux_data.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobpf: refactor fixup_bpf_calls()
Alexei Starovoitov [Thu, 16 Mar 2017 01:26:40 +0000 (18:26 -0700)]
bpf: refactor fixup_bpf_calls()

Reduce the indentation and make it iterate over instructions similarly to
convert_ctx_accesses(). Also convert a hard BUG_ON into a soft verifier error.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobpf: move fixup_bpf_calls() function
Alexei Starovoitov [Thu, 16 Mar 2017 01:26:39 +0000 (18:26 -0700)]
bpf: move fixup_bpf_calls() function

No functional change.
Move fixup_bpf_calls() to verifier.c;
it is refactored in the next patch.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agotcp: remove tcp_tw_recycle
Soheil Hassas Yeganeh [Wed, 15 Mar 2017 20:30:46 +0000 (16:30 -0400)]
tcp: remove tcp_tw_recycle

tcp_tw_recycle was already broken for connections
behind NAT, since the per-destination timestamp is not
monotonically increasing for multiple machines behind
a single destination address.

After the randomization of TCP timestamp offsets
in commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets
for each connection), tcp_tw_recycle is broken for all
types of connections for the same reason: the timestamps
received from a single machine are no longer monotonically
increasing.

Remove tcp_tw_recycle, since it is not functional. Also, remove
the PAWSPassive SNMP counter since it is only used for
tcp_tw_recycle, and simplify tcp_v4_route_req and tcp_v6_route_req
since the strict argument is only set when tcp_tw_recycle is
enabled.

Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agotcp: remove per-destination timestamp cache
Soheil Hassas Yeganeh [Wed, 15 Mar 2017 20:30:45 +0000 (16:30 -0400)]
tcp: remove per-destination timestamp cache

Commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection)
randomizes TCP timestamps per connection. After this commit,
there is no guarantee that the timestamps received from the
same destination are monotonically increasing. As a result,
the per-destination timestamp cache in TCP metrics (i.e., tcpm_ts
in struct tcp_metrics_block) is broken and cannot be relied upon.

Remove the per-destination timestamp cache and all related code
paths.

Note that this cache was already broken for caching timestamps of
multiple machines behind a NAT sharing the same address.

Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'sunvnet-better-connection-management'
David S. Miller [Fri, 17 Mar 2017 03:29:55 +0000 (20:29 -0700)]
Merge branch 'sunvnet-better-connection-management'

Shannon Nelson says:

====================
sunvnet: better connection management

These patches fix some problems in the handling of carrier state
with the ldmvsw vswitch, remove an xoff misuse in sunvnet, and
add stats for debugging and tracking of point-to-point connections
between the ldom VMs.

v2:
 - added ldmvsw ndo_open to reset the LDC channel
 - updated copyrights
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agosunvnet: xoff not needed when removing port link
Shannon Nelson [Tue, 14 Mar 2017 17:24:43 +0000 (10:24 -0700)]
sunvnet: xoff not needed when removing port link

The sunvnet netdev is connected to the controlling ldom's vswitch
for network bridging.  However, for higher performance between ldoms,
there also is a channel between each client ldom.  These connections are
represented in the sunvnet driver by a queue for each ldom.  The driver
uses select_queue to tell the stack which queue to use by tracking the mac
addresses on the other end of each port.  When a connected ldom shuts down,
the driver receives an LDC_EVENT_RESET and the port is removed from the
driver, thus a queue with no ldom on the other end will never be selected
for Tx.

The driver was trying to reinforce the "don't use this queue" notion with
netif_tx_stop_queue() and netif_tx_wake_queue(), which really should only
be used to signal a Tx queue is full (aka XOFF).  This misuse of queue
state resulted in NETDEV WATCHDOG messages and lots of unnecessary calls
into the driver's tx_timeout handler.  Simply removing these takes care
of the problem.

Orabug: 25190537

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agosunvnet: count multicast packets
Shannon Nelson [Tue, 14 Mar 2017 17:24:42 +0000 (10:24 -0700)]
sunvnet: count multicast packets

Make sure multicast packets get counted in the device.

Orabug: 25190537

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agosunvnet: track port queues correctly
Shannon Nelson [Tue, 14 Mar 2017 17:24:41 +0000 (10:24 -0700)]
sunvnet: track port queues correctly

Track our used and unused queue indices correctly.  Otherwise, as ports
dropped out and returned, they all eventually ended up with the same
queue index.

Orabug: 25190537

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agosunvnet: add stats to track ldom to ldom packets and bytes
Shannon Nelson [Tue, 14 Mar 2017 17:24:40 +0000 (10:24 -0700)]
sunvnet: add stats to track ldom to ldom packets and bytes

In this driver, there is a "port" created for the connection to each of
the other ldoms; a netdev queue is mapped to each port, and they are
collected under a single netdev.  The generic netdev statistics show
us all the traffic in and out of our network device, but don't show
individual queue/port stats.  This patch breaks out the traffic counts
for the individual ports and gives us a little view into the state of
those connections.

Orabug: 25190537

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoldmvsw: better use of link up and down on ldom vswitch
Shannon Nelson [Tue, 14 Mar 2017 17:24:39 +0000 (10:24 -0700)]
ldmvsw: better use of link up and down on ldom vswitch

When an ldom VM is bound, the network vswitch infrastructure is set up for
it, but was being forced 'UP' by the userland switch configuration script.
When 'UP' but not actually connected to a running VM, the ipv6 neighbor
probes fail (not a horrible thing) and start cluttering up the kernel logs.
Funny thing: these are debug messages that never actually show up, but
we do see the net_ratelimited messages that say N callbacks were
suppressed.

This patch defers the netif_carrier_on() until an actual link has been
established with the VM, as indicated by receiving an LDC_EVENT_UP from
the underlying LDC protocol.  Similarly, we take the link down when we
see the LDC_EVENT_RESET.  Now when we see the ndo_open(), we reset the
link to get things talking again.

Orabug: 25525312

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobonding: add 802.3ad support for 25G speeds
Jarod Wilson [Tue, 14 Mar 2017 15:48:32 +0000 (11:48 -0400)]
bonding: add 802.3ad support for 25G speeds

Cut-n-paste enablement of 802.3ad bonding on 25G NICs, which currently
report 0 as their bandwidth.

CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: netdev@vger.kernel.org
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agotcp_westwood: fix tcp_westwood_info() style mistakes
chun Long [Tue, 14 Mar 2017 07:26:24 +0000 (15:26 +0800)]
tcp_westwood: fix tcp_westwood_info() style mistakes

Replace commas with semicolons in tcp_westwood_info().
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoliquidio: use meaningful names for IRQs
Rick Farrington [Mon, 13 Mar 2017 19:58:04 +0000 (12:58 -0700)]
liquidio: use meaningful names for IRQs

All IRQs owned by the PF and VF drivers share the same nondescript name
"octeon"; this makes it difficult to set up interrupt affinity.

Change the IRQ names to reflect their specific purpose:

    LiquidIO<id>-<func>-<type>-<queue pair num>

Examples:
    LiquidIO0-pf0-rxtx-3
    LiquidIO1-vf1-rxtx-0
    LiquidIO0-pf0-aux

We cannot use netdev->name for naming the IRQs because:

    1.  Early during init, the PF and VF drivers require interrupts to
        send/receive control data from the NIC firmware; so the PF and VF
        must request IRQs long before the netdev struct is registered.

    2.  The IRQ name can only be specified at the time it is requested.
        It cannot be changed after that.

Signed-off-by: Rick Farrington <ricardo.farrington@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Satanand Burla <satananda.burla@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
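
As a rough illustration of building names in that format (buffer and
identifier names are hypothetical):

    /* Illustrative only: produce names such as "LiquidIO0-pf0-rxtx-3"
     * or "LiquidIO0-pf0-aux" for the auxiliary interrupt. */
    static void lio_irq_name(char *buf, size_t len, unsigned int oct_id,
                             bool is_pf, unsigned int func, int q_no)
    {
            if (q_no < 0)
                    snprintf(buf, len, "LiquidIO%u-%s%u-aux",
                             oct_id, is_pf ? "pf" : "vf", func);
            else
                    snprintf(buf, len, "LiquidIO%u-%s%u-rxtx-%d",
                             oct_id, is_pf ? "pf" : "vf", func, q_no);
    }
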
7 years agoliquidio: remove/replace invalid code
Rick Farrington [Mon, 13 Mar 2017 19:07:32 +0000 (12:07 -0700)]
liquidio: remove/replace invalid code

Remove invalid call to dma_sync_single_for_cpu() because previous DMA
allocation was coherent--not streaming.  Remove code that references fields
in struct list_head; replace it with calls to list_empty() and
list_first_entry().  Also, add comment to clarify complicated if statement.

Signed-off-by: Rick Farrington <ricardo.farrington@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Derek Chickles <derek.chickles@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonetem: apply correct delay when rate throttling
Nik Unger [Mon, 13 Mar 2017 17:16:58 +0000 (10:16 -0700)]
netem: apply correct delay when rate throttling

I recently reported on the netem list that iperf network benchmarks
show unexpected results when a bandwidth throttling rate has been
configured for netem. Specifically:

1) The measured link bandwidth *increases* when a higher delay is added
2) The measured link bandwidth appears higher than the specified limit
3) The measured link bandwidth for the same very slow settings varies significantly across
  machines

The issue can be reproduced by using tc to configure netem with a
512kbit rate and various (none, 1us, 50ms, 100ms, 200ms) delays on a
veth pair between network namespaces, and then using iperf (or any
other network benchmarking tool) to test throughput. Complete detailed
instructions are in the original email chain here:
https://lists.linuxfoundation.org/pipermail/netem/2017-February/001672.html

There appear to be two underlying bugs causing these effects:

- The first issue causes long delays when the rate is slow and no
  delay is configured (e.g., "rate 512kbit"). This is because SKBs are
  not orphaned when no delay is configured, so orphaning does not
  occur until *after* the rate-induced delay has been applied. For
  this reason, adding a tiny delay (e.g., "rate 512kbit delay 1us")
  dramatically increases the measured bandwidth.

- The second issue is that rate-induced delays are not correctly
  applied, allowing SKB delays to occur in parallel. The intended
  approach is to compute the delay for an SKB and to add this delay to
  the end of the current queue. However, the code does not detect
  existing SKBs in the queue due to improperly testing sch->q.qlen,
  which is nonzero even when packets exist only in the
  rbtree. Consequently, new SKBs do not wait for the current queue to
  empty. When packet delays vary significantly (e.g., if packet sizes
  are different), then this also causes unintended reordering.

I modified the code to expect a delay (and orphan the SKB) when a rate
is configured. I also added some defensive tests that correctly find
the latest scheduled delivery time, even if it is (unexpectedly) for a
packet in sch->q. I have tested these changes on the latest kernel
(4.11.0-rc1+) and the iperf / ping test results are as expected.

Signed-off-by: Nik Unger <njunger@uwaterloo.ca>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
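
A heavily simplified sketch of the corrected scheduling rule (parameter names
are illustrative, not taken from the patch): the delivery time of a new packet
starts from whichever already-queued packet leaves last, whether it sits in
the rbtree or, unexpectedly, in sch->q.

    /* Illustrative: serialize rate-induced delays behind the current queue. */
    static u64 netem_next_send_time(u64 now, u64 rbtree_last_send,
                                    u64 schq_last_send, u64 rate_delay)
    {
            u64 last = now;

            if (rbtree_last_send > last)
                    last = rbtree_last_send;
            if (schq_last_send > last)      /* defensive check for sch->q */
                    last = schq_last_send;

            return last + rate_delay;
    }
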
7 years agoMerge branch 'sched-cleanups'
David S. Miller [Thu, 16 Mar 2017 19:02:15 +0000 (12:02 -0700)]
Merge branch 'sched-cleanups'

Or Gerlitz says:

====================
small set of sched cleanups

Just two cleanups -- but for the 2nd one I think we need an ack from
Cong Wang to make sure this isn't actually a bug report.

changes from V1:
  - addressed comment from Sergei to use 12 hex digits etc
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/sched: fq_codel: Avoid set-but-unused variable
Or Gerlitz [Thu, 16 Mar 2017 10:53:42 +0000 (12:53 +0200)]
net/sched: fq_codel: Avoid set-but-unused variable

The code introduced by commit 2ccccf5fb43f ("net_sched: update
hierarchical backlog too") only sets prev_backlog in fq_codel_dequeue()
but never uses it anywhere; remove that setting.

Cc: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/sched: act_ife: Staticfy find_decode_metaid()
Or Gerlitz [Thu, 16 Mar 2017 10:53:41 +0000 (12:53 +0200)]
net/sched: act_ife: Staticfy find_decode_metaid()

As it's used only in that file.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: ethernet: bgmac: Allow MAC address to be specified in DTB
Steve Lin [Thu, 16 Mar 2017 15:48:58 +0000 (11:48 -0400)]
net: ethernet: bgmac: Allow MAC address to be specified in DTB

Allow the BCMA version of the bgmac driver to obtain its MAC address
from the device tree.  If no MAC address is specified there, the
previous behavior (obtaining the MAC address from SPROM) is used.

Signed-off-by: Steve Lin <steven.lin1@broadcom.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Jon Mason <jon.mason@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
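
Roughly, the lookup order described is device tree first, SPROM second. A
hedged sketch (the surrounding driver context and the function name are
assumed):

    /* Illustrative: prefer a MAC address from the DT node, falling back to
     * the address provided by the BCMA SPROM (the previous behaviour). */
    static void bgmac_set_mac(struct net_device *ndev, struct device_node *np,
                              const u8 *sprom_mac)
    {
            const void *mac = np ? of_get_mac_address(np) : NULL;

            if (mac)
                    ether_addr_copy(ndev->dev_addr, mac);
            else
                    ether_addr_copy(ndev->dev_addr, sprom_mac);
    }
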
7 years agonet: ethernet: fs_enet: Remove useless includes
Christophe Leroy [Thu, 16 Mar 2017 09:18:04 +0000 (10:18 +0100)]
net: ethernet: fs_enet: Remove useless includes

CONFIG_8xx is being deprecated. Since the includes dependent on
CONFIG_8xx are useless, just drop them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoisdn: hardware: mISDN: Remove reference to CONFIG_8xx
Christophe Leroy [Thu, 16 Mar 2017 09:18:02 +0000 (10:18 +0100)]
isdn: hardware: mISDN: Remove reference to CONFIG_8xx

CONFIG_8xx is deprecated and should soon be removed in favor
of CONFIG_PPC_8xx.
Anyway, hfc_multi_8xx.h only uses 8xx I/O ports which are
linked to the CPM1 communication processor included in the 8xx
rather than the 8xx itself.

This patch therefore makes it dependent on CONFIG_CPM1 instead,
like several other drivers.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: mvneta: support suspend and resume
Jane Li [Thu, 16 Mar 2017 08:22:28 +0000 (16:22 +0800)]
net: mvneta: support suspend and resume

Add basic support for handling suspend and resume.

Signed-off-by: Jane Li <jiel@marvell.com>
Reviewed-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'mlxsw-vrf'
David S. Miller [Thu, 16 Mar 2017 17:18:35 +0000 (10:18 -0700)]
Merge branch 'mlxsw-vrf'

Jiri Pirko says:

====================
mlxsw: Enable VRF offload

Ido says:

Packets received from netdevs enslaved to different VRF devices are
forwarded using different FIB tables. In the Spectrum ASIC this is
achieved by binding different router interfaces (RIFs) to different
virtual routers (VRs). Each RIF represents an enslaved netdev and each
VR has its own FIB table according to which packets are forwarded.

The first three patches add a helper to check if a FIB rule is a
default rule and extend the FIB notification chain to include the rule's
info as part of the RULE_{ADD,DEL} events. This allows offloading
drivers to sanitize the rules they don't support and flush their tables.

The fourth patch introduces a small change in the VRF driver to allow
capable drivers to more easily offload VRFs.

Finally, the last patches gradually add support for VRFs in the mlxsw
driver. First, on top of port netdevs, stacked LAG and VLAN devices and
then on top of bridges.

Some limitations I would like to point out:

1) The old model where 'oif' / 'iif' rules were programmed for each L3
master device isn't supported. Upon insertion of these rules the driver
will flush its tables and forwarding will be done by the kernel instead.
It's inferior in every way to the single 'l3mdev' rule, so this shouldn't
be an issue.

2) Inter-VRF routes pointing to a VRF device aren't offloaded. Packets
hitting these routes will be forwarded by the kernel. Inter-VRF routes
pointing to netdevs enslaved to a different VRF are offloaded.

3) There's a small discrepancy between the kernel's datapath and the
device's. By default, packets forwarded by the kernel first do a lookup
in the local table and then in the VRF's table (assuming no match). In
the device, lookup is done only in the VRF's table, which is probably
the intended behavior. Changes in v2 allow the user to properly re-order the
default rules without triggering the abort mechanism.

Changes in v3:
* Remove 'l3mdev' from the matchall list, as it's related to the action
  and not the selector (David Ahern).
* Use container_of() instead of typecasting (David Ahern).
* Add David's Acked-by to the second patch.
* Add an helper in IPv4 code to check if rule is a default rule (David
  Ahern).

Changes in v2:
* Drop default rule indication and allow re-ordering of default rules
  (David Ahern).
* Remove ifdef around 'struct fib_rule_notifier_info' and drop redundant
  dependency on IP_MULTIPLE_TABLES from rocker and mlxsw.
* Add David's Acked-by to the fourth patch.
* Remove netif_is_vrf_master() and use netif_is_l3_master() instead
  (David Ahern).
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum_router: Don't abort on l3mdev rules
Ido Schimmel [Thu, 16 Mar 2017 08:08:20 +0000 (09:08 +0100)]
mlxsw: spectrum_router: Don't abort on l3mdev rules

Now that port netdevs can be enslaved to a VRF master, we need to make
sure the device's routing tables won't be flushed upon the insertion of
an l3mdev rule.

Note that we assume the notified l3mdev rule is a simple rule as used by
the VRF master. We don't check for the presence of other selectors such
as 'iif' and 'oif'.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum_router: Add support for VRFs on top of bridges
Ido Schimmel [Thu, 16 Mar 2017 08:08:19 +0000 (09:08 +0100)]
mlxsw: spectrum_router: Add support for VRFs on top of bridges

In a similar fashion to the previous patch, allow bridges and VLAN
devices on top of bridges to be enslaved to a VRF master device.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum_router: Add support for VRFs
Ido Schimmel [Thu, 16 Mar 2017 08:08:18 +0000 (09:08 +0100)]
mlxsw: spectrum_router: Add support for VRFs

Allow port netdevs, LAG and VLAN devices stacked on top of these to be
enslaved to a VRF master device.

Upon enslavement, create a router interface (RIF) for the enslaved
netdev and associate it with a virtual router (VR) based on the VRF's
table ID.

If a RIF already exists for the netdev (e.g., due to the existence of an
IP address), then it's deleted and a new one is created with the
appropriate VR binding.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum_router: Don't destroy RIF if L3 slave
Ido Schimmel [Thu, 16 Mar 2017 08:08:17 +0000 (09:08 +0100)]
mlxsw: spectrum_router: Don't destroy RIF if L3 slave

We usually destroy the netdev's router interface (RIF) when the last IP
address is removed from it.

However, we shouldn't do that if it's enslaved to an L3 master device.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum_router: Associate RIFs with correct VR
Ido Schimmel [Thu, 16 Mar 2017 08:08:16 +0000 (09:08 +0100)]
mlxsw: spectrum_router: Associate RIFs with correct VR

When a router interface (RIF) is created due to a netdev being enslaved
to a VRF master, then it should be associated with the appropriate
virtual router (VR) and not the default one.

If the netdev is a VRF slave, look up the VR based on the VRF's table ID.
Otherwise, default to the MAIN table.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: vrf: Set slave's private flag before linking
Ido Schimmel [Thu, 16 Mar 2017 08:08:15 +0000 (09:08 +0100)]
net: vrf: Set slave's private flag before linking

Allow listeners of the subsequent CHANGEUPPER notification to retrieve
the VRF's table ID by calling l3mdev_fib_table() with the slave netdev.
Without this change, the netdev won't be considered an L3 slave and the
function would return 0.

This is consistent with other master device such as bridge and bond that
set the slave's private flag before linking. It also makes
do_vrf_{add,del}_slave() symmetric.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoipv4: fib_rules: Dump FIB rules when registering FIB notifier
Ido Schimmel [Thu, 16 Mar 2017 08:08:14 +0000 (09:08 +0100)]
ipv4: fib_rules: Dump FIB rules when registering FIB notifier

In commit c3852ef7f2f8 ("ipv4: fib: Replay events when registering FIB
notifier") we dumped the FIB tables and replayed the events to the
passed notification block.

However, we merely sent a RULE_ADD notification in case custom rules
were in use. As explained in previous patches, this approach won't work
anymore. Instead, we should notify the caller about all the FIB rules
and let it act accordingly.

Upon registration to the FIB notification chain, replay a RULE_ADD
notification for each programmed FIB rule, custom or not. The integrity
of the dump is ensured by the mechanism introduced in the above
mentioned commit.

Prevent regressions by making sure current listeners correctly sanitize
the notified rules.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoipv4: fib_rules: Add notifier info to FIB rules notifications
Ido Schimmel [Thu, 16 Mar 2017 08:08:13 +0000 (09:08 +0100)]
ipv4: fib_rules: Add notifier info to FIB rules notifications

Whenever a FIB rule is added or removed, a notification is sent in the
FIB notification chain. However, listeners don't have a way to tell
which rule was added or removed.

This is problematic as we would like to give listeners the ability to
decide which action to execute based on the notified rule. Specifically,
offloading drivers should be able to determine if they support the
reflection of the notified FIB rule and flush their LPM tables in case
they don't.

Do that by adding a notifier info to these notifications and embed the
common FIB rule struct in it.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
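
The notifier info described above has roughly the following shape (a sketch;
the exact layout is a detail of the patch, and the struct name comes from the
series cover letter):

    /* Illustrative: common FIB notifier info extended so RULE_ADD/RULE_DEL
     * listeners can inspect the rule in question. */
    struct fib_rule_notifier_info {
            struct fib_notifier_info info;  /* must come first */
            struct fib_rule *rule;          /* the rule being added/removed */
    };
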
7 years agoipv4: fib_rules: Check if rule is a default rule
Ido Schimmel [Thu, 16 Mar 2017 08:08:12 +0000 (09:08 +0100)]
ipv4: fib_rules: Check if rule is a default rule

Currently, when non-default (custom) FIB rules are used, devices capable
of layer 3 offloading flush their tables and let the kernel do the
forwarding instead.

When these devices' drivers are loaded they register to the FIB
notification chain, which lets them know about the existence of any
custom FIB rules. This is done by sending a RULE_ADD notification based
on the value of 'net->ipv4.fib_has_custom_rules'.

This approach is problematic when VRF offload is taken into account, as
upon the creation of the first VRF netdev, an l3mdev rule is programmed
to direct skbs to the VRF's table.

Instead of merely reading the above value and sending a single RULE_ADD
notification, we should iterate over all the FIB rules and send a
detailed notification for each, thereby allowing offloading drivers to
sanitize the rules they don't support and potentially flush their
tables.

While l3mdev rules are uniquely marked, the default rules are not.
Therefore, when they are being notified they might invoke offloading
drivers to unnecessarily flush their tables.

Solve this by adding a helper to check if a FIB rule is a default rule.
Namely, its selector should match all packets and its action should
point to the local, main or default tables.

As noted by David Ahern, uniquely marking the default rules is
insufficient. When using VRFs, it's common to avoid false hits by moving
the rule for the local table to just before the main table:

Default configuration:
$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Common configuration with VRFs:
$ ip rule show
1000:   from all lookup [l3mdev-table]
32765:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
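
A hedged sketch of such a helper -- a match-all selector plus an action that
points at one of the three standard tables (the exact set of selector fields
checked in the real helper may differ):

    /* Illustrative: a rule is "default" if it matches every packet and
     * simply sends it to the local, main or default table. */
    static bool fib_rule_is_default_sketch(const struct fib_rule *rule)
    {
            if (rule->action != FR_ACT_TO_TBL)
                    return false;
            if (rule->iifindex || rule->oifindex || rule->mark)
                    return false;   /* carries a selector, so not default */

            return rule->table == RT_TABLE_LOCAL ||
                   rule->table == RT_TABLE_MAIN ||
                   rule->table == RT_TABLE_DEFAULT;
    }
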
7 years agor8152: simplify the arguments
hayeswang [Thu, 16 Mar 2017 06:32:22 +0000 (14:32 +0800)]
r8152: simplify the arguments

Replace &tp->napi with napi and tp->netdev with netdev.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'xgene-bug-fixes'
David S. Miller [Thu, 16 Mar 2017 04:52:53 +0000 (21:52 -0700)]
Merge branch 'xgene-bug-fixes'

Iyappan Subramanian says:

====================
drivers: net: xgene: Bug fixes and errata workarounds

This patch set addresses bug fixes and errata workarounds.
====================

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMAINTAINERS: Update X-Gene SoC ethernet maintainer
Iyappan Subramanian [Wed, 15 Mar 2017 20:27:21 +0000 (13:27 -0700)]
MAINTAINERS: Update X-Gene SoC ethernet maintainer

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agodrivers: net: xgene: Add workaround for errata 10GE_8/ENET_11
Iyappan Subramanian [Wed, 15 Mar 2017 20:27:20 +0000 (13:27 -0700)]
drivers: net: xgene: Add workaround for errata 10GE_8/ENET_11

This patch implements workaround for errata 10GE_8 and ENET_11:
"HW reports length error for valid 64 byte frames with len <46 bytes"
by recovering them from error.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Toan Le <toanle@apm.com>
Tested-by: Fushen Chen <fchen@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agodrivers: net: xgene: Add workaround for errata 10GE_1
Quan Nguyen [Wed, 15 Mar 2017 20:27:19 +0000 (13:27 -0700)]
drivers: net: xgene: Add workaround for errata 10GE_1

This patch implements workaround for errata 10GE_1:
10Gb Ethernet port FIFO threshold default values are incorrect.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Toan Le <toanle@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Tested-by: Fushen Chen <fchen@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agodrivers: net: xgene: Fix Rx checksum validation logic
Iyappan Subramanian [Wed, 15 Mar 2017 20:27:18 +0000 (13:27 -0700)]
drivers: net: xgene: Fix Rx checksum validation logic

This patch fixes Rx checksum validation logic and
adds NETIF_F_RXCSUM flag.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agodrivers: net: xgene: Fix wrong logical operation
Quan Nguyen [Wed, 15 Mar 2017 20:27:17 +0000 (13:27 -0700)]
drivers: net: xgene: Fix wrong logical operation

This patch fixes the wrong logical OR operation by changing it to
a bit-wise OR operation.

Fixes: 3bb502f83080 ("drivers: net: xgene: fix statistics counters race condition")
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agodrivers: net: xgene: Fix hardware checksum setting
Quan Nguyen [Wed, 15 Mar 2017 20:27:16 +0000 (13:27 -0700)]
drivers: net: xgene: Fix hardware checksum setting

This patch fixes the hardware checksum settings by properly programming
the classifier.  Otherwise, packets may be received with checksum errors
on the X-Gene1 SoC.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agodrivers: net: phy: xgene: Fix mdio write
Quan Nguyen [Wed, 15 Mar 2017 20:27:15 +0000 (13:27 -0700)]
drivers: net: phy: xgene: Fix mdio write

This patch fixes a typo in the argument to xgene_enet_wr_mdio_csr().

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'dsa-check-out-of-range-ageing-time'
David S. Miller [Wed, 15 Mar 2017 22:34:14 +0000 (15:34 -0700)]
Merge branch 'dsa-check-out-of-range-ageing-time'

Vivien Didelot says:

====================
net: dsa: check out-of-range ageing time

The ageing time limits supported by DSA drivers vary depending on the
switch model. If a driver returns -ERANGE for out-of-range values, the
switchdev commit phase will fail with the following stacktrace:

    # brctl setageing br0 4
    [ 8530.082179] WARNING: CPU: 0 PID: 910 at net/switchdev/switchdev.c:291 switchdev_port_attr_set_now+0xbc/0xc0
    [ 8530.090679] br0: Commit of attribute (id=5) failed.
    [ 8530.094256] Modules linked in:
    [ 8530.096032] CPU: 0 PID: 910 Comm: kworker/0:4 Tainted: G        W       4.10.0 #361
    [ 8530.102412] Hardware name: Freescale Vybrid VF5xx/VF6xx (Device Tree)
    [ 8530.107571] Workqueue: events switchdev_deferred_process_work
    [ 8530.112039] Backtrace:
    [ 8530.113224] [<8010ca34>] (dump_backtrace) from [<8010cd3c>] (show_stack+0x20/0x24)
    [ 8530.119521]  r6:00000000 r5:80834da0 r4:80ca7e48 r3:8120ca3c
    [ 8530.123908] [<8010cd1c>] (show_stack) from [<8037ad40>] (dump_stack+0x24/0x28)
    [ 8530.129873] [<8037ad1c>] (dump_stack) from [<80118de4>] (__warn+0xf4/0x10c)
    [ 8530.135545] [<80118cf0>] (__warn) from [<80118e44>] (warn_slowpath_fmt+0x48/0x50)
    [ 8530.141760]  r9:00000000 r8:81252bec r7:80f19d90 r6:9dc3c000 r5:80ca7e7c r4:80834de8
    [ 8530.148235] [<80118e00>] (warn_slowpath_fmt) from [<80670b20>] (switchdev_port_attr_set_now+0xbc/0xc0)
    [ 8530.156240]  r3:9dc3c000 r2:80834de8
    [ 8530.158539]  r4:ffffffde
    [ 8530.159788] [<80670a64>] (switchdev_port_attr_set_now) from [<80670b44>] (switchdev_port_attr_set_deferred+0x20/0x6c)
    [ 8530.169118]  r7:806705a8 r6:9dc3c000 r5:80f19d90 r4:80f19d80
    [ 8530.173500] [<80670b24>] (switchdev_port_attr_set_deferred) from [<80670580>] (switchdev_deferred_process+0x50/0xe8)
    [ 8530.182742]  r6:80ca6000 r5:81252bec r4:80f19d80 r3:80670b24
    [ 8530.187115] [<80670530>] (switchdev_deferred_process) from [<80670930>] (switchdev_deferred_process_work+0x1c/0x24)
    [ 8530.196277]  r8:00000000 r7:9ffdc100 r6:8120ad6c r5:9ddefc00 r4:81252bf4 r3:9de343c0
    [ 8530.202756] [<80670914>] (switchdev_deferred_process_work) from [<8012f770>] (process_one_work+0x120/0x3b0)
    [ 8530.211231] [<8012f650>] (process_one_work) from [<8012fa70>] (worker_thread+0x70/0x534)
    [ 8530.218046]  r10:9ddefc00 r9:8120ad6c r8:80ca6038 r7:8120ad80 r6:81211f80 r5:9ddefc18
    [ 8530.224579]  r4:8120ad6c
    [ 8530.225830] [<8012fa00>] (worker_thread) from [<80135640>] (kthread+0x114/0x144)
    [ 8530.231955]  r10:9f4e9e94 r9:9de1fe58 r8:8012fa00 r7:9ddefc00 r6:9de1fdc0 r5:00000000
    [ 8530.238497]  r4:9de1fe40
    [ 8530.239750] [<8013552c>] (kthread) from [<80108cd8>] (ret_from_fork+0x14/0x3c)
    [ 8530.245679]  r10:00000000 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:8013552c
    [ 8530.252234]  r4:9de1fdc0 r3:80ca6000
    [ 8530.254512] ---[ end trace 87475cc71b80ef73 ]---
    [ 8530.257852] br0: failed (err=-34) to set attribute (id=5)

This patchset fixes this by adding ageing_time_min and ageing_time_max
fields to the dsa_switch structure, which can optionally be set by a DSA
driver.

If provided, the DSA core will check for out-of-range values in the
SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME prepare phase and return -ERANGE
accordingly.

Finally set these limits in the mv88e6xxx driver.
====================

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: dsa: mv88e6xxx: specify ageing time limits
Vivien Didelot [Wed, 15 Mar 2017 19:53:50 +0000 (15:53 -0400)]
net: dsa: mv88e6xxx: specify ageing time limits

Now that DSA has ageing time limits, specify them when registering a
switch so that out-of-range values are handled correctly by the core.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reported-by: Jason Cobham <jcobham@questertangent.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: dsa: check out-of-range ageing time value
Vivien Didelot [Wed, 15 Mar 2017 19:53:49 +0000 (15:53 -0400)]
net: dsa: check out-of-range ageing time value

If a DSA switch driver cannot program an ageing time value due to it
being out-of-range, switchdev will raise a stack trace before failing.

To fix this, add ageing_time_min and ageing_time_max members to the
dsa_switch in order for the switch drivers to optionally specify their
supported ageing time limits.

The DSA core will now check for provided ageing time limits and return
-ERANGE from the switchdev prepare phase if the value is out-of-range.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
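
A minimal sketch of that prepare-phase check, using the ageing_time_min/max
fields the patch describes (treating 0 as "no limit" is an assumption here):

    /* Illustrative: reject out-of-range ageing times during the switchdev
     * prepare phase so the commit phase never sees them. */
    static int dsa_ageing_time_check(const struct dsa_switch *ds,
                                     unsigned int ageing_time)
    {
            if (ds->ageing_time_min && ageing_time < ds->ageing_time_min)
                    return -ERANGE;
            if (ds->ageing_time_max && ageing_time > ds->ageing_time_max)
                    return -ERANGE;

            return 0;       /* within the switch's supported range */
    }
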
7 years agonet: dsa: dsa_fastest_ageing_time return unsigned
Vivien Didelot [Wed, 15 Mar 2017 19:53:48 +0000 (15:53 -0400)]
net: dsa: dsa_fastest_ageing_time return unsigned

The ageing time is defined as unsigned int, so make
dsa_fastest_ageing_time return an unsigned int instead of int.

Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'mqprio-offload-more-info'
David S. Miller [Wed, 15 Mar 2017 22:20:28 +0000 (15:20 -0700)]
Merge branch 'mqprio-offload-more-info'

Alexander Duyck says:

====================
Add support for passing more information in mqprio offload

This patch series lays the groundwork for future work to allow us to make
full use of the mqprio options when offloading them to hardware.

Currently, when we specify the hardware offload for mqprio, the queue
configuration is completely ignored and the hardware is only notified of
the total number of traffic classes.  This leads to multiple issues, one
specific issue being that you can pass any queue configuration you want
and it is totally ignored by the hardware.

What I am planning to do is add support for "hw" values in the
configuration greater than 1.  So for example we might have one mode of
mqprio offload that uses 1 and only offloads the TC counts like we
currently do.  Then we might look at adding an option 2 which would factor
in the TCs and the queue count information. This way we can select between
the type of offload we actually want and existing drivers that don't
support this can just fall back to their legacy configuration.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomqprio: Modify mqprio to pass user parameters via ndo_setup_tc.
Amritha Nambiar [Wed, 15 Mar 2017 17:39:25 +0000 (10:39 -0700)]
mqprio: Modify mqprio to pass user parameters via ndo_setup_tc.

The configurable priority to traffic class mapping and the user specified
queue ranges are used to configure the traffic class, overriding the
hardware defaults when the 'hw' option is set to 0. However, when the 'hw'
option is non-zero, the hardware QOS defaults are used.

This patch makes it so that we can pass the data the user provided to
ndo_setup_tc. This allows us to pull in the queue configuration if the
user requested it as well as any additional hardware offload type
requested by using a value other than 1 for the hw value.

Finally, it also provides a means for the device driver to return the level
supported for the offload type via the qopt->hw value. Previously we were
always assuming the value to be 1; in the future, values beyond just 1
may be supported.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomqprio: Change handling of hw u8 to allow for multiple hardware offload modes
Alexander Duyck [Wed, 15 Mar 2017 17:39:18 +0000 (10:39 -0700)]
mqprio: Change handling of hw u8 to allow for multiple hardware offload modes

This patch is meant to allow for support of multiple hardware offload types
for a single device. There is currently no bounds checking for the hw
member of the mqprio_qopt structure.  This results in us being able to pass
values from 1 to 255, with all being treated the same.  On retrieving the
value, it is returned as 1 for anything 1 or greater that was set.

With this change we are currently adding limited bounds checking by
defining an enum and using those values to limit the reported hardware
offloads.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
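
A hedged sketch of that kind of bounds checking -- an enum naming the known
offload levels so anything else can be rejected (values beyond the two named
in the text are assumptions):

    /* Illustrative: named offload levels instead of an opaque 1..255 byte. */
    enum tc_mqprio_hw_offload_sketch {
            TC_MQPRIO_HW_NONE,      /* software only */
            TC_MQPRIO_HW_TCS,       /* offload the TC count, as today */
            TC_MQPRIO_HW_MAX = TC_MQPRIO_HW_TCS,
    };

    static int mqprio_check_hw(u8 hw)
    {
            return hw > TC_MQPRIO_HW_MAX ? -EINVAL : 0;
    }
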
7 years agonetxen_nic: remove redundant check if retries is zero
Colin Ian King [Wed, 15 Mar 2017 15:31:58 +0000 (15:31 +0000)]
netxen_nic: remove redundant check if retries is zero

At the end of the timeout loop, retries will always be zero, so
the check for zero is redundant; remove it.  Also replace
printk with pr_err as recommended by checkpatch.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'stmmac-dma-ops-multiqueue'
David S. Miller [Wed, 15 Mar 2017 21:44:33 +0000 (14:44 -0700)]
Merge branch 'stmmac-dma-ops-multiqueue'

Joao Pinto says:

====================
net: stmmac: prepare dma operations for multiple queues

As agreed with David Miller, this patch-set is the second of 3 to enable
multiple queues in stmmac.

This second one concentrates on dma operations adding functionalities as:
a) DMA Operation Mode configuration per channel and done in the multiple
queues configuration function
b) DMA IRQ enable and Disable by channel
c) DMA start and stop by channel
d) RX and TX ring length configuration by channel
e) RX and TX set tail pointer by channel
f) DMA Channel initialization broken into Channel common, RX and TX
initialization
g) TSO being configured for all available channels
h) DMA interrupt treatment by channel
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: stmmac interrupt treatment prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:55 +0000 (11:04 +0000)]
net: stmmac: stmmac interrupt treatment prepared for multiple queues

This patch prepares the main ISR for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: tso init prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:54 +0000 (11:04 +0000)]
net: stmmac: tso init prepared for multiple queues

This patch configures TSO for all available tx queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: dma channel init prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:53 +0000 (11:04 +0000)]
net: stmmac: dma channel init prepared for multiple queues

This patch prepares the DMA initialization process for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: prepare rx/tx set tail function for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:52 +0000 (11:04 +0000)]
net: stmmac: prepare rx/tx set tail function for multiple queues

This patch prepares RX and TX set tail functions for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: rx and tx ring length prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:51 +0000 (11:04 +0000)]
net: stmmac: rx and tx ring length prepared for multiple queues

This patch prepares tx and rx ring length configuration for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: rx watchdog config prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:50 +0000 (11:04 +0000)]
net: stmmac: rx watchdog config prepared for multiple queues

This patch adds rx watchdog configuration for all queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: prepare dma interrupt treatment for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:49 +0000 (11:04 +0000)]
net: stmmac: prepare dma interrupt treatment for multiple queues

This patch prepares DMA interrupts treatment for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: prepare stmmac_tx_err for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:48 +0000 (11:04 +0000)]
net: stmmac: prepare stmmac_tx_err for multiple queues

This patch prepares stmmac_tx_err for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: rx/tx dma start/stop prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:47 +0000 (11:04 +0000)]
net: stmmac: rx/tx dma start/stop prepared for multiple queues

This patch prepares the RX/TX DMA stop/start process for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: enable/disable dma irq prepared for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:46 +0000 (11:04 +0000)]
net: stmmac: enable/disable dma irq prepared for multiple queues

This patch prepares the DMA IRQ enable/disable process for multiple queues.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: stmmac: prepare dma op mode config for multiple queues
Joao Pinto [Wed, 15 Mar 2017 11:04:45 +0000 (11:04 +0000)]
net: stmmac: prepare dma op mode config for multiple queues

This patch prepares DMA Operation Mode configuration for multiple queues.
The work consisted of breaking the DMA Operation Mode configuration
function into RX and TX scope and adapting its use in stmmac_main.

Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Wed, 15 Mar 2017 20:00:28 +0000 (13:00 -0700)]
Merge branch '40GbE' of git://git./linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2017-03-15

This series contains updates to i40e and i40evf only.

Aaron fixes an issue on X710 devices where simultaneous read accesses
were interfering with each other, by making sure all devices acquire the
NVM lock before reads.

Shannon adds Wake on LAN support for X722 devices and cleans up the
opcodes so that they are in numerical order.

Mitch adds a client interface to the VF driver, in preparation for the
upcoming RDMA-capable hardware (and client driver). He also cleans up the
client interface in the PF driver, since it was originally over-engineered
to handle multiple clients on multiple netdevs; that did not happen, and
now there will be one client per driver, so the "KISS" (Keep It Simple,
Stupid) principle is applied to the i40e client interface. He also bumps
the number of MAC filters an untrusted VF can create.

Jake fixes an issue where a recent refactor of queue pairs accidentally
added all remaining vectors to num_lan_msix, which can adversely
affect performance.

Lihong fixes an ethtool issue with X722 devices where "-e" would error
out since its EEPROM has a scope limit at offset 0x5B9FFF, so the EEPROM
length is set to the scope limit.  Also fixed is an issue where RSS
offloading only worked on PF0.

Filip cleans up and clarifies a code comment so there is no confusion
about the MAC/VLAN filter initialization routine.

Alex adds support for DMA_ATTR_SKIP_CPU_SYNC and DMA_ATTR_WEAK_ORDERING,
which improves performance on architectures that implement either one.

Harshitha cleans up confusion between flags disabled due to hardware
limitations and features disabled by the user, renaming auto_disable_flags
to hw_disabled_flags to avoid the confusion.

v2: Merged patches #1 and #4 from the first version to make patch #3 in
    this series, based on feedback from David Miller.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
David S. Miller [Wed, 15 Mar 2017 18:59:10 +0000 (11:59 -0700)]
Merge git://git./linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/broadcom/genet/bcmgenet.c
net/core/sock.c

Conflicts were overlapping changes in bcmgenet and the
lockdep handling of sockets.

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Wed, 15 Mar 2017 17:44:19 +0000 (10:44 -0700)]
Merge tag 'scsi-fixes' of git://git./linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "This is a rather large set of fixes. The bulk are for lpfc correcting
  a lot of issues in the new NVME driver code which just went in in the
  merge window.

  The others are:

   - fix a hang in the vmware paravirt driver caused by incorrect
     handling of the new MSI vector allocation

   - long standing bug in storvsc, which recent block changes turned
     from being a harmless annoyance into a hang

   - yet more fallout (in mpt3sas) from the changes to device blocking

  The remainder are small fixes and updates"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (34 commits)
  scsi: lpfc: Add shutdown method for kexec
  scsi: storvsc: Workaround for virtual DVD SCSI version
  scsi: lpfc: revise version number to 11.2.0.10
  scsi: lpfc: code cleanups in NVME initiator discovery
  scsi: lpfc: code cleanups in NVME initiator base
  scsi: lpfc: correct rdp diag portnames
  scsi: lpfc: remove dead sli3 nvme code
  scsi: lpfc: correct double print
  scsi: lpfc: Rename LPFC_MAX_EQ_DELAY to LPFC_MAX_EQ_DELAY_EQID_CNT
  scsi: lpfc: Rework lpfc Kconfig for NVME options
  scsi: lpfc: add transport eh_timed_out reference
  scsi: lpfc: Fix eh_deadline setting for sli3 adapters.
  scsi: lpfc: add NVME exchange aborts
  scsi: lpfc: Fix nvme allocation bug on failed nvme_fc_register_localport
  scsi: lpfc: Fix IO submission if WQ is full
  scsi: lpfc: Fix NVME CMD IU byte swapped word 1 problem
  scsi: lpfc: Fix RCTL value on NVME LS request and response
  scsi: lpfc: Fix crash during Hardware error recovery on SLI3 adapters
  scsi: lpfc: fix missing spin_unlock on sql_list_lock
  scsi: lpfc: don't dereference dma_buf->iocbq before null check
  ...

7 years agoMerge tag 'gfs2-4.11-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Wed, 15 Mar 2017 16:33:15 +0000 (09:33 -0700)]
Merge tag 'gfs2-4.11-rc3.fixes' of git://git./linux/kernel/git/gfs2/linux-gfs2

Pull gfs2 fix from Bob Peterson:
 "This is an emergency patch for 4.11-rc3

  The GFS2 developers uncovered a really nasty problem that can lead to
  random corruption and kernel panic, much like the last one. Andreas
  Gruenbacher wrote a simple one-line patch to fix the problem."

* tag 'gfs2-4.11-rc3.fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: Avoid alignment hole in struct lm_lockname

7 years agoMerge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Linus Torvalds [Wed, 15 Mar 2017 16:26:04 +0000 (09:26 -0700)]
Merge branch 'linus' of git://git./linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:

 - self-test failure of crc32c on powerpc

 - regressions of ecb(aes) when used with xts/lrw in s5p-sss

 - a number of bugs in the omap RNG driver

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: s5p-sss - Fix spinlock recursion on LRW(AES)
  hwrng: omap - Do not access INTMASK_REG on EIP76
  hwrng: omap - use devm_clk_get() instead of of_clk_get()
  hwrng: omap - write registers after enabling the clock
  crypto: s5p-sss - Fix completing crypto request in IRQ handler
  crypto: powerpc - Fix initialisation of crc32c context

7 years agogfs2: Avoid alignment hole in struct lm_lockname
Andreas Gruenbacher [Mon, 6 Mar 2017 17:58:42 +0000 (12:58 -0500)]
gfs2: Avoid alignment hole in struct lm_lockname

Commit 88ffbf3e03 switches to using rhashtables for glocks, hashing over
the entire struct lm_lockname instead of its individual fields.  On some
architectures, struct lm_lockname contains a hole of uninitialized
memory due to alignment rules, which now leads to incorrect hash values.
Get rid of that hole.
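
To see how such a hole arises, consider an illustrative layout (not the
actual lm_lockname definition):

    /* On a 64-bit build the compiler inserts 4 bytes of padding after
     * "type" so the pointer member is 8-byte aligned. rhashtable
     * hashes the raw bytes of the key, so uninitialized padding makes
     * logically equal keys hash differently.
     */
    struct example_lockname {
            u64   number;   /* bytes  0..7  */
            u32   type;     /* bytes  8..11 */
                            /* bytes 12..15: padding, never written */
            void *sbd;      /* bytes 16..23 */
    };

    /* Generic remedies (the actual gfs2 one-liner may differ): zero
     * the key before filling it, e.g. "struct example_lockname key = {};",
     * or rearrange/size the members so no interior hole remains.
     */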

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
CC: <stable@vger.kernel.org> #v4.3+
7 years agoi40e: rename auto_disable_flags to hw_disabled_flags
Harshitha Ramamurthy [Fri, 3 Feb 2017 18:57:42 +0000 (10:57 -0800)]
i40e: rename auto_disable_flags to hw_disabled_flags

A previous commit introduced a field that tracks the features
that are disabled due to HW resource limitations, as opposed to the
features disabled by the user. This patch changes the name of the field
to make it more readable, since it might get confusing when looking at
code containing both the flags field and the auto_disable_flags field
together.

Change-ID: Idcc9888659698f6fe3ccff17c8c3f09b5026f708
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e/i40evf: Change version from 1.6.27 to 2.1.7
Bimmy Pujari [Mon, 30 Jan 2017 20:29:37 +0000 (12:29 -0800)]
i40e/i40evf: Change version from 1.6.27 to 2.1.7

Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: Allow untrusted VFs to have more filters
Mitch Williams [Mon, 30 Jan 2017 20:29:36 +0000 (12:29 -0800)]
i40e: Allow untrusted VFs to have more filters

Our original filter limit of 8 was based on behavior that we saw from
Linux VMs. Now we're running Other Operating Systems under KVM and we
see that they commonly use more MAC filters. Since it seems weird to
require people to enable trusted VFs just to boot their OS, bump the
number of filters allowed by default.

Change-ID: I76b2dcb2ad6017e39231ad3096c3fb6f065eef5e
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e/i40evf: Add support for mapping pages with DMA attributes
Alexander Duyck [Mon, 30 Jan 2017 20:29:35 +0000 (12:29 -0800)]
i40e/i40evf: Add support for mapping pages with DMA attributes

This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and
DMA_ATTR_WEAK_ORDERING. By enabling both of these for the Rx path we
are able to see performance improvements on architectures that implement
either one. With DMA_ATTR_SKIP_CPU_SYNC, page mapping and unmapping only
have to sync what is actually being used instead of the entire buffer.
In addition, the weak ordering attribute enables a performance improvement
for architectures that can associate a memory ordering with a DMA buffer,
such as SPARC.
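
A minimal sketch of mapping an Rx page with both attributes set (the
surrounding driver plumbing is omitted and the helper name is
hypothetical):

    #include <linux/dma-mapping.h>

    /* With DMA_ATTR_SKIP_CPU_SYNC the driver must later sync only the
     * region it actually hands to the stack, e.g. via
     * dma_sync_single_range_for_cpu(), rather than the whole buffer
     * at map/unmap time.
     */
    static dma_addr_t example_map_rx_page(struct device *dev,
                                          struct page *page)
    {
            return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                                      DMA_FROM_DEVICE,
                                      DMA_ATTR_SKIP_CPU_SYNC |
                                      DMA_ATTR_WEAK_ORDERING);
    }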

Change-ID: If176824e8231c5b24b8a5d55b339a6026738fc75
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: Clarify steps in MAC/VLAN filters initialization routine
Filip Sadowski [Mon, 30 Jan 2017 20:29:34 +0000 (12:29 -0800)]
i40e: Clarify steps in MAC/VLAN filters initialization routine

This patch clarifies the reason for removing the automatically
firmware-generated filter and explicitly adding a filter which
accepts frames with any VLAN id.

Change-ID: Iabf180b6d61c4d8a36d3bcf8457c377a6f2aca0e
Signed-off-by: Filip Sadowski <filip.sadowski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: fix RSS queues only operating on PF0
Lihong Yang [Mon, 30 Jan 2017 20:29:33 +0000 (12:29 -0800)]
i40e: fix RSS queues only operating on PF0

This patch fixes the issue that RSS offloading only works on PF0 by
writing the hash keys for the VFs directly to the registers instead
of using the admin queue command to do so.

Change-ID: Ia02cda7dbaa23def342e8786097a2c03db6f580b
Signed-off-by: Lihong Yang <lihong.yang@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: fix ethtool to get EEPROM data from X722 interface
Lihong Yang [Mon, 30 Jan 2017 20:29:32 +0000 (12:29 -0800)]
i40e: fix ethtool to get EEPROM data from X722 interface

Currently ethtool -e will error out with an X722 interface,
as its EEPROM has a scope limit at offset 0x5B9FFF.
This patch fixes the issue by setting the EEPROM length to
the scope limit, avoiding NVM read failures beyond that point.
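
In ethtool terms, the shape of such a fix is roughly the following
(the constant is the scope limit quoted above; the helper name is
illustrative, and the real driver keys this off the detected MAC type):

    /* Report a capped EEPROM length for parts whose NVM cannot be read
     * past the scope limit, so "ethtool -e" stops at a readable offset.
     */
    #define EXAMPLE_EEPROM_SCOPE_LIMIT      0x5B9FFF

    static int example_get_eeprom_len(struct net_device *netdev)
    {
            return EXAMPLE_EEPROM_SCOPE_LIMIT + 1;  /* bytes readable */
    }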

Change-ID: I0b7d4dd6c7f2a57cace438af5dffa0f44c229372
Signed-off-by: Lihong Yang <lihong.yang@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: don't add more vectors to num_lan_msix than number of CPUs
Jacob Keller [Tue, 24 Jan 2017 18:24:01 +0000 (10:24 -0800)]
i40e: don't add more vectors to num_lan_msix than number of CPUs

This is a solution to avoid adding too many queues to num_lan_msix.
A recent refactor of queue pairs accidentally added all remaining
vectors to num_lan_msix, which can adversely affect performance by
enabling more queues than there are CPU cores.

This patch removes the old calculation and replaces it with a simple
algorithm (a rough sketch follows the list):

1) add queue pairs up to num_online_cpus(), but capped at half of total
   vectors
2) then add alternative features such as Flow Director and similar
3) finally, add the remaining vectors back to queue pairs, but capped
   such that the total number of queue pairs does not exceed
   num_online_cpus().
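
A rough sketch of that policy (illustrative, not the driver's actual
code; "vectors" is the total MSI-X budget and "other" the vectors
reserved for additional features):

    #include <linux/cpumask.h>
    #include <linux/kernel.h>

    static unsigned int example_calc_lan_msix(unsigned int vectors,
                                              unsigned int other)
    {
            unsigned int cpus = num_online_cpus();
            unsigned int lan, leftover;

            /* 1) queue pairs up to the CPU count, capped at half the vectors */
            lan = min(cpus, vectors / 2);

            /* 2) whatever remains after the other features */
            leftover = (vectors > lan + other) ? vectors - lan - other : 0;

            /* 3) hand the remainder back to queue pairs, never exceeding
             *    the number of online CPUs
             */
            lan += min(leftover, cpus - lan);

            return lan;
    }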

Change-ID: I668abf67d5011a1248866daba8885f4ff00cb8d9
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: KISS the client interface
Mitch Williams [Tue, 24 Jan 2017 18:24:00 +0000 (10:24 -0800)]
i40e: KISS the client interface

(KISS is Keep It Simple, Stupid. Or is it?)

The client interface is vastly overengineered for what it needs to do.
It was originally designed to support multiple clients on multiple
netdevs, possibly even with multiple drivers. None of this happened,
and now we know that there will only ever be one client for i40e
(i40iw) and one for i40evf (i40iwvf). So, time for some KISS. Since
i40e and i40evf are a Dynasty, we'll simplify this one to match the
VF interface.

First, be a Destroyer and remove all of the lists and locks required
to support multiple clients. Keep one static around to keep track of
one client, and track the client instances for each netdev in the
driver's pf (or adapter) struct. Now it's Almost Human.

Since we already know the client type is iWarp, get rid of any checks
for this. Same for VSI type - it's always going to be the same type,
so it's just a Parasite.

While we're at it, fix up some comments. This makes the function
headers actually match the functions.

These changes reduce code complexity, simplify maintenance,
squash some lurking timing bugs, and allow us to Rock and Roll All
Nite.

Change-ID: I1ea79948ad73b8685272451440a34507f9a9012e
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40evf: add client interface
Mitch Williams [Tue, 24 Jan 2017 18:23:59 +0000 (10:23 -0800)]
i40evf: add client interface

In preparation for upcoming RDMA-capable hardware, add a client
interface to the VF driver. This is a slightly-simplified version
of the PF client interface, with the names changed to protect the
innocent.

Due to the nature of the VF<->PF interactions, the client interface
sometimes needs to call back into itself to pass messages. Because
of this, we can't use coarse-grained locking like the PF's client
interface does. Instead, we handle all client interactions
in a separate thread so the watchdog can still run and process
virtual channel messages.

Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Linus Torvalds [Wed, 15 Mar 2017 04:31:23 +0000 (21:31 -0700)]
Merge git://git./linux/kernel/git/davem/net

Pull networking fixes from David Miller:

 1) Ensure that mtu is at least IPV6_MIN_MTU in ipv6 VTI tunnel driver,
    from Steffen Klassert.

 2) Fix crashes when user tries to get_next_key on an LPM bpf map, from
    Alexei Starovoitov.

 3) Fix detection of VLAN filtering feature for bnx2x VF devices, from
    Michal Schmidt.

 4) We can get a divide by zero when TCP sockets are morphed into
    listening state, fix from Eric Dumazet.

 5) Fix socket refcounting bugs in skb_complete_wifi_ack() and
    skb_complete_tx_timestamp(). From Eric Dumazet.

 6) Use after free in dccp_feat_activate_values(), also from Eric
    Dumazet.

 7) Like bonding, team needs to use ETH_MAX_MTU as netdev->max_mtu, from
    Jarod Wilson.

 8) Fix use after free in vrf_xmit(), from David Ahern.

 9) Don't do UDP Fragmentation Offload on IPComp ipsec packets, from
    Alexey Kodanev.

10) Properly check napi_complete_done() return value in order to decide
    whether to re-enable IRQs or not in amd-xgbe driver, from Thomas
    Lendacky.

11) Fix double free of hwmon device in marvell phy driver, from Andrew
    Lunn.

12) Don't crash on malformed netlink attributes in act_connmark, from
    Etienne Noss.

13) Don't remove routes with a higher metric in ipv6 ECMP route replace,
    from Sabrina Dubroca.

14) Don't write into a cloned SKB in ipv6 fragmentation handling, from
    Florian Westphal.

15) Fix routing redirect races in dccp and tcp; basically the ICMP
    handler can't modify the socket's cached route if it's locked by the
    user at that moment. From Jon Maxwell.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (108 commits)
  qed: Enable iSCSI Out-of-Order
  qed: Correct out-of-bound access in OOO history
  qed: Fix interrupt flags on Rx LL2
  qed: Free previous connections when releasing iSCSI
  qed: Fix mapping leak on LL2 rx flow
  qed: Prevent creation of too-big u32-chains
  qed: Align CIDs according to DORQ requirement
  mlxsw: reg: Fix SPVMLR max record count
  mlxsw: reg: Fix SPVM max record count
  net: Resend IGMP memberships upon peer notification.
  dccp: fix memory leak during tear-down of unsuccessful connection request
  tun: fix premature POLLOUT notification on tun devices
  dccp/tcp: fix routing redirect race
  ucc/hdlc: fix two little issue
  vxlan: fix ovs support
  net: use net->count to check whether a netns is alive or not
  bridge: drop netfilter fake rtable unconditionally
  ipv6: avoid write to a possibly cloned skb
  net: wimax/i2400m: fix NULL-deref at probe
  isdn/gigaset: fix NULL-deref at probe
  ...

7 years agoi40e: fix up recent proxy and wol bits for X722_SUPPORT
Shannon Nelson [Tue, 24 Jan 2017 18:23:58 +0000 (10:23 -0800)]
i40e: fix up recent proxy and wol bits for X722_SUPPORT

Some opcodes are added and reordered to be in numerical order with the
rest of the opcodes.
This patch adds admin queue structs to support the Wake on LAN feature
for X722.

Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
7 years agoi40e: Acquire NVM lock before reads on all devices
Aaron Salter [Fri, 2 Dec 2016 20:33:02 +0000 (12:33 -0800)]
i40e: Acquire NVM lock before reads on all devices

Acquire the NVM lock before reads on all devices.  Previously, the lock
was only used for X722 and later.  This fixes an issue where simultaneous
X710 NVM accesses were interfering with each other.
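
The pattern is roughly the following (a minimal sketch with hypothetical
helper names, not the driver's exact functions):

    /* Serialize NVM reads on every device generation, not only X722+,
     * by taking the NVM resource lock around the read.
     */
    static int example_locked_nvm_read(struct example_hw *hw,
                                       u16 offset, u16 *data)
    {
            int ret;

            ret = example_acquire_nvm(hw);          /* hypothetical */
            if (ret)
                    return ret;

            ret = example_read_nvm_word(hw, offset, data);  /* hypothetical */

            example_release_nvm(hw);
            return ret;
    }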

Change-ID: If570bb7acf958cef58725ec2a2011cead6f80638
Signed-off-by: Aaron Salter <aaron.k.salter@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>