openwrt/staging/blogic.git
Lawrence Brakmo [Fri, 20 Oct 2017 18:05:43 +0000 (11:05 -0700)]
bpf: create samples/bpf/tcp_bpf.readme

Readme file explaining how to create a cgroup-v2 cgroup and attach one
of the tcp_*_kern.o socket_ops BPF programs to it.
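
For illustration, a minimal sketch of doing the attach step programmatically,
assuming a mounted cgroup-v2 hierarchy and the bpf_prog_attach() wrapper from
tools/lib/bpf (the cgroup path and program fd are placeholders; the readme
itself walks through the samples' own loader):

    #include <fcntl.h>
    #include <bpf/bpf.h>    /* bpf_prog_attach(); header location may vary */

    int attach_sockops(int prog_fd)
    {
            int cg_fd = open("/sys/fs/cgroup/unified/test",
                             O_DIRECTORY | O_RDONLY);

            if (cg_fd < 0)
                    return -1;
            /* run the sock_ops program for every socket in this cgroup */
            return bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_SOCK_OPS, 0);
    }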

Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lawrence Brakmo [Fri, 20 Oct 2017 18:05:42 +0000 (11:05 -0700)]
bpf: sample BPF_SOCKET_OPS_BASE_RTT program

Sample socket_ops BPF program to test the BPF helper function
bpf_getsockopt and the new socket_ops op BPF_SOCK_OPS_BASE_RTT.

The program provides a base RTT of 80us when the calling flow is
within a DC (as determined by the IPv6 prefix) and the congestion
algorithm is "nv".

Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lawrence Brakmo [Fri, 20 Oct 2017 18:05:41 +0000 (11:05 -0700)]
bpf: Add BPF_SOCKET_OPS_BASE_RTT support to tcp_nv

TCP_NV will try to get the base RTT from a socket_ops BPF program if one
is loaded. NV will then use the base RTT to bound its min RTT (its
notion of the base RTT). It uses the base RTT as an upper bound and 80%
of the base RTT as its lower bound.

In other words, NV will consider filtered RTTs larger than base RTT as a
sign of congestion. As a result, there is no minRTT inflation when there
is a lot of congestion. For example, in a DC where the RTTs are less
than 40us when there is no congestion, a base RTT value of 80us improves
the performance of NV. The difference between the uncongested RTT and
the base RTT provided represents how much queueing we are willing to
have (in practice it can be higher).
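
As a rough sketch of the bounding described above (function, struct and
field names here are illustrative, not the exact tcp_nv code):

    /* Illustrative only; not the exact tcp_nv code or field names. */
    #include <linux/kernel.h>
    #include <net/tcp.h>

    struct tcpnv_like { u32 nv_min_rtt; };  /* stand-in for the real CC state */

    static void nv_bound_min_rtt(struct sock *sk, struct tcpnv_like *ca,
                                 u32 measured_rtt_us)
    {
            int base = tcp_call_bpf(sk, BPF_SOCK_OPS_BASE_RTT); /* <= 0: none */

            if (base > 0)
                    /* keep the min RTT estimate within [80% of base, base] */
                    ca->nv_min_rtt = clamp_t(u32, measured_rtt_us,
                                             base * 4 / 5, base);
            else if (measured_rtt_us < ca->nv_min_rtt)
                    ca->nv_min_rtt = measured_rtt_us;       /* old behaviour */
    }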

NV has been tuned to reduce congestion when there are many flows, at the
cost of one flow not achieving full bandwidth utilization. When a
reasonable base RTT is provided, one NV flow can now utilize the
full bandwidth. In addition, the performance is also improved when there
are many flows.

In the following examples the NV results are using a kernel with this
patch set (i.e. both NV results are using the new nv_loss_dec_factor).

With one host sending to another host and only one flow, the
goodputs are:
  Cubic: 9.3 Gbps, NV: 5.5 Gbps, NV (baseRTT=80us): 9.2 Gbps

With 2 hosts sending to one host (1 flow per host), the goodput per flow
is:
  Cubic: 4.6 Gbps, NV: 4.5 Gbps, NV (baseRTT=80us): 4.6 Gbps

But the RTTs seen by a ping process at the sender are:
  Cubic: 3.3ms,  NV: 97us,  NV (baseRTT=80us): 146us

With a lot of flows, things look even better for NV with baseRTT. Here we
have 3 hosts sending to one host. Each sending host has 6 flows: 1
stream, 4x1MB RPC, 1x10KB RPC. Cubic, NV and NV with baseRTT all fully
utilize the available bandwidth. However, the distribution of
bandwidth among the flows is very different. For the 10KB RPC flow:
  Cubic: 27Mbps, NV: 111Mbps, NV (baseRTT=80us): 222Mbps

The 99% latencies for the 10KB flows are:
  Cubic: 26ms,  NV: 1ms,  NV (baseRTT=80us): 500us

The RTTs seen by a ping process at the senders are:
  Cubic: 3.2ms,  NV: 720us,  NV (baseRTT=80us): 330us

Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lawrence Brakmo [Fri, 20 Oct 2017 18:05:40 +0000 (11:05 -0700)]
bpf: Adding helper function bpf_getsockops

Adding support for helper function bpf_getsockops to socket_ops BPF
programs. This patch only supports TCP_CONGESTION.
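
A hedged sketch of using it from a socket_ops program (the program-side
helper name, as far as I can tell, is bpf_getsockopt(); the SOL_TCP and
TCP_CONGESTION constants and the bpf_helpers.h declaration are assumed
to be available in the build environment):

    #include <uapi/linux/bpf.h>
    #include "bpf_helpers.h"
    #define SOL_TCP         6       /* IPPROTO_TCP */
    #define TCP_CONGESTION  13

    SEC("sockops")
    int check_cong(struct bpf_sock_ops *skops)
    {
            char cong[20];

            /* returns 0 on success; only TCP_CONGESTION is supported here */
            if (!bpf_getsockopt(skops, SOL_TCP, TCP_CONGESTION,
                                cong, sizeof(cong)))
                    skops->reply = (cong[0] == 'n' && cong[1] == 'v');
            return 1;
    }
    char _license[] SEC("license") = "GPL";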

Signed-off-by: Vlad Vysotsky <vlad@cs.ucla.edu>
Acked-by: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lawrence Brakmo [Fri, 20 Oct 2017 18:05:39 +0000 (11:05 -0700)]
bpf: add support for BPF_SOCK_OPS_BASE_RTT

A congestion control algorithm can make a call to the BPF socket_ops
program to request the base RTT. The base RTT can be congestion control
dependent and is meant to represent a congestion threshold such that
RTTs above it indicate congestion. This is especially useful for flows
within a DC where the base RTT is easy to obtain.

Being provided a base RTT solves a basic problem in RTT based congestion
avoidance algorithms (such as Vegas, NV and BBR). Although it is easy
to get the base RTT when the network is not congested, it is very
difficult to do when it is very congested. Newer connections get an
inflated value of the base RTT, leading to unfairness (newer flows with a
larger base RTT get more bandwidth). As a result, RTT based congestion
avoidance algorithms tend to update their base RTTs to improve fairness.
In very congested networks this can lead to base RTT inflation, reducing
the ability of these RTT based congestion control algorithms to prevent
congestion.

Note that in my experiments with TCP-NV, the base RTT provided can be
much larger than the actual hardware RTT. For example, experimenting
with hosts within a rack where the hardware RTT is 16-20us, I've used
base RTTs up to 150us. The effect of using a larger base RTT is that the
congestion avoidance algorithm will allow more queueing. When there are
only a few flows the main effect is larger measured RTTs and RPC
latencies due to the increased queueing. When there are a lot of flows,
a larger base RTT can lead to more congestion and more packet drops.
For this case, where the hardware RTT is 20us, a base RTT of 80us
produces good results.

This patch only introduces BPF_SOCK_OPS_BASE_RTT; a later patch in this
set adds support for using it in TCP-NV. Further study and testing are
needed before support can be added to other delay-based congestion
avoidance algorithms.

Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
Acked-by: Alexei Starovoitov <ast@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pieter Jansen van Vuuren [Fri, 20 Oct 2017 17:49:52 +0000 (19:49 +0200)]
nfp: use struct fields for 8 bit-wide access

Use direct struct field access rather than PREP_FIELD()
macros to manipulate the jump ID and length, both of which
are exactly 8 bits wide. This simplifies the code somewhat.

Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Fri, 20 Oct 2017 17:37:52 +0000 (12:37 -0500)]
net: x25: mark expected switch fall-throughs

In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
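
The pattern being marked looks roughly like this (generic illustration,
not the x25 code itself; names are placeholders):

    /* Generic illustration, not the x25 code itself. */
    enum frame { REQUEST, CONFIRMATION, OTHER };

    static int handle(enum frame f)
    {
            int handled = 0;

            switch (f) {
            case REQUEST:
                    handled++;
                    /* fall through */  /* tells -Wimplicit-fallthrough it is intended */
            case CONFIRMATION:
                    handled++;
                    break;
            default:
                    break;
            }
            return handled;
    }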

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Fri, 20 Oct 2017 17:05:30 +0000 (12:05 -0500)]
net: af_unix: mark expected switch fall-through

In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stefano Brivio [Fri, 20 Oct 2017 11:31:36 +0000 (13:31 +0200)]
geneve: Get rid of is_all_zero(), streamline is_tnl_info_zero()

No need to re-invent memchr_inv() with !is_all_zero(). While at
it, replace conditional and return clauses with a single return
clause in is_tnl_info_zero().
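
The replacement pattern, as a sketch (the wrapper name below is
illustrative): memchr_inv() returns NULL when all n bytes equal the
given value, so checking for "all zero" becomes:

    #include <linux/string.h>
    #include <linux/types.h>

    static bool opts_all_zero(const void *opts, size_t len)
    {
            return !memchr_inv(opts, 0, len);   /* was: is_all_zero(opts, len) */
    }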

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 22 Oct 2017 01:41:30 +0000 (02:41 +0100)]
Merge branch 'dsa-lan9303-Add-fdb-mdb-methods'

Egil Hjelmeland says:

====================
net: dsa: lan9303: Add fdb/mdb methods

This series adds support for accessing and managing the lan9303 ALR
(Address Logic Resolution).

The first patch adds low level functions for accessing the ALR, along
with port_fast_age and port_fdb_dump methods.

The second patch adds functions for managing ALR entries, along with the
remaining fdb/mdb methods.

Note that to complete STP support, a special ALR entry with the STP eth
address must be added too. This must be addressed later.

Comments welcome!

Changes v2 -> v3:
 - Whitespace polishing. Removed some "section" comments.
 - Prefixed ALR constants with LAN9303_ for consistency.
 - Patch 2: lan9303_port_fast_age() wraps the "port" into a struct for passing
   as context to alr_loop_cb_del_port_learned. Safer in the event of a type change.
 - Patch 2: Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>

Changes v1 -> v2:
 - Patch 2: Removed question comment
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Egil Hjelmeland [Fri, 20 Oct 2017 10:19:10 +0000 (12:19 +0200)]
net: dsa: lan9303: Add fdb/mdb manipulation

Add functions for managing the lan9303 ALR (Address Logic
Resolution).

Implement DSA methods: port_fdb_add, port_fdb_del, port_mdb_prepare,
port_mdb_add and port_mdb_del.

Since the lan9303 does not offer reading back a specific ALR entry, the driver
caches all static entries in a flat table.

Signed-off-by: Egil Hjelmeland <privat@egil-hjelmeland.no>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Egil Hjelmeland [Fri, 20 Oct 2017 10:19:09 +0000 (12:19 +0200)]
net: dsa: lan9303: Add port_fast_age and port_fdb_dump methods

Add DSA method port_fast_age as a step to STP support.

Add low level functions for accessing the lan9303 ALR (Address Logic
Resolution).

Add DSA method port_fdb_dump.

Signed-off-by: Egil Hjelmeland <privat@egil-hjelmeland.no>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Fri, 20 Oct 2017 09:21:32 +0000 (11:21 +0200)]
tipc: refactor tipc_sk_timeout() function

The function tipc_sk_timeout() is more complex than necessary, and
even seems to contain an undetected bug. At one of the occurrences
where we renew the timer we just arm it with (HZ / 20) instead
of (jiffies + HZ / 20).

In this commit we clean up the function.
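
The gist of the fix, as a sketch (the wrapper name is illustrative):
sk_reset_timer() expects an absolute expiry time in jiffies.

    #include <net/sock.h>

    static void tsk_rearm(struct sock *sk)
    {
            /* buggy:   sk_reset_timer(sk, &sk->sk_timer, HZ / 20);        */
            /* correct: absolute expiry, i.e. jiffies + the probe interval */
            sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ / 20);
    }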

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 22 Oct 2017 01:22:40 +0000 (02:22 +0100)]
Merge branch 'net-driver-refcont_t'

Elena Reshetova says:

====================
networking drivers refcount_t conversions

Note: these are the last patches related to networking that perform
conversion of refcounters from atomic_t to refcount_t.
In contrast to the core network refcounter conversions that
were merged earlier, these are much more straightforward ones.

This series, for various networking drivers, replaces atomic_t reference
counters with the new refcount_t type and API (see include/linux/refcount.h).
By doing this we prevent intentional or accidental
underflows or overflows that can lead to use-after-free vulnerabilities.

The patches are fully independent and can be cherry-picked separately.
Patches are based on top of net-next.
If there are no objections to the patches, please merge them via the respective trees.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:49 +0000 (10:23 +0300)]
drivers, connector: convert cn_callback_entry.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable cn_callback_entry.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.
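
The typical shape of such a conversion (struct and helper names below are
illustrative, not the connector code itself):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct entry {
            refcount_t refcnt;                      /* was: atomic_t refcnt; */
    };

    static void entry_get(struct entry *e)
    {
            refcount_inc(&e->refcnt);               /* was: atomic_inc() */
    }

    static void entry_put(struct entry *e)
    {
            if (refcount_dec_and_test(&e->refcnt))  /* was: atomic_dec_and_test() */
                    kfree(e);
    }

    /* init: refcount_set(&e->refcnt, 1);   was: atomic_set(&e->refcnt, 1); */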

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:48 +0000 (10:23 +0300)]
drivers, net, ppp: convert syncppp.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable syncppp.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:47 +0000 (10:23 +0300)]
drivers, net, ppp: convert ppp_file.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable ppp_file.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:46 +0000 (10:23 +0300)]
drivers, net, ppp: convert asyncppp.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable asyncppp.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:45 +0000 (10:23 +0300)]
drivers, net: convert masces_tx_sa.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable masces_tx_sa.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:44 +0000 (10:23 +0300)]
drivers, net: convert masces_rx_sc.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable masces_rx_sc.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:43 +0000 (10:23 +0300)]
drivers, net: convert masces_rx_sa.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable masces_rx_sa.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:42 +0000 (10:23 +0300)]
drivers, net, hamradio: convert sixpack.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable sixpack.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:41 +0000 (10:23 +0300)]
drivers, net, mlx5: convert fs_node.refcount from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable fs_node.refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:40 +0000 (10:23 +0300)]
drivers, net, mlx5: convert mlx5_cq.refcount from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable mlx5_cq.refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:39 +0000 (10:23 +0300)]
drivers, net, mlx4: convert mlx4_srq.refcount from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable mlx4_srq.refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:38 +0000 (10:23 +0300)]
drivers, net, mlx4: convert mlx4_qp.refcount from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable mlx4_qp.refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:37 +0000 (10:23 +0300)]
drivers, net, mlx4: convert mlx4_cq.refcount from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable mlx4_cq.refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:36 +0000 (10:23 +0300)]
drivers, net, ethernet: convert mtk_eth.dma_refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable mtk_eth.dma_refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Fri, 20 Oct 2017 07:23:35 +0000 (10:23 +0300)]
drivers, net, ethernet: convert clip_entry.refcnt from atomic_t to refcount_t

atomic_t variables are currently used to implement reference
counters with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable clip_entry.refcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 22 Oct 2017 01:16:26 +0000 (02:16 +0100)]
Merge branch 'hns3-loopback-selftest'

Yunsheng Lin says:

====================
Add mac loopback selftest support in hns3 driver

This patchset refactors the skb receiving and transmitting functions
before adding mac loopback selftest support in the hns3 driver.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 20 Oct 2017 02:19:22 +0000 (10:19 +0800)]
net: hns3: Add mac loopback selftest support in hns3 driver

This patch adds mac loopback selftest support for ethtool cmd
by checking if a transmitted packet can be received correctly
when mac loopback is enabled.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yunsheng Lin [Fri, 20 Oct 2017 02:19:21 +0000 (10:19 +0800)]
net: hns3: Refactor the skb receiving and transmitting function

This patch refactors the skb receiving and transmitting functions
and exports them in order to support ethtool's mac loopback
selftest.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 22 Oct 2017 01:11:33 +0000 (02:11 +0100)]
Merge branch 'bpftool-add-a-version-command-and-fix-several-items'

Jakub Kicinski says:

====================
tools: bpftool: add a "version" command, and fix several items

Quentin says:

The first seven patches of this series bring several minor fixes to
bpftool. Please see individual commit logs for details.

The last patch adds a "version" command to bpftool, which is in fact the
version of the kernel from which it was compiled.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:26 +0000 (15:46 -0700)]
tools: bpftool: add a command to display bpftool version

This command can be used to print the version of the tool, which is in
fact the Linux version taken from usr/include/linux/version.h.

Example usage:

    $ bpftool version
    bpftool v4.14.0

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:25 +0000 (15:46 -0700)]
tools: bpftool: show that `opcodes` or `file FILE` should be exclusive

For the `bpftool prog dump { jited | xlated } ...` command, adding
the `opcodes` keyword (to request opcodes to be printed) will have no effect
if `file FILE` (to write binary output to FILE) is provided.

The manual page and the help message to be displayed in the terminal
should reflect that, and indicate that these options should be mutually
exclusive.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:24 +0000 (15:46 -0700)]
tools: bpftool: print all relevant byte opcodes for "load double word"

The eBPF instruction for loading double words (8 bytes) into a
register needs an 8-byte-long "immediate" field, and thus occupies twice
the space of other instructions. bpftool was aware of this and would
increment the instruction counter only once on meeting such an
instruction, but it would only print the first four bytes of the
immediate value to load. Make it able to dump the whole 16-byte-long
double instruction instead (as `llvm-objdump -d <program>` would).
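
For reference, a sketch of why 16 bytes are involved: a BPF_LD | BPF_IMM |
BPF_DW instruction spans two struct bpf_insn slots, the second (pseudo)
slot carrying the upper 32 bits of the immediate, so a dumper has to
combine both (illustrative code, not bpftool's):

    #include <stdio.h>
    #include <linux/bpf.h>                  /* struct bpf_insn */

    static void dump_ld_imm64(const struct bpf_insn *insn)
    {
            unsigned long long imm64 = (unsigned int)insn[0].imm |
                                       ((unsigned long long)insn[1].imm << 32);

            printf("r%d = 0x%llx  (2 x 8-byte instruction)\n",
                   insn[0].dst_reg, imm64);
    }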

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:23 +0000 (15:46 -0700)]
tools: bpftool: print only one error message on byte parsing failure

Make error messages more consistent. Specifically, when bpftool fails at
parsing map key bytes, make it print a single error message to stderr
and return from the function, instead of (always) printing a second
error message afterwards.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:22 +0000 (15:46 -0700)]
tools: bpftool: add `bpftool prog help` as real command i.r.t exit code

Make error messages and return codes more consistent. Specifically, make
`bpftool prog help` a real command, instead of printing usage by default
for a non-recognized "help" command. Output is the same, but this makes
bpftool return with a success value instead of an error.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:21 +0000 (15:46 -0700)]
tools: bpftool: use err() instead of info() if there are too many insns

Make error messages and return codes more consistent. Specifically,
replace the use of the info() macro with err() when too many eBPF
instructions are received to be dumped, given that bpftool returns with
a non-zero exit value in that case.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:20 +0000 (15:46 -0700)]
tools: bpftool: fix return value when all eBPF programs have been shown

Change the program to have a more consistent return code. Specifically,
do not make bpftool return an error code simply because it reaches the
end of the list of the eBPF programs to show.

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Monnet [Thu, 19 Oct 2017 22:46:19 +0000 (15:46 -0700)]
tools: bpftool: add pointer to file argument to print_hex()

Make print_hex() able to print to any file instead of standard output
only, and rename it to fprint_hex(). The function can now be called with
the info() macro, for example, without splitting the output between
standard and error outputs.
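
The idea, as a rough sketch (not the actual bpftool signature):

    #include <stdio.h>

    static void fprint_hex(FILE *f, const void *buf, unsigned int n,
                           const char *sep)
    {
            const unsigned char *p = buf;
            unsigned int i;

            for (i = 0; i < n; i++)
                    fprintf(f, "%s%02x", i ? sep : "", p[i]);
    }

    /* e.g. fprint_hex(stdout, key, key_size, " "); or stderr for errors */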

Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Thu, 19 Oct 2017 21:28:24 +0000 (16:28 -0500)]
net: sched: mark expected switch fall-throughs

In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Thu, 19 Oct 2017 18:03:51 +0000 (13:03 -0500)]
net: rose: mark expected switch fall-throughs

In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Thu, 19 Oct 2017 17:55:03 +0000 (12:55 -0500)]
openvswitch: conntrack: mark expected switch fall-through

In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

Notice that in this particular case I placed a "fall through" comment on
its own line, which is what GCC expects to find.

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gustavo A. R. Silva [Thu, 19 Oct 2017 17:43:08 +0000 (12:43 -0500)]
net: netrom: nr_in: mark expected switch fall-through

In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Steve Lin [Thu, 19 Oct 2017 14:45:56 +0000 (10:45 -0400)]
bnxt: Move generic devlink code to new file

Moving generic devlink code (registration) out of VF-R code
into new bnxt_devlink file, in preparation for future work
to add additional devlink functionality to bnxt.

Signed-off-by: Steve Lin <steven.lin1@broadcom.com>
Acked-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Thu, 19 Oct 2017 14:42:04 +0000 (16:42 +0200)]
tipc: fix broken tipc_poll() function

In commit ae236fb208a6 ("tipc: receive group membership events via
member socket") we broke the tipc_poll() function by checking the
state of the receive queue before the call to poll_sock_wait(), while
relying on that state afterwards, when it might have changed.

We restore the correct behaviour in this commit.

Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 21 Oct 2017 02:04:09 +0000 (03:04 +0100)]
Merge branch 'net-sched-convert-cls-ndo_setup_tc-offload-calls-to-per-block-callbacks'

Jiri Pirko says:

====================
net: sched: convert cls ndo_setup_tc offload calls to per-block callbacks

This patchset is a bit bigger, but most of the patches are doing the
same changes in multiple classifiers and drivers. I could do some
squashes, but I think it is better kept split.

This is another dependency on the way to shared block implementation.
The goal is to remove use of tp->q in classifiers code.

Also, this gives drivers the possibility to track binding of blocks to
qdiscs. Legacy drivers which do not support shared block offloading
register one callback per binding. That maintains the current
functionality we have with ndo_setup_tc. Drivers which support block
sharing offload register one callback per block, which saves overhead.

Patches 1-4 introduce the binding notifications and per-block callbacks
Patches 5-8 add block callbacks calls to classifiers
Patches 9-17 convert classifier offloads in drivers from ndo_setup_tc
             calls to block callbacks
Patches 18-20 do cleanup

v1->v2:
- patch1:
  - move new enum value to the end
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:48 +0000 (15:50 +0200)]
net: sched: remove unused is_classid_clsact_ingress/egress helpers

These helpers are no longer in use by drivers, so remove them.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:47 +0000 (15:50 +0200)]
net: sched: remove unused classid field from tc_cls_common_offload

It is no longer used by the drivers, so remove it.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:46 +0000 (15:50 +0200)]
net: sched: avoid ndo_setup_tc calls for TC_SETUP_CLS*

All drivers are converted to use block callbacks for TC_SETUP_CLS*.
So it is now safe to remove the calls to ndo_setup_tc from cls_*.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:45 +0000 (15:50 +0200)]
dsa: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for matchall offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:44 +0000 (15:50 +0200)]
nfp: bpf: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for bpf offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:43 +0000 (15:50 +0200)]
nfp: flower: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:42 +0000 (15:50 +0200)]
mlx5e_rep: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:41 +0000 (15:50 +0200)]
ixgbe: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for u32 offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:40 +0000 (15:50 +0200)]
cxgb4: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower and u32 offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:39 +0000 (15:50 +0200)]
bnxt: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:38 +0000 (15:50 +0200)]
mlx5e: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for flower offloads to block callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:37 +0000 (15:50 +0200)]
mlxsw: spectrum: Convert ndo_setup_tc offloads to block callbacks

Benefit from the newly introduced block callback infrastructure and
convert ndo_setup_tc calls for matchall and flower offloads to block
callbacks.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:36 +0000 (15:50 +0200)]
net: sched: cls_bpf: call block callbacks for offload

Use the newly introduced callbacks infrastructure and call block
callbacks alongside the existing per-netdev ndo_setup_tc.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:35 +0000 (15:50 +0200)]
net: sched: cls_u32: call block callbacks for offload

Use the newly introduced callbacks infrastructure and call block
callbacks alongside the existing per-netdev ndo_setup_tc.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:34 +0000 (15:50 +0200)]
net: sched: cls_u32: swap u32_remove_hw_knode and u32_remove_hw_hnode

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:33 +0000 (15:50 +0200)]
net: sched: cls_matchall: call block callbacks for offload

Use the newly introduced callbacks infrastructure and call block
callbacks alongside the existing per-netdev ndo_setup_tc.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:32 +0000 (15:50 +0200)]
net: sched: use tc_setup_cb_call to call per-block callbacks

Extend the tc_setup_cb_call entrypoint function, originally used only for
action egress device callbacks, to call per-block callbacks as well.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:31 +0000 (15:50 +0200)]
net: sched: introduce per-block callbacks

Introduce infrastructure that allows drivers to register callbacks that
are called whenever tc would offload an inserted rule for a specific block.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:30 +0000 (15:50 +0200)]
net: sched: use extended variants of block_get/put in ingress and clsact qdiscs

Use previously introduced extended variants of block get and put
functions. This allows specifying a binder type specific to clsact
ingress/egress, which is useful for drivers to distinguish who actually
got the block.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 19 Oct 2017 13:50:29 +0000 (15:50 +0200)]
net: sched: add block bind/unbind notif. and extended block_get/put

Introduce a new type of ndo_setup_tc message to propagate binding/unbinding
of a block to the driver. Call this ndo whenever a qdisc gets/puts a block.
Alongside this, there's a need to propagate the binder type from qdisc
code down to the notifier. So introduce extended variants of
block_get/put in order to pass this info.
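
A hedged sketch of what the driver side of this looks like (the type, enum
and helper names follow this patch set as I read it and may differ in
detail; "foo" is a hypothetical driver prefix):

    #include <linux/netdevice.h>
    #include <net/pkt_cls.h>

    static int foo_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
                                     void *cb_priv)
    {
            /* a real driver would dispatch TC_SETUP_CLSFLOWER etc. here */
            return -EOPNOTSUPP;
    }

    static int foo_setup_tc_block(struct net_device *dev,
                                  struct tc_block_offload *f)
    {
            switch (f->command) {
            case TC_BLOCK_BIND:
                    return tcf_block_cb_register(f->block,
                                                 foo_setup_tc_block_cb,
                                                 dev, dev);
            case TC_BLOCK_UNBIND:
                    tcf_block_cb_unregister(f->block, foo_setup_tc_block_cb,
                                            dev);
                    return 0;
            default:
                    return -EOPNOTSUPP;
            }
    }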

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Thu, 19 Oct 2017 07:31:43 +0000 (09:31 +0200)]
ipv6: let trace_fib6_table_lookup() dereference the fib table

The perf traces for the ipv6 routing code show a noticeable cost around
trace_fib6_table_lookup(), even if no trace is enabled. This is
due to the fib6_table de-referencing currently performed by the
caller.

Let the tracing code pay this overhead instead, passing the table
pointer to the trace helper. This gives a small but measurable
performance improvement under UDP flood.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 21 Oct 2017 01:22:19 +0000 (02:22 +0100)]
Merge branch 'for-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next

Johan Hedberg says:

====================
pull request: bluetooth-next 2017-10-19

Here's the first bluetooth-next pull request targeting the 4.15 kernel
release.

 - Multiple fixes & improvements to the hci_bcm driver
 - DT improvements, e.g. new local-bd-address property
 - Fixes & improvements to ECDH usage. Private key is now generated by
   the crypto subsystem.
 - gcc-4.9 warning fixes

Please let me know if there are any issues pulling. Thanks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 19 Oct 2017 00:02:03 +0000 (17:02 -0700)]
ipv4: ipv4_default_advmss() should use route mtu

ipv4_default_advmss() incorrectly uses the device MTU instead
of the route-provided one. IPv6 has the proper behavior,
let's harmonize the two protocols.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 21 Oct 2017 00:45:56 +0000 (01:45 +0100)]
Merge branch 'ieee802154-for-davem-2017-10-18' of git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan-next

Stefan Schmidt says:

====================
pull-request: ieee802154 2017-10-18

Please find below a pull request from the ieee802154 subsystem for net-next.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 18 Oct 2017 22:01:38 +0000 (15:01 -0700)]
spectrum: Convert fib event handlers to use container_of on info arg

Use container_of to convert the generic fib_notifier_info into
the event specific data structure.

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Oct 2017 21:20:30 +0000 (14:20 -0700)]
tcp: fix tcp_send_syn_data()

syn_data was allocated by sk_stream_alloc_skb(), meaning
its destructor and _skb_refdst fields are mangled.

We need to call tcp_skb_tsorted_anchor_cleanup() before
calling kfree_skb(), or the kernel crashes.
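
The essence of the fix, sketched for the fallback/error path only (the
wrapper name is illustrative):

    #include <net/tcp.h>    /* tcp_skb_tsorted_anchor_cleanup() */

    static void free_unsent_syn_data(struct sk_buff *syn_data)
    {
            /* syn_data came from sk_stream_alloc_skb(); undo the tsorted-list
             * anchoring before handing the skb to kfree_skb() */
            tcp_skb_tsorted_anchor_cleanup(syn_data);
            kfree_skb(syn_data);
    }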

Bug was reported by syzkaller bot.

Fixes: e2080072ed2d ("tcp: new list for sent but unacked skbs for RACK recovery")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 21 Oct 2017 00:39:11 +0000 (01:39 +0100)]
Merge branch 'ipv6-fixes-for-RTF_CACHE-entries'

Paolo Abeni says:

====================
ipv6: fixes for RTF_CACHE entries

This series addresses 2 different but related issues with RTF_CACHE
entries, introduced by the recent refactoring.

patch 1 restores the gc timer for such routes
patch 2 removes the aged-out dst from the fib tree, properly coping with pMTU
routes

v1 -> v2:
 - dropped the  for ip route show cache
 - avoid touching dst.obsolete when the dst is aged out

v2 -> v3:
 - take care of pMTU exceptions
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Thu, 19 Oct 2017 14:07:11 +0000 (16:07 +0200)]
ipv6: remove from fib tree aged out RTF_CACHE dst

The commit 2b760fcf5cfb ("ipv6: hook up exception table to store
dst cache") partially reverted the commit 1e2ea8ad37be ("ipv6: set
dst.obsolete when a cached route has expired").

As a result, RTF_CACHE dst referenced outside the fib tree will
not be removed until the next sernum change; dst_check() does not
fail on aged-out dst, and dst->__refcnt can't decrease: the aged
out dst will stay valid for a potentially unlimited time after the
timeout expiration.

This change explicitly removes RTF_CACHE dst from the fib tree when
aged out. The rt6_remove_exception() logic will then obsolete the
dst and other entities will drop the related reference on next
dst_check().

pMTU exceptions are not aged-out, and are removed from the exception
table only when the - usually considerably longer - ip6_rt_mtu_expires
timeout expires.

v1 -> v2:
  - do not touch dst.obsolete in rt6_remove_exception(), not needed
v2 -> v3:
  - take care of pMTU exceptions, too

Fixes: 2b760fcf5cfb ("ipv6: hook up exception table to store dst cache")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Wei Wang <weiwan@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Thu, 19 Oct 2017 14:07:10 +0000 (16:07 +0200)]
ipv6: start fib6 gc on RTF_CACHE dst creation

After the commit 2b760fcf5cfb ("ipv6: hook up exception table
to store dst cache"), the fib6 gc is not started after the
creation of a RTF_CACHE via a redirect or pmtu update, since
fib6_add() isn't invoked anymore for such dsts.

We need the fib6 gc to run periodically to clean the RTF_CACHE,
or the dst will stay there forever.

Fix it by explicitly calling fib6_force_start_gc() on successful
exception creation. gc_args->more accounting will ensure that
the gc timer will run for as long as needed to properly
clean the table.
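
A minimal sketch of the idea (fib6_force_start_gc() is the existing
helper; the exact placement inside the exception-insertion path is
paraphrased):

  err = rt6_insert_exception(nrt, ort);
  if (!err)
          /* make sure the gc timer is armed so this cache entry is
           * eventually aged out again
           */
          fib6_force_start_gc(net);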

v2 -> v3:
 - clarified the commit message

Fixes: 2b760fcf5cfb ("ipv6: hook up exception table to store dst cache")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Wei Wang <weiwan@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'bpf-lsm-hooks'
David S. Miller [Fri, 20 Oct 2017 12:33:00 +0000 (13:33 +0100)]
Merge branch 'bpf-lsm-hooks'

Chenbo Feng says:

====================
bpf: security: New file mode and LSM hooks for eBPF object permission control

Much like files and sockets, eBPF objects are accessed, controlled, and
shared via a file descriptor (FD). Unlike files and sockets, the
existing mechanism for eBPF object access control is very limited.
Currently there are two options for granting access to eBPF
operations: grant access to all processes, or only to CAP_SYS_ADMIN
processes. The CAP_SYS_ADMIN-only mode is not ideal because most users
do not have this capability and granting a user CAP_SYS_ADMIN grants too
many other security-sensitive permissions. It also unnecessarily allows
all CAP_SYS_ADMIN processes access to eBPF functionality. Allowing all
processes to access eBPF objects is also undesirable since it could
allow unprivileged processes to consume kernel memory, and it opens up
additional attack surface in the kernel.

Adding LSM hooks maintains the status quo for systems which do not use
an LSM, preserving compatibility with userspace, while allowing security
modules to choose how best to handle permissions on eBPF objects. Here
is a possible use case for the LSM hooks with the SELinux module:

The network-control daemon (netd) creates and loads an eBPF object for
network packet filtering and analysis. It passes the object FD to an
unprivileged network monitor app (netmonitor), which is not allowed to
create, modify or load eBPF objects, but is allowed to read the traffic
stats from the map.

SELinux could use these hooks to grant the following permissions:
allow netd self:bpf_map { create read write};
allow netmonitor netd:fd use;
allow netmonitor netd:bpf_map read;

In this patch series, a file mode is added to bpf maps to store the
access mode. With these file mode flags, a map can be obtained read
only, write only, or read-write. With the help of this file mode,
several security hooks can be added to the eBPF syscall implementations
to do permission checks. These LSM hooks mainly check a process's
privileges before it obtains the fd for a specific bpf object, whether
from a pinned file location or from an eBPF id. Besides that, a general
check hook is also implemented at the start of the bpf syscall so that
each security module can provide its own implementation for the rest of
the bpf object related functionality.

In order to store the ownership and security information about eBPF
maps, a security field pointer is added to struct bpf_map. The last two
patches implement the SELinux checks on the hooks introduced here, plus
an additional check when an eBPF object is passed between processes
over a unix socket as well as binder IPC.

Change since V1:

 - Whitelist the new bpf flags in the map allocate check.
 - Added a bpf selftest for the new flags.
 - Added two new security hooks for copying the security information from
   the bpf object security struct to the file security struct.
 - Simplified the checking action when a bpf fd is passed between processes.

 Change since V2:

 - Fixed the line break problem in the map flags check.
 - Fixed the typo in the selinux check of the file mode.
 - Merged bpf_map and bpf_prog into one selinux class.
 - Added bpf_type and bpf_sid to the file security struct to store the
   security information when generating the fd.
 - Added the hook to bpf_map_new_fd and bpf_prog_new_fd.

 Change since V3:

 - Return the actual error from the security check instead of -EPERM.
 - Moved the hooks into anon_inode_getfd() to avoid getting the file again
   after the bpf object file is installed with an fd.
 - Removed the bpf_sid field inside file_security_struct to reduce the
   cache size.

 Change since V4:

 - Renamed the bpf av prog_use to prog_run to distinguish it from fd_use.
 - Removed the bpf_type field inside file_security_struct and use the bpf
   fops to identify bpf objects instead.

 Change since v5:

 - Fixed the incorrect selinux class name for SECCLASS_BPF

 Change since v7:

 - Fixed the build error caused by the xt_bpf module.
 - Added a flags check to bpf_obj_get() and bpf_map_get_fd_by_id() so that
   unknown flags are rejected, keeping the uapi extensible.
 - Added the flags field to the bpf_obj_get_user() stub used when
   BPF_SYSCALL is not configured.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoselinux: bpf: Add additional check for bpf object file receive
Chenbo Feng [Wed, 18 Oct 2017 20:00:26 +0000 (13:00 -0700)]
selinux: bpf: Add additional check for bpf object file receive

Introduce a bpf object related check when sending and receiving files
through a unix domain socket as well as binder. It checks whether the
receiving process has the privilege to read/write the bpf map or use
the bpf program. This check is necessary because bpf maps and programs
share an anonymous inode, so the normal way of checking files and
sockets when they are passed between processes does not work properly
for eBPF objects. This check only takes effect when BPF_SYSCALL is
configured.
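
For reference, the userspace side of the scenario this hook covers: a
privileged daemon hands a bpf map fd to an unprivileged monitor over a
unix domain socket, and the new hook lets SELinux re-check the
receiver's bpf permissions at SCM_RIGHTS receive time. Socket setup is
omitted and the function name is illustrative:

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  static int send_map_fd(int unix_sock, int map_fd)
  {
          char dummy = 'x';
          struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
          char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
          struct msghdr msg = {
                  .msg_iov = &iov, .msg_iovlen = 1,
                  .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
          };
          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;           /* fd passing */
          cmsg->cmsg_len = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(cmsg), &map_fd, sizeof(int));

          return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
  }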

Signed-off-by: Chenbo Feng <fengc@google.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Reviewed-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoselinux: bpf: Add selinux check for eBPF syscall operations
Chenbo Feng [Wed, 18 Oct 2017 20:00:25 +0000 (13:00 -0700)]
selinux: bpf: Add selinux check for eBPF syscall operations

Implement the actual checks introduced for the eBPF related syscalls.
This implementation uses the security field inside the bpf object to
store a sid that identifies the bpf object. When a process tries to
access the object, SELinux checks whether the process has the right
privileges. The creation of eBPF objects is also checked at the general
bpf check hook, and new commands introduced to the eBPF domain can be
checked there as well.
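
A rough sketch of the shape of such a check, inside
security/selinux/hooks.c (treat the names and the exact logic as
illustrative rather than a copy of the patch):

  static int example_selinux_bpf_map(struct bpf_map *map, fmode_t fmode)
  {
          struct bpf_security_struct *bpfsec = map->security;
          u32 av = 0;

          if (fmode & FMODE_READ)
                  av |= BPF__MAP_READ;
          if (fmode & FMODE_WRITE)
                  av |= BPF__MAP_WRITE;

          /* compare the calling task's sid against the sid stored in the
           * map's security blob when the map was created
           */
          return avc_has_perm(current_sid(), bpfsec->sid,
                              SECCLASS_BPF, av, NULL);
  }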

Signed-off-by: Chenbo Feng <fengc@google.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agosecurity: bpf: Add LSM hooks for bpf object related syscall
Chenbo Feng [Wed, 18 Oct 2017 20:00:24 +0000 (13:00 -0700)]
security: bpf: Add LSM hooks for bpf object related syscall

Introduce several LSM hooks for the syscalls that allow userspace to
access eBPF objects such as eBPF programs and eBPF maps. The security
checks aim to enforce per-object security protection for eBPF objects,
so that only processes with the right privileges can read/write a
specific map or use a specific eBPF program. Besides that, a general
security hook is added before the bpf syscall multiplexer to check the
cmd and the attributes used for the command. The actual security module
can decide which commands need to be checked and how they should be
checked.
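
For illustration, what a security module can now wire up (hook names as
added to include/linux/lsm_hooks.h; the example module and its policy
logic are made up):

  static int example_bpf(int cmd, union bpf_attr *attr, unsigned int size)
  {
          /* coarse policy: e.g. only allow map create/lookup commands */
          switch (cmd) {
          case BPF_MAP_CREATE:
          case BPF_MAP_LOOKUP_ELEM:
                  return 0;
          default:
                  return -EACCES;
          }
  }

  static int example_bpf_map(struct bpf_map *map, fmode_t fmode)
  {
          return 0;       /* per-object read/write decision goes here */
  }

  static struct security_hook_list example_hooks[] = {
          LSM_HOOK_INIT(bpf, example_bpf),
          LSM_HOOK_INIT(bpf_map, example_bpf_map),
  };

  static int __init example_lsm_init(void)
  {
          security_add_hooks(example_hooks, ARRAY_SIZE(example_hooks),
                             "example");
          return 0;
  }
  security_initcall(example_lsm_init);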

Signed-off-by: Chenbo Feng <fengc@google.com>
Acked-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobpf: Add tests for eBPF file mode
Chenbo Feng [Wed, 18 Oct 2017 20:00:23 +0000 (13:00 -0700)]
bpf: Add tests for eBPF file mode

Two related tests are added to the bpf selftests to test read-only and
write-only maps. The tests verify that the read-only and write-only
flags work on hash maps.

Signed-off-by: Chenbo Feng <fengc@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agobpf: Add file mode configuration into bpf maps
Chenbo Feng [Wed, 18 Oct 2017 20:00:22 +0000 (13:00 -0700)]
bpf: Add file mode configuration into bpf maps

Introduce map read/write flags to the eBPF syscalls that return a map
fd. The flags are used to set up the file mode when constructing a new
file descriptor for bpf maps. To not break backward compatibility,
f_flags is set to O_RDWR if the flag passed by the syscall is 0;
otherwise it should be O_RDONLY or O_WRONLY. When userspace wants to
modify or read the map content, the kernel checks the file mode to see
whether the operation is allowed.
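
Userspace view of the new flags, as a sketch (requires headers and a
kernel with this change; the pin path is made up):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <linux/bpf.h>
  #include <sys/syscall.h>

  static int bpf_obj_get_rdonly(const char *path)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.pathname = (__u64)(unsigned long)path;
          attr.file_flags = BPF_F_RDONLY;         /* new: ask for a read-only fd */

          return syscall(__NR_bpf, BPF_OBJ_GET, &attr, sizeof(attr));
  }

  int main(void)
  {
          int fd = bpf_obj_get_rdonly("/sys/fs/bpf/example_map");

          if (fd < 0) {
                  perror("BPF_OBJ_GET");
                  return 1;
          }
          /* lookups work on this fd; update/delete would fail with -EPERM */
          close(fd);
          return 0;
  }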

Signed-off-by: Chenbo Feng <fengc@google.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet-tun: fix panics at dismantle time
Eric Dumazet [Wed, 18 Oct 2017 19:12:09 +0000 (12:12 -0700)]
net-tun: fix panics at dismantle time

syzkaller got crashes at dismantle time [1].

It is not correct to test (tun->flags & IFF_NAPI) in tun_napi_disable()
and tun_napi_del(): each tun_file can have a different mode, depending
on how it was created.

Similarly I have changed tun_get_user() and tun_poll_controller()
to use the new tfile->napi_enabled boolean.
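
A minimal sketch of the resulting per-queue check (field and function
names as in drivers/net/tun.c, simplified):

  static void tun_napi_disable(struct tun_struct *tun, struct tun_file *tfile)
  {
          /* NAPI state is per tun_file, not per device: only tear it down
           * for queues that actually enabled it at attach time
           */
          if (tfile->napi_enabled)
                  napi_disable(&tfile->napi);
  }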

[  154.331360] BUG: unable to handle kernel NULL pointer dereference at           (null)
[  154.339220] IP: [<ffffffff9634cad6>] hrtimer_active+0x26/0x60
[  154.344983] PGD 0
[  154.347009] Oops: 0000 [#1] SMP
[  154.350680] gsmi: Log Shutdown Reason 0x03
[  154.379572] task: ffff994719150dc0 ti: ffff99475c0ae000 task.ti: ffff99475c0ae000
[  154.387043] RIP: 0010:[<ffffffff9634cad6>]  [<ffffffff9634cad6>] hrtimer_active+0x26/0x60
[  154.395232] RSP: 0018:ffff99475c0afce8  EFLAGS: 00010246
[  154.400542] RAX: ffff994754850ac0 RBX: ffff994753e65408 RCX: ffff994753e65388
[  154.407666] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff994753e65408
[  154.414790] RBP: ffff99475c0afce8 R08: 0000000000000000 R09: 0000000000000000
[  154.421921] R10: ffff99475f6f5910 R11: 0000000000000001 R12: 0000000000000000
[  154.429044] R13: ffff99417deab668 R14: ffff99417deaa780 R15: ffff99475f45dde0
[  154.436174] FS:  0000000000000000(0000) GS:ffff994767a00000(0000) knlGS:0000000000000000
[  154.444249] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  154.449986] CR2: 0000000000000000 CR3: 00000005a8a0e000 CR4: 0000000000022670
[  154.457110] Stack:
[  154.459120]  ffff99475c0afd28 ffffffff9634d614 1000000000000000 0000000000000000
[  154.466598]  ffffe54240000000 ffff994753e65408 ffff994753e653a8 ffff99417deab668
[  154.474067]  ffff99475c0afd48 ffffffff9634d6fd ffff99474c2be678 ffff994753e65398
[  154.481537] Call Trace:
[  154.483985]  [<ffffffff9634d614>] hrtimer_try_to_cancel+0x24/0xf0
[  154.490074]  [<ffffffff9634d6fd>] hrtimer_cancel+0x1d/0x30
[  154.495563]  [<ffffffff96860b3c>] napi_disable+0x3c/0x70
[  154.500875]  [<ffffffff9678ae62>] __tun_detach+0xd2/0x360
[  154.506272]  [<ffffffff9678b117>] tun_chr_close+0x27/0x40
[  154.511669]  [<ffffffff9646ebe6>] __fput+0xd6/0x1e0
[  154.516548]  [<ffffffff9646ed3e>] ____fput+0xe/0x10
[  154.521429]  [<ffffffff963035a2>] task_work_run+0x72/0x90
[  154.526827]  [<ffffffff962e9407>] do_exit+0x317/0xb60
[  154.531879]  [<ffffffff962e9c8f>] do_group_exit+0x3f/0xa0
[  154.537275]  [<ffffffff962e9d07>] SyS_exit_group+0x17/0x20
[  154.542769]  [<ffffffff969784be>] entry_SYSCALL_64_fastpath+0x12/0x17

Fixes: 943170998b20 ("net-tun: enable NAPI for TUN/TAP driver")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: ipv4: Change fib notifiers to take a fib_alias
David Ahern [Wed, 18 Oct 2017 18:39:13 +0000 (11:39 -0700)]
net: ipv4: Change fib notifiers to take a fib_alias

All of the notifier data (fib_info, tos, type and table id) is
contained in the fib_alias. Pass it to the notifier instead of
each field separately, shortening the argument list by 3.

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agotcp: socket option to set TCP fast open key
Yuchung Cheng [Wed, 18 Oct 2017 18:22:51 +0000 (11:22 -0700)]
tcp: socket option to set TCP fast open key

New socket option TCP_FASTOPEN_KEY to allow different keys per
listener.  The listener by default uses the global key until the
socket option is set.  The key is 16 bytes of binary data. This
option has no effect on regular non-listener TCP sockets.
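
Userspace sketch of setting a per-listener key (the key bytes are
illustrative; TCP_FASTOPEN_KEY needs uapi headers from a kernel with
this patch, hence the fallback define):

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <stdio.h>
  #include <sys/socket.h>

  #ifndef TCP_FASTOPEN_KEY
  #define TCP_FASTOPEN_KEY 33             /* from this patch's uapi update */
  #endif

  int main(void)
  {
          unsigned char key[16] = "0123456789abcdef";   /* 16-byte binary key */
          int qlen = 128;
          int fd = socket(AF_INET, SOCK_STREAM, 0);

          if (fd < 0)
                  return 1;
          if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN_KEY, key, sizeof(key)) < 0)
                  perror("TCP_FASTOPEN_KEY");
          /* enable fastopen on the listener as before */
          setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
          return 0;
  }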

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Christoph Paasch <cpaasch@apple.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'mlxsw-extack'
David S. Miller [Fri, 20 Oct 2017 12:15:08 +0000 (13:15 +0100)]
Merge branch 'mlxsw-extack'

David Ahern says:

====================
mlxsw: spectrum_router: Add extack messages for RIF and VRF overflow

Currently, exceeding the number of VRF instances or the number of router
interfaces either fails with a non-intuitive EBUSY:
    $ ip li set swp1s1.6 vrf vrf-1s1-6 up
    RTNETLINK answers: Device or resource busy

or fails silently (IPv6) since the checks are done in a work queue. This
set adds support for the address validator notifier to spectrum, which
allows extack-based messages to be returned on failure.

To make that happen the IPv6 version needs to be converted from atomic
to blocking (patch 2), and then support for extack needs to be added
to the notifier (patch 3). Patch 1 reworks the locking in ipv6_add_addr
to work better in the atomic and non-atomic code paths. Patches 4 and 5
add the validator notifier to spectrum and then plumb the extack argument
through spectrum_router.

With this set, VRF overflows fail with:
   $ ip li set swp1s1.6 vrf vrf-1s1-6 up
   Error: spectrum: Exceeded number of supported VRF.

and RIF overflows fail with:
   $ ip addr add dev swp1s2.191 10.12.191.1/24
   Error: spectrum: Exceeded number of supported router interfaces.

v2 -> v3
- fix surrounding context of patch 4, which was altered by c30f5d012edf

v1 -> v2
- fix error path in ipv6_add_addr: reset rt to NULL (Ido comment) and
  add in6_dev_put on ifa once the hold has been done

RFC -> v1
- addressed various comments from Ido
- refactored ipv6_add_addr to allow ifa's to be allocated with
  GFP_KERNEL as requested by DaveM
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum_router: Add extack message for RIF and VRF overflow
David Ahern [Wed, 18 Oct 2017 16:56:56 +0000 (09:56 -0700)]
mlxsw: spectrum_router: Add extack message for RIF and VRF overflow

Add an extack argument to mlxsw_sp_rif_create and mlxsw_sp_vr_create
and use it to set an error message on RIF or VR overflow. Now, on
overflow of either resource, the user gets an informative message as
opposed to a bare EBUSY failure.
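
Sketch of the mechanism (NL_SET_ERR_MSG is the existing extack helper;
the surrounding function, capacity check and errno are illustrative):

  static int example_rif_alloc(struct mlxsw_sp *mlxsw_sp,
                               struct netlink_ext_ack *extack)
  {
          if (!example_rif_index_available(mlxsw_sp)) {   /* illustrative check */
                  NL_SET_ERR_MSG(extack,
                                 "spectrum: Exceeded number of supported router interfaces");
                  return -EBUSY;
          }
          return 0;
  }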

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum: router: Add support for address validator notifier
David Ahern [Wed, 18 Oct 2017 16:56:55 +0000 (09:56 -0700)]
mlxsw: spectrum: router: Add support for address validator notifier

Add support for inetaddr_validator and inet6addr_validator. The
notifiers provide a means for validating IPv4 and IPv6 addresses
before the addresses are installed; on failure, the error
is propagated back to the user.

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: Add extack to validator_info structs used for address notifier
David Ahern [Wed, 18 Oct 2017 16:56:54 +0000 (09:56 -0700)]
net: Add extack to validator_info structs used for address notifier

Add extack to in_validator_info and in6_validator_info. Update the one
user of each, ipvlan, to return an error message for failures.

Only manual configuration of an address is plumbed in the IPv6 code path.
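
A consumer-side sketch (notifier plumbing from linux/inetdevice.h; the
rejection rule and helper are illustrative):

  static int example_inetaddr_valid(struct notifier_block *nb,
                                    unsigned long event, void *ptr)
  {
          struct in_validator_info *ivi = ptr;

          if (!example_can_offload(ivi->ivi_dev->dev)) {  /* illustrative check */
                  NL_SET_ERR_MSG(ivi->extack,
                                 "example: address cannot be offloaded");
                  return notifier_from_errno(-EOPNOTSUPP);
          }
          return NOTIFY_DONE;
  }

  static struct notifier_block example_inetaddr_valid_nb = {
          .notifier_call = example_inetaddr_valid,
  };
  /* registered at init time with register_inetaddr_validator_notifier() */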

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: ipv6: Make inet6addr_validator a blocking notifier
David Ahern [Wed, 18 Oct 2017 16:56:53 +0000 (09:56 -0700)]
net: ipv6: Make inet6addr_validator a blocking notifier

inet6addr_validator chain was added by commit 3ad7d2468f79f ("Ipvlan
should return an error when an address is already in use") to allow
address validation before changes are committed and to be able to
fail the address change with an error back to the user. The address
validation is not done for addresses received from router
advertisements.

Handling RAs in softirq context is the only reason for the notifier
chain to be atomic rather than blocking. Since ipvlan, the only current
user of the validator chain, ignores the softirq context, the notifier
can be made blocking and simply not invoked on the softirq path.

The blocking option is needed by spectrum, for example, to validate
resources when adding an address to an interface.

Signed-off-by: David Ahern <dsahern@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoipv6: addrconf: cleanup locking in ipv6_add_addr
David Ahern [Wed, 18 Oct 2017 16:56:52 +0000 (09:56 -0700)]
ipv6: addrconf: cleanup locking in ipv6_add_addr

ipv6_add_addr is called in process context with rtnl lock held
(e.g., manual config of an address) or during softirq processing
(e.g., autoconf and address from a router advertisement).

Currently, ipv6_add_addr calls rcu_read_lock_bh shortly after entry
and does not call unlock until exit, minus the call around the address
validator notifier. Similarly, addrconf_hash_lock is taken after the
validator notifier and held until exit. This forces the allocation of
inet6_ifaddr to always be atomic.

Refactor ipv6_add_addr as follows:
1. add an input boolean to discriminate the call path (process context
   or softirq). This new flag controls whether the alloc can be done
   with GFP_KERNEL or GFP_ATOMIC.

2. Move the rcu_read_lock_bh and unlock calls only around functions that
   do rcu updates.

3. Remove the in6_dev_hold and put added by 3ad7d2468f79f ("Ipvlan should
   return an error when an address is already in use."). This was done
   presumably because rcu_read_unlock_bh needs to be called before calling
   the validator. Since rcu_read_lock is not needed before the validator
   runs, revert the hold and put added by 3ad7d2468f79f and only do the
   hold when setting ifp->idev.

4. move duplicate address check and insertion of new address in the global
   address hash into a helper. The helper is called after an ifa is
   allocated and filled in.

This allows the ifa for manually configured addresses to be allocated
with GFP_KERNEL and reduces the overall amount of time that rcu_read_lock
and the hash table spinlock are held.
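
A minimal sketch of the resulting allocation choice (the `can_block`
flag name is assumed for illustration):

  gfp_t gfp_flags = can_block ? GFP_KERNEL : GFP_ATOMIC;
  struct inet6_ifaddr *ifa = kzalloc(sizeof(*ifa), gfp_flags);

  if (!ifa)
          return ERR_PTR(-ENOBUFS);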

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 's390-next'
David S. Miller [Fri, 20 Oct 2017 12:11:05 +0000 (13:11 +0100)]
Merge branch 's390-next'

Julian Wiedmann says:

====================
s390/net: updates 2017-10-18

please apply some additional robustness fixes and cleanups for 4.15.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: don't dump control cmd twice
Julian Wiedmann [Wed, 18 Oct 2017 15:40:25 +0000 (17:40 +0200)]
s390/qeth: don't dump control cmd twice

A few lines down, qeth_prepare_control_data() makes further changes to
the control cmd buffer, and then also writes a trace entry for it.
So the first entry just pollutes the trace file with intermediate data,
drop it.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: support GRO flush timer
Julian Wiedmann [Wed, 18 Oct 2017 15:40:24 +0000 (17:40 +0200)]
s390/qeth: support GRO flush timer

Switch to napi_complete_done(), and thus enable delayed GRO flushing.
The timeout is configured via /sys/class/net/<if>/gro_flush_timeout.

Default timeout is 0, so no change in behaviour.
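
The relevant pattern in the poll routine, as a sketch
(napi_complete_done() is the standard API; driver details omitted):

  /* in the NAPI poll routine */
  if (work_done < budget) {
          napi_complete_done(napi, work_done);
          /* with a non-zero gro_flush_timeout, the GRO flush may now be
           * deferred by up to that many nanoseconds; re-enable the
           * device's RX interrupts here
           */
  }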

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: try harder to get packets from RX buffer
Julian Wiedmann [Wed, 18 Oct 2017 15:40:23 +0000 (17:40 +0200)]
s390/qeth: try harder to get packets from RX buffer

Current code bails out when two subsequent buffer elements hold
insufficient data to contain a qeth_hdr packet descriptor.
This seems reasonable, but it would be legal for quirky hardware to
leave a few elements empty and then present packets in a subsequent
element. These packets would currently be dropped.

So make sure to check all buffer elements, until we hit the LAST_ENTRY
indication.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: consolidate skb allocation
Julian Wiedmann [Wed, 18 Oct 2017 15:40:22 +0000 (17:40 +0200)]
s390/qeth: consolidate skb allocation

Move the allocation of SG skbs into the main path. This allows for
a little code sharing, and for handling ENOMEM in one place.

As a side effect, L2 SG skbs now get the proper amount of additional
headroom (read: zero) instead of the hard-coded ETH_HLEN.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: clean up page frag creation
Julian Wiedmann [Wed, 18 Oct 2017 15:40:21 +0000 (17:40 +0200)]
s390/qeth: clean up page frag creation

Replace the open-coded skb_add_rx_frag(), and use a fall-through
to remove some duplicated code.
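
For reference, the helper that replaces the open-coded version (real
API from linux/skbuff.h; the length variables are illustrative):

  skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
                  data_len, data_len);    /* size and truesize */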

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: no VLAN support on OSM
Julian Wiedmann [Wed, 18 Oct 2017 15:40:20 +0000 (17:40 +0200)]
s390/qeth: no VLAN support on OSM

Instead of silently discarding VLAN registration requests on OSM,
just indicate that this card type doesn't support VLAN.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agos390/qeth: don't verify device when setting MAC address
Julian Wiedmann [Wed, 18 Oct 2017 15:40:19 +0000 (17:40 +0200)]
s390/qeth: don't verify device when setting MAC address

There's no reason why l2_set_mac_address() should ever be called for
a netdevice that's not owned by qeth. It's certainly not required for
VLAN devices, which have their own netdev_ops.

Also:
1) we don't do such validation for any of the other netdev_ops routines.
2) the code in question clearly has never been actually exercised;
   it's broken. After determining that the device is not owned
   by qeth, it would still use dev->ml_priv to write a qeth trace entry.

Remove the check, and its helper that walked the global card list.

Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>