openwrt/staging/blogic.git
6 years agoMerge branch 'bpf-big-map-entries'
Daniel Borkmann [Tue, 2 Oct 2018 12:39:59 +0000 (14:39 +0200)]
Merge branch 'bpf-big-map-entries'

Jakub Kicinski says:

====================
This series makes the control message parsing for interacting
with BPF maps more flexible.  Up until now we had a hard limit
in the ABI of at most 64B for the key and value size.  Using
the TLV capability allows us to support larger map entries.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agonfp: bpf: allow control message sizing for map ops
Jakub Kicinski [Tue, 2 Oct 2018 01:30:34 +0000 (18:30 -0700)]
nfp: bpf: allow control message sizing for map ops

In the current ABI the size of the messages carrying map elements was
statically defined to at most 16 words of key and 16 words of value
(an NFP word is 4 bytes).  We should not make this assumption; use
the max key and value sizes from the BPF capability instead.

To make sure old kernels don't get surprised by larger (or smaller)
messages, bump the FW ABI version to 3 when the key/value size is
different than 16 words.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agonfp: allow apps to request larger MTU on control vNIC
Jakub Kicinski [Tue, 2 Oct 2018 01:30:33 +0000 (18:30 -0700)]
nfp: allow apps to request larger MTU on control vNIC

Some apps may want to have higher MTU on the control vNIC/queue.
Allow them to set the requested MTU at init time.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agonfp: bpf: parse global BPF ABI version capability
Jakub Kicinski [Tue, 2 Oct 2018 01:30:32 +0000 (18:30 -0700)]
nfp: bpf: parse global BPF ABI version capability

Up until now we only had per-vNIC BPF ABI version capabilities,
which are slightly awkward to use because the bulk of the resources
and configuration does not relate to any particular vNIC.  Add
a new capability for the global ABI version and check that the
per-vNIC versions are equal to it.  Assume ABI version 2 if no
explicit version capability is present.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoMerge branch 'bpf-per-cpu-cgroup-storage'
Daniel Borkmann [Mon, 1 Oct 2018 14:18:34 +0000 (16:18 +0200)]
Merge branch 'bpf-per-cpu-cgroup-storage'

Roman Gushchin says:

====================
This patchset implements per-cpu cgroup local storage and provides
an example how per-cpu and shared cgroup local storage can be used
for efficient accounting of network traffic.

v4->v3:
  1) incorporated Alexei's feedback

v3->v2:
  1) incorporated Song's feedback
  2) rebased on top of current bpf-next

v2->v1:
  1) added a selftest implementing network counters
  2) added a missing free() in cgroup local storage selftest
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoselftests/bpf: cgroup local storage-based network counters
Roman Gushchin [Fri, 28 Sep 2018 14:46:00 +0000 (14:46 +0000)]
selftests/bpf: cgroup local storage-based network counters

This commit adds a bpf kselftest, which demonstrates how percpu
and shared cgroup local storage can be used for efficient lookup-free
network accounting.

Cgroup local storage provides a generic memory area with very efficient
lookup-free access. To avoid expensive atomic operations for each
packet, per-cpu cgroup local storage is used. Each packet is initially
charged to a per-cpu counter, and only when that counter reaches a
certain value (32 in this case) is the charge moved into the global
atomic counter. This amortizes the cost of the atomic operations while
keeping reasonable accuracy.

The test also implements a naive form of network traffic throttling,
mostly to demonstrate the possibility of bpf cgroup-based network
bandwidth control.
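
For reference, the batching scheme can be sketched roughly as follows
(a minimal sketch, not the actual selftest source; map names, value
layout and the section name are illustrative assumptions):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct percpu_net_cnt { __u64 packets; };
  struct net_cnt { __u64 packets; };

  struct {
          __uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
          __type(key, struct bpf_cgroup_storage_key);
          __type(value, struct percpu_net_cnt);
  } percpu_cnt SEC(".maps");

  struct {
          __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
          __type(key, struct bpf_cgroup_storage_key);
          __type(value, struct net_cnt);
  } cnt SEC(".maps");

  SEC("cgroup_skb/egress")
  int count_packets(struct __sk_buff *skb)
  {
          struct percpu_net_cnt *percpu = bpf_get_local_storage(&percpu_cnt, 0);
          struct net_cnt *shared = bpf_get_local_storage(&cnt, 0);

          percpu->packets++;                       /* cheap, lock-free */
          if (percpu->packets >= 32) {             /* amortized flush */
                  __sync_fetch_and_add(&shared->packets, percpu->packets);
                  percpu->packets = 0;
          }
          return 1;                                /* allow the packet */
  }

  char _license[] SEC("license") = "GPL";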

Expected output:
  ./test_netcnt
  test_netcnt:PASS

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agosamples/bpf: extend test_cgrp2_attach2 test to use per-cpu cgroup storage
Roman Gushchin [Fri, 28 Sep 2018 14:45:58 +0000 (14:45 +0000)]
samples/bpf: extend test_cgrp2_attach2 test to use per-cpu cgroup storage

This commit extends the test_cgrp2_attach2 test to cover per-cpu
cgroup storage. The BPF program uses the shared and per-cpu cgroup
storages simultaneously, so better coverage of the corresponding
core code is achieved.

Expected output:
  $ ./test_cgrp2_attach2
  Attached DROP prog. This ping in cgroup /foo should fail...
  ping: sendmsg: Operation not permitted
  Attached DROP prog. This ping in cgroup /foo/bar should fail...
  ping: sendmsg: Operation not permitted
  Attached PASS prog. This ping in cgroup /foo/bar should pass...
  Detached PASS from /foo/bar while DROP is attached to /foo.
  This ping in cgroup /foo/bar should fail...
  ping: sendmsg: Operation not permitted
  Attached PASS from /foo/bar and detached DROP from /foo.
  This ping in cgroup /foo/bar should pass...
  ### override:PASS
  ### multi:PASS

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoselftests/bpf: extend the storage test to test per-cpu cgroup storage
Roman Gushchin [Fri, 28 Sep 2018 14:45:55 +0000 (14:45 +0000)]
selftests/bpf: extend the storage test to test per-cpu cgroup storage

This test extends the cgroup storage test to use the per-cpu flavor
of the cgroup storage as well.

The test initializes a per-cpu cgroup storage to a non-zero initial
value (1000), and then simply bumps a per-cpu counter each time
the shared counter is atomically incremented. Then it reads all
per-cpu areas from the userspace side, and checks that the sum
of the values matches the expected total.
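
Reading the per-cpu areas from userspace could look roughly like this
(a minimal sketch with an assumed map fd and key; libbpf_num_possible_cpus()
is a generic libbpf helper, not part of this patch):

  #include <stdio.h>
  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  static int sum_percpu_storage(int map_fd, struct bpf_cgroup_storage_key *key)
  {
          int ncpus = libbpf_num_possible_cpus();

          if (ncpus < 0)
                  return ncpus;

          unsigned long long values[ncpus], sum = 0;

          /* For per-cpu maps, one lookup returns one value per possible CPU. */
          if (bpf_map_lookup_elem(map_fd, key, values))
                  return -1;

          for (int i = 0; i < ncpus; i++)
                  sum += values[i];

          printf("sum of per-cpu counters: %llu\n", sum);
          return 0;
  }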

Expected output:
  $ ./test_cgroup_storage
  test_cgroup_storage:PASS

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoselftests/bpf: add verifier per-cpu cgroup storage tests
Roman Gushchin [Fri, 28 Sep 2018 14:45:53 +0000 (14:45 +0000)]
selftests/bpf: add verifier per-cpu cgroup storage tests

This commit adds verifier tests covering per-cpu cgroup storage
functionality. There are 6 new tests, which are exactly the same
as the ones for shared cgroup storage, but use a per-cpu cgroup
storage map instead.

Expected output:
  $ ./test_verifier
  #0/u add+sub+mul OK
  #0/p add+sub+mul OK
  ...
  #286/p invalid cgroup storage access 6 OK
  #287/p valid per-cpu cgroup storage access OK
  #288/p invalid per-cpu cgroup storage access 1 OK
  #289/p invalid per-cpu cgroup storage access 2 OK
  #290/p invalid per-cpu cgroup storage access 3 OK
  #291/p invalid per-cpu cgroup storage access 4 OK
  #292/p invalid per-cpu cgroup storage access 5 OK
  #293/p invalid per-cpu cgroup storage access 6 OK
  #294/p multiple registers share map_lookup_elem result OK
  ...
  #662/p mov64 src == dst OK
  #663/p mov64 src != dst OK
  Summary: 914 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpftool: add support for PERCPU_CGROUP_STORAGE maps
Roman Gushchin [Fri, 28 Sep 2018 14:45:51 +0000 (14:45 +0000)]
bpftool: add support for PERCPU_CGROUP_STORAGE maps

This commit adds support for the BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE
map type.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: sync include/uapi/linux/bpf.h to tools/include/uapi/linux/bpf.h
Roman Gushchin [Fri, 28 Sep 2018 14:45:48 +0000 (14:45 +0000)]
bpf: sync include/uapi/linux/bpf.h to tools/include/uapi/linux/bpf.h

The sync is required due to the appearance of a new map type:
BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE, which implements per-cpu
cgroup local storage.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: don't allow create maps of per-cpu cgroup local storages
Roman Gushchin [Fri, 28 Sep 2018 14:45:46 +0000 (14:45 +0000)]
bpf: don't allow create maps of per-cpu cgroup local storages

Explicitly forbid creating maps of per-cpu cgroup local storages.
This matches the behavior of shared cgroup storages.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: introduce per-cpu cgroup local storage
Roman Gushchin [Fri, 28 Sep 2018 14:45:43 +0000 (14:45 +0000)]
bpf: introduce per-cpu cgroup local storage

This commit introduces per-cpu cgroup local storage.

Per-cpu cgroup local storage is very similar to simple cgroup storage
(let's call it shared), except that all the data is per-cpu.

The main goal of the per-cpu variant is to implement very fast
counters (e.g. packet counters), which require neither lookups
nor atomic operations.

From userspace's point of view, accessing a per-cpu cgroup storage
is similar to other per-cpu map types (e.g. per-cpu hashmaps and
arrays).

Writing to a per-cpu cgroup storage is not atomic, but is performed
by copying longs, so some minimal atomicity is preserved, exactly
as with other per-cpu maps.

Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: rework cgroup storage pointer passing
Roman Gushchin [Fri, 28 Sep 2018 14:45:40 +0000 (14:45 +0000)]
bpf: rework cgroup storage pointer passing

To simplify the following introduction of per-cpu cgroup storage,
let's rework the mechanism for passing a pointer to a cgroup
storage into bpf_get_local_storage(): save a pointer to the
corresponding bpf_cgroup_storage structure, instead of a pointer
to the actual buffer.

This will help us handle per-cpu storage later, which accesses
the actual data in a different way.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: extend cgroup bpf core to allow multiple cgroup storage types
Roman Gushchin [Fri, 28 Sep 2018 14:45:36 +0000 (14:45 +0000)]
bpf: extend cgroup bpf core to allow multiple cgroup storage types

In order to introduce per-cpu cgroup storage, let's generalize
the bpf cgroup core to support multiple cgroup storage types.
Potentially, per-node cgroup storage can be added later.

This commit is mostly a formal change that replaces the
cgroup_storage pointer with an array of cgroup_storage pointers.
It doesn't actually introduce a new storage type; that will be
done later.

Each bpf program is now able to have one cgroup storage of each type.
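
The shape of the change can be sketched as follows (simplified; the enum
follows the naming of the upstream change, and the surrounding structures
are abbreviated):

  /* One storage slot per type instead of a single cgroup_storage pointer. */
  enum bpf_cgroup_storage_type {
          BPF_CGROUP_STORAGE_SHARED,
          BPF_CGROUP_STORAGE_PERCPU,      /* added by a later patch in the set */
          __BPF_CGROUP_STORAGE_MAX
  };
  #define MAX_BPF_CGROUP_STORAGE_TYPE __BPF_CGROUP_STORAGE_MAX

  /* before:  struct bpf_map *cgroup_storage;
   * after:   struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE]; */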

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: permit CGROUP_DEVICE programs accessing helper bpf_get_current_cgroup_id()
Yonghong Song [Thu, 27 Sep 2018 21:37:30 +0000 (14:37 -0700)]
bpf: permit CGROUP_DEVICE programs accessing helper bpf_get_current_cgroup_id()

Currently, helper bpf_get_current_cgroup_id() is not permitted
for CGROUP_DEVICE type of programs. If the helper is used
in such cases, the verifier will log the following error:

  0: (bf) r6 = r1
  1: (69) r7 = *(u16 *)(r6 +0)
  2: (85) call bpf_get_current_cgroup_id#80
  unknown func bpf_get_current_cgroup_id#80

bpf_get_current_cgroup_id() is useful for CGROUP_DEVICE programs
in order to customize the action based on the cgroup id.
This patch adds such support.
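
With this change, a device-cgroup program could use the helper along these
lines (a minimal sketch; the section name follows libbpf's convention and
the cgroup id constant is purely illustrative):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("cgroup/dev")
  int dev_filter(struct bpf_cgroup_dev_ctx *ctx)
  {
          /* Hypothetical id; a real loader would patch or look this up. */
          const __u64 allowed_cgid = 1234;

          if (bpf_get_current_cgroup_id() == allowed_cgid)
                  return 1;       /* permit the device access */
          return 0;               /* deny it */
  }

  char _license[] SEC("license") = "GPL";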

Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoMerge branch 'bpf-libbpf-attach-by-name'
Daniel Borkmann [Thu, 27 Sep 2018 19:15:00 +0000 (21:15 +0200)]
Merge branch 'bpf-libbpf-attach-by-name'

Andrey Ignatov says:

====================
This patch set introduces the libbpf_attach_type_by_name function in libbpf
to identify the attach type by section name.

This is useful to avoid writing the same logic over and over again in user
space applications that leverage libbpf.

Patch 1 has more details on the new function and problem being solved.
Patches 2 and 3 add support for new section names.
Patch 4 uses new function in a selftest.
Patch 5 adds selftest for libbpf_{prog,attach}_type_by_name.

As a side note, there are a lot of inconsistencies now between names used
by libbpf and bpftool (e.g. cgroup/skb vs cgroup_skb, cgroup_device and
device vs cgroup/dev, sockops vs sock_ops, etc). This patch set does not
address them, but it tries not to make them harder to address in the future.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoselftests/bpf: Test libbpf_{prog,attach}_type_by_name
Andrey Ignatov [Wed, 26 Sep 2018 22:24:57 +0000 (15:24 -0700)]
selftests/bpf: Test libbpf_{prog,attach}_type_by_name

Add selftest for libbpf functions libbpf_prog_type_by_name and
libbpf_attach_type_by_name.

Example of output:
  % ./tools/testing/selftests/bpf/test_section_names
  Summary: 35 PASSED, 0 FAILED

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agoselftests/bpf: Use libbpf_attach_type_by_name in test_socket_cookie
Andrey Ignatov [Wed, 26 Sep 2018 22:24:56 +0000 (15:24 -0700)]
selftests/bpf: Use libbpf_attach_type_by_name in test_socket_cookie

Use the newly introduced libbpf_attach_type_by_name in the
test_socket_cookie selftest.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agolibbpf: Support sk_skb/stream_{parser, verdict} section names
Andrey Ignatov [Wed, 26 Sep 2018 22:24:55 +0000 (15:24 -0700)]
libbpf: Support sk_skb/stream_{parser, verdict} section names

Add section names for BPF_SK_SKB_STREAM_PARSER and
BPF_SK_SKB_STREAM_VERDICT attach types to be able to identify them in
libbpf_attach_type_by_name.

"stream_parser" and "stream_verdict" are used instead of simple "parser"
and "verdict" just to avoid possible confusion in a place where attach
type is used alone (e.g. in bpftool's show sub-commands) since there is
another attach point that can be named as "verdict": BPF_SK_MSG_VERDICT.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agolibbpf: Support cgroup_skb/{e,in}gress section names
Andrey Ignatov [Wed, 26 Sep 2018 22:24:54 +0000 (15:24 -0700)]
libbpf: Support cgroup_skb/{e,in}gress section names

Add section names for BPF_CGROUP_INET_INGRESS and BPF_CGROUP_INET_EGRESS
attach types to be able to identify them in libbpf_attach_type_by_name.

"cgroup_skb" is used instead of "cgroup/skb" mostly to easy possible
unifying of how libbpf and bpftool works with section names:
* bpftool uses "cgroup_skb" to in "prog list" sub-command;
* bpftool uses "ingress" and "egress" in "cgroup list" sub-command;
* having two parts instead of three in a string like "cgroup_skb/ingress"
  can be leveraged to split it to prog_type part and attach_type part,
  or vise versa: use two parts to make a section name.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agolibbpf: Introduce libbpf_attach_type_by_name
Andrey Ignatov [Wed, 26 Sep 2018 22:24:53 +0000 (15:24 -0700)]
libbpf: Introduce libbpf_attach_type_by_name

There is a common use-case where an ELF object contains multiple BPF
programs and every program has its own section name. If it's cgroup-bpf
then the programs have to be 1) loaded and 2) attached to a cgroup.

It's convenient to have the information necessary to load a BPF program
together with the program itself. This is where the section name works
well in conjunction with libbpf_prog_type_by_name, which identifies
prog_type and expected_attach_type; these can be used with BPF_PROG_LOAD.

But there is currently no way to identify attach_type by section name,
which leads to messy code in user space that reinvents the guessing
logic every time it has to identify the attach type to use with
BPF_PROG_ATTACH.

The patch introduces libbpf_attach_type_by_name, which guesses the
attach type from the section name if the program can be attached.

The difference between the expected_attach_type provided by
libbpf_prog_type_by_name and the attach_type provided by
libbpf_attach_type_by_name is that the former is used at BPF_PROG_LOAD
time and can be zero if a program of prog_type X has only one
corresponding attach type Y, whereas the latter provides the specific
attach type to use with BPF_PROG_ATTACH.

No new section names were added to the section_names array. Only
existing ones were reorganized and attach_type was added where
appropriate.
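
Typical usage of the two functions together could look roughly like this
(a minimal sketch; the surrounding program, fds and error handling are
assumed):

  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  static int load_and_attach(const char *sec_name, struct bpf_program *prog,
                             int prog_fd, int cgroup_fd)
  {
          enum bpf_prog_type prog_type;
          enum bpf_attach_type expected_attach_type, attach_type;

          /* For BPF_PROG_LOAD: prog_type + expected_attach_type. */
          if (libbpf_prog_type_by_name(sec_name, &prog_type,
                                       &expected_attach_type))
                  return -1;
          bpf_program__set_type(prog, prog_type);
          bpf_program__set_expected_attach_type(prog, expected_attach_type);

          /* For BPF_PROG_ATTACH: the specific attach type. */
          if (libbpf_attach_type_by_name(sec_name, &attach_type))
                  return -1;

          return bpf_prog_attach(prog_fd, cgroup_fd, attach_type, 0);
  }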

Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpf: test_bpf: add init_net to dev for flow_dissector
Song Liu [Thu, 27 Sep 2018 16:34:41 +0000 (09:34 -0700)]
bpf: test_bpf: add init_net to dev for flow_dissector

Latest changes in __skb_flow_dissect() assume skb->dev has a valid nd_net.
However, this is not true for test_bpf. As a result, test_bpf.ko crashes
the system with the following stack trace:

[ 1133.716622] BUG: unable to handle kernel paging request at 0000000000001030
[ 1133.716623] PGD 8000001fbf7ee067
[ 1133.716624] P4D 8000001fbf7ee067
[ 1133.716624] PUD 1f6c1cf067
[ 1133.716625] PMD 0
[ 1133.716628] Oops: 0000 [#1] SMP PTI
[ 1133.716630] CPU: 7 PID: 40473 Comm: modprobe Kdump: loaded Not tainted 4.19.0-rc5-00805-gca11cc92ccd2 #1167
[ 1133.716631] Hardware name: Wiwynn Leopard-Orv2/Leopard-DDR BW, BIOS LBM12.5 12/06/2017
[ 1133.716638] RIP: 0010:__skb_flow_dissect+0x83/0x1680
[ 1133.716639] Code: 04 00 00 41 0f b7 44 24 04 48 85 db 4d 8d 14 07 0f 84 01 02 00 00 48 8b 43 10 48 85 c0 0f 84 e5 01 00 00 48 8b 80 a8 04 00 00 <48> 8b 90 30 10 00 00 48 85 d2 0f 84 dd 01 00 00 31 c0 b9 05 00 00
[ 1133.716640] RSP: 0018:ffffc900303c7a80 EFLAGS: 00010282
[ 1133.716642] RAX: 0000000000000000 RBX: ffff881fea0b7400 RCX: 0000000000000000
[ 1133.716643] RDX: ffffc900303c7bb4 RSI: ffffffff8235c3e0 RDI: ffff881fea0b7400
[ 1133.716643] RBP: ffffc900303c7b80 R08: 0000000000000000 R09: 000000000000000e
[ 1133.716644] R10: ffffc900303c7bb4 R11: ffff881fb6840400 R12: ffffffff8235c3e0
[ 1133.716645] R13: 0000000000000008 R14: 000000000000001e R15: ffffc900303c7bb4
[ 1133.716646] FS:  00007f54e75d3740(0000) GS:ffff881fff5c0000(0000) knlGS:0000000000000000
[ 1133.716648] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1133.716649] CR2: 0000000000001030 CR3: 0000001f6c226005 CR4: 00000000003606e0
[ 1133.716649] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1133.716650] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1133.716651] Call Trace:
[ 1133.716660]  ? sched_clock_cpu+0xc/0xa0
[ 1133.716662]  ? sched_clock_cpu+0xc/0xa0
[ 1133.716665]  ? log_store+0x1b5/0x260
[ 1133.716667]  ? up+0x12/0x60
[ 1133.716669]  ? skb_get_poff+0x4b/0xa0
[ 1133.716674]  ? __kmalloc_reserve.isra.47+0x2e/0x80
[ 1133.716675]  skb_get_poff+0x4b/0xa0
[ 1133.716680]  bpf_skb_get_pay_offset+0xa/0x10
[ 1133.716686]  ? test_bpf_init+0x578/0x1000 [test_bpf]
[ 1133.716690]  ? netlink_broadcast_filtered+0x153/0x3d0
[ 1133.716695]  ? free_pcppages_bulk+0x324/0x600
[ 1133.716696]  ? 0xffffffffa0279000
[ 1133.716699]  ? do_one_initcall+0x46/0x1bd
[ 1133.716704]  ? kmem_cache_alloc_trace+0x144/0x1a0
[ 1133.716709]  ? do_init_module+0x5b/0x209
[ 1133.716712]  ? load_module+0x2136/0x25d0
[ 1133.716715]  ? __do_sys_finit_module+0xba/0xe0
[ 1133.716717]  ? __do_sys_finit_module+0xba/0xe0
[ 1133.716719]  ? do_syscall_64+0x48/0x100
[ 1133.716724]  ? entry_SYSCALL_64_after_hwframe+0x44/0xa9

This patch fixes test_bpf by using init_net in the dummy dev.
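
A rough sketch of the idea behind the fix (illustrative only, not the exact
upstream diff; the helper name is hypothetical):

  /* Give the dummy net_device a valid netns so __skb_flow_dissect() can
   * safely dereference dev_net(skb->dev). */
  static struct net_device dev;

  static void attach_dummy_dev(struct sk_buff *skb)
  {
          dev_net_set(&dev, &init_net);
          skb->dev = &dev;
  }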

Fixes: d58e468b1112 ("flow_dissector: implements flow dissector BPF hook")
Reported-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Petar Penkov <ppenkov@google.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agobpftool: Fix bpftool net output
Andrey Ignatov [Tue, 25 Sep 2018 22:20:37 +0000 (15:20 -0700)]
bpftool: Fix bpftool net output

Print `bpftool net` output to stdout instead of stderr. Only errors
should be printed to stderr. Regular output should go to stdout and this
is what all other subcommands of bpftool do, including --json and
--pretty formats of `bpftool net` itself.

Fixes: f6f3bac08ff9 ("tools/bpf: bpftool: add net support")
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
6 years agonet-ipv4: remove 2 always zero parameters from ipv4_redirect()
Maciej Żenczykowski [Wed, 26 Sep 2018 03:56:27 +0000 (20:56 -0700)]
net-ipv4: remove 2 always zero parameters from ipv4_redirect()

(the parameters in question are mark and flow_flags)

Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet-ipv4: remove 2 always zero parameters from ipv4_update_pmtu()
Maciej Żenczykowski [Wed, 26 Sep 2018 03:56:26 +0000 (20:56 -0700)]
net-ipv4: remove 2 always zero parameters from ipv4_update_pmtu()

(the parameters in question are mark and flow_flags)

Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: mvneta: Add support for 2500Mbps SGMII
Maxime Chevallier [Tue, 25 Sep 2018 13:59:39 +0000 (15:59 +0200)]
net: mvneta: Add support for 2500Mbps SGMII

The mvneta controller can handle speeds up to 2500Mbps on the SGMII
interface. This relies on the SerDes configuration: the lane must be
configured at 3.125Gbps, and we can't use in-band autoneg at that speed.

The main issue when supporting that speed on this particular controller
is that the link partner can send ethernet frames with a shortened
preamble, which, if not explicitly enabled in the controller, will cause
unexpected behaviour.

This was tested on Armada 385, with the comphy configuration done in
the bootloader.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'net-vhost-improve-performance-when-enable-busyloop'
David S. Miller [Thu, 27 Sep 2018 03:25:55 +0000 (20:25 -0700)]
Merge branch 'net-vhost-improve-performance-when-enable-busyloop'

Tonghao Zhang says:

====================
net: vhost: improve performance when enable busyloop

This patch set improves the guest receive performance.
On the handle_tx side, we poll the socket receive queue
at the same time; handle_rx does the same.

For a detailed performance report, see patch 4.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: vhost: add rx busy polling in tx path
Tonghao Zhang [Tue, 25 Sep 2018 12:36:52 +0000 (05:36 -0700)]
net: vhost: add rx busy polling in tx path

This patch improves the guest receive performance.
On the handle_tx side, we poll the socket receive queue at the
same time; handle_rx does the same.

We set poll-us=100us and use netperf to test throughput
and mean latency. When running the tests, the vhost-net kthread
of that VM is always at 100% CPU. The command is shown below.

Rx performance is greatly improved by this patch. There is no
notable performance change on tx with this series, though. This
patch is useful for bi-directional traffic.

netperf -H IP -t TCP_STREAM -l 20 -- -O "THROUGHPUT, THROUGHPUT_UNITS, MEAN_LATENCY"

Topology:
[Host] ->linux bridge -> tap vhost-net ->[Guest]

TCP_STREAM:
* Without the patch:  19842.95 Mbps, 6.50 us mean latency
* With the patch:     37598.20 Mbps, 3.43 us mean latency

Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: vhost: factor out busy polling logic to vhost_net_busy_poll()
Tonghao Zhang [Tue, 25 Sep 2018 12:36:51 +0000 (05:36 -0700)]
net: vhost: factor out busy polling logic to vhost_net_busy_poll()

Factor out the generic busy polling logic; it will be
used in the tx path in the next patch. With this patch,
qemu can set a different busyloop_timeout for the rx queue.

To avoid duplicated code, introduce the helper functions:
* sock_has_rx_data() (renamed from sk_has_rx_data())
* vhost_net_busy_poll_try_queue()

Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: vhost: replace magic number of lock annotation
Tonghao Zhang [Tue, 25 Sep 2018 12:36:50 +0000 (05:36 -0700)]
net: vhost: replace magic number of lock annotation

Use the VHOST_NET_VQ_XXX as a subclass for mutex_lock_nested.

Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: vhost: lock the vqs one by one
Tonghao Zhang [Tue, 25 Sep 2018 12:36:49 +0000 (05:36 -0700)]
net: vhost: lock the vqs one by one

This patch changes the locking from taking all vqs at the
same time to locking them one by one. This will be used by
the next patch to avoid a deadlock.

Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agotcp: expose sk_state in tcp_retransmit_skb tracepoint
Yafang Shao [Mon, 24 Sep 2018 12:57:29 +0000 (20:57 +0800)]
tcp: expose sk_state in tcp_retransmit_skb tracepoint

With sk_state exposed, we can see in which state the retransmission
occurs. That gives us more detail for diagnostics.
For example, if the retransmission occurs in the SYN_SENT state, it may
also indicate that the SYN packet was dropped on the remote peer due to
a full SYN backlog queue, and then we could check the remote peer.

BTW, SYNACK retransmission is traced in the tcp_retransmit_synack
tracepoint.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: faraday: fix return type of ndo_start_xmit function
YueHaibing [Wed, 26 Sep 2018 09:13:05 +0000 (17:13 +0800)]
net: faraday: fix return type of ndo_start_xmit function

The method ndo_start_xmit() is defined as returning a 'netdev_tx_t',
which is a typedef for an enum type, so make sure the implementation in
this driver returns a 'netdev_tx_t' value, and change the function
return type to netdev_tx_t.

Found by coccinelle.
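
The intended shape is roughly the following (driver and function names are
illustrative, not the actual faraday code):

  static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
  {
          /* ... queue the skb to the hardware ... */
          return NETDEV_TX_OK;    /* a netdev_tx_t value, not a plain int */
  }

  static const struct net_device_ops foo_netdev_ops = {
          .ndo_start_xmit = foo_start_xmit,
  };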

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: smsc: fix return type of ndo_start_xmit function
YueHaibing [Wed, 26 Sep 2018 09:06:29 +0000 (17:06 +0800)]
net: smsc: fix return type of ndo_start_xmit function

The method ndo_start_xmit() is defined as returning a 'netdev_tx_t',
which is a typedef for an enum type, so make sure the implementation in
this driver returns a 'netdev_tx_t' value, and change the function
return type to netdev_tx_t.

Found by coccinelle.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: liquidio: list usage cleanup
zhong jiang [Wed, 26 Sep 2018 08:56:50 +0000 (16:56 +0800)]
net: liquidio: list usage cleanup

Trivial cleanup: list_move_tail() does the same thing as
list_del() + list_add_tail(), hence just replace them.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: qed: list usage cleanup
zhong jiang [Wed, 26 Sep 2018 08:53:00 +0000 (16:53 +0800)]
net: qed: list usage cleanup

Trivial cleanup: list_move_tail() does the same thing as
list_del() + list_add_tail(), hence just replace them.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'net-bridge-convert-bool-options-to-bits'
David S. Miller [Wed, 26 Sep 2018 17:04:23 +0000 (10:04 -0700)]
Merge branch 'net-bridge-convert-bool-options-to-bits'

Nikolay Aleksandrov says:

====================
net: bridge: convert bool options to bits

A lot of boolean bridge options have been added around the net_bridge
structure, resulting in holes and, more importantly, different cache
lines that need to be fetched in the fast path. This set moves all of
those to bits in a bitfield which resides in a hot cache line, thus
reducing the size of net_bridge, the number of holes and the number of
cache lines needed for the fast path.
The set is also sent in preparation for new boolean options, to avoid
spreading them in the structure and making new holes.
One nice side-effect is that we avoid potential race conditions by using
bitops, since some of the options were bits being set directly in
parallel, risking hard-to-debug issues (has_ipv6_addr).

Before:
 size: 1184, holes: 8, sum holes: 30
After:
 size: 1160, holes: 3, sum holes: 7

Patch 01 is a trivial style fix
Patch 02 adds the new options bitfield and converts the vlan boolean
         options to bits
Patches 03-08 convert the rest of the boolean options to bits
Patch 09 re-arranges a few fields in net_bridge to further reduce size

v2: patch 09: remove the comment about offload_fwd_mark in net_bridge and
    leave it where it is now, thanks to Ido for spotting it
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: pack net_bridge better
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:07 +0000 (17:01 +0300)]
net: bridge: pack net_bridge better

Further reduce the size of net_bridge by 8 bytes and reduce the number of
holes in it:
 Before: holes: 5, sum holes: 15
 After: holes: 3, sum holes: 7

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: convert mtu_set_by_user to a bit
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:06 +0000 (17:01 +0300)]
net: bridge: convert mtu_set_by_user to a bit

Convert the last remaining bool option to a bit, thus reducing the overall
net_bridge size by a further 8 bytes.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: convert neigh_suppress_enabled option to a bit
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:05 +0000 (17:01 +0300)]
net: bridge: convert neigh_suppress_enabled option to a bit

Convert the neigh_suppress_enabled option to a bit.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: convert mcast options to bits
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:04 +0000 (17:01 +0300)]
net: bridge: convert mcast options to bits

This patch converts the rest of the mcast options to bits. It also packs
the mcast options a little better by moving multicast_mld_version to an
existing hole, reducing the net_bridge size by 8 bytes.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: convert and rename mcast disabled
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:03 +0000 (17:01 +0300)]
net: bridge: convert and rename mcast disabled

Convert mcast disabled to an option bit and, while doing so, convert the
logic to check whether multicast is enabled instead. That is, make the
logic follow the option value: if it's set then mcast is enabled, and vice
versa. This avoids a few confusing places where we inverted the value
being set in order to follow the mcast_disabled logic.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: convert group_addr_set option to a bit
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:02 +0000 (17:01 +0300)]
net: bridge: convert group_addr_set option to a bit

Convert group_addr_set internal bridge opt to a bit.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: convert nf call options to bits
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:01 +0000 (17:01 +0300)]
net: bridge: convert nf call options to bits

No functional change, convert of nf_call_[ip|ip6|arp]tables to bits.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: add bitfield for options and convert vlan opts
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:01:00 +0000 (17:01 +0300)]
net: bridge: add bitfield for options and convert vlan opts

Bridge options have usually been added as separate fields all over the
net_bridge struct, taking up space and ending up in different cache lines.
Let's move them to a single bitfield to save space and speed up lookups.
This patch adds a simple API for modifying and retrieving options using
bitops and converts the first user of the API: the bridge vlan options
(vlan_enabled and vlan_stats_enabled).
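
The option API can be sketched roughly as follows (simplified from the
upstream change; only the first two option bits are shown):

  enum net_bridge_opts {
          BROPT_VLAN_ENABLED,
          BROPT_VLAN_STATS_ENABLED,
          /* further boolean options are converted by later patches */
  };

  /* in struct net_bridge:   unsigned long options; */

  static inline bool br_opt_get(const struct net_bridge *br,
                                enum net_bridge_opts opt)
  {
          return test_bit(opt, &br->options);
  }

  void br_opt_toggle(struct net_bridge *br, enum net_bridge_opts opt, bool on)
  {
          bool cur = !!test_bit(opt, &br->options);

          if (cur == on)
                  return;

          if (on)
                  set_bit(opt, &br->options);
          else
                  clear_bit(opt, &br->options);
  }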

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: bridge: make struct opening bracket consistent
Nikolay Aleksandrov [Wed, 26 Sep 2018 14:00:59 +0000 (17:00 +0300)]
net: bridge: make struct opening bracket consistent

Currently we have a mix of opening brackets on new lines and on the same
line; let's move them all to the same line.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 's390-net-next'
David S. Miller [Wed, 26 Sep 2018 16:56:08 +0000 (09:56 -0700)]
Merge branch 's390-net-next'

Julian Wiedmann says:

====================
s390/net: updates 2018-09-26

please apply one more series of cleanups and small improvements for qeth
to net-next. Note that one patch needs to touch both af_iucv and qeth, in
order to untangle their receive paths.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: remove duplicated carrier state tracking
Julian Wiedmann [Wed, 26 Sep 2018 16:29:16 +0000 (18:29 +0200)]
s390/qeth: remove duplicated carrier state tracking

The netdevice is always available, so apply any carrier state changes to it
without caching them.
On a STARTLAN event (i.e. carrier-up), defer updating the state to
qeth_core_hardsetup_card() in the subsequent recovery action.

Also remove the carrier-state checks from the xmit routines. Stopping
transmission on carrier-down is the responsibility of upper-level code
(e.g. see dev_direct_xmit()).

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: clean up drop conditions for received cmds
Julian Wiedmann [Wed, 26 Sep 2018 16:29:15 +0000 (18:29 +0200)]
s390/qeth: clean up drop conditions for received cmds

If qeth_check_ipa_data() consumed an event, there's no point in
processing it further. So drop it early, and make the surrounding code
a tiny bit more readable.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: re-indent qeth_check_ipa_data()
Julian Wiedmann [Wed, 26 Sep 2018 16:29:14 +0000 (18:29 +0200)]
s390/qeth: re-indent qeth_check_ipa_data()

Pull one level of checking up into qeth_send_control_data_cb(), and
clean up an else-after-return. No functional change.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: consume local address events
Julian Wiedmann [Wed, 26 Sep 2018 16:29:13 +0000 (18:29 +0200)]
s390/qeth: consume local address events

We have no code that is waiting for these events, so just drop them when
they arrive.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: remove various redundant code
Julian Wiedmann [Wed, 26 Sep 2018 16:29:12 +0000 (18:29 +0200)]
s390/qeth: remove various redundant code

1. tracing iob->rc makes no sense when it hasn't been modified by the
   callback,
2. the qeth_dbf_list is declared with LIST_HEAD, which also initializes
   the list,
3. the ccwgroup core only calls the thaw/restore callbacks if the gdev
   is online, so we don't have to check for it again.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: remove CARD_FROM_CDEV helper
Julian Wiedmann [Wed, 26 Sep 2018 16:29:11 +0000 (18:29 +0200)]
s390/qeth: remove CARD_FROM_CDEV helper

The cdev-to-card translation walks through two layers of drvdata,
with no locking or refcounting (whereas e.g. the ccwgroup core only
accesses a cdev's drvdata while holding the ccwlock).

This might be safe for now, but any careless usage of the helper has the
potential for subtle races and use-after-frees. Luckily there's only
one occurrence where we _really_ need it (in qeth_irq()); for any other
user we can just pass in an appropriate card pointer.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: pass card pointer in iob callback
Julian Wiedmann [Wed, 26 Sep 2018 16:29:10 +0000 (18:29 +0200)]
s390/qeth: pass card pointer in iob callback

This allows us to remove the CARD_FROM_CDEV calls in the iob callbacks.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: re-use qeth_notify_skbs()
Julian Wiedmann [Wed, 26 Sep 2018 16:29:09 +0000 (18:29 +0200)]
s390/qeth: re-use qeth_notify_skbs()

When not using the CQ, this allows us to avoid the second skb queue walk
in qeth_release_skbs().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: remove additional skb refcount
Julian Wiedmann [Wed, 26 Sep 2018 16:29:08 +0000 (18:29 +0200)]
s390/qeth: remove additional skb refcount

This was presumably left over from back when qeth recursed into
dev_queue_xmit().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: replace open-coded skb_queue_walk()
Julian Wiedmann [Wed, 26 Sep 2018 16:29:07 +0000 (18:29 +0200)]
s390/qeth: replace open-coded skb_queue_walk()

To match the use of __skb_queue_purge(), also make the skb's enqueue in
qeth_fill_buffer() lockless.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet/af_iucv: locate IUCV header via skb_network_header()
Julian Wiedmann [Wed, 26 Sep 2018 16:29:06 +0000 (18:29 +0200)]
net/af_iucv: locate IUCV header via skb_network_header()

This patch attempts to untangle the TX and RX code in qeth from
af_iucv's respective HiperTransport path:
On the TX side, pointing skb_network_header() at the IUCV header
means that qeth_l3_fill_af_iucv_hdr() no longer needs a magical offset
to access the header.
On the RX side, qeth pulls the (fake) L2 header off the skb like any
normal ethernet driver would. This makes working with the IUCV header
in af_iucv easier, since we no longer have to assume a fixed skb layout.

While at it, replace the open-coded length checks in af_iucv's RX path
with pskb_may_pull().

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: on gdev release, reset drvdata
Julian Wiedmann [Wed, 26 Sep 2018 16:29:05 +0000 (18:29 +0200)]
s390/qeth: on gdev release, reset drvdata

qeth_core_probe_device() sets the gdev's drvdata, but doesn't reset it
on a subsequent error. Move the (re-)setting around a bit, so that it
happens symmetrically on allocating/freeing the qeth_card struct.

This is no actual problem, as the ccwgroup core will discard the gdev
on a probe error. But from qeth's perspective the gdev is an external
resource, so it's best to manage it cleanly.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: fix discipline unload after setup error
Julian Wiedmann [Wed, 26 Sep 2018 16:29:04 +0000 (18:29 +0200)]
s390/qeth: fix discipline unload after setup error

Device initialization code usually first loads a subdriver
(via qeth_core_load_discipline()), and then runs its setup() callback.
If this fails, it rolls back the load via qeth_core_free_discipline().

qeth_core_free_discipline() expects the options.layer attribute to be
initialized, but on error in setup() that's currently not the case,
resulting in unbalanced symbol_put() calls.

Fix this by setting options.layer when loading the subdriver.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: use DEFINE_MUTEX for qeth_mod_mutex
Julian Wiedmann [Wed, 26 Sep 2018 16:29:03 +0000 (18:29 +0200)]
s390/qeth: use DEFINE_MUTEX for qeth_mod_mutex

Consolidate the declaration and initialization of a static variable.
While at it, reduce its scope in qeth_core_load_discipline(), and simplify
the return logic accordingly.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agos390/qeth: convert layer attribute to enum
Julian Wiedmann [Wed, 26 Sep 2018 16:29:02 +0000 (18:29 +0200)]
s390/qeth: convert layer attribute to enum

While the raw values are fixed due to their use in a sysfs attribute,
we can still use the proper QETH_DISCIPLINE_* enum within the driver.

Also move the initialization into qeth_set_initial_options(), along with
all other user-configurable fields.

Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: phy: marvell: Fix build.
David S. Miller [Wed, 26 Sep 2018 05:41:31 +0000 (22:41 -0700)]
net: phy: marvell: Fix build.

Local variable 'autoneg' doesn't even exist:

drivers/net/phy/marvell.c: In function 'm88e1121_config_aneg':
drivers/net/phy/marvell.c:468:25: error: 'autoneg' undeclared (first use in this function); did you mean 'put_net'?
  if (phydev->autoneg != autoneg || changed) {
                         ^~~~~~~

Fixes: d6ab93364734 ("net: phy: marvell: Avoid unnecessary soft reset")
Reported-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agobridge: br_arp_nd_proxy: set icmp6_router if neigh has NTF_ROUTER
Roopa Prabhu [Tue, 25 Sep 2018 21:39:14 +0000 (14:39 -0700)]
bridge: br_arp_nd_proxy: set icmp6_router if neigh has NTF_ROUTER

Fixes: ed842faeb2bd ("bridge: suppress nd pkts on BR_NEIGH_SUPPRESS ports")
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
David S. Miller [Wed, 26 Sep 2018 03:29:38 +0000 (20:29 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2018-09-25

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Allow for RX stack hardening by implementing the kernel's flow
   dissector in BPF. Idea was originally presented at netconf 2017 [0].
   Quote from merge commit:

     [...] Because of the rigorous checks of the BPF verifier, this
     provides significant security guarantees. In particular, the BPF
     flow dissector cannot get inside of an infinite loop, as with
     CVE-2013-4348, because BPF programs are guaranteed to terminate.
     It cannot read outside of packet bounds, because all memory accesses
     are checked. Also, with BPF the administrator can decide which
     protocols to support, reducing potential attack surface. Rarely
     encountered protocols can be excluded from dissection and the
     program can be updated without kernel recompile or reboot if a
     bug is discovered. [...]

   Also, a sample flow dissector has been implemented in BPF as part
   of this work, from Petar and Willem.

   [0] http://vger.kernel.org/netconf2017_files/rx_hardening_and_udp_gso.pdf

2) Add support for bpftool to list currently active attachment
   points of BPF networking programs providing a quick overview
   similar to bpftool's perf subcommand, from Yonghong.

3) Fix a verifier pruning instability bug where a union member
   from the register state was not cleared properly leading to
   branches not being pruned despite them being valid candidates,
   from Alexei.

4) Various smaller fast-path optimizations in XDP's map redirect
   code, from Jesper.

5) Enable to recognize BPF_MAP_TYPE_REUSEPORT_SOCKARRAY maps
   in bpftool, from Roman.

6) Remove a duplicate check in libbpf that probes for function
   storage, from Taeung.

7) Fix an issue in test_progs by avoid checking for errno since
   on success its value should not be checked, from Mauricio.

8) Fix unused variable warning in bpf_getsockopt() helper when
   CONFIG_INET is not configured, from Anders.

9) Fix a compilation failure in the BPF sample code's use of
   bpf_flow_keys, from Prashant.

10) Minor cleanups in BPF code, from Yue and Zhong.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: dsa: lantiq_gswip: Depend on HAS_IOMEM
Hauke Mehrtens [Tue, 25 Sep 2018 19:55:33 +0000 (21:55 +0200)]
net: dsa: lantiq_gswip: Depend on HAS_IOMEM

The driver uses devm_ioremap_resource(), which is only available when
CONFIG_HAS_IOMEM is set, so make the driver depend on this config option.
User Mode Linux does not have CONFIG_HAS_IOMEM set and the driver was
failing on this architecture.

Fixes: 14fceff4771e ("net: dsa: Add Lantiq / Intel DSA driver for vrx200")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'net-phy-Eliminate-unnecessary-soft'
David S. Miller [Wed, 26 Sep 2018 03:26:45 +0000 (20:26 -0700)]
Merge branch 'net-phy-Eliminate-unnecessary-soft'

Florian Fainelli says:

====================
net: phy: Eliminate unnecessary soft

This patch series eliminates unnecessary software resets of the PHY.
This should hopefully not break anybody's hardware, but I would
appreciate testing to make sure this is the case.

Sorry for the long email list; I wanted to make sure I reached out to
all the people who made changes to the Marvell PHY driver.

Thank you!

Changes since RFT:

- added Tested-by tags from Wang, Dongsheng, Andrew, Chris and Clemens
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: phy: marvell: Avoid unnecessary soft reset
Florian Fainelli [Tue, 25 Sep 2018 18:28:46 +0000 (11:28 -0700)]
net: phy: marvell: Avoid unnecessary soft reset

The BMCR.RESET bit on the Marvell PHYs has a special meaning in that
it commits the register writes into the HW for it to latch and be
configured appropriately. Doing software resets causes link drops, and
this is unnecessary disruption if nothing changed.

Determine from marvell_set_polarity()'s return code whether the register
value was changed, and if it was, propagate that to the logic that hits
the software reset bit.

This avoids doing an unnecessary soft reset if the PHY is configured in
the same state it was previously; it also eliminates the need for a
separate m88e1111_config_aneg() function, since it is now the same as
marvell_config_aneg().

Tested-by: Wang, Dongsheng <dongsheng.wang@hxt-semitech.com>
Tested-by: Chris Healy <cphealy@gmail.com>
Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Clemens Gruber <clemens.gruber@pqgruber.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: phy: Stop with excessive soft reset
Florian Fainelli [Tue, 25 Sep 2018 18:28:45 +0000 (11:28 -0700)]
net: phy: Stop with excessive soft reset

While consolidating the PHY reset in phy_init_hw() with an unconditional
BMCR soft-reset, I became quite trigger happy with those resets. This was
later deactivated for the Generic PHY driver, on the premise that a prior
software entity (e.g. a bootloader) might have applied workarounds, in
commit 0878fff1f42c ("net: phy: Do not perform software reset for
Generic PHY").

Since we have a hook to wire up a soft_reset callback, just use that and
get rid of the call to genphy_soft_reset() entirely. This speeds up
initialization and link establishment for most PHYs out there that do
not require a reset.

Fixes: 87aa9f9c61ad ("net: phy: consolidate PHY reset in phy_init_hw()")
Tested-by: Wang, Dongsheng <dongsheng.wang@hxt-semitech.com>
Tested-by: Chris Healy <cphealy@gmail.com>
Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Clemens Gruber <clemens.gruber@pqgruber.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Wed, 26 Sep 2018 03:25:00 +0000 (20:25 -0700)]
Merge branch '40GbE' of git://git./linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2018-09-25

This series contains updates to i40e and xsk.

Mariusz fixes an issue where the VF link state was not being updated
properly when the PF is down or up.  Also cleaned up the promiscuous
configuration during a VF reset.

Patryk simplifies the code a bit to use the variables for PF and HW that
are declared, rather than using the VSI pointers.  Cleaned up the
message length parameter to several virtchnl functions, since it was not
being used (or needed).

Harshitha fixes two potential race conditions when trying to change VF
settings by creating a helper function to validate that the VF is
enabled and that the VSI is set up.

Sergey corrects a double "link down" message by putting in a check for
whether or not the link is up or going down.

Björn addresses an AF_XDP zero-copy issue where buffers passed
from userspace to the kernel were leaked when the hardware descriptor
ring was torn down.  A zero-copy capable driver picks buffers off the
fill ring and places them on the hardware receive ring, to be completed
at a later point when DMA is complete. Similarly, on the transmit side
the driver picks buffers off the transmit ring and places them on the
transmit hardware ring.

In the typical flow, the receive buffer will be placed onto a receive
ring (completed to the user), and the transmit buffer will be placed on
the completion ring to notify the user that the transfer is done.

However, if the driver needs to tear down the hardware rings for some
reason (interface goes down, reconfiguration and such), the userspace
buffers cannot be leaked. They have to be reused or completed back to
userspace.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'Refactor-classifier-API-to-work-with-Qdisc-blocks-without-rtnl-lock'
David S. Miller [Wed, 26 Sep 2018 03:17:36 +0000 (20:17 -0700)]
Merge branch 'Refactor-classifier-API-to-work-with-Qdisc-blocks-without-rtnl-lock'

Vlad Buslov says:

====================
Refactor classifier API to work with Qdisc/blocks without rtnl lock

Currently, all netlink protocol handlers for updating rules, actions and
qdiscs are protected with a single global rtnl lock, which removes any
possibility of parallelism. This patch set is the third step toward
removing the rtnl lock dependency from the TC rules update path.

Recently, the new rtnl registration flag RTNL_FLAG_DOIT_UNLOCKED was added.
Handlers registered with this flag are called without RTNL taken. The end
goal is to have the rule update handlers (RTM_NEWTFILTER, RTM_DELTFILTER,
etc.) registered with the UNLOCKED flag to allow parallel execution.
However, there is no intention to completely remove or split the rtnl lock
itself. This patch set addresses specific problems in the implementation
of the classifiers API that prevent its control path from being executed
concurrently. Additional changes are required to refactor the classifiers
API and individual classifiers for parallel execution. This patch set
lays the groundwork to eventually register rule update handlers as
rtnl-unlocked by modifying code in the cls API that works with Qdiscs and
blocks. A following patch set does the same for chains and classifiers.

The goal of this change is to refactor tcf_block_find() and its
dependencies to allow concurrent execution:
- Extend Qdisc API with rcu to lookup and take reference to Qdisc
  without relying on rtnl lock.
- Extend tcf_block with atomic reference counting and rcu.
- Always take reference to tcf_block while working with it.
- Implement tcf_block_release() to release resources obtained by
  tcf_block_find()
- Create infrastructure to allow registering Qdiscs with class ops that
  do not require the caller to hold rtnl lock.

All three netlink rule update handlers use tcf_block_find() to lookup
Qdisc and block, and this patch set introduces additional means of
synchronization to substitute rtnl lock in cls API.

Some functions in the cls and sch APIs have historic names that no longer
clearly describe their intent. In order not to make this code even more
confusing when introducing their concurrency-friendly versions, rename
these functions to describe the actual implementation.

Changes from V2 to V3:
- Patch 1:
  - Explicitly include refcount.h in rtnetlink.h.
- Patch 3:
  - Move rcu_head field to the end of struct Qdisc.
  - Rearrange local variable declarations in qdisc_lookup_rcu().
- Patch 5:
  - Remove tcf_qdisc_put() and inline its content to callers.

Changes from V1 to V2:
- Rebase on latest net-next.
- Patch 8 - remove.
- Patch 9 - fold into patch 11.
- Patch 11:
  - Rename tcf_block_{get|put}() to tcf_block_refcnt_{get|put}().
- Patch 13 - remove.
====================

Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: use reference counting for tcf blocks on rules update
Vlad Buslov [Mon, 24 Sep 2018 16:22:58 +0000 (19:22 +0300)]
net: sched: use reference counting for tcf blocks on rules update

In order to remove the dependency on the rtnl lock in the rules update
path, always take a reference to the block while using it there. Change
tcf_block_get() error handling to properly release the block via reference
counting, instead of just destroying it, in order to accommodate potential
concurrent users.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: implement tcf_block_refcnt_{get|put}()
Vlad Buslov [Mon, 24 Sep 2018 16:22:57 +0000 (19:22 +0300)]
net: sched: implement tcf_block_refcnt_{get|put}()

Implement get/put functions for blocks that only take/release the reference
and perform deallocation. These functions are intended to be used by the
unlocked rules update path to always hold a reference to the block while
working with it. They rely on the new fine-grained locking mechanisms
introduced in previous patches in this set, instead of the global protection
provided by the rtnl lock.

Extract code that is common with tcf_block_detach_ext() into a common
function, __tcf_block_put().

Extend tcf_block with rcu to allow safe deallocation when it is accessed
concurrently.
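
A minimal sketch of what such a get/put pair could look like; helper and
field names follow this series where possible, but treat the details as
illustrative:

  static struct tcf_block *tcf_block_refcnt_get(struct net *net, u32 block_index)
  {
          struct tcf_net *tn = net_generic(net, tcf_net_id);
          struct tcf_block *block;

          rcu_read_lock();
          block = idr_find(&tn->idr, block_index);
          /* Only pin the block if its refcount has not already hit zero. */
          if (block && !refcount_inc_not_zero(&block->refcnt))
                  block = NULL;
          rcu_read_unlock();

          return block;
  }

  static void tcf_block_refcnt_put(struct tcf_block *block)
  {
          /* Shared teardown path; drops the reference and, on the last
           * one, flushes the chains and frees the block via rcu.
           */
          __tcf_block_put(block, NULL, NULL);
  }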

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: protect block idr with spinlock
Vlad Buslov [Mon, 24 Sep 2018 16:22:56 +0000 (19:22 +0300)]
net: sched: protect block idr with spinlock

Protect block idr access with a spinlock, instead of relying on the rtnl
lock. Take the tn->idr_lock spinlock during block insertion and removal.
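
A sketch of the insertion side under the new lock might look roughly like
this (idr_alloc_u32() cannot sleep under a spinlock, hence the preload;
exact field names are illustrative):

  idr_preload(GFP_KERNEL);
  spin_lock(&tn->idr_lock);
  err = idr_alloc_u32(&tn->idr, block, &block->index, block->index,
                      GFP_NOWAIT);
  spin_unlock(&tn->idr_lock);
  idr_preload_end();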

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: implement functions to put and flush all chains
Vlad Buslov [Mon, 24 Sep 2018 16:22:55 +0000 (19:22 +0300)]
net: sched: implement functions to put and flush all chains

Extract the code that flushes and puts all chains on a tcf block into two
standalone functions to be shared with the functions that locklessly get/put
a reference to the block.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: change tcf block reference counter type to refcount_t
Vlad Buslov [Mon, 24 Sep 2018 16:22:54 +0000 (19:22 +0300)]
net: sched: change tcf block reference counter type to refcount_t

As a preparation for removing rtnl lock dependency from rules update path,
change tcf block reference counter type to refcount_t to allow modification
by concurrent users.

In the block put function, decrement and check the reference counter once
to accommodate concurrent modification by unlocked users. After this change,
tcf_chain_put() at the end of the block put function is called with
block->refcnt == 0 and will deallocate the block after the last chain is
released, so there is no need to manually deallocate the block in this case.
However, if the block reference counter reaches 0 and there are no chains to
release, the block must still be deallocated manually.
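
A condensed sketch of the decrement-once pattern described above, using the
standalone chain flush helper from elsewhere in this series (details
illustrative):

  if (refcount_dec_and_test(&block->refcnt)) {
          if (list_empty(&block->chain_list))
                  kfree(block);           /* nothing left to release it for us */
          else
                  tcf_block_flush_all_chains(block);  /* last chain frees it */
  }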

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: use Qdisc rcu API instead of relying on rtnl lock
Vlad Buslov [Mon, 24 Sep 2018 16:22:53 +0000 (19:22 +0300)]
net: sched: use Qdisc rcu API instead of relying on rtnl lock

As a preparation for removing the rtnl lock dependency from the rules update path,
use Qdisc rcu and reference counting capabilities instead of relying on
rtnl lock while working with Qdiscs. Create new tcf_block_release()
function, and use it to free resources taken by tcf_block_find().
Currently, this function only releases Qdisc and it is extended in next
patches in this series.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: add helper function to take reference to Qdisc
Vlad Buslov [Mon, 24 Sep 2018 16:22:52 +0000 (19:22 +0300)]
net: sched: add helper function to take reference to Qdisc

Implement a function to take a reference to a Qdisc that relies on the rcu
read lock instead of the rtnl mutex. The function only takes the reference
if the reference counter isn't zero. It is intended to be used by the
unlocked cls API.
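
A minimal sketch of such a helper, assuming the Qdisc refcount has already
been converted to refcount_t (name and details illustrative):

  /* Take a reference only if the Qdisc is still alive (refcnt != 0). */
  static inline bool qdisc_refcount_inc_nz(struct Qdisc *qdisc)
  {
          if (qdisc->flags & TCQ_F_BUILTIN)
                  return true;
          return refcount_inc_not_zero(&qdisc->refcnt);
  }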

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: extend Qdisc with rcu
Vlad Buslov [Mon, 24 Sep 2018 16:22:51 +0000 (19:22 +0300)]
net: sched: extend Qdisc with rcu

Currently, Qdisc API functions assume that users have rtnl lock taken. To
implement rtnl unlocked classifiers update interface, Qdisc API must be
extended with functions that do not require rtnl lock.

Extend the Qdisc structure with rcu. Implement a special version of the put
function, qdisc_put_unlocked(), that is called without the rtnl lock taken.
This function only takes the rtnl lock if the Qdisc reference counter
reaches zero and is intended to be used as an optimization.
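
A sketch of how such an unlocked put could look, using the
refcount_dec_and_rtnl_lock() helper from this series (illustrative, not the
verbatim implementation):

  void qdisc_put_unlocked(struct Qdisc *qdisc)
  {
          /* Only take rtnl if this was the last reference. */
          if (qdisc->flags & TCQ_F_BUILTIN ||
              !refcount_dec_and_rtnl_lock(&qdisc->refcnt))
                  return;

          qdisc_destroy(qdisc);
          rtnl_unlock();
  }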

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: sched: rename qdisc_destroy() to qdisc_put()
Vlad Buslov [Mon, 24 Sep 2018 16:22:50 +0000 (19:22 +0300)]
net: sched: rename qdisc_destroy() to qdisc_put()

The current implementation of qdisc_destroy() decrements the Qdisc
reference counter and only actually destroys the Qdisc if the reference
counter reaches zero. Rename qdisc_destroy() to qdisc_put() so that the name
better describes the way this function is currently implemented and used.

Extract code that deallocates Qdisc into new private qdisc_destroy()
function. It is intended to be shared between regular qdisc_put() and its
unlocked version that is introduced in next patch in this series.
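
A condensed sketch of the resulting split; the real teardown path does much
more, so treat this as illustrative only:

  /* Now private: unconditionally frees the Qdisc. */
  static void qdisc_destroy(struct Qdisc *qdisc)
  {
          /* ops->destroy(), free stats, drop netdev reference, etc. */
  }

  void qdisc_put(struct Qdisc *qdisc)
  {
          if (qdisc->flags & TCQ_F_BUILTIN ||
              !refcount_dec_and_test(&qdisc->refcnt))
                  return;

          qdisc_destroy(qdisc);
  }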

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: core: netlink: add helper refcount dec and lock function
Vlad Buslov [Mon, 24 Sep 2018 16:22:49 +0000 (19:22 +0300)]
net: core: netlink: add helper refcount dec and lock function

The rtnl lock is encapsulated in netlink and cannot be accessed by other
modules directly. This means that reference counted objects that rely on the
rtnl lock cannot use it with a refcount helper function that atomically
decrements the reference counter and obtains the mutex.

This patch implements a simple wrapper function around refcount_dec_and_lock
that obtains the rtnl lock when the reference counter value reaches 0.
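
A minimal sketch of such a wrapper; since the rtnl mutex is private to
rtnetlink.c, that is also where the helper has to live (assuming the
existing refcount_dec_and_mutex_lock() primitive):

  /* Return true (with rtnl held) only if the counter dropped to zero. */
  bool refcount_dec_and_rtnl_lock(refcount_t *r)
  {
          return refcount_dec_and_mutex_lock(r, &rtnl_mutex);
  }
  EXPORT_SYMBOL(refcount_dec_and_rtnl_lock);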

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoi40e: disallow changing the number of descriptors when AF_XDP is on
Björn Töpel [Fri, 7 Sep 2018 08:18:48 +0000 (10:18 +0200)]
i40e: disallow changing the number of descriptors when AF_XDP is on

When an AF_XDP UMEM is attached to any of the Rx rings, we do not allow the
user to change the number of descriptors via e.g. "ethtool -G IFNAME".

Otherwise, the size of the stash/reuse queue can grow unbounded, which
would result in OOM or leaking userspace buffers.
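
A sketch of the guard in the ethtool ring-parameter path; the helper that
checks for attached UMEMs is an assumption here:

  /* Assumed helper: true if any Rx ring has an AF_XDP UMEM attached. */
  if (i40e_xsk_any_rx_ring_enabled(vsi)) {
          netdev_err(netdev,
                     "Cannot change descriptor count while AF_XDP zero-copy is enabled\n");
          return -EBUSY;
  }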

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: clean zero-copy XDP Rx ring on shutdown/reset
Björn Töpel [Fri, 7 Sep 2018 08:18:47 +0000 (10:18 +0200)]
i40e: clean zero-copy XDP Rx ring on shutdown/reset

Outstanding Rx descriptors are temporarily stored on a stash/reuse
queue. When/if the HW rings come up again, entries from the stash are
used to re-populate the ring.

The latter required some restructuring of the allocation scheme for
the AF_XDP zero-copy implementation. There is now a fast and a slow
allocation path. The "fast allocation" is used from the fast path and
obtains free buffers from the fill ring and the internal recycle
mechanism. The "slow allocation" is only used in ring setup, and
obtains buffers from the fill ring and the stash (if any).

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agonet: xsk: add a simple buffer reuse queue
Jakub Kicinski [Fri, 7 Sep 2018 08:18:46 +0000 (10:18 +0200)]
net: xsk: add a simple buffer reuse queue

XSK UMEM is strongly single producer single consumer so reuse of
frames is challenging.  Add a simple "stash" of FILL packets to
reuse for drivers to optionally make use of.  This is useful
when a driver has to free (ndo_stop) or resize a ring with an active
AF_XDP ZC socket.
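
A sketch of how a driver might use the stash around a ring restart; the
xsk_reuseq_*() names follow this patch, but treat the exact API and calling
convention as illustrative:

  /* Size the reuse queue for the ring and swap it in; the previous
   * stash (if any) is returned by the swap and freed.
   */
  reuseq = xsk_reuseq_prepare(rx_ring->count);
  if (!reuseq)
          return -ENOMEM;

  xsk_reuseq_free(xsk_reuseq_swap(umem, reuseq));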

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: clean zero-copy XDP Tx ring on shutdown/reset
Björn Töpel [Fri, 7 Sep 2018 08:18:45 +0000 (10:18 +0200)]
i40e: clean zero-copy XDP Tx ring on shutdown/reset

When the zero-copy enabled XDP Tx ring is torn down, due to
configuration changes, outstanding frames on the hardware descriptor
ring are queued on the completion ring.

The completion ring has a back-pressure mechanism that will guarantee
that there is sufficient space on the ring.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: Remove unused msglen parameter from virtchnl functions
Patryk Małek [Tue, 28 Aug 2018 17:16:07 +0000 (10:16 -0700)]
i40e: Remove unused msglen parameter from virtchnl functions

The msglen parameter seems to be unused in several virtchnl functions.
This patch removes it from the signatures of those functions.

Signed-off-by: Patryk Małek <patryk.malek@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: fix double 'NIC Link is Down' messages
Sergey Nemov [Tue, 28 Aug 2018 17:16:06 +0000 (10:16 -0700)]
i40e: fix double 'NIC Link is Down' messages

When isup is false, meaning that the interface is going to shut down, set
the new speed to 0 to avoid double 'NIC Link is Down' messages.
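
A sketch of the guard in the link-message path; the field names below are
assumptions based on the i40e link-event code, not a verbatim quote of the
fix:

  /* Going down: report zero speed so the "Link is Down" message is
   * only printed once.
   */
  if (!isup)
          new_speed = 0;
  else
          new_speed = pf->hw.phy.link_info.link_speed;

  if (vsi->current_isup == isup && vsi->current_speed == new_speed)
          return;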

Signed-off-by: Sergey Nemov <sergey.nemov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: add a helper function to validate a VF based on the vf id
Harshitha Ramamurthy [Tue, 28 Aug 2018 17:16:05 +0000 (10:16 -0700)]
i40e: add a helper function to validate a VF based on the vf id

When we are trying to change VF settings, it is possible for two race
conditions to happen. One, the VF is created but not yet enabled. Two, the
VF is enabled but the VSI is still not created, or not yet re-created in the
VF reset flow.

This patch introduces a helper function to validate that the VF is
enabled and that the VSI is set up. This patch also calls this
function from other functions which could get into these race conditions.
While we are poking around here, remove the unnecessary parentheses that
checkpatch was complaining about.
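
A sketch of what such a validation helper could look like; the field and
flag names are assumptions based on the i40e VF structures:

  /* Illustrative: validate vf_id, VF init state and a configured VSI. */
  static int i40e_validate_vf(struct i40e_pf *pf, int vf_id)
  {
          struct i40e_vf *vf;

          if (vf_id >= pf->num_alloc_vfs) {
                  dev_err(&pf->pdev->dev, "Invalid VF Identifier %d\n", vf_id);
                  return -EINVAL;
          }

          vf = &pf->vf[vf_id];
          if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states) ||
              !pf->vsi[vf->lan_vsi_idx])
                  return -EAGAIN;

          return 0;
  }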

Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: use declared variables for pf and hw
Patryk Małek [Tue, 28 Aug 2018 17:16:04 +0000 (10:16 -0700)]
i40e: use declared variables for pf and hw

In order to slightly simplify the code, use the pf and hw variables that
are already declared in the i40e_set_mac() function.

Signed-off-by: Patryk Małek <patryk.malek@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: Unset promiscuous settings on VF reset
Mariusz Stachura [Mon, 20 Aug 2018 15:12:28 +0000 (08:12 -0700)]
i40e: Unset promiscuous settings on VF reset

This patch cleans up promiscuous configuration when a VF reset occurs.
Previously the promiscuous mode settings were still there after the VF
driver removal.

Signed-off-by: Mariusz Stachura <mariusz.stachura@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agoi40e: Fix VF's link state notification
Mariusz Stachura [Mon, 20 Aug 2018 15:12:22 +0000 (08:12 -0700)]
i40e: Fix VF's link state notification

This resolves an issue where the VF link state was not being updated
when the PF is down or up, and the VF link state would always show
that it is running.

Signed-off-by: Mariusz Stachura <mariusz.stachura@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
6 years agotls: Fixed a memory leak during socket close
Vakul Garg [Tue, 25 Sep 2018 14:51:51 +0000 (20:21 +0530)]
tls: Fixed a memory leak during socket close

During socket close, if there is an open record with a tx context, it needs
to be freed apart from freeing up the plaintext and encrypted scatter
lists. This patch frees up the open record if present in the tx context.

Also tls_free_both_sg() has been renamed to tls_free_open_rec() to
indicate that the free record in tx context is being freed inside the
function.

Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agotls: Fix socket mem accounting error under async encryption
Vakul Garg [Tue, 25 Sep 2018 10:56:17 +0000 (16:26 +0530)]
tls: Fix socket mem accounting error under async encryption

The current async encryption implementation sometimes showed a socket
memory accounting error during socket close, which results in a kernel
warning call trace. The root cause of the problem is that the socket
variable sk_forward_alloc gets corrupted because sk_mem_charge() and
sk_mem_uncharge() are invoked from multiple concurrent contexts on a
multicore processor. The APIs sk_mem_charge() and sk_mem_uncharge() are
called from functions such as alloc_plaintext_sg(), free_sg() etc. It is
required that the memory accounting APIs are called under the socket lock.

The plaintext sg data sent for encryption is freed using free_sg() in
tls_encryption_done(). Calling free_sg() from this function is wrong,
because it may run in irq context, where the socket lock cannot be
acquired.

We remove the call to free_sg() for plaintext data from
tls_encryption_done() and defer freeing of the plaintext data to the time
when the record is picked up from tx_list and transmitted/freed. When
tls_tx_records() gets called, the socket is already locked and thus there is
no concurrent access problem.

Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption")
Signed-off-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge ra.kernel.org:/pub/scm/linux/kernel/git/davem/net
David S. Miller [Tue, 25 Sep 2018 17:35:29 +0000 (10:35 -0700)]
Merge ra.kernel.org:/pub/scm/linux/kernel/git/davem/net

Version bump conflict in batman-adv, take what's in net-next.

iavf conflict, adjustment of netdev_ops in net-next conflicting
with poll controller method removal in net.

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agonet: macb: Clean 64b dma addresses if they are not detected
Michal Simek [Tue, 25 Sep 2018 06:32:50 +0000 (08:32 +0200)]
net: macb: Clean 64b dma addresses if they are not detected

Clear the ADDR64 DMA bit in the DMACFG register in case HW_DMA_CAP_64B is
not detected on a 64-bit system.
The issue was observed when the bootloader (u-boot) does not check the macb
capability at the DCFG6 register (DAW64_OFFSET) and enables 64-bit DMA
support by default. The macb driver then reads the DMACFG register back and
only adds the 64-bit DMA configuration, without ever clearing it.
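
A sketch of the fix in the DMA configuration path; the macro names are those
used by the macb driver, but the snippet is illustrative rather than the
exact hunk:

  /* Always start from a cleared ADDR64 bit and only set it when the
   * 64-bit capability was actually detected.
   */
  dmacfg &= ~GEM_BIT(ADDR64);
  #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
  if (bp->hw_dma_cap & HW_DMA_CAP_64B)
          dmacfg |= GEM_BIT(ADDR64);
  #endif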

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agoMerge branch 'r8169-series-with-smaller-improvements'
David S. Miller [Tue, 25 Sep 2018 17:28:03 +0000 (10:28 -0700)]
Merge branch 'r8169-series-with-smaller-improvements'

Heiner Kallweit says:

====================
r8169: series with smaller improvements

This series includes smaller improvements, nothing exciting.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agor8169: improve a check in rtl_init_one
Heiner Kallweit [Tue, 25 Sep 2018 05:59:36 +0000 (07:59 +0200)]
r8169: improve a check in rtl_init_one

The check for pci_is_pcie() is redundant here because all
chip versions >=18 are PCIe only anyway. In addition use
dma_set_mask_and_coherent() instead of separate calls to
pci_set_dma_mask() and pci_set_consistent_dma_mask().
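
Roughly, the two calls collapse into one (sketch; the surrounding error
handling stays as before):

  /* One call covers both the streaming and the coherent DMA mask. */
  rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));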

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agor8169: improve rtl8169_irq_mask_and_ack
Heiner Kallweit [Tue, 25 Sep 2018 05:58:00 +0000 (07:58 +0200)]
r8169: improve rtl8169_irq_mask_and_ack

The code can be slightly simplified by acking even events we're not
interested in. In addition, add a comment making it clear that the read has
no functional purpose and is just a PCI commit.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
6 years agor8169: use default watchdog timeout
Heiner Kallweit [Tue, 25 Sep 2018 05:56:53 +0000 (07:56 +0200)]
r8169: use default watchdog timeout

The networking core has a default watchdog timeout of 5s. I see no need to
define our own timeout of 6s, which is basically the same.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>