Daniel Borkmann [Tue, 9 Apr 2019 21:20:10 +0000 (23:20 +0200)]
bpf: allow for key-less BTF in array map
Given we'll be reusing BPF array maps for global data/bss/rodata
sections, we need a way to associate the BTF DataSec type as the map's
value type. In the usual case we have the ugly BPF_ANNOTATE_KV_PAIR()
macro hack, e.g. via commit 38d5d3b3d5db ("bpf: Introduce
BPF_ANNOTATE_KV_PAIR"), to get the initial map-to-type association
going. While more use cases for it are discouraged, it also won't work
for global data since the use of an array map is a BPF loader detail
and therefore unknown at compilation time. For array maps with just a
single entry we make an exception in terms of BTF in that the key type
is declared optional if the value type is of DataSec kind. LLVM is
guaranteed to emit the latter, and this also aligns with how we regard
global data maps: a plain buffer area that reuses existing map
facilities and thus allows things like introspection with existing
tools.
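For context, the legacy association this replaces for normal maps looks
roughly like the following in a BPF C program (map and value type names
are made up for illustration; SEC() and BPF_ANNOTATE_KV_PAIR() come from
the selftests' bpf_helpers.h):

struct my_value {
        __u32 counter;
};

struct bpf_map_def SEC("maps") my_map = {
        .type        = BPF_MAP_TYPE_HASH,
        .key_size    = sizeof(__u32),
        .value_size  = sizeof(struct my_value),
        .max_entries = 16,
};
BPF_ANNOTATE_KV_PAIR(my_map, __u32, struct my_value);

The DataSec-based exception avoids the need for any such annotation for
the loader-internal global data maps.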
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:09 +0000 (23:20 +0200)]
bpf: kernel side support for BTF Var and DataSec
This work adds kernel-side verification, logging and seq_show dumping
of BTF Var and DataSec kinds which are emitted by the latest LLVM. The
following constraints apply:
BTF Var must have:
- Its kind_flag is 0
- Its vlen is 0
- Must point to a valid type
- Type must not resolve to a forward type
- Size of underlying type must be > 0
- Must have a valid name
- Can only be a source type, not sink or intermediate one
- Name may include dots (e.g. in case of static variables
inside functions)
- Cannot be a member of a struct/union
- Linkage can so far only be static or global/allocated
BTF DataSec must have:
- Its kind_flag is 0
- Its vlen cannot be 0
- Its size cannot be 0
- Must have a valid name
- Can only be a source type, not sink or intermediate one
- Name may include dots (e.g. to represent .bss, .data, .rodata etc)
- Cannot be a member of a struct/union
- Inner btf_var_secinfo array with {type,offset,size} triple
must be sorted by offset in ascending order
- Type must always point to BTF Var
- BTF resolved size of Var must be <= size provided by triple
- DataSec size must be >= sum of triple sizes (thus holes
are allowed)
btf_var_resolve(), btf_ptr_resolve() and btf_modifier_resolve()
are at a high level quite similar, but each comes with slight,
subtle differences. They could potentially be refactored a bit
in the future; this hasn't been done here in order to ease review.
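To make the DataSec ordering and size constraints above concrete, a
simplified, illustrative check could look as follows (this is not the
kernel's actual validation code; the secinfo layout mirrors the
{type,offset,size} triple described above):

struct secinfo { __u32 type; __u32 offset; __u32 size; };

static int check_datasec_layout(const struct secinfo *vsi, __u32 vlen,
                                __u32 sec_size)
{
        __u64 total = 0;
        __u32 i;

        if (!vlen || !sec_size)
                return -EINVAL;          /* vlen and size cannot be 0 */
        for (i = 0; i < vlen; i++) {
                if (i && vsi[i].offset < vsi[i - 1].offset)
                        return -EINVAL;  /* offsets must be ascending */
                total += vsi[i].size;
        }
        if (total > sec_size)
                return -EINVAL;          /* holes are fine, overflow is not */
        return 0;
}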
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:08 +0000 (23:20 +0200)]
bpf: add specification for BTF Var and DataSec kinds
This adds the BTF specification and UAPI bits for supporting BTF Var
and DataSec kinds. This follows LLVM upstream commit ac4082b77e07
("[BPF] Add BTF Var and DataSec Support") which has been merged recently.
Var itself describes a global variable, and DataSec describes ELF
sections, e.g. the data/bss/rodata sections, that hold one or multiple
global variables.
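Sketched from the description above (and from the {type,offset,size}
triple mentioned in the kernel-side patch), the added kinds boil down to
metadata along these lines; include/uapi/linux/btf.h has the
authoritative definitions:

/* BTF_KIND_VAR is followed by a single struct describing the variable,
 * e.g. its linkage (static vs. global/allocated)
 */
struct btf_var {
        __u32   linkage;
};

/* BTF_KIND_DATASEC is followed by vlen entries, one per variable the
 * section contains, each giving the Var type id, the offset within
 * the section and the size
 */
struct btf_var_secinfo {
        __u32   type;
        __u32   offset;
        __u32   size;
};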
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:07 +0000 (23:20 +0200)]
bpf: allow . char as part of the object name
Trivial addition to allow '.' aside from '_' as "special" characters
in the object name. Used to allow for substrings in maps from loader
side such as ".bss", ".data", ".rodata", but could also be useful for
other purposes.
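In other words, the allowed-character check becomes roughly (illustrative
sketch, not the kernel's exact helper):

/* object/map names: alphanumerics plus '_' and now also '.' */
static bool obj_name_char_ok(char c)
{
        return isalnum(c) || c == '_' || c == '.';
}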
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:06 +0000 (23:20 +0200)]
bpf: add syscall side map freeze support
This patch adds a new BPF_MAP_FREEZE command which allows the map
to be "frozen" globally as read-only / immutable from the syscall
side.
Map permission handling has been refactored into map_get_sys_perms(),
which drops FMODE_CAN_WRITE in case of a locked map. The main use case
is to allow for setting up .rodata sections from the BPF ELF which
are loaded into the kernel, meaning the BPF loader first allocates
the map, sets up the map value by copying the .rodata section into it,
and once complete, calls BPF_MAP_FREEZE on the map fd to prevent
further modifications.
Right now BPF_MAP_FREEZE only takes the map fd as argument while the
remaining bpf_attr members are required to be zero. I didn't add
write-only locking here as a counterpart since I don't have a concrete
use case for it on my side, and it probably makes more sense to wait
until there actually is one. In that case bpf_attr can be extended as
usual with a flag field and/or others, where flag 0 means that we lock
the map read-only, so this doesn't prevent adding further extensions
to BPF_MAP_FREEZE when needed.
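From user space, freezing a map is then a plain bpf(2) call with only the
map fd set, roughly like this (error handling elided; assumes UAPI headers
that already define BPF_MAP_FREEZE):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int map_freeze(int map_fd)
{
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));   /* remaining bpf_attr members must be zero */
        attr.map_fd = map_fd;

        return syscall(__NR_bpf, BPF_MAP_FREEZE, &attr, sizeof(attr));
}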
A map creation flag like BPF_F_WRONCE was not considered for a couple
of reasons: i) in case of a generic implementation, a map can consist
of more than just one element, thus there could be multiple map
updates needed to set the map into a state where it can then be
made immutable; ii) WRONCE indicates exactly one write before the map
is then set immutable. A generic implementation would set a bit
atomically on map update entry (if unset), indicating that every
subsequent update from then onwards will need to bail out there.
However, map updates can fail, so upon failure that flag would need
to be unset again and the update attempt would need to be repeated
for the map to eventually be made immutable. While this can be made
race-free, the approach feels less clean, and in combination with
reason i) it's not generic enough. A dedicated BPF_MAP_FREEZE
command directly sets the flag, and the caller has the guarantee that
the map is immutable from the syscall side upon successful return for
any future syscall invocations that would alter the map state, which
is also more intuitive from an API point of view. A command name
such as BPF_MAP_LOCK has been avoided as it's too close to BPF
map spin locks (which already have a BPF_F_LOCK flag). BPF_MAP_FREEZE
is so far only enabled for privileged users.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:05 +0000 (23:20 +0200)]
bpf: add program side {rd, wr}only support for maps
This work adds two new map creation flags, BPF_F_RDONLY_PROG
and BPF_F_WRONLY_PROG, in order to allow for read-only or
write-only BPF maps from the BPF program side.
Today we have BPF_F_RDONLY and BPF_F_WRONLY, but these only
apply to the system call side, meaning the BPF program has full
read/write access to the map as usual while bpf(2) calls with the
map fd can either only read or write into the map depending
on the flags. BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG allow
for the exact opposite, such that the verifier is going to reject
program loads if a write into a read-only map or a read from a
write-only map is detected. For the read-only map case, helpers
that would alter the map state, such as map deletion or update,
are also forbidden. As opposed to the two
BPF_F_RDONLY / BPF_F_WRONLY flags, BPF_F_RDONLY_PROG as well
as BPF_F_WRONLY_PROG really do correspond to the map lifetime.
We've enabled this generic map extension for various non-special
maps holding normal user data: array, hash, lru, lpm, local
storage, queue and stack. Further generic map types could follow
in the future depending on use cases. The main use case
here is to forbid writes into .rodata map values from the verifier
side.
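For example, a loader could create an array that the program may only
read, roughly as follows (a sketch using libbpf's low-level
bpf_create_map(); value size and entry count are arbitrary):

#include <bpf/bpf.h>       /* libbpf: bpf_create_map() */
#include <linux/bpf.h>     /* needs UAPI that defines BPF_F_RDONLY_PROG */

static int create_prog_rdonly_array(void)
{
        /* program-side writes into this map will be rejected by the verifier */
        return bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(int), 4096, 1,
                              BPF_F_RDONLY_PROG);
}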
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:04 +0000 (23:20 +0200)]
bpf: do not retain flags that are not tied to map lifetime
Both BPF_F_WRONLY / BPF_F_RDONLY flags are tied to the map file
descriptor, but not to the map object itself! Meaning, at map
creation time BPF_F_RDONLY can be set to make the map read-only
from syscall side, but this holds only for the returned fd, so
any other fd either retrieved via bpf file system or via map id
for the very same underlying map object can have read-write access
instead.
Given that, keeping the two flags around in the map_flags attribute
and exposing them to user space upon map dump is misleading and
may lead to false conclusions. Since these two flags are not
tied to the map object, let's also not store them as a map property.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Tue, 9 Apr 2019 21:20:03 +0000 (23:20 +0200)]
bpf: implement lookup-free direct value access for maps
This generic extension to BPF maps allows for directly loading
an address residing inside a BPF map value as a single BPF
ldimm64 instruction!
The idea is similar to what BPF_PSEUDO_MAP_FD does today, which
is a special src_reg flag for the ldimm64 instruction that indicates
that inside the first part of the double insns's imm field is a
file descriptor which the verifier then replaces with the full 64bit
address of the map in both imm parts. For the newly added
BPF_PSEUDO_MAP_VALUE src_reg flag, the idea is the following:
the first part of the double insns's imm field is again a file
descriptor corresponding to the map, and the second part of the
imm field is an offset into the value. The verifier will then
replace both imm parts with an address that points into the BPF
map value at the given value offset for maps that support this
operation. Currently supported is the array map with a single entry.
It is possible to support more than just a single map element by
reusing both 16bit off fields of the insns as a map index, so a
full array map lookup could be expressed that way. It hasn't
been implemented here due to lack of a concrete use case, but
could easily be done in the future in a compatible way, since
both off fields right now have to be 0 and would correctly
denote a map index of 0.
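In terms of raw instructions, the encoding described above boils down to
something like this (illustrative only; the fd and offset values are
placeholders):

#include <linux/bpf.h>     /* needs UAPI that defines BPF_PSEUDO_MAP_VALUE */

/* ld_imm64 r1, <address of map value + 8>, expressed as the two-insn pair:
 * insn[0].imm = map fd, insn[1].imm = byte offset into the value
 */
struct bpf_insn insns[] = {
        { .code    = BPF_LD | BPF_DW | BPF_IMM,
          .dst_reg = BPF_REG_1,
          .src_reg = BPF_PSEUDO_MAP_VALUE,
          .imm     = 3 /* map fd placeholder */ },
        { .imm     = 8 /* offset into the map value */ },
};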
BPF_PSEUDO_MAP_VALUE is a distinct flag because otherwise, with
BPF_PSEUDO_MAP_FD, we could not distinguish at offset 0 between a load
of the map pointer and a load of the map's value at offset 0, and
changing BPF_PSEUDO_MAP_FD's encoding to use the off field to
distinguish between a regular map pointer and a map value pointer
would add unnecessary complexity and raise the barrier for
debuggability, making it less suitable. Using the second part of the
imm field as an offset into the value does /not/ come with limitations
since the maximum possible value size is in the u32 universe anyway.
This optimization allows for efficiently retrieving an address
to a map value memory area without having to issue a helper call
which needs to prepare registers according to calling convention,
etc, without needing the extra NULL test, and without having to
add the offset in an additional instruction to the value base
pointer. The verifier then treats the destination register as
PTR_TO_MAP_VALUE with constant reg->off from the user passed
offset from the second imm field, and guarantees that this is
within bounds of the map value. Any subsequent operations are
normally treated as typical map value handling without anything
extra needed from verification side.
The two map operations for direct value access have been added to
array map for now. In future other types could be supported as
well depending on the use case. The main use case for this commit
is to allow for BPF loader support for global variables that
reside in .data/.rodata/.bss sections such that we can directly
load the address of them with minimal additional infrastructure
required. Loader support has been added in subsequent commits for
libbpf library.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrey Ignatov [Sun, 7 Apr 2019 05:37:34 +0000 (22:37 -0700)]
libbpf: Ignore -Wformat-nonliteral warning
vsprintf() in __base_pr() uses nonliteral format string and it breaks
compilation for those who provide corresponding extra CFLAGS, e.g.:
https://github.com/libbpf/libbpf/issues/27
If libbpf is built with the flags from PR:
libbpf.c:68:26: error: format string is not a string literal
[-Werror,-Wformat-nonliteral]
return vfprintf(stderr, format, args);
^~~~~~
1 error generated.
Ignore this warning since the use case in libbpf.c is legit.
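Since the non-literal format here is intentional, one way to handle it,
and roughly what the fix amounts to, is a diagnostic pragma scoped to the
one call site (sketch with a simplified signature; clang honors the GCC
pragma spelling, and libbpf.c already includes <stdio.h>/<stdarg.h>):

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wformat-nonliteral"
static int __base_pr(const char *format, va_list args)
{
        return vfprintf(stderr, format, args);
}
#pragma GCC diagnostic pop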
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 5 Apr 2019 14:50:08 +0000 (16:50 +0200)]
Merge branch 'bpf-varstack-fixes'
Andrey Ignatov says:
====================
v2->v3:
- sanity check max value for variable offset.
v1->v2:
- rely on meta = NULL to reject var_off stack access to uninit buffer.
This patch set is a follow-up for discussion [1].
It fixes variable offset stack access handling for raw and unprivileged
mode, rejecting both of them, and sanity checks max variable offset value.
Patch 1 handles raw (uninitialized) mode.
Patch 2 adds test for raw mode.
Patch 3 handles unprivileged mode.
Patch 4 adds test for unprivileged mode.
Patch 5 adds sanity check for max value of variable offset.
Patch 6 adds test for variable offset max value checking.
Patch 7 is a minor fix in verbose log.
Unprivileged mode is an interesting case since one (and only?) way to come
up with a variable offset is to use pointer arithmetic, which is already
prohibited for unprivileged mode. I'm not sure if that's enough though, and
it seems like a good idea to still reject variable offset for unpriv in
check_stack_boundary(). Please see patches 3 and 4
for more details on this.
[1] https://marc.info/?l=linux-netdev&m=155419526427742&w=2
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:43 +0000 (23:22 -0700)]
bpf: Add missed newline in verifier verbose log
check_stack_access() that prints verbose log is used in
adjust_ptr_min_max_vals() that prints its own verbose log and now they
stick together, e.g.:
variable stack access var_off=(0xfffffffffffffff0; 0x4) off=-16
size=1R2 stack pointer arithmetic goes out of range, prohibited for
!root
Add missing newline so that log is more readable:
variable stack access var_off=(0xfffffffffffffff0; 0x4) off=-16 size=1
R2 stack pointer arithmetic goes out of range, prohibited for !root
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:42 +0000 (23:22 -0700)]
selftests/bpf: Test unbounded var_off stack access
Test the case when reg->smax_value is too small/big and can overflow,
and separately min and max values outside of stack bounds.
Example of output:
# ./test_verifier
#856/p indirect variable-offset stack access, unbounded OK
#857/p indirect variable-offset stack access, max out of bound OK
#858/p indirect variable-offset stack access, min out of bound OK
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:41 +0000 (23:22 -0700)]
bpf: Sanity check max value for var_off stack access
As discussed in [1], the max value of a variable offset has to be checked for
overflow on stack access, otherwise the verifier would accept code like this:
0: (b7) r2 = 6
1: (b7) r3 = 28
2: (7a) *(u64 *)(r10 -16) = 0
3: (7a) *(u64 *)(r10 -8) = 0
4: (79) r4 = *(u64 *)(r1 +168)
5: (c5) if r4 s< 0x0 goto pc+4
R1=ctx(id=0,off=0,imm=0) R2=inv6 R3=inv28 R4=inv(id=0,umax_value=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff)) R10=fp0,call_-1 fp-8=mmmmmmmm fp-16=mmmmmmmm
6: (17) r4 -= 16
7: (0f) r4 += r10
8: (b7) r5 = 8
9: (85) call bpf_getsockopt#57
10: (b7) r0 = 0
11: (95) exit
where R4 obviously has an unbounded max value.
Fix it by checking that reg->smax_value is inside (-BPF_MAX_VAR_OFF;
BPF_MAX_VAR_OFF) range.
reg->smax_value is used instead of reg->umax_value because stack
pointers are calculated using negative offset from fp. This is opposite
to e.g. map access where offset must be non-negative and where
umax_value is used.
Also dedicated verbose logs are added for both min and max bound check
failures to have diagnostics consistent with variable offset handling in
check_map_access().
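The added check is conceptually along these lines (a sketch; see the
patch itself for the exact condition and log messages):

/* reject stack access whose variable part could escape the range
 * (-BPF_MAX_VAR_OFF; BPF_MAX_VAR_OFF), so off + reg->smax_value
 * cannot overflow
 */
if (reg->smax_value >= BPF_MAX_VAR_OFF ||
    reg->smax_value <= -BPF_MAX_VAR_OFF) {
        verbose(env, "R%d unbounded indirect variable offset stack access\n",
                regno);
        return -EACCES;
}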
[1] https://marc.info/?l=linux-netdev&m=155433357510597&w=2
Fixes: 2011fccfb61b ("bpf: Support variable offset stack access from helpers")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:40 +0000 (23:22 -0700)]
selftests/bpf: Test indirect var_off stack access in unpriv mode
Test that verifier rejects indirect stack access with variable offset in
unprivileged mode and accepts same code in privileged mode.
Since pointer arithmetic is prohibited in unprivileged mode, the verifier
should reject the program even before it gets to the helper call that uses
the variable offset, at the point where that variable offset is being
constructed.
Example of output:
# ./test_verifier
...
#859/u indirect variable-offset stack access, priv vs unpriv OK
#859/p indirect variable-offset stack access, priv vs unpriv OK
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:39 +0000 (23:22 -0700)]
bpf: Reject indirect var_off stack access in unpriv mode
Proper support of indirect stack access with variable offset in
unprivileged mode (!root) requires corresponding support in Spectre
masking for stack ALU in retrieve_ptr_limit().
There are no use cases for variable offset in unprivileged mode though, so
make the verifier reject such accesses for simplicity.
Pointer arithmetic is one (and only?) way to cause a variable offset and
it's already rejected in unpriv mode, so the verifier won't even get to a
helper function whose argument contains a variable offset, e.g.:
0: (7a) *(u64 *)(r10 -16) = 0
1: (7a) *(u64 *)(r10 -8) = 0
2: (61) r2 = *(u32 *)(r1 +0)
3: (57) r2 &= 4
4: (17) r2 -= 16
5: (0f) r2 += r10
variable stack access var_off=(0xfffffffffffffff0; 0x4) off=-16 size=1R2
stack pointer arithmetic goes out of range, prohibited for !root
Still it looks like a good idea to reject variable offset indirect stack
access for unprivileged mode in check_stack_boundary() explicitly.
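The rejection in check_stack_boundary() is conceptually as simple as the
following (a sketch, not the verbatim patch):

if (!tnum_is_const(reg->var_off) && !env->allow_ptr_leaks) {
        /* variable offset into the stack: only allowed for root, since
         * Spectre masking in retrieve_ptr_limit() cannot handle it yet
         */
        verbose(env,
                "R%d indirect variable offset stack access prohibited for !root\n",
                regno);
        return -EPERM;
}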
Fixes: 2011fccfb61b ("bpf: Support variable offset stack access from helpers")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:38 +0000 (23:22 -0700)]
selftests/bpf: Test indirect var_off stack access in raw mode
Test that verifier rejects indirect access to uninitialized stack with
variable offset.
Example of output:
# ./test_verifier
...
#859/p indirect variable-offset stack access, uninitialized OK
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 4 Apr 2019 06:22:37 +0000 (23:22 -0700)]
bpf: Reject indirect var_off stack access in raw mode
It's hard to guarantee that the whole memory is marked as initialized on
helper return if uninitialized stack is accessed with a variable offset,
since the specific bounds are unknown to the verifier. This may cause
an uninitialized stack leak.
Reject such an access in check_stack_boundary to prevent possible
leaking.
There are no known use cases for indirect uninitialized stack access
with a variable offset, so it shouldn't break anything.
Fixes: 2011fccfb61b ("bpf: Support variable offset stack access from helpers")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Thu, 4 Apr 2019 21:37:14 +0000 (14:37 -0700)]
samples/bpf: fix build with new clang
clang started to error on invalid asm clobber usage in x86 headers
and many bpf program samples failed to build with the message:
CLANG-bpf /data/users/ast/bpf-next/samples/bpf/xdp_redirect_kern.o
In file included from /data/users/ast/bpf-next/samples/bpf/xdp_redirect_kern.c:14:
In file included from ../include/linux/in.h:23:
In file included from ../include/uapi/linux/in.h:24:
In file included from ../include/linux/socket.h:8:
In file included from ../include/linux/uio.h:14:
In file included from ../include/crypto/hash.h:16:
In file included from ../include/linux/crypto.h:26:
In file included from ../include/linux/uaccess.h:5:
In file included from ../include/linux/sched.h:15:
In file included from ../include/linux/sem.h:5:
In file included from ../include/uapi/linux/sem.h:5:
In file included from ../include/linux/ipc.h:9:
In file included from ../include/linux/refcount.h:72:
../arch/x86/include/asm/refcount.h:72:36: error: asm-specifier for input or output variable conflicts with asm clobber list
r->refs.counter, e, "er", i, "cx");
^
../arch/x86/include/asm/refcount.h:86:27: error: asm-specifier for input or output variable conflicts with asm clobber list
r->refs.counter, e, "cx");
^
2 errors generated.
Override volatile() to work around the problem.
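The workaround is a macro override in the samples' clang build, along
these lines (a sketch under the assumption that the fix simply
neutralizes the offending asm bodies; the exact location and spelling
may differ):

/* clang rejects the asm() in x86's refcount.h when compiling BPF samples;
 * since that asm is never emitted into BPF programs anyway, turn the
 * function-like use of 'volatile' into an empty asm statement
 */
#define volatile(x...) volatile("")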
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel T. Lee [Wed, 3 Apr 2019 22:17:56 +0000 (07:17 +0900)]
samples, selftests/bpf: add NULL check for ksym_search
Since ksym_search has been extended with verification logic for symbol
existence, it can return NULL when the kernel symbols are not loaded.
This commit adds NULL check logic after ksym_search.
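A typical caller-side check then looks like this (a sketch; struct ksym
and ksym_search() come from the samples/selftests trace_helpers):

struct ksym *sym = ksym_search(addr);

if (!sym) {
        printf("ksym not found. Is kallsyms loaded?\n");
        return;
}
printf("%s\n", sym->name);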
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel T. Lee [Wed, 3 Apr 2019 22:17:55 +0000 (07:17 +0900)]
selftests/bpf: ksym_search won't check symbols exists
Currently, ksym_search in trace_helpers doesn't check whether symbols
exist.
In ksym_search, when a symbol is not found, it returns &syms[0] (_stext).
But when the kernel symbols are not loaded, it will return NULL, which is
not a desired action.
This commit adds verification logic for whether symbols are loaded prior
to the symbol search.
Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann [Wed, 3 Apr 2019 23:27:39 +0000 (01:27 +0200)]
Merge branch 'bpf-verifier-scalability'
Alexei Starovoitov says:
====================
v1->v2:
- fixed typo in patch 1
- added a patch to convert kcalloc to kvcalloc
- added a patch to verbose 16-bit jump offset check
- added a test with 1m insns
This patch set is the first step to be able to accept large programs.
The verifier still suffers from its brute force algorithm and
large programs can easily hit 1M insn_processed limit.
A lot more work is necessary to be able to verify large programs.
v1:
Realize two key ideas to speed up verification by ~20 times:
1. Every 'branching' instruction records all verifier states.
Not all of them are useful for search pruning.
Add a simple heuristic to keep states that were successful in search pruning
and remove those that were not.
2. mark_reg_read walks the parentage chain of registers to mark parents as LIVE_READ.
Once the register is marked there is no need to remark it again in the future.
Hence stop walking the chain once the first LIVE_READ is seen.
The 1st optimization gives a 10x speed up on large programs
and the 2nd optimization reduces the cost of mark_reg_read from ~40% of cpu to <1%.
Combined they deliver a ~20x speedup on large programs.
Faster and bounded verification time allows increasing the insn_processed
limit to 1 million from 130k.
Worst case it takes 1/10 of a second to process that many instructions,
and peak memory consumption is peak_states * sizeof(struct bpf_verifier_state),
which is around ~5Mbyte.
Increase the insn_per_program limit for root to the insn_processed limit.
Add verification stats and stress tests for verifier scalability.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:49 +0000 (21:27 -0700)]
selftests/bpf: synthetic tests to push verifier limits
Add a test to generate 1m ld_imm64 insns to stress the verifier.
Bump the size of fill_ld_abs_vlan_push_pop test from 4k to 29k
and jump_around_ld_abs from 4k to 5.5k.
Larger sizes are not possible due to 16-bit offset encoding
in jump instructions.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:48 +0000 (21:27 -0700)]
selftests/bpf: add few verifier scale tests
Add 3 basic tests that stress verifier scalability.
test_verif_scale1.c calls the non-inlined jhash() function 90 times on
different positions in the packet.
This test simulates network packet parsing.
The jhash function is ~140 instructions and the main program is ~1200 insns.
test_verif_scale2.c force-inlines the jhash() function 90 times.
This program is ~15k instructions long.
test_verif_scale3.c calls the non-inlined jhash() function 90 times on
different positions in the packet as well, but this time jhash has to
process 32 bytes from the packet instead of 14 bytes as in tests 1 and 2.
The jhash function is ~230 insns and the main program is ~1200 insns.
$ test_progs -s
can be used to see verifier stats.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:47 +0000 (21:27 -0700)]
libbpf: teach libbpf about log_level bit 2
Allow bpf_prog_load_xattr() to specify log_level for program loading.
Teach libbpf to accept log_level with bit 2 set.
Increase the default BPF_LOG_BUF_SIZE from 256k to 16M.
There is no downside to increasing it to the maximum allowed by old kernels.
The existing 256k limit caused ENOSPC errors and users were not able to see
the verifier error, which is printed at the end of the verifier log.
If ENOSPC is hit, double the verifier log and try again to capture
the verifier error.
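With that, a caller of bpf_prog_load_xattr() can request the verbose
output up front, roughly as follows (a sketch; the file name and program
type are placeholders, and log_level is the field this patch adds):

#include <bpf/libbpf.h>

static int load_with_verbose_log(void)
{
        struct bpf_prog_load_attr attr = {
                .file      = "prog.o",            /* placeholder object file */
                .prog_type = BPF_PROG_TYPE_XDP,   /* placeholder program type */
                .log_level = 4,                   /* bit 2: stats + final error */
        };
        struct bpf_object *obj;
        int prog_fd;

        return bpf_prog_load_xattr(&attr, &obj, &prog_fd);
}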
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:46 +0000 (21:27 -0700)]
bpf: increase verifier log limit
The existing 16Mbyte verifier log limit is not enough for log_level=2
even for small programs. Increase it to 1Gbyte.
Note that this is not a kernel memory limit.
It's the amount of memory user space provides to store
the verifier log. The kernel populates it 1k at a time.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:45 +0000 (21:27 -0700)]
bpf: increase complexity limit and maximum program size
Large verifier speed improvements allow increasing the
verifier complexity limit.
Now, regardless of the program composition and its size, it takes
little time for the verifier to hit the insn_processed limit.
On typical x86 machine non-debug kernel processes 1M instructions
in 1/10 of a second.
(before these speed improvements specially crafted programs
could be hitting multi-second verification times)
Full kasan kernel with debug takes ~1 second for the same 1M insns.
Hence bump the BPF_COMPLEXITY_LIMIT_INSNS limit to 1M.
Also increase the number of instructions per program
from 4k to internal BPF_COMPLEXITY_LIMIT_INSNS limit.
4k limit was confusing to users, since small programs with hundreds
of insns could be hitting BPF_COMPLEXITY_LIMIT_INSNS limit.
Sometimes adding more insns and bpf_trace_printk debug statements
would make the verifier accept the program while removing
code would make the verifier reject it.
Some user space application started to add #define MAX_FOO to
their programs and do:
MAX_FOO=100;
again:
compile with MAX_FOO;
try to load;
if (fails_to_load) { reduce MAX_FOO; goto again; }
to be able to fit maximum amount of processing into single program.
Other users artificially split their single program into a set of programs
and use all 32 iterations of tail_calls to increase compute limits.
And the most advanced folks used unlimited tc-bpf filter list
to execute many bpf programs.
Essentially the users managed to workaround 4k insn limit.
This patch removes the limit for root programs from uapi.
BPF_COMPLEXITY_LIMIT_INSNS is the kernel-internal limit,
and success in loading the program no longer depends on program size,
but only on the 'smartness' of the verifier.
The verifier will continue to get smarter with every kernel release.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:44 +0000 (21:27 -0700)]
bpf: verbose jump offset overflow check
Larger programs may trigger the 16-bit jump offset overflow check
during instruction patching. Make this error verbose; otherwise
users cannot decipher the error code without printks in the verifier.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:43 +0000 (21:27 -0700)]
bpf: convert temp arrays to kvcalloc
Temporary arrays used during program verification need to be vmalloc-ed
to support large bpf programs.
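The conversion itself is mechanical; a representative hunk looks roughly
like this (the array name is just an example of such a temporary
allocation):

/* before: insn_state = kcalloc(insn_cnt, sizeof(int), GFP_KERNEL); */
insn_state = kvcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
if (!insn_state)
        return -ENOMEM;
/* ... and the matching kfree(insn_state) becomes kvfree(insn_state) */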
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:42 +0000 (21:27 -0700)]
bpf: improve verification speed by not remarking live_read
With the large verifier speed improvement brought by the previous patch,
mark_reg_read() becomes the hottest function during verification.
On a typical program it consumes 40% of cpu.
mark_reg_read() walks the parentage chain of registers to mark parents as LIVE_READ.
Once a register is marked there is no need to remark it again in the future,
hence stop walking the chain once the first LIVE_READ is seen.
This optimization drops mark_reg_read() time from 40% of cpu to <1%
and gives an overall 2x improvement in verification speed.
For some programs the longest_mark_read_walk counter improves from ~500 to ~5.
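The early exit amounts to one extra check in the parentage walk, roughly
as follows (a sketch of the loop, not the verbatim patch):

while (parent) {
        /* if this parent was already marked LIVE_READ, so was the rest
         * of the chain on a previous walk; no need to re-mark it
         */
        if (parent->live & REG_LIVE_READ)
                break;
        parent->live |= REG_LIVE_READ;
        state = parent;
        parent = state->parent;
}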
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:41 +0000 (21:27 -0700)]
bpf: improve verification speed by dropping states
Branch instructions, branch targets and calls in a bpf program are
the places where the verifier remembers states that led to successful
verification of the program.
These states are used to prune brute force program analysis.
For unprivileged programs there is a limit of 64 states per such
'branching' instruction (the maximum length is tracked by the
max_states_per_insn counter introduced in the previous patch).
Simply reducing this threshold to 32 or lower increases the insn_processed
metric to the point that small valid programs get rejected.
For root programs there is no limit, and cilium programs can have
max_states_per_insn of 100 or higher.
Walking 100+ states multiplied by the number of 'branching' insns during
verification consumes a significant amount of cpu time.
It turned out a simple LRU-like mechanism can be used to remove states
that are unlikely to be helpful in future search pruning.
This patch introduces hit_cnt and miss_cnt counters:
hit_cnt - this many times this state successfully pruned the search
miss_cnt - this many times this state was not equivalent to other states
(and that other states were added to state list)
The heuristic introduced in this patch is:
if (sl->miss_cnt > sl->hit_cnt * 3 + 3)
/* drop this state from future considerations */
Higher numbers increase max_states_per_insn (allow more states to be
considered for pruning) and slow verification speed, but do not meaningfully
reduce insn_processed metric.
Lower numbers drop too many states and insn_processed increases too much.
Many different formulas were considered.
This one is simple and works well enough in practice.
(the analysis was done on selftests/progs/* and on cilium programs)
The end result is this heuristic improves verification speed by 10 times.
Large synthetic programs that used to take a second more now take
1/10 of a second.
In cases where max_states_per_insn used to be 100 or more, now it's ~10.
There is a slight increase in insn_processed for cilium progs:
                     before  after
bpf_lb-DLB_L3.o        1831   1838
bpf_lb-DLB_L4.o        3029   3218
bpf_lb-DUNKNOWN.o      1064   1064
bpf_lxc-DDROP_ALL.o   26309  26935
bpf_lxc-DUNKNOWN.o    33517  34439
bpf_netdev.o           9713   9721
bpf_overlay.o          6184   6184
bpf_lcx_jit.o         37335  39389
And 2-3 times improvement in the verification speed.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Tue, 2 Apr 2019 04:27:40 +0000 (21:27 -0700)]
bpf: add verifier stats and log_level bit 2
In order to understand the verifier bottlenecks add various stats
and extend log_level:
log_level 1 and 2 are kept as-is:
bit 0 - level=1 - print every insn and verifier state at branch points
bit 1 - level=2 - print every insn and verifier state at every insn
bit 2 - level=4 - print verifier error and stats at the end of verification
When the verifier rejects a program, libbpf currently tries to load it twice:
once with log_level=0 (no messages, only the error code is reported to user space)
and a second time with log_level=1 to tell the user why the verifier rejected it.
With the introduction of bit 2 (level=4), libbpf can choose to always use that
level and load programs once, since verification speed is not affected and
in case of error the verbose message will be available.
Note that the verifier stats are not part of uapi just like all other
verbose messages. They're expected to change in the future.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrii Nakryiko [Tue, 2 Apr 2019 16:49:50 +0000 (09:49 -0700)]
kbuild: add ability to generate BTF type info for vmlinux
This patch adds a new config option to trigger generation of BTF type
information from DWARF debuginfo for vmlinux and kernel modules through
pahole, which in turn relies on libbpf for the btf_dedup() algorithm.
The intent is to record compact type information of all types used
inside the kernel, including all the structs/unions/typedefs/etc. This
enables BPF's compile-once-run-everywhere ([0]) approach, in which
tracing programs that are inspecting kernel's internal data (e.g.,
struct task_struct) can be compiled on a system running some kernel
version, but can run on other kernel versions (and configurations)
without recompilation, even if the layout of structs changed and/or
some of the fields were added, removed, or renamed.
This is only possible if the BPF loader can get kernel type info to adjust
all the offsets correctly. This patch is a first step in this direction,
making sure that BTF type info is part of the Linux kernel image in a
non-loadable ELF section.
The BTF deduplication ([1]) algorithm typically provides 100x savings
compared to DWARF data, so the resulting .BTF section is not big and is
typically about 2MB in size.
[0] http://vger.kernel.org/lpc-bpf2018.html#session-2
[1] https://facebookmicrosites.github.io/bpf/blog/2018/11/14/btf-enhancement.html
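Assuming the option lands as described (it depends on DWARF debug info
being enabled and on pahole being available to convert DWARF to BTF at
link time), enabling it is a config fragment along these lines:

CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_BTF=y

The presence of the type info can then be checked with e.g.
"readelf -S vmlinux" showing a .BTF section.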
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann [Tue, 2 Apr 2019 21:17:19 +0000 (23:17 +0200)]
Merge branch 'bpf-selftest-clang-fixes'
Stanislav Fomichev says:
====================
This series contains small fixes to make bpf selftests compile cleanly
with clang. None of them are real problems, but it's nice to have
the option to use clang for the tests themselves.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Stanislav Fomichev [Tue, 2 Apr 2019 17:08:35 +0000 (10:08 -0700)]
selftests: bpf: remove duplicate .flags initialization in ctx_skb.c
verifier/ctx_skb.c:708:11: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Stanislav Fomichev [Tue, 2 Apr 2019 17:08:34 +0000 (10:08 -0700)]
selftests: bpf: fix -Wformat-invalid-specifier for bpf_obj_id.c
Use standard C99 %zu for sizeof, not GCC's custom %Zu:
bpf_obj_id.c:76:48: warning: invalid conversion specifier 'Z'
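A minimal illustration of the portable specifier:

#include <stdio.h>

int main(void)
{
        size_t n = sizeof(int);

        /* C99 %zu is understood by both gcc and clang; %Zu is a GNU-only extension */
        printf("%zu\n", n);
        return 0;
}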
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Stanislav Fomichev [Tue, 2 Apr 2019 17:08:33 +0000 (10:08 -0700)]
selftests: bpf: fix -Wformat-security warning for flow_dissector_load.c
flow_dissector_load.c:55:19: warning: format string is not a string literal (potentially insecure)
[-Wformat-security]
error(1, errno, command);
^~~~~~~
flow_dissector_load.c:55:19: note: treat the string as an argument to avoid this
error(1, errno, command);
^
"%s",
1 warning generated.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Stanislav Fomichev [Tue, 2 Apr 2019 17:08:32 +0000 (10:08 -0700)]
selftests: bpf: tests.h should depend on .c files, not the output
This makes sure we don't put headers as input files when doing
compilation, because clang complains about the following:
clang-9: error: cannot specify -o when generating multiple output files
../lib.mk:152: recipe for target 'xxx/tools/testing/selftests/bpf/test_verifier' failed
make: *** [xxx/tools/testing/selftests/bpf/test_verifier] Error 1
make: *** Waiting for unfinished jobs....
clang-9: error: cannot specify -o when generating multiple output files
../lib.mk:152: recipe for target 'xxx/tools/testing/selftests/bpf/test_progs' failed
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Yonghong Song [Fri, 29 Mar 2019 05:30:53 +0000 (22:30 -0700)]
bpf: add bpffs multi-dimensional array tests in test_btf
For multi-dimensional arrays like the one below,
int a[2][3]
both llvm and pahole generated one BTF_KIND_ARRAY type like
. element_type: int
. index_type: unsigned int
. number of elements: 6
Such a collapsed BTF_KIND_ARRAY type causes a divergence between BTF
and the user code. In the compile-once-run-everywhere
project, the header file is generated from BTF and used for the bpf
program, and the definition in the header file will be different
from what the user expects.
But the kernel actually supports chained multi-dimensional array
types properly. The above "int a[2][3]" can be represented as
Type #n:
. element_type: int
. index_type: unsigned int
. number of elements: 3
Type #(n+1):
. element_type: type #n
. index_type: unsigned int
. number of elements: 2
The following llvm commit
https://reviews.llvm.org/rL357215
also enables llvm to generate proper chained multi-dimensional arrays.
test_btf already has a raw test ("struct test #1") for chained
multi-dimensional arrays. This patch adds an amended bpffs test for
chained multi-dimensional arrays.
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 29 Mar 2019 19:05:36 +0000 (12:05 -0700)]
Merge branch 'variable-stack-access'
Andrey Ignatov says:
====================
The patch set adds support for stack access with variable offset from helpers.
Patch 1 is the main patch in the set and provides more details.
Patch 2 adds selftests for new functionality.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrey Ignatov [Fri, 29 Mar 2019 01:01:58 +0000 (18:01 -0700)]
selftests/bpf: Test variable offset stack access
Test different scenarios of indirect variable-offset stack access: out of
bound access (>0), min_off below initialized part of the stack,
max_off+size above initialized part of the stack, initialized stack.
Example of output:
...
#856/p indirect variable-offset stack access, out of bound OK
#857/p indirect variable-offset stack access, max_off+size > max_initialized OK
#858/p indirect variable-offset stack access, min_off < min_initialized OK
#859/p indirect variable-offset stack access, ok OK
...
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrey Ignatov [Fri, 29 Mar 2019 01:01:57 +0000 (18:01 -0700)]
bpf: Support variable offset stack access from helpers
Currently there is a difference in how the verifier checks memory access
for helper arguments for PTR_TO_MAP_VALUE and PTR_TO_STACK with regard to
the variable part of the offset.
check_map_access, which is used for PTR_TO_MAP_VALUE, can handle variable
offsets just fine, so that a BPF program can call a helper like this:
some_helper(map_value_ptr + off, size);
where the offset is unknown at load time, but is checked by the program to
be in a safe range (off >= 0 && off + size < map_value_size).
But that's not the case for check_stack_boundary, which is used for
PTR_TO_STACK: the same code with a pointer to the stack is rejected by the
verifier:
some_helper(stack_value_ptr + off, size);
For example:
0: (7a) *(u64 *)(r10 -16) = 0
1: (7a) *(u64 *)(r10 -8) = 0
2: (61) r2 = *(u32 *)(r1 +0)
3: (57) r2 &= 4
4: (17) r2 -= 16
5: (0f) r2 += r10
6: (18) r1 = 0xffff888111343a80
8: (85) call bpf_map_lookup_elem#1
invalid variable stack read R2 var_off=(0xfffffffffffffff0; 0x4)
Add support for variable offset access to check_stack_boundary so that
if the offset is checked by the program to be in a safe range it's
accepted by the verifier.
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Luca Boccassi [Thu, 28 Mar 2019 11:33:53 +0000 (11:33 +0000)]
tools/bpf: generate pkg-config file for libbpf
Generate a libbpf.pc file at build time so that users can rely
on pkg-config to find the library, its CFLAGS and LDFLAGS.
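Once libbpf.pc is installed, consumers can query the flags the usual way,
e.g.:

cc $(pkg-config --cflags libbpf) -o prog prog.c $(pkg-config --libs libbpf)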
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
David S. Miller [Thu, 28 Mar 2019 00:37:58 +0000 (17:37 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Eric Dumazet [Wed, 27 Mar 2019 19:40:33 +0000 (12:40 -0700)]
inet: switch IP ID generator to siphash
According to Amit Klein and Benny Pinkas, IP ID generation is too weak
and might be used by attackers.
Even with the recent net_hash_mix() fix ("netns: provide pure entropy for net_hash_mix"),
having a 64bit key and Jenkins hash is risky.
It is time to switch to siphash and its 128bit keys.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Amit Klein <aksecurity@gmail.com>
Reported-by: Benny Pinkas <benny@pinkas.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Wed, 27 Mar 2019 16:15:20 +0000 (16:15 +0000)]
net: phy: mdio-bcm-unimac: remove redundant !timeout check
The check for zero timeout is always true at the end of the preceding
while loop; the only other exit path in the loop is if the unimac MDIO
is not busy. Remove the redundant zero timeout check and always
return -ETIMEDOUT on this timeout return path.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 26 Mar 2019 15:34:55 +0000 (08:34 -0700)]
tcp: fix zerocopy and notsent_lowat issues
My recent patch had at least three problems :
1) TX zerocopy wants notification when skb is acknowledged,
thus we need to call skb_zcopy_clear() if the skb is
cached into sk->sk_tx_skb_cache
2) Some applications might expect precise EPOLLOUT
notifications, so we need to update sk->sk_wmem_queued
and call sk_mem_uncharge() from sk_wmem_free_skb()
in all cases. The SOCK_QUEUE_SHRUNK flag must also be set.
3) Reuse of saved skb should have used skb_cloned() instead
of simply checking if the fast clone has been freed.
Fixes: 472c2e07eef0 ("tcp: add one skb cache for tx")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Numan Siddique [Tue, 26 Mar 2019 00:43:46 +0000 (06:13 +0530)]
net: openvswitch: Add a new action check_pkt_len
This patch adds a new action - 'check_pkt_len' - which checks the
packet length and executes one set of actions if the packet
length is greater than the specified length, or executes
another set of actions if the packet length is less than or equal to it
(see the sketch after the attribute list below).
This action takes the following nlattrs:
* OVS_CHECK_PKT_LEN_ATTR_PKT_LEN - 'pkt_len' to check for
* OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_GREATER - Nested actions
to apply if the packet length is greater than the specified 'pkt_len'
* OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL - Nested
actions to apply if the packet length is less than or equal to the
specified 'pkt_len'.
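Conceptually the datapath semantics of the new action are simply the
following (pseudo-code, not the actual OVS datapath implementation):

/* check_pkt_len, conceptually */
if (packet_len > pkt_len)
        execute(actions_if_greater);
else
        execute(actions_if_lesser_equal);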
The main use case for adding this action is to solve the packet
drops caused by MTU mismatch in the OVN virtual networking solution.
When a VM (which belongs to an OVN logical switch) sends a packet
destined to go via the gateway router, and the nic which provides
external connectivity has a smaller MTU, OVS drops the packet
if the packet length is greater than this MTU.
With the help of this action, OVN will check the packet length
and, if it is greater than the MTU size, generate an
ICMP packet (type 3, code 4) that includes the next hop mtu in it
so that the sender can fragment the packets.
Reported-at: https://mail.openvswitch.org/pipermail/ovs-discuss/2018-July/047039.html
Suggested-by: Ben Pfaff <blp@ovn.org>
Signed-off-by: Numan Siddique <nusiddiq@redhat.com>
CC: Gregory Rose <gvrose8192@gmail.com>
CC: Pravin B Shelar <pshelar@ovn.org>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Tested-by: Greg Rose <gvrose8192@gmail.com>
Reviewed-by: Greg Rose <gvrose8192@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 27 Mar 2019 20:51:49 +0000 (13:51 -0700)]
Merge branch 'ethtool-add-support-for-Fast-Link-Down-as-new-PHY-tunable'
Heiner Kallweit says:
====================
ethtool: add support for Fast Link Down as new PHY tunable
This adds support for Fast Link Down as a new PHY tunable.
Fast Link Down reduces the time until a link down event is reported
for 1000BaseT. According to the standard it's 750ms, which is too long
for several use cases.
This is the kernel-related series; the ethtool userspace extension
I'd submit once the kernel part has been applied.
v2:
- add describing comment in patch 1
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Mon, 25 Mar 2019 18:35:41 +0000 (19:35 +0100)]
net: phy: marvell: add PHY tunable fast link down support for 88E1540
The 1000BaseT standard requires that a link is reported as down earliest
after 750ms. Several use cases however require a much faster detection
of a broken link. Fast Link Down supports this by intentionally
violating the standard. This patch exposes the Fast Link Down feature
of the 88E1540 and 88E6390. These PHYs can be found as internal
PHYs in several switches: 88E6352, 88E6240, 88E6176, 88E6172,
and 88E6390(X). Fast Link Down and EEE are mutually exclusive.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Mon, 25 Mar 2019 18:34:58 +0000 (19:34 +0100)]
ethtool: add PHY Fast Link Down support
This adds support for Fast Link Down as a new PHY tunable.
Fast Link Down reduces the time until a link down event is reported
for 1000BaseT. According to the standard it's 750ms, which is too long
for several use cases.
v2:
- add comment describing the constants
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bart Van Assche [Mon, 25 Mar 2019 16:17:23 +0000 (09:17 -0700)]
net/core: Allow the compiler to verify declaration and definition consistency
Instead of declaring a function in a .c file, declare it in a header
file and include that header file from the source files that define
and that use the function. That allows the compiler to verify
consistency of declaration and definition. See also commit
52267790ef52 ("sock: add MSG_ZEROCOPY") # v4.14.
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bart Van Assche [Mon, 25 Mar 2019 16:17:22 +0000 (09:17 -0700)]
net/core: Fix rtnetlink kernel-doc headers
This patch avoids that the following warnings are reported when building
with W=1:
net/core/rtnetlink.c:3580: warning: Function parameter or member 'ndm' not described in 'ndo_dflt_fdb_add'
net/core/rtnetlink.c:3580: warning: Function parameter or member 'tb' not described in 'ndo_dflt_fdb_add'
net/core/rtnetlink.c:3580: warning: Function parameter or member 'dev' not described in 'ndo_dflt_fdb_add'
net/core/rtnetlink.c:3580: warning: Function parameter or member 'addr' not described in 'ndo_dflt_fdb_add'
net/core/rtnetlink.c:3580: warning: Function parameter or member 'vid' not described in 'ndo_dflt_fdb_add'
net/core/rtnetlink.c:3580: warning: Function parameter or member 'flags' not described in 'ndo_dflt_fdb_add'
net/core/rtnetlink.c:3718: warning: Function parameter or member 'ndm' not described in 'ndo_dflt_fdb_del'
net/core/rtnetlink.c:3718: warning: Function parameter or member 'tb' not described in 'ndo_dflt_fdb_del'
net/core/rtnetlink.c:3718: warning: Function parameter or member 'dev' not described in 'ndo_dflt_fdb_del'
net/core/rtnetlink.c:3718: warning: Function parameter or member 'addr' not described in 'ndo_dflt_fdb_del'
net/core/rtnetlink.c:3718: warning: Function parameter or member 'vid' not described in 'ndo_dflt_fdb_del'
net/core/rtnetlink.c:3861: warning: Function parameter or member 'skb' not described in 'ndo_dflt_fdb_dump'
net/core/rtnetlink.c:3861: warning: Function parameter or member 'cb' not described in 'ndo_dflt_fdb_dump'
net/core/rtnetlink.c:3861: warning: Function parameter or member 'filter_dev' not described in 'ndo_dflt_fdb_dump'
net/core/rtnetlink.c:3861: warning: Function parameter or member 'idx' not described in 'ndo_dflt_fdb_dump'
net/core/rtnetlink.c:3861: warning: Excess function parameter 'nlh' description in 'ndo_dflt_fdb_dump'
Cc: Hubert Sokolowski <hubert.sokolowski@intel.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bart Van Assche [Mon, 25 Mar 2019 16:17:21 +0000 (09:17 -0700)]
net/core: Document __skb_flow_dissect() flags argument
This patch avoids that the following warning is reported when building
with W=1:
warning: Function parameter or member 'flags' not described in '__skb_flow_dissect'
Cc: Tom Herbert <tom@herbertland.com>
Fixes: cd79a2382aa5 ("flow_dissector: Add flags argument to skb_flow_dissector functions") # v4.3.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bart Van Assche [Mon, 25 Mar 2019 16:17:20 +0000 (09:17 -0700)]
net/core: Document all dev_ioctl() arguments
This patch avoids that the following warnings are reported when building
with W=1:
net/core/dev_ioctl.c:378: warning: Function parameter or member 'ifr' not described in 'dev_ioctl'
net/core/dev_ioctl.c:378: warning: Function parameter or member 'need_copyout' not described in 'dev_ioctl'
net/core/dev_ioctl.c:378: warning: Excess function parameter 'arg' description in 'dev_ioctl'
Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: 44c02a2c3dc5 ("dev_ioctl(): move copyin/copyout to callers") # v4.16.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bart Van Assche [Mon, 25 Mar 2019 16:17:19 +0000 (09:17 -0700)]
net/core: Document reuseport_add_sock() bind_inany argument
This patch avoids that the following warning is reported when building
with W=1:
warning: Function parameter or member 'bind_inany' not described in 'reuseport_add_sock'
Cc: Martin KaFai Lau <kafai@fb.com>
Fixes: 2dbb9b9e6df6 ("bpf: Introduce BPF_PROG_TYPE_SK_REUSEPORT") # v4.19.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Sat, 23 Mar 2019 18:45:13 +0000 (19:45 +0100)]
net: dsa: mv88e6xxx: remove unneeded cmode initialization
This partially reverts ed8fe20205ac ("net: dsa: mv88e6xxx: prevent
interrupt storm caused by mv88e6390x_port_set_cmode"). I missed
that chip->ports[].cmode is overwritten anyway by the cmode
caching in mv88e6xxx_setup().
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sudarsana Reddy Kalluru [Wed, 27 Mar 2019 11:40:43 +0000 (04:40 -0700)]
bnx2x: Utilize FW 7.13.11.0.
Commit 8fcf0ec44c11f ("bnx2x: Add FW 7.13.11.0") added said .bin FW to
linux-firmware; this patch incorporates the FW in the bnx2x driver.
This introduces a few FW fixes and support for Tx VLAN filtering.
Please consider applying it to the 'net-next' tree.
Signed-off-by: Sudarsana Reddy Kalluru <skalluru@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kristian Evensen [Wed, 27 Mar 2019 10:16:03 +0000 (11:16 +0100)]
fou: Support binding FoU socket
An FoU socket is currently bound to the wildcard-address. While this
works fine, there are several use-cases where the use of the
wildcard-address is not desirable. For example, I use FoU on some
multi-homed servers and would like to use FoU on only one of the
interfaces.
This commit adds support for binding FoU sockets to a given source
address/interface, as well as connecting the socket to a given
destination address/port. udp_tunnel already provides the required
infrastructure, so most of the code added is for exposing and setting
the different attributes (local address, peer address, etc.).
The lookups performed when we add, delete or get an FoU socket have also
been updated to compare all the attributes a user can set. Since the
comparison now involves several elements, I have added a separate
comparison function instead of open-coding it.
In order to test the code and ensure that the new comparison code works
correctly, I started by creating a wildcard socket bound to port 1234 on
my machine. I then tried to create a non-wildcarded socket bound to the
same port, as well as fetching and deleting the socket (including source
address, peer address or interface index in the netlink request). The
create, fetch and delete requests all failed. Deleting/fetching the
socket was only successful when my netlink request attributes matched
those used to create the socket.
I then repeated the tests, but with a socket bound to a local ip
address, a socket bound to a local address + interface, and a bound
socket that was also «connected» to a peer. Add only worked when no
socket with the matching source address/interface (or wildcard) existed,
while fetch/delete was only successful when all attributes matched.
In addition to testing that the new code works, I also checked that the
current behavior is kept. If none of the new attributes are provided,
then an FoU-socket is configured as before (i.e., wildcarded). If any
of the new attributes are provided, the FoU-socket is configured as
expected.
v1->v2:
* Fixed building with IPv6 disabled (kbuild).
* Fixed a return type warning and made the ugly comparison function more
readable (kbuild).
* Describe more in detail what has been tested (thanks David Miller).
* Make peer port required if peer address is specified.
Signed-off-by: Kristian Evensen <kristian.evensen@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Wed, 27 Mar 2019 19:22:57 +0000 (12:22 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
"Fixes here and there, a couple new device IDs, as usual:
1) Fix BQL race in dpaa2-eth driver, from Ioana Ciornei.
2) Fix 64-bit division in iwlwifi, from Arnd Bergmann.
3) Fix documentation for some eBPF helpers, from Quentin Monnet.
4) Some UAPI bpf header sync with tools, also from Quentin Monnet.
5) Set descriptor ownership bit at the right time for jumbo frames in
stmmac driver, from Aaro Koskinen.
6) Set IFF_UP properly in tun driver, from Eric Dumazet.
7) Fix load/store doubleword instruction generation in powerpc eBPF
JIT, from Naveen N. Rao.
8) nla_nest_start() return value checks all over, from Kangjie Lu.
9) Fix asoc_id handling in SCTP after the SCTP_*_ASSOC changes this
merge window. From Marcelo Ricardo Leitner and Xin Long.
10) Fix memory corruption with large MTUs in stmmac, from Aaro
Koskinen.
11) Do not use ipv4 header for ipv6 flows in TCP and DCCP, from Eric
Dumazet.
12) Fix topology subscription cancellation in tipc, from Erik Hugne.
13) Memory leak in genetlink error path, from Yue Haibing.
14) Validate control actions properly in packet scheduler, from Davide
Caratti.
15) Even if we get EEXIST, we still need to rehash if a shrink was
delayed. From Herbert Xu.
16) Fix interrupt mask handling in interrupt handler of r8169, from
Heiner Kallweit.
17) Fix leak in ehea driver, from Wen Yang"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (168 commits)
dpaa2-eth: fix race condition with bql frame accounting
chelsio: use BUG() instead of BUG_ON(1)
net: devlink: skip info_get op call if it is not defined in dumpit
net: phy: bcm54xx: Encode link speed and activity into LEDs
tipc: change to check tipc_own_id to return in tipc_net_stop
net: usb: aqc111: Extend HWID table by QNAP device
net: sched: Kconfig: update reference link for PIE
net: dsa: qca8k: extend slave-bus implementations
net: dsa: qca8k: remove leftover phy accessors
dt-bindings: net: dsa: qca8k: support internal mdio-bus
dt-bindings: net: dsa: qca8k: fix example
net: phy: don't clear BMCR in genphy_soft_reset
bpf, libbpf: clarify bump in libbpf version info
bpf, libbpf: fix version info and add it to shared object
rxrpc: avoid clang -Wuninitialized warning
tipc: tipc clang warning
net: sched: fix cleanup NULL pointer exception in act_mirr
r8169: fix cable re-plugging issue
net: ethernet: ti: fix possible object reference leak
net: ibm: fix possible object reference leak
...
David S. Miller [Wed, 27 Mar 2019 18:19:13 +0000 (11:19 -0700)]
Merge branch '100GbE' of git://git./linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
100GbE Intel Wired LAN Driver Updates 2019-03-26
This series contains more updates to the ice driver only.
Jeremiah provides his first patch to the Linux kernel to clean up
unnecessary newlines in driver log messages.
Mitch updates the ice driver to use the existing status codes from the
iavf driver so that when errors occur, it will not report nonsensical
results. He also adds support for VF admin queue interrupts by
programming the VPINT_MBX_CTL register array.
Brett adds a check for a bit that we set while preparing for a reset, to
ensure we are prepared to do a proper reset. He also implements PCI error
handling operations, audits the hot path with pahole and makes
modifications based on the results, since two structures were taking up
more space than necessary due to cache alignment issues, and fixes an
issue where, when flow control was disabled, the state of flow control
was being displayed as "Unknown".
Anirudh fixes adaptive interrupt moderation by adding code that was
missed from the initial patch adding that support. He also cleans up a
function prototype that was never implemented and does additional code
cleanup by removing unneeded braces and redundant code comments.
Akeem fixes an issue that occurs when the VF attempts to remove the
default LAN/MAC address programmed by the administrator, by updating
the error message to explicitly say that the VF cannot change a MAC
programmed by the PF.
Preethi fixes the driver to not fall into the error path when an added
filter already exists, but instead continue to process the rest of the
function, and adds appropriate checks after adding MAC filters.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 27 Mar 2019 18:10:58 +0000 (11:10 -0700)]
Merge branch 'net-mvpp2-Classifier-updates-and-cleanups'
Maxime Chevallier says:
====================
net: mvpp2: Classifier updates and cleanups
In preparation for the future addition of classification offload
support, this series cleans-up the current code dealing with the
classification engines.
As of today, the classification code only deals with RSS, which is a
very narrow use-case when considering all of the classifier's features.
This led to design and naming decisions that don't really stand when
considering using more classification features.
This series therefore includes quite a lot of renaming of functions and
macros, and tries to make the naming schemes more consistent and the
code more readable.
The debugfs interface that allows reading the various Parsing and
Classification engines has been cleaned up and made more generic,
allowing the Flow Table and C2 engine hit counters to be read on a
per-entry basis.
The Classifier itself has been made more robust by introducing the
lu_type field in classification lookups, that prevents false-positive
matches in the future. We also initialise the various engines in a more
extensive and less error-prone way by assigning default values to all
entries in the C2 and Flow table.
This is a pretty big series considering it's mostly cleanup, but my goal
is to make the future series that will contain new big features easier
to review, focusing on the real logic.
Besides the debugfs interface, this series doesn't intend to introduce any
new features or changes in behaviour.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:22 +0000 (09:44 +0100)]
net: mvpp2: cls: Rework C2 engine macros
The C2 classification engine has a 256-entry TCAM, used for ternary
matches on an 8-byte Header Extracted Key. For now, we compute the
various indices for classification and RSS that use this engine thanks
to a set of macros.
This commit mainly renames the macros used, to make it clear that they
should be used with the C2 engine, but also makes use of the full 256
entries in the engine. For now, the C2 entries are only used for RSS.
These entries are put at the end of the TCAM range, in case we want to
add higher priority matches later on.
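As an illustration of the resulting naming scheme (the values and the
exact macro bodies below are placeholders, not the definitions introduced
by this patch):

  /* Illustrative sketch only: C2-prefixed index macros, with the RSS
   * entries placed at the end of the 256-entry TCAM so that lower
   * indices stay free for higher-priority matches added later. */
  #define MVPP22_CLS_C2_N_ENTRIES         256
  #define MVPP22_CLS_C2_RSS_ENTRY(port)   (MVPP22_CLS_C2_N_ENTRIES - 1 - (port))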
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:21 +0000 (09:44 +0100)]
net: mvpp2: cls: Initialize lookup priorities for all entries in the flow
When classifying a packet pertaining to a given flow, the classifier
will issue multiple lookup commands until it finds one with the 'last'
bit set. It expects all priorities to be assigned contiguously (although
not necessarily in an ordered fashion) from 0 to the number of lookups.
We can initialize this once, and make sure unused lookups are given an
empty port map. This avoids having to maintain priorities and the
information of which lookup is the last.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:20 +0000 (09:44 +0100)]
net: mvpp2: cls: Invalidate all C2 entries except the ones we use
C2 TCAM entries can be invalidated to avoid unwanted matches. Make sure
all entries are invalidated at init, then validate only the ones we use.
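A rough sketch of the init-time loop this implies (the helper and macro
names here are illustrative):

  /* Invalidate every C2 TCAM entry up front; the entries that are
   * actually programmed later are re-validated individually. */
  for (index = 0; index < MVPP22_CLS_C2_N_ENTRIES; index++)
          mvpp2_cls_c2_inv_set(priv, index);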
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:19 +0000 (09:44 +0100)]
net: mvpp2: cls: Rename the flow table macros
The Flow Table dictates what lookups will be issued for each flow type.
The lookup sequence for each flow is similar, and the index of each
lookup is computed by some macros.
There are similar mechanisms for the C2 TCAM lookups, so in order to
avoid confusion, rename the flow table index computing macros with a
common prefix.
The only difference in behaviour is that we now use the very first entry
in the flow for the RSS lookup (the first entry was previously unused).
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:18 +0000 (09:44 +0100)]
net: mvpp2: cls: Don't use the sequence attribute for classification
The classifier allows combining multiple lookups into one "sequence" that
is counted as a single lookup to an engine, with a single result.
We don't actually use that feature, so remove any places where we set
this field, so that the classifier doesn't try to interpret these
fields.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:17 +0000 (09:44 +0100)]
net: mvpp2: cls: Rename classifer per-port functions
This commit renames some of the classifier functions to follow the
naming 'mvpp2_port_*' that's used for functions that act on a given port.
This commit is purely cosmetic.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:16 +0000 (09:44 +0100)]
net: mvpp2: cls: Move C2 read/write helpers around
Move C2 read/write helpers higher in the file to ease future work that
relies on these helpers.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:15 +0000 (09:44 +0100)]
net: mvpp2: cls: Write C2 TCAM data last when writing a C2 entry
When writing a C2 entry to hardware, some register writes will only
take effect when the TCAM_DATA4 register is written. This includes all
C2 TCAM registers, and the C2 invalidate register.
To make sure we always write C2 entries correctly, document that
behaviour with a comment, and move TCAM writes to the end of the
mvpp2_cls_c2_write helper.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:14 +0000 (09:44 +0100)]
net: mvpp2: cls: Use iterators to go through the cls_table
The cls_table is a global read-only table containing the different
parameters that are used by various tables in the classifier. It
describes the links between the Header Parser, the decoding table and
the flow_table.
There are several possible ways we want to iterate over that table,
depending on which classifier engine we want to configure. For the Header
Parser, we want to iterate over each entry. For the Decoding table, we
want to iterate over each entry having a unique flow_id. Finally, when
configuring an ethtool flow, we want to iterate over each entry having a
unique flow_id and that has a given flow_type.
This commit introduces some iterators to both provide syntactic sugar and
also clarify the way we want to iterate over the table.
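A sketch of the kind of iterator this refers to (the macro body is
illustrative, not necessarily the exact one added by the patch):

  /* Illustrative: walk every entry of the read-only cls_flow table;
   * variants can additionally skip duplicate flow_ids or filter on a
   * given flow_type. */
  #define for_each_cls_flow_id(i) \
          for ((i) = 0; (i) < MVPP2_N_PRS_FLOWS; (i)++)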
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:13 +0000 (09:44 +0100)]
net: mvpp2: debugfs: Allow reading the C2 engine table from debugfs
PPv2's Classifier uses multiple engines to perform classification. So
far, only the C2 engine is used, which has a 256-entry TCAM.
So far, we have only accessed the relevant entries of the C2 engine,
which are the ones implementing RSS. To implement and debug ntuple
classification offload, being able to see the hit count for each C2
entry is helpful, so this commit moves the logic to a dedicated
directory allowing access to each entry.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:12 +0000 (09:44 +0100)]
net: mvpp2: debugfs: Allow reading the flow table from debugfs
The Classifier flow table is the central part of the PPv2 Classifier,
since it describes all classification steps performed for each flow.
It has 512 entries, shared between all ports, which are divided into
sequences that are pointed to by the decoding table. Being able to see
which entries in the flow table were hit is a key point when
implementing and debugging classification offload.
This commit allows reading each flow table entry's hit count
independently, with a clear-on-read behaviour.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:11 +0000 (09:44 +0100)]
net: mvpp2: debugfs: Store debugfs entries data in mvpp2 struct
The current way to store the required private data needed to access
various debugfs entries is to alloc them on the fly, share them within
the entries that need to access them, and finally have one entry free
that data upon closing. This leads to hard-to-maintain code, and is very
error-prone.
This commit stores all debugfs related data in the same place, making
sure this is allocated only when the debugfs directory is successfully
created, so that we don't waste memory when we don't use this feature.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:10 +0000 (09:44 +0100)]
net: mvpp2: cls: Make the flow definitions const
The cls_flow table represents the overall configuration of the
classifier, used to match the different traffic classes in the Parsing
and Classification engines.
This configuration is static and applies to all PPv2 instances; we must
therefore keep it const so that no modifications of this table are
performed at runtime.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:09 +0000 (09:44 +0100)]
net: mvpp2: cls: Rename MVPP2_N_FLOWS to MVPP2_N_PRS_FLOWS
The macro definition MVPP2_N_FLOWS is ambiguous because it really
represents the number of entries in the Header Parser that are used to
identify the classification flows.
Rename the macro to clearly state that we represent the number of flows
in the Header Parser.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:08 +0000 (09:44 +0100)]
net: mvpp2: cls: use Lookup Type in classification engines
The PPv2 classifier allows performing multiple lookups on the same
engine when classifying a packet. These lookups can match similar parts
of a packet header, but perform different actions upon matching. To
differentiate these types of lookups, it's possible to specify a Lookup
Type in the flow table entries, which will be part of the key for the
lookup engines.
This commit introduces the use of Lookup Types for C2 matches. Since for
now we only perform C2 lookups to enable RSS, we only need one Lookup
Type.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:07 +0000 (09:44 +0100)]
net: mvpp2: cls: Start cls flow entries from beginning of table
The Classifier flow table has 512 entries that contain lookup
commands executed consecutively for every flow. Since we have 21
different flows, we have to carefully manage the flow table use.
As of today, the start index of a lookup sequence is computed
directly based on the flow->id. There are 8 reserved flow ids, from
0-7, which don't have any corresponding sequence in the flow table. We
can therefore ignore them when computing the index, and make it so that
the first non-reserved flow points to the very beginning of the flow table.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Suggested-by: Alan Winkowski <walan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:06 +0000 (09:44 +0100)]
net: mvpp2: cls: Add missing MAC_DA field extraction
PPv2's classifier supports extracting the MAC Destination Address from the
L2 header to perform RSS and flow steering. Add the missing case when
setting the Header Extracted Key fields in the flow table.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Wed, 27 Mar 2019 08:44:05 +0000 (09:44 +0100)]
net: mvpp2: Don't use an int to store netdev_features_t
int is not long enough to store all netdev features; use the correct
dedicated type to store them when building the list of dev->features.
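A minimal sketch of the pattern (the feature flags chosen here are just
examples):

  /* Build the feature list in netdev_features_t, which is wide enough
   * to hold every NETIF_F_* bit, instead of a plain int. */
  netdev_features_t features = NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_RXCSUM;

  dev->features = features;
  dev->hw_features = features;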
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 27 Mar 2019 04:44:13 +0000 (21:44 -0700)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2019-03-26
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) introduce bpf_tcp_check_syncookie() helper for XDP and tc, from Lorenz.
2) allow bpf_skb_ecn_set_ce() in tc, from Peter.
3) numerous bpf tc tunneling improvements, from Willem.
4) and other miscellaneous improvements from Adrian, Alan, Daniel, Ivan, Stanislav.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Anirudh Venkataramanan [Tue, 19 Feb 2019 23:04:11 +0000 (15:04 -0800)]
ice: Remove "2 BITS" comment
Some enums in ice_tx_desc_cmd_bits have a trailing /* 2 BITS */ comment,
but the value has just one bit set (e.g. ICE_TX_DESC_CMD_L4T_EOFT_SCTP
has the value 0x200, i.e. only bit 9 is set). This is confusing and
misleading, so remove the comment.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Brett Creeley [Tue, 19 Feb 2019 23:04:10 +0000 (15:04 -0800)]
ice: Update comment regarding the ITR_GRAN_S
Since the driver now hard codes the ITR granularity to 2 us in the
GLINT_CTL register, the comment next to ITR_GRAN_S needs to be updated.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Anirudh Venkataramanan [Tue, 19 Feb 2019 23:04:09 +0000 (15:04 -0800)]
ice: Update function header for __ice_vsi_get_qs
Remove some redundant text in the function header for __ice_vsi_get_qs.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Anirudh Venkataramanan [Tue, 19 Feb 2019 23:04:08 +0000 (15:04 -0800)]
ice: Remove unnecessary braces
Single-statement if conditions don't need braces. Remove them.
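A generic illustration of the style change (not taken from the driver
itself):

  /* Before */
  if (err) {
          return err;
  }

  /* After */
  if (err)
          return err;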
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Anirudh Venkataramanan [Tue, 19 Feb 2019 23:04:07 +0000 (15:04 -0800)]
ice: Remove unused function prototype
Commit
37bb83901286 ("ice: Move common functions out of ice_main.c part
7/7") seems to have inadvertently introduced a function prototype for
ice_vsi_cfg_tc without a corresponding function implementation. Remove it.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Brett Creeley [Tue, 19 Feb 2019 23:04:06 +0000 (15:04 -0800)]
ice: Add missing case in print_link_msg for printing flow control
Currently we aren't checking for the ICE_FC_NONE case for the current
flow control mode. This is causing "Unknown" to be printed for the
current flow control method if flow control is disabled. Fix this by
adding the case for ICE_FC_NONE to print "None".
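A sketch of the kind of case being added (the surrounding switch,
variable names and the ICE_FC_FULL case are simplified assumptions; only
ICE_FC_NONE comes from the commit itself):

  const char *fc;

  switch (fc_mode) {
  case ICE_FC_FULL:
          fc = "Rx/Tx";
          break;
  case ICE_FC_NONE:
          fc = "None";    /* previously this fell through to "Unknown" */
          break;
  default:
          fc = "Unknown";
          break;
  }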
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Brett Creeley [Tue, 19 Feb 2019 23:04:05 +0000 (15:04 -0800)]
ice: Audit hotpath structures with pahole
Currently the ice_q_vector structure and ice_ring_container structure
are taking up more space than necessary due to cache alignment holes
and unnecessary variables, respectively. This is not helping the
driver's performance. The following fixes were done to improve cache
alignment, reduce wasted space, and increase performance.
1. Remove the ice_latency_range enum as it is unused.
2. Remove the latency_range variable in the ice_ring_container structure.
3. Change the size of the itr_idx in the ice_ring_container structure
from an int to a u16. This reduced the size of the ice_ring_container
structure to 32 bytes so it has no holes or padding.
4. Re-arrange the ice_q_vector structure using pahole to align
members as best as possible with regard to the 64-byte cache line size,
as sketched below.
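The following is a hypothetical container illustrating the kind of change
in items 3 and 4, not the actual ice_ring_container layout:

  #include <linux/types.h>

  /* Hypothetical example: shrinking itr_idx from int to u16 removes a
   * padding hole so the structure packs tightly; verify the resulting
   * layout with pahole. */
  struct ring_container_example {
          void *ring;                 /* 8 bytes: pointer to the ring */
          unsigned int total_bytes;   /* 4 bytes: bytes processed */
          unsigned int total_pkts;    /* 4 bytes: packets processed */
          u16 itr_idx;                /* 2 bytes: was a 4-byte int */
  };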
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Preethi Banala [Tue, 19 Feb 2019 23:04:04 +0000 (15:04 -0800)]
ice: Do not bail out when filter already exists
If the filter already exists, do not go through the error path but
instead continue to process the rest of the function. Hence, add an
appropriate check after adding MAC filters.
Signed-off-by: Preethi Banala <preethi.banala@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Akeem G Abodunrin [Wed, 20 Feb 2019 17:11:48 +0000 (09:11 -0800)]
ice: Fix issue with VF attempt to delete default MAC address
This patch fixes an issue that occurs when a VF attempts to remove the
default LAN/MAC address, which is programmed by the administrator.
We shouldn't return an error for the call by the VF, but continue with
the remaining steps to handle the MAC opcode. Also update the dev_err
message to explicitly say that the VF can't change a MAC programmed by the PF.
Also change "mac" to "MAC" for kernel print statements in the same file.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Linus Torvalds [Tue, 26 Mar 2019 21:25:48 +0000 (14:25 -0700)]
Merge tag 'nfs-for-5.1-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client bugfixes from Trond Myklebust:
"Highlights include:
Stable fixes:
- Fix nfs4_lock_state refcounting in nfs4_alloc_{lock,unlock}data()
- fix mount/umount race in nlmclnt.
- NFSv4.1 don't free interrupted slot on open
Bugfixes:
- Don't let RPC_SOFTCONN tasks time out if the transport is connected
- Fix a typo in nfs_init_timeout_values()
- Fix layoutstats handling during read failovers
- fix uninitialized variable warning"
* tag 'nfs-for-5.1-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
SUNRPC: fix uninitialized variable warning
pNFS/flexfiles: Fix layoutstats handling during read failovers
NFS: Fix a typo in nfs_init_timeout_values()
SUNRPC: Don't let RPC_SOFTCONN tasks time out if the transport is connected
NFSv4.1 don't free interrupted slot on open
NFS: fix mount/umount race in nlmclnt.
NFS: Fix nfs4_lock_state refcounting in nfs4_alloc_{lock,unlock}data()
Mitch Williams [Tue, 19 Feb 2019 23:04:02 +0000 (15:04 -0800)]
ice: enable VF admin queue interrupts
The VPINT_MBX_CTL register array must be programmed to enable VF admin
queue interrupts. Without this, VFs never get interrupts on vector 0,
and some VF drivers will fail to init.
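Roughly, the programming boils down to a write per VF along the lines of
the sketch below (the enable-bit macro name is an assumption; only the
VPINT_MBX_CTL register array is named by the commit):

  /* Enable the mailbox interrupt cause for this VF so that admin queue
   * events fire on vector 0. */
  wr32(hw, VPINT_MBX_CTL(vf_id), VPINT_MBX_CTL_CAUSE_ENA_M);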
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Anirudh Venkataramanan [Tue, 19 Feb 2019 23:04:01 +0000 (15:04 -0800)]
ice: Fix for adaptive interrupt moderation
commit
63f545ed1285 ("ice: Add support for adaptive interrupt moderation")
was meant to add support for adaptive interrupt moderation but there was
an error on my part while formatting the patch, and thus only part of the
patch ended up being submitted.
This patch rectifies the error by adding the rest of the code.
Fixes: 63f545ed1285 ("ice: Add support for adaptive interrupt moderation")
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Brett Creeley [Wed, 13 Feb 2019 18:51:15 +0000 (10:51 -0800)]
ice: Implement pci_error_handler ops
This patch implements the following pci_error_handler ops:
.error_detected = ice_pci_err_detected
.slot_reset = ice_pci_err_slot_reset
.reset_notify = ice_pci_err_reset_notify
.reset_prepare = ice_pci_err_reset_prepare
.reset_done = ice_pci_err_reset_done
.resume = ice_pci_err_resume
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Brett Creeley [Wed, 13 Feb 2019 18:51:14 +0000 (10:51 -0800)]
ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset
Currently we check if the __ICE_PREPARED_FOR_RESET bit is set prior to
calling ice_prepare_for_reset in ice_reset_subtask(), but we aren't
checking that bit in ice_do_reset() before calling
ice_prepare_for_reset(). This is not consistent and can cause issues if
ice_prepare_for_reset() is called prior to ice_do_reset(). Fix this by
checking if the __ICE_PREPARED_FOR_RESET bit is set internal to
ice_prepare_for_reset().
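A sketch of the resulting shape of the function (simplified; the state
bitmap field name is an assumption based on common ice driver
conventions):

  static void ice_prepare_for_reset(struct ice_pf *pf)
  {
          /* Nothing to do if we have already prepared for this reset */
          if (test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
                  return;

          /* ... existing preparation steps ... */

          set_bit(__ICE_PREPARED_FOR_RESET, pf->state);
  }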
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Mitch Williams [Wed, 13 Feb 2019 18:51:13 +0000 (10:51 -0800)]
ice: use virt channel status codes
When communicating with the AVF driver, we need to use the status codes
from virtchnl.h, not our own ice-specific codes. Without this, when an
error occurs, the VF will report nonsensical results.
NOTE: this depends on changes made to include/linux/avf/virtchnl.h by
commit
bb58fd7eeffc ("i40e: Update status codes")
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jeremiah Kyle [Wed, 13 Feb 2019 18:51:12 +0000 (10:51 -0800)]
ice: Remove unnecessary newlines from log messages
Two log messages contained newlines in the middle of the message. This
resulted in unexpected driver log output.
This patch removes the newlines to restore consistency with the rest of
the driver log messages.
Signed-off-by: Jeremiah Kyle <jeremiah.kyle@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alakesh Haloi [Tue, 26 Mar 2019 02:00:01 +0000 (02:00 +0000)]
SUNRPC: fix uninitialized variable warning
Avoid the following compiler warnings about possibly uninitialized variables:
net/sunrpc/xprtsock.c: In function ‘xs_read_stream_request.constprop’:
net/sunrpc/xprtsock.c:525:10: warning: ‘read’ may be used uninitialized in this function [-Wmaybe-uninitialized]
return read;
^~~~
net/sunrpc/xprtsock.c:529:23: warning: ‘ret’ may be used uninitialized in this function [-Wmaybe-uninitialized]
return ret < 0 ? ret : read;
~~~~~~~~~~~~~~^~~~~~
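One straightforward way to silence such warnings is to initialize the
variables at their declaration; the snippet below is a generic sketch of
that pattern, not necessarily the exact fix applied in
xs_read_stream_request():

  /* Give both variables a defined value on every path the compiler can
   * see, so -Wmaybe-uninitialized has nothing to complain about. */
  ssize_t ret = 0;
  size_t read = 0;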
Signed-off-by: Alakesh Haloi <alakesh.haloi@gmail.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Flavio Leitner [Mon, 25 Mar 2019 18:58:31 +0000 (15:58 -0300)]
openvswitch: add seqadj extension when NAT is used.
When the conntrack is initialized, there is no helper attached
yet so the nat info initialization (nf_nat_setup_info) skips
adding the seqadj ext.
A helper is attached later when the conntrack is not confirmed
but is going to be committed. In this case, if NAT is needed, add the
seqadj ext as well.
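Conceptually, the added check looks something like the sketch below
(simplified; the exact placement in the OVS conntrack code differs):

  /* When attaching a helper to an unconfirmed conntrack that uses NAT,
   * make sure the sequence-adjustment extension is present as well. */
  if (!nf_ct_is_confirmed(ct) && (ct->status & IPS_NAT_MASK) &&
      !nfct_seqadj_ext_add(ct))
          return -EINVAL;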
Fixes: 16ec3d4fbb96 ("openvswitch: Fix cached ct with helper.")
Signed-off-by: Flavio Leitner <fbl@sysclose.org>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stanislav Fomichev [Tue, 19 Mar 2019 21:53:24 +0000 (14:53 -0700)]
selftests: bpf: don't depend on hardcoded perf sample_freq
When running stacktrace_build_id_nmi, try to query
kernel.perf_event_max_sample_rate sysctl and use it as a sample_freq.
If there was an error reading the sysctl, fall back to 5000.
kernel.perf_event_max_sample_rate sysctl can drift and/or can be
adjusted by the perf tool, so assuming a fixed number might be
problematic on a long running machine.
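A sketch of the approach in plain C (the helper name and error handling
are illustrative):

  #include <stdio.h>

  /* Read kernel.perf_event_max_sample_rate, falling back to 5000 if
   * the sysctl cannot be read. */
  static int read_perf_max_sample_freq(void)
  {
          int freq = 5000;
          FILE *f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");

          if (f) {
                  if (fscanf(f, "%d", &freq) != 1)
                          freq = 5000;
                  fclose(f);
          }
          return freq;
  }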
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Ioana Ciornei [Mon, 25 Mar 2019 13:42:39 +0000 (13:42 +0000)]
dpaa2-eth: use netif_receive_skb_list
Take advantage of the software Rx batching by using
netif_receive_skb_list instead of napi_gro_receive.
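The general pattern looks like the sketch below (simplified; not the
driver's exact Rx path):

  /* Collect received skbs on a list during the poll loop ... */
  LIST_HEAD(rx_list);

  /* ... for each frame, instead of napi_gro_receive(napi, skb): */
  list_add_tail(&skb->list, &rx_list);

  /* ... and hand the whole batch to the stack once the loop is done: */
  netif_receive_skb_list(&rx_list);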
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>