Julian Wiedmann [Wed, 20 Dec 2017 19:10:57 +0000 (20:10 +0100)]
s390/qeth: use ip*_eth_mc_map helpers
Get rid of some wrapper indirection, and stop accessing the skb at
hard-coded offsets.
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elena Reshetova [Wed, 20 Dec 2017 19:10:56 +0000 (20:10 +0100)]
qeth: convert qeth_reply.refcnt from atomic_t to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further
increments aren't allowed
- counter schema uses basic atomic operations
(set, inc, inc_not_zero, dec_and_test, etc.)
Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situation and be exploitable.
The variable qeth_reply.refcnt is used as pure reference counter.
Convert it to refcount_t and fix up the operations.
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
[jwi: removed the WARN_ONs. Use CONFIG_REFCOUNT_FULL if you care.]
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
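For context, a minimal sketch of what such an atomic_t-to-refcount_t conversion typically looks like (the struct and function names below are illustrative, not taken from the qeth patch):

  #include <linux/refcount.h>
  #include <linux/slab.h>

  struct reply_example {                  /* hypothetical stand-in struct */
          refcount_t refcnt;              /* was: atomic_t refcnt; */
  };

  static struct reply_example *example_alloc(void)
  {
          struct reply_example *r = kzalloc(sizeof(*r), GFP_KERNEL);

          if (r)
                  refcount_set(&r->refcnt, 1);    /* was: atomic_set(..., 1) */
          return r;
  }

  static void example_get(struct reply_example *r)
  {
          refcount_inc(&r->refcnt);               /* was: atomic_inc() */
  }

  static void example_put(struct reply_example *r)
  {
          /* was: if (atomic_dec_and_test(&r->refcnt)) kfree(r); */
          if (refcount_dec_and_test(&r->refcnt))
                  kfree(r);
  }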
Elena Reshetova [Wed, 20 Dec 2017 19:10:55 +0000 (20:10 +0100)]
net: convert lcs_reply.refcnt from atomic_t to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further
increments aren't allowed
- counter schema uses basic atomic operations
(set, inc, inc_not_zero, dec_and_test, etc.)
Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situation and be exploitable.
The variable lcs_reply.refcnt is used as pure reference counter.
Convert it to refcount_t and fix up the operations.
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
[jwi: removed the WARN_ONs. Use CONFIG_REFCOUNT_FULL if you care.]
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Dec 2017 19:33:03 +0000 (14:33 -0500)]
Merge tag 'batadv-next-for-davem-20171220' of git://git.open-mesh.org/linux-merge
Simon Wunderlich says:
====================
This feature/cleanup patchset includes the following patches:
- bump version strings, by Simon Wunderlich
- de-inline hash functions to save memory footprint, by Denys Vlasenko
- Add License information to various files, by Sven Eckelmann (3 patches)
- Change batman_adv.h from ISC to MIT, by Sven Eckelmann
- Improve various includes, by Sven Eckelmann (5 patches)
- Lots of kernel-doc work by Sven Eckelmann (8 patches)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Dec 2017 19:00:26 +0000 (14:00 -0500)]
Merge branch 'replace-tcp_set_state-tracepoint-with-inet_sock_set_state'
Yafang Shao says:
====================
replace tcp_set_state tracepoint with inet_sock_set_state
According to the discussion in the mail thread
https://patchwork.kernel.org/patch/10099243/,
tcp_set_state tracepoint is renamed to inet_sock_set_state tracepoint and is
moved to include/trace/events/sock.h.
With this new tracepoint, we can trace AF_INET/AF_INET6 sock state transitions.
As there's only a single tracepoint for inet, I didn't create a new trace
file named trace/events/inet_sock.h; the tracepoint is placed in
include/trace/events/sock.h instead.
Currently TCP/DCCP/SCTP state transitions are traced with this tracepoint.
- Why not more protocols?
If we really think that another protocol should be traced, I will modify the
code to trace it.
I just want to keep the code simple and not output useless information.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Yafang Shao [Wed, 20 Dec 2017 03:12:54 +0000 (11:12 +0800)]
net: tracepoint: using sock_set_state tracepoint to trace SCTP state transition
With changes in inet_ files, SCTP state transitions are traced with
inet_sock_set_state tracepoint.
As the SCTP state names, e.g. SCTP_SS_CLOSED and SCTP_SS_ESTABLISHED,
have the same values as the TCP state names, the output still prints
the TCP state names, which keeps the code simple.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yafang Shao [Wed, 20 Dec 2017 03:12:53 +0000 (11:12 +0800)]
net: tracepoint: using sock_set_state tracepoint to trace DCCP state transition
With changes in inet_ files, DCCP state transitions are traced with
inet_sock_set_state tracepoint.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yafang Shao [Wed, 20 Dec 2017 03:12:52 +0000 (11:12 +0800)]
net: sock: replace sk_state_load with inet_sk_state_load and remove sk_state_store
sk_state_load is only used by AF_INET/AF_INET6, so rename it to
inet_sk_state_load and move it into inet_sock.h.
sk_state_store is removed as it is not used any more.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yafang Shao [Wed, 20 Dec 2017 03:12:51 +0000 (11:12 +0800)]
net: tracepoint: replace tcp_set_state tracepoint with inet_sock_set_state tracepoint
As sk_state is a common field of struct sock, the state
transition tracepoint should not be a TCP-specific feature.
It now traces all AF_INET state transitions, so rename this
tracepoint to inet_sock_set_state with some minor changes and move it
into trace/events/sock.h.
We don't need to create a file named trace/events/inet_sock.h for this single
tracepoint.
Two helpers are introduced to trace sk_state transitions:
- void inet_sk_state_store(struct sock *sk, int newstate);
- void inet_sk_set_state(struct sock *sk, int state);
As trace headers should not be included in other header files,
they are defined in sock.c.
Protocols such as SCTP may be compiled as modules, hence
inet_sk_set_state() is exported.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
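A minimal sketch, inferred from the description above, of what the two helpers could look like (the trace_inet_sock_set_state() call name is assumed from the tracepoint's name; the real bodies may differ in detail):

  #include <linux/export.h>
  #include <net/sock.h>
  #include <trace/events/sock.h>

  void inet_sk_set_state(struct sock *sk, int state)
  {
          trace_inet_sock_set_state(sk, sk->sk_state, state);
          sk->sk_state = state;
  }
  EXPORT_SYMBOL(inet_sk_set_state);       /* SCTP may be built as a module */

  void inet_sk_state_store(struct sock *sk, int newstate)
  {
          trace_inet_sock_set_state(sk, sk->sk_state, newstate);
          /* keep the release ordering the old sk_state_store() provided */
          smp_store_release(&sk->sk_state, newstate);
  }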
Steven Rostedt (VMware) [Wed, 20 Dec 2017 03:12:50 +0000 (11:12 +0800)]
tcp: Export to userspace the TCP state names for the trace events
The TCP trace events (specifically tcp_set_state) map enums to symbol
names via __print_symbolic(). But this only works for reading trace events
from the tracefs trace files. If perf or trace-cmd were to record these
events, the event format file does not convert the enum names into numbers,
and you get something like:
__print_symbolic(REC->oldstate,
{ TCP_ESTABLISHED, "TCP_ESTABLISHED" },
{ TCP_SYN_SENT, "TCP_SYN_SENT" },
{ TCP_SYN_RECV, "TCP_SYN_RECV" },
{ TCP_FIN_WAIT1, "TCP_FIN_WAIT1" },
{ TCP_FIN_WAIT2, "TCP_FIN_WAIT2" },
{ TCP_TIME_WAIT, "TCP_TIME_WAIT" },
{ TCP_CLOSE, "TCP_CLOSE" },
{ TCP_CLOSE_WAIT, "TCP_CLOSE_WAIT" },
{ TCP_LAST_ACK, "TCP_LAST_ACK" },
{ TCP_LISTEN, "TCP_LISTEN" },
{ TCP_CLOSING, "TCP_CLOSING" },
{ TCP_NEW_SYN_RECV, "TCP_NEW_SYN_RECV" })
Where trace-cmd and perf do not know the values of those enums.
Use the TRACE_DEFINE_ENUM() macros that will have the trace events convert
the enum strings into their values at system boot. This will allow perf and
trace-cmd to see actual numbers and not enums:
__print_symbolic(REC->oldstate,
{ 1, "TCP_ESTABLISHED" },
{ 2, "TCP_SYN_SENT" },
{ 3, "TCP_SYN_RECV" },
{ 4, "TCP_FIN_WAIT1" },
{ 5, "TCP_FIN_WAIT2" },
{ 6, "TCP_TIME_WAIT" },
{ 7, "TCP_CLOSE" },
{ 8, "TCP_CLOSE_WAIT" },
{ 9, "TCP_LAST_ACK" },
{ 10, "TCP_LISTEN" },
{ 11, "TCP_CLOSING" },
{ 12, "TCP_NEW_SYN_RECV" })
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
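The fix amounts to listing each state with TRACE_DEFINE_ENUM() in the trace header so the values are resolved at boot; a short sketch of the pattern (the full list simply repeats it for every TCP state):

  /* in the trace header, before the event definition */
  TRACE_DEFINE_ENUM(TCP_ESTABLISHED);
  TRACE_DEFINE_ENUM(TCP_SYN_SENT);
  TRACE_DEFINE_ENUM(TCP_SYN_RECV);
  /* ... one entry per state, through TCP_NEW_SYN_RECV ... */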
Prashant Bhole [Wed, 20 Dec 2017 03:18:57 +0000 (12:18 +0900)]
netdevsim: correctly check return value of debugfs_create_dir
- Check the return value with IS_ERR_OR_NULL()
- Add error handling where it was not handled
Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
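A hedged sketch of the pattern being fixed (the function and directory names are illustrative, not the actual netdevsim code):

  #include <linux/debugfs.h>
  #include <linux/err.h>

  static struct dentry *example_ddir;     /* hypothetical debugfs directory */

  static int example_debugfs_init(void)
  {
          example_ddir = debugfs_create_dir("netdevsim", NULL);
          /* debugfs_create_dir() can return NULL or an ERR_PTR,
           * so both cases must be treated as failure. */
          if (IS_ERR_OR_NULL(example_ddir))
                  return -ENOMEM;
          return 0;
  }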
Haishuang Yan [Wed, 20 Dec 2017 02:07:01 +0000 (10:07 +0800)]
ip6_gre: fix potential memory leak in ip6erspan_rcv
If md is NULL, tun_dst must be freed, otherwise it causes a memory
leak.
Fixes: ef7baf5e083c ("ip6_gre: add ip6 erspan collect_md mode")
Cc: William Tu <u9012063@gmail.com>
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
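A hedged sketch of the kind of error path described above (the surrounding flow is simplified; the actual ip6erspan_rcv() code may differ):

  /* tun_dst is a struct metadata_dst * allocated earlier for this packet,
   * md points at its tunnel options area. */
  md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
  if (!md) {
          /* don't leak the metadata dst allocated for this packet */
          dst_release(&tun_dst->dst);
          return PACKET_REJECT;
  }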
Haishuang Yan [Wed, 20 Dec 2017 02:07:00 +0000 (10:07 +0800)]
ip_gre: fix potential memory leak in erspan_rcv
If md is NULL, tun_dst must be freed, otherwise it causes a memory
leak.
Fixes: 1a66a836da6 ("gre: add collect_md mode to ERSPAN tunnel")
Cc: William Tu <u9012063@gmail.com>
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haishuang Yan [Wed, 20 Dec 2017 02:21:47 +0000 (10:21 +0800)]
ip6_gre: fix error path when ip6erspan_rcv failed
Same as the IPv4 code: when ip6erspan_rcv() returns PACKET_REJECT, we
should call icmpv6_send() to send an ICMP unreachable message in the error path.
Fixes: 5a963eb61b7c ("ip6_gre: Add ERSPAN native tunnel support")
Acked-by: William Tu <u9012063@gmail.com>
Cc: William Tu <u9012063@gmail.com>
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haishuang Yan [Wed, 20 Dec 2017 02:21:46 +0000 (10:21 +0800)]
ip_gre: fix error path when erspan_rcv failed
When erspan_rcv() returns PACKET_REJECT, we shouldn't call ipgre_rcv() to
process the packet again; instead, send an ICMP unreachable message in the error
path.
Fixes: 84e54fe0a5ea ("gre: introduce native tunnel support for ERSPAN")
Acked-by: William Tu <u9012063@gmail.com>
Cc: William Tu <u9012063@gmail.com>
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haishuang Yan [Wed, 20 Dec 2017 01:53:19 +0000 (09:53 +0800)]
ip6_gre: fix a potential issue in ip6erspan_rcv
pskb_may_pull() can change skb->data, so we need to load ipv6h/ershdr at
the right place.
Fixes: 5a963eb61b7c ("ip6_gre: Add ERSPAN native tunnel support")
Cc: William Tu <u9012063@gmail.com>
Acked-by: William Tu <u9012063@gmail.com>
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
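The general pattern, as a hedged sketch (hdr_len and the header struct names are illustrative):

  if (!pskb_may_pull(skb, hdr_len))
          return PACKET_REJECT;

  /* pskb_may_pull() may reallocate the skb header and invalidate any
   * previously cached pointers, so (re)load them only after the pull. */
  ipv6h  = ipv6_hdr(skb);
  ershdr = (struct erspanhdr *)(skb->data + sizeof(*ipv6h));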
Andy Shevchenko [Tue, 19 Dec 2017 21:22:15 +0000 (23:22 +0200)]
net: amd-xgbe: Get rid of custom hex_dump_to_buffer()
Get rid of yet another custom hex_dump_to_buffer().
The output is slightly changed, i.e. each byte is followed by white space.
Note, we don't use print_hex_dump() here since the original code uses
netdev_dbg().
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Tue, 19 Dec 2017 21:12:56 +0000 (16:12 -0500)]
net: Clarify dev_weight documentation for LRO and GRO_HW.
Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 20 Dec 2017 17:51:11 +0000 (12:51 -0500)]
Merge branch 'netdevsim-couple-of-build-warning-fixes'
Jakub Kicinski says:
====================
netdevsim: couple of build warning fixes
This series fixes two harmless build warning about a symbol which
should be static and an unused variable.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 19 Dec 2017 20:10:43 +0000 (12:10 -0800)]
netdevsim: bpf: remove unused variable
skip_sw is set but no longer used.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 19 Dec 2017 20:10:42 +0000 (12:10 -0800)]
netdevsim: declare struct device_type as static
struct device_type nsim_dev_type created for SR-IOV support
should be static.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
William Tu [Tue, 19 Dec 2017 18:37:02 +0000 (10:37 -0800)]
selftests: rtnetlink: add gretap test cases
Add test cases for gretap and ip6gretap, native mode
and external (collect metadata) mode.
Signed-off-by: William Tu <u9012063@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Shevchenko [Tue, 19 Dec 2017 18:31:03 +0000 (20:31 +0200)]
net: pasemi: Replace mac address parsing
Replace sscanf() with mac_pton().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Shevchenko [Tue, 19 Dec 2017 18:20:44 +0000 (20:20 +0200)]
net: bonding: Replace mac address parsing
Replace sscanf() with mac_pton().
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Shevchenko [Tue, 19 Dec 2017 18:10:53 +0000 (20:10 +0200)]
bridge: Use helpers to handle MAC address
Use %pM to print the MAC address, mac_pton() to convert it from ASCII to
binary format, and ether_addr_copy() to copy it.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
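A hedged sketch of the three helpers in use (the function and buffer names are illustrative):

  #include <linux/etherdevice.h>
  #include <linux/kernel.h>

  static int example_parse_mac(const char *buf, u8 *out)
  {
          u8 addr[ETH_ALEN];

          if (!mac_pton(buf, addr))               /* parse "aa:bb:cc:dd:ee:ff" */
                  return -EINVAL;
          if (!is_valid_ether_addr(addr))
                  return -EADDRNOTAVAIL;

          ether_addr_copy(out, addr);             /* copy all ETH_ALEN bytes */
          pr_info("parsed address %pM\n", out);   /* print with %pM */
          return 0;
  }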
Alexey Kodanev [Tue, 19 Dec 2017 13:59:21 +0000 (16:59 +0300)]
ip6_vti: adjust vti mtu according to mtu of lower device
LTP/udp6_ipsec_vti tests fail when sending large UDP datagrams over
ip6_vti that require fragmentation and the underlying device has an
MTU smaller than 1500 plus some extra space for headers. This happens
because ip6_vti, by default, sets the MTU to ETH_DATA_LEN and does not update
it based on the destination address or link parameters. Further
attempts to send UDP packets may succeed because the pmtu gets updated on
ICMPV6_PKT_TOOBIG in vti6_err().
In case the lower device has a larger MTU, e.g. 9000, ip6_vti works
but does not use the maximum possible size; output packets are limited to 1500.
The above cases require manual MTU setup after ip6_vti creation. However,
ip_vti already updates the MTU based on the lower device via ip_tunnel_bind_dev().
Here is the example when the lower device MTU is set to 9000:
# ip a sh ltp_ns_veth2
ltp_ns_veth2@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 ...
inet 10.0.0.2/24 scope global ltp_ns_veth2
inet6 fd00::2/64 scope global
# ip li add vti6 type vti6 local fd00::2 remote fd00::1
# ip li show vti6
vti6@NONE: <POINTOPOINT,NOARP> mtu 1500 ...
link/tunnel6 fd00::2 peer fd00::1
After the patch:
# ip li add vti6 type vti6 local fd00::2 remote fd00::1
# ip li show vti6
vti6@NONE: <POINTOPOINT,NOARP> mtu 8832 ...
link/tunnel6 fd00::2 peer fd00::1
Reported-by: Petr Vorel <pvorel@suse.cz>
Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 19:52:13 +0000 (14:52 -0500)]
Merge branch 'nfp-flower-add-Geneve-tunnel-support'
Simon Horman says:
====================
nfp: flower: add Geneve tunnel support
John Hurley says:
This patchset adds support for offloading the encap and decap of Geneve
tunnels to the NFP. In both cases, specifying well known port 6081 is a
requirement for rule offload.
Geneve firmware support has been recently added, so the patchset includes
the reading of a fw symbol that defines a bitmap of newly supported
features. Geneve will only be offloaded if the fw supports it. The new
symbol is added in fw r5646.
Geneve option fields are not supported as either a match or an action due
to their current exclusion from TC flower. Because Geneve (as both a match
and an action) behaves the same as other UDP tunnels such as VXLAN, generic
functions are created that handle both Geneve and VXLAN. It is anticipated
that these functions will be modified to support options in future
patches.
The removal of an unused variable 'tun_dst_mask' is included as a separate
patch here. This does not affect functionality.
Also included are modifications to the test framework to check that the
new encap and decap features are functioning correctly.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Tue, 19 Dec 2017 16:58:29 +0000 (17:58 +0100)]
nfp: flower: compile Geneve encap actions
Generate rules for the NFP to encapsulate packets in Geneve tunnels. Move
the vxlan action code to generic udp tunnel actions and use core code for
both vxlan and Geneve.
Only support outputting to well known port 6081. Setting tunnel options
is not supported yet.
Only attempt to offload if the fw supports Geneve.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Tue, 19 Dec 2017 16:58:28 +0000 (17:58 +0100)]
nfp: flower: compile Geneve match fields
Compile Geneve match fields for offloading to the NFP. The addition of
Geneve overflows the 8 bit key_layer field, so apply extended metadata to
the match cmsg allowing up to 32 more key_layer fields.
Rather than adding new Geneve blocks, move the vxlan code to generic ipv4
udp tunnel structs and use these for both vxlan and Geneve.
Matches are only supported when specifically mentioning well known port
6081. Geneve tunnel options are not yet included in the match.
Only offload Geneve if the fw supports it - include check for this.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Tue, 19 Dec 2017 16:58:27 +0000 (17:58 +0100)]
nfp: flower: read extra feature support from fw
Extract the _abi_flower_extra_features symbol from the fw which gives a 64
bit bitmap of new features (on top of the flower base support) that the fw
can offload. Store this bitmap in the priv data associated with each app.
If the symbol does not exist, set the bitmap to 0.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Hurley [Tue, 19 Dec 2017 16:58:26 +0000 (17:58 +0100)]
nfp: flower: remove unused tun_mask variable
The tunnel dest IP is required for separate offload to the NFP. It is
already verified that a dest IP must be present and must be an exact
match in the flower rule. Therefore, we can just extract the IP from the
generated offload rule and remove the unused mask variable. The function
is then no longer required to return the IP separately.
Because tun_dst is localised to tunnel matches, move the declaration to
the tunnel if branch.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ganesh Goudar [Tue, 19 Dec 2017 01:52:28 +0000 (07:22 +0530)]
cxgb4: RSS table is 4k for T6
The RSS table is 4k for T6 and later cards; add a check for this.
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cong Wang [Mon, 18 Dec 2017 22:34:26 +0000 (14:34 -0800)]
net_sched: properly check for empty skb array on error path
First, the check of &q->ring.queue against NULL is wrong, it
is always false. We should check the value rather than the address.
Secondly, we need the same check in pfifo_fast_reset() too,
as both ->reset() and ->destroy() are called in qdisc_destroy().
Fixes: c5ad119fb6c0 ("net: sched: pfifo_fast use skb_array")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
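A hedged illustration of the first issue (the structure is a simplified stand-in, not the actual pfifo_fast code):

  #include <linux/types.h>

  struct example_skb_array {              /* simplified stand-in for the qdisc priv */
          struct {
                  void **queue;
          } ring;
  };

  static bool example_ring_uninitialised(struct example_skb_array *q)
  {
          /* Wrong: the address of an embedded member is never NULL, so a
           * check like "!&q->ring.queue" is always false and never fires. */

          /* Right: test the pointer value stored in the member. */
          return !q->ring.queue;
  }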
Thomas Falcon [Mon, 18 Dec 2017 18:52:40 +0000 (12:52 -0600)]
ibmvnic: Include header descriptor support for ARP packets
In recent tests with new adapters, it was discovered that ARP
packets were not being properly processed. This patch adds
support for ARP packet headers to be passed to backing adapters,
if necessary.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 19:08:20 +0000 (14:08 -0500)]
Merge branch 'ibmvnic-Fix-and-increase-maximum-TX-RX-queues'
Thomas Falcon says:
====================
ibmvnic: Fix and increase maximum TX/RX queues
This series renames IBMVNIC_MAX_TX_QUEUES to IBMVNIC_MAX_QUEUES since
it is used to allocate both RX and TX queues. The value is also increased
to accommodate newer hardware.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Mon, 18 Dec 2017 18:52:12 +0000 (12:52 -0600)]
ibmvnic: Increase maximum number of RX/TX queues
Increase the number of queues allocated to accommodate recent
network adapter inclusions on the IBM vNIC platform.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Mon, 18 Dec 2017 18:52:11 +0000 (12:52 -0600)]
ibmvnic: Rename IBMVNIC_MAX_TX_QUEUES to IBMVNIC_MAX_QUEUES
This value denotes the maximum number of TX queues but is used
to allocate both RX and TX queues.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 19:04:52 +0000 (14:04 -0500)]
Merge tag 'wireless-drivers-next-for-davem-2017-12-18' of git://git./linux/kernel/git/kvalo/wireless-drivers-next
The drivers/net/wireless/intel/iwlwifi/pcie/drv.c conflict was
resolved using a diff provided by Kalle in his pull request.
Kalle Valo says:
====================
wireless-drivers-next patches for 4.16
A bigger pull request this time, the most visible change being the new
driver mt76. But there's also Kconfig refactoring in ath9k and ath10k,
work beginning in iwlwifi to have rate scaling in firmware/hardware,
wcn3990 support getting closer in ath10k and lots of smaller changes.
mt76
* a new driver for MT76x2e, a 2x2 PCIe 802.11ac chipset by MediaTek
ath10k
* enable multiqueue support for all hw using mac80211 wake_tx_queue op
* new Kconfig option ATH10K_SPECTRAL to save RAM
* show tx stats on QCA9880
* new qcom,ath10k-calibration-variant DT entry
* WMI layer support for wcn3990
ath9k
* new Kconfig option ATH9K_COMMON_SPECTRAL to save RAM
wcn36xx
* hardware scan offload support
wil6210
* run-time PM support when interface is down
iwlwifi
* initial work for rate-scaling offload
* Support for new FW API version 36
* Rename the temporary hw name A000 to 22000
ssb
* make SSB a menuconfig to ease disabling it all
mwl8k
* enable non-DFS 5G channels 149-165
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ganesh Goudar [Mon, 18 Dec 2017 14:15:22 +0000 (19:45 +0530)]
cxgb4: Report tid start range correctly for T6
For T6, tid start range should be read from
LE_DB_ACTIVE_TABLE_START_INDEX_A register.
Signed-off-by: Arjun Vynipadath <arjun@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 18:53:39 +0000 (13:53 -0500)]
Merge branch 'for-upstream' of git://git./linux/kernel/git/bluetooth/bluetooth-next
Johan Hedberg says:
====================
pull request: bluetooth-next 2017-12-18
Here's the first bluetooth-next pull request for the 4.16 kernel.
- hci_ll: multiple cleanups & fixes
- Remove Gustavo Padovan from the MAINTAINERS file
- Support BLE Advertising while connected (if the controller can do it)
- DT updates for TI chips
- Various other smaller cleanups & fixes
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Lukas Wunner [Mon, 18 Dec 2017 10:17:07 +0000 (11:17 +0100)]
net: ks8851: Support DT-provided MAC address
Allow the boot loader to specify the MAC address in the device tree
to override the EEPROM, or in case no EEPROM is present.
Cc: Ben Dooks <ben@simtec.co.uk>
Cc: Tristram Ha <tristram.ha@micrel.com>
Cc: David J. Choi <david.choi@micrel.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 16:07:17 +0000 (11:07 -0500)]
Merge branch 'bcm63xx_enet-remove-mac_id-usage'
Jonas Gorski says:
====================
bcm63xx_enet: remove mac_id usage
This patchset aims at reducing the platform device id number usage with
the target of making it eventually possible to probe the driver through OF.
Runtested on BCM6358.
Since the patches touch mostly net/, they should go through net-next.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jonas Gorski [Sun, 17 Dec 2017 16:02:55 +0000 (17:02 +0100)]
bcm63xx_enet: use platform device id directly for miibus name
Directly use the platform device for generating the miibus name. This
removes the last user of bcm_enet_priv::mac_id and we can remove the
field.
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jonas Gorski [Sun, 17 Dec 2017 16:02:54 +0000 (17:02 +0100)]
bcm63xx_enet: remove pointless mac_id check
Enabling the ephy clock for mac 1 is harmless, and the actual usage of
the ephy is not restricted to mac 0, so we might as well remove the
check.
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jonas Gorski [Sun, 17 Dec 2017 16:02:53 +0000 (17:02 +0100)]
bcm63xx_enet: use platform data for dma channel numbers
To reduce the reliance on device ids, pass the dma channel numbers to
the enet devices as platform data.
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jonas Gorski [Sun, 17 Dec 2017 16:02:52 +0000 (17:02 +0100)]
bcm63xx_enet: just use "enet" as the clock name
Now that we have the individual clocks available as "enet" we
don't need to rely on the device id for them anymore.
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 15:59:44 +0000 (10:59 -0500)]
Merge branch 'net-speedup-vxlan-geneve-tunnel-dismantle'
Haishuang Yan says:
====================
net: speedup geneve/vxlan tunnels dismantle
This patch series add batching to vxlan/geneve tunnels so that netns
dismantles are less costly.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Haishuang Yan [Sat, 16 Dec 2017 09:54:50 +0000 (17:54 +0800)]
geneve: speedup geneve tunnels dismantle
Since we now hold the RTNL lock in geneve_exit_net, it's better to batch them
to speed up geneve tunnel dismantle.
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Haishuang Yan [Sat, 16 Dec 2017 09:54:49 +0000 (17:54 +0800)]
vxlan: speedup vxlan tunnels dismantle
Since we now hold the RTNL lock in vxlan_exit_net, it's better to batch them
to speed up vxlan tunnel dismantle.
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Zhu Yanjun [Sat, 16 Dec 2017 09:31:03 +0000 (04:31 -0500)]
forcedeth: remove duplicate structure member in xmit
Since both first_tx_ctx and tx_skb point to the head of the tx ctx, it is not
necessary to use two structure members to statically indicate
the head of the tx ctx, so first_tx_ctx is removed.
CC: Srinivas Eeda <srinivas.eeda@oracle.com>
CC: Joe Jin <joe.jin@oracle.com>
CC: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 19 Dec 2017 15:38:37 +0000 (10:38 -0500)]
Merge branch 'net-NETIF_F_GRO_HW'
Michael Chan says:
====================
Introduce NETIF_F_GRO_HW
Introduce NETIF_F_GRO_HW feature flag and convert drivers that support
hardware GRO to use the new flag.
v5:
- Documentation changes requested by Alexander Duyck.
- bnx2x changes requested by Manish Chopra to enable LRO by default, and
disable GRO_HW if disable_tpa module parameter is set.
v4:
- more changes requested by Alexander Duyck:
- check the GRO_HW/GRO dependency in the drivers' ndo_fix_features().
- Reverse the order of RXCSUM and GRO_HW dependency check in
netdev_fix_features().
- No propagation in netdev_disable_gro_hw().
v3:
- Let driver's ndo_fix_features() disable NETIF_F_LRO when NETIF_F_GRO_HW
is set instead of doing it in common netdev_fix_features().
v2:
- NETIF_F_GRO_HW flag propagation between upper and lower devices not
required (see patch 1).
- NETIF_F_GRO_HW depends on NETIF_F_GRO and NETIF_F_RXCSUM.
- Add dev_disable_gro_hw() to disable GRO_HW for generic XDP.
- Use ndo_fix_features() on all 3 drivers to drop GRO_HW when it is not
supported
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 16 Dec 2017 08:09:44 +0000 (03:09 -0500)]
qede: Use NETIF_F_GRO_HW.
Advertise NETIF_F_GRO_HW and set edev->gro_disable according to the
feature flag. Add qede_fix_features() to drop NETIF_F_GRO_HW if
XDP is running or MTU does not support GRO_HW or GRO is not set.
qede_change_mtu() also checks and disables GRO_HW if MTU is not
supported.
Cc: Ariel Elior <Ariel.Elior@cavium.com>
Cc: everest-linux-l2@cavium.com
Acked-by: Manish Chopra <manish.chopra@cavium.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 16 Dec 2017 08:09:43 +0000 (03:09 -0500)]
bnx2x: Use NETIF_F_GRO_HW.
Advertise NETIF_F_GRO_HW and turn on TPA_MODE_GRO when NETIF_F_GRO_HW
is set. Disable NETIF_F_GRO_HW in bnx2x_fix_features() if the MTU
does not support TPA_MODE_GRO or GRO is not set. bnx2x_change_mtu() also
needs to disable NETIF_F_GRO_HW if the MTU does not support it.
Original parameter disable_tpa will continue to disable LRO and GRO_HW.
Preserve the original behavior of enabling LRO by default. User has
to run ethtool -K to explicitly enable GRO_HW.
Cc: Ariel Elior <Ariel.Elior@cavium.com>
Cc: everest-linux-l2@cavium.com
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Acked-by: Manish Chopra <manish.chopra@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 16 Dec 2017 08:09:42 +0000 (03:09 -0500)]
bnxt_en: Use NETIF_F_GRO_HW.
Advertise NETIF_F_GRO_HW in hw_features if hardware GRO is supported.
In bnxt_fix_features(), disable GRO_HW and LRO if current hardware
configuration does not allow it. GRO_HW depends on GRO. GRO_HW is
also mutually exclusive with LRO. XDP setup will now rely on
bnxt_fix_features() to turn off aggregation. During chip init, turn on
or off hardware GRO based on NETIF_F_GRO_HW in features flag.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 16 Dec 2017 08:09:41 +0000 (03:09 -0500)]
net: Disable GRO_HW when generic XDP is installed on a device.
Hardware should not aggregate any packets when generic XDP is installed.
Cc: Ariel Elior <Ariel.Elior@cavium.com>
Cc: everest-linux-l2@cavium.com
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 16 Dec 2017 08:09:40 +0000 (03:09 -0500)]
net: Introduce NETIF_F_GRO_HW.
Introduce NETIF_F_GRO_HW feature flag for NICs that support hardware
GRO. With this flag, we can now independently turn on or off hardware
GRO when GRO is on. Previously, drivers were using NETIF_F_GRO to
control hardware GRO, so it could not be independently turned on or
off without affecting GRO.
Hardware GRO (just like GRO) guarantees that packets can be re-segmented
by TSO/GSO to reconstruct the original packet stream. Logically,
GRO_HW should depend on GRO since it is a subset, but we will let
individual drivers enforce this dependency as they see fit.
Since NETIF_F_GRO is not propagated between upper and lower devices,
NETIF_F_GRO_HW should follow suit since it is a subset of GRO. In other
words, a lower device can independently have GRO/GRO_HW enabled or disabled
and no feature propagation is required. This will preserve the current
GRO behavior. This can be changed later if we decide to propagate GRO/
GRO_HW/RXCSUM from upper to lower devices.
Cc: Ariel Elior <Ariel.Elior@cavium.com>
Cc: everest-linux-l2@cavium.com
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
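A hedged sketch of how a driver's ndo_fix_features() callback might enforce these dependencies (the exact policy is left to each driver, as noted above):

  #include <linux/netdevice.h>

  static netdev_features_t example_fix_features(struct net_device *dev,
                                                netdev_features_t features)
  {
          /* GRO_HW is a subset of GRO: drop it if GRO is being turned off */
          if (!(features & NETIF_F_GRO))
                  features &= ~NETIF_F_GRO_HW;

          /* in this sketch, hardware GRO and LRO are mutually exclusive */
          if (features & NETIF_F_GRO_HW)
                  features &= ~NETIF_F_LRO;

          return features;
  }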
Tonghao Zhang [Thu, 14 Dec 2017 13:51:59 +0000 (05:51 -0800)]
sock: Hide unused variable when !CONFIG_PROC_FS.
When CONFIG_PROC_FS is disabled, we will not use the prot_inuse
counter. This adds an #ifdef to hide the variable definition in
that case. This is not a bugfix. But we can save bytes when there
are many network namespaces.
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Martin Zhang <zhangjunweimartin@didichuxing.com>
Signed-off-by: Tonghao Zhang <zhangtonghao@didichuxing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tonghao Zhang [Thu, 14 Dec 2017 13:51:58 +0000 (05:51 -0800)]
sock: Move the socket inuse to namespace.
In some cases, we want to know how many sockets are in use in
different _net_ namespaces. It's a key resource metric.
This patch adds a member to struct netns_core: a counter
for sockets in use in the _net_ namespace. The patch increments/decrements
the counter in sk_alloc(), sk_clone_lock() and __sk_free().
This patch does not count sockets created in the kernel.
It's not very useful for userspace to know how many kernel
sockets we created.
The main reasons for doing this are:
1. When Linux calls 'do_exit' for a process to exit, the functions
'exit_task_namespaces' and 'exit_task_work' are called sequentially.
'exit_task_namespaces' may have destroyed the _net_ namespace, but
'sock_release' called in 'exit_task_work' may still use the _net_ namespace
if we counted the sockets in use in sock_release().
2. socket and sock come in pairs. More importantly, sock holds the _net_
namespace. We count the sockets in use in sock to avoid holding the
_net_ namespace again in socket. It's an easier way to maintain the code.
Signed-off-by: Martin Zhang <zhangjunweimartin@didichuxing.com>
Signed-off-by: Tonghao Zhang <zhangtonghao@didichuxing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tonghao Zhang [Thu, 14 Dec 2017 13:51:57 +0000 (05:51 -0800)]
sock: Change the netns_core member name.
Changing the member name makes the code more readable.
This change is used by the next patch.
Signed-off-by: Martin Zhang <zhangjunweimartin@didichuxing.com>
Signed-off-by: Tonghao Zhang <zhangtonghao@didichuxing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bjorn Helgaas [Fri, 15 Dec 2017 23:01:50 +0000 (17:01 -0600)]
cxgb4: Simplify PCIe Completion Timeout setting
Simplify PCIe Completion Timeout setting by using the
pcie_capability_clear_and_set_word() interface. No functional change
intended.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
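A hedged sketch of the helper in use (the timeout range value is illustrative, not taken from cxgb4):

  #include <linux/pci.h>

  static void example_set_completion_timeout(struct pci_dev *pdev)
  {
          /* Replace an open-coded read-modify-write of Device Control 2
           * with the generic helper: clear the completion-timeout field
           * and set an illustrative range value (0xd). */
          pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL2,
                                             PCI_EXP_DEVCTL2_COMP_TIMEOUT,
                                             0xd);
  }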
David S. Miller [Mon, 18 Dec 2017 20:11:26 +0000 (15:11 -0500)]
Merge branch 'erspan-a-couple-fixes'
William Tu says:
====================
net: erspan: a couple fixes
Haishuang Yan reports a couple of issues (wrong return value,
pskb_may_pull) on erspan V1. Since erspan V2 is in net-next,
this series fixes similar issues on v2.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
William Tu [Fri, 15 Dec 2017 22:27:44 +0000 (14:27 -0800)]
net: erspan: reload pointer after pskb_may_pull
pskb_may_pull() can change skb->data, so we need to re-load pkt_md
and ershdr at the right place.
Fixes: 94d7d8f29287 ("ip6_gre: add erspan v2 support")
Fixes: f551c91de262 ("net: erspan: introduce erspan v2 for ip_gre")
Signed-off-by: William Tu <u9012063@gmail.com>
Cc: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
William Tu [Fri, 15 Dec 2017 22:27:43 +0000 (14:27 -0800)]
net: erspan: fix wrong return value
If pskb_may_pull() fails, return PACKET_REJECT
instead of -ENOMEM.
Fixes: 94d7d8f29287 ("ip6_gre: add erspan v2 support")
Fixes: f551c91de262 ("net: erspan: introduce erspan v2 for ip_gre")
Signed-off-by: William Tu <u9012063@gmail.com>
Cc: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Acked-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Dec 2017 19:57:49 +0000 (14:57 -0500)]
Merge branch 'sfp-phylink-fixes'
Russell King says:
====================
More SFP/phylink fixes
This series fixes a few more bits with sfp/phylink, particularly
confusion with the right way to test for the RTNL mutex being
held, a change in 2016 to the mdiobus_scan() behaviour that wasn't
noticed, and a fix for reading module EEPROMs.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Dec 2017 16:09:47 +0000 (16:09 +0000)]
phylink: fix locking asserts
Use ASSERT_RTNL() rather than WARN_ON(!lockdep_rtnl_is_held()) which
stops working when lockdep fires, and we end up with lots of warnings.
Fixes: 9525ae83959b ("phylink: add phylink infrastructure")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Dec 2017 16:09:41 +0000 (16:09 +0000)]
sfp: fix EEPROM reading in the case of non-SFF8472 SFPs
The EEPROM reading was trying to read from the second EEPROM address
if we requested the last byte from the SFF8079 EEPROM, which caused a
failure when the second EEPROM is not present. Discovered with a
S-RJ01 SFP module. Fix this.
Fixes: 73970055450e ("sfp: add SFP module support")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Fri, 15 Dec 2017 16:09:36 +0000 (16:09 +0000)]
sfp: fix non-detection of PHY
The detection of a PHY changed in commit e98a3aabf85f ("mdio_bus: don't
return NULL from mdiobus_scan()") which now causes sfp to print an
error message. Update for this change.
Fixes: 73970055450e ("sfp: add SFP module support")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Samuel Mendoza-Jonas [Fri, 15 Dec 2017 05:16:40 +0000 (16:16 +1100)]
net/ncsi: Don't take any action on HNCDSC AEN
The current HNCDSC handler takes the status flag from the AEN packet and
will update or change the current channel based on this flag and the
current channel status.
However the flag from the HNCDSC packet merely represents the host link
state. While the state of the host interface is potentially interesting
information it should not affect the state of the NCSI link. Indeed the
NCSI specification makes no mention of any recommended action related to
the host network controller driver state.
Update the HNCDSC handler to record the host network driver status but
take no other action.
Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Dec 2017 18:24:57 +0000 (13:24 -0500)]
Merge branch 'phy-meson-gxl-clean-up-and-improvements'
Jerome Brunet says:
====================
net: phy: meson-gxl: clean-up and improvements
This patchset adds defines for the control registers and helpers to access
the banked registers. The goal being to make it easier to understand what
the driver actually does.
Then the CONFIG_A6 setting is removed since this statement was without effect.
Finally, interrupt support is added, speeding things up a little.
This series has been tested on the libretech-cc and khadas VIM
Changes since v2 [0]:
Drop LPA corruption fix which has been merged through net. Apart from this,
series remains the same.
[0]: https://lkml.kernel.org/r/20171207142715.32578-1-jbrunet@baylibre.com
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerome Brunet [Mon, 18 Dec 2017 09:44:46 +0000 (10:44 +0100)]
net: phy: meson-gxl: join the authors
Following previous changes, join the other authors of this driver and
take the blame with them
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerome Brunet [Mon, 18 Dec 2017 09:44:45 +0000 (10:44 +0100)]
net: phy: meson-gxl: add interrupt support
Enable interrupt support in meson-gxl PHY driver
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerome Brunet [Mon, 18 Dec 2017 09:44:44 +0000 (10:44 +0100)]
net: phy: meson-gxl: leave CONFIG_A6 untouched
The PHY performs just as well when left in its default configuration and
it makes sense because this poke gets reset just after init.
According to the documentation, all registers in the Analog/DSP bank are
reset when there is a mode switch from 10BT to 100BT. The bank is also
reset on power down and soft reset, so we will never see the value which
may have been set by the bootloader.
In the end, we have used the default configuration so far and there is no
reason to change now. Remove CONFIG_A6 poke to make this clear.
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerome Brunet [Mon, 18 Dec 2017 09:44:43 +0000 (10:44 +0100)]
net: phy: meson-gxl: use genphy_config_init
Use the generic init function to populate some of the phydev
structure fields
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerome Brunet [Mon, 18 Dec 2017 09:44:42 +0000 (10:44 +0100)]
net: phy: meson-gxl: add read and write helpers for banked registers
Add read and write helpers to manipulate the banked registers on this PHY.
This helps clarify the settings applied to these registers and what the
driver actually does.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
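A minimal sketch of the shape such helpers take (the bank-select register number and access sequence here are hypothetical, not the actual meson-gxl protocol):

  #include <linux/phy.h>

  #define EXAMPLE_BANK_SEL_REG    0x14    /* hypothetical bank-select register */

  static int example_read_banked(struct phy_device *phydev, u16 bank, u32 reg)
  {
          int ret;

          /* select the register bank, then read within it */
          ret = phy_write(phydev, EXAMPLE_BANK_SEL_REG, bank);
          if (ret)
                  return ret;

          return phy_read(phydev, reg);
  }

  static int example_write_banked(struct phy_device *phydev, u16 bank,
                                  u32 reg, u16 val)
  {
          int ret;

          ret = phy_write(phydev, EXAMPLE_BANK_SEL_REG, bank);
          if (ret)
                  return ret;

          return phy_write(phydev, reg, val);
  }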
Jerome Brunet [Mon, 18 Dec 2017 09:44:41 +0000 (10:44 +0100)]
net: phy: meson-gxl: define control registers
Define registers and bits in the meson-gxl PHY driver to make it a bit
more human friendly. No functional change.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jerome Brunet [Mon, 18 Dec 2017 09:44:40 +0000 (10:44 +0100)]
net: phy: meson-gxl: check phy_write return value
Always check phy_write() return values. Better to be safe than sorry.
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Dec 2017 18:07:50 +0000 (13:07 -0500)]
Merge branch 'sfc-Medford2'
Edward Cree says:
====================
sfc: Initial X2000-series (Medford2) support
Basic PCI-level changes to support X2000-series NICs.
Also fix unexpected-PTP-event log messages, since the timestamp format has
been changed in these NICs and that causes us to fail to probe PTP (but we
still get the PPS events).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Bert Kenward [Mon, 18 Dec 2017 16:57:41 +0000 (16:57 +0000)]
sfc: populate the timer reload field
The timer mode register now has a separate field for the reload value.
Since we always use this timer with the reload (for interrupt moderation)
we set this to the same as the initial value.
Previous hardware ignores this field, so we can safely set these bits
on all hardware that uses this register.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bert Kenward [Mon, 18 Dec 2017 16:57:18 +0000 (16:57 +0000)]
sfc: update EF10 register definitions
The RX_L4_CLASS field has shrunk from 3 bits to 2 bits. The upper
bit was never used in previous hardware, so we can use the new
definition throughout.
The TSO OUTER_IPID field was previously spelt differently from the
external definitions.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 18 Dec 2017 16:56:58 +0000 (16:56 +0000)]
sfc: improve PTP error reporting
Log a message if PTP probing fails; if we then, unexpectedly, get PTP
events, only log a message for the first one on each device.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 18 Dec 2017 16:56:34 +0000 (16:56 +0000)]
sfc: add Medford2 (SFC9250) PCI Device IDs
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 18 Dec 2017 16:56:19 +0000 (16:56 +0000)]
sfc: support VI strides other than 8k
Medford2 can also have 16k or 64k VI stride. This is reported by MCDI in
GET_CAPABILITIES, which fortunately is called before the driver does
anything sensitive to the VI stride (such as accessing or even allocating
VIs past the zeroth).
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Mon, 18 Dec 2017 16:55:50 +0000 (16:55 +0000)]
sfc: make mem_bar a function rather than a constant
Support using BAR 0 on SFC9250, even though the driver doesn't bind to such
devices yet.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Dec 2017 15:51:06 +0000 (10:51 -0500)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2017-12-18
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Allow arbitrary function calls from one BPF function to another BPF function.
As of today when writing BPF programs, __always_inline had to be used in
the BPF C programs for all functions, unnecessarily causing LLVM to inflate
code size. Handle this more naturally with support for BPF to BPF calls
such that this __always_inline restriction can be overcome. As a result,
it allows for better optimized code and finally enables the introduction of core
BPF libraries in the future that can be reused across different projects.
x86 and arm64 JIT support was added as well, from Alexei.
2) Add infrastructure for tagging functions as error injectable and allow for
BPF to return arbitrary error values when BPF is attached via kprobes on
those. This way of injecting errors generically eases testing and debugging
without having to recompile or restart the kernel. Tags for opting-in for
this facility are added with BPF_ALLOW_ERROR_INJECTION(), from Josef.
3) For BPF offload via nfp JIT, add support for bpf_xdp_adjust_head() helper
call for XDP programs. First part of this work adds handling of BPF
capabilities included in the firmware, and the later patches add support
to the nfp verifier part and JIT as well as some small optimizations,
from Jakub.
4) The bpftool now also gets support for basic cgroup BPF operations such
as attaching, detaching and listing current BPF programs. As a requirement
for the attach part, bpftool can now also load object files through
'bpftool prog load'. This reuses libbpf which we have in the kernel tree
as well. bpftool-cgroup man page is added along with it, from Roman.
5) Back then commit e87c6bc3852b ("bpf: permit multiple bpf attachments for
a single perf event") added support for attaching multiple BPF programs
to a single perf event. Given they are configured through perf's ioctl()
interface, the interface has been extended with a PERF_EVENT_IOC_QUERY_BPF
command in this work in order to return an array of one or multiple BPF
prog ids that are currently attached, from Yonghong.
6) Various minor fixes and cleanups to the bpftool's Makefile as well
as a new 'uninstall' and 'doc-uninstall' target for removing bpftool
itself or prior installed documentation related to it, from Quentin.
7) Add CONFIG_CGROUP_BPF=y to the BPF kernel selftest config file which is
required for the test_dev_cgroup test case to run, from Naresh.
8) Fix reporting of XDP prog_flags for nfp driver, from Jakub.
9) Fix libbpf's exit code from the Makefile when libelf was not found in
the system, also from Jakub.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Josef Bacik [Sat, 16 Dec 2017 02:42:57 +0000 (21:42 -0500)]
trace: reenable preemption if we modify the ip
Things got moved around between the original bpf_override_return patches
and the final version, and now the ftrace kprobe dispatcher assumes if
you modified the ip that you also enabled preemption. Add a comment about
this and enable preemption; this fixes the lockdep splat that happened
when using this feature.
Fixes: 9802d86585db ("bpf: add a bpf_override_function helper")
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Sat, 16 Dec 2017 00:29:13 +0000 (16:29 -0800)]
nfp: set flags in the correct member of netdev_bpf
netdev_bpf.flags is the input member for installing the program.
netdev_bpf.prog_flags is the output member for querying. Set
the correct one on query.
Fixes: 92f0292b35a0 ("net: xdp: report flags program was installed with on query")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Sat, 16 Dec 2017 00:19:30 +0000 (16:19 -0800)]
libbpf: fix Makefile exit code if libelf not found
/bin/sh's exit does not recognize -1 as a number, leading to
the following error message:
/bin/sh: 1: exit: Illegal number: -1
Use 1 as the exit code.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann [Sun, 17 Dec 2017 19:34:37 +0000 (20:34 +0100)]
Merge branch 'bpf-to-bpf-function-calls'
Alexei Starovoitov says:
====================
First of all huge thank you to Daniel, John, Jakub, Edward and others who
reviewed multiple iterations of this patch set over the last many months
and to Dave and others who gave critical feedback during netconf/netdev.
The patch is solid enough and we thought through numerous corner cases,
but it's not the end. More followups with code reorg and features to follow.
TLDR: Allow arbitrary function calls from one bpf function to another bpf function.
Since the beginning of bpf, all bpf programs were represented as a single function
and program authors were forced to use always_inline for all functions
in their C code. That was causing llvm to unnecessarily inflate the code size
and forcing developers to move code to header files with little code reuse.
With a bit of additional complexity, teach the verifier to recognize
arbitrary function calls from one bpf function to another, as long as
all of the functions are presented to the verifier as a single bpf program.
Extended program layout:
..
r1 = .. // arg1
r2 = .. // arg2
call pc+1 // function call pc-relative
exit
.. = r1 // access arg1
.. = r2 // access arg2
..
call pc+20 // second level of function call
...
It allows for better optimized code and finally makes it possible to introduce
the core bpf libraries that can be reused in different projects,
since programs are no longer limited to a single elf file.
With function calls bpf can be compiled into multiple .o files.
This patch is the first step. It detects programs that contain
multiple functions and checks that calls between them are valid.
It splits the sequence of bpf instructions (one program) into a set
of bpf functions that call each other. Calls to only known
functions are allowed. Since all functions are presented to
the verifier at once conceptually it is 'static linking'.
Future plans:
- introduce BPF_PROG_TYPE_LIBRARY and allow a set of bpf functions
to be loaded into the kernel that can be later linked to other
programs with concrete program types. Aka 'dynamic linking'.
- introduce function pointer type and indirect calls to allow
bpf functions call other dynamically loaded bpf functions while
the caller bpf function is already executing. Aka 'runtime linking'.
This will be more generic and more flexible alternative
to bpf_tail_calls.
FAQ:
Q: Do the interpreter and JIT changes mean that a new instruction is introduced?
A: No. The call instruction technically stays the same. Now it can call
both kernel helpers and other bpf functions.
Calling convention stays the same as well.
From uapi point of view the call insn got new 'relocation' BPF_PSEUDO_CALL
similar to BPF_PSEUDO_MAP_FD 'relocation' of bpf_ldimm64 insn.
Q: What had to change on LLVM side?
A: Trivial LLVM patch to allow calls was applied to upcoming 6.0 release:
https://reviews.llvm.org/rL318614
with few bugfixes as well.
Make sure to build the latest llvm to have bpf_call support.
More details in the patches.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
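To make the programming-model change concrete, a hedged sketch of BPF C code that now loads without __always_inline (illustrative program; requires an LLVM with the bpf_call support mentioned above):

  #include <linux/bpf.h>

  /* A real (non-inlined) function call: with bpf-to-bpf call support the
   * compiler emits a BPF_PSEUDO_CALL that the verifier described above
   * can now accept. */
  static __attribute__((noinline)) int sum(int a, int b)
  {
          return a + b;
  }

  __attribute__((section("xdp"), used))
  int xdp_example(struct xdp_md *ctx)
  {
          return sum(1, 2) == 3 ? XDP_PASS : XDP_DROP;
  }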
Daniel Borkmann [Fri, 15 Dec 2017 01:55:17 +0000 (17:55 -0800)]
selftests/bpf: additional bpf_call tests
Add some additional checks for a few more corner cases.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:16 +0000 (17:55 -0800)]
bpf: arm64: add JIT support for multi-function programs
Similar to x64, add support for bpf-to-bpf calls.
When a program has calls to in-kernel helpers, the target call offset
is known at JIT time and the arm64 architecture needs 2 passes.
With bpf-to-bpf calls the dynamically allocated function start
is unknown until all functions of the program are JITed.
Therefore (just like x64) arm64 JIT needs one extra pass over
the program to emit correct call offsets.
Implementation detail:
Avoid being too clever in 64-bit immediate moves and
always use 4 instructions (instead of 3-4 depending on the address)
to make sure only one extra pass is needed.
If some future optimization would make it worth while to optimize
'call 64-bit imm' further, the JIT would need to do 4 passes
over the program instead of 3 as in this patch.
For typical bpf program address the mov needs 3 or 4 insns,
so unconditional 4 insns to save extra pass is a worthy trade off
at this state of JIT.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:15 +0000 (17:55 -0800)]
bpf: x64: add JIT support for multi-function programs
Typical JIT does several passes over bpf instructions to
compute total size and relative offsets of jumps and calls.
With multiple bpf functions calling each other, all relative calls
will initially have invalid offsets, therefore we need an additional
last pass over the program to emit calls with correct offsets.
For example in case of three bpf functions:
main:
call foo
call bpf_map_lookup
exit
foo:
call bar
exit
bar:
exit
We will call bpf_int_jit_compile() independently for main(), foo() and bar()
x64 JIT typically does 4-5 passes to converge.
After these initial passes the image for these 3 functions
will be good except call targets, since start addresses of
foo() and bar() are unknown when we were JITing main()
(note that call bpf_map_lookup will be resolved properly
during initial passes).
Once start addresses of 3 functions are known we patch
call_insn->imm to point to right functions and call
bpf_int_jit_compile() again which needs only one pass.
Additional safety checks are done to make sure this
last pass doesn't produce an image that is larger or smaller
than the one from the previous pass.
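Roughly, that patch-and-re-JIT step looks like the following sketch (simplified;
callee_of() stands in for however the verifier recovers the callee index, and
the real fixup code differs in detail):

#include <linux/filter.h>  /* struct bpf_prog, bpf_int_jit_compile(), __bpf_call_base */

static void fixup_calls_and_rejit(struct bpf_prog **func, int nr_funcs,
                                  int (*callee_of)(struct bpf_prog *, int))
{
    int i, j;

    for (i = 0; i < nr_funcs; i++) {
        struct bpf_insn *insn = func[i]->insnsi;

        for (j = 0; j < func[i]->len; j++, insn++) {
            if (insn->code != (BPF_JMP | BPF_CALL) ||
                insn->src_reg != BPF_PSEUDO_CALL)
                continue;
            /* point the call at the callee's JITed entry, expressed
             * relative to __bpf_call_base like any other BPF_CALL */
            insn->imm = (long)func[callee_of(func[i], j)]->bpf_func -
                        (long)__bpf_call_base;
        }
        /* one more JIT pass; the image must not grow or shrink */
        func[i] = bpf_int_jit_compile(func[i]);
    }
}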
When constant blinding is on it is applied to all functions
in the first pass, since doing it again in the last
pass could change the size of the JITed code.
Tested on x64 and arm64 hw with JIT on/off and blinding on/off.
x64 JITs bpf-to-bpf calls correctly while arm64 falls back to the interpreter.
All other JITs that support normal BPF_CALL will behave the same way,
since a bpf-to-bpf call is equivalent to a bpf-to-kernel call from
the JIT's point of view.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:14 +0000 (17:55 -0800)]
bpf: fix net.core.bpf_jit_enable race
The global bpf_jit_enable variable is tested multiple times in the JITs,
blinding and verifier core. A malicious root user can try to toggle
it while programs are being loaded. This race condition was accounted
for and there should be no issues, but it's safer to avoid
it altogether.
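The general shape of the remedy is to snapshot the knob once per program load
instead of re-reading the global (a generic sketch of the pattern, not the
kernel's exact code; all names here are made up):

#include <stdbool.h>

static volatile int jit_enable_knob = 1;  /* toggled elsewhere, e.g. via sysctl */

struct loaded_prog {
    bool jit_requested;   /* snapshot taken exactly once at load time */
};

/* Read the knob once; every later decision (blinding, JIT, verifier) uses
 * the cached value, so a concurrent flip cannot leave the load half-JITed. */
static void prog_snapshot_jit_knob(struct loaded_prog *p)
{
    p->jit_requested = !!jit_enable_knob;
}

static bool prog_wants_jit(const struct loaded_prog *p)
{
    return p->jit_requested;  /* never re-read jit_enable_knob here */
}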
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:13 +0000 (17:55 -0800)]
bpf: add support for bpf_call to interpreter
Though bpf_call is still the same call instruction and the
calling convention for 'bpf to bpf' and 'bpf to helper' is the same,
the interpreter has to operate on 'struct bpf_insn *'.
To distinguish these two cases, add a kernel-internal opcode and
mark call insns with it (a sketch follows below).
This opcode is seen by the interpreter only. JITs will never see it.
Also add a tiny bit of debug code to aid interpreter debugging.
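A minimal sketch of what 'mark call insns with it' means (the opcode name and
value here are illustrative placeholders, not the kernel's actual internal
define):

#include <linux/bpf.h>

#define BPF_SKETCH_CALL_ARGS    0xe0    /* kernel-internal only, never uapi */

static void mark_bpf_to_bpf_calls(struct bpf_insn *insns, int len)
{
    int i;

    for (i = 0; i < len; i++) {
        struct bpf_insn *insn = &insns[i];

        if (insn->code == (BPF_JMP | BPF_CALL) &&
            insn->src_reg == BPF_PSEUDO_CALL)
            /* only done when the program will run in the interpreter,
             * so JITs keep seeing a plain BPF_CALL */
            insn->code = BPF_JMP | BPF_SKETCH_CALL_ARGS;
    }
}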
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:12 +0000 (17:55 -0800)]
selftests/bpf: add xdp noinline test
Add a large, semi-artificial XDP test with 18 functions to stress-test
the bpf call verification logic.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:11 +0000 (17:55 -0800)]
selftests/bpf: add bpf_call test
Strip always_inline from test_l4lb.c and compile it with -fno-inline
to let the verifier go through 11 functions with various function arguments
and return values; a minimal illustration of the effect follows below.
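The illustration (this is not test_l4lb.c; the function and the length check
are made up):

#include <linux/bpf.h>

/* Without always_inline (and built with -fno-inline), llvm keeps parse_eth()
 * as a separate function in .text and emits a bpf-to-bpf call to it, so the
 * verifier has to walk it as a real callee with its own args and return value. */
static __attribute__((noinline)) int parse_eth(void *data, void *data_end)
{
    return (char *)data + 14 <= (char *)data_end;
}

__attribute__((section("xdp"), used))
int balancer_ingress(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    return parse_eth(data, data_end) ? XDP_PASS : XDP_DROP;
}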
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:10 +0000 (17:55 -0800)]
libbpf: add support for bpf_call
- recognize the relocations emitted by llvm
- since all regular functions are kept in the .text section and llvm
takes care of pc-relative offsets in bpf_call instructions,
simply copy all of .text to the relevant program section while adjusting
bpf_call instructions in the program section to point to the newly copied
body of instructions from .text (see the sketch below)
- do so for all programs in the elf file
- set all program types to the one passed to bpf_prog_load()
Note: for elf files with multiple programs that use different
functions in the .text section we will need to do 'linker'-style logic.
This work is still TBD.
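A sketch of the offset adjustment this amounts to (simplified, not libbpf's
actual function; it assumes the usual pc-relative call semantics where the
callee starts at call_idx + imm + 1):

#include <linux/bpf.h>

/* .text has just been appended at index prog_len inside the program's own
 * insn array; retarget one bpf-to-bpf call so it lands on that copy. */
static void adjust_call(struct bpf_insn *insn, int call_idx,
                        int text_target_idx, int prog_len)
{
    if (insn->code == (BPF_JMP | BPF_CALL) &&
        insn->src_reg == BPF_PSEUDO_CALL)
        insn->imm = prog_len + text_target_idx - call_idx - 1;
}

Calls that live inside the copied .text body itself need no such adjustment:
caller and callee move together, so their pc-relative offsets stay valid,
which is what "llvm takes care of pc-relative offsets" refers to.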
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:09 +0000 (17:55 -0800)]
selftests/bpf: add tests for stack_zero tracking
adjust two tests, since verifier got smarter
and add new one to test stack_zero logic
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:08 +0000 (17:55 -0800)]
bpf: teach verifier to recognize zero initialized stack
Programs with function calls often pass various
pointers via the stack. When all calls are inlined, llvm
flattens stack accesses and optimizes away extra branches.
When functions are not inlined, it becomes the job of
the verifier to recognize zero-initialized stack in order to avoid
exploring paths that the program will not take.
The following program would fail otherwise:
ptr = &buffer_on_stack;
*ptr = 0;
...
func_call(.., ptr, ...) {
if (..)
*ptr = bpf_map_lookup();
}
...
if (*ptr != 0) {
// Access (*ptr)->field is valid.
// Without stack_zero tracking such (*ptr)->field access
// will be rejected
}
Since stack slots are no longer uniformly invalid | spill | misc,
add liveness marking to all slots, but do it in 8-byte chunks.
So if nothing was read or written in the [fp-16, fp-9] range,
it will be marked as LIVE_NONE.
If any byte in that range was read, it will be marked LIVE_READ
and the stacksafe() check will perform byte-by-byte verification.
If all bytes in the range were written, the slot will be
marked as LIVE_WRITTEN (a sketch of the three markings follows the table).
This significantly speeds up state equality comparison
and reduces the total number of states processed:
                      before    after
bpf_lb-DLB_L3.o         2051     2003
bpf_lb-DLB_L4.o         3287     3164
bpf_lb-DUNKNOWN.o       1080     1080
bpf_lxc-DDROP_ALL.o    24980    12361
bpf_lxc-DUNKNOWN.o     34308    16605
bpf_netdev.o           15404    10962
bpf_overlay.o           7191     6679
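The three per-slot markings described above, as a sketch (names follow the
text; the verifier's actual enum and helpers differ in detail):

/* one marking per 8-byte stack slot, e.g. the [fp-16, fp-9] range */
enum slot_liveness {
    LIVE_NONE,      /* never read or written: skipped during state compare */
    LIVE_READ,      /* some byte was read: stacksafe() compares byte-by-byte */
    LIVE_WRITTEN,   /* all 8 bytes written: no stale data from the parent state */
};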
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:07 +0000 (17:55 -0800)]
selftests/bpf: add verifier tests for bpf_call
Add an extensive set of tests for the bpf_call verification logic:
calls: basic sanity
calls: using r0 returned by callee
calls: callee is using r1
calls: callee using args1
calls: callee using wrong args2
calls: callee using two args
calls: callee changing pkt pointers
calls: two calls with args
calls: two calls with bad jump
calls: recursive call. test1
calls: recursive call. test2
calls: unreachable code
calls: invalid call
calls: jumping across function bodies. test1
calls: jumping across function bodies. test2
calls: call without exit
calls: call into middle of ld_imm64
calls: call into middle of other call
calls: two calls with bad fallthrough
calls: two calls with stack read
calls: two calls with stack write
calls: spill into caller stack frame
calls: two calls with stack write and void return
calls: ambiguous return value
calls: two calls that return map_value
calls: two calls that return map_value with bool condition
calls: two calls that return map_value with incorrect bool check
calls: two calls that receive map_value via arg=ptr_stack_of_caller. test1
calls: two calls that receive map_value via arg=ptr_stack_of_caller. test2
calls: two jumps that receive map_value via arg=ptr_stack_of_jumper. test3
calls: two calls that receive map_value_ptr_or_null via arg. test1
calls: two calls that receive map_value_ptr_or_null via arg. test2
calls: pkt_ptr spill into caller stack
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 15 Dec 2017 01:55:06 +0000 (17:55 -0800)]
bpf: introduce function calls (verification)
Allow arbitrary function calls from one bpf function to another bpf function.
To recognize such a set of bpf functions the verifier does the following:
1. runs control flow analysis to detect function boundaries
2. proceeds with verification of all functions starting from main(root) function
It recognizes that the stack of the caller can be accessed by the callee
(if the caller passed a pointer to its stack to the callee) and the callee
can store map_value and other pointers into the stack of the caller.
3. keeps track of the stack_depth of each function to make sure that the total
stack depth is still less than 512 bytes
4. disallows storing pointers to the callee stack into the caller stack,
since they would become invalid as soon as the callee returns
5. to reuse all of the existing state-pruning logic, each function call
is considered an independent call from the verifier's point of view.
The verifier effectively pretends to inline every function call it sees.
It stores the callsite instruction index as part of the state to make sure
that two calls to the same callee from two different places in the caller
are treated as different from the state-pruning point of view
6. more safety checks are added to liveness analysis
Implementation details:
. struct bpf_verifier_state now consists of all stack frames that
led to this function (see the sketch after this list)
. struct bpf_func_state represents one stack frame. It consists of the
registers in the given frame and its stack
. propagate_liveness() logic had a premature optimization where
mark_reg_read() and mark_stack_slot_read() were manually inlined,
with a loop iterating over the parents for each register or stack slot.
Undo this optimization to reuse the more complex mark_*_read() logic
. skip_callee() logic is not necessary from a safety point of view,
but without it the mark_*_read() markings become too conservative,
since after returning from the function call a read of r6-r9
would incorrectly propagate the read marks into the callee, causing
inefficient pruning later
. mark_*_read() logic is now aware of control flow, which makes it
more complex. In the future the plan is to rewrite liveness
to be hierarchical, so that liveness can be computed within a
basic block only and control flow becomes responsible for
propagating liveness information along the cfg and between calls
. tail_calls and ld_abs insns are not allowed in programs with
bpf-to-bpf calls
. returning stack pointers to the caller or storing them into the stack
frame of the caller is not allowed
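A sketch of the frame layout the first two points above describe (sketch names
and fields are simplified; the kernel's real structs carry more bookkeeping):

#include <linux/bpf_verifier.h>  /* struct bpf_reg_state, struct bpf_stack_state, MAX_BPF_REG */

#define SKETCH_MAX_CALL_FRAMES  8   /* assumed nesting limit for the sketch */

struct func_frame {
    struct bpf_reg_state regs[MAX_BPF_REG]; /* r0-r10 as seen in this frame */
    int callsite;    /* insn index of the call that entered this frame */
    int frameno;     /* 0 == main(), 1 == first callee, ... */
    struct bpf_stack_state *stack;   /* this frame's own stack slots */
};

struct verifier_state_sketch {
    /* every frame that led to the current insn; per-frame stack depths
     * must sum to less than 512 bytes (point 3 above) */
    struct func_frame *frame[SKETCH_MAX_CALL_FRAMES];
    int curframe;    /* frame currently being verified */
};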
Testing:
. no difference in cilium processed_insn numbers
. a large number of tests follows in the next patches
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>