David S. Miller [Sun, 1 Apr 2018 03:37:33 +0000 (23:37 -0400)]
Merge branch 'chelsio-inline-tls'
Atul Gupta says:
====================
Chelsio Inline TLS
Series for Chelsio Inline TLS driver (chtls)
Use the tls ULP infrastructure to register chtls as an Inline TLS driver.
Chtls uses TCP sockets to Tx/Rx TLS records.
The TCP sk_proto APIs are enhanced to offload TLS records.
T6 adapter provides the following features:
-TLS record offload, TLS header, encrypt, digest and transmit
-TLS record receive and decrypt
-TLS key store
-TCP/IP engine
-TLS engine
-GCM crypto engine [support CBC also]
TLS provides security at the transport layer. It uses TCP to provide
reliable end-to-end transport of application data.
It relies on TCP for any retransmission.
A TLS session comprises three parts:
a. TCP/IP connection
b. TLS handshake
c. Record layer processing
The TLS handshake state machine is executed on the host (refer to a standard
implementation, e.g. OpenSSL). setsockopt [SOL_TCP, TCP_ULP]
initializes the TCP proto-ops for Chelsio inline TLS support:
setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));
The Tx and Rx keys are decided during the handshake and programmed on
the chip after ChangeCipherSpec (CCS) is exchanged:
struct tls12_crypto_info_aes_gcm_128 crypto_info;
setsockopt(sock, SOL_TLS, TLS_TX, &crypto_info, sizeof(crypto_info));
Finished is the first message encrypted/decrypted inline on Tx/Rx.
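For illustration, a minimal user-space sketch of the two setsockopt() calls
above might look as follows (the helper name and the way the key material is
derived from the handshake are assumptions; fallback defines are provided in
case the toolchain headers lack TCP_ULP/SOL_TLS/SOL_TCP):

#include <linux/tls.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_TCP
#define SOL_TCP 6
#endif
#ifndef TCP_ULP
#define TCP_ULP 31
#endif
#ifndef SOL_TLS
#define SOL_TLS 282
#endif

/* Hypothetical helper: 'sock' is a connected TCP socket; key/iv/salt/rec_seq
 * come from the completed TLS handshake (e.g. via OpenSSL). */
static int enable_inline_tls_tx(int sock, const unsigned char *key,
				const unsigned char *iv,
				const unsigned char *salt,
				const unsigned char *rec_seq)
{
	struct tls12_crypto_info_aes_gcm_128 crypto_info;

	/* Attach the "tls" ULP; on a chtls-capable interface this switches
	 * the socket to the inline TLS proto-ops. */
	if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")))
		return -1;

	memset(&crypto_info, 0, sizeof(crypto_info));
	crypto_info.info.version = TLS_1_2_VERSION;
	crypto_info.info.cipher_type = TLS_CIPHER_AES_GCM_128;
	memcpy(crypto_info.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
	memcpy(crypto_info.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);
	memcpy(crypto_info.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
	memcpy(crypto_info.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);

	/* Program the Tx key on the chip; TLS_RX works the same way. */
	return setsockopt(sock, SOL_TLS, TLS_TX, &crypto_info, sizeof(crypto_info));
}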
On the Tx path the TLS engine receives plain text from OpenSSL, inserts the
IV, fetches the Tx key, creates cipher-text records and generates the MAC.
The TLS header is added to the cipher text and forwarded to the TCP/IP engine
for transport layer processing and transmission on the wire.
TX PATH:
Apps--openssl--chtls---TLS engine---encrypt/auth---TCP/IP engine---wire
On the Rx side, received data is PDU aligned at record boundaries.
TLS processes only complete records. If the Rx key has been programmed on
CCS receive, data is decrypted and the plain text is posted to the host.
RX PATH:
Wire--cipher-text--TCP/IP engine [PDU align]---TLS engine---
decrypt/auth---plain-text--chtls--openssl--application
v15: indent fix in mark_urg
-removed unwanted checks in sendmsg, sendpage, recvmsg,
close, disconnect, shutdown, destroy sock [Sabrina]
- removed unused chtls_free_kmap [chtls.h]
- rebase to top of net-next
v14: -Reverse christmas tree style for variable declarations for
various functions in chtls_hw.c, chtls_io.c [Stefano Brivio]
- replaced break with return in tcp_state_to_flowc_state
[Stefano Brivio]
- renamed tlstx_seq_number to tlstx_incr_seqnum [Stefano Brivio]
- use bool for corked, should_push and send_should_push
[Stefano Brivio]
- removed "Reviewed-by" tag for Stefano, Sabrina, Dave Watson
v13: handle clean ctx free for HW_RECORD in tls_sk_proto_close
-removed SOCK_INLINE [chtls.h], using csk_conn_inline instead
in send_abort_rpl,chtls_send_abort_rpl,chtls_sendmsg,chtls_sendpage
-removed sk_no_receive [chtls_io.c] replaced with sk_shutdown &
RCV_SHUTDOWN in chtls_pt_recvmsg, peekmsg and chtls_recvmsg
-cleaned chtls_expansion_size [Stefano Brivio]
- u8 conf:3 in tls_sw_context to add TLS_HW_RECORD
-removed is_tls_skb, using tls_skb_inline [Stefano Brivio]
-reverse christmas tree formatting in chtls_io.c, chtls_cm.c
[Stefano Brivio]
-fixed build warning reported by kbuild robot
-retained ctx conf enum in chtls_main vs earlier versions, tls_prots
not used in chtls.
-cleanup [removed syn_sent, base_prot, added synq] [Michael Werner]
- passing struct fw_wr_hdr * to ofldtxq_stop [Casey]
- rebased on top of the current net-next
v12: patch against net-next
-fixed build error [reported by Julia]
-replace set_queue with skb_set_queue_mapping [Sabrina]
-copyright year correction [chtls]
v11: formatting and cleanup, few function rename and error
handling [Stefano Brivio]
- ctx freed later for TLS_HW_RECORD
- split tx and rx in different patch
v10: fixed following based on the review comments of Sabrina Dubroca
-docs header added for struct tls_device [tls.h]
-changed TLS_FULL_HW to TLS_HW_RECORD
-similarly, using tls-hw-record instead of tls-inline for
ethtool feature config
-added more description to patch sets
-replaced kmalloc/vmalloc/kfree with kvzalloc/kvfree
-reordered the patch sequence
-formatted entire patch for func return values
v9: corrected __u8 and similar usage
-create_ctx to alloc tls_context
-tls_hw_prot before sk !establish check
v8: tls_main.c cleanup comment [Dave Watson]
v7: func name change, use sk->sk_prot where required
v6: modify prot only for FULL_HW
-corrected commit message for patch 11
v5: set TLS_FULL_HW for registered inline tls drivers
-set TLS_FULL_HW prot for offload connection else move
to TLS_SW_TX
-Case handled for interface with same IP [Dave Miller]
-Removed Specific IP and INADDR_ANY handling [v4]
v4: removed chtls ULP type, retained tls ULP
-registered chtls with net tls
-defined struct tls_device to register the Inline drivers
-ethtool interface tls-inline to enable Inline TLS for interface
-prot update to support inline TLS
v3: fixed the kbuild test issues
-made a few functions static
-initialized few variables
v2: fixed the following based on the review comments of Stephan Mueller,
Stefano Brivio and Hannes Frederic
-Added more details in cover letter
-Fixed indentation and formatting issues
-Using aes instead of aes-generic
-memset key info after programming the key on chip
-reordered the patch sequence
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:12:03 +0000 (21:42 +0530)]
crypto: chtls - Makefile Kconfig
Add Makefile and Kconfig entries for Inline TLS as another driver dependent on cxgb4 and chcr.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:12:02 +0000 (21:42 +0530)]
crypto: chtls - Program the TLS session Key
Initialize the space reserved for storing the TLS keys, and get and
free the location where the key is stored for the TLS connection.
Program the Tx and Rx keys as received from the user in
struct tls12_crypto_info_aes_gcm_128 and understood by the hardware.
Also add the socket option TLS_RX.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:12:01 +0000 (21:42 +0530)]
crypto: chtls - Inline TLS record Rx
Handler for record receive. Plain text is copied to the user
buffer.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Michael Werner <werner@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:12:00 +0000 (21:42 +0530)]
crypto: chtls - Inline TLS record Tx
TLS handler for record transmit.
Create Inline TLS work request and post to FW.
Create Inline TLS record CPLs for the hardware.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Michael Werner <werner@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:59 +0000 (21:41 +0530)]
crypto: chtls - CPL handler definition
Exchange messages with the hardware to program the TLS session.
Add CPL handlers for messages received from the chip.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: Michael Werner <werner@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:58 +0000 (21:41 +0530)]
crypto: chtls - Register chtls with net tls
Register chtls as an Inline TLS driver; chtls is a ULD of cxgb4.
Setsockopt is used to program the (Tx/Rx) keys on the chip.
Supports AES-GCM with a 128-bit key.
Supports both Inline Rx and Tx.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Reviewed-by: Casey Leedom <leedom@chelsio.com>
Reviewed-by: Michael Werner <werner@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:57 +0000 (21:41 +0530)]
crypto: chtls - structure and macro for Inline TLS
Define the Inline TLS state and connection management info, along with
supporting macro definitions.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Reviewed-by: Michael Werner <werner@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:56 +0000 (21:41 +0530)]
crypto: chcr - Inline TLS Key Macros
Define macros for programming the TLS key context.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:55 +0000 (21:41 +0530)]
cxgb4: LLD driver changes to support TLS
Read the Inline TLS capability from firmware.
Determine the area reserved for storing the keys.
Dump the Inline TLS Tx and Rx record counts.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Reviewed-by: Michael Werner <werner@chelsio.com>
Reviewed-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:54 +0000 (21:41 +0530)]
cxgb4: Inline TLS FW Interface
Key area size in hw-config file. CPL struct for TLS request
and response. Work request for Inline TLS.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Reviewed-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:53 +0000 (21:41 +0530)]
ethtool: enable Inline TLS in HW
An ethtool option enables TLS record offload in hardware; the user
configures the feature for a netdev capable of Inline TLS.
This allows the user to define a custom sk_prot for the Inline TLS socket.
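As a rough driver-side sketch (assuming the feature bit introduced here is
named NETIF_F_HW_TLS_RECORD, matching the "tls-hw-record" ethtool string
mentioned in the cover letter), a capable netdev would advertise it like this:

#include <linux/netdevice.h>

/* Sketch only: advertise the inline TLS record offload feature so that
 * "ethtool -K <dev> tls-hw-record on" can toggle it. */
static void example_advertise_tls_hw_record(struct net_device *netdev)
{
	netdev->hw_features |= NETIF_F_HW_TLS_RECORD;
	netdev->features |= NETIF_F_HW_TLS_RECORD;
}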
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Atul Gupta [Sat, 31 Mar 2018 16:11:52 +0000 (21:41 +0530)]
tls: support for Inline tls record
Add a facility to register Inline TLS drivers with net/tls. Set up
the TLS_HW_RECORD prot to listen on an offload device.
Cases handled:
- An Inline TLS device exists: set up the prot for TLS_HW_RECORD.
- At least one Inline TLS device exists: set TLS_HW_RECORD.
- If a non-inline device establishes the connection, move to TLS_SW_TX.
Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 03:33:04 +0000 (23:33 -0400)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2018-03-31
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Add a raw BPF tracepoint API in order to have a BPF program type that
can access kernel-internal arguments of the tracepoints in their
raw form, similar to kprobe-based BPF programs. This infrastructure
also adds a new BPF_RAW_TRACEPOINT_OPEN command to the BPF syscall which
returns an anon-inode backed fd for the tracepoint object, allowing
automatic detach of the BPF program and unregistering of the
tracepoint probe on fd release (see the sketch after this list), from Alexei.
2) Add new BPF cgroup hooks at bind() and connect() entry in order to
allow BPF programs to reject, inspect or modify the user-space-passed
struct sockaddr, as well as a hook at post-bind time once the port
has been allocated. They are used in FB's container management engine
for implementing policy, replacing a fragile LD_PRELOAD wrapper
intercepting bind() and connect() calls that only works in limited
scenarios like glibc-based apps but not for other runtimes in
containerized applications, from Andrey.
3) BPF_F_INGRESS flag support has been added to sockmap programs for
their redirect helper call bringing it in line with cls_bpf based
programs. Support is added for both variants of sockmap programs,
meaning for tx ULP hooks as well as recv skb hooks, from John.
4) Various improvements on the BPF side for the nfp driver; among other
things this work adds BPF map update and delete helper call support from
the datapath, JITing of 32 and 64 bit XADD instructions as well as
offload support for the bpf_get_prandom_u32() call. An initial
implementation of an nfp packet cache has been tackled that optimizes
memory accesses (see the merge commit for further details), from Jakub
and Jiong.
5) Removal of struct bpf_verifier_env argument from the print_bpf_insn()
API has been done in order to prepare to use print_bpf_insn() soon
out of perf tool directly. This makes the print_bpf_insn() API more
generic and pushes the env into private data. bpftool is adjusted
as well with the print_bpf_insn() argument removal, from Jiri.
6) Couple of cleanups and prep work for the upcoming BTF (BPF Type
Format). The latter will reuse the current BPF verifier log as
well, thus bpf_verifier_log() is further generalized, from Martin.
7) For the bpf_getsockopt() and bpf_setsockopt() helpers, IPv4 IP_TOS read
and write support has been added, in a similar fashion to the existing
IPv6 IPV6_TCLASS socket option support, from Nikita.
8) Fixes in recent sockmap scatterlist API usage, which did not use
sg_init_table() for initialization thus triggering a BUG_ON() in
scatterlist API when CONFIG_DEBUG_SG was enabled. This adds and
uses a small helper sg_init_marker() to properly handle the affected
cases, from Prashant.
9) Let the BPF core follow IDR code convention and therefore use the
idr_preload() and idr_preload_end() helpers, which would also help
idr_alloc_cyclic() under GFP_ATOMIC to better succeed under memory
pressure, from Shaohua.
10) Last but not least, a spelling fix in an error message for the
BPF cookie UID helper under BPF sample code, from Colin.
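As referenced in 1) above, a minimal user-space sketch of the new command
could look like this (it assumes uapi headers that already carry
BPF_RAW_TRACEPOINT_OPEN; the tracepoint name and the already-loaded prog_fd
are the caller's):

#include <linux/bpf.h>
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Returns an anon-inode backed fd; closing it detaches the program and
 * unregisters the tracepoint probe. */
static int raw_tracepoint_open(const char *tp_name, int prog_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.raw_tracepoint.name = (uint64_t)(unsigned long)tp_name;
	attr.raw_tracepoint.prog_fd = prog_fd;

	return syscall(__NR_bpf, BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
}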
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 03:25:40 +0000 (23:25 -0400)]
Merge branch 'inet-frags-bring-rhashtables-to-IP-defrag'
Eric Dumazet says:
====================
inet: frags: bring rhashtables to IP defrag
IP defrag processing is one of the remaining problematic layers in Linux.
It uses static hash tables of 1024 buckets, with up to 128 items per bucket.
A work queue is supposed to garbage collect items when the host is under
memory pressure, and to do a hash rebuild, changing the seed used in hash
computations. This work queue blocks softirqs for up to 25 ms when doing a
hash rebuild, occurring every 5 seconds if the host is under fire.
Then there is the problem of sharing this hash table for all netns.
It is time to switch to rhashtables, and allocate one of them per netns
to speed up netns dismantle, since this is a critical metric these days.
Lookup is now using RCU, and 64bit hosts can now provision whatever amount
of memory needed to handle the expected workloads.
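For illustration only, the per-netns direction can be sketched with the
rhashtable API roughly as follows (the entry struct and key layout are
assumptions, not the actual netns_frags/inet_frag_queue definitions):

#include <linux/rhashtable.h>
#include <linux/types.h>

struct frag_entry {
	struct rhash_head node;
	u32 key;
};

static const struct rhashtable_params frag_rht_params = {
	.head_offset         = offsetof(struct frag_entry, node),
	.key_offset          = offsetof(struct frag_entry, key),
	.key_len             = sizeof(u32),
	.automatic_shrinking = true,
};

/* One table per netns: lookups are RCU based, and resizing happens in
 * process context instead of blocking softirqs. */
static int frag_table_init(struct rhashtable *ht)
{
	return rhashtable_init(ht, &frag_rht_params);
}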
v2: Addressed Herbert and Kirill feedbacks
(Use rhashtable_free_and_destroy(), and split the big patch into small units)
v3: Removed the extra add_frag_mem_limit(...) from inet_frag_create()
Removed the refcount_inc_not_zero() call from inet_frags_free_cb(),
as we can exploit del_timer() return value.
v4: kbuild robot feedback about one missing static (squashed)
Additional patches :
inet: frags: do not clone skb in ip_expire()
ipv6: frags: rewrite ip6_expire_frag_queue()
rhashtable: reorganize struct rhashtable layout
inet: frags: reorganize struct netns_frags
inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
ipv6: frags: get rid of ip6frag_skb_cb/FRAG6_CB
inet: frags: get rid of nf_ct_frag6_skb_cb/NFCT_FRAG6_CB
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:59:00 +0000 (12:59 -0700)]
inet: frags: get rid of nf_ct_frag6_skb_cb/NFCT_FRAG6_CB
nf_ct_frag6_queue() uses skb->cb[] to store the fragment offset,
meaning that we could use two cache lines per skb when finding
the insertion point, if for some reason inet6_skb_parm size
is increased in the future.
By using skb->ip_defrag_offset instead of skb->cb[] we pack all the fields
in a single cache line, matching what we did for IPv4.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:59 +0000 (12:58 -0700)]
ipv6: frags: get rid of ip6frag_skb_cb/FRAG6_CB
ip6_frag_queue uses skb->cb[] to store the fragment offset, meaning that
we could use two cache lines per skb when finding the insertion point,
if for some reason inet6_skb_parm size is increased in the future.
By using skb->ip_defrag_offset instead of skb->cb[], we pack all
the fields in a single cache line, matching what we did for IPv4.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:58 +0000 (12:58 -0700)]
inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
this integer is currently in a different cache line than skb->next,
meaning that we use two cache lines per skb when finding the insertion point.
By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
in a single cache line and save precious memory bandwidth.
Note that after the fast path added by Changli Gao in commit
d6bebca92c66 ("fragment: add fast path for in-order fragments")
this change won't help the fast path, since we still need
to access prev->len (2nd cache line), but will show great
benefits when slow path is entered, since we perform
a linear scan of a potentially long list.
Also, note that this potential long list is an attack vector,
we might consider also using an rb-tree there eventually.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:57 +0000 (12:58 -0700)]
inet: frags: reorganize struct netns_frags
Put the read-mostly fields in a separate cache line
at the beginning of struct netns_frags, to reduce
false sharing noticed in inet_frag_kill()
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:56 +0000 (12:58 -0700)]
rhashtable: reorganize struct rhashtable layout
While under a frags DDOS I noticed unfortunate false sharing between
@nelems and @params.automatic_shrinking.
Move @nelems to the end of struct rhashtable so that the first cache line
is shared between all cpus, because it is almost never dirtied.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:55 +0000 (12:58 -0700)]
ipv6: frags: rewrite ip6_expire_frag_queue()
Make it similar to IPv4 ip_expire(), and release the lock
before calling icmp functions.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:54 +0000 (12:58 -0700)]
inet: frags: do not clone skb in ip_expire()
An skb_clone() was added in commit
ec4fbd64751d ("inet: frag: release
spinlock before calling icmp_send()")
While fixing the bug at that time, it also added a very high cost
for DDOS frags, as the ICMP rate limit is applied after this
expensive operation (skb_clone() + consume_skb(), implying memory
allocations, a copy, and freeing).
We can use skb_get(head) here; all we want is to make sure the skb won't
be freed by another cpu.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:53 +0000 (12:58 -0700)]
inet: frags: break the 2GB limit for frags storage
Some users are willing to provision huge amounts of memory to be able
to perform reassembly reasonably well under pressure.
Current memory tracking is using one atomic_t and integers.
Switch to atomic_long_t so that 64bit arches can use more than 2GB,
without any cost for 32bit arches.
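A sketch of the accounting switch (helper names here are illustrative, not
the actual frag_mem_limit() implementation):

#include <linux/atomic.h>

/* 64bit arches get a full 'long' worth of accounting; 32bit arches keep
 * the same cost as before. */
static inline long frag_mem_read(atomic_long_t *mem)
{
	return atomic_long_read(mem);
}

static inline void frag_mem_add(atomic_long_t *mem, long val)
{
	atomic_long_add(val, mem);
}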
Note that this patch avoids an overflow error if high_thresh was set
to ~2GB, since this test in inet_frag_alloc() was never true:
if (... || frag_mem_limit(nf) > nf->high_thresh)
Tested:
$ echo 16000000000 >/proc/sys/net/ipv4/ipfrag_high_thresh
<frag DDOS>
$ grep FRAG /proc/net/sockstat
FRAG: inuse 14705885 memory 16000002880
$ nstat -n ; sleep 1 ; nstat | grep Reas
IpReasmReqds    3317150    0.0
IpReasmFails    3317112    0.0
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:52 +0000 (12:58 -0700)]
inet: frags: remove inet_frag_maybe_warn_overflow()
This function is obsolete, after rhashtable addition to inet defrag.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:51 +0000 (12:58 -0700)]
inet: frags: get rid of inet_frag_evicting()
This refactors ip_expire(), since one indentation level is removed.
Note: in the future, we should try hard to avoid the skb_clone(),
since it is a serious performance cost.
Under DDOS, the ICMP message won't be sent because of rate limits.
The fact that ip6_expire_frag_queue() does not use skb_clone() is
disturbing too. Presumably IPv6 has the same
issue as the one we fixed in commit
ec4fbd64751d
("inet: frag: release spinlock before calling icmp_send()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:50 +0000 (12:58 -0700)]
inet: frags: remove some helpers
Remove sum_frag_mem_limit(), ip_frag_mem() & ip6_frag_mem().
Also, since we use rhashtable, we can bring back the number of fragments
in "grep FRAG /proc/net/sockstat /proc/net/sockstat6" that was
removed in commit
434d305405ab ("inet: frag: don't account number
of fragment queues")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:49 +0000 (12:58 -0700)]
inet: frags: use rhashtables for reassembly units
Some applications still rely on IP fragmentation, and to be fair the Linux
reassembly unit does not work under any serious load.
It uses static hash tables of 1024 buckets, with up to 128 items per bucket (!!!)
A work queue is supposed to garbage collect items when the host is under
memory pressure, and to do a hash rebuild, changing the seed used in hash
computations. This work queue blocks softirqs for up to 25 ms when doing a
hash rebuild, occurring every 5 seconds if the host is under fire.
Then there is the problem of sharing this hash table for all netns.
It is time to switch to rhashtables, and allocate one of them per netns
to speed up netns dismantle, since this is a critical metric these days.
Lookup is now using RCU. A followup patch will even remove
the refcount hold/release left from prior implementation and save
a couple of atomic operations.
Before this patch, 16 cpus (16 RX queue NIC) could not handle more
than 1 Mpps frags DDOS.
After the patch, I reach 9 Mpps without any tuning, and can use up to 2GB
of storage for the fragments (exact number depends on frags being evicted
after timeout)
$ grep FRAG /proc/net/sockstat
FRAG: inuse 1966916 memory 2140004608
A followup patch will change the limits for 64bit arches.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Florian Westphal <fw@strlen.de>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Alexander Aring <alex.aring@gmail.com>
Cc: Stefan Schmidt <stefan@osg.samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:48 +0000 (12:58 -0700)]
rhashtable: add schedule points
Rehashing and destroying a large hash table takes a lot of time
and happens in process context. It is safe to add cond_resched()
in rhashtable_rehash_table() and rhashtable_free_and_destroy().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:47 +0000 (12:58 -0700)]
inet: frags: refactor ipfrag_init()
We need to call inet_frags_init() before register_pernet_subsys(),
as a prereq for the following patch ("inet: frags: use rhashtables for reassembly units").
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:46 +0000 (12:58 -0700)]
inet: frags: refactor lowpan_net_frag_init()
We want to call lowpan_net_frag_init() earlier.
Similar to commit "inet: frags: refactor ipv6_frag_init()"
This is a prereq to "inet: frags: use rhashtables for reassembly units"
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:45 +0000 (12:58 -0700)]
inet: frags: refactor ipv6_frag_init()
We want to call inet_frags_init() earlier.
This is a prereq to "inet: frags: use rhashtables for reassembly units"
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:44 +0000 (12:58 -0700)]
inet: frags: add a pointer to struct netns_frags
In order to simplify the API, add a pointer to struct inet_frags.
This will allow us to make things less complex.
These functions no longer have a struct inet_frags parameter :
inet_frag_destroy(struct inet_frag_queue *q /*, struct inet_frags *f */)
inet_frag_put(struct inet_frag_queue *q /*, struct inet_frags *f */)
inet_frag_kill(struct inet_frag_queue *q /*, struct inet_frags *f */)
inet_frags_exit_net(struct netns_frags *nf /*, struct inet_frags *f */)
ip6_expire_frag_queue(struct net *net, struct frag_queue *fq)
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:43 +0000 (12:58 -0700)]
inet: frags: change inet_frags_init_net() return value
We will soon initialize one rhashtable per struct netns_frags
in inet_frags_init_net().
This patch changes the return value to eventually propagate an
error.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 31 Mar 2018 19:58:42 +0000 (12:58 -0700)]
ipv6: frag: remove unused field
csum field in struct frag_queue is not used, remove it.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 03:24:20 +0000 (23:24 -0400)]
Merge branch 'bnxt_en-next'
Michael Chan says:
====================
bnxt_en: Update for net-next.
Misc. updates including updated firmware interface, some additional
port statistics, a new IRQ assignment scheme for the RDMA driver, support
for VF trust, and other changes and improvements for SRIOV.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:21 +0000 (13:54 -0400)]
bnxt_en: Add ULP calls to stop and restart IRQs.
When the driver needs to re-initialize the IRQ vectors, we make the
new ulp_irq_stop() call to tell the RDMA driver to disable and free
the IRQ vectors. After the IRQ vectors have been re-initialized, we
make the ulp_irq_restart() call to tell the RDMA driver that
IRQs can be restarted.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:20 +0000 (13:54 -0400)]
bnxt_en: Reserve completion rings and MSIX for bnxt_re RDMA driver.
Add additional logic to reserve completion rings for the bnxt_re driver
when it requests MSIX vectors. The function bnxt_cp_rings_in_use()
will return the total number of completion rings used by both drivers
that need to be reserved. If the network interface is up, we will
close and open the NIC to reserve the new set of completion rings and
re-initialize the vectors.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:19 +0000 (13:54 -0400)]
bnxt_en: Refactor bnxt_need_reserve_rings().
Refactor bnxt_need_reserve_rings() slightly so that __bnxt_reserve_rings()
can call it and remove some duplicated code.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:18 +0000 (13:54 -0400)]
bnxt_en: Add IRQ remapping logic.
Add remapping logic so that bnxt_en can use any arbitrary MSIX vectors.
This will allow the driver to reserve one range of MSIX vectors to be
used by both bnxt_en and bnxt_re. bnxt_en can now skip over the MSIX
vectors used by bnxt_re.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:17 +0000 (13:54 -0400)]
bnxt_en: Change IRQ assignment for RDMA driver.
In the current code, the range of MSIX vectors allocated for the RDMA
driver is disjoint from the network driver. This creates a problem
for the new firmware ring reservation scheme. The new scheme requires
the reserved completion rings/MSIX vectors to be in a contiguous
range.
Change the logic to allocate RDMA MSIX vectors to be contiguous with
the vectors used by bnxt_en on new firmware using the new scheme.
The new function bnxt_get_num_msix() calculates the exact number of
vectors needed by both drivers.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:16 +0000 (13:54 -0400)]
bnxt_en: Improve ring allocation logic.
Currently, the driver code makes some assumptions about the group index
and the map index of rings. This makes the code more difficult to
understand and less flexible.
Improve it by adding the grp_idx and map_idx fields explicitly to the
bnxt_ring_struct as a union. The grp_idx is initialized for each tx ring
and rx agg ring during init time. We do the same for the map_idx for
each cmpl ring.
The grp_idx ties the tx ring to the ring group. The map_idx is the
doorbell index of the ring. With this new infrastructure, we can change
the ring index mapping scheme easily in the future.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:15 +0000 (13:54 -0400)]
bnxt_en: Improve valid bit checking in firmware response message.
When firmware sends a DMA response to the driver, the last byte of the
message will be set to 1 to indicate that the whole response is valid.
The driver waits for the message to be valid before reading the message.
The firmware spec allows these response messages to increase in
length by adding new fields to the end of these messages. The
older spec's valid location may become a new field in a newer
spec. To guarantee compatibility, the driver should zero the valid
byte before interpreting the entire message so that any new fields not
implemented by the older spec will be read as zero.
For messages that are forwarded to VFs, we need to set the length
and re-instate the valid bit so the VF will see the valid response.
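An illustrative sketch of the pattern described above (names and the
surrounding wait loop are assumptions, not the actual bnxt_en code):

#include <linux/types.h>
#include <asm/barrier.h>

/* 'resp' is the DMA response buffer, 'len' the response length reported by
 * firmware; the last byte of the message is the valid byte. */
static bool hwrm_resp_ready(u8 *resp, unsigned int len)
{
	volatile u8 *valid = resp + len - 1;

	if (!*valid)
		return false;	/* firmware has not finished the DMA yet */

	/* Read the rest of the message only after the valid byte is seen. */
	dma_rmb();

	/* Zero the valid byte before interpreting the message, so that a
	 * field a newer spec defines at this offset reads as zero when the
	 * firmware implements the older, shorter response. */
	*valid = 0;
	return true;
}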
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:14 +0000 (13:54 -0400)]
bnxt_en: Improve resource accounting for SRIOV.
When VFs are created, the current code subtracts the maximum VF
resources from the PF's pool. This under-estimates the resources
remaining in the PF pool. Instead, we should subtract the minimum
VF resources. The VF minimum resources are guaranteed to the VFs
and only these should be subtracted from the PF's pool.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:13 +0000 (13:54 -0400)]
bnxt_en: Check max_tx_scheduler_inputs value from firmware.
When checking for the maximum pre-set TX channels for ethtool -l, we
need to check the current max_tx_scheduler_inputs parameter from firmware.
This parameter specifies the max input for the internal QoS nodes currently
available to this function. The function's TX rings will be capped by this
parameter. By adding this logic, we provide a more accurate pre-set max
TX channels to the user.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Sat, 31 Mar 2018 17:54:12 +0000 (13:54 -0400)]
bnxt_en: Add extended port statistics support
Gather periodic extended port statistics, if the device is a PF and
the link is up.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Sat, 31 Mar 2018 17:54:11 +0000 (13:54 -0400)]
bnxt_en: Include additional hardware port statistics in ethtool -S.
Include additional hardware port statistics in ethtool -S, which
are useful for debugging.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Sat, 31 Mar 2018 17:54:10 +0000 (13:54 -0400)]
bnxt_en: Add support for ndo_set_vf_trust
Trusted VFs are allowed to modify the MAC address, even when the PF
has assigned one.
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Branden [Sat, 31 Mar 2018 17:54:09 +0000 (13:54 -0400)]
bnxt_en: fix clear flags in ethtool reset handling
Clear the flags when the reset command is processed successfully for the
specified components.
Fixes: 6502ad5963a5 ("bnxt_en: Add ETH_RESET_AP support")
Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:08 +0000 (13:54 -0400)]
bnxt_en: Use a dedicated VNIC mode for RDMA.
If the RDMA driver is registered, use a new VNIC mode that allows
RDMA traffic to be seen on the netdev in promiscuous mode.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:07 +0000 (13:54 -0400)]
bnxt_en: Adjust default rings for multi-port NICs.
Change the default ring logic to select up to 8 rings per port if the
default rings x NIC ports <= total CPUs.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sat, 31 Mar 2018 17:54:06 +0000 (13:54 -0400)]
bnxt_en: Update firmware interface to 1.9.1.15.
Minor changes, such as new extended port statistics.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Sat, 31 Mar 2018 06:11:41 +0000 (06:11 +0000)]
vlan: vlan_hw_filter_capable() can be static
Fixes the following sparse warning:
net/8021q/vlan_core.c:168:6: warning:
symbol 'vlan_hw_filter_capable' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:31:43 +0000 (22:31 -0400)]
Merge tag 'mlx5-updates-2018-03-30' of git://git./linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2018-03-30
This series contains updates to mlx5 core and mlx5e netdev drivers.
The main highlight of this series is the RX optimizations for striding RQ path,
introduced by Tariq.
The first four patches are trivial misc cleanups:
- Spelling mistake fix
- Dead code removal
- Warning messages
RX optimizations for striding RQ:
1) RX refactoring, cleanups and micro optimizations
- MTU calculation simplifications, obsoletes some WQEs-to-packets translation
functions and helps delete ~60 LOC.
- Do not busy-wait a pending UMR completion.
- Post the new values of the UMR WQE inline, instead of using a data pointer.
- Use pre-initialized structures to save calculations in the datapath.
2) Use a linear SKB in Striding RQ ("build_skb"). Using a linear SKB has many advantages:
- Saves a memcpy of the headers.
- No page-boundary checks in datapath.
- No filler CQEs.
- Significantly smaller CQ.
- SKB data resides contiguously in the linear part, and is not split into
a small part (linear) and a large part (fragment).
This saves datapath cycles in driver and improves utilization
of SKB fragments in GRO.
- The fragments of a resulting GRO SKB follow the IP forwarding
assumption of equal-size fragments.
Implementation details:
HW writes the packets to the beginning of a stride,
i.e. does not keep headroom. To overcome this we make sure we can
extend backwards and use the last bytes of stride i-1.
Extra care is needed for stride 0 as it has no preceding stride.
We make sure headroom bytes are available by shifting the buffer
pointer passed to HW by headroom bytes.
This configuration now becomes default, whenever capable.
Of course, this implies turning LRO off.
Performance testing:
ConnectX-5, single core, single RX ring, default MTU.
UDP packet rate, early drop in TC layer:
--------------------------------------------
| pkt size | before | after | ratio |
--------------------------------------------
| 1500byte | 4.65 Mpps | 5.96 Mpps | 1.28x |
| 500byte | 5.23 Mpps | 5.97 Mpps | 1.14x |
| 64byte | 5.94 Mpps | 5.96 Mpps | 1.00x |
--------------------------------------------
TCP streams: ~20% gain
3) Support XDP over Striding RQ:
Now that linear SKB is supported over Striding RQ,
we can support XDP by setting stride size to PAGE_SIZE
and headroom to XDP_PACKET_HEADROOM.
Striding RQ is capable of a higher packet-rate than
conventional RQ.
Performance testing:
ConnectX-5, 24 rings, default MTU.
CQE compression ON (to reduce completions BW in PCI).
XDP_DROP packet rate:
--------------------------------------------------
| pkt size | XDP rate | 100GbE linerate | pct% |
--------------------------------------------------
| 64byte | 126.2 Mpps | 148.0 Mpps | 85% |
| 128byte | 80.0 Mpps | 84.8 Mpps | 94% |
| 256byte | 42.7 Mpps | 42.7 Mpps | 100% |
| 512byte | 23.4 Mpps | 23.4 Mpps | 100% |
--------------------------------------------------
4) Remove mlx5 page_ref bulking in Striding RQ and use page_ref_inc only when needed.
Without this bulking, we have:
- no atomic ops on WQE allocation or free
- one atomic op per SKB
- In the default MTU configuration (1500, stride size is 2K),
the non-bulking method executes 2 atomic ops as before
- For larger MTUs with a stride size of 4K, the non-bulking method
executes only a single op.
- For XDP (stride size of 4K, no SKBs), the non-bulking method has no atomic ops per packet at all.
Performance testing:
ConnectX-5, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.
Single core packet rate (64 bytes).
Early drop in TC: no degradation.
XDP_DROP:
before: 14,270,188 pps
after: 20,503,603 pps, 43% improvement.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:29:12 +0000 (22:29 -0400)]
Merge tag 'rxrpc-next-20180330' of git://git./linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Fixes and more traces
Here are some patches that add some more tracepoints to AF_RXRPC and fix
some issues therein:
(1) Fix the use of VERSION packets to keep firewall routes open.
(2) Fix the incorrect current time usage in a tracepoint.
(3) Fix Tx ring annotation corruption.
(4) Fix accidental conversion of call-level abort into connection-level
abort.
(5) Fix calculation of resend time.
(6) Remove a couple of unused variables.
(7) Fix a bunch of checker warnings and an error. Note that not all
warnings can be quashed as checker doesn't seem to correctly handle
seqlocks.
(8) Fix a potential race between call destruction and socket/net
destruction.
(9) Add a tracepoint to track rxrpc_local refcounting.
(10) Fix an apparent leak of rxrpc_local objects.
(11) Add a tracepoint to track rxrpc_peer refcounting.
(12) Fix a leak of rxrpc_peer objects.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang [Fri, 30 Mar 2018 20:57:59 +0000 (13:57 -0700)]
hv_netvsc: Clean up extra parameter from rndis_filter_receive_data()
The variables, msg and data, have the same value. This patch removes
the extra one.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Fri, 30 Mar 2018 19:37:30 +0000 (12:37 -0700)]
ethernet: hisilicon: hns: hns_dsaf_mac: Use generic eth_broadcast_addr
Rather than use an on-stack array to copy a broadcast address, use
the generic eth_broadcast_addr function to save a trivial amount of
object code.
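For reference, a minimal illustration of the generic helper used here (the
wrapper name is illustrative):

#include <linux/etherdevice.h>

static void example_fill_broadcast(u8 addr[ETH_ALEN])
{
	eth_broadcast_addr(addr);	/* addr becomes ff:ff:ff:ff:ff:ff */
}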
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:24:58 +0000 (22:24 -0400)]
Merge branch 'net_rwsem-fixes'
Kirill Tkhai says:
====================
net_rwsem fixes
There is the wext_netdev_notifier_call()->wireless_nlevent_flush()
netdevice notifier, which takes net_rwsem, so we can't take
net_rwsem in {,un}register_netdevice_notifier().
Since {,un}register_netdevice_notifier() is executed under
pernet_ops_rwsem, net_namespace_list can't change while we are
holding it, so there is no need for net_rwsem in these functions [1/2].
The same applies in [2/2]. We make callers of __rtnl_link_unregister()
take pernet_ops_rwsem, and close the race with setup_net()
and cleanup_net(), so __rtnl_link_unregister() does not need it.
This also fixes the problem that __rtnl_link_unregister() does
not see initializing and exiting nets.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Kirill Tkhai [Fri, 30 Mar 2018 16:38:37 +0000 (19:38 +0300)]
net: Do not take net_rwsem in __rtnl_link_unregister()
This function calls call_netdevice_notifier(), which also
may take net_rwsem. So, we can't use net_rwsem here.
This patch makes callers of this functions take pernet_ops_rwsem,
like register_netdevice_notifier() does. This will protect
the modifications of net_namespace_list, and allows notifiers
to take it (they won't have to care about context).
Since __rtnl_link_unregister() is used on module load
and unload (which are not frequent operations), this looks
better to me than making all call_netdevice_notifier() calls
always execute in "protected net_namespace_list" context.
Also, this fixes the problem we dealt with in
328fbe747ad4
"Close race between {un, }register_netdevice_notifier and ...",
and guarantees __rtnl_link_unregister() does not skip
exiting nets.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kirill Tkhai [Fri, 30 Mar 2018 16:38:27 +0000 (19:38 +0300)]
net: Remove net_rwsem from {, un}register_netdevice_notifier()
These functions take net_rwsem, while wireless_nlevent_flush()
also takes it. But down_read() can't be taken recursively,
because of the rw_semaphore design, which prevents it from being
occupied by only readers forever.
Since we take pernet_ops_rwsem in {,un}register_netdevice_notifier(),
the net list can't change, so these down_read()/up_read() can be removed.
Fixes: f0b07bb151b0 "net: Introduce net_rwsem to protect net_namespace_list"
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Fri, 30 Mar 2018 02:24:28 +0000 (02:24 +0000)]
net: hns3: remove unnecessary pci_set_drvdata() and devm_kfree()
There is no need for explicit calls of devm_kfree(), as the allocated
memory will be freed during the driver's detach.
The driver core clears the driver data to NULL after device_release.
Thus, there is no need to manually clear the device driver data to NULL.
So remove the unnecessary pci_set_drvdata() and devm_kfree().
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Fri, 30 Mar 2018 16:28:51 +0000 (09:28 -0700)]
netdevsim: Change nsim_devlink_setup to return error to caller
Change nsim_devlink_setup to return any error back to the caller and
update nsim_init to handle it.
Requested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:19:59 +0000 (22:19 -0400)]
Merge branch 'tipc-slim-down-name-table'
Jon Maloy says:
====================
tipc: slim down name table
We clean up and improve the name binding table:
- Replace the memory consuming 'sub_sequence/service range' array with
an RB tree.
- Introduce support for overlapping service sequences/ranges
v2: #1: Fixed a missing initialization reported by David Miller
#4: Obsoleted and replaced a few more macros to get a consistent
terminology in the API.
#5: Added new commit to fix a potential string overflow bug (it
is still only in net-next) reported by Arnd Bergmann
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Thu, 29 Mar 2018 21:20:45 +0000 (23:20 +0200)]
tipc: avoid possible string overflow
gcc points out that the combined length of the fixed-length inputs to
l->name is larger than the destination buffer size:
net/tipc/link.c: In function 'tipc_link_create':
net/tipc/link.c:465:26: error: '%s' directive writing up to 32 bytes
into a region of size between 26 and 58 [-Werror=format-overflow=]
sprintf(l->name, "%s:%s-%s:unknown", self_str, if_name, peer_str);
net/tipc/link.c:465:2: note: 'sprintf' output 11 or more bytes
(assuming 75) into a destination of size 60
sprintf(l->name, "%s:%s-%s:unknown", self_str, if_name, peer_str);
A detailed analysis reveals that the theoretical maximum length of
a link name is:
max self_str + 1 + max if_name + 1 + max peer_str + 1 + max if_name =
16 + 1 + 15 + 1 + 16 + 1 + 15 = 65
Since we also need space for a trailing zero we now set MAX_LINK_NAME
to 68.
Just to be on the safe side we also replace the sprintf() call with
snprintf().
Fixes: 25b0b9c4e835 ("tipc: handle collisions of 32-bit node address
hash values")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Thu, 29 Mar 2018 21:20:44 +0000 (23:20 +0200)]
tipc: rename address types in user api
The three address type structs in the user API have names that in
reality reflect the specific, non-Linux environment where they were
originally created.
We now give them more intuitive names, in accordance with how TIPC is
described in the current documentation.
struct tipc_portid -> struct tipc_socket_addr
struct tipc_name -> struct tipc_service_addr
struct tipc_name_seq -> struct tipc_service_range
To avoid confusion, we also update some comments and macro names to
match the new terminology.
For compatibility, we add macros that map all old names to the new ones.
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Thu, 29 Mar 2018 21:20:43 +0000 (23:20 +0200)]
tipc: permit overlapping service ranges in name table
With the new RB tree structure for service ranges it becomes possible to
solve an old problem: we can now allow overlapping service ranges in
the table.
When inserting a new service range to the tree, we use 'lower' as primary
key, and when necessary 'upper' as secondary key.
Since there may now be multiple service ranges matching an indicated
'lower' value, we must also add the 'upper' value to the functions
used for removing publications, so that the correct, corresponding
range item can be found.
These changes guarantee that a well-formed publication/withdrawal item
from a peer node never will be rejected, and make it possible to
eliminate the problematic backlog functionality we currently have for
handling such cases.
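A minimal sketch of the two-key insertion order described above (the struct
and its field names are illustrative, not the actual TIPC definitions):

#include <linux/rbtree.h>
#include <linux/types.h>

struct service_range {
	u32 lower;
	u32 upper;
	struct rb_node tree_node;
};

static void service_range_insert(struct rb_root *root, struct service_range *new)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;
	struct service_range *sr;

	while (*link) {
		parent = *link;
		sr = rb_entry(parent, struct service_range, tree_node);
		/* 'lower' is the primary key, 'upper' the secondary key */
		if (new->lower < sr->lower ||
		    (new->lower == sr->lower && new->upper < sr->upper))
			link = &parent->rb_left;
		else
			link = &parent->rb_right;
	}
	rb_link_node(&new->tree_node, parent, link);
	rb_insert_color(&new->tree_node, root);
}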
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Thu, 29 Mar 2018 21:20:42 +0000 (23:20 +0200)]
tipc: refactor name table translate function
The tipc_nametbl_translate() function is ugly and hard to
follow. This can be improved somewhat by introducing a stack variable
for holding the publication list to be used and re-ordering the if-
clauses for selection of algorithm.
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Maloy [Thu, 29 Mar 2018 21:20:41 +0000 (23:20 +0200)]
tipc: replace name table service range array with rb tree
The current design of the binding table has an unnecessarily
memory-consuming and complex data structure. It aggregates the service range
items into an array, which is expanded by a factor two every time it
becomes too small to hold a new item. Furthermore, the arrays never
shrink when the number of ranges diminishes.
We now replace this array with an RB tree that is holding the range
items as tree nodes, each range directly holding a list of bindings.
This, along with a few name changes, improves both readability and
volume of the code, as well as reducing memory consumption and hopefully
improving cache hit rate.
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:19:14 +0000 (22:19 -0400)]
Merge branch 'bridge-mtu'
Nikolay Aleksandrov says:
====================
net: bridge: MTU handling changes
As previously discussed the recent changes break some setups and could lead
to packet drops. Thus the first patch reverts the behaviour for the bridge
to follow the minimum MTU but also keeps the ability to set the MTU to the
maximum (out of all ports) if vlan filtering is enabled. Patch 02 is the
bigger change in behaviour - we've always had trouble when configuring
bridges and their MTU which is auto tuning on port events
(add/del/changemtu), which means config software needs to chase it and fix
it after each such event, after patch 02 we allow the user to configure any
MTU (ETH_MIN/MAX limited) but once that is done the bridge stops auto
tuning and relies on the user to keep the MTU correct.
This should be compatible with cases that don't touch the MTU (or set it
to the same value), while allowing to configure the MTU and not worry
about it changing afterwards.
The patches are intentionally split like this, so that if they get accepted
and there are any complaints patch 02 can be reverted.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Fri, 30 Mar 2018 10:46:19 +0000 (13:46 +0300)]
net: bridge: disable bridge MTU auto tuning if it was set manually
As Roopa noted today, the biggest source of problems when configuring
a bridge and its ports is that the bridge MTU keeps changing automatically on
port events (add/del/changemtu). That leads to inconsistent behaviour
and network config software needs to chase the MTU and fix it on each
such event. Let's improve on that situation and allow for the user to
set any MTU within ETH_MIN/MAX limits, but once manually configured it
is the user's responsibility to keep it correct afterwards.
In case the MTU isn't manually set - the behaviour reverts to the
previous and the bridge follows the minimum MTU.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Fri, 30 Mar 2018 10:46:18 +0000 (13:46 +0300)]
net: bridge: set min MTU on port events and allow user to set max
Recently the bridge was changed to automatically set maximum MTU on port
events (add/del/changemtu) when vlan filtering is enabled, but that
actually changes behaviour in a way which breaks some setups and can lead
to packet drops. In order to still allow that maximum to be set while being
compatible, we add the ability for the user to tune the bridge MTU up to
the maximum when vlan filtering is enabled, but that has to be done
explicitly and all port events (add/del/changemtu) lead to resetting that
MTU to the minimum as before.
Suggested-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:18:27 +0000 (22:18 -0400)]
Merge branch 'thunderx-DMAC-filtering'
Vadim Lomovtsev says:
====================
net: thunderx: implement DMAC filtering support
By default the CN88XX BGX accepts all incoming multicast and broadcast
packets, and filtering is disabled. The nic driver doesn't provide
the ability to change this behaviour.
This series implements DMAC filtering management for the CN88XX
nic driver, allowing the user to enable/disable filtering and configure
specific MAC addresses to filter traffic.
Changes from v1:
build issues:
- update code in order to address compiler warnings;
checkpatch.pl reported issues:
- update code in order to fit the 80-character line limit;
- update commit descriptions in order to fit the 80-character line limit;
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:53 +0000 (04:59 -0700)]
net: thunderx: add ndo_set_rx_mode callback implementation for VF
ndo_set_rx_mode() is called from atomic context, which causes
message response timeouts during VF-to-PF communication via MSI-X.
To get rid of that, we copy the passed mc list, parse the flags and queue
the handling of the kernel request to an ordered workqueue.
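An illustrative sketch of this deferral pattern (the structure, names and
the workqueue choice are assumptions, not the actual nicvf code):

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct xcast_request {
	struct work_struct work;
	struct net_device *netdev;
	/* snapshot of flags / copied mc list taken in atomic context */
};

static void xcast_work_handler(struct work_struct *work)
{
	struct xcast_request *req = container_of(work, struct xcast_request, work);

	/* Sleeping is allowed here, so the VF can exchange mailbox
	 * messages with the PF (RESET_XCAST / ADD_MCAST / SET_XCAST). */
	kfree(req);
}

static void example_set_rx_mode(struct net_device *netdev)
{
	struct xcast_request *req = kzalloc(sizeof(*req), GFP_ATOMIC);

	if (!req)
		return;
	req->netdev = netdev;	/* plus whatever state must be snapshotted */
	INIT_WORK(&req->work, xcast_work_handler);
	/* The driver uses its own ordered workqueue; system_wq is used here
	 * only to keep the sketch short. */
	queue_work(system_wq, &req->work);
}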
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:52 +0000 (04:59 -0700)]
net: thunderx: add workqueue control structures for handle ndo_set_rx_mode request
The kernel calls the ndo_set_rx_mode() callback from atomic context, which
causes messaging timeouts between VF and PF (as they're implemented via
MSI-X). So in order to handle ndo_set_rx_mode() we need to get out of
atomic context. This commit implements the necessary workqueue-related
structures to let the VF queue kernel request processing in non-atomic
context later.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:51 +0000 (04:59 -0700)]
net: thunderx: add XCAST messages handlers for PF
This commit adds message handling for the ndo_set_rx_mode()
callback on the PF side.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:50 +0000 (04:59 -0700)]
net: thunderx: add new messages for handle ndo_set_rx_mode callback
The kernel calls the ndo_set_rx_mode() callback supplying it with all the
necessary info, such as device state flags, the multicast mac address list
and so on.
Since we have only 128 bits to communicate with the PF, we need to initiate
several requests to the PF, each carrying a small/short operation based on
the input data.
So this commit implements the following PF message codes along with new
data structures for them:
NIC_MBOX_MSG_RESET_XCAST to flush all filters configured for this
particular network interface (VF)
NIC_MBOX_MSG_ADD_MCAST to add new MAC address to DMAC filter registers
for this particular network interface (VF)
NIC_MBOX_MSG_SET_XCAST to apply filtering configuration to filter control
register
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:49 +0000 (04:59 -0700)]
net: thunderx: add multicast filter management support
The ThunderX NIC can be partitioned into up to 128 VFs and thus
presented to the system. Each VF is mapped to a BGX:LMAC pair, and each VF
is configured by the kernel individually. Eventually a bunch of VFs could be
mapped onto the same BGX:LMAC pair and thus cause several multicast
filtering configuration requests to the LMAC with the same MAC addresses.
This commit is to add ThunderX NIC BGX filtering manipulation routines.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:48 +0000 (04:59 -0700)]
net: thunderx: add MAC address filter tracking for LMAC
The ThunderX NIC has two Ethernet interfaces (BGX), each of which can have
up to four logical MACs (LMACs) configured. Each BGX has 32 filters to be
configured for filtering ingress packets. The number of filters available
to a particular LMAC ranges from 8 (if four LMACs are configured per BGX)
up to 32 (in case only one LMAC is configured per BGX).
At the same time the NIC can present up to 128 VFs to the OS as network
interfaces, each of which the kernel will configure with a set of MAC
addresses for filtering. So, to prevent duplicates in the BGX filter
registers coming from different network interfaces, it is required to cache
and track all filter configuration requests prior to applying them onto the
BGX filter registers.
This commit updates the LMAC structures with control fields for
allocating/releasing the filter tracking list, along with implementing
per-LMAC dmac array allocation/release.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Fri, 30 Mar 2018 11:59:47 +0000 (04:59 -0700)]
net: thunderx: move filter register related macro into proper place
The ThunderX NIC has a set of registers which allows configuring the
filter policy for ingress packets. There are three possible regimes
of filtering multicasts, broadcasts and unicasts: accept all, reject all
and accept only what the filter allows.
The current implementation has an enum with all of them and two generic
macros for enabling filtering at all (CAM_ACCEPT) and enabling/disabling
broadcast packets, which also should be corrected in order to represent
the register bits properly. All these values are private to the driver and
there is no need to 'publish' them via a header file.
This commit moves the filtering register manipulation values from the
header file into the source, with explicit assignment of the exact register
values to be used while configuring the registers.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Apr 2018 02:17:30 +0000 (22:17 -0400)]
Merge branch 'meson8b'
Martin Blumenstingl says:
====================
Meson8m2 support for dwmac-meson8b
The Meson8m2 SoC is an updated version of the Meson8 SoC. Some of the
peripherals are shared with Meson8b (for example the watchdog registers
and the internal temperature sensor calibration procedure).
Meson8m2 also seems to include the same Gigabit MAC register layout as
Meson8b.
The registers in the Amlogic dwmac "glue" seem identical between Meson8b
and Meson8m2. Manual testing seems to confirm this.
To be extra-safe a new compatible string is added because there's no
(public) documentation on the Meson8m2 SoC. This will allow us to
implement any SoC-specific variations later on (if needed).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin Blumenstingl [Thu, 29 Mar 2018 23:00:35 +0000 (01:00 +0200)]
net: stmmac: dwmac-meson8b: Add support for the Meson8m2 SoC
The Meson8m2 SoC uses a similar (potentially even identical) register
layout as the Meson8b and GXBB SoCs for the dwmac glue.
Add a new compatible string and update the module description to
indicate support for these SoCs.
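A hedged sketch of what such a compatible-string addition typically looks
like in the glue driver's OF match table; the table name and surrounding
entries are abridged assumptions, only the new "amlogic,meson8m2-dwmac"
string is taken from this change:
    static const struct of_device_id meson8b_dwmac_match[] = {
            { .compatible = "amlogic,meson8b-dwmac" },
            { .compatible = "amlogic,meson8m2-dwmac" },     /* new entry */
            { .compatible = "amlogic,meson-gxbb-dwmac" },
            { }
    };
    MODULE_DEVICE_TABLE(of, meson8b_dwmac_match);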
Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin Blumenstingl [Thu, 29 Mar 2018 23:00:34 +0000 (01:00 +0200)]
dt-bindings: net: meson-dwmac: add support for the Meson8m2 SoC
The Meson8m2 SoC uses a similar (potentially even identical) register
layout for the dwmac glue as Meson8b and GXBB. Unfortunately there is no
documentation available.
Testing shows that both RMII and RGMII PHYs work if they are
configured as on Meson8b. Add a new compatible string to the
documentation so differences (if there are any) between Meson8m2 and the
other SoCs can be taken care of within the driver.
Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Sat, 31 Mar 2018 00:17:57 +0000 (02:17 +0200)]
Merge branch 'bpf-cgroup-bind-connect'
Andrey Ignatov says:
====================
v2->v3:
- rebase due to conflicts
- fix ipv6=m build
v1->v2:
- support expected_attach_type at prog load time so that prog (incl.
context accesses and calls to helpers) can be validated with regard to
specific attach point it is supposed to be attached to.
Later, at attach time, attach type is checked so that it must be same as
at load time if it was provided
- reworked hooks to rely on expected_attach_type, and reduced number of new
prog types from 6 to just 1: BPF_PROG_TYPE_CGROUP_SOCK_ADDR
- reused BPF_PROG_TYPE_CGROUP_SOCK for sys_bind post-hooks
- add selftests for post-sys_bind hook
For our container management we've been using a complicated and fragile setup
consisting of an LD_PRELOAD wrapper that intercepts bind and connect calls
from all containerized applications. Unfortunately it doesn't work for apps
that don't use glibc, and changing all applications that run in the datacenter
is not possible due to 3rd party code and libraries (despite being open
source) and the sheer amount of legacy code that would have to be rewritten
(we're rewriting what we can in parallel)
These applications are written without containers in mind and have
built-in assumptions about network services. For example, application X
expects to connect to localhost:special_port and find service Y there.
To move application X and service Y into two different containers
LD_PRELOAD approach is used to help one service connect to another
without rewriting them.
Moving these two applications into different L2 (netns) or L3 (vrf)
network isolation scopes doesn't help to solve the problem, since
applications need to see each other like they were running on
the host without containers.
So if app X and app Y would run in different netns something
would need to punch a connectivity hole in those namespaces.
That would be real layering violation (with corresponding
network debugging pains), since clean l2, l3 abstraction would
suddenly support something that breaks through the layers.
Instead we used LD_PRELOAD (and now bpf programs) at bind/connect
time to help applications discover and connect to each other.
All applications are running in init_netns and there are no VRFs.
After bind/connect the normal fib/neighbor core networking
logic works as it should always do and the whole system is
clean from network point of view and can be debugged with
standard tools.
We also considered resurrecting Hannes's afnetns work,
but hierarchical namespace abstractions don't work due
to these built-in networking assumptions inside the apps.
To run an application inside cgroup container that was not written
with containers in mind we have to make an illusion of running
in non-containerized environment.
In some cases we remember the port and container id in the post-bind hook
in a bpf map and when some other task in a different container is trying
to connect to a service we need to know where this service is running.
It can be remote and can be local. Both client and service may or may not
be written with containers in mind and this sockaddr rewrite is providing
connectivity and load balancing feature.
BPF+cgroup looks to be the best solution for this problem.
Hence we introduce 3 hooks:
- at entry into sys_bind and sys_connect
to let a BPF program inspect and modify 'struct sockaddr' provided
by user space and fail bind/connect when appropriate
- post sys_bind after port is allocated
The approach works great and has zero overhead for anyone who doesn't
use it and very low overhead when deployed.
A different use case for this feature is a low-overhead firewall
that doesn't need to inspect all packets and works at bind/connect time.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:08 +0000 (15:08 -0700)]
selftests/bpf: Selftest for sys_bind post-hooks.
Add selftest for attach types `BPF_CGROUP_INET4_POST_BIND` and
`BPF_CGROUP_INET6_POST_BIND`.
The main things tested are:
* prog load behaves as expected (valid/invalid accesses in prog);
* prog attach behaves as expected (load- vs attach-time attach types);
* `BPF_CGROUP_INET_SOCK_CREATE` can be attached in a backward compatible
way;
* post-hooks return expected result and errno.
Example:
# ./test_sock
Test case: bind4 load with invalid access: src_ip6 .. [PASS]
Test case: bind4 load with invalid access: mark .. [PASS]
Test case: bind6 load with invalid access: src_ip4 .. [PASS]
Test case: sock_create load with invalid access: src_port .. [PASS]
Test case: sock_create load w/o expected_attach_type (compat mode) ..
[PASS]
Test case: sock_create load w/ expected_attach_type .. [PASS]
Test case: attach type mismatch bind4 vs bind6 .. [PASS]
Test case: attach type mismatch bind6 vs bind4 .. [PASS]
Test case: attach type mismatch default vs bind4 .. [PASS]
Test case: attach type mismatch bind6 vs sock_create .. [PASS]
Test case: bind4 reject all .. [PASS]
Test case: bind6 reject all .. [PASS]
Test case: bind6 deny specific IP & port .. [PASS]
Test case: bind4 allow specific IP & port .. [PASS]
Test case: bind4 allow all .. [PASS]
Test case: bind6 allow all .. [PASS]
Summary: 16 PASSED, 0 FAILED
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:07 +0000 (15:08 -0700)]
bpf: Post-hooks for sys_bind
"Post-hooks" are hooks that are called right before returning from
sys_bind. At this time IP and port are already allocated and no further
changes to `struct sock` can happen before returning from sys_bind but
BPF program has a chance to inspect the socket and change sys_bind
result.
Specifically it can e.g. inspect what port was allocated and if it
doesn't satisfy some policy, BPF program can force sys_bind to fail and
return EPERM to user.
Another example of usage is recording the IP:port pair to some map to
use it in later calls to sys_connect. E.g. if some TCP server inside
cgroup was bound to some IP:port_n, it can be recorded to a map. And
later when some TCP client inside same cgroup is trying to connect to
127.0.0.1:port_n, BPF hook for sys_connect can override the destination
and connect the application to IP:port_n instead of 127.0.0.1:port_n. That
helps force all applications inside a cgroup to use the desired IP without
breaking those applications if they e.g. use localhost to communicate
with each other.
== Implementation details ==
Post-hooks are implemented as two new attach types
`BPF_CGROUP_INET4_POST_BIND` and `BPF_CGROUP_INET6_POST_BIND` for
existing prog type `BPF_PROG_TYPE_CGROUP_SOCK`.
Separate attach types for IPv4 and IPv6 are introduced to avoid access
to IPv6 field in `struct sock` from `inet_bind()` and to IPv4 field from
`inet6_bind()` since those fields might not make sense in such cases.
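A hedged sketch of what a post-bind program could look like in the
selftests' BPF C dialect; the section name, helper header and port value are
illustrative assumptions, while the return-code semantics (1 = allow,
0 = make sys_bind fail with EPERM) are as described above:
    #include <linux/bpf.h>
    #include "bpf_helpers.h"        /* assumed selftests header providing SEC() */

    SEC("cgroup/post_bind4")        /* section name is an assumption */
    int post_bind4_prog(struct bpf_sock *sk)
    {
            /* example policy: only port 4444 is allowed for this cgroup;
             * src_port is one of the struct bpf_sock fields readable from
             * the new post-bind attach types */
            if (sk->src_port != 4444)
                    return 0;       /* sys_bind returns EPERM */

            return 1;               /* allow */
    }

    char _license[] SEC("license") = "GPL";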
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:06 +0000 (15:08 -0700)]
selftests/bpf: Selftest for sys_connect hooks
Add selftest for BPF_CGROUP_INET4_CONNECT and BPF_CGROUP_INET6_CONNECT
attach types.
Try to connect(2) to specified IP:port and test that:
* remote IP:port pair is overridden;
* local end of connection is bound to specified IP.
All combinations of IPv4/IPv6 and TCP/UDP are tested.
Example:
# tcpdump -pn -i lo -w connect.pcap 2>/dev/null &
[1] 478
# strace -qqf -e connect -o connect.trace ./test_sock_addr.sh
Wait for testing IPv4/IPv6 to become available ... OK
Load bind4 with invalid type (can pollute stderr) ... REJECTED
Load bind4 with valid type ... OK
Attach bind4 with invalid type ... REJECTED
Attach bind4 with valid type ... OK
Load connect4 with invalid type (can pollute stderr) libbpf: load bpf \
program failed: Permission denied
libbpf: -- BEGIN DUMP LOG ---
libbpf:
0: (b7) r2 = 23569
1: (63) *(u32 *)(r1 +24) = r2
2: (b7) r2 = 16777343
3: (63) *(u32 *)(r1 +4) = r2
invalid bpf_context access off=4 size=4
[ 1518.404609] random: crng init done
libbpf: -- END LOG --
libbpf: failed to load program 'cgroup/connect4'
libbpf: failed to load object './connect4_prog.o'
... REJECTED
Load connect4 with valid type ... OK
Attach connect4 with invalid type ... REJECTED
Attach connect4 with valid type ... OK
Test case #1 (IPv4/TCP):
Requested: bind(192.168.1.254, 4040) ..
Actual: bind(127.0.0.1, 4444)
Requested: connect(192.168.1.254, 4040) from (*, *) ..
Actual: connect(127.0.0.1, 4444) from (127.0.0.4, 56068)
Test case #2 (IPv4/UDP):
Requested: bind(192.168.1.254, 4040) ..
Actual: bind(127.0.0.1, 4444)
Requested: connect(192.168.1.254, 4040) from (*, *) ..
Actual: connect(127.0.0.1, 4444) from (127.0.0.4, 56447)
Load bind6 with invalid type (can pollute stderr) ... REJECTED
Load bind6 with valid type ... OK
Attach bind6 with invalid type ... REJECTED
Attach bind6 with valid type ... OK
Load connect6 with invalid type (can pollute stderr) libbpf: load bpf \
program failed: Permission denied
libbpf: -- BEGIN DUMP LOG ---
libbpf:
0: (b7) r6 = 0
1: (63) *(u32 *)(r1 +12) = r6
invalid bpf_context access off=12 size=4
libbpf: -- END LOG --
libbpf: failed to load program 'cgroup/connect6'
libbpf: failed to load object './connect6_prog.o'
... REJECTED
Load connect6 with valid type ... OK
Attach connect6 with invalid type ... REJECTED
Attach connect6 with valid type ... OK
Test case #3 (IPv6/TCP):
Requested: bind(face:b00c:1234:5678::abcd, 6060) ..
Actual: bind(::1, 6666)
Requested: connect(face:b00c:1234:5678::abcd, 6060) from (*, *)
Actual: connect(::1, 6666) from (::6, 37458)
Test case #4 (IPv6/UDP):
Requested: bind(face:b00c:1234:5678::abcd, 6060) ..
Actual: bind(::1, 6666)
Requested: connect(face:b00c:1234:5678::abcd, 6060) from (*, *)
Actual: connect(::1, 6666) from (::6, 39315)
### SUCCESS
# egrep 'connect\(.*AF_INET' connect.trace | \
> egrep -vw 'htons\(1025\)' | fold -b -s -w 72
502 connect(7, {sa_family=AF_INET, sin_port=htons(4040),
sin_addr=inet_addr("192.168.1.254")}, 128) = 0
502 connect(8, {sa_family=AF_INET, sin_port=htons(4040),
sin_addr=inet_addr("192.168.1.254")}, 128) = 0
502 connect(9, {sa_family=AF_INET6, sin6_port=htons(6060),
inet_pton(AF_INET6, "face:b00c:1234:5678::abcd", &sin6_addr),
sin6_flowinfo=0, sin6_scope_id=0}, 128) = 0
502 connect(10, {sa_family=AF_INET6, sin6_port=htons(6060),
inet_pton(AF_INET6, "face:b00c:1234:5678::abcd", &sin6_addr),
sin6_flowinfo=0, sin6_scope_id=0}, 128) = 0
# fg
tcpdump -pn -i lo -w connect.pcap 2> /dev/null
# tcpdump -r connect.pcap -n tcp | cut -c 1-72
reading from file connect.pcap, link-type EN10MB (Ethernet)
17:57:40.383533 IP 127.0.0.4.56068 > 127.0.0.1.4444: Flags [S], seq 1333
17:57:40.383566 IP 127.0.0.1.4444 > 127.0.0.4.56068: Flags [S.], seq 112
17:57:40.383589 IP 127.0.0.4.56068 > 127.0.0.1.4444: Flags [.], ack 1, w
17:57:40.384578 IP 127.0.0.1.4444 > 127.0.0.4.56068: Flags [R.], seq 1,
17:57:40.403327 IP6 ::6.37458 > ::1.6666: Flags [S], seq 406513443, win
17:57:40.403357 IP6 ::1.6666 > ::6.37458: Flags [S.], seq 2448389240, ac
17:57:40.403376 IP6 ::6.37458 > ::1.6666: Flags [.], ack 1, win 342, opt
17:57:40.404263 IP6 ::1.6666 > ::6.37458: Flags [R.], seq 1, ack 1, win
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:05 +0000 (15:08 -0700)]
bpf: Hooks for sys_connect
== The problem ==
See description of the problem in the initial patch of this patch set.
== The solution ==
The patch provides a much more reliable in-kernel solution for the 2nd
part of the problem: making outgoing connections from a desired IP.
It adds new attach types `BPF_CGROUP_INET4_CONNECT` and
`BPF_CGROUP_INET6_CONNECT` for program type
`BPF_PROG_TYPE_CGROUP_SOCK_ADDR` that can be used to override both
source and destination of a connection at connect(2) time.
The local end of a connection can be bound to a desired IP using the newly
introduced BPF helper `bpf_bind()`. It only allows binding to an IP though,
and doesn't support binding to a port, i.e. it leverages the
`IP_BIND_ADDRESS_NO_PORT` socket option. There are two reasons for this:
* looking for a free port is expensive and can affect performance
significantly;
* there is no use-case for the port.
As for the remote end (the `struct sockaddr *` passed by user space), both
parts of it can be overridden: remote IP and remote port. This is useful if
an application inside a cgroup wants to connect to another application inside
the same cgroup or to itself, but knows nothing about the IP assigned to the
cgroup.
Support is added for IPv4 and IPv6, for TCP and UDP.
IPv4 and IPv6 have separate attach types for same reason as sys_bind
hooks, i.e. to prevent reading from / writing to e.g. user_ip6 fields
when user passes sockaddr_in since it'd be out-of-bound.
== Implementation notes ==
The patch introduces a new field in `struct proto`: `pre_connect`, a pointer
to a function with the same signature as `connect` that is called before it.
The reason is that in some cases BPF hooks should be called well before
control is passed to `sk->sk_prot->connect`. Specifically,
`inet_dgram_connect` autobinds the socket before calling
`sk->sk_prot->connect`, and there is no way to call `bpf_bind()` from hooks
in e.g. `ip4_datagram_connect` or `ip6_datagram_connect` since it would cause
a double bind. On the other hand, `proto.pre_connect` provides a flexible way
to add BPF hooks for connect only for the protocols that need them and to
call them at the desired time before `connect`. Since `bpf_bind()` is allowed
to bind only to an IP and the autobind in `inet_dgram_connect` binds only the
port, there is no chance of a double bind.
bpf_bind() sets `force_bind_address_no_port` to bind only to an IP regardless
of the value of the `bind_address_no_port` socket field.
bpf_bind() sets `with_lock` to `false` when calling __inet_bind()
and __inet6_bind() since all call-sites where bpf_bind() is called
already hold the socket lock.
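A hedged sketch of a connect4 program using the new helper; the addresses,
ports, section name and the selftest-style helper/endian headers are
illustrative assumptions:
    #include <sys/socket.h>
    #include <linux/bpf.h>
    #include <linux/in.h>
    #include "bpf_helpers.h"    /* assumed: SEC() and bpf_bind() declaration */
    #include "bpf_endian.h"     /* assumed: bpf_htonl()/bpf_htons() */

    SEC("cgroup/connect4")      /* section name is an assumption */
    int connect4_prog(struct bpf_sock_addr *ctx)
    {
            struct sockaddr_in sa = {};

            /* redirect the connection to 127.0.0.1:4444 (example values) */
            ctx->user_ip4  = bpf_htonl(0x7f000001);
            ctx->user_port = bpf_htons(4444);

            /* pin the local end to 127.0.0.4; the port is left to autobind,
             * since bpf_bind() honours only the IP (IP_BIND_ADDRESS_NO_PORT) */
            sa.sin_family      = AF_INET;
            sa.sin_addr.s_addr = bpf_htonl(0x7f000004);
            if (bpf_bind(ctx, (struct sockaddr *)&sa, sizeof(sa)))
                    return 0;   /* fail connect(2) */

            return 1;           /* proceed with the rewritten sockaddr */
    }

    char _license[] SEC("license") = "GPL";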
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:04 +0000 (15:08 -0700)]
net: Introduce __inet_bind() and __inet6_bind
Refactor the `bind()` code to make it ready to be called from the BPF helper
function `bpf_bind()` (will be added soon). The implementations of
`inet_bind()` and `inet6_bind()` are split into `__inet_bind()` and
`__inet6_bind()` correspondingly. These functions can be used from both
`sk_prot->bind` and `bpf_bind()` contexts.
The new functions take two additional arguments.
`force_bind_address_no_port` forces binding to IP only, without checking the
`inet_sock.bind_address_no_port` field. It allows binding the local end of a
connection to a desired IP in `bpf_bind()` without changing the
`bind_address_no_port` field of the socket. This matters because `bpf_bind()`
can return an error, and we would otherwise have to restore the original
value of `bind_address_no_port` if we had changed it before calling the
helper.
`with_lock` specifies whether to lock the socket when working with `struct
sock` or not. The argument is set to `true` for `sk_prot->bind`, i.e. the old
behavior is preserved, but it will be set to `false` for the `bpf_bind()`
use-case, since all call-sites where `bpf_bind()` will be called already hold
the socket lock.
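For reference, a sketch of the resulting call shapes (prototypes as described
above; callers and error handling abridged):
    int __inet_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
                    bool force_bind_address_no_port, bool with_lock);
    int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
                     bool force_bind_address_no_port, bool with_lock);

    /* sk_prot->bind path: old behavior, honor bind_address_no_port and
     * take the socket lock */
    err = __inet_bind(sk, uaddr, addr_len, false, true);

    /* bpf_bind() path: force IP-only bind; caller already holds the lock */
    err = __inet_bind(sk, uaddr, addr_len, true, false);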
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:03 +0000 (15:08 -0700)]
selftests/bpf: Selftest for sys_bind hooks
Add selftest to work with bpf_sock_addr context from
`BPF_PROG_TYPE_CGROUP_SOCK_ADDR` programs.
Try to bind(2) on IP:port and apply:
* loads to make sure context can be read correctly, including narrow
loads (byte, half) for IP and full-size loads (word) for all fields;
* stores to those fields allowed by verifier.
All combinations of IPv4/IPv6 and TCP/UDP are tested.
Both scenarios are tested:
* valid programs can be loaded and attached;
* invalid programs can be neither loaded nor attached.
The test passes when the expected data can be read from the context in the
BPF program, and, after the call to bind(2), the socket is bound to the
IP:port pair that the BPF program wrote to the context.
Example:
# ./test_sock_addr
Attached bind4 program.
Test case #1 (IPv4/TCP):
Requested: bind(192.168.1.254, 4040) ..
Actual: bind(127.0.0.1, 4444)
Test case #2 (IPv4/UDP):
Requested: bind(192.168.1.254, 4040) ..
Actual: bind(127.0.0.1, 4444)
Attached bind6 program.
Test case #3 (IPv6/TCP):
Requested: bind(face:b00c:1234:5678::abcd, 6060) ..
Actual: bind(::1, 6666)
Test case #4 (IPv6/UDP):
Requested: bind(face:b00c:1234:5678::abcd, 6060) ..
Actual: bind(::1, 6666)
### SUCCESS
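A hedged userspace sketch of the attach step this test exercises; the cgroup
path is illustrative and the libbpf include path is assumed:
    #include <fcntl.h>
    #include <linux/bpf.h>
    #include "bpf/bpf.h"        /* tools/lib/bpf -- include path assumed */

    static int attach_bind4(int prog_fd)
    {
            int cg_fd = open("/mnt/cgroup2/sock_addr_test",   /* example path */
                             O_RDONLY | O_DIRECTORY);

            if (cg_fd < 0)
                    return -1;

            /* the attach type must match the expected_attach_type given at
             * load time, otherwise the kernel rejects the attach */
            return bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_INET4_BIND,
                                   BPF_F_ALLOW_OVERRIDE);
    }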
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:02 +0000 (15:08 -0700)]
bpf: Hooks for sys_bind
== The problem ==
There is a use-case where all processes inside a cgroup should use one
single IP address on a host that has multiple IPs configured. Those
processes should use the IP for both ingress and egress, for TCP and UDP
traffic. So TCP/UDP servers should be bound to that IP to accept incoming
connections on it, and TCP/UDP clients should make outgoing connections from
that IP. It should not require changing application code since that's often
not possible.
Currently it's solved by intercepting glibc wrappers around syscalls
such as `bind(2)` and `connect(2)`. It's done by a shared library that
is preloaded for every process in a cgroup so that whenever TCP/UDP
server calls `bind(2)`, the library replaces IP in sockaddr before
passing arguments to syscall. When application calls `connect(2)` the
library transparently binds the local end of connection to that IP
(`bind(2)` with `IP_BIND_ADDRESS_NO_PORT` to avoid performance penalty).
Shared library approach is fragile though, e.g.:
* some applications clear env vars (incl. `LD_PRELOAD`);
* `/etc/ld.so.preload` doesn't help since some applications are linked
with option `-z nodefaultlib`;
* other applications don't use glibc and there is nothing to intercept.
== The solution ==
The patch provides a much more reliable in-kernel solution for the 1st
part of the problem: binding TCP/UDP servers to a desired IP. It does not
depend on the application environment or implementation details (whether
glibc is used or not).
It adds new eBPF program type `BPF_PROG_TYPE_CGROUP_SOCK_ADDR` and
attach types `BPF_CGROUP_INET4_BIND` and `BPF_CGROUP_INET6_BIND`
(similar to already existing `BPF_CGROUP_INET_SOCK_CREATE`).
The new program type is intended to be used with sockets (`struct sock`)
in a cgroup and the user-provided `struct sockaddr`. Pointers to both of
them are part of the context passed to programs of the newly added type.
The new attach types provide hooks in the `bind(2)` system call for both
IPv4 and IPv6 so that one can write a program that overrides the IP addresses
and ports a user program tries to bind to, and apply such a program to a
whole cgroup.
== Implementation notes ==
[1]
Separate attach types for `AF_INET` and `AF_INET6` are added
intentionally to prevent reading/writing to offsets that don't make
sense for corresponding socket family. E.g. if user passes `sockaddr_in`
it doesn't make sense to read from / write to `user_ip6[]` context
fields.
[2]
The write access to `struct bpf_sock_addr_kern` is implemented using a
special field as an additional "register".
There are just two registers in `sock_addr_convert_ctx_access`: `src` with
the value to write and `dst` with the pointer to the context, which can't be
changed without breaking later instructions. But the writable fields are not
available directly; to access them, the address of the corresponding pointer
has to be loaded first. To get an additional register, the first one not used
by `src` and `dst` is taken, its content is saved to
`bpf_sock_addr_kern.tmp_reg`, then the register is used to load the address
of the pointer field, and finally the register's content is restored from the
temporary field after the `src` value has been written.
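A hedged sketch of a bind4 program matching the behavior described above;
the constants, section name and selftest-style headers are illustrative
assumptions:
    #include <linux/bpf.h>
    #include "bpf_helpers.h"    /* assumed: SEC() macro */
    #include "bpf_endian.h"     /* assumed: bpf_htonl()/bpf_htons() */

    SEC("cgroup/bind4")         /* section name is an assumption */
    int bind4_prog(struct bpf_sock_addr *ctx)
    {
            /* example policy: refuse attempts to grab port 111 */
            if (ctx->user_port == bpf_htons(111))
                    return 0;   /* bind(2) fails with EPERM */

            /* otherwise rewrite whatever the application asked for to
             * 127.0.0.1:4444 (example values) */
            ctx->user_ip4  = bpf_htonl(0x7f000001);
            ctx->user_port = bpf_htons(4444);

            return 1;           /* allow bind(2) with the new values */
    }

    char _license[] SEC("license") = "GPL";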
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:01 +0000 (15:08 -0700)]
libbpf: Support expected_attach_type at prog load
Support setting `expected_attach_type` at prog load time in both
`bpf/bpf.h` and `bpf/libbpf.h`.
Since both headers already have API to load programs, new functions are
added not to break backward compatibility for existing ones:
* `bpf_load_program_xattr()` is added to `bpf/bpf.h`;
* `bpf_prog_load_xattr()` is added to `bpf/libbpf.h`.
Both new functions accept structures, `struct bpf_load_program_attr` and
`struct bpf_prog_load_attr` correspondingly, where new fields can be
added in the future w/o changing the API.
Standard `_xattr` suffix is used to name the new API functions.
Since `bpf_load_program_name()` is not used as heavily as
`bpf_load_program()`, it was removed in favor of more generic
`bpf_load_program_xattr()`.
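A hedged usage sketch of the new libbpf entry point; the object file name is
borrowed from the selftests and the include path is assumed:
    #include <linux/bpf.h>
    #include "bpf/libbpf.h"     /* tools/lib/bpf -- include path assumed */

    static int load_connect4(struct bpf_object **obj)
    {
            struct bpf_prog_load_attr attr = {
                    .file                 = "./connect4_prog.o",
                    .prog_type            = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
                    .expected_attach_type = BPF_CGROUP_INET4_CONNECT,
            };
            int prog_fd;

            /* fails if the program's context accesses or helper calls are
             * not valid for the declared attach type */
            if (bpf_prog_load_xattr(&attr, obj, &prog_fd))
                    return -1;

            return prog_fd;
    }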
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Fri, 30 Mar 2018 22:08:00 +0000 (15:08 -0700)]
bpf: Check attach type at prog load time
== The problem ==
There are use-cases when a program of some type can be attached to
multiple attach points and those attach points must have different
permissions to access context or to call helpers.
E.g. context structure may have fields for both IPv4 and IPv6 but it
doesn't make sense to read from / write to IPv6 field when attach point
is somewhere in IPv4 stack.
Same applies to BPF-helpers: it may make sense to call some helper from
some attach point, but not from other for same prog type.
== The solution ==
Introduce the `expected_attach_type` field in `struct bpf_attr` for the
`BPF_PROG_LOAD` command. If the scenario described in the "The problem" section
is the case for some prog type, the field will be checked twice:
1) At load time the prog type is checked to see whether the attach type for
it must be known to validate program permissions correctly. The prog will be
rejected with EINVAL if that's the case and `expected_attach_type` is either
not specified or has an invalid value.
2) At attach time `attach_type` is compared with `expected_attach_type`,
if the prog type requires one, and, if they differ, the attach will
be rejected with EINVAL.
The `expected_attach_type` is now available as part of `struct bpf_prog`
in both `bpf_verifier_ops->is_valid_access()` and
`bpf_verifier_ops->get_func_proto()`, and can be used to check context
accesses and calls to helpers correspondingly.
Initially the idea was discussed by Alexei Starovoitov <ast@fb.com> and
Daniel Borkmann <daniel@iogearbox.net> here:
https://marc.info/?l=linux-netdev&m=152107378717201&w=2
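A hedged sketch of the raw-syscall view of the new field: the loader fills
expected_attach_type in bpf_attr for BPF_PROG_LOAD. The instruction buffer
handling here is a placeholder, not a real program:
    #include <unistd.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static int load_cgroup_sock_addr_prog(const struct bpf_insn *insns,
                                          unsigned int insn_cnt)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.prog_type            = BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
            attr.expected_attach_type = BPF_CGROUP_INET4_BIND; /* must match attach */
            attr.insns    = (__u64)(unsigned long)insns;
            attr.insn_cnt = insn_cnt;
            attr.license  = (__u64)(unsigned long)"GPL";

            return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    }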
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tariq Toukan [Thu, 4 Jan 2018 11:09:15 +0000 (13:09 +0200)]
net/mlx5e: RX, Recycle buffer of UMR WQEs
Upon a new UMR post, check if the WQE buffer contains
a previous UMR WQE. If so, modify the dynamic fields
instead of a whole WQE overwrite. This saves a memcpy.
In current setting, after 2 WQ cycles (12 UMR posts),
this will always be the case.
No degradation sensed.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Wed, 20 Dec 2017 09:56:35 +0000 (11:56 +0200)]
net/mlx5e: Keep single pre-initialized UMR WQE per RQ
All UMR WQEs of an RQ share many common fields. We use
pre-initialized structures to save calculations in datapath.
One field (xlt_offset) was the only reason we saved a pre-initialized
copy per WQE index.
Here we remove its initialization (move its calculation to datapath),
and reduce the number of copies to one-per-RQ.
A very small datapath calculation is added; it occurs once per MPWQE
(i.e. once every 256KB), but reduces memory consumption and gives
better cache utilization.
Performance testing:
Tested packet rate, no degradation sensed.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Tue, 28 Nov 2017 09:03:37 +0000 (11:03 +0200)]
net/mlx5e: Remove page_ref bulking in Striding RQ
When many packets reside on the same page, the bulking of
page_ref modifications reduces the total number of atomic
operations executed.
Besides the necessary 2 operations on page alloc/free, we
have the following extra ops per page:
- one on WQE allocation (bump refcnt to maximum possible),
- zero ops for SKBs,
- one on WQE free,
a constant of two operations in total, no matter how many
packets/SKBs actually populate the page.
Without this bulking, we have:
- no ops on WQE allocation or free,
- one op per SKB,
Comparing the two methods when PAGE_SIZE is 4K:
- As mentioned above, bulking method always executes 2 operations,
not more, but not less.
- In the default MTU configuration (1500, stride size is 2K),
the non-bulking method execute 2 ops as well.
- For larger MTUs with stride size of 4K, non-bulking method
executes only a single op.
- For XDP (stride size of 4K, no SKBs), non-bulking method
executes no ops at all!
Hence, to optimize the flows with linear SKB and XDP over Striding RQ,
we here remove the page_ref bulking method.
Performance testing:
ConnectX-5, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.
Single core packet rate (64 bytes).
Early drop in TC: no degradation.
XDP_DROP:
before: 14,270,188 pps
after: 20,503,603 pps, 43% improvement.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Wed, 7 Feb 2018 12:46:36 +0000 (14:46 +0200)]
net/mlx5e: Support XDP over Striding RQ
Add XDP support over Striding RQ.
Now that linear SKB is supported over Striding RQ,
we can support XDP by setting stride size to PAGE_SIZE
and headroom to XDP_PACKET_HEADROOM.
Upon a MPWQE free, do not release pages that are being
XDP xmit, they will be released upon completions.
Striding RQ is capable of a higher packet-rate than
conventional RQ.
A performance gain is expected for all cases that had
a HW packet-rate bottleneck. This is the case whenever
using many flows that distribute to many cores.
Performance testing:
ConnectX-5, 24 rings, default MTU.
CQE compression ON (to reduce completions BW in PCI).
XDP_DROP packet rate:
--------------------------------------------------
| pkt size | XDP rate | 100GbE linerate | pct% |
--------------------------------------------------
| 64byte | 126.2 Mpps | 148.0 Mpps | 85% |
| 128byte | 80.0 Mpps | 84.8 Mpps | 94% |
| 256byte | 42.7 Mpps | 42.7 Mpps | 100% |
| 512byte | 23.4 Mpps | 23.4 Mpps | 100% |
--------------------------------------------------
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Tue, 12 Dec 2017 13:46:49 +0000 (15:46 +0200)]
net/mlx5e: Refactor RQ XDP_TX indication
Make the xdp_xmit indication available for Striding RQ
by taking it out of the type-specific union.
This refactor is a preparation for a downstream patch that
adds XDP support over Striding RQ.
In addition, use a bitmap instead of a boolean for possible
future flags.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Wed, 7 Feb 2018 12:41:25 +0000 (14:41 +0200)]
net/mlx5e: Use linear SKB in Striding RQ
The current Striding RQ HW feature utilizes the RX buffers so that
there is no wasted room between the strides. This maximizes
memory utilization.
It prevents the use of build_skb() (which requires headroom
and tailroom), and demands a memcpy of the packet headers into
the skb linear part.
In this patch, whenever a set of conditions holds, we apply
an RQ configuration that allows combining the use of linear SKBs
on top of a Striding RQ.
To use build_skb() with Striding RQ, the following must hold:
1. the packet does not cross a page boundary.
2. there is enough headroom and tailroom surrounding the packet.
We can satisfy 1 and 2 by configuring:
stride size = MTU + headroom + tailroom.
This is possible only when:
a. (MTU + headroom + tailroom) does not exceed PAGE_SIZE.
b. HW LRO is turned off.
Using linear SKB has many advantages:
- Saves a memcpy of the headers.
- No page-boundary checks in datapath.
- No filler CQEs.
- Significantly smaller CQ.
- SKB data continuously resides in the linear part, and is not split into a
small amount (linear part) and a large amount (fragment).
This saves datapath cycles in driver and improves utilization
of SKB fragments in GRO.
- The fragments of a resulting GRO SKB follow the IP forwarding
assumption of equal-size fragments.
Some implementation details:
HW writes the packets to the beginning of a stride,
i.e. does not keep headroom. To overcome this we make sure we can
extend backwards and use the last bytes of stride i-1.
Extra care is needed for stride 0 as it has no preceding stride.
We make sure headroom bytes are available by shifting the buffer
pointer passed to HW by headroom bytes.
This configuration now becomes default, whenever capable.
Of course, this implies turning LRO off.
Performance testing:
ConnectX-5, single core, single RX ring, default MTU.
UDP packet rate, early drop in TC layer:
--------------------------------------------
| pkt size | before | after | ratio |
--------------------------------------------
| 1500byte | 4.65 Mpps | 5.96 Mpps | 1.28x |
| 500byte | 5.23 Mpps | 5.97 Mpps | 1.14x |
| 64byte | 5.94 Mpps | 5.96 Mpps | 1.00x |
--------------------------------------------
TCP streams: ~20% gain
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Mon, 10 Jul 2017 09:52:36 +0000 (12:52 +0300)]
net/mlx5e: Use inline MTTs in UMR WQEs
When modifying the page mapping of a HW memory region
(via a UMR post), post the new values inlined in WQE,
instead of using a data pointer.
This is a micro-optimization, inline UMR WQEs of different
rings scale better in HW.
In addition, this obsoletes a few control flows and helps
delete ~50 LOC.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Sun, 7 Jan 2018 12:15:02 +0000 (14:15 +0200)]
net/mlx5e: Do not busy-wait for UMR completion in Striding RQ
Do not busy-wait a pending UMR completion. Under high HW load,
busy-waiting a delayed completion would fully utilize the CPU core
and mistakenly indicate a SW bottleneck.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Tariq Toukan [Tue, 18 Jul 2017 09:07:06 +0000 (12:07 +0300)]
net/mlx5e: Code movements in RX UMR WQE post
Gather the process of posting a UMR WQE into one function,
in preparation for a downstream patch that inlines
the WQE data.
No functional change here.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>