Felix Fietkau [Sat, 27 Jan 2018 15:02:03 +0000 (16:02 +0100)]
mt76: implement AP_LINK_PS
With software A-MPDU reordering in place, frames that notify mac80211 of
powersave changes are reordered as well, which can cause connection
stalls. Fix this by implementing powersave state processing in the
driver.
Fixes: aee5b8cf2477 ("mt76: implement A-MPDU rx reordering in the driver code")
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Kalle Valo [Thu, 1 Feb 2018 08:37:39 +0000 (10:37 +0200)]
Merge git://git./linux/kernel/git/kvalo/wireless-drivers.git
The last pull request didn't make it to v4.15 (I was too late) so pull it to
wireless-drivers-next.git instead so that it can go to v4.16.
Thomas Falcon [Mon, 29 Jan 2018 19:45:05 +0000 (13:45 -0600)]
ibmvnic: Wait for device response when changing MAC
Wait for a response from the VNIC server before exiting after setting
the MAC address. This resolves an issue with bonding a VNIC client in
ALB or TLB modes. The bonding driver was changing the MAC address more
rapidly than the device could respond, causing the following errors.
"bond0: the hw address of slave eth2 is in use by the bond;
couldn't find a slave with a free hw address to give it
(this should not have happened)"
If the function waits until the change is finalized, these errors are
avoided.
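As an illustration of this pattern (a hedged sketch with hypothetical example_ names, not the actual ibmvnic code), the request is issued and the caller then blocks on a completion that the response handler signals:

    #include <linux/completion.h>
    #include <linux/errno.h>

    /* Hypothetical adapter state; the real ibmvnic structures differ. */
    struct example_adapter {
            struct completion mac_change_done;
            int mac_change_rc;      /* filled in by the response handler */
    };

    static int example_set_mac(struct example_adapter *adapter)
    {
            reinit_completion(&adapter->mac_change_done);

            /* ... send the CHANGE_MAC_ADDR request to the VNIC server ... */

            /* Do not return until the server has answered (or we time out),
             * so back-to-back changes from the bonding driver are serialized. */
            if (!wait_for_completion_timeout(&adapter->mac_change_done, 10 * HZ))
                    return -ETIMEDOUT;

            return adapter->mac_change_rc;
    }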
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Junxiao Bi [Mon, 29 Jan 2018 09:53:42 +0000 (17:53 +0800)]
qlcnic: fix deadlock bug
The following soft lockup was caught. It is a deadlock caused by
recursive locking.
Process kworker/u40:1:28016 was holding the spin lock "mbx->queue_lock" in
qlcnic_83xx_mailbox_worker() when a softirq came in and asked for the same
spin lock in qlcnic_83xx_enqueue_mbx_cmd(). This lock should be taken with
BH disabled.
[161846.962125] NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/u40:1:28016]
[161846.962367] Modules linked in: tun ocfs2 xen_netback xen_blkback xen_gntalloc xen_gntdev xen_evtchn xenfs xen_privcmd autofs4 ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs bnx2fc fcoe libfcoe libfc sunrpc 8021q mrp garp bridge stp llc bonding dm_round_robin dm_multipath iTCO_wdt iTCO_vendor_support pcspkr sb_edac edac_core i2c_i801 shpchp lpc_ich mfd_core ioatdma ipmi_devintf ipmi_si ipmi_msghandler sg ext4 jbd2 mbcache2 sr_mod cdrom sd_mod igb i2c_algo_bit i2c_core ahci libahci megaraid_sas ixgbe dca ptp pps_core vxlan udp_tunnel ip6_udp_tunnel qla2xxx scsi_transport_fc qlcnic crc32c_intel be2iscsi bnx2i cnic uio cxgb4i cxgb4 cxgb3i libcxgbi ipv6 cxgb3 mdio libiscsi_tcp qla4xxx iscsi_boot_sysfs libiscsi scsi_transport_iscsi dm_mirror dm_region_hash dm_log dm_mod
[161846.962454]
[161846.962460] CPU: 1 PID: 28016 Comm: kworker/u40:1 Not tainted 4.1.12-94.5.9.el6uek.x86_64 #2
[161846.962463] Hardware name: Oracle Corporation SUN SERVER X4-2L /ASSY,MB,X4-2L , BIOS 26050100 09/19/2017
[161846.962489] Workqueue: qlcnic_mailbox qlcnic_83xx_mailbox_worker [qlcnic]
[161846.962493] task: ffff8801f2e34600 ti: ffff88004ca5c000 task.ti: ffff88004ca5c000
[161846.962496] RIP: e030:[<ffffffff810013aa>] [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
[161846.962506] RSP: e02b:ffff880202e43388 EFLAGS: 00000206
[161846.962509] RAX: 0000000000000000 RBX: ffff8801f6996b70 RCX: ffffffff810013aa
[161846.962511] RDX: ffff880202e433cc RSI: ffff880202e433b0 RDI: 0000000000000003
[161846.962513] RBP: ffff880202e433d0 R08: 0000000000000000 R09: ffff8801fe893200
[161846.962516] R10: ffff8801fe400538 R11: 0000000000000206 R12: ffff880202e4b000
[161846.962518] R13: 0000000000000050 R14: 0000000000000001 R15: 000000000000020d
[161846.962528] FS: 0000000000000000(0000) GS:ffff880202e40000(0000) knlGS:ffff880202e40000
[161846.962531] CS: e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[161846.962533] CR2: 0000000002612640 CR3: 00000001bb796000 CR4: 0000000000042660
[161846.962536] Stack:
[161846.962538] ffff880202e43608 0000000000000000 ffffffff813f0442 ffff880202e433b0
[161846.962543] 0000000000000000 ffff880202e433cc ffffffff00000001 0000000000000000
[161846.962547] 00000009813f03d6 ffff880202e433e0 ffffffff813f0460 ffff880202e43440
[161846.962552] Call Trace:
[161846.962555] <IRQ>
[161846.962565] [<ffffffff813f0442>] ? xen_poll_irq_timeout+0x42/0x50
[161846.962570] [<ffffffff813f0460>] xen_poll_irq+0x10/0x20
[161846.962578] [<ffffffff81014222>] xen_lock_spinning+0xe2/0x110
[161846.962583] [<ffffffff81013f01>] __raw_callee_save_xen_lock_spinning+0x11/0x20
[161846.962592] [<ffffffff816e5c57>] ? _raw_spin_lock+0x57/0x80
[161846.962609] [<ffffffffa028acfc>] qlcnic_83xx_enqueue_mbx_cmd+0x7c/0xe0 [qlcnic]
[161846.962623] [<ffffffffa028e008>] qlcnic_83xx_issue_cmd+0x58/0x210 [qlcnic]
[161846.962636] [<ffffffffa028caf2>] qlcnic_83xx_sre_macaddr_change+0x162/0x1d0 [qlcnic]
[161846.962649] [<ffffffffa028cb8b>] qlcnic_83xx_change_l2_filter+0x2b/0x30 [qlcnic]
[161846.962657] [<ffffffff8160248b>] ? __skb_flow_dissect+0x18b/0x650
[161846.962670] [<ffffffffa02856e5>] qlcnic_send_filter+0x205/0x250 [qlcnic]
[161846.962682] [<ffffffffa0285c77>] qlcnic_xmit_frame+0x547/0x7b0 [qlcnic]
[161846.962691] [<ffffffff8160ac22>] xmit_one+0x82/0x1a0
[161846.962696] [<ffffffff8160ad90>] dev_hard_start_xmit+0x50/0xa0
[161846.962701] [<ffffffff81630112>] sch_direct_xmit+0x112/0x220
[161846.962706] [<ffffffff8160b80f>] __dev_queue_xmit+0x1df/0x5e0
[161846.962710] [<ffffffff8160bc33>] dev_queue_xmit_sk+0x13/0x20
[161846.962721] [<ffffffffa0575bd5>] bond_dev_queue_xmit+0x35/0x80 [bonding]
[161846.962729] [<ffffffffa05769fb>] __bond_start_xmit+0x1cb/0x210 [bonding]
[161846.962736] [<ffffffffa0576a71>] bond_start_xmit+0x31/0x60 [bonding]
[161846.962740] [<ffffffff8160ac22>] xmit_one+0x82/0x1a0
[161846.962745] [<ffffffff8160ad90>] dev_hard_start_xmit+0x50/0xa0
[161846.962749] [<ffffffff8160bb1e>] __dev_queue_xmit+0x4ee/0x5e0
[161846.962754] [<ffffffff8160bc33>] dev_queue_xmit_sk+0x13/0x20
[161846.962760] [<ffffffffa05cfa72>] vlan_dev_hard_start_xmit+0xb2/0x150 [8021q]
[161846.962764] [<ffffffff8160ac22>] xmit_one+0x82/0x1a0
[161846.962769] [<ffffffff8160ad90>] dev_hard_start_xmit+0x50/0xa0
[161846.962773] [<ffffffff8160bb1e>] __dev_queue_xmit+0x4ee/0x5e0
[161846.962777] [<ffffffff8160bc33>] dev_queue_xmit_sk+0x13/0x20
[161846.962789] [<ffffffffa05adf74>] br_dev_queue_push_xmit+0x54/0xa0 [bridge]
[161846.962797] [<ffffffffa05ae4ff>] br_forward_finish+0x2f/0x90 [bridge]
[161846.962807] [<ffffffff810b0dad>] ? ttwu_do_wakeup+0x1d/0x100
[161846.962811] [<ffffffff815f929b>] ? __alloc_skb+0x8b/0x1f0
[161846.962818] [<ffffffffa05ae04d>] __br_forward+0x8d/0x120 [bridge]
[161846.962822] [<ffffffff815f613b>] ? __kmalloc_reserve+0x3b/0xa0
[161846.962829] [<ffffffff810be55e>] ? update_rq_runnable_avg+0xee/0x230
[161846.962836] [<ffffffffa05ae176>] br_forward+0x96/0xb0 [bridge]
[161846.962845] [<ffffffffa05af85e>] br_handle_frame_finish+0x1ae/0x420 [bridge]
[161846.962853] [<ffffffffa05afc4f>] br_handle_frame+0x17f/0x260 [bridge]
[161846.962862] [<ffffffffa05afad0>] ? br_handle_frame_finish+0x420/0x420 [bridge]
[161846.962867] [<ffffffff8160d057>] __netif_receive_skb_core+0x1f7/0x870
[161846.962872] [<ffffffff8160d6f2>] __netif_receive_skb+0x22/0x70
[161846.962877] [<ffffffff8160d913>] netif_receive_skb_internal+0x23/0x90
[161846.962884] [<ffffffffa07512ea>] ? xenvif_idx_release+0xea/0x100 [xen_netback]
[161846.962889] [<ffffffff816e5a10>] ? _raw_spin_unlock_irqrestore+0x20/0x50
[161846.962893] [<ffffffff8160e624>] netif_receive_skb_sk+0x24/0x90
[161846.962899] [<ffffffffa075269a>] xenvif_tx_submit+0x2ca/0x3f0 [xen_netback]
[161846.962906] [<ffffffffa0753f0c>] xenvif_tx_action+0x9c/0xd0 [xen_netback]
[161846.962915] [<ffffffffa07567f5>] xenvif_poll+0x35/0x70 [xen_netback]
[161846.962920] [<ffffffff8160e01b>] napi_poll+0xcb/0x1e0
[161846.962925] [<ffffffff8160e1c0>] net_rx_action+0x90/0x1c0
[161846.962931] [<ffffffff8108aaba>] __do_softirq+0x10a/0x350
[161846.962938] [<ffffffff8108ae75>] irq_exit+0x125/0x130
[161846.962943] [<ffffffff813f03a9>] xen_evtchn_do_upcall+0x39/0x50
[161846.962950] [<ffffffff816e7ffe>] xen_do_hypervisor_callback+0x1e/0x40
[161846.962952] <EOI>
[161846.962959] [<ffffffff816e5c4a>] ? _raw_spin_lock+0x4a/0x80
[161846.962964] [<ffffffff816e5b1e>] ? _raw_spin_lock_irqsave+0x1e/0xa0
[161846.962978] [<ffffffffa028e279>] ? qlcnic_83xx_mailbox_worker+0xb9/0x2a0 [qlcnic]
[161846.962991] [<ffffffff810a14e1>] ? process_one_work+0x151/0x4b0
[161846.962995] [<ffffffff8100c3f2>] ? check_events+0x12/0x20
[161846.963001] [<ffffffff810a1960>] ? worker_thread+0x120/0x480
[161846.963005] [<ffffffff816e187b>] ? __schedule+0x30b/0x890
[161846.963010] [<ffffffff810a1840>] ? process_one_work+0x4b0/0x4b0
[161846.963015] [<ffffffff810a1840>] ? process_one_work+0x4b0/0x4b0
[161846.963021] [<ffffffff810a6b3e>] ? kthread+0xce/0xf0
[161846.963025] [<ffffffff810a6a70>] ? kthread_freezable_should_stop+0x70/0x70
[161846.963031] [<ffffffff816e6522>] ? ret_from_fork+0x42/0x70
[161846.963035] [<ffffffff810a6a70>] ? kthread_freezable_should_stop+0x70/0x70
[161846.963037] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
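The deadlock pattern above is avoided by taking the queue lock with bottom
halves disabled, so the softirq transmit path cannot interrupt the holder
and spin on the same lock on the same CPU. A minimal sketch of that rule
(hypothetical names, not the driver's actual code):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_queue_lock);   /* stands in for mbx->queue_lock */

    static void example_mailbox_worker(void)
    {
            /* Disable BH so the enqueue path, which runs from softirq
             * context, cannot recurse on this lock on the same CPU. */
            spin_lock_bh(&example_queue_lock);
            /* ... dequeue and process mailbox commands ... */
            spin_unlock_bh(&example_queue_lock);
    }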
Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Li RongQing [Fri, 26 Jan 2018 08:40:41 +0000 (16:40 +0800)]
tcp: release sk_frag.page in tcp_disconnect
A socket can be disconnected and transformed back into a listening
socket. If sk_frag.page is not released, it will be cloned into a new
socket by sk_clone_lock, but the reference count of this page is not
increased, leading to a use-after-free or double-free issue.
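A minimal sketch of the release described above, assuming it is placed in
tcp_disconnect() (not necessarily the exact upstream hunk):

    /* Drop the cached page fragment on disconnect so it cannot be
     * carried over into a clone created later by sk_clone_lock(). */
    if (sk->sk_frag.page) {
            put_page(sk->sk_frag.page);
            sk->sk_frag.page = NULL;
            sk->sk_frag.offset = 0;
    }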
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tonghao Zhang [Sun, 28 Jan 2018 11:38:58 +0000 (03:38 -0800)]
ipv4: Get the address of interface correctly.
When using ioctl to get the address of an interface, we can't
get it anymore. For example, the command shown below fails:
# ifconfig eth0
Since commit 03aef17bb79b3 ("devinet_ioctl(): take copyin/copyout to
caller"), devinet_ioctl does not return a suitable value, even though
the address can be found in the kernel. Fix it now.
Fixes: 03aef17bb79b3 ("devinet_ioctl(): take copyin/copyout to caller")
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 27 Jan 2018 18:58:43 +0000 (10:58 -0800)]
net_sched: gen_estimator: fix lockdep splat
syzbot reported a lockdep splat in gen_new_estimator() /
est_fetch_counters() when attempting to lock est->stats_lock.
Since est_fetch_counters() is called from BH context from timer
interrupt, we need to block BH as well when calling it from process
context.
Most qdiscs use per cpu counters and are immune to the problem,
but net/sched/act_api.c and net/netfilter/xt_RATEEST.c are using
a spinlock to protect their data. They both call gen_new_estimator()
while the object is created and not yet alive, so this bug could
not trigger a deadlock, only a lockdep splat.
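A minimal sketch of the rule described above (hypothetical caller, not the
exact upstream change): when the counters are fetched from process context,
BH must be blocked because the same statistics lock is also taken from the
estimator timer in BH context.

    #include <linux/spinlock.h>

    static void example_fetch_counters(spinlock_t *stats_lock)
    {
            /* Block BH: stats_lock is also taken from the estimator timer
             * (BH context), so a plain spin_lock() here would be unsafe. */
            local_bh_disable();
            spin_lock(stats_lock);
            /* ... snapshot the byte/packet counters ... */
            spin_unlock(stats_lock);
            local_bh_enable();
    }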
Fixes: 1c0d32fde5bd ("net_sched: gen_estimator: complete rewrite of rate estimators")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Harini Katakam [Sat, 27 Jan 2018 06:39:01 +0000 (12:09 +0530)]
net: macb: Handle HRESP error
Handle the HRESP error by doing a SW reset of RX and TX and
re-initializing the descriptors and the RX and TX queue pointers.
Signed-off-by: Harini Katakam <harinik@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Sat, 27 Jan 2018 00:16:45 +0000 (16:16 -0800)]
net/mlx5e: IPoIB, Fix copy-paste bug in flow steering refactoring
On TTC table creation, the indirection TIRs should be used instead of
the inner indirection TIRs.
Fixes: 1ae1df3a1193 ("net/mlx5e: Refactor RSS related objects and code")
Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Shalom Lagziel <shaloml@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 27 Jan 2018 00:10:43 +0000 (16:10 -0800)]
ipv6: addrconf: break critical section in addrconf_verify_rtnl()
Heiner reported a lockdep splat [1]
This is caused by attempting GFP_KERNEL allocation while RCU lock is
held and BH blocked.
We believe that addrconf_verify_rtnl() could run for a long period,
so instead of using GFP_ATOMIC here as Ido suggested, we should break
the critical section and restart it after the allocation.
[1]
[86220.125562] =============================
[86220.125586] WARNING: suspicious RCU usage
[86220.125612] 4.15.0-rc7-next-20180110+ #7 Not tainted
[86220.125641] -----------------------------
[86220.125666] kernel/sched/core.c:6026 Illegal context switch in RCU-bh read-side critical section!
[86220.125711]
other info that might help us debug this:
[86220.125755]
rcu_scheduler_active = 2, debug_locks = 1
[86220.125792] 4 locks held by kworker/0:2/1003:
[86220.125817] #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<00000000da8e9b73>] process_one_work+0x1de/0x680
[86220.125895] #1: ((addr_chk_work).work){+.+.}, at: [<00000000da8e9b73>] process_one_work+0x1de/0x680
[86220.125959] #2: (rtnl_mutex){+.+.}, at: [<00000000b06d9510>] rtnl_lock+0x12/0x20
[86220.126017] #3: (rcu_read_lock_bh){....}, at: [<00000000aef52299>] addrconf_verify_rtnl+0x1e/0x510 [ipv6]
[86220.126111]
stack backtrace:
[86220.126142] CPU: 0 PID: 1003 Comm: kworker/0:2 Not tainted 4.15.0-rc7-next-20180110+ #7
[86220.126185] Hardware name: ZOTAC ZBOX-CI321NANO/ZBOX-CI321NANO, BIOS B246P105 06/01/2015
[86220.126250] Workqueue: ipv6_addrconf addrconf_verify_work [ipv6]
[86220.126288] Call Trace:
[86220.126312] dump_stack+0x70/0x9e
[86220.126337] lockdep_rcu_suspicious+0xce/0xf0
[86220.126365] ___might_sleep+0x1d3/0x240
[86220.126390] __might_sleep+0x45/0x80
[86220.126416] kmem_cache_alloc_trace+0x53/0x250
[86220.126458] ? ipv6_add_addr+0xfe/0x6e0 [ipv6]
[86220.126498] ipv6_add_addr+0xfe/0x6e0 [ipv6]
[86220.126538] ipv6_create_tempaddr+0x24d/0x430 [ipv6]
[86220.126580] ? ipv6_create_tempaddr+0x24d/0x430 [ipv6]
[86220.126623] addrconf_verify_rtnl+0x339/0x510 [ipv6]
[86220.126664] ? addrconf_verify_rtnl+0x339/0x510 [ipv6]
[86220.126708] addrconf_verify_work+0xe/0x20 [ipv6]
[86220.126738] process_one_work+0x258/0x680
[86220.126765] worker_thread+0x35/0x3f0
[86220.126790] kthread+0x124/0x140
[86220.126813] ? process_one_work+0x680/0x680
[86220.126839] ? kthread_create_worker_on_cpu+0x40/0x40
[86220.126869] ? umh_complete+0x40/0x40
[86220.126893] ? call_usermodehelper_exec_async+0x12a/0x160
[86220.126926] ret_from_fork+0x4b/0x60
[86220.126999] BUG: sleeping function called from invalid context at mm/slab.h:420
[86220.127041] in_atomic(): 1, irqs_disabled(): 0, pid: 1003, name: kworker/0:2
[86220.127082] 4 locks held by kworker/0:2/1003:
[86220.127107] #0: ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<00000000da8e9b73>] process_one_work+0x1de/0x680
[86220.127179] #1: ((addr_chk_work).work){+.+.}, at: [<00000000da8e9b73>] process_one_work+0x1de/0x680
[86220.127242] #2: (rtnl_mutex){+.+.}, at: [<00000000b06d9510>] rtnl_lock+0x12/0x20
[86220.127300] #3: (rcu_read_lock_bh){....}, at: [<00000000aef52299>] addrconf_verify_rtnl+0x1e/0x510 [ipv6]
[86220.127414] CPU: 0 PID: 1003 Comm: kworker/0:2 Not tainted 4.15.0-rc7-next-20180110+ #7
[86220.127463] Hardware name: ZOTAC ZBOX-CI321NANO/ZBOX-CI321NANO, BIOS B246P105 06/01/2015
[86220.127528] Workqueue: ipv6_addrconf addrconf_verify_work [ipv6]
[86220.127568] Call Trace:
[86220.127591] dump_stack+0x70/0x9e
[86220.127616] ___might_sleep+0x14d/0x240
[86220.127644] __might_sleep+0x45/0x80
[86220.127672] kmem_cache_alloc_trace+0x53/0x250
[86220.127717] ? ipv6_add_addr+0xfe/0x6e0 [ipv6]
[86220.127762] ipv6_add_addr+0xfe/0x6e0 [ipv6]
[86220.127807] ipv6_create_tempaddr+0x24d/0x430 [ipv6]
[86220.127854] ? ipv6_create_tempaddr+0x24d/0x430 [ipv6]
[86220.127903] addrconf_verify_rtnl+0x339/0x510 [ipv6]
[86220.127950] ? addrconf_verify_rtnl+0x339/0x510 [ipv6]
[86220.127998] addrconf_verify_work+0xe/0x20 [ipv6]
[86220.128032] process_one_work+0x258/0x680
[86220.128063] worker_thread+0x35/0x3f0
[86220.128091] kthread+0x124/0x140
[86220.128117] ? process_one_work+0x680/0x680
[86220.128146] ? kthread_create_worker_on_cpu+0x40/0x40
[86220.128180] ? umh_complete+0x40/0x40
[86220.128207] ? call_usermodehelper_exec_async+0x12a/0x160
[86220.128243] ret_from_fork+0x4b/0x60
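A minimal sketch of the restart pattern described in the changelog above
(hypothetical loop and helpers, not the actual addrconf code): leave the
RCU-BH critical section before a sleeping GFP_KERNEL allocation, then
re-enter and restart the scan.

    /* Hypothetical helpers standing in for the real addrconf logic. */
    extern bool example_need_new_tempaddr(void);
    extern void example_create_tempaddr(void);

    static void example_verify_addresses(void)
    {
    restart:
            rcu_read_lock_bh();
            /* ... walk the address list ... */
            if (example_need_new_tempaddr()) {
                    rcu_read_unlock_bh();
                    /* Safe to sleep here: allocation may use GFP_KERNEL. */
                    example_create_tempaddr();
                    goto restart;           /* re-scan from the top */
            }
            rcu_read_unlock_bh();
    }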
Fixes: f3d9832e56c4 ("ipv6: addrconf: cleanup locking in ipv6_add_addr")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Wang [Fri, 26 Jan 2018 19:40:17 +0000 (11:40 -0800)]
ipv6: change route cache aging logic
In the current route cache aging logic, if a route has both RTF_EXPIRE and
RTF_GATEWAY set, the route will only be removed if the neighbor cache
has no NTF_ROUTER flag. Otherwise, even if the route has expired, it
won't get deleted.
Fix this logic to always check whether the route has expired first, and
then do the gateway neighbor cache check only if the previous check
decided not to remove the exception entry.
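A minimal sketch of that ordering (hypothetical types and helpers, not the
actual fib6 code):

    #include <linux/jiffies.h>

    /* Hypothetical view of a cached route entry. */
    struct example_rt {
            bool has_expires;
            unsigned long expires;
            bool is_gateway;
    };

    extern bool example_neigh_is_router(const struct example_rt *rt);

    static bool example_should_remove(const struct example_rt *rt,
                                      unsigned long now)
    {
            /* Expiry is checked first: an expired entry is always removed. */
            if (rt->has_expires && time_after(now, rt->expires))
                    return true;

            /* Only for entries that did not expire, drop a gateway route
             * whose neighbour no longer advertises itself as a router. */
            if (rt->is_gateway && !example_neigh_is_router(rt))
                    return true;

            return false;
    }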
Fixes: 1859bac04fb6 ("ipv6: remove from fib tree aged out RTF_CACHE dst")
Signed-off-by: Wei Wang <weiwan@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Fri, 26 Jan 2018 16:54:45 +0000 (08:54 -0800)]
i40e/i40evf: Update DESC_NEEDED value to reflect larger value
When compared to ixgbe and other previous Intel drivers the i40e and i40evf
drivers actually reserve 2 additional descriptors in maybe_stop_tx for
cache line alignment. We need to update DESC_NEEDED to reflect this as
otherwise we are more likely to return TX_BUSY which will cause issues with
things like xmit_more.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Gospodarek [Fri, 26 Jan 2018 15:27:47 +0000 (10:27 -0500)]
bnxt_en: cleanup DIM work on device shutdown
Make sure to cancel any pending work that might update driver coalesce
settings when taking down an interface.
Fixes: 6a8788f25625 ("bnxt_en: add support for software dynamic interrupt moderation")
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Cc: Michael Chan <michael.chan@broadcom.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Fri, 26 Jan 2018 04:16:29 +0000 (20:16 -0800)]
net: ipv6: send unsolicited NA after DAD
Unsolicited IPv6 neighbor advertisements should be sent after DAD
completes. Update ndisc_send_unsol_na to skip tentative, non-optimistic
addresses and have those sent by addrconf_dad_completed after DAD.
Fixes: 4a6e3c5def13c ("net: ipv6: send unsolicited NA on admin up")
Reported-by: Vivek Venkatraman <vivek@cumulusnetworks.com>
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Spencer [Fri, 26 Jan 2018 03:37:50 +0000 (19:37 -0800)]
gianfar: prevent integer wrapping in the rx handler
When the frame check sequence (FCS) is split across the last two frames
of a fragmented packet, part of the FCS gets counted twice, once when
subtracting the FCS, and again when subtracting the previously received
data.
For example, if 1602 bytes are received, and the first fragment contains
the first 1600 bytes (including the first two bytes of the FCS), and the
second fragment contains the last two bytes of the FCS:
'skb->len == 1600' from the first fragment
size = lstatus & BD_LENGTH_MASK; # 1602
size -= ETH_FCS_LEN; # 1598
size -= skb->len; # -2
Since the size is unsigned, it wraps around and causes a BUG later in
the packet handling, as shown below:
kernel BUG at ./include/linux/skbuff.h:2068!
Oops: Exception in kernel mode, sig: 5 [#1]
...
NIP [c021ec60] skb_pull+0x24/0x44
LR [c01e2fbc] gfar_clean_rx_ring+0x498/0x690
Call Trace:
[df7edeb0] [c01e2c1c] gfar_clean_rx_ring+0xf8/0x690 (unreliable)
[df7edf20] [c01e33a8] gfar_poll_rx_sq+0x3c/0x9c
[df7edf40] [c023352c] net_rx_action+0x21c/0x274
[df7edf90] [c0329000] __do_softirq+0xd8/0x240
[df7edff0] [c000c108] call_do_irq+0x24/0x3c
[c0597e90] [c00041dc] do_IRQ+0x64/0xc4
[c0597eb0] [c000d920] ret_from_except+0x0/0x18
--- interrupt: 501 at arch_cpu_idle+0x24/0x5c
Change the size to a signed integer and then trim off any part of the
FCS that was received prior to the last fragment.
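A minimal sketch of the fixed arithmetic (not the exact driver hunk), using
the same quantities as the example above:

    int size;       /* signed on purpose so the subtraction may go negative */

    size = (lstatus & BD_LENGTH_MASK) - ETH_FCS_LEN;
    size -= skb->len;                       /* data already received */

    if (size > 0) {
            /* Normal case: this fragment carries new payload. */
            skb_put(skb, size);
    } else {
            /* The FCS straddled the fragment boundary: part of it was
             * already counted in skb->len, so trim it back off. */
            skb_trim(skb, skb->len + size);
    }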
Fixes: 6c389fc931bc ("gianfar: fix size of scatter-gathered frames")
Signed-off-by: Andy Spencer <aspencer@spacex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 17:42:15 +0000 (12:42 -0500)]
Merge branch 'net_sched-reflect-tx_queue_len-change-for-pfifo_fast'
Cong Wang says:
====================
net_sched: reflect tx_queue_len change for pfifo_fast
This patchset restores the pfifo_fast qdisc behavior of dropping
packets based on the latest dev->tx_queue_len. Patch 1 introduces
a helper, patch 2 introduces a new Qdisc ops which is called when
we modify tx_queue_len, patch 3 implements this ops for pfifo_fast.
Please see each patch for details.
---
v3: use skb_array_resize_multiple()
v2: handle error case for ->change_tx_queue_len()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Cong Wang [Fri, 26 Jan 2018 02:26:24 +0000 (18:26 -0800)]
net_sched: implement ->change_tx_queue_len() for pfifo_fast
pfifo_fast used to drop packets based on qdisc_dev(qdisc)->tx_queue_len,
so we have to resize the skb array when we change tx_queue_len.
Other qdiscs which read tx_queue_len are fine because they
all save it to sch->limit or somewhere else in the qdisc during init.
They don't have to implement this, but it is nicer if they do, so
that users don't have to re-configure the qdisc after changing
tx_queue_len.
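A minimal sketch of such an op for pfifo_fast, using the
skb_array_resize_multiple() helper mentioned in the cover letter (band
layout simplified; everything prefixed example_ is an assumption):

    #include <linux/skb_array.h>

    #define EXAMPLE_NUM_BANDS 3

    struct example_pfifo_fast_priv {
            struct skb_array q[EXAMPLE_NUM_BANDS];
    };

    static int example_change_tx_queue_len(struct Qdisc *sch,
                                           unsigned int new_len)
    {
            struct example_pfifo_fast_priv *priv = qdisc_priv(sch);
            struct skb_array *bands[EXAMPLE_NUM_BANDS];
            int i;

            for (i = 0; i < EXAMPLE_NUM_BANDS; i++)
                    bands[i] = &priv->q[i];

            /* Resize every band to the new length; packets that no longer
             * fit are freed by the helper's destroy callback. */
            return skb_array_resize_multiple(bands, EXAMPLE_NUM_BANDS,
                                             new_len, GFP_KERNEL);
    }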
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cong Wang [Fri, 26 Jan 2018 02:26:23 +0000 (18:26 -0800)]
net_sched: plug in qdisc ops change_tx_queue_len
Introduce a new qdisc op, ->change_tx_queue_len(), so that
each qdisc can decide how to implement this if it wants to.
Previously we simply read dev->tx_queue_len; after pfifo_fast
switches to skb array, we need this API to resize the skb array
when we change dev->tx_queue_len.
To avoid handling race conditions with TX BH, we deactivate
all TX queues before changing the value and bring them
back after we are done; this also makes the implementation easier.
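A minimal sketch of that flow (simplified, hypothetical wrapper; it assumes
an op of the shape int (*change_tx_queue_len)(struct Qdisc *, unsigned int)
and omits error unwinding):

    #include <linux/netdevice.h>

    static int example_notify_tx_queue_len(struct net_device *dev)
    {
            unsigned int i;
            int ret = 0;

            dev_deactivate(dev);    /* quiesce TX before touching the rings */

            for (i = 0; i < dev->num_tx_queues && !ret; i++) {
                    struct Qdisc *q = netdev_get_tx_queue(dev, i)->qdisc_sleeping;

                    if (q->ops->change_tx_queue_len)
                            ret = q->ops->change_tx_queue_len(q, dev->tx_queue_len);
            }

            dev_activate(dev);      /* bring TX back up */
            return ret;
    }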
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cong Wang [Fri, 26 Jan 2018 02:26:22 +0000 (18:26 -0800)]
net: introduce helper dev_change_tx_queue_len()
This patch promotes the local change_tx_queue_len() to a core
helper function, dev_change_tx_queue_len(), so that rtnetlink
and net-sysfs could share the code. This also prepares for the
following patch.
Note, the -EFAULT in the original code doesn't make sense,
we should propagate the errno from notifiers.
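A minimal sketch of what such a shared helper could look like (simplified;
the real helper may differ), propagating the notifier errno as the note
above suggests:

    #include <linux/netdevice.h>
    #include <linux/notifier.h>

    int example_dev_change_tx_queue_len(struct net_device *dev,
                                        unsigned int new_len)
    {
            unsigned int orig_len = dev->tx_queue_len;
            int res;

            if (new_len == orig_len)
                    return 0;

            dev->tx_queue_len = new_len;
            res = call_netdevice_notifiers(NETDEV_CHANGE_TX_QUEUE_LEN, dev);
            res = notifier_to_errno(res);   /* propagate, not -EFAULT */
            if (res)
                    dev->tx_queue_len = orig_len;   /* roll back on error */
            return res;
    }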
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Wang [Thu, 25 Jan 2018 14:03:52 +0000 (22:03 +0800)]
vhost_net: stop device during reset owner
We don't stop the device before resetting the owner, which means we
could try to serve a virtqueue kick before dev->worker is reset. This
results in a warning since the work is still pending on the llist while
the owner is being reset. Fix this by stopping the device during owner
reset.
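A minimal sketch of the ordering (all example_ names are hypothetical
stand-ins; the actual ioctl handler does more work):

    static long example_net_reset_owner(struct example_net *n)
    {
            long err;

            mutex_lock(&n->dev_mutex);
            /* Stop the device first so no virtqueue kick can queue new
             * work while dev->worker is being reset. */
            example_net_stop(n);
            example_net_flush(n);
            err = example_dev_reset_owner(n);
            mutex_unlock(&n->dev_mutex);
            return err;
    }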
Reported-by: syzbot+eb17c6162478cc50632c@syzkaller.appspotmail.com
Fixes: 3a4d5c94e9593 ("vhost_net: a kernel-level virtio server")
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 17:23:53 +0000 (12:23 -0500)]
Merge branch 'net-Ease-to-follow-an-interface-that-moves-to-another-netns'
Nicolas Dichtel says:
====================
net: Ease to follow an interface that moves to another netns
The goal of this series is to make it easier for the user to follow an
interface that moves to another netns.
After this series, with a patched iproute2:
$ ip netns
bar
foo
$ ip monitor link &
$ ip link set dummy0 netns foo
Deleted 5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 6e:a7:82:35:96:46 brd ff:ff:ff:ff:ff:ff new-nsid 0 new-ifindex 6
=> new nsid: 0, new ifindex: 6 (was 5 in the previous netns)
$ ip link set eth1 netns bar
Deleted 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
link/ether 52:54:01:12:34:57 brd ff:ff:ff:ff:ff:ff new-nsid 1 new-ifindex 3
=> new nsid: 1, new ifindex: 3 (same ifindex)
$ ip netns
bar (id: 1)
foo (id: 0)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Thu, 25 Jan 2018 14:01:39 +0000 (15:01 +0100)]
dev: advertise the new ifindex when the netns iface changes
The goal is to let the user follow an interface that moves to another
netns.
CC: Jiri Benc <jbenc@redhat.com>
CC: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Reviewed-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Dichtel [Thu, 25 Jan 2018 14:01:38 +0000 (15:01 +0100)]
dev: always advertise the new nsid when the netns iface changes
The user should be able to follow any interface that moves to another
netns. There is no reason to hide physical interfaces.
CC: Jiri Benc <jbenc@redhat.com>
CC: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Reviewed-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Lomovtsev [Thu, 25 Jan 2018 11:38:17 +0000 (03:38 -0800)]
net: ethernet: cavium: Correct Cavium Thunderx NIC driver names accordingly to module name
It was found that ethtool reports a non-existent module name when it
queries the specified network device for associated driver
information. The user then tries to unload that module by the provided
module name and fails.
This happens because ethtool reads the value of the DRV_NAME macro,
while the module name is defined in the driver's Makefile.
This patch corrects the Cavium CN88xx Thunder NIC driver names
(DRV_NAME macro) from 'thunder-nicvf' to 'nicvf' and from 'thunder-nic'
to 'nicpf', and syncs the bgx and xcv driver names with their
module names.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 17:02:55 +0000 (12:02 -0500)]
Merge branch 'ptr_ring-fixes'
Michael S. Tsirkin says:
====================
ptr_ring fixes
This fixes a bunch of issues around ptr_ring use in net core.
One of these: "tap: fix use-after-free" is also needed on net,
but can't be backported cleanly.
I will post a net patch separately.
Lightly tested - Jason, could you pls confirm this
addresses the security issue you saw with ptr_ring?
Testing reports would be appreciated too.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Jason Wang <jasowang@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:44 +0000 (01:36 +0200)]
tools/virtio: fix smp_mb on x86
Offset 128 overlaps the last word of the redzone.
Use 132 which is always beyond that.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:42 +0000 (01:36 +0200)]
tools/virtio: copy READ/WRITE_ONCE
This is to make ptr_ring test build again.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:40 +0000 (01:36 +0200)]
tools/virtio: more stubs to fix tools build
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:39 +0000 (01:36 +0200)]
tools/virtio: switch to __ptr_ring_empty
We don't rely on lockless guarantees, but it
seems cleaner than inverting __ptr_ring_peek.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:38 +0000 (01:36 +0200)]
ptr_ring: prevent queue load/store tearing
In theory the compiler could tear queue loads or stores in two. It does
not seem to be happening in practice, but it seems easier to convert the
cases where this would be a problem to READ/WRITE_ONCE than to worry
about it.
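For instance (sketch, not the exact upstream hunk), the producer-side
publish of an entry can be annotated so the compiler cannot split the
store:

    /* Publish the pointer with a single store; a torn store could let a
     * lockless reader observe a half-written value. */
    WRITE_ONCE(r->queue[r->producer], ptr);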
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:36 +0000 (01:36 +0200)]
skb_array: use __ptr_ring_empty
__skb_array_empty should use __ptr_ring_empty since that's the only
legal lockless function.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:35 +0000 (01:36 +0200)]
Revert "net: ptr_ring: otherwise safe empty checks can overrun array bounds"
This reverts commit bcecb4bbf88aa03171c30652bca761cf27755a6b.
If we try to allocate an extra entry as the above commit did, then when
the requested size is UINT_MAX the addition overflows, causing a zero
size to be passed to kmalloc().
kmalloc() then returns ZERO_SIZE_PTR, with a subsequent crash.
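The overflow itself is easy to see in isolation (illustrative arithmetic
only, not the ptr_ring code):

    unsigned int size = UINT_MAX;

    /* Reserving one extra slot wraps around to 0 ... */
    unsigned int slots = size + 1;                  /* == 0 */

    /* ... so the allocation request becomes zero bytes and kmalloc()
     * hands back ZERO_SIZE_PTR instead of a real array. */
    void *queue = kmalloc_array(slots, sizeof(void *), GFP_KERNEL);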
Reported-by: syzbot+87678bcf753b44c39b67@syzkaller.appspotmail.com
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:32 +0000 (01:36 +0200)]
ptr_ring: disallow lockless __ptr_ring_full
Similar to bcecb4bbf88a ("net: ptr_ring: otherwise safe empty checks can
overrun array bounds") a lockless use of __ptr_ring_full might
cause an out of bounds access.
We can fix this, but it's easier to just disallow lockless
__ptr_ring_full for now.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:31 +0000 (01:36 +0200)]
tap: fix use-after-free
Lockless access to __ptr_ring_full is only legal if the ring is
never resized; otherwise it might cause use-after-free errors.
Simply drop the lockless test; we'll drop the packet
a bit later when produce fails.
Fixes: 362899b8 ("macvtap: switch to use skb array")
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:29 +0000 (01:36 +0200)]
ptr_ring: READ/WRITE_ONCE for __ptr_ring_empty
Lockless __ptr_ring_empty requires that consumer head is read and
written at once, atomically. Annotate accordingly to make sure compiler
does it correctly. Switch locked callers to __ptr_ring_peek which does
not support the lockless operation.
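A condensed sketch of the annotated check (simplified from the ptr_ring
code, not the verbatim upstream hunk):

    static inline bool example_ring_empty(struct ptr_ring *r)
    {
            /* Read consumer_head exactly once so a concurrent consumer
             * cannot make the index and the slot test disagree. */
            return !READ_ONCE(r->queue[READ_ONCE(r->consumer_head)]);
    }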
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:28 +0000 (01:36 +0200)]
ptr_ring: clean up documentation
The only function safe to call without locks
is __ptr_ring_empty. Move documentation about
lockless use there to make sure people do not
try to use __ptr_ring_peek outside locks.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael S. Tsirkin [Thu, 25 Jan 2018 23:36:27 +0000 (01:36 +0200)]
ptr_ring: keep consumer_head valid at all times
The comment near __ptr_ring_peek says:
* If ring is never resized, and if the pointer is merely
* tested, there's no need to take the lock - see e.g. __ptr_ring_empty.
but this was in fact never possible since consumer_head would sometimes
point outside the ring. Refactor the code so that it's always
pointing within the ring.
Fixes: c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array")
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Thu, 25 Jan 2018 07:15:27 +0000 (23:15 -0800)]
ipv6: Fix SO_REUSEPORT UDP socket with implicit sk_ipv6only
If a sk_v6_rcv_saddr is !IPV6_ADDR_ANY and !IPV6_ADDR_MAPPED, it
implicitly implies it is an ipv6only socket. However, in inet6_bind(),
this addr_type checking and setting sk->sk_ipv6only to 1 are only done
after sk->sk_prot->get_port(sk, snum) has been completed successfully.
This inconsistency between sk_v6_rcv_saddr and sk_ipv6only confuses
the 'get_port()'.
In particular, when binding SO_REUSEPORT UDP sockets,
udp_reuseport_add_sock(sk,...) is called. udp_reuseport_add_sock()
checks "ipv6_only_sock(sk2) == ipv6_only_sock(sk)" before adding sk to
sk2->sk_reuseport_cb. In this case, ipv6_only_sock(sk2) could be
1 while ipv6_only_sock(sk) is still 0 here. The end result is,
reuseport_alloc(sk) is called instead of adding sk to the existing
sk2->sk_reuseport_cb.
It can be reproduced by binding two SO_REUSEPORT UDP sockets on an
IPv6 address (!ANY and !MAPPED). Only one of the sockets will
receive packets.
The fix is to set the implicit sk_ipv6only before calling get_port().
The original sk_ipv6only has to be saved such that it can be restored
in case get_port() failed. The situation is similar to the
inet_reset_saddr(sk) after get_port() has failed.
Thanks to Calvin Owens <calvinowens@fb.com> who created an easy
reproduction which leads to a fix.
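A minimal sketch of the save/set/restore described above (hypothetical
shape of the bind path, not the exact inet6_bind() hunk):

    int saved_ipv6only = sk->sk_ipv6only;

    /* Make sk_ipv6only consistent with the bound address before
     * get_port(), so reuseport grouping sees the final value. */
    if (!ipv6_addr_any(&sk->sk_v6_rcv_saddr) &&
        !(ipv6_addr_type(&sk->sk_v6_rcv_saddr) & IPV6_ADDR_MAPPED))
            sk->sk_ipv6only = 1;

    if (sk->sk_prot->get_port(sk, snum)) {
            /* Roll back on failure, same spirit as inet_reset_saddr(). */
            sk->sk_ipv6only = saved_ipv6only;
            /* ... normal bind error handling ... */
    }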
Fixes: e32ea7e74727 ("soreuseport: fast reuseport UDP socket selection")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 16:31:07 +0000 (11:31 -0500)]
Merge branch 'rtnetlink-enable-IFLA_IF_NETNSID-for-RTM_DELLINK-RTM_SETINK'
Christian Brauner says:
====================
rtnetlink: enable IFLA_IF_NETNSID for RTM_{DEL,SET}LINK
Based on the previous discussion this enables passing a IFLA_IF_NETNSID
property along with RTM_SETLINK and RTM_DELLINK requests. The patch for
RTM_NEWLINK will be sent out in a separate patch since there are more
corner-cases to think about.
Changelog 2018-01-24:
* Preserve old behavior and report -ENODEV when either ifindex or ifname is
provided and IFLA_GROUP is set. Spotted by Wolfgang Bumiller.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Brauner [Wed, 24 Jan 2018 14:26:34 +0000 (15:26 +0100)]
rtnetlink: enable IFLA_IF_NETNSID for RTM_DELLINK
- Backwards Compatibility:
If userspace wants to determine whether RTM_DELLINK supports the
IFLA_IF_NETNSID property they should first send an RTM_GETLINK request
with IFLA_IF_NETNSID on lo. If either EACCESS is returned or the reply
does not include IFLA_IF_NETNSID userspace should assume that
IFLA_IF_NETNSID is not supported on this kernel.
If the reply does contain an IFLA_IF_NETNSID property userspace
can send an RTM_DELLINK with a IFLA_IF_NETNSID property. If they receive
EOPNOTSUPP then the kernel does not support the IFLA_IF_NETNSID property
with RTM_DELLINK. Userspace should then fall back to other means.
- Security:
Callers must have CAP_NET_ADMIN in the owning user namespace of the
target network namespace.
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Brauner [Wed, 24 Jan 2018 14:26:33 +0000 (15:26 +0100)]
rtnetlink: enable IFLA_IF_NETNSID for RTM_SETLINK
- Backwards Compatibility:
If userspace wants to determine whether RTM_SETLINK supports the
IFLA_IF_NETNSID property they should first send an RTM_GETLINK request
with IFLA_IF_NETNSID on lo. If either EACCESS is returned or the reply
does not include IFLA_IF_NETNSID userspace should assume that
IFLA_IF_NETNSID is not supported on this kernel.
If the reply does contain an IFLA_IF_NETNSID property userspace
can send an RTM_SETLINK with a IFLA_IF_NETNSID property. If they receive
EOPNOTSUPP then the kernel does not support the IFLA_IF_NETNSID property
with RTM_SETLINK. Userspace should then fall back to other means.
To retain backwards compatibility the kernel will first check whether a
IFLA_NET_NS_PID or IFLA_NET_NS_FD property has been passed. If either
one is found it will be used to identify the target network namespace.
This implies that users who do not care whether their running kernel
supports IFLA_IF_NETNSID with RTM_SETLINK can pass both
IFLA_NET_NS_{FD,PID} and IFLA_IF_NETNSID referring to the same network
namespace.
- Security:
Callers must have CAP_NET_ADMIN in the owning user namespace of the
target network namespace.
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christian Brauner [Wed, 24 Jan 2018 14:26:32 +0000 (15:26 +0100)]
rtnetlink: enable IFLA_IF_NETNSID in do_setlink()
RTM_{NEW,SET}LINK already allow operations on other network namespaces
by identifying the target network namespace through IFLA_NET_NS_{FD,PID}
properties. This is done by looking for the corresponding properties in
do_setlink(). Extend do_setlink() to also look for the IFLA_IF_NETNSID
property. This introduces no functional changes since all callers of
do_setlink() currently block IFLA_IF_NETNSID by reporting an error before
they reach do_setlink().
This introduces the helpers:
  static struct net *rtnl_link_get_net_by_nlattr(struct net *src_net,
                                                 struct nlattr *tb[])
  static struct net *rtnl_link_get_net_capable(const struct sk_buff *skb,
                                               struct net *src_net,
                                               struct nlattr *tb[], int cap)
to simplify permission checks and target network namespace retrieval for
RTM_* requests that already support IFLA_NET_NS_{FD,PID} but get extended
to IFLA_IF_NETNSID. To preserve backwards compatibility the helpers look
for IFLA_NET_NS_{FD,PID} properties first before checking for
IFLA_IF_NETNSID.
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 15:14:59 +0000 (10:14 -0500)]
Merge git://git./linux/kernel/git/davem/net
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 03:00:16 +0000 (22:00 -0500)]
Merge tag 'wireless-drivers-next-for-davem-2018-01-26' of git://git./linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for 4.16
Major changes:
wil6210
* add PCI device id for Talyn
* support flashless device
ath9k
* improve RSSI/signal accuracy on AR9003 series
mt76
* validate CCMP PN from received frames to avoid replay attacks
qtnfmac
* support 64-bit network stats
* report more hardware information to kernel log and some via ethtool
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
kbuild test robot [Fri, 26 Jan 2018 17:00:39 +0000 (17:00 +0000)]
sfc: mark some unexported symbols as static
efx_default_channel_want_txqs() is only used in efx.c, while
efx_ptp_want_txqs() and efx_ptp_channel_type (a struct) are only used
in ptp.c. In all cases these symbols should be static.
Fixes: 2935e3c38228 ("sfc: on 8000 series use TX queues for TX timestamps")
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
[ecree@solarflare.com: rewrote commit message]
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 02:26:34 +0000 (21:26 -0500)]
Merge branch '40GbE' of git://git./linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2018-01-26
This series contains updates to i40e and i40evf.
Michal updates the driver to pass critical errors from the firmware to
the caller.
Patryk fixes an issue of creating multiple identical filters with the
same location, by simply moving the functions so that we remove the
existing filter and then add the new filter.
Paweł adds back in the ability to turn off offloads when VLAN is set for
the VF driver. Fixed an issue where the number of TC queue pairs was
exceeding MSI-X vectors count, causing messages about invalid TC mapping
and wrong selected Tx queue.
Alex cleans up the i40e/i40evf_set_itr_per_queue() by dropping all the
unneeded pointer chases. Puts to use the reg_idx value, which was going
unused, so that we can avoid having to compute the vector every time
throughout the driver.
Upasana enables the driver to display LLDP information on the vSphere Web
Client by exposing DCB parameters.
Alice converts our flags from 32 to 64 bit size, since we have added
more flags.
Dave implements a private ethtool flag to disable the processing of LLDP
packets by the firmware, so that the firmware will not consume LLDPDU
and cause them to be sent up the stack.
Alan adds a mechanism for detecting/storing the flag for processing of
LLDP packets by the firmware, so that its current state is persistent
across reboots/reloads of the driver.
Avinash fixes kdump with i40e due to resource constraints. We were
enabling VMDq and iWARP when we had just a single CPU, which was
starving kdump of IRQs.
Jake adds support to program the fragmented IPv4 input set PCTYPE.
Fixed the reported masks to properly report that the entire field is
masked, since we had accidentally swapped the mask values for the IPv4
addresses with the L4 port numbers.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 29 Jan 2018 02:22:46 +0000 (21:22 -0500)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2018-01-26
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) A number of extensions to tcp-bpf, from Lawrence.
- direct R or R/W access to many tcp_sock fields via bpf_sock_ops
- passing up to 3 arguments to bpf_sock_ops functions
- tcp_sock field bpf_sock_ops_cb_flags for controlling callbacks
- optionally calling bpf_sock_ops program when RTO fires
- optionally calling bpf_sock_ops program when packet is retransmitted
- optionally calling bpf_sock_ops program when TCP state changes
- access to tclass and sk_txhash
- new selftest
2) div/mod exception handling, from Daniel.
One of the ugly leftovers from the early eBPF days is that div/mod
operations based on registers have a hard-coded src_reg == 0 test
in the interpreter as well as in JIT code generators that would
return from the BPF program with exit code 0. This was basically
adopted from cBPF interpreter for historical reasons.
There are multiple reasons why this is very suboptimal and prone
to bugs. To name one: the return code mapping for such abnormal
program exit of 0 does not always match with a suitable program
type's exit code mapping. For example, '0' in tc means action 'ok'
where the packet gets passed further up the stack, which is just
undesirable for such cases (e.g. when implementing policy) and
also does not match with other program types.
After considering _four_ different ways to address the problem,
we adapt the same behavior as on some major archs like ARMv8:
X div 0 results in 0, and X mod 0 results in X. aarch64 and
aarch32 ISA do not generate any traps or otherwise aborts
of program execution for unsigned divides.
Given the options, it seems the most suitable from
all of them, also since major archs have similar schemes in
place. Given this is all in the realm of undefined behavior,
we still have the option to adapt if deemed necessary.
3) sockmap sample refactoring, from John.
4) lpm map get_next_key fixes, from Yonghong.
5) test cleanups, from Alexei and Prashant.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 28 Jan 2018 15:19:48 +0000 (10:19 -0500)]
Merge branch '10GbE' of git://git./linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
10GbE Intel Wired LAN Driver Updates 2018-01-26
This series contains updates to ixgbe and ixgbevf.
Emil updates ixgbevf to match ixgbe functionality, starting with the
consolidating of functions that represent logical steps in the receive
process so we can later update them more easily. Updated ixgbevf to
only synchronize the length of the frame, which will typically be the
MTU or smaller. Updated the VF driver to use the length of the packet
instead of the DD status bit to determine if a new descriptor is ready
to be processed, which saves on reads and we can save time on
initialization. Added support for DMA_ATTR_SKIP_CPU_SYNC/WEAK_ORDERING
to help improve performance on some platforms. Updated the VF driver to
do bulk updates of the page reference count instead of just incrementing
it by one reference at a time. Updated the VF driver to only go through
the region of the receive ring that was designated to be cleaned up,
rather than process the entire ring.
Colin Ian King adds the use of ARRAY_SIZE() on various arrays.
Miroslav Lichvar fixes an issue where ethtool was reporting timestamping
filters unsupported for X550, which is incorrect.
Paul adds support for reporting 5G link speed for some devices.
Dan Carpenter fixes a typo where && was used when it should have been
||.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Leon Romanovsky [Sun, 28 Jan 2018 13:54:38 +0000 (15:54 +0200)]
net/rocker: Remove unreachable return instruction
The "return 0" instruction follows other return instruction
and it makes it impossible to execute, hence remove it.
Fixes: 00fc0c51e35b ("rocker: Change world_ops API and implementation to be switchdev independant")
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Sat, 27 Jan 2018 01:06:23 +0000 (17:06 -0800)]
Merge branch 'fix-lpm-map'
Yonghong Song says:
====================
A kernel page fault which happens in lpm map trie_get_next_key is reported
by syzbot and Eric. The issue was introduced by commit b471f2f1de8b
("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map").
Patch #1 fixed the issue in the kernel and patch #2 adds a multithreaded
test case in tools/testing/selftests/bpf/test_lpm_map.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Fri, 26 Jan 2018 23:06:08 +0000 (15:06 -0800)]
tools/bpf: add a multithreaded stress test in bpf selftests test_lpm_map
The new test will spawn four threads, doing map update, delete, lookup
and get_next_key in parallel. It is able to reproduce the issue in the
previous commit found by syzbot and Eric Dumazet.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Fri, 26 Jan 2018 23:06:07 +0000 (15:06 -0800)]
bpf: fix kernel page fault in lpm map trie_get_next_key
Commit b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command
for LPM_TRIE map") introduces a bug like the one below:
    if (!rcu_dereference(trie->root))
            return -ENOENT;
    if (!key || key->prefixlen > trie->max_prefixlen) {
            root = &trie->root;
            goto find_leftmost;
    }
    ......
    find_leftmost:
    for (node = rcu_dereference(*root); node;) {
In the code after the find_leftmost label, it is assumed
that *root should not be NULL, but that is not true, as
it is possible that trie->root is changed to NULL by an
asynchronous delete operation.
The issue is reported by syzbot and Eric Dumazet with the
below error log:
......
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
Dumping ftrace buffer:
(ftrace buffer empty)
Modules linked in:
CPU: 1 PID: 8033 Comm: syz-executor3 Not tainted 4.15.0-rc8+ #4
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:trie_get_next_key+0x3c2/0xf10 kernel/bpf/lpm_trie.c:682
......
This patch fixes the issue by using the local rcu_dereference()d
pointer instead of *(&trie->root) later on.
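In sketch form (simplified from the snippet quoted above, not the exact
upstream hunk), the dereferenced root is captured once and only that
snapshot is walked, so a concurrent delete cannot pull it out from under
the traversal:

    struct lpm_trie_node *node, *next_node = NULL;

    /* Take one RCU snapshot of the root and walk only that snapshot. */
    node = rcu_dereference(trie->root);
    if (!node)
            return -ENOENT;

    if (!key || key->prefixlen > trie->max_prefixlen)
            goto find_leftmost;

    /* ... otherwise descend using the local 'node' pointer ... */

    find_leftmost:
            /* Never re-read trie->root here; it may have been set to NULL
             * by a concurrent delete after the check above. */
            for (; node; node = rcu_dereference(node->child[0]))
                    next_node = node;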
Fixes: b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map")
Reported-by: syzbot <syzkaller@googlegroups.com>
Reported-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Sat, 27 Jan 2018 00:42:07 +0000 (16:42 -0800)]
Merge branch 'bpf-improvements-and-fixes'
Daniel Borkmann says:
====================
This set contains a small cleanup in cBPF prologue generation and
otherwise fixes an outstanding issue related to BPF to BPF calls
and exception handling. For details please see related patches.
Last but not least, BPF selftests are extended with several new
test cases.
Thanks!
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:48 +0000 (23:33 +0100)]
bpf: add further test cases around div/mod and others
Update selftests to reflect recent changes and add various new
test cases.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:47 +0000 (23:33 +0100)]
bpf, arm: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from arm32 JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:46 +0000 (23:33 +0100)]
bpf, mips64: remove unneeded zero check from div/mod with k
The verifier in both cBPF and eBPF rejects div/mod by 0 imm,
so this can never load. Remove emitting such a test and reject
it from being JITed instead (the latter is actually also not
needed, but given the practice in sparc64 and ppc64 today, it
doesn't hurt to add it here either).
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Daney <david.daney@cavium.com>
Reviewed-by: David Daney <david.daney@cavium.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:45 +0000 (23:33 +0100)]
bpf, mips64: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from mips64 JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Daney <david.daney@cavium.com>
Reviewed-by: David Daney <david.daney@cavium.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:44 +0000 (23:33 +0100)]
bpf, sparc64: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from sparc64 JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:43 +0000 (23:33 +0100)]
bpf, ppc64: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from ppc64 JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:42 +0000 (23:33 +0100)]
bpf, s390x: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from s390x JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:41 +0000 (23:33 +0100)]
bpf, arm64: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from arm64 JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:40 +0000 (23:33 +0100)]
bpf, x86_64: remove obsolete exception handling from div/mod
Since we've changed div/mod exception handling for src_reg in
eBPF verifier itself, remove the leftovers from x86_64 JIT.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:39 +0000 (23:33 +0100)]
bpf: fix subprog verifier bypass by div/mod by 0 exception
One of the ugly leftovers from the early eBPF days is that div/mod
operations based on registers have a hard-coded src_reg == 0 test
in the interpreter as well as in JIT code generators that would
return from the BPF program with exit code 0. This was basically
adopted from cBPF interpreter for historical reasons.
There are multiple reasons why this is very suboptimal and prone
to bugs. To name one: the return code mapping for such abnormal
program exit of 0 does not always match with a suitable program
type's exit code mapping. For example, '0' in tc means action 'ok'
where the packet gets passed further up the stack, which is just
undesirable for such cases (e.g. when implementing policy) and
also does not match with other program types.
While trying to work out an exception handling scheme, I also
noticed that programs crafted like the following will currently
pass the verifier:
0: (bf) r6 = r1
1: (85) call pc+8
caller:
R6=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
callee:
frame1: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_1
10: (b4) (u32) r2 = (u32) 0
11: (b4) (u32) r3 = (u32) 1
12: (3c) (u32) r3 /= (u32) r2
13: (61) r0 = *(u32 *)(r1 +76)
14: (95) exit
returning from callee:
frame1: R0_w=pkt(id=0,off=0,r=0,imm=0)
R1=ctx(id=0,off=0,imm=0) R2_w=inv0
R3_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
R10=fp0,call_1
to caller at 2:
R0_w=pkt(id=0,off=0,r=0,imm=0) R6=ctx(id=0,off=0,imm=0)
R10=fp0,call_-1
from 14 to 2: R0=pkt(id=0,off=0,r=0,imm=0)
R6=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
2: (bf) r1 = r6
3: (61) r1 = *(u32 *)(r1 +80)
4: (bf) r2 = r0
5: (07) r2 += 8
6: (2d) if r2 > r1 goto pc+1
R0=pkt(id=0,off=0,r=8,imm=0) R1=pkt_end(id=0,off=0,imm=0)
R2=pkt(id=0,off=8,r=8,imm=0) R6=ctx(id=0,off=0,imm=0)
R10=fp0,call_-1
7: (71) r0 = *(u8 *)(r0 +0)
8: (b7) r0 = 1
9: (95) exit
from 6 to 8: safe
processed 16 insns (limit 131072), stack depth 0+0
Basically what happens is that in the subprog we make use of a
div/mod by 0 exception and in the 'normal' subprog's exit path
we just return skb->data back to the main prog. This has the
implication that the verifier thinks we always get a pkt pointer
in R0 while we still have the implicit 'return 0' from the div
as an alternative unconditional return path earlier. Thus, R0
then contains 0, meaning back in the parent prog we get the
address range of [0x0, skb->data_end] as read and writeable.
Similar can be crafted with other pointer register types.
Since i) BPF_ABS/IND is not allowed in programs that contain
BPF to BPF calls (and generally it's also disadvised to use in
native eBPF context), ii) unknown opcodes don't return zero
anymore, iii) we don't return an exception code in dead branches,
the last remaining case to fix is the div/mod
handling.
What we would really need is some infrastructure to propagate
exceptions all the way to the original prog unwinding the
current stack and returning that code to the caller of the
BPF program. In user space such exception handling for similar
runtimes is typically implemented with setjmp(3) and longjmp(3)
as one possibility which is not available in the kernel,
though (kgdb used to implement it in kernel long time ago). I
implemented a PoC exception handling mechanism into the BPF
interpreter with porting setjmp()/longjmp() into x86_64 and
adding a new internal BPF_ABRT opcode that can use a program
specific exception code for all exception cases we have (e.g.
div/mod by 0, unknown opcodes, etc). While this seems to work
in the constrained BPF environment (meaning, here, we don't
need to deal with state e.g. from memory allocations that we
would need to undo before going into exception state), it still
has various drawbacks: i) we would need to implement
setjmp()/longjmp() for every arch supported in the kernel as well
as for the x86_64, arm64 and sparc64 JITs that currently support calls,
ii) it has unconditional additional cost on main program
entry to store CPU register state in initial setjmp() call,
and we would need some way to pass the jmp_buf down into
___bpf_prog_run() for main prog and all subprogs, but also
storing it on the stack is not really nice (the other option would
be per-cpu storage for this, but that has the drawback that we
would need to disable preemption for every BPF program type).
All in all this approach would add a lot of complexity.
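For illustration, a minimal userspace C sketch of the
setjmp()/longjmp() style approach discussed above (the PoC that was
ultimately not pursued); the helper names and the exception code are
purely illustrative:
  #include <setjmp.h>
  #include <stdio.h>

  static jmp_buf unwind;

  static int run_prog(int divisor)
  {
          if (divisor == 0)
                  longjmp(unwind, 1);     /* e.g. div-by-zero exception */
          return 100 / divisor;
  }

  int main(void)
  {
          int code = setjmp(unwind);      /* unconditional cost on prog entry */

          if (code) {
                  printf("exception %d\n", code);
                  return code;
          }
          printf("result %d\n", run_prog(0));
          return 0;
  }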
Another poor-man's solution would be to have some sort of
additional shared register or scratch buffer to hold state
for exceptions, and test it after every call return in order to
chain returns and pass R0 all the way down to the BPF prog caller.
This is also problematic in various ways: i) an additional
register doesn't map well into JITs, and some other scratch
space could only be per-cpu storage, which, again, has the
side-effect that this only works when we disable preemption,
or somewhere in the input context which is not available
everywhere either, and ii) this adds significant runtime
overhead by putting conditionals after each and every call,
as well as implementation complexity.
Yet another option is to teach the verifier that div/mod can
return an integer, which however is also complex to implement,
as the verifier would need to walk such a fake 'mov r0,<code>; exit;'
sequence and there would still be no guarantee that this gets
propagated further down to the BPF caller as a proper exception
code. For the parent prog, it is also not distinguishable from a
normal return of a constant scalar value.
The approach taken here is a completely different one with
little complexity and no additional overhead involved in
that we make use of the fact that a div/mod by 0 is undefined
behavior. Instead of bailing out, we adopt the same behavior
as on some major archs like ARMv8 [0] in eBPF as well:
X div 0 results in 0, and X mod 0 results in X. The aarch64 and
aarch32 ISAs do not generate any traps or otherwise abort
program execution for unsigned divides. I verified this
also with a test program compiled by gcc and clang, and the
behavior matches the spec. Going forward we adapt the
eBPF verifier to emit such rewrites once a div/mod by register
is seen. cBPF is not touched and keeps the existing 'return 0'
semantics. Given the options, this seems the most suitable of
all of them, also since major archs have similar schemes in
place. Given this is all in the realm of undefined behavior,
we still have the option to adapt it later if deemed necessary,
and this way we would also have the option of more flexibility
from the LLVM code generation side (which is then fully visible
to the verifier). Thus, this patch i) fixes the panic seen in
the above program and ii) doesn't bypass the verifier's observations.
[0] ARM Architecture Reference Manual, ARMv8 [ARM DDI 0487B.b]
http://infocenter.arm.com/help/topic/com.arm.doc.ddi0487b.b/DDI0487B_b_armv8_arm.pdf
1) aarch64 instruction set: section C3.4.7 and C6.2.279 (UDIV)
"A division by zero results in a zero being written to
the destination register, without any indication that
the division by zero occurred."
2) aarch32 instruction set: section F1.4.8 and F5.1.263 (UDIV)
"For the SDIV and UDIV instructions, division by zero
always returns a zero result."
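For reference, a minimal C sketch of the runtime semantics that the
verifier rewrite is meant to guarantee for unsigned div/mod by a
register (this is an illustration of the semantics, not the actual
rewritten instruction sequence):
  #include <stdint.h>

  static uint64_t bpf_udiv(uint64_t x, uint64_t y)
  {
          return y ? x / y : 0;   /* X div 0 results in 0 */
  }

  static uint64_t bpf_umod(uint64_t x, uint64_t y)
  {
          return y ? x % y : x;   /* X mod 0 results in X */
  }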
Fixes: f4d7e40a5b71 ("bpf: introduce function calls (verification)")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:38 +0000 (23:33 +0100)]
bpf: make unknown opcode handling more robust
Recent findings by syzkaller, fixed in 7891a87efc71 ("bpf: arsh is
not supported in 32 bit alu thus reject it"), triggered a warning
in the interpreter due to an unknown opcode not being rejected by
the verifier. The 'return 0' for an unknown opcode is really not
optimal, since with BPF to BPF calls this would go untracked by
the verifier.
Do two things here to improve the situation: i) perform a basic insn
sanity check early on in the verification phase and reject every
non-uapi insn right there. The bpf_opcode_in_insntable() table
reuses the same mapping as the jumptable in ___bpf_prog_run() sans
the non-public mappings. And ii) in ___bpf_prog_run() we do need
to BUG in the case where the verifier would ever create an unknown
opcode due to some rewrites.
Note that JITs do not have such issues, since they punt to the
interpreter in these situations. Moreover, the BPF_JIT_ALWAYS_ON
option would also help to avoid such unknown opcodes in the first place.
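A rough userspace sketch of the early sanity-check idea: a boolean
table keyed by opcode, consulted once per instruction during
verification. Only a few entries are shown and the uapi opcode macros
are used for indexing; this is illustrative, not the kernel's actual
bpf_opcode_in_insntable():
  #include <stdbool.h>
  #include <stdint.h>
  #include <linux/bpf.h>

  static bool opcode_in_insntable(uint8_t code)
  {
          static const bool insntable[256] = {
                  [BPF_ALU64 | BPF_ADD | BPF_X] = true,
                  [BPF_ALU64 | BPF_ADD | BPF_K] = true,
                  /* ... one entry per public (uapi) opcode ... */
                  [BPF_JMP | BPF_CALL] = true,
                  [BPF_JMP | BPF_EXIT] = true,
          };

          return insntable[code];
  }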
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:37 +0000 (23:33 +0100)]
bpf: improve dead code sanitizing
Given we recently had c131187db2d3 ("bpf: fix branch pruning
logic") and 95a762e2c8c9 ("bpf: fix incorrect sign extension in
check_alu_op()") in particular, where the verifier previously
skipped verification of a wrongly assumed dead branch, we should
not just replace the dead code parts with nops (mov r0,r0). If
there is a bug such as the one fixed in 95a762e2c8c9 again in the
future, where the runtime could execute those insns, then one of
the potential issues with the current scheme would be that, given
the nops would be at the end of the program, we could execute out
of bounds at some point.
The best option in such a case would be to just exit the BPF program
altogether and return an exception code. However, given this
would require two instructions, and such a dead code gap could
be just a single insn long, we would need to place a 'r0 = X; ret'
snippet at the very end after the user program or at the start
before the program (where we'd skip that region on prog entry),
and then place unconditional ja's into the dead code gap.
While more complex but possible, there's still another roadblock
that currently prevents this, namely BPF to BPF calls. The issue
here is that such an exception could be returned from a callee,
but the caller would not know that it's an exception that needs
to be propagated further down.
An alternative with little complexity is to just use a 'ja -1'
insn for now, which traps execution there instead of silently
doing bad things if we ever get there due to a bug.
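A hedged sketch of the idea (not the verifier's exact code): every
unreachable insn gets overwritten with an unconditional 'ja -1', so
hitting a supposedly dead region loops in place rather than running
off the end of the program; insn_is_reachable() below is a
hypothetical stand-in for the verifier's reachability bookkeeping:
  #include <stdbool.h>
  #include <linux/bpf.h>

  static void sanitize_dead_code(struct bpf_insn *insns, int len,
                                 bool (*insn_is_reachable)(int idx))
  {
          const struct bpf_insn ja_minus_one = {
                  .code = BPF_JMP | BPF_JA,
                  .off  = -1,     /* jump to itself: trap in place */
          };

          for (int i = 0; i < len; i++)
                  if (!insn_is_reachable(i))
                          insns[i] = ja_minus_one;
  }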
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 26 Jan 2018 22:33:36 +0000 (23:33 +0100)]
bpf: xor of a/x in cbpf can be done in 32 bit alu
Very minor optimization; saves 1 byte per program in the x86_64
JIT's cBPF prologue.
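A hedged illustration of why this works: a 32-bit ALU xor of a
register with itself already clears the full 64-bit register (32-bit
eBPF ops zero-extend), and the x86_64 JIT can then drop the REX.W
prefix, e.g. 'xor eax,eax' (2 bytes) vs 'xor rax,rax' (3 bytes). The
register choice below is illustrative only:
  #include <linux/bpf.h>

  static const struct bpf_insn clear_64 = {       /* 64-bit alu xor */
          .code = BPF_ALU64 | BPF_XOR | BPF_X,
          .dst_reg = BPF_REG_0, .src_reg = BPF_REG_0,
  };

  static const struct bpf_insn clear_32 = {       /* 32-bit alu xor, same effect */
          .code = BPF_ALU | BPF_XOR | BPF_X,
          .dst_reg = BPF_REG_0, .src_reg = BPF_REG_0,
  };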
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Mickaël Salaün [Fri, 26 Jan 2018 00:39:30 +0000 (01:39 +0100)]
samples/bpf: Partially fixes the bpf.o build
Do not build lib/bpf/bpf.o with this Makefile but use the one from the
library directory. This avoids building a buggy bpf.o file (e.g. with
missing symbols).
This patch is useful if some code (e.g. Landlock tests) needs both the
bpf.o (from tools/lib/bpf) and the bpf_load.o (from samples/bpf).
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Lawrence Brakmo [Fri, 26 Jan 2018 20:06:07 +0000 (12:06 -0800)]
bpf: clean up from test_tcpbpf_kern.c
Removed commented lines from test_tcpbpf_kern.c
Fixes: d6d4f60c3a09 ("bpf: add selftest for tcpbpf")
Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Paweł Jabłoński [Fri, 29 Dec 2017 13:49:10 +0000 (08:49 -0500)]
i40e: Do not allow use more TC queue pairs than MSI-X vectors exist
This patch suppresses the message about an invalid TC mapping and a
wrongly selected TX queue. The root cause of this bug was setting too
many TC queue pairs on huge multiprocessor machines. When the number
of TC queue pairs exceeds the MSI-X vector count, a TX queue number
beyond the actual number of TX queues can be selected.
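A hedged, generic illustration of the constraint being enforced (not
the driver's actual code or field names): the number of queue pairs
handed to a TC is clamped to the number of MSI-X vectors available.
  static unsigned int clamp_tc_qps(unsigned int requested_qps,
                                   unsigned int msix_vectors)
  {
          /* never hand out more queue pairs than interrupt vectors */
          return requested_qps < msix_vectors ? requested_qps : msix_vectors;
  }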
Signed-off-by: Paweł Jabłoński <pawel.jablonski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Fri, 29 Dec 2017 13:48:53 +0000 (08:48 -0500)]
i40e/i40evf: Record ITR register location in the q_vector
The drivers for i40e and i40evf had a reg_idx value stored in the q_vector
that was going completely unused. I can only assume this was copied over
from ixgbe and nobody knew how to use it.
I'm going to make use of the value to avoid having to compute the vector
and thus the register index for multiple paths throughout the drivers.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 27 Dec 2017 13:26:33 +0000 (08:26 -0500)]
i40e: fix reported mask for ntuple filters
In commit 36777d9fa24c ("i40e: check current configured input set when
adding ntuple filters") some code was added to report the input set
mask for a given filter when reporting it to the user.
This code is necessary so that the reported filter correctly displays
that it is or is not masking certain fields.
Unfortunately the code was incorrect: a development error accidentally
swapped the mask values for the IPv4 addresses with those for the L4
port numbers. The port numbers are only 16 bits wide while IPv4
addresses are 32 bits, yet we assigned only 16 bits to the IPv4 address
masks. Additionally, we assigned the 32-bit value 0xFFFFFFFF to the TCP
port numbers. This second part does not matter, as the value would be
truncated to 16 bits regardless, but it is unnecessary.
Fix the reported masks to properly report that the entire field is
masked.
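For reference, a hedged sketch of fully-masked fields for a TCP/IPv4
ntuple rule as seen through the ethtool uapi, which is what the fix
makes the driver report (the helper itself is illustrative, not driver
code):
  #include <arpa/inet.h>
  #include <linux/ethtool.h>

  static void fill_full_mask(struct ethtool_rx_flow_spec *fsp)
  {
          /* 32-bit masks for the IPv4 addresses ... */
          fsp->m_u.tcp_ip4_spec.ip4src = htonl(0xFFFFFFFF);
          fsp->m_u.tcp_ip4_spec.ip4dst = htonl(0xFFFFFFFF);
          /* ... and 16-bit masks for the L4 port numbers */
          fsp->m_u.tcp_ip4_spec.psrc = htons(0xFFFF);
          fsp->m_u.tcp_ip4_spec.pdst = htons(0xFFFF);
  }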
Fixes: 36777d9fa24c ("i40e: check current configured input set when adding ntuple filters")
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 27 Dec 2017 13:25:32 +0000 (08:25 -0500)]
i40e: disallow programming multiple filters with same criteria
Our hardware does not allow situations where two filters might conflict
when matching. Essentially hardware only programs one filter for each
set of matching criteria. We don't support filters with overlapping
input sets, because each flow type can only use a single input set.
Additionally, different flow types will never have overlapping matches,
because of how the hardware parses the flow type before checking
matching criteria.
For this reason, we do not need or use the location number when
programming filters to hardware.
In order to avoid confusing scenarios with filters that match the same
criteria but program the flow to different queues, do not allow multiple
filters that match identical criteria to be programmed.
This ensures that we avoid odd scenarios when deleting filters, and when
programming new filters that match the same criteria.
Instead, users that wish to update the criteria for a filter must use
the same location id, or must delete all the matching filters first.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 27 Dec 2017 13:24:12 +0000 (08:24 -0500)]
i40e: program fragmented IPv4 filter input set
When implementing support for IP_USER_FLOW filters, we correctly
programmed a filter for both the non-fragmented IPv4/Other case as
well as the fragmented IPv4 case. However, we did not properly
program the input set for the fragmented IPv4 PCTYPE. This meant that
the filters would almost certainly not match unless the user specified
all of the flow types.
Add support to program the fragmented IPv4 filter input set. Since we
always program these filters together, we'll assume that the two input
sets must match, and will thus always program the input sets to the same
value.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Avinash Dayanand [Wed, 27 Dec 2017 13:22:11 +0000 (08:22 -0500)]
i40e: Fix kdump failure
kdump fails in the system when used in conjunction with the X722/X710
Ethernet driver. This is mainly because, when we are resource
constrained, i.e. when we have just one online CPU, we still enable
VMDq and iWARP. It doesn't make sense to enable them with just one CPU
and starve kdump for lack of IRQs.
So don't enable VMDq or iWARP when we just have a single CPU.
Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jeff Kirsher [Wed, 27 Dec 2017 13:21:03 +0000 (08:21 -0500)]
i40e: cleanup unnecessary parens
Clean up unnecessary parentheses.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Alan Brady [Wed, 27 Dec 2017 13:19:19 +0000 (08:19 -0500)]
i40e: fix FW_LLDP flag on init
The ethtool --set-priv-flags disable-fw-lldp <on/off> setting is
persistent across reboots/reloads, so we need some mechanism in the
driver to detect on init whether it is on or off, so that we can set
the ethtool private flag appropriately. Without this, every time the
driver is reloaded the flag will default to off, regardless of whether
it's on or off in the FW.
We detect this by first attempting to program DCB: if the AQ call
fails with I40E_AQ_RC_EPERM, we know that LLDP is disabled in the FW.
Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Dave Ertman [Wed, 27 Dec 2017 13:18:21 +0000 (08:18 -0500)]
i40e: Implement an ethtool private flag to stop LLDP in FW
Implement the private flag disable-fw-lldp for ethtool
to disable the processing of LLDP packets by the FW.
This will stop the FW from consuming LLDPDUs and cause
them to be sent up the stack.
The FW is also being configured to apply a default DCB
configuration on link up.
Toggling the value of this flag will also cause a PF reset.
Disabling FW DCB will also disable DCBx.
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alice Michael [Wed, 27 Dec 2017 13:17:50 +0000 (08:17 -0500)]
i40e: change flags to use 64 bits
As we have added more flags, we now need more bits and have
overflowed the 32-bit size, so make the flags field 64 bits wide.
Also change all the existing bit definitions to unsigned long long.
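A generic, hedged illustration of the pattern (the names below are
made up, not the driver's actual flags): once the flag count can
exceed 32, both the field and the bit constants must be 64-bit, e.g.
via ULL constants.
  #include <stdint.h>

  #define EXAMPLE_FLAG_MSI_ENA      (1ULL << 0)
  #define EXAMPLE_FLAG_MSIX_ENA     (1ULL << 1)
  /* ... */
  #define EXAMPLE_FLAG_HIGH_BIT     (1ULL << 40)  /* would not fit in 32 bits */

  struct example_pf {
          uint64_t flags;           /* was 32-bit; now holds all bits above */
  };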
Signed-off-by: Alice Michael <alice.michael@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Upasana Menon [Wed, 27 Dec 2017 13:17:07 +0000 (08:17 -0500)]
i40e: Display LLDP information on vSphere Web Client
This patch enables the driver to display LLDP information on the vSphere
Web Client with Intel adapters (X710, XL710) and Distributed Virtual Switch.
Signed-off-by: Upasana Menon <upasana.menon@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alexander Duyck [Wed, 27 Dec 2017 13:15:51 +0000 (08:15 -0500)]
i40e/i40evf: Use ring pointers to clean up _set_itr_per_queue
This change cleans up the i40e/i40evf_set_itr_per_queue function by
dropping all the unneeded pointer chases. Instead we can just pull out the
pointers for the Tx and Rx rings and use them throughout the function.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Paweł Jabłoński [Wed, 27 Dec 2017 12:32:32 +0000 (07:32 -0500)]
i40evf: Allow turning off offloads when the VF has VLAN set
This patch adds back the capability to turn off offloads when the VF
has a VLAN set. Commit 0a3b4f702fb1 ("i40evf: enable support for VF VLAN
tag stripping control") added the i40evf_set_features function and
changed the 'turn off' flow for offloads. This patch adds that
capability back by moving the VF VLAN option check to the next
statement.
Signed-off-by: Paweł Jabłoński <pawel.jablonski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Patryk Małek [Wed, 27 Dec 2017 12:32:31 +0000 (07:32 -0500)]
i40e: Fix for adding multiple ethtool filters on the same location
This patch reorders i40e_add_del_fdir and i40e_update_ethtool_fdir_entry
calls so that we first remove an already existing filter (inside
i40e_update_ethtool_fdir_entry using i40e_add_del_fdir) and then
we add a new one with i40e_add_del_fdir.
After applying this patch, creating multiple identical filters (with
the same location) one after another no longer reverts their behavior
and works correctly.
Signed-off-by: Patryk Małek <patryk.malek@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Michal Kosiarz [Wed, 27 Dec 2017 13:14:40 +0000 (08:14 -0500)]
i40e: Add returning AQ critical error to SW
The FW has the ability to return a critical error on any AQ command.
When this critical error occurs, we need to send the correct response
to the caller.
Signed-off-by: Michal Kosiarz <michal.kosiarz@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Fri, 12 Jan 2018 22:02:56 +0000 (14:02 -0800)]
ixgbe: don't set RXDCTL.RLPML for 82599
Commit 2de6aa3a666e ("ixgbe: Add support for padding packet") uses
RXDCTL.RLPML to limit the maximum frame size on Rx when using
build_skb. Unfortunately that register does not work on 82599.
Added an explicit check to avoid setting this register on the 82599 MAC.
Extended the comment related to the setting of RXDCTL.RLPML to better
explain its purpose.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Dan Carpenter [Thu, 11 Jan 2018 21:46:01 +0000 (00:46 +0300)]
ixgbe: Fix && vs || typo
"offset" can't be both 0x0 and 0xFFFF so presumably || was intended
instead of &&. That matches with how this check is done in other
functions.
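A hedged, generic illustration of this bug class (not the ixgbe
source): a value can never equal two different sentinels at once, so
with '&&' the condition is always false and invalid offsets are never
rejected.
  /* buggy form: the condition can never be true */
  static int check_offset_buggy(unsigned int offset)
  {
          if (offset == 0x0 && offset == 0xFFFF)
                  return -1;
          return 0;
  }

  /* intended form: either sentinel marks the offset as invalid */
  static int check_offset_fixed(unsigned int offset)
  {
          if (offset == 0x0 || offset == 0xFFFF)
                  return -1;
          return 0;
  }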
Fixes: 73834aec7199 ("ixgbe: extend firmware version support")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Paul Greenwalt [Thu, 11 Jan 2018 14:10:51 +0000 (09:10 -0500)]
ixgbe: add support for reporting 5G link speed
Since 5G link speed is supported by some devices, add reporting of 5G link
speed.
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Miroslav Lichvar [Tue, 9 Jan 2018 11:37:05 +0000 (12:37 +0100)]
ixgbe: Don't report unsupported timestamping filters for X550
The current code enables timestamping of all packets on X550 for any
filter, which means ethtool should not report any PTP-specific filters
as unsupported.
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Colin Ian King [Sun, 7 Jan 2018 23:17:51 +0000 (23:17 +0000)]
ixgbe: use ARRAY_SIZE for array sizing calculation on array buf
Use the ARRAY_SIZE macro on array buf to determine size of the array.
Improvement suggested by coccinelle.
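For illustration, the coccinelle suggestion boils down to deriving the
element count from the array itself rather than hard-coding it; the
kernel's ARRAY_SIZE() is essentially the macro below (minus extra
type-safety checks):
  #include <stdio.h>

  #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

  int main(void)
  {
          int buf[] = { 3, 5, 8, 13 };

          for (size_t i = 0; i < ARRAY_SIZE(buf); i++)  /* no hard-coded '4' */
                  printf("%d\n", buf[i]);
          return 0;
  }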
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Colin Ian King [Sun, 7 Jan 2018 14:51:46 +0000 (14:51 +0000)]
ixgbevf: use ARRAY_SIZE for various array sizing calculations
Use the ARRAY_SIZE macro on various arrays to determine
size of the arrays. Improvement suggested by coccinelle.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Mon, 11 Dec 2017 18:37:31 +0000 (10:37 -0800)]
ixgbevf: don't bother clearing tx_buffer_info in ixgbevf_clean_tx_ring()
In the case of the Tx rings we need to only clear the Tx buffer_info when
we are resetting the rings. Ideally we do this when we configure the ring
to bring it back up instead of when we are taking it down in order to avoid
dirtying pages we don't need to.
In addition, we don't need to clear the Tx descriptor ring, since we
will fully repopulate it when we begin transmitting frames, and
next_to_watch can be cleared to prevent the ring from being cleaned
beyond that point instead of needing to touch anything in the Tx
descriptor ring.
Finally, with these changes we can avoid having to reset the skb member
of the Tx buffer_info structure in the cleanup path, since the skb will
always be associated with the first buffer, which has next_to_watch set.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Linus Torvalds [Fri, 26 Jan 2018 17:03:16 +0000 (09:03 -0800)]
Merge git://git./linux/kernel/git/davem/net
Pull networking fixes from David Miller:
1) The per-network-namespace loopback device, and thus its namespace,
can have its teardown deferred for a long time if a kernel-created
TCP socket closes while the namespace is exiting. The kernel
keeps trying to finish the close sequence until it times out (which
takes quite some time).
Fix this by forcing the socket closed in this situation, from Dan
Streetman.
2) Fix regression where we're trying to invoke the update_pmtu method
on route types (in this case metadata tunnel routes) that don't
implement the dst_ops method. Fix from Nicolas Dichtel.
3) Fix long standing memory corruption issues in r8169 driver by
performing the chip statistics DMA programming more correctly. From
Francois Romieu.
4) Handle local broadcast sends over VRF routes properly, from David
Ahern.
5) Don't refire the DCCP CCID2 timer endlessly, otherwise the socket
can never be released. From Alexey Kodanev.
6) Set poll flags properly in VSOCK protocol layer, from Stefan
Hajnoczi.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
VSOCK: set POLLOUT | POLLWRNORM for TCP_CLOSING
dccp: don't restart ccid2_hc_tx_rto_expire() if sk in closed state
net: vrf: Add support for sends to local broadcast address
r8169: fix memory corruption on retrieval of hardware statistics.
net: don't call update_pmtu unconditionally
net: tcp: close sock if net namespace is exiting
Linus Torvalds [Fri, 26 Jan 2018 16:59:57 +0000 (08:59 -0800)]
Merge tag 'drm-fixes-for-v4.15-rc10-2' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"A fairly urgent nouveau regression fix for broken irqs across
suspend/resume came in. This was broken before but a patch in 4.15 has
made it much more obviously broken and now s/r fails a lot more often.
The fix removes freeing the irq across s/r which never should have
been done anyways.
Also two vc4 fixes for a NULL deference and some misrendering /
flickering on screen"
* tag 'drm-fixes-for-v4.15-rc10-2' of git://people.freedesktop.org/~airlied/linux:
drm/nouveau: Move irq setup/teardown to pci ctor/dtor
drm/vc4: Fix NULL pointer dereference in vc4_save_hang_state()
drm/vc4: Flush the caches before the bin jobs, as well.
Stefan Hajnoczi [Fri, 26 Jan 2018 11:48:25 +0000 (11:48 +0000)]
VSOCK: set POLLOUT | POLLWRNORM for TCP_CLOSING
select(2) with wfds but no rfds must return when the socket is shut down
by the peer. This way userspace notices socket activity and gets -EPIPE
from the next write(2).
Currently select(2) does not return for virtio-vsock when a SEND+RCV
shutdown packet is received. This is because vsock_poll() only sets
POLLOUT | POLLWRNORM for TCP_CLOSE, not the TCP_CLOSING state that the
socket is in when the shutdown is received.
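A hedged userspace sketch of the pattern that was breaking (sock is
assumed to be a connected socket fd): select(2) armed only with wfds
must return once the peer shuts the connection down, so the following
write(2) can fail with EPIPE.
  #include <sys/select.h>

  static void wait_writable(int sock)
  {
          fd_set wfds;

          FD_ZERO(&wfds);
          FD_SET(sock, &wfds);
          /* must also return when the peer has shut the socket down */
          select(sock + 1, NULL, &wfds, NULL, NULL);
  }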
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexey Kodanev [Fri, 26 Jan 2018 12:14:16 +0000 (15:14 +0300)]
dccp: don't restart ccid2_hc_tx_rto_expire() if sk in closed state
The ccid2_hc_tx_rto_expire() timer callback always restarts the timer
and can run indefinitely (unless it is stopped from outside), and after
commit 120e9dabaf55 ("dccp: defer ccid_hc_tx_delete() at dismantle time"),
which moved ccid_hc_tx_delete() (which also includes sk_stop_timer()) from
dccp_destroy_sock() to sk_destruct(), this started to happen quite often.
The timer prevents the socket from being released; as a result,
sk_destruct() won't be called.
Found with LTP/dccp_ipsec tests running on the bonding device,
which later couldn't be unloaded after the tests were completed:
unregister_netdevice: waiting for bond0 to become free. Usage count = 148
Fixes: 2a91aa396739 ("[DCCP] CCID2: Initial CCID2 (TCP-Like) implementation")
Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 26 Jan 2018 16:00:23 +0000 (11:00 -0500)]
Merge branch 'cxgb4-fix-dump-collection-when-firmware-crashed'
Rahul Lakkireddy says:
====================
cxgb4: fix dump collection when firmware crashed
Patch 1 resets FW_OK flag, if firmware reports error.
Patch 2 fixes incorrect condition for using firmware LDST commands.
Patch 3 fixes the dump collection logic to use backdoor register
access to collect dumps when the firmware has crashed.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Fri, 26 Jan 2018 11:35:56 +0000 (17:05 +0530)]
cxgb4: use backdoor access to collect dumps when firmware crashed
Fall back to backdoor register access to collect dumps if the firmware
has crashed. This fixes TID, SGE Queue Context, and MPS TCAM dump collection.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Fri, 26 Jan 2018 11:35:55 +0000 (17:05 +0530)]
cxgb4: fix incorrect condition for using firmware LDST commands
Only contact firmware if it's alive _AND_ if use_bd (use backdoor
access) is not set when issuing FW_LDST_CMD.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Fri, 26 Jan 2018 11:35:54 +0000 (17:05 +0530)]
cxgb4: reset FW_OK flag on firmware crash
If firmware reports error, reset FW_OK flag.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 26 Jan 2018 15:58:30 +0000 (10:58 -0500)]
Merge branch 'hns3-next'
Peng Li says:
====================
net: hns3: add support ethtool_ops.{set|get}_coalesce for VF
This patch set adds ethtool_ops.{get|set}_coalesce to the VF and
fixes one related bug.
The HNS3 PF and VF drivers use the common enet layer; as the
ethtool_ops.{get|set}_coalesce for the PF have already been
upstreamed, we just need to add the ops to hns3vf_ethtool_ops.
[Patch 1/2] fixes a related bug for the VF ethtool_ops.{set|
get}_coalesce.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Fuyun Liang [Fri, 26 Jan 2018 11:31:25 +0000 (19:31 +0800)]
net: hns3: add int_gl_idx setup for VF
Just like on the PF, if the int_gl_idx of the VF is not set, the
default interrupt coalesce index of the VF is 0, but it should be GL1
for TX queues and GL0 for RX queues.
This patch adds the int_gl_idx setup for the VF.
Fixes: 200ecda42598 ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support")
Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>