blk-mq: fix calling unplug callbacks with preempt disabled

Liu reported that running certain parts of xfstests threw the
following error:

BUG: sleeping function called from invalid context at mm/page_alloc.c:3190
in_atomic(): 1, irqs_disabled(): 0, pid: 6, name: kworker/u16:0
3 locks held by kworker/u16:0/6:
#0: ("writeback"){++++.+}, at: [<ffffffff8107f083>] process_one_work+0x173/0x730
#1: ((&(&wb->dwork)->work)){+.+.+.}, at: [<ffffffff8107f083>] process_one_work+0x173/0x730
#2: (&type->s_umount_key#44){+++++.}, at: [<ffffffff811e6805>] trylock_super+0x25/0x60
CPU: 5 PID: 6 Comm: kworker/u16:0 Tainted: G OE 4.3.0+ #3
Hardware name: Red Hat KVM, BIOS Bochs 01/01/2011
Workqueue: writeback wb_workfn (flush-btrfs-108)
ffffffff81a3abab ffff88042e282ba8 ffffffff8130191b ffffffff81a3abab
0000000000000c76 ffff88042e282ba8 ffff88042e27c180 ffff88042e282bd8
ffffffff8108ed95 ffff880400000004 0000000000000000 0000000000000c76
Call Trace:
[<ffffffff8130191b>] dump_stack+0x4f/0x74
[<ffffffff8108ed95>] ___might_sleep+0x185/0x240
[<ffffffff8108eea2>] __might_sleep+0x52/0x90
[<ffffffff811817e8>] __alloc_pages_nodemask+0x268/0x410
[<ffffffff8109a43c>] ? sched_clock_local+0x1c/0x90
[<ffffffff8109a6d1>] ? local_clock+0x21/0x40
[<ffffffff810b9eb0>] ? __lock_release+0x420/0x510
[<ffffffff810b534c>] ? __lock_acquired+0x16c/0x3c0
[<ffffffff811ca265>] alloc_pages_current+0xc5/0x210
[<ffffffffa0577105>] ? rbio_is_full+0x55/0x70 [btrfs]
[<ffffffff810b7ed8>] ? mark_held_locks+0x78/0xa0
[<ffffffff81666d50>] ? _raw_spin_unlock_irqrestore+0x40/0x60
[<ffffffffa0578c0a>] full_stripe_write+0x5a/0xc0 [btrfs]
[<ffffffffa0578ca9>] __raid56_parity_write+0x39/0x60 [btrfs]
[<ffffffffa0578deb>] run_plug+0x11b/0x140 [btrfs]
[<ffffffffa0578e33>] btrfs_raid_unplug+0x23/0x70 [btrfs]
[<ffffffff812d36c2>] blk_flush_plug_list+0x82/0x1f0
[<ffffffff812e0349>] blk_sq_make_request+0x1f9/0x740
[<ffffffff812ceba2>] ? generic_make_request_checks+0x222/0x7c0
[<ffffffff812cf264>] ? blk_queue_enter+0x124/0x310
[<ffffffff812cf1d2>] ? blk_queue_enter+0x92/0x310
[<ffffffff812d0ae2>] generic_make_request+0x172/0x2c0
[<ffffffff812d0ad4>] ? generic_make_request+0x164/0x2c0
[<ffffffff812d0ca0>] submit_bio+0x70/0x140
[<ffffffffa0577b29>] ? rbio_add_io_page+0x99/0x150 [btrfs]
[<ffffffffa0578a89>] finish_rmw+0x4d9/0x600 [btrfs]
[<ffffffffa0578c4c>] full_stripe_write+0x9c/0xc0 [btrfs]
[<ffffffffa057ab7f>] raid56_parity_write+0xef/0x160 [btrfs]
[<ffffffffa052bd83>] btrfs_map_bio+0xe3/0x2d0 [btrfs]
[<ffffffffa04fbd6d>] btrfs_submit_bio_hook+0x8d/0x1d0 [btrfs]
[<ffffffffa05173c4>] submit_one_bio+0x74/0xb0 [btrfs]
[<ffffffffa0517f55>] submit_extent_page+0xe5/0x1c0 [btrfs]
[<ffffffffa0519b18>] __extent_writepage_io+0x408/0x4c0 [btrfs]
[<ffffffffa05179c0>] ? alloc_dummy_extent_buffer+0x140/0x140 [btrfs]
[<ffffffffa051dc88>] __extent_writepage+0x218/0x3a0 [btrfs]
[<ffffffff810b7ed8>] ? mark_held_locks+0x78/0xa0
[<ffffffffa051e2c9>] extent_write_cache_pages.clone.0+0x2f9/0x400 [btrfs]
[<ffffffffa051e422>] extent_writepages+0x52/0x70 [btrfs]
[<ffffffffa05001f0>] ? btrfs_set_inode_index+0x70/0x70 [btrfs]
[<ffffffffa04fcc17>] btrfs_writepages+0x27/0x30 [btrfs]
[<ffffffff81184df3>] do_writepages+0x23/0x40
[<ffffffff81212229>] __writeback_single_inode+0x89/0x4d0
[<ffffffff81212a60>] ? writeback_sb_inodes+0x260/0x480
[<ffffffff81212a60>] ? writeback_sb_inodes+0x260/0x480
[<ffffffff8121295f>] ? writeback_sb_inodes+0x15f/0x480
[<ffffffff81212ad2>] writeback_sb_inodes+0x2d2/0x480
[<ffffffff810b1397>] ? down_read_trylock+0x57/0x60
[<ffffffff811e6805>] ? trylock_super+0x25/0x60
[<ffffffff810d629f>] ? rcu_read_lock_sched_held+0x4f/0x90
[<ffffffff81212d0c>] __writeback_inodes_wb+0x8c/0xc0
[<ffffffff812130b5>] wb_writeback+0x2b5/0x500
[<ffffffff810b7ed8>] ? mark_held_locks+0x78/0xa0
[<ffffffff810660a8>] ? __local_bh_enable_ip+0x68/0xc0
[<ffffffff81213362>] ? wb_do_writeback+0x62/0x310
[<ffffffff812133c1>] wb_do_writeback+0xc1/0x310
[<ffffffff8107c3d9>] ? set_worker_desc+0x79/0x90
[<ffffffff81213842>] wb_workfn+0x92/0x330
[<ffffffff8107f133>] process_one_work+0x223/0x730
[<ffffffff8107f083>] ? process_one_work+0x173/0x730
[<ffffffff8108035f>] ? worker_thread+0x18f/0x430
[<ffffffff810802ed>] worker_thread+0x11d/0x430
[<ffffffff810801d0>] ? maybe_create_worker+0xf0/0xf0
[<ffffffff810801d0>] ? maybe_create_worker+0xf0/0xf0
[<ffffffff810858df>] kthread+0xef/0x110
[<ffffffff8108f74e>] ? schedule_tail+0x1e/0xd0
[<ffffffff810857f0>] ? __init_kthread_worker+0x70/0x70
[<ffffffff816673bf>] ret_from_fork+0x3f/0x70
[<ffffffff810857f0>] ? __init_kthread_worker+0x70/0x70

The issue is that we've got the software context pinned while
calling blk_flush_plug_list(), which flushes callbacks that
are allowed to sleep. btrfs and raid have such callbacks.
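
For context, the pinning comes from how the software queue is looked
up: blk_mq_get_ctx() maps the request to a per-CPU ctx via get_cpu(),
which disables preemption until the matching blk_mq_put_ctx(). A
simplified sketch of those helpers (trimmed from block/blk-mq.h of
this era, for illustration only):

    static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
    {
            /* get_cpu() disables preemption until blk_mq_put_ctx() */
            return __blk_mq_get_ctx(q, get_cpu());
    }

    static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
    {
            put_cpu();      /* re-enables preemption */
    }

So anything that can sleep, such as the unplug callbacks run by
blk_flush_plug_list(), has to happen after blk_mq_put_ctx().
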
Flip the checks around a bit, so we can re-enable preemption a bit
earlier and flush plugs without having preempt disabled.
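
Concretely, the plugged path in blk_sq_make_request() now drops the
ctx before flushing; a sketch of the reordered code (illustrative,
not the verbatim diff):

    if (plug) {
            blk_mq_bio_to_request(rq, bio);
            if (!request_count)
                    trace_block_plug(q);

            /* Drop the ctx first, re-enabling preemption ... */
            blk_mq_put_ctx(data.ctx);

            /* ... so the flush below may run callbacks that sleep. */
            if (request_count >= BLK_MAX_REQUEST_COUNT) {
                    blk_flush_plug_list(plug, false);
                    trace_block_plug(q);
            }

            list_add_tail(&rq->queuelist, &plug->mq_list);
            return cookie;
    }

Adding the request to the plug list after re-enabling preemption is
fine, since the plug is per-task (current->plug) rather than per-CPU.
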
This only affects blk-mq driven devices, and only those that
register a single queue.

Reported-by: Liu Bo <bo.li.liu@oracle.com>
Tested-by: Liu Bo <bo.li.liu@oracle.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>