Keith Busch [Thu, 24 May 2018 22:16:04 +0000 (16:16 -0600)]
nvme-pci: Fix AER reset handling
The nvme timeout handling doesn't do anything if the pci channel is
offline, which is the case when recovering from PCI error event, so it
was a bad idea to sync the controller reset in this state. This patch
flushes the reset work in the error_resume callback instead, once the
channel is back online. This keeps AER handling serialized and allows
recovery from timeouts.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=199757
Fixes: cc1d5e749a2e ("nvme/pci: Sync controller reset for AER slot_reset")
Reported-by: Alex Gagniuc <mr.nuke.me@gmail.com>
Tested-by: Alex Gagniuc <mr.nuke.me@gmail.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jianchao Wang [Thu, 24 May 2018 09:51:33 +0000 (17:51 +0800)]
nvme-pci: set nvmeq->cq_vector after alloc cq/sq
Set cq_vector after allocating the cq/sq; otherwise, if the create CQ/SQ
command times out, nvme_suspend_queue will invoke free_irq for it and
trigger a 'Trying to free already-free IRQ xxx' warning.
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
[hch: fixed to pass a s16 and clean up the comment]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Ivan Bornyakov [Wed, 23 May 2018 14:56:11 +0000 (17:56 +0300)]
nvme: host: core: fix precedence of ternary operator
The ternary operator has lower precedence than bitwise OR, so 'cdw10' was
calculated incorrectly.
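For illustration only (made-up values, not the driver's actual code), the two groupings differ like this:
/* '?:' binds less tightly than '|', so the unparenthesized form groups
 * as ((base | flag) ? 2 : 0) instead of the intended (base | (flag ? 2 : 0)). */
unsigned int base = 0x100, flag = 1;
unsigned int buggy = base | flag ? 2 : 0;    /* == 2, whole OR became the condition */
unsigned int fixed = base | (flag ? 2 : 0);  /* == 0x102, the intended value */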
Signed-off-by: Ivan Bornyakov <brnkv.i1@gmail.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Johannes Thumshirn [Thu, 17 May 2018 11:52:50 +0000 (13:52 +0200)]
nvme: fix lockdep warning in nvme_mpath_clear_current_path
When running blktest's nvme/005 with a lockdep enabled kernel the test
case fails due to the following lockdep splat in dmesg:
=============================
WARNING: suspicious RCU usage
4.17.0-rc5 #881 Not tainted
-----------------------------
drivers/nvme/host/nvme.h:457 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
3 locks held by kworker/u32:5/1102:
#0: (ptrval) ((wq_completion)"nvme-wq"){+.+.}, at: process_one_work+0x152/0x5c0
#1: (ptrval) ((work_completion)(&ctrl->scan_work)){+.+.}, at: process_one_work+0x152/0x5c0
#2: (ptrval) (&subsys->lock#2){+.+.}, at: nvme_ns_remove+0x43/0x1c0 [nvme_core]
The only caller of nvme_mpath_clear_current_path() is nvme_ns_remove(),
which holds the subsys lock, so it's likely a false positive; but by
using rcu_access_pointer(), we're telling RCU and lockdep that we're
only after the pointer value.
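A minimal sketch of the distinction, with illustrative field names rather than the exact nvme code:
/* rcu_dereference_check() assumes the pointer will be dereferenced and
 * therefore wants an RCU read lock (or a documented lock) held.  When we
 * only compare or clear the pointer value, rcu_access_pointer() is
 * sufficient and does not trip the lockdep check. */
if (rcu_access_pointer(head->current_path) == ns)
	rcu_assign_pointer(head->current_path, NULL);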
Fixes: 32acab3181c7 ("nvme: implement multipath access to nvme subsystems")
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Suggested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Bart Van Assche [Tue, 22 May 2018 15:27:22 +0000 (08:27 -0700)]
blkdev_report_zones_ioctl(): Use vmalloc() to allocate large buffers
Avoid that complaints similar to the following appear in the kernel log
if the number of zones is sufficiently large:
fio: page allocation failure: order:9, mode:0x140c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
Call Trace:
dump_stack+0x63/0x88
warn_alloc+0xf5/0x190
__alloc_pages_slowpath+0x8f0/0xb0d
__alloc_pages_nodemask+0x242/0x260
alloc_pages_current+0x6a/0xb0
kmalloc_order+0x18/0x50
kmalloc_order_trace+0x26/0xb0
__kmalloc+0x20e/0x220
blkdev_report_zones_ioctl+0xa5/0x1a0
blkdev_ioctl+0x1ba/0x930
block_ioctl+0x41/0x50
do_vfs_ioctl+0xaa/0x610
SyS_ioctl+0x79/0x90
do_syscall_64+0x79/0x1b0
entry_SYSCALL_64_after_hwframe+0x3d/0xa2
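As a hedged sketch of the general pattern (the patch itself may simply fall back to vmalloc() directly), kvmalloc() tries a kmalloc() first and transparently falls back to vmalloc() for large sizes, avoiding high-order allocation failures like the one above:
/* Sketch only: nr_zones is a stand-in for the user-requested count. */
struct blk_zone *zones;

zones = kvmalloc(nr_zones * sizeof(*zones), GFP_KERNEL | __GFP_ZERO);
if (!zones)
	return -ENOMEM;
/* ... fill in the report and copy it to userspace ... */
kvfree(zones);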
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Shaun Tancheff <shaun.tancheff@seagate.com>
Cc: Damien Le Moal <damien.lemoal@hgst.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dan Melnic [Mon, 18 Sep 2017 20:08:51 +0000 (13:08 -0700)]
block/nbd: add WQ_UNBOUND to the knbd-recv workqueue
Add WQ_UNBOUND to the knbd-recv workqueue so we're not bound
to a single CPU that is selected at device creation time.
Signed-off-by: Dan Melnic <dmm@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
huhai [Tue, 22 May 2018 09:39:34 +0000 (17:39 +0800)]
blk-mq: remove wrong 'unlikely' check
When dispatch_rq_from_ctx is called, the ctx->rq_list is not empty in
the vast majority of cases, so the unlikely() annotation on that check
is wrong.
Signed-off-by: huhai <huhai@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 21 May 2018 14:41:52 +0000 (08:41 -0600)]
nvme-pci: fix race between poll and IRQ completions
If polling completions are racing with the IRQ triggered by a
completion, the IRQ handler will find no work and return IRQ_NONE.
This can trigger complaints about spurious interrupts:
[ 560.169153] irq 630: nobody cared (try booting with the "irqpoll" option)
[ 560.175988] CPU: 40 PID: 0 Comm: swapper/40 Not tainted 4.17.0-rc2+ #65
[ 560.175990] Hardware name: Intel Corporation S2600STB/S2600STB, BIOS SE5C620.86B.00.01.0010.010920180151 01/09/2018
[ 560.175991] Call Trace:
[ 560.175994] <IRQ>
[ 560.176005] dump_stack+0x5c/0x7b
[ 560.176010] __report_bad_irq+0x30/0xc0
[ 560.176013] note_interrupt+0x235/0x280
[ 560.176020] handle_irq_event_percpu+0x51/0x70
[ 560.176023] handle_irq_event+0x27/0x50
[ 560.176026] handle_edge_irq+0x6d/0x180
[ 560.176031] handle_irq+0xa5/0x110
[ 560.176036] do_IRQ+0x41/0xc0
[ 560.176042] common_interrupt+0xf/0xf
[ 560.176043] </IRQ>
[ 560.176050] RIP: 0010:cpuidle_enter_state+0x9b/0x2b0
[ 560.176052] RSP: 0018:ffffa0ed4659fe98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdd
[ 560.176055] RAX: ffff9527beb20a80 RBX: 000000826caee491 RCX: 000000000000001f
[ 560.176056] RDX: 000000826caee491 RSI: 00000000335206ee RDI: 0000000000000000
[ 560.176057] RBP: 0000000000000001 R08: 00000000ffffffff R09: 0000000000000008
[ 560.176059] R10: ffffa0ed4659fe78 R11: 0000000000000001 R12: ffff9527beb29358
[ 560.176060] R13: ffffffffa235d4b8 R14: 0000000000000000 R15: 000000826caed593
[ 560.176065] ? cpuidle_enter_state+0x8b/0x2b0
[ 560.176071] do_idle+0x1f4/0x260
[ 560.176075] cpu_startup_entry+0x6f/0x80
[ 560.176080] start_secondary+0x184/0x1d0
[ 560.176085] secondary_startup_64+0xa5/0xb0
[ 560.176088] handlers:
[ 560.178387] [<00000000efb612be>] nvme_irq [nvme]
[ 560.183019] Disabling IRQ #630
A previous commit removed ->cqe_seen, which handled this case, but we
need to handle it a bit differently now that completions run outside
the queue lock. Return IRQ_HANDLED from the IRQ handler if the
completion ring head has moved since we last saw it.
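A sketch of that idea (simplified; field and helper names are approximate, not the exact patch):
static irqreturn_t nvme_irq(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;
	irqreturn_t ret = IRQ_NONE;
	u16 start, end;

	spin_lock(&nvmeq->cq_lock);
	/* If the CQ head moved since the last interrupt, someone (possibly
	 * the polling path) made progress, so claim the IRQ even if there
	 * is nothing left for us to reap. */
	if (nvmeq->cq_head != nvmeq->last_cq_head)
		ret = IRQ_HANDLED;
	nvme_process_cq(nvmeq, &start, &end, -1);
	nvmeq->last_cq_head = nvmeq->cq_head;
	spin_unlock(&nvmeq->cq_lock);

	if (start != end) {
		nvme_complete_cqes(nvmeq, start, end);
		return IRQ_HANDLED;
	}
	return ret;
}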
Fixes: 5cb525c8315f ("nvme-pci: handle completions outside of the queue lock")
Reported-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 21 May 2018 14:33:37 +0000 (08:33 -0600)]
Merge branch 'nvme-4.18' of git://git.infradead.org/nvme into for-4.18/block
Pull NVMe changes from Keith:
"This is just the first nvme pull request for 4.18. There are several
fabrics and target patches that I missed, so there will be more to
come."
* 'nvme-4.18' of git://git.infradead.org/nvme:
nvme-pci: drop IRQ disabling on submission queue lock
nvme-pci: split the nvme queue lock into submission and completion locks
nvme-pci: handle completions outside of the queue lock
nvme-pci: move ->cq_vector == -1 check outside of ->q_lock
nvme-pci: remove cq check after submission
nvme-pci: simplify nvme_cqe_valid
nvme: mark the result argument to nvme_complete_async_event volatile
nvme/pci: Sync controller reset for AER slot_reset
nvme/pci: Hold controller reference during async probe
nvme: only reconfigure discard if necessary
nvme/pci: Use async_schedule for initial reset work
nvme: lightnvm: add granby support
NVMe: Add Quirk Delay before CHK RDY for Seagate Nytro Flash Storage
nvme: change order of qid and cmdid in completion trace
nvme: fc: provide a descriptive error
Jens Axboe [Thu, 17 May 2018 16:31:52 +0000 (18:31 +0200)]
nvme-pci: drop IRQ disabling on submission queue lock
Since we aren't sharing the lock for completions now, we don't
have to make it IRQ safe.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Thu, 17 May 2018 16:31:51 +0000 (18:31 +0200)]
nvme-pci: split the nvme queue lock into submission and completion locks
This is now feasible. We protect the submission queue ring with
->sq_lock, and the completion side with ->cq_lock.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Thu, 17 May 2018 16:31:50 +0000 (18:31 +0200)]
nvme-pci: handle completions outside of the queue lock
Split the completion of events into a two-part process:
1) Reap the events inside the queue lock
2) Complete the events outside the queue lock
Since we never wrap the queue, we can access it locklessly after we've
updated the completion queue head. This patch started off with batching
events on the stack, but with this trick we don't have to. Keith Busch
<keith.busch@intel.com> came up with that idea.
Note that this kills the ->cqe_seen as well. I haven't been able to
trigger any ill effects of this. If we do race with polling every so
often, it should be rare enough NOT to trigger any issues.
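Roughly, with approximate helper names, the two phases look like:
/* Phase 1: under the queue lock, only advance the CQ head ("reap"). */
spin_lock_irq(&nvmeq->q_lock);
nvme_process_cq(nvmeq, &start, &end, -1);
spin_unlock_irq(&nvmeq->q_lock);

/* Phase 2: complete the reaped entries locklessly; the queue never
 * wraps past the updated head, so the entries stay valid. */
nvme_complete_cqes(nvmeq, start, end);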
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Keith Busch <keith.busch@intel.com>
[hch: refactored, restored poll early exit optimization]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Thu, 17 May 2018 16:31:49 +0000 (18:31 +0200)]
nvme-pci: move ->cq_vector == -1 check outside of ->q_lock
We only clear it dynamically in nvme_suspend_queue(). When we do, ensure
to do a full flush so that any nvme_queue_rq() invocation will see it.
Ideally we'd kill this check completely, but we're using it to flush
requests on a dying queue.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Thu, 17 May 2018 16:31:48 +0000 (18:31 +0200)]
nvme-pci: remove cq check after submission
We always check the completion queue after submitting, but in my testing
this isn't a win even on DRAM/xpoint devices. In some cases it's
actually worse. Kill it.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Fri, 18 May 2018 14:37:04 +0000 (08:37 -0600)]
nvme-pci: simplify nvme_cqe_valid
We always look at the current CQ head and phase, so don't pass these
as separate arguments, and rename the function to nvme_cqe_pending.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 17 May 2018 16:31:46 +0000 (18:31 +0200)]
nvme: mark the result argument to nvme_complete_async_event volatile
We'll need that in the PCIe driver soon, as we'll read it straight off
the CQ.
Signed-off-by: Christoph Hellwig <hch@lst.de>
huhai [Fri, 18 May 2018 14:32:30 +0000 (08:32 -0600)]
blk-mq: clear hctx->dispatch_from when mappings change
When the number of hardware queues is changed, the drivers will call
blk_mq_update_nr_hw_queues() to remap hardware queues. This changes
the ctx mappings, but the current code doesn't clear the
->dispatch_from hint. This can result in dispatch_from pointing to
a ctx that isn't mapped to the hctx anymore.
Fixes: b347689ffbca ("blk-mq-sched: improve dispatching from sw queue")
Signed-off-by: huhai <huhai@kylinos.cn>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Moved the placement of the clearing to where we clear other items
pertaining to the existing mapping, added Fixes line, and reworded
the commit message.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:51:22 +0000 (14:51 -0400)]
nbd: call nbd_bdev_reset instead of bd_set_size on disconnect
We need to make sure we don't just set the size of the bdev to 0 while
it's being used by a file system. We have the appropriate check in
nbd_bdev_reset, simply use that helper instead.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:51:21 +0000 (14:51 -0400)]
nbd: fix how we set bd_invalidated
bd_invalidated is kind of a pain wrt partitions, as it really only
triggers the partition rescan if it is set after bd_ops->open() runs,
so setting it when we reset the device isn't useful. We also would
sporadically still have partitions left over in some disconnect cases.
Fix this by always setting bd_invalidated on open if there's no
configuration or if a disconnect action has happened; that way the
partition table gets invalidated and rescanned properly.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:51:20 +0000 (14:51 -0400)]
nbd: clear_sock on netlink disconnect
This is what the ioctl based nbd disconnect does as well. Without this
the device will just sit there and wait for the connection to go away
(or IO to occur) before the device gets torn down. Instead clear
everything up on our end so the configuration goes away as quickly as
possible.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:51:19 +0000 (14:51 -0400)]
nbd: use bd_set_size when updating disk size
When we stopped relying on the bdev everywhere I broke updating the
block device size on the fly, which ceph relies on. We can't just do
set_capacity, we also have to do bd_set_size so things like parted will
notice the device size change.
Fixes: 29eaadc ("nbd: stop using the bdev everywhere")
cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:51:18 +0000 (14:51 -0400)]
nbd: update size when connected
I messed up changing the size of an NBD device while it was connected by
not actually updating the device or doing the uevent. Fix this by
updating everything if we're connected and we change the size.
cc: stable@vger.kernel.org
Fixes: 639812a ("nbd: don't set the device size until we're connected")
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:51:17 +0000 (14:51 -0400)]
nbd: fix nbd device deletion
This fixes a use-after-free bug: we shouldn't be touching disk->queue
right after we do del_gendisk(disk). Save the queue and do the cleanup
after the del_gendisk.
Fixes: c6a4759ea0c9 ("nbd: add device refcounting")
cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Wed, 16 May 2018 18:36:01 +0000 (14:36 -0400)]
block: fix MAINTAINERS email for nbd
I've been missing stuff because it's been going into my work email which
is a black hole. Update to the email I actually use so I stop missing
patches and bug reports.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
huhai [Wed, 16 May 2018 14:21:21 +0000 (08:21 -0600)]
blk-mq: remove redundant insert case in blk_mq_make_request()
We can use blk_mq_sched_insert_request() even if we don't have
an IO scheduler attached, since that case will end up being
exactly the same as what blk_mq_queue_io() does now.
Signed-off-by: huhai <huhai@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Tue, 15 May 2018 19:54:11 +0000 (13:54 -0600)]
Remove jsflash driver
Nobody is using it anymore, and it's been abandoned. Since David
is fine with removing it, kill it.
Suggested-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:58 +0000 (21:33 -0400)]
block: Add sysfs entry for fua support
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:57 +0000 (21:33 -0400)]
block: Export bio check/set pages_dirty
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:56 +0000 (21:33 -0400)]
block: Add warning for bi_next not NULL in bio_endio()
Recently found a bug where a driver left bi_next not NULL and then
called bio_endio(), and then the submitter of the bio used
bio_copy_data() which was treating src and dst as lists of bios.
Fixed that bug by splitting out bio_list_copy_data(), but in case other
things are depending on bi_next in weird ways, add a warning to help
avoid more bugs like that in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:55 +0000 (21:33 -0400)]
block: Add missing flush_dcache_page() call
Since a bio can point to userspace pages (e.g. direct IO), this is
generally necessary.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:54 +0000 (21:33 -0400)]
block: Split out bio_list_copy_data()
Found a bug (with ASAN) where we were passing a bio to bio_copy_data()
with bi_next not NULL when it should have been NULL - a driver had left
bi_next set to something after calling bio_endio().
Since the normal case is only copying single bios, split out
bio_list_copy_data() to avoid more bugs like this in the future.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:53 +0000 (21:33 -0400)]
block: Add bio_copy_data_iter(), zero_fill_bio_iter()
Add versions that take bvec_iter args instead of using bio->bi_iter - to
be used by bcachefs.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:52 +0000 (21:33 -0400)]
block: Use bioset_init() for fs_bio_set
Minor optimization - remove a pointer indirection when using fs_bio_set.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:51 +0000 (21:33 -0400)]
block: Add bioset_init()/bioset_exit()
Similarly to mempool_init()/mempool_exit(), take a pointer indirection
out of allocation/freeing by allowing biosets to be embedded in other
structs.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Wed, 9 May 2018 01:33:50 +0000 (21:33 -0400)]
block: Convert bio_set to mempool_init()
Minor performance improvement by getting rid of pointer indirections
from allocation/freeing fastpaths.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Kent Overstreet [Mon, 4 May 2015 23:52:20 +0000 (16:52 -0700)]
mempool: Add mempool_init()/mempool_exit()
Allows mempools to be embedded in other structs, getting rid of a
pointer indirection from allocation fastpaths.
mempool_exit() is safe to call on an uninitialized but zeroed mempool.
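A short sketch of the embedded usage, with a hypothetical struct and the kmalloc-backed init helper:
struct my_dev {
	mempool_t rq_pool;	/* embedded, no separate allocation */
};

static int my_dev_init(struct my_dev *dev)
{
	/* keep a minimum of 16 elements of 256 bytes each in reserve */
	return mempool_init_kmalloc_pool(&dev->rq_pool, 16, 256);
}

static void my_dev_exit(struct my_dev *dev)
{
	/* safe even if my_dev_init() never ran, as long as dev was zeroed */
	mempool_exit(&dev->rq_pool);
}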
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 14 May 2018 18:17:31 +0000 (12:17 -0600)]
sbitmap: fix race in wait batch accounting
If we have multiple callers of sbq_wake_up(), we can end up in a
situation where the wait_cnt will continually go more and more
negative. Consider the case where our wake batch is 1, hence
wait_cnt will start out as 1.
wait_cnt == 1
CPU0 CPU1
atomic_dec_return(), cnt == 0
atomic_dec_return(), cnt == -1
cmpxchg(-1, 0) (succeeds)
[wait_cnt now 0]
cmpxchg(0, 1) (fails)
This ends up with wait_cnt being 0, we'll wakeup immediately
next time. Going through the same loop as above again, and
we'll have wait_cnt -1.
For the case where we have a larger wake batch, the only
difference is that the starting point will be higher. We'll
still end up with continually smaller batch wakeups, which
defeats the purpose of the rolling wakeups.
Always reset the wait_cnt to the batch value. Then it doesn't
matter who wins the race. But ensure that whoever does win the
race is the one that increments the ws index and wakes up our
batch count; the loser gets to call __sbq_wake_up() again to
account its wakeups towards the next active wait state index.
Fixes: 6c0ca7ae292a ("sbitmap: fix wakeup hang after sbq resize")
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 07:54:08 +0000 (09:54 +0200)]
block: consistently use GFP_NOIO instead of __GFP_RECLAIM
Same numerical value (for now at least), but a much better documentation
of intent.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 07:54:07 +0000 (09:54 +0200)]
block: use GFP_NOIO instead of __GFP_DIRECT_RECLAIM
We just can't do I/O when doing block layer request allocations,
so use GFP_NOIO instead of the even more limited __GFP_DIRECT_RECLAIM.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 07:54:06 +0000 (09:54 +0200)]
block: pass an explicit gfp_t to get_request
blk_old_get_request already has it at hand, and in blk_queue_bio, which
is the fast path, it is constant.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 07:54:05 +0000 (09:54 +0200)]
block: sanitize blk_get_request calling conventions
Switch everyone to blk_get_request_flags, and then rename
blk_get_request_flags to blk_get_request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 07:54:04 +0000 (09:54 +0200)]
block: fix __get_request documentation
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 07:54:03 +0000 (09:54 +0200)]
scsi/osd: remove the gfp argument to osd_start_request
Always GFP_KERNEL, and keeping it would cause serious complications for
the next change.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Mon, 14 May 2018 08:24:34 +0000 (10:24 +0200)]
memstick: remove unused variables
Fixes: 7c2d748e8476 ("memstick: don't call blk_queue_bounce_limit")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:48 +0000 (15:59 +0200)]
ps3disk: handle highmem pages
The ps3disk driver already kmaps all pages when copying from/to the
internal bounce buffer, so it can accept highmem pages just fine.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:47 +0000 (15:59 +0200)]
jsflash: handle highmem pages
Just kmap the bio's single page payload before processing it.
(And yes, there is no highmem on sparc32 anyway, but kmap_atomic/kunmap_atomic
are nops there, so this sets the right example.)
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:46 +0000 (15:59 +0200)]
aoe: handle highmem pages
Use kmap_atomic when copying out of a bio_vec.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:45 +0000 (15:59 +0200)]
mtd_blkdevs: handle highmem pages
Just kmap the single payload page before passing it on to the FTL.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:44 +0000 (15:59 +0200)]
memstick: don't call blk_queue_bounce_limit
All in-tree host drivers set up a proper dma mask and use the dma-mapping
helpers. This means they will be able to deal with any address that we
are throwing at them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:43 +0000 (15:59 +0200)]
DAC960: don't use block layer bounce buffers
DAC960 just sets the block bounce limit to the dma mask, which means
that the iommu or swiotlb already take care of the bounce buffering,
and the block bouncing can be removed.
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 9 May 2018 13:59:42 +0000 (15:59 +0200)]
mtip32xx: don't use block layer bounce buffers
mtip32xx just sets the block bounce limit to the dma mask, which means
that the iommu or swiotlb already take care of the bounce buffering,
and the block bouncing can be removed.
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Thu, 10 May 2018 14:34:20 +0000 (08:34 -0600)]
nvme/pci: Sync controller reset for AER slot_reset
AER handling expects that a successful return from slot_reset means the
driver made the device functional again. The nvme driver had been using
an asynchronous reset to recover the device, so the device
may still be initializing after control is returned to the
AER handler. This creates problems for subsequent event handling,
causing the initialization to fail.
This patch fixes that by syncing the controller reset before returning
to the AER driver, and reporting the true state of the reset.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=199657
Reported-by: Alex Gagniuc <mr.nuke.me@gmail.com>
Cc: Sinan Kaya <okaya@codeaurora.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Tested-by: Alex Gagniuc <mr.nuke.me@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Omar Sandoval [Thu, 10 May 2018 00:29:24 +0000 (17:29 -0700)]
sbitmap: warn if using smaller shallow depth than was setup
Make sure the user passed the right value to
sbitmap_queue_min_shallow_depth().
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 9 May 2018 19:55:14 +0000 (13:55 -0600)]
kyber-iosched: update shallow depth when setting up hardware queue
We don't expect the async depth to be smaller than the wake batch
count for sbitmap, but just in case, inform sbitmap of what shallow
depth kyber may use.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 9 May 2018 21:26:55 +0000 (15:26 -0600)]
bfq-iosched: update shallow depth to smallest one used
If our shallow depth is smaller than the wake batching of sbitmap,
we can introduce hangs. Ensure that sbitmap knows how low we'll go.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Thu, 10 May 2018 00:16:31 +0000 (17:16 -0700)]
sbitmap: fix missed wakeups caused by sbitmap_queue_get_shallow()
The sbitmap queue wake batch is calculated such that once allocations
start blocking, all of the bits which are already allocated must be
enough to fulfill the batch counters of all of the waitqueues. However,
the shallow allocation depth can break this invariant, since we block
before our full depth is being utilized. Add
sbitmap_queue_min_shallow_depth(), which saves the minimum shallow depth
the sbq will use, and update sbq_calc_wake_batch() to take it into
account.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 9 May 2018 21:25:22 +0000 (15:25 -0600)]
bfq-iosched: remove unused variable
bfqd->sb_shift was an attempt to cache the sbitmap queue shift, but we
don't need it, as it never changes. Kill it with fire.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 9 May 2018 19:27:21 +0000 (13:27 -0600)]
bfq: calculate shallow depths at init time
It doesn't change, so don't put it in the per-IO hot path.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 9 May 2018 19:12:10 +0000 (13:12 -0600)]
bfq-iosched: don't worry about reserved tags in limit_depth
Reserved tags are used for error handling; we don't need to
care about them for regular IO. The core won't call us for these
anyway.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 9 May 2018 19:28:50 +0000 (13:28 -0600)]
blk-mq: don't call into depth limiting for reserved tags
It's not useful, they are internal and/or error handling recovery
commands.
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Paolo Valente [Fri, 4 May 2018 17:17:01 +0000 (19:17 +0200)]
block, bfq: postpone rq preparation to insert or merge
When invoked for an I/O request rq, the prepare_request hook of bfq
increments reference counters in the destination bfq_queue for rq. In
this respect, after this hook has been invoked, rq may still be
transformed into a request with no icq attached, i.e., for bfq, a
request not associated with any bfq_queue. No further hook is invoked
to signal this transformation to bfq (in general, to the destination
elevator for rq). This leads bfq into an inconsistent state, because
bfq has no chance to correctly lower these counters back. This
inconsistency may in turn cause incorrect scheduling and hangs. It
certainly causes memory leaks, by making it impossible for bfq to free
the involved bfq_queue.
On the bright side, no transformation can still happen for rq after rq
has been inserted into bfq, or merged with another, already inserted,
request. Exploiting this fact, this commit addresses the above issue
by delaying the preparation of an I/O request to when the request is
inserted or merged.
This change also gives a performance bonus: a lock-contention point
gets removed. To prepare a request, bfq needs to hold its scheduler
lock. After postponing request preparation to insertion or merging, no
lock needs to be grabbed any longer in the prepare_request hook, while
the lock already taken to perform insertion or merging is used to
prepare the request as well.
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christophe JAILLET [Thu, 10 May 2018 07:27:31 +0000 (09:27 +0200)]
mtip32xx: Fix an error handling path in 'mtip_pci_probe()'
Branch to the right label in the error handling path in order to keep it
logical.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
SeongJae Park [Thu, 3 May 2018 09:53:26 +0000 (18:53 +0900)]
brd: Mark as non-rotational
This commit sets QUEUE_FLAG_NONROT and clears QUEUE_FLAG_ADD_RANDOM
to mark the ramdisks as non-rotational devices.
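For reference, a sketch of toggling these flags on a request queue 'q' with the current block helpers (the patch itself may use the older queue_flag_* variants):
/* ramdisks have no seek penalty and add no entropy to the pool */
blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);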
Signed-off-by: SeongJae Park <sj38.park@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:53 +0000 (02:08 -0700)]
block: consolidate struct request timestamp fields
Currently, struct request has four timestamp fields:
- A start time, set at get_request time, in jiffies, used for iostats
- An I/O start time, set at start_request time, in ktime nanoseconds,
used for blk-stats (i.e., wbt, kyber, hybrid polling)
- Another start time and another I/O start time, used for cfq and bfq
These can all be consolidated into one start time and one I/O start
time, both in ktime nanoseconds, shaving off up to 16 bytes from struct
request depending on the kernel config.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:52 +0000 (02:08 -0700)]
block: move blk_stat_add() to __blk_mq_end_request()
We want this next to blk_account_io_done() for the next change so that
we can call ktime_get() only once for both.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:51 +0000 (02:08 -0700)]
block: use ktime_get_ns() instead of sched_clock() for cfq and bfq
cfq and bfq have some internal fields that use sched_clock() which can
trivially use ktime_get_ns() instead. Their timestamp fields in struct
request can also use ktime_get_ns(), which resolves the 8 year old
comment added by commit 28f4197e5d47 ("block: disable preemption before using sched_clock()").
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:50 +0000 (02:08 -0700)]
block: get rid of struct blk_issue_stat
struct blk_issue_stat squashes three things into one u64:
- The time the driver started working on a request
- The original size of the request (for the io.low controller)
- Flags for writeback throttling
It turns out that on x86_64, we have a 4 byte hole in struct request
which we can fill with the non-timestamp fields from blk_issue_stat,
simplifying things quite a bit.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:49 +0000 (02:08 -0700)]
block: replace bio->bi_issue_stat with bio-specific type
struct blk_issue_stat is going away, and bio->bi_issue_stat doesn't even
use the blk-stats interface, so we can provide a separate implementation
specific for bios. The helpers work the same way as the blk-stats
helpers.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:48 +0000 (02:08 -0700)]
block: pass struct request instead of struct blk_issue_stat to wbt
issue_stat is going to go away, so first make writeback throttling take
the containing request, update the internal wbt helpers accordingly, and
change rwb->sync_cookie to be the request pointer instead of the
issue_stat pointer. No functional change.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval [Wed, 9 May 2018 09:08:47 +0000 (02:08 -0700)]
block: move some wbt helpers to blk-wbt.c
A few helpers are only used from blk-wbt.c, so move them there, and put
wbt_track() behind the CONFIG_BLK_WBT ifdef. This is in preparation
for changing how the wbt flags are tracked.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 7 May 2018 16:03:23 +0000 (10:03 -0600)]
blk-wbt: throttle discards like background writes
Throttle discards like we would any background write. Discards should
be background activity, so if they are impacting foreground IO, then
we will throttle them down.
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 7 May 2018 15:57:08 +0000 (09:57 -0600)]
blk-wbt: pass in enum wbt_flags to get_rq_wait()
This is in preparation for having more write queues, in which
case we would need to pass in more information than just
a simple 'is_kswapd' boolean.
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Thu, 3 May 2018 15:14:57 +0000 (09:14 -0600)]
blk-wbt: account any writing command as a write
We currently special case WRITE and FLUSH, but we should really
just include any command with the write bit set. This ensures
that we account DISCARD.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Tue, 8 May 2018 21:09:41 +0000 (15:09 -0600)]
block: break discard submissions into the user defined size
Don't build discards bigger than what the user asked for, if the
user decided to limit the size by writing to 'discard_max_bytes'.
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tetsuo Handa [Fri, 4 May 2018 16:58:09 +0000 (10:58 -0600)]
loop: remember whether sysfs_create_group() was done
syzbot is hitting a WARN() triggered by memory allocation fault
injection [1] because the loop module is calling sysfs_remove_group()
when sysfs_create_group() failed.
Fix this by remembering whether sysfs_create_group() succeeded.
[1] https://syzkaller.appspot.com/bug?id=3f86c0edf75c86d2633aeb9dd69eccc70bc7e90b
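The pattern, sketched with the renamed flag mentioned below (simplified from the actual loop driver paths):
/* Only remove the attribute group on cleanup if it was actually created. */
lo->sysfs_inited = !sysfs_create_group(&disk_to_dev(lo->lo_disk)->kobj,
				       &loop_attribute_group);
/* ... later, on teardown ... */
if (lo->sysfs_inited)
	sysfs_remove_group(&disk_to_dev(lo->lo_disk)->kobj,
			   &loop_attribute_group);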
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: syzbot <syzbot+9f03168400f56df89dbc6f1751f4458fe739ff29@syzkaller.appspotmail.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Renamed sysfs_ready -> sysfs_inited.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Thomas Gleixner [Fri, 4 May 2018 14:32:47 +0000 (16:32 +0200)]
block: Shorten interrupt disabled regions
Commit 9c40cef2b799 ("sched: Move blk_schedule_flush_plug() out of __schedule()")
moved the blk_schedule_flush_plug() call out of the
interrupt/preempt disabled region in the scheduler. This allows to replace
local_irq_save/restore(flags) by local_irq_disable/enable() in
blk_flush_plug_list().
But it makes more sense to disable interrupts explicitly when the request
queue is locked and reenable them when the request queue is unlocked. This
shortens the interrupt disabled section, which is important when the plug
list contains requests for more than one queue. The comment which claims
that interrupts must be disabled around the loop is misleading, as the
called functions can reenable interrupts unconditionally anyway, and it
obfuscates the scope badly:
local_irq_save(flags);
spin_lock(q->queue_lock);
...
queue_unplugged(q...);
scsi_request_fn();
spin_unlock_irq(q->queue_lock);
-------------------^^^ ????
spin_lock_irq(q->queue_lock);
spin_unlock(q->queue_lock);
local_irq_restore(flags);
Aside from that, the detached interrupt disabling is a constant pain for
PREEMPT_RT, as it requires patching and special casing when RT is enabled,
while with the spin_*_irq() variants this happens automatically.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Anna-Maria Gleixner [Fri, 4 May 2018 14:32:46 +0000 (16:32 +0200)]
block: Remove redundant WARN_ON()
Commit 2fff8a924d4c ("block: Check locking assumptions at runtime") added a
lockdep_assert_held(q->queue_lock) which makes the WARN_ON() redundant
because lockdep will detect and warn about context violations.
The unconditional WARN_ON() does not provide real additional value, so it
can be removed.
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Sebastian Andrzej Siewior [Fri, 4 May 2018 14:32:45 +0000 (16:32 +0200)]
block: don't disable interrupts during kmap_atomic()
bounce_copy_vec() disables interrupts around kmap_atomic(). This is a
leftover from the old kmap_atomic() implementation which relied on fixed
mapping slots, so the caller had to make sure that the same slot could not
be reused from an interrupting context.
kmap_atomic() was changed to dynamic slots long ago, and commit 1ec9c5ddc17a
("include/linux/highmem.h: remove the second argument of k[un]map_atomic()")
removed the slot assignments, but the callers were not checked for the now
redundant interrupt disabling.
Remove the conditional interrupt disable.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Florian La Roche [Sun, 6 May 2018 17:34:07 +0000 (19:34 +0200)]
Fix typo in comment.
CONFIG_PRREMPT -> CONFIG_PREEMPT
Signed-off-by: Florian La Roche <Florian.LaRoche@googlemail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Mon, 7 May 2018 15:33:29 +0000 (05:33 -1000)]
Merge tag 'devicetree-fixes-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull DeviceTree fixes from Rob Herring:
- fix path to display timing binding
- fix some typos in interrupt-names and clock-names
- fix a resource leak on overlay removal
- add missing documentation for R8A77965 DMA, serial, and net
- cleanup sunxi pinctrl description
- add Kieback & Peter GmbH vendor prefix
* tag 'devicetree-fixes-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
dt-bindings: panel: lvds: Fix path to display timing bindings
dt-bindings: mvebu-uart: DT fix s/interrupts-names/interrupt-names/
dt-bindings: meson-uart: DT fix s/clocks-names/clock-names/
of: overlay: Stop leaking resources on overlay removal
dtc: checks: drop warning for missing PCI bridge bus-range
dt-bindings: dmaengine: rcar-dmac: document R8A77965 support
dt-bindings: serial: sh-sci: Add support for r8a77965 (H)SCIF
dt-bindings: net: ravb: Add support for r8a77965 SoC
dt-bindings: pinctrl: sunxi: Fix reference to driver
doc: Add vendor prefix for Kieback & Peter GmbH
Keith Busch [Mon, 7 May 2018 14:30:24 +0000 (08:30 -0600)]
nvme/pci: Hold controller reference during async probe
It is possible the driver's remove may have freed the controller if
the remove callback is invoked prior to the async_schedule starting
the reset_work. This patch fixes that by holding a reference on the
controller.
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Linus Torvalds [Mon, 7 May 2018 02:57:38 +0000 (16:57 -1000)]
Linux 4.17-rc4
Linus Torvalds [Sun, 6 May 2018 15:46:29 +0000 (05:46 -1000)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM fixes from Radim Krčmář:
"ARM:
- Fix proxying of GICv2 CPU interface accesses
- Fix crash when switching to BE
- Track source vcpu for GICv2 SGIs
- Fix an outdated bit of documentation
x86:
- Speed up injection of expired timers (for stable)"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: remove APIC Timer periodic/oneshot spikes
arm64: vgic-v2: Fix proxying of cpuif access
KVM: arm/arm64: vgic_init: Cleanup reference to process_maintenance
KVM: arm64: Fix order of vcpu_write_sys_reg() arguments
KVM: arm/arm64: vgic: Fix source vcpu issues for GICv2 SGI
Linus Torvalds [Sun, 6 May 2018 15:42:24 +0000 (05:42 -1000)]
Merge tag 'iommu-fixes-v4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull iommu fixes from Joerg Roedel:
- fix a compile warning in the AMD IOMMU driver with irq remapping
disabled
- fix for VT-d interrupt remapping and invalidation size (caused a
BUG_ON when trying to invalidate more than 4GB)
- build fix and a regression fix for broken graphics with old DTS for
the rockchip iommu driver
- a revert in the PCI window reservation code which fixes a regression
with VFIO.
* tag 'iommu-fixes-v4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu: rockchip: fix building without CONFIG_OF
iommu/vt-d: Use WARN_ON_ONCE instead of BUG_ON in qi_flush_dev_iotlb()
iommu/vt-d: fix shift-out-of-bounds in bug checking
iommu/dma: Move PCI window region reservation back into dma specific path.
iommu/rockchip: Make clock handling optional
iommu/amd: Hide unused iommu_table_lock
iommu/vt-d: Fix usage of force parameter in intel_ir_reconfigure_irte()
Linus Torvalds [Sun, 6 May 2018 15:37:24 +0000 (05:37 -1000)]
Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fix from Thomas Gleixner:
"Unbreak the CPUID_8000_0008_EBX reload which got dropped when
the evaluation of physical and virtual bits, which uses the same CPUID
leaf, was moved out of get_cpu_cap()"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/cpu: Restore CPUID_8000_0008_EBX reload
Linus Torvalds [Sun, 6 May 2018 15:35:23 +0000 (05:35 -1000)]
Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull clocksource fixes from Thomas Gleixner:
"The recent addition of the early TSC clocksource breaks on machines
which have an unstable TSC, because when the TSC is disabled, the
clocksource selection logic falls back to the early TSC, which is
obviously bogus.
That also unearthed a few robustness issues in the clocksource
derating code which are addressed as well"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
clocksource: Rework stale comment
clocksource: Consistent de-rate when marking unstable
x86/tsc: Fix mark_tsc_unstable()
clocksource: Initialize cs->wd_list
clocksource: Allow clocksource_mark_unstable() on unregistered clocksources
x86/tsc: Always unregister clocksource_tsc_early
Linus Torvalds [Sun, 6 May 2018 15:34:06 +0000 (05:34 -1000)]
Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fix from Thomas Gleixner:
"A single fix to prevent false positives in the spurious interrupt
detector when more than a single demultiplex register is evaluated in
the Qualcom irq combiner driver"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/qcom: Fix check for spurious interrupts
Linus Torvalds [Sun, 6 May 2018 03:30:58 +0000 (17:30 -1000)]
Merge tag 'platform-drivers-x86-v4.17-2' of git://git.infradead.org/linux-platform-drivers-x86
Pull x86 platform driver fixes from Darren Hart:
- We missed a case in the Dell config dependencies resulting in a
possible bad configuration; resolve it by giving up on trying to keep
DELL_LAPTOP visible in the menu and making it depend on DELL_SMBIOS.
- Fix a null pointer dereference at module unload for the asus-wireless
driver.
* tag 'platform-drivers-x86-v4.17-2' of git://git.infradead.org/linux-platform-drivers-x86:
platform/x86: Kconfig: Fix dell-laptop dependency chain.
platform/x86: asus-wireless: Fix NULL pointer dereference
Linus Torvalds [Sun, 6 May 2018 03:28:08 +0000 (17:28 -1000)]
Merge tag 'usb-4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Here are some USB driver fixes for 4.17-rc4.
The majority of them are some USB gadget fixes that missed my last
pull request. The "largest" patch in here is a fix for the old visor
driver that syzbot found 6 months or so ago and I finally remembered
to fix it.
All of these have been in linux-next with no reported issues"
* tag 'usb-4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
Revert "usb: host: ehci: Use dma_pool_zalloc()"
usb: typec: tps6598x: handle block reads separately with plain-I2C adapters
usb: typec: tcpm: Release the role mux when exiting
USB: Accept bulk endpoints with 1024-byte maxpacket
xhci: Fix use-after-free in xhci_free_virt_device
USB: serial: visor: handle potential invalid device configuration
USB: serial: option: adding support for ublox R410M
usb: musb: trace: fix NULL pointer dereference in musb_g_tx()
usb: musb: host: fix potential NULL pointer dereference
usb: gadget: composite Allow for larger configuration descriptors
usb: dwc3: gadget: Fix list_del corruption in dwc3_ep_dequeue
usb: dwc3: gadget: dwc3_gadget_del_and_unmap_request() can be static
usb: dwc2: pci: Fix error return code in dwc2_pci_probe()
usb: dwc2: WA for Full speed ISOC IN in DDMA mode.
usb: dwc2: dwc2_vbus_supply_init: fix error check
usb: gadget: f_phonet: fix pn_net_xmit()'s return type
Anthoine Bourgeois [Sun, 29 Apr 2018 22:05:58 +0000 (22:05 +0000)]
KVM: x86: remove APIC Timer periodic/oneshot spikes
Since commit 8003c9ae204e ("add APIC Timer periodic/oneshot mode VMX
preemption timer support"), a Windows 10 guest has some erratic timer
spikes.
Here are the results on a 150000 times 1ms timer without any load:
            Before 8003c9ae204e | After 8003c9ae204e
Max         1834us              | 86000us
Mean        1100us              | 1021us
Deviation   59us                | 149us
Here are the results on a 150000 times 1ms timer with a cpu-z stress test:
            Before 8003c9ae204e | After 8003c9ae204e
Max         32000us             | 140000us
Mean        1006us              | 1997us
Deviation   140us               | 11095us
The root cause of the problem is that starting an hrtimer with an expiry
time already in the past can take more than 20 milliseconds to trigger
the timer function. It can be solved by forwarding such past timers
immediately, rather than submitting them to hrtimer_start().
In case the timer is periodic, update the target expiration and call
hrtimer_start() with it.
v2: Check if the tsc deadline is already expired. Thank you Mika.
v3: Execute the past timers immediately rather than submitting them to
hrtimer_start().
v4: Rearm the periodic timer with advance_periodic_target_expiration() a
simpler version of set_target_expiration(). Thank you Paolo.
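Sketched in the abstract (handler and helper names here are hypothetical, not the actual lapic.c functions):
/* If the computed deadline is already in the past, run the expiry path
 * now instead of arming the hrtimer and waiting for it to fire late. */
if (!ktime_after(deadline, ktime_get())) {
	handle_timer_expiry(apic);			/* hypothetical */
	if (periodic)
		advance_target_expiration(apic);	/* hypothetical */
} else {
	hrtimer_start(timer, deadline, HRTIMER_MODE_ABS_PINNED);
}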
Cc: Mika Penttilä <mika.penttila@nextfour.com>
Cc: Wanpeng Li <kernellwp@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Anthoine Bourgeois <anthoine.bourgeois@blade-group.com>
8003c9ae204e ("KVM: LAPIC: add APIC Timer periodic/oneshot mode VMX preemption timer support")
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Radim Krčmář [Sat, 5 May 2018 21:05:31 +0000 (23:05 +0200)]
Merge tag 'kvmarm-fixes-for-4.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm
KVM/arm fixes for 4.17, take #2
- Fix proxying of GICv2 CPU interface accesses
- Fix crash when switching to BE
- Track source vcpu for GICv2 SGIs
- Fix an outdated bit of documentation
Linus Torvalds [Sat, 5 May 2018 07:15:25 +0000 (21:15 -1000)]
Merge tag 'kbuild-fixes-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- remove stale comment in modpost
- extend MAINTAINERS entry to cover modpost and more makefiles
- fix missed building of SANCOV gcc-plugin
- replace left-over 'bison' with $(YACC)
- display short log when generating parser of genksyms
* tag 'kbuild-fixes-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
genksyms: fix typo in parse.tab.{c,h} generation rules
kbuild: replace hardcoded bison in cmd_bison_h with $(YACC)
gcc-plugins: fix build condition of SANCOV plugin
MAINTAINERS: Update Kbuild entry with a few paths
modpost: delete stale comment
Linus Torvalds [Sat, 5 May 2018 07:12:06 +0000 (21:12 -1000)]
Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk fixes from Stephen Boyd:
"A handful of fixes came in during the merge window for the stm32mp1
clk driver that was merged in that same window.
Plus a warning fix for unused PM ops and a couple fixes for the meson
clk driver clk names that went unnoticed with the regmap rework.
There's also another fix in here for the mux rounding flag which
wasn't doing what it said it did, but now it does"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
clk: meson: meson8b: fix meson8b_cpu_clk parent clock name
clk: meson: meson8b: fix meson8b_fclk_div3_div clock name
clk: meson: drop meson_aoclk_gate_regmap_ops
clk: meson: honor CLK_MUX_ROUND_CLOSEST in clk_regmap
clk: honor CLK_MUX_ROUND_CLOSEST in generic clk mux
clk: cs2000: mark resume function as __maybe_unused
clk: stm32mp1: remove ck_apb_dbg clock
clk: stm32mp1: set stgen_k clock as critical
clk: stm32mp1: add missing tzc2 clock
clk: stm32mp1: fix SAI3 & SAI4 clocks
clk: stm32mp1: remove unused dfsdm_src[] const
clk: stm32mp1: add missing static
Linus Torvalds [Sat, 5 May 2018 07:07:43 +0000 (21:07 -1000)]
Merge tag 'rproc-v4.17-1' of git://github.com/andersson/remoteproc
Pull remoteproc and rpmsg fixes from Bjorn Andersson:
- fix screw-up when reversing boolean for rproc_stop()
- add missing OF node refcounting dereferences
- add missing MODULE_ALIAS in rpmsg_char
* tag 'rproc-v4.17-1' of git://github.com/andersson/remoteproc:
rpmsg: added MODULE_ALIAS for rpmsg_char
remoteproc: qcom: Fix potential device node leaks
remoteproc: fix crashed parameter logic on stop call
Linus Torvalds [Sat, 5 May 2018 07:05:12 +0000 (21:05 -1000)]
Merge tag 'drm-fixes-for-v4.17-rc4' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"vmwgfx, i915, vc4, vga dac fixes.
This seems eerily quiet, so I expect it will explode next week or
something.
One i915 model firmware, two vmwgfx fixes, one vc4 fix and one bridge
leak fix"
* tag 'drm-fixes-for-v4.17-rc4' of git://people.freedesktop.org/~airlied/linux:
drm/bridge: vga-dac: Fix edid memory leak
drm/vc4: Make sure vc4_bo_{inc,dec}_usecnt() calls are balanced
drm/i915/glk: Add MODULE_FIRMWARE for Geminilake
drm/vmwgfx: Fix a buffer object leak
drm/vmwgfx: Clean up fbdev modeset locking
Linus Torvalds [Sat, 5 May 2018 06:57:28 +0000 (20:57 -1000)]
Merge tag 'trace-v4.17-rc1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"Some of the files in the tracing directory show file mode 0444 when
they are writable by root. To fix the confusion, they should be 0644.
Note, in either case root can still write to them.
Zhengyuan asked why I never applied that patch (the first one is from
2014!). I simply forgot about it. /me lowers head in shame"
* tag 'trace-v4.17-rc1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Fix the file mode of stack tracer
ftrace: Have set_graph_* files have normal file modes
Linus Torvalds [Sat, 5 May 2018 06:51:10 +0000 (20:51 -1000)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Pull rdma fixes from Doug Ledford:
"This is our first pull request of the rc cycle. It's not that it's
been overly quiet, we were just waiting on a few things before sending
this off.
For instance, the 6 patch series from Intel for the hfi1 driver had
actually been pulled in on Tuesday for a Wednesday pull request, only
to have Jason notice something I missed, so we held off for some
testing, and then on Thursday had to respin the series because the
very first patch needed a minor fix (unnecessary cast is all).
There is a sizable hns patch series in here, as well as a reasonably
largish hfi1 patch series, then all of the lines of uapi updates are
just the change to the new official Linux-OpenIB SPDX tag (a bunch of
our files had what amounts to a BSD-2-Clause + MIT Warranty statement
as their license as a result of the initial code submission years ago,
and the SPDX folks decided it was unique enough to warrant a unique
tag), then the typical mlx4 and mlx5 updates, and finally some cxgb4
and core/cache/cma updates to round out the bunch.
None of it was overly large by itself, but in the 2 1/2 weeks we've
been collecting patches, it has added up :-/.
As best I can tell, it's been through 0day (I got a notice about my
last for-next push, but not for my for-rc push, but Jason seems to
think that failure messages are prioritized and success messages not
so much). It's also been through linux-next. And yes, we did notice in
the context portion of the CMA query gid fix patch that there is a
dubious BUG_ON() in the code, and have plans to audit our BUG_ON usage
and remove it anywhere we can.
Summary:
- Various build fixes (USER_ACCESS=m and ADDR_TRANS turned off)
- SPDX license tag cleanups (new tag Linux-OpenIB)
- RoCE GID fixes related to default GIDs
- Various fixes to: cxgb4, uverbs, cma, iwpm, rxe, hns (big batch),
mlx4, mlx5, and hfi1 (medium batch)"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (52 commits)
RDMA/cma: Do not query GID during QP state transition to RTR
IB/mlx4: Fix integer overflow when calculating optimal MTT size
IB/hfi1: Fix memory leak in exception path in get_irq_affinity()
IB/{hfi1, rdmavt}: Fix memory leak in hfi1_alloc_devdata() upon failure
IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used
IB/hfi1: Fix loss of BECN with AHG
IB/hfi1 Use correct type for num_user_context
IB/hfi1: Fix handling of FECN marked multicast packet
IB/core: Make ib_mad_client_id atomic
iw_cxgb4: Atomically flush per QP HW CQEs
IB/uverbs: Fix kernel crash during MR deregistration flow
IB/uverbs: Prevent reregistration of DM_MR to regular MR
RDMA/mlx4: Add missed RSS hash inner header flag
RDMA/hns: Fix a couple misspellings
RDMA/hns: Submit bad wr
RDMA/hns: Update assignment method for owner field of send wqe
RDMA/hns: Adjust the order of cleanup hem table
RDMA/hns: Only assign dqpn if IB_QP_PATH_DEST_QPN bit is set
RDMA/hns: Remove some unnecessary attr_mask judgement
RDMA/hns: Only assign mtu if IB_QP_PATH_MTU bit is set
...
Linus Torvalds [Sat, 5 May 2018 06:41:44 +0000 (20:41 -1000)]
Merge tag 'for-linus-20180504' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
"A collection of fixes that should go into this release. This contains:
- Set of bcache fixes from Coly, fixing regression in patches that
went into this series.
- Set of NVMe fixes by way of Keith.
- Set of bdi related fixes, one from Jan and two from Tetsuo Handa,
fixing various issues around device addition/removal.
- Two block inflight fixes from Omar, fixing issues around the
transition to using tags for blk-mq inflight accounting that we
did a few releases ago"
* tag 'for-linus-20180504' of git://git.kernel.dk/linux-block:
bdi: Fix oops in wb_workfn()
nvmet: switch loopback target state to connecting when resetting
nvme/multipath: Fix multipath disabled naming collisions
nvme/multipath: Disable runtime writable enabling parameter
nvme: Set integrity flag for user passthrough commands
nvme: fix potential memory leak in option parsing
bdi: Fix use after free bug in debugfs_remove()
bdi: wake up concurrent wb_shutdown() callers.
bcache: use pr_info() to inform duplicated CACHE_SET_IO_DISABLE set
bcache: set dc->io_disable to true in conditional_stop_bcache_device()
bcache: add wait_for_kthread_stop() in bch_allocator_thread()
bcache: count backing device I/O error for writeback I/O
bcache: set CACHE_SET_IO_DISABLE in bch_cached_dev_error()
bcache: store disk name in struct cache and struct cached_dev
blk-mq: fix sysfs inflight counter
blk-mq: count allocated but not started requests in iostats inflight
Linus Torvalds [Sat, 5 May 2018 06:36:50 +0000 (20:36 -1000)]
Merge tag 'xfs-4.17-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull xfs fixes from Darrick Wong:
"I've got one more bug fix for xfs for 4.17-rc4, which caps the amount
of data we try to handle in one dedupe request so that userspace can't
livelock the kernel.
This series has been run through a full xfstests run during the week
and through a quick xfstests run against this morning's master, with
no major failures reported.
Summary:
- Cap the maximum length of a deduplication request at MAX_RW_COUNT/2
to avoid kernel livelock due to excessively large IO requests"
* tag 'xfs-4.17-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
xfs: cap the length of deduplication requests
Linus Torvalds [Sat, 5 May 2018 06:32:18 +0000 (20:32 -1000)]
Merge tag 'for-4.17-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
Pull btrfs fixes from David Sterba:
"Two regression fixes and one fix for stable"
* tag 'for-4.17-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
Btrfs: send, fix missing truncate for inode with prealloc extent past eof
btrfs: Take trans lock before access running trans in check_delayed_ref
btrfs: Fix wrong first_key parameter in replace_path