Ming Lei [Fri, 11 Nov 2016 12:05:40 +0000 (20:05 +0800)]
fs: logfs: remove unnecessary check
The check on bio->bi_vcnt doesn't make sense in erase_end_io().
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 11 Nov 2016 12:05:39 +0000 (20:05 +0800)]
fs: logfs: use bio_add_page() in do_erase()
The code also gets simplified a bit.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 11 Nov 2016 12:05:38 +0000 (20:05 +0800)]
fs: logfs: use bio_add_page() in __bdev_writeseg()
This patch also simplifies the code a bit.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 11 Nov 2016 12:05:37 +0000 (20:05 +0800)]
fs: logfs: convert to bio_add_page() in sync_request()
bio_add_page() is the standard and preferred way to
do this task.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
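(For illustration, the conversion to bio_add_page() tends to look roughly like the sketch below; 'bdev', 'ofs', 'page' and the single-vec allocation are placeholders, not the actual logfs code.)

	/* Sketch: let bio_add_page() fill in the bvec instead of writing
	 * bio->bi_io_vec[] by hand. All names here are placeholders. */
	struct bio *bio = bio_alloc(GFP_NOIO, 1);

	bio->bi_bdev = bdev;
	bio->bi_iter.bi_sector = ofs >> 9;
	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);

	/* returns the number of bytes added; 0 means the bio was full */
	if (bio_add_page(bio, page, PAGE_SIZE, 0) != PAGE_SIZE)
		BUG();

	submit_bio(bio);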
Ming Lei [Fri, 11 Nov 2016 12:05:33 +0000 (20:05 +0800)]
bcache: debug: avoid accessing .bi_io_vec directly
Use the standard iterator to do that instead.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 11 Nov 2016 12:05:32 +0000 (20:05 +0800)]
target: avoid accessing .bi_vcnt directly
When the bio is full, bio_add_pc_page() will return zero,
so use that return value to tell when the bio is full.
Also replace access to .bi_vcnt for pr_debug() with bio_segments().
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 11 Nov 2016 12:05:31 +0000 (20:05 +0800)]
block: floppy: use bio_add_page()
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Fri, 11 Nov 2016 12:05:30 +0000 (20:05 +0800)]
block: drbd: remove impossible failure handling
For a non-cloned bio, bio_add_page() only returns failure when
the io vec table is full, but in that case, bio->bi_vcnt can't
be zero at all.
So remove the impossible failure handling.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Tue, 22 Nov 2016 15:57:21 +0000 (08:57 -0700)]
block: bio: pass bvec table to bio_init()
Some drivers often use an external bvec table, so introduce
this helper for that case. It is always safe to access
bio->bi_io_vec this way in that case.
After converting to this usage, it becomes a bit easier
to evaluate the remaining direct accesses to bio->bi_io_vec,
which helps to prepare for the upcoming multipage bvec
support.
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Fixed up the new O_DIRECT cases.
Signed-off-by: Jens Axboe <axboe@fb.com>
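(A minimal sketch of what a converted driver looks like; 'my_pdu' and the table size of 4 are made up for illustration, not taken from any particular driver.)

	#include <linux/bio.h>

	/* Driver-private structure embedding a bio plus a fixed bvec table. */
	struct my_pdu {
		struct bio	bio;
		struct bio_vec	inline_vecs[4];
	};

	static void my_pdu_prep(struct my_pdu *pdu)
	{
		/* bio_init() now wires up bi_io_vec and bi_max_vecs itself,
		 * replacing the former manual assignment of those fields. */
		bio_init(&pdu->bio, pdu->inline_vecs, 4);
	}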
Jens Axboe [Tue, 22 Nov 2016 15:12:39 +0000 (08:12 -0700)]
block_dev: get rid of blksize bits calculation
We store the number of bits in the bdev sector size locally, but we
don't use the calculation anymore. All we do with it is shift it back
up to the bdev sector size. So let's just use that directly and kill
the variable and the bits calculation.
Signed-off-by: Jens Axboe <axboe@fb.com>
Damien Le Moal [Tue, 22 Nov 2016 06:38:49 +0000 (15:38 +0900)]
block_dev: Fixed direct I/O bio sector calculation
Direct I/O alignment must always be checked against the device's logical
block size, but the I/O offset (bio->bi_iter.bi_sector) must always use
512B sector units, not the actual logical block size.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
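(To illustrate the distinction, a hedged sketch; 'bdev', 'pos' and 'iter' are placeholders for the target device, file offset and iov_iter.)

	unsigned int blksize = bdev_logical_block_size(bdev);

	/* alignment is checked against the device's logical block size ... */
	if ((pos | iov_iter_count(iter)) & (blksize - 1))
		return -EINVAL;

	/* ... but bi_sector is always expressed in 512-byte units */
	bio->bi_iter.bi_sector = pos >> 9;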
Shaun Tancheff [Mon, 21 Nov 2016 21:52:23 +0000 (15:52 -0600)]
block: apply blk_partition_remap to REQ_OP_ZONE_RESET
If a ZBC device is partitioned and operations are performed on the partition,
the zone information is rebased to the partition; however, the zone reset
is not mapped from the partition to the device as other operations are.
This makes the API (report zones / reset zone) unbalanced in this
regard. Checking for the zone reset op code explicitly will balance the
API.
Signed-off-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 21 Nov 2016 15:18:18 +0000 (16:18 +0100)]
block: clear all of bi_opf in bio_set_op_attrs
Since commit 87374179 ("block: add a proper block layer data direction
encoding") we only OR the new op and flags into bi_opf in bio_set_op_attrs
instead of clearing the old value. I've not seen any breakage with the
new behavior, but it seems dangerous.
Also convert it to an inline function to make the argument passing
safer.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
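(In essence, the helper ends up looking roughly like the minimal sketch below; the real definition in blk_types.h may differ in details.)

	static inline void bio_set_op_attrs(struct bio *bio, unsigned int op,
					    unsigned int op_flags)
	{
		/* assign bi_opf outright instead of OR-ing into the old value */
		bio->bi_opf = op | op_flags;
	}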
Jens Axboe [Mon, 21 Nov 2016 16:33:17 +0000 (09:33 -0700)]
pktcdvd: mark as unmaintained and deprecated
This driver is both orphaned, and not really useful anymore. Mark
it as such, and remove it in a future kernel after a release or
two.
Signed-off-by: Jens Axboe <axboe@fb.com>
Tobias Klauser [Fri, 18 Nov 2016 14:16:06 +0000 (15:16 +0100)]
block: Change extern inline to static inline
With compilers which follow the C99 standard (like modern versions of
gcc and clang), "extern inline" does the opposite of what it did in older
versions of gcc: it emits code for an externally linkable version of the
inline function. "static inline" has the intended behavior in all cases.
Description taken from commit 6d91857d4826 ("staging, rtl8192e,
LLVMLinux: Change extern inline to static inline").
This also fixes the following GCC warning when building with CONFIG_PM
disabled:
./include/linux/blkdev.h:1143:20: warning: no previous prototype for 'blk_set_runtime_active' [-Wmissing-prototypes]
Fixes: d07ab6d11477 ("block: Add blk_set_runtime_active()")
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Jens Axboe <axboe@fb.com>
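(A standalone C illustration of the difference, independent of the kernel code:)

	/* In gnu89 mode, "extern inline" never emits an out-of-line symbol;
	 * in C99/C11 mode it emits an externally visible definition in every
	 * translation unit that includes the header, which leads to
	 * multiple-definition link errors (or an undefined reference if the
	 * call is not inlined). */
	extern inline int risky_add_one(int x)
	{
		return x + 1;
	}

	/* "static inline" gives each translation unit its own internal-linkage
	 * copy, which is what header-defined helpers like
	 * blk_set_runtime_active() want in every compiler mode. */
	static inline int safe_add_one(int x)
	{
		return x + 1;
	}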
Geliang Tang [Fri, 18 Nov 2016 14:21:16 +0000 (22:21 +0800)]
skd_main: drop duplicate header scatterlist.h
Drop duplicate header scatterlist.h from skd_main.c.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 18 Nov 2016 05:23:02 +0000 (22:23 -0700)]
block: document the 'io_poll_delay' queue sysfs file
This was documented in the original commit, 64f1c21e86f7, but it
never made it into the proper location for queue sysfs files.
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 17 Nov 2016 06:14:22 +0000 (23:14 -0700)]
block: new direct I/O implementation
Similar to the simple fast path, but we now need a dio structure to
track multiple-bio completions. It's basically a cut-down version
of the new iomap-based direct I/O code for filesystems, but without
all the logic to call into the filesystem for extent lookup or
allocation, and without the complex I/O completion workqueue handler
for AIO - instead we just use the FUA bit on the bios to ensure
data is flushed to stable storage.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 17 Nov 2016 16:50:47 +0000 (17:50 +0100)]
block: make __blkdev_direct_IO_sync() support O_SYNC/DSYNC
Split the op setting code into a helper, use it in both places.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 17 Nov 2016 06:11:42 +0000 (23:11 -0700)]
block: support a full bio worth of IO for simplified bdev direct-io
Just alloc the bio_vec array if we exceed the inline limit.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Mon, 14 Nov 2016 20:03:03 +0000 (13:03 -0700)]
blk-mq: make the polling code adaptive
The previous commit introduced the hybrid sleep/poll mode. Take
that one step further, and use the completion latencies to
automatically sleep for half the mean completion time. This is
a good approximation.
This changes the 'io_poll_delay' sysfs file a bit to expose the
various options. Depending on the value, the polling code will
behave differently:
-1 Never enter hybrid sleep mode
0 Use half of the completion mean for the sleep delay
>0 Use this specific value as the sleep delay
Signed-off-by: Jens Axboe <axboe@fb.com>
Tested-By: Stephen Bates <sbates@raithlin.com>
Reviewed-By: Stephen Bates <sbates@raithlin.com>
Jens Axboe [Mon, 14 Nov 2016 20:01:59 +0000 (13:01 -0700)]
blk-mq: implement hybrid poll mode for sync O_DIRECT
This patch enables a hybrid polling mode. Instead of polling after IO
submission, we can induce an artificial delay, and then poll after that.
For example, if the IO is presumed to complete in 8 usecs from now, we
can sleep for 4 usecs, wake up, and then do our polling. This still puts
a sleep/wakeup cycle in the IO path, but instead of the wakeup happening
after the IO has completed, it'll happen before. With this hybrid
scheme, we can achieve big latency reductions while still using the same
(or less) amount of CPU.
Signed-off-by: Jens Axboe <axboe@fb.com>
Tested-By: Stephen Bates <sbates@raithlin.com>
Reviewed-By: Stephen Bates <sbates@raithlin.com>
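(A toy user-space model of the idea, not the kernel implementation: sleep for roughly half of the expected completion time, then start polling. Intended for short delays only; the names are made up.)

	#include <stdbool.h>
	#include <time.h>

	static void hybrid_wait(bool (*io_done)(void), long expected_ns)
	{
		struct timespec half = {
			.tv_sec  = 0,
			.tv_nsec = expected_ns / 2,	/* e.g. sleep 4 usec for an 8 usec I/O */
		};

		nanosleep(&half, NULL);	/* one sleep/wakeup cycle, before completion */

		while (!io_done())	/* then busy-poll for the remainder */
			;
	}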
Christoph Hellwig [Mon, 31 Oct 2016 17:59:25 +0000 (11:59 -0600)]
block: fast-path for small and simple direct I/O requests
This patch adds a small and simple fast path for small direct I/O
requests on block devices that don't use AIO. Between the neat
bio_iov_iter_get_pages helper that avoids allocating a page array
for get_user_pages and the on-stack bio and biovec, this avoids memory
allocations and atomic operations entirely in the direct I/O code
(lower levels might still do memory allocations and will usually
have at least some atomic operations, though).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Tested-By: Stephen Bates <sbates@raithlin.com>
Reviewed-By: Stephen Bates <sbates@raithlin.com>
Jens Axboe [Thu, 17 Nov 2016 19:30:37 +0000 (12:30 -0700)]
nbd: fix use-after-free of rq/bio in the xmit path
For writes, we can get a completion in while we're still iterating
the request and bio chain. If that happens, we're reading freed
memory and we can crash.
Break out after the last segment and avoid having the iterator
read freed memory.
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Arnd Bergmann [Wed, 16 Nov 2016 15:29:57 +0000 (16:29 +0100)]
blk-wbt: fix old-style function declaration
The newly added driver causes a harmless warning in some configurations:
block/blk-wbt.c:250:1: error: ‘inline’ is not at beginning of declaration [-Werror=old-style-declaration]
static bool inline stat_sample_valid(struct blk_rq_stat *stat)
This makes it use the expected format for the declaration.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Yasuaki Ishimatsu [Wed, 16 Nov 2016 15:26:11 +0000 (08:26 -0700)]
null_blk: add usage hints for NVM
If CONFIG_NVM is disabled, loading the null_blk module with use_lightnvm=1
fails, but there are no messages or documentation related to the failure.
Add an appropriate error message.
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Massaged the text a bit.
Signed-off-by: Jens Axboe <axboe@fb.com>
Ming Lei [Wed, 16 Nov 2016 10:07:05 +0000 (18:07 +0800)]
block: deal with stale req count of plug list
In both the legacy and mq paths, the request count of the plug list is
computed before allocating the request, so the number can be stale when
falling back to a sleeping allocation; the newly introduced wbt can
sleep too.
This patch deals with that case by checking whether the plug list becomes
empty, and fixes the 'BUG: KASAN: stack-out-of-bounds' report
which was introduced by Shaohua's patches for dispatching big requests.
Fixes: 600271d900002 ("blk-mq: immediately dispatch big size request")
Fixes: 50d24c34403c6 ("block: immediately dispatch big size request")
Cc: Shaohua Li <shli@fb.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Omar Sandoval [Tue, 15 Nov 2016 19:11:59 +0000 (11:11 -0800)]
scsi_lib: untangle 0 and BLK_MQ_RQ_QUEUE_OK
Let's not depend on any of the BLK_MQ_RQ_QUEUE_* constants having
specific values. No functional change.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Omar Sandoval [Tue, 15 Nov 2016 19:11:58 +0000 (11:11 -0800)]
nvme: untangle 0 and BLK_MQ_RQ_QUEUE_OK
Let's not depend on any of the BLK_MQ_RQ_QUEUE_* constants having
specific values. No functional change.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Omar Sandoval [Mon, 14 Nov 2016 22:56:17 +0000 (14:56 -0800)]
loop: return proper error from loop_queue_rq()
->queue_rq() should return one of the BLK_MQ_RQ_QUEUE_* constants, not
an errno.
Fixes: f4aa4c7bbac6 ("block: loop: convert to per-device workqueue")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Damien Le Moal [Fri, 11 Nov 2016 05:53:26 +0000 (14:53 +0900)]
sd_zbc: Force use of READ16/WRITE16
Normally, sd_read_capacity sets sdp->use_16_for_rw to 1 based on the
disk capacity so that READ16/WRITE16 are used for large drives.
However, for a zoned disk with RC_BASIS set to 0, the capacity reported
through READ_CAPACITY may be very small, leading to use_16_for_rw not being
set and READ10/WRITE10 commands being used, even after the actual zoned disk
capacity is corrected in sd_zbc_read_zones. This causes LBA offset overflow for
accesses beyond 2TB.
As the ZBC standard makes it mandatory for ZBC drives to support
the READ16/WRITE16 commands anyway, make sure that use_16_for_rw is set.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Mon, 26 Sep 2016 02:54:38 +0000 (19:54 -0700)]
bsg: Add sparse annotations to bsg_request_fn()
Avoid that sparse complains about unbalanced lock actions.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 11 Nov 2016 15:23:53 +0000 (08:23 -0700)]
blk-wbt: use BLK_STAT_{READ,WRITE} instead of 0/1
Since we have proper enums for the stats directions, use them.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 11 Nov 2016 04:50:51 +0000 (21:50 -0700)]
blk-wbt: remove stat ops
Again a leftover from when the throttling code was generic. Now that we
just have the block user, get rid of the stat ops and indirections.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 11 Nov 2016 04:52:53 +0000 (21:52 -0700)]
blk-wbt: store queue instead of bdi
The bdi was a leftover from when the code was block layer agnostic.
Now that we just support a block layer user, store the queue directly.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 4 Nov 2016 15:34:34 +0000 (09:34 -0600)]
block: move poll code to blk-mq
The poll code is blk-mq specific, let's move it to blk-mq.c. This
is a prep patch for improving the polling code.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Sachin Shukla [Fri, 11 Nov 2016 09:04:51 +0000 (14:34 +0530)]
Block: mtip32xx: Improvement in code readability when memdup_user() fails.
There is no need to call kfree() if memdup_user() fails, as no memory
was allocated and the error in the error-valued pointer should be returned.
Signed-off-by: Sachin Shukla <sachin.s5@samsung.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
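(Sketch of the resulting pattern, not the mtip32xx code itself: memdup_user() returns either a fresh copy or an ERR_PTR, so the failure path must not call kfree().)

	static int copy_and_use(const void __user *ubuf, size_t len)
	{
		void *buf = memdup_user(ubuf, len);

		if (IS_ERR(buf))
			return PTR_ERR(buf);	/* nothing was allocated; just return the error */

		/* ... use buf ... */
		kfree(buf);
		return 0;
	}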
Jens Axboe [Fri, 11 Nov 2016 19:24:46 +0000 (12:24 -0700)]
blk-mq: blk_mq_try_issue_directly() should lookup hardware queue
A previous commit changed this to pass in the hardware queue, but
it was using the wrong hardware queue. Hence a request that was
allocated on one hardware queue ended up being issued on another
one, and that caused IO timeouts and oopses on some drivers. Since
the request holds hardware queue private resources, like a tag,
we can't just issue it on a different hardware queue.
Fixes: 2253efc850c4 ("blk-mq: Move more code into blk_mq_direct_issue_request()")
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Nov 2016 19:38:14 +0000 (12:38 -0700)]
block: hook up writeback throttling
Enable throttling of buffered writeback to make it a lot
smoother, and have far less impact on other system activity.
Background writeback should be, by definition, background
activity. The fact that we flush huge bundles of it at a time
means that it potentially has heavy impacts on foreground workloads,
which isn't ideal. We can't easily limit the sizes of writes that
we do, since that would impact file system layout in the presence
of delayed allocation. So just throttle back buffered writeback,
unless someone is waiting for it.
The algorithm for when to throttle takes its inspiration in the
CoDel networking scheduling algorithm. Like CoDel, blk-wb monitors
the minimum latencies of requests over a window of time. In that
window of time, if the minimum latency of any request exceeds a
given target, then a scale count is incremented and the queue depth
is shrunk. The next monitoring window is shrunk accordingly. Unlike
CoDel, if we hit a window that exhibits good behavior, then we
simply increment the scale count and re-calculate the limits for that
scale value. This prevents us from oscillating between a
close-to-ideal value and max all the time, instead remaining in the
windows where we get good behavior.
Unlike CoDel, blk-wb allows the scale count to go negative. This
happens if we primarily have writes going on. Unlike positive
scale counts, this doesn't change the size of the monitoring window.
When the heavy writers finish, blk-wb quickly snaps back to its
stable state of a zero scale count.
The patch registers a sysfs entry, 'wb_lat_usec'. This sets the latency
target to be met. It defaults to 2 msec for non-rotational storage, and
75 msec for rotational storage. Setting this value to '0' disables
blk-wb. Generally, a user would not have to touch this setting.
We don't enable WBT on devices that are managed with CFQ, and have
a non-root block cgroup attached. If we have a proportional share setup
on this particular disk, then the wbt throttling will interfere with
that. We don't have a strong need for wbt for that case, since we will
rely on CFQ doing that for us.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Nov 2016 19:36:15 +0000 (12:36 -0700)]
blk-wbt: add general throttling mechanism
We can hook this up to the block layer, to help throttle buffered
writes.
wbt registers a few trace points that can be used to track what is
happening in the system:
wbt_lat: 259:0: latency 2446318
wbt_stat: 259:0: rmean=2446318, rmin=2446318, rmax=2446318, rsamples=1, wmean=518866, wmin=15522, wmax=5330353, wsamples=57
wbt_step: 259:0: step down: step=1, window=72727272, background=8, normal=16, max=32
This shows a sync issue event (wbt_lat) that exceeded its time. wbt_stat
dumps the current read/write stats for that window, and wbt_step shows a
step down event where we now scale back writes. Each trace includes the
device, 259:0 in this case.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Tue, 8 Nov 2016 04:32:37 +0000 (21:32 -0700)]
block: add scalable completion tracking of requests
For legacy block, we simply track them in the request queue. For
blk-mq, we track them on a per-sw queue basis, which we can then
sum up through the hardware queues and finally to a per device
state.
The stats are tracked in, roughly, 0.1s interval windows.
Add sysfs files to display the stats.
The feature is off by default, to avoid any extra overhead. In-kernel
users of it can turn it on by setting QUEUE_FLAG_STATS in the queue
flags. We currently don't turn it on if someone just reads any of
the stats files, that is something we could add as well.
Signed-off-by: Jens Axboe <axboe@fb.com>
Tejun Heo [Thu, 10 Nov 2016 16:16:37 +0000 (11:16 -0500)]
block: cfq_cpd_alloc() should use @gfp
cfq_cpd_alloc() which is the cpd_alloc_fn implementation for cfq was
incorrectly hard coding GFP_KERNEL instead of using the mask specified
through the @gfp parameter. This currently doesn't cause any actual
issues because all current callers specify GFP_KERNEL. Fix it.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: e4a9bde9589f ("blkcg: replace blkcg_policy->cpd_size with ->cpd_alloc/free_fn() methods")
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 10 Nov 2016 15:32:34 +0000 (07:32 -0800)]
nvme: don't pass the full CQE to nvme_complete_async_event
We only need the status and result fields, and passing them explicitly
makes life a lot easier for the Fibre Channel transport which doesn't
have a full CQE for the fast path case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 10 Nov 2016 15:32:33 +0000 (07:32 -0800)]
nvme: introduce struct nvme_request
This adds a shared per-request structure for all NVMe I/O. This structure
is embedded as the first member in all NVMe transport drivers request
private data and allows to implement common functionality between the
drivers.
The first use is to replace the current abuse of the SCSI command
passthrough fields in struct request for the NVMe command passthrough,
but it will grow more fields to allow implementing things
like common abort handlers in the future.
The passthrough commands are handled by having a pointer to the SQE
(struct nvme_command) in struct nvme_request, and the union of the
possible result fields, which had to be turned from an anonymous
into a named union for that purpose. This avoids having to pass
a reference to a full CQE around and thus makes checking the result
a lot more lightweight.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Arnd Bergmann [Wed, 9 Nov 2016 12:55:35 +0000 (13:55 +0100)]
skd: fix function prototype
Building with W=1 shows a harmless warning for the skd driver:
drivers/block/skd_main.c:2959:1: error: ‘static’ is not at beginning of declaration [-Werror=old-style-declaration]
This changes the prototype to the expected formatting.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Arnd Bergmann [Wed, 9 Nov 2016 12:55:34 +0000 (13:55 +0100)]
skd: fix msix error handling
As reported by gcc -Wmaybe-uninitialized, the cleanup path for
skd_acquire_msix tries to free the already allocated msi-x vectors
in reverse order, but the index variable may not have been
used yet:
drivers/block/skd_main.c: In function ‘skd_acquire_irq’:
drivers/block/skd_main.c:3890:8: error: ‘i’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
This changes the failure path to skip releasing the interrupts
if we have not started requesting them yet.
Fixes: 180b0ae77d49 ("skd: use pci_alloc_irq_vectors")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 9 Nov 2016 02:39:28 +0000 (19:39 -0700)]
block: set REQ_SYNC if we clear REQ_FUA|REQ_PREFLUSH
If we insert a flush request, we clear REQ_PREFLUSH and/or REQ_FUA,
depending on flush settings. Since op_is_sync() factors those flags
in for deciding whether this request is sync or not, we should
set REQ_SYNC to avoid screwing up this accounting.
This should be less fragile.
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Fixes: b685d3d65ac ("block: treat REQ_FUA and REQ_PREFLUSH as synchronous")
Signed-off-by: Jens Axboe <axboe@fb.com>
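(For reference, op_is_sync() ends up looking roughly like the sketch below, which is why a flush request that loses REQ_PREFLUSH/REQ_FUA also needs REQ_SYNC to keep being accounted as sync:)

	static inline bool op_is_sync(unsigned int op)
	{
		return (op & REQ_OP_MASK) == REQ_OP_READ ||
			(op & (REQ_SYNC | REQ_FUA | REQ_PREFLUSH));
	}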
Jens Axboe [Thu, 1 Sep 2016 16:20:33 +0000 (10:20 -0600)]
writeback: track if we're sleeping on progress in balance_dirty_pages()
Note in the bdi_writeback structure whenever a task ends up sleeping
waiting for progress. We can use that information in the lower layers
to increase the priority of writes.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Andy Shevchenko [Sat, 22 Oct 2016 16:52:01 +0000 (19:52 +0300)]
skd_main: use %*ph to dump small buffers
Replace the custom approach with the %*ph specifier to dump small buffers in hex format.
Unfortunately we can't use print_hex_dump_bytes() here since the gap is
present, though one familiar with the code may change this.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Mon, 7 Nov 2016 19:14:07 +0000 (11:14 -0800)]
skd: use pci_alloc_irq_vectors
Switch the skd driver to use pci_alloc_irq_vectors. We need two calls to
pci_alloc_irq_vectors as skd only supports multiple MSI-X vectors, but not
multiple MSI vectors.
Otherwise this cleans up a lot of cruft and allows us to use a lot more common code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 27 Oct 2016 07:04:44 +0000 (09:04 +0200)]
pktcdvd: don't scribble over the bvec array
Hi Peter, hi Jens,
I've been looking over the multi page bio vec work again recently, and
one of the stumbling blocks is raw biovec access in the pktcdvd.
The first issue is that it directly sets up the page and offset pointers
in the biovec just before calling bio_add_page. As bio_add_page already
does the setup it's trivial to just switch it to stack variables for the
arguments.
The second issue is the copy code in pkt_make_local_copy, which
effectively is an opencoded version of bio_copy_data except that it
skips pages that already are the same in the source and destination.
But looking at the only caller, we just set up the bio using bio_add_page
to point exactly to the page array that pkt_make_local_copy compares,
so the pages will always be the same and we can just remove this function.
Note that all this is done based on code inspection, I don't have any
packet writing hardware myself.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Gabriel Krisman Bertazi [Wed, 28 Sep 2016 03:24:24 +0000 (00:24 -0300)]
blk-mq: Always schedule hctx->next_cpu
Commit 0e87e58bf60e ("blk-mq: improve warning for running a queue on the
wrong CPU") attempts to avoid triggering the WARN_ON in
__blk_mq_run_hw_queue when the expected CPU is dead. Problem is, in the
last batch execution before round robin, blk_mq_hctx_next_cpu can
schedule a dead CPU and also update next_cpu to the next alive CPU in
the mask, which will trigger the WARN_ON despite the previous
workaround.
The following patch fixes this scenario by always scheduling the value
in hctx->next_cpu. This changes the moment when we round-robin the CPU
running the hctx, but it really doesn't matter, since it still executes
BLK_MQ_CPU_WORK_BATCH times in a row before switching to another CPU.
Fixes: 0e87e58bf60e ("blk-mq: improve warning for running a queue on the wrong CPU")
Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Wed, 30 Mar 2016 16:21:08 +0000 (10:21 -0600)]
block: add code to track actual device queue depth
For blk-mq, ->nr_requests does track queue depth, at least at init
time. But for the older queue paths, it's simply a soft setting.
On top of that, it's generally larger than the hardware setting
on purpose, to allow backup of requests for merging.
Fill a hole in struct request_queue with a 'queue_depth' member that
drivers can set to more closely inform the block layer of the
real queue depth.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Shaohua Li [Fri, 4 Nov 2016 00:03:54 +0000 (17:03 -0700)]
blk-mq: immediately dispatch big size request
This is the corresponding part for blk-mq. A disk with multiple hardware
queues doesn't need this, as we only hold at most one request there.
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Shaohua Li [Fri, 4 Nov 2016 00:03:53 +0000 (17:03 -0700)]
block: immediately dispatch big size request
Currently the block plug holds up to 16 non-mergeable requests. This makes
sense if the request size is small, e.g. to reduce lock contention. But if
the request size is big enough, we don't need to worry about lock
contention. Holding such requests makes no sense and it lowers the disk
utilization.
In practice, this improves 10% throughput for my raid5 sequential write
workload.
The size (128k) is arbitrary right now, but it makes sure lock
contention stays small. This could probably be more intelligent, e.g. check
the average request size held. Since this is mainly for sequential IO,
it's probably not worth it.
V2: check the last request instead of the first request, so as long as
there is one big size request we flush the plug.
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
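(A sketch of the V2 check on the legacy path; the constant name and local variables are illustrative only.)

	#define BLK_PLUG_FLUSH_SIZE	(128 * 1024)

	if (!list_empty(&plug->list)) {
		struct request *last = list_entry_rq(plug->list.prev);

		/* flush the plug if we hold too many requests or the last
		 * queued request is already big enough */
		if (request_count >= BLK_MAX_REQUEST_COUNT ||
		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)
			do_flush_plug = true;
	}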
Johannes Thumshirn [Thu, 3 Nov 2016 09:38:54 +0000 (03:38 -0600)]
block: drop q argument from bsg_validate_sgv4_hdr
bsg_validate_sgv4_hdr() doesn't care about the request_queue, so drop it
from its arguments.
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:23:40 +0000 (17:23 -0700)]
nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code
Make nvme_requeue_req() check BLK_MQ_S_STOPPED instead of
QUEUE_FLAG_STOPPED. Remove the QUEUE_FLAG_STOPPED manipulations
that became superfluous because of this change. Change
blk_queue_stopped() tests into blk_mq_queue_stopped().
This patch fixes a race condition: using queue_flag_clear_unlocked()
is not safe if any other function that manipulates the queue flags
can be called concurrently, e.g. blk_cleanup_queue().
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:23:19 +0000 (17:23 -0700)]
nvme: Fix a race condition related to stopping queues
Avoid that nvme_queue_rq() is still running when nvme_stop_queues()
returns.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:22:16 +0000 (17:22 -0700)]
dm: Fix a race condition related to stopping and starting queues
Ensure that all ongoing dm_mq_queue_rq() and dm_mq_requeue_request()
calls have stopped before setting the "queue stopped" flag. This
allows removing the "queue stopped" test from dm_mq_queue_rq() and
dm_mq_requeue_request(). This patch fixes a race condition: because
dm_mq_queue_rq() is called without holding the queue lock,
BLK_MQ_S_STOPPED can be set at any time while dm_mq_queue_rq() is
in progress. This patch prevents the following hang from occurring
sporadically when using dm-mq:
INFO: task systemd-udevd:10111 blocked for more than 480 seconds.
Call Trace:
[<ffffffff8161f397>] schedule+0x37/0x90
[<ffffffff816239ef>] schedule_timeout+0x27f/0x470
[<ffffffff8161e76f>] io_schedule_timeout+0x9f/0x110
[<ffffffff8161fb36>] bit_wait_io+0x16/0x60
[<ffffffff8161f929>] __wait_on_bit_lock+0x49/0xa0
[<ffffffff8114fe69>] __lock_page+0xb9/0xc0
[<ffffffff81165d90>] truncate_inode_pages_range+0x3e0/0x760
[<ffffffff81166120>] truncate_inode_pages+0x10/0x20
[<ffffffff81212a20>] kill_bdev+0x30/0x40
[<ffffffff81213d41>] __blkdev_put+0x71/0x360
[<ffffffff81214079>] blkdev_put+0x49/0x170
[<ffffffff812141c0>] blkdev_close+0x20/0x30
[<ffffffff811d48e8>] __fput+0xe8/0x1f0
[<ffffffff811d4a29>] ____fput+0x9/0x10
[<ffffffff810842d3>] task_work_run+0x83/0xb0
[<ffffffff8106606e>] do_exit+0x3ee/0xc40
[<ffffffff8106694b>] do_group_exit+0x4b/0xc0
[<ffffffff81073d9a>] get_signal+0x2ca/0x940
[<ffffffff8101bf43>] do_signal+0x23/0x660
[<ffffffff810022b3>] exit_to_usermode_loop+0x73/0xb0
[<ffffffff81002cb0>] syscall_return_slowpath+0xb0/0xc0
[<ffffffff81624e33>] entry_SYSCALL_64_fastpath+0xa6/0xa8
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:22:00 +0000 (17:22 -0700)]
dm: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code
Instead of manipulating both QUEUE_FLAG_STOPPED and BLK_MQ_S_STOPPED
in the dm start and stop queue functions, only manipulate the latter
flag. Change blk_queue_stopped() tests into blk_mq_queue_stopped().
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:21:41 +0000 (17:21 -0700)]
blk-mq: Add a kick_requeue_list argument to blk_mq_requeue_request()
Most blk_mq_requeue_request() and blk_mq_add_to_requeue_list() calls
are followed by kicking the requeue list. Hence add an argument to
these two functions that allows to kick the requeue list. This was
proposed by Christoph Hellwig.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Wed, 2 Nov 2016 16:09:51 +0000 (10:09 -0600)]
blk-mq: Introduce blk_mq_quiesce_queue()
blk_mq_quiesce_queue() waits until ongoing .queue_rq() invocations
have finished. This function does *not* wait until all outstanding
requests have finished (this means invocation of request.end_io()).
The algorithm used by blk_mq_quiesce_queue() is as follows:
* Hold either an RCU read lock or an SRCU read lock around
.queue_rq() calls. The former is used if .queue_rq() does not
block and the latter if .queue_rq() may block.
* blk_mq_quiesce_queue() first calls blk_mq_stop_hw_queues()
followed by synchronize_srcu() or synchronize_rcu(). The latter
call waits for .queue_rq() invocations that started before
blk_mq_quiesce_queue() was called.
* The blk_mq_hctx_stopped() calls that control whether or not
.queue_rq() will be called are called with the (S)RCU read lock
held. This is necessary to avoid race conditions against
blk_mq_quiesce_queue().
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
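(A sketch of that scheme; helper and field names are illustrative of the approach rather than a verbatim copy of blk-mq.c.)

	/* Dispatch side: every .queue_rq() invocation runs inside an (S)RCU
	 * read-side critical section, chosen by whether the driver may block. */
	static void my_run_hw_queue(struct blk_mq_hw_ctx *hctx)
	{
		if (!(hctx->flags & BLK_MQ_F_BLOCKING)) {
			rcu_read_lock();
			if (!blk_mq_hctx_stopped(hctx))
				my_dispatch_rq_list(hctx);	/* placeholder dispatch helper */
			rcu_read_unlock();
		} else {
			int idx = srcu_read_lock(&hctx->queue_rq_srcu);

			if (!blk_mq_hctx_stopped(hctx))
				my_dispatch_rq_list(hctx);
			srcu_read_unlock(&hctx->queue_rq_srcu, idx);
		}
	}

	/* Quiesce side: stop the hw queues, then wait for .queue_rq() calls
	 * that started before the stop to drain. */
	void blk_mq_quiesce_queue(struct request_queue *q)
	{
		struct blk_mq_hw_ctx *hctx;
		unsigned int i;
		bool rcu = false;

		blk_mq_stop_hw_queues(q);
		queue_for_each_hw_ctx(q, hctx, i) {
			if (hctx->flags & BLK_MQ_F_BLOCKING)
				synchronize_srcu(&hctx->queue_rq_srcu);
			else
				rcu = true;
		}
		if (rcu)
			synchronize_rcu();
	}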
Bart Van Assche [Sat, 29 Oct 2016 00:20:49 +0000 (17:20 -0700)]
blk-mq: Remove blk_mq_cancel_requeue_work()
Since blk_mq_requeue_work() no longer restarts stopped queues,
canceling requeue work is no longer needed to prevent a
stopped queue from being restarted. Hence remove this function.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:20:32 +0000 (17:20 -0700)]
blk-mq: Avoid that requeueing starts stopped queues
Since blk_mq_requeue_work() starts stopped queues and since
execution of this function can be scheduled after a queue has
been stopped it is not possible to stop queues without using
an additional state variable to track whether or not the queue
has been stopped. Hence modify blk_mq_requeue_work() such that it
does not start stopped queues. My conclusion after a review of
the blk_mq_stop_hw_queues() and blk_mq_{delay_,}kick_requeue_list()
callers is as follows:
* In the dm driver starting and stopping queues should only happen
if __dm_suspend() or __dm_resume() is called and not if the
requeue list is processed.
* In the SCSI core queue stopping and starting should only be
performed by the scsi_internal_device_block() and
scsi_internal_device_unblock() functions but not by any other
function. Although the blk_mq_stop_hw_queue() call in
scsi_queue_rq() may help to reduce CPU load if a LLD queue is
full, figuring out whether or not a queue should be restarted
when requeueing a command would require to introduce additional
locking in scsi_mq_requeue_cmd() to avoid a race with
scsi_internal_device_block(). Avoid this complexity by removing
the blk_mq_stop_hw_queue() call from scsi_queue_rq().
* In the NVMe core only the functions that call
blk_mq_start_stopped_hw_queues() explicitly should start stopped
queues.
* A blk_mq_start_stopped_hw_queues() call must be added in the
xen-blkfront driver in its blkif_recover() function.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: James Bottomley <jejb@linux.vnet.ibm.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:20:02 +0000 (17:20 -0700)]
blk-mq: Move more code into blk_mq_direct_issue_request()
Move the "hctx stopped" test and the insert request calls into
blk_mq_direct_issue_request(). Rename that function into
blk_mq_try_issue_directly() to reflect its new semantics. Pass
the hctx pointer to that function instead of looking it up a
second time. These changes avoid that code has to be duplicated
in the next patch.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:19:37 +0000 (17:19 -0700)]
blk-mq: Introduce blk_mq_queue_stopped()
The function blk_queue_stopped() allows to test whether or not a
traditional request queue has been stopped. Introduce a helper
function that allows block drivers to query easily whether or not
one or more hardware contexts of a blk-mq queue have been stopped.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Bart Van Assche [Sat, 29 Oct 2016 00:19:15 +0000 (17:19 -0700)]
blk-mq: Introduce blk_mq_hctx_stopped()
Multiple functions test the BLK_MQ_S_STOPPED bit so introduce
a helper function that performs this test.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
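(Both helpers are thin wrappers; roughly, as sketched from the two descriptions above:)

	static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
	{
		return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
	}

	bool blk_mq_queue_stopped(struct request_queue *q)
	{
		struct blk_mq_hw_ctx *hctx;
		unsigned int i;

		queue_for_each_hw_ctx(q, hctx, i)
			if (blk_mq_hctx_stopped(hctx))
				return true;
		return false;
	}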
Bart Van Assche [Sat, 29 Oct 2016 00:18:48 +0000 (17:18 -0700)]
blk-mq: Do not invoke .queue_rq() for a stopped queue
The meaning of the BLK_MQ_S_STOPPED flag is "do not call
.queue_rq()". Hence modify blk_mq_make_request() such that requests
are queued instead of issued if a queue has been stopped.
Reported-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Kent Overstreet [Mon, 31 Oct 2016 17:59:24 +0000 (11:59 -0600)]
block: add bio_iov_iter_get_pages()
This is a helper that pins down a range from an iov_iter and adds it to
a bio without requiring a separate memory allocation for the page array.
It will be used for upcoming direct I/O implementations for block devices
and iomap based file systems.
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
[hch: ported to the iov_iter interface, renamed and added comments.
All blame should be directed to me and all fame should go to Kent
after this!]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
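(A hedged sketch of the intended usage, mirroring the simple direct I/O fast path earlier in this series: an on-stack bio with inline bvecs is populated straight from the iov_iter. 'NR_INLINE_VECS', 'bdev', 'pos' and 'iter' are placeholders.)

	struct bio_vec inline_vecs[NR_INLINE_VECS];
	struct bio bio;
	int ret;

	bio_init(&bio, inline_vecs, NR_INLINE_VECS);
	bio.bi_bdev = bdev;
	bio.bi_iter.bi_sector = pos >> 9;

	/* pins the user pages referenced by 'iter' into the inline bvecs,
	 * with no separate page array allocation */
	ret = bio_iov_iter_get_pages(&bio, iter);
	if (ret)
		return ret;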
Jens Axboe [Tue, 1 Nov 2016 16:01:35 +0000 (10:01 -0600)]
writeback: mark background writeback as such
If we're doing background type writes, then use the appropriate
background write flags for that.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Tue, 1 Nov 2016 16:00:38 +0000 (10:00 -0600)]
writeback: add wbc_to_write_flags()
Add wbc_to_write_flags(), which returns the write modifier flags to use,
based on a struct writeback_control. No functional changes in this
patch, but it prepares us for factoring other wbc fields for write type.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Tue, 1 Nov 2016 15:52:57 +0000 (09:52 -0600)]
block: add REQ_BACKGROUND
This adds a new request flag, REQ_BACKGROUND, that callers can use to
tell the block layer that this is background (non-urgent) IO.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:17 +0000 (07:40 -0600)]
block: remove the CONFIG_BLOCK ifdef in blk_types.h
Now that we have a separate header for struct bio_vec there is absolutely
no excuse for including this header from non-block I/O code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:16 +0000 (07:40 -0600)]
mm: only include blk_types in swap.h if CONFIG_SWAP is enabled
It's only needed for the CONFIG_SWAP-only use of bio_end_io_t.
Because CONFIG_SWAP implies CONFIG_BLOCK this will allow to drop some
ifdefs in blk_types.h.
Instead we'll need to add a few explicit includes that were implicit
before, though.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:15 +0000 (07:40 -0600)]
ceph: don't include blk_types.h in messenger.h
The file only needs the struct bvec_iter declaration, which is available
from bvec.h.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:14 +0000 (07:40 -0600)]
arm, arm64: don't include blk_types.h in <asm/io.h>
No need for it - we only use struct bio_vec in prototypes and already have
forward declarations for it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:13 +0000 (07:40 -0600)]
block,fs: untangle fs.h and blk_types.h
Nothing in fs.h should require blk_types.h to be included.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:12 +0000 (07:40 -0600)]
block, fs: move submit_bio to bio.h
This is where all the other bio operations live, so users must include
bio.h anyway.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:11 +0000 (07:40 -0600)]
fs: decouple READ and WRITE from the block layer ops
Move READ and WRITE to kernel.h and don't define them in terms of block
layer ops; they are our generic data direction indicators these days
and have no more resemblance with the block layer ops.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:10 +0000 (07:40 -0600)]
block,fs: use REQ_* flags directly
Remove the WRITE_* and READ_SYNC wrappers, and just use the flags
directly. Where applicable this also drops usage of the
bio_set_op_attrs wrapper.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:09 +0000 (07:40 -0600)]
block: replace REQ_NOIDLE with REQ_IDLE
Noidle should be the default for writes, as seen from all the compound
definitions in fs.h using it. In fact only direct I/O really should
be using NOIDLE, so turn the whole flag around to get the defaults
right, which will make our life much easier especially once the
WRITE_* defines go away.
This assumes all the existing "raw" users of REQ_SYNC for writes
want noidle behavior, which seems to be spot on from a quick audit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:08 +0000 (07:40 -0600)]
block: treat REQ_FUA and REQ_PREFLUSH as synchronous
Instead of requiring everyone to specify the REQ_SYNC flag as well.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:07 +0000 (07:40 -0600)]
block: don't use REQ_SYNC in the READ_SYNC definition
Reads are synchronous per definition, don't add another flag for it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:06 +0000 (07:40 -0600)]
btrfs: use op_is_sync to check for synchronous requests
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:05 +0000 (07:40 -0600)]
bcache: use op_is_sync to check for synchronous requests
(and remove one layer of masking for the op_is_write call next to it).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:04 +0000 (07:40 -0600)]
umem: use op_is_sync to check for synchronous requests
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:03 +0000 (07:40 -0600)]
blk-cgroup: use op_is_sync to check for synchronous requests
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:02 +0000 (07:40 -0600)]
cfq-iosched: use op_is_sync instead of opencoding it
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Oct 2016 13:12:15 +0000 (15:12 +0200)]
block: add a proper block layer data direction encoding
Currently the block layer op_is_write, bio_data_dir and rq_data_dir
helpers treat every operation that is not a READ as a data out operation.
This has worked surprisingly long, but the new REQ_OP_ZONE_REPORT operation
actually adds a second operation that reads data from the device.
Surprisingly nothing critical relied on this direction, but this might
be a good opportunity to properly fix this issue up.
We take a little inspiration and use the least significant bit of the
operation number to encode the data direction, which just requires us
to renumber the operations to fit this scheme.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
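(With the low bit of the op number encoding the direction, the data direction helpers reduce to a single bit test; a sketch:)

	static inline bool op_is_write(unsigned int op)
	{
		return op & 1;	/* even op numbers read, odd op numbers write */
	}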
Christoph Hellwig [Fri, 28 Oct 2016 14:48:16 +0000 (08:48 -0600)]
block: better op and flags encoding
Now that we don't need the common flags to overflow outside the range
of a 32-bit type we can encode them the same way for both the bio and
request fields. This in addition allows us to place the operation
first (and make some room for more ops while we're at it) and to
stop having to shift around the operation values.
In addition this allows passing around only one value in the block layer
instead of two (and eventually also in the file systems, but we can do
that later) and thus clean up a lot of code.
Last but not least this allows decreasing the size of the cmd_flags
field in struct request to 32-bits. Various functions passing this
value could also be updated, but I'd like to avoid the churn for now.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Oct 2016 13:12:13 +0000 (15:12 +0200)]
block: split out request-only flags into a new namespace
A lot of the REQ_* flags are only used on struct requests, and only of
use to the block layer and a few drivers that dig into struct request
internals.
This patch adds a new req_flags_t rq_flags field to struct request for
them, and thus dramatically shrinks the number of common request flags. It
also removes the unfortunate situation where we have to fit the fields
from the same enum into 32 bits for struct bio and 64 bits for
struct request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Oct 2016 13:12:12 +0000 (15:12 +0200)]
block: replace REQ_THROTTLED with a bio flag
It's the last bio-only REQ_* flag, and we have space for it in the bio
bi_flags field.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Oct 2016 13:12:11 +0000 (15:12 +0200)]
block: move REQ_RAHEAD to common flags
The information that an I/O is a read-ahead can be useful for drivers.
In fact the NVMe driver already checks it, even if it won't ever be set
at the moment.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Oct 2016 13:12:10 +0000 (15:12 +0200)]
block: REQ_NOMERGE is common to the bio and request
So move it into the common section of the request flags.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Christoph Hellwig [Thu, 20 Oct 2016 13:12:09 +0000 (15:12 +0200)]
block: remove bio_is_rw
With the addition of the zoned operations the tests in this function
became incorrect. But I think it's much better to just open code the
allowed operations in the only caller anyway.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Fri, 28 Oct 2016 01:03:55 +0000 (19:03 -0600)]
blk-mq: get rid of confusing blk_map_ctx structure
We can just use struct blk_mq_alloc_data - it has a few more
members, but we allocate it further down the stack anyway. So
this cleans up the code, and reduces the stack overhead a bit.
Signed-off-by: Jens Axboe <axboe@fb.com>
Jens Axboe [Thu, 27 Oct 2016 15:49:19 +0000 (09:49 -0600)]
blk-mq: update hardware and software queues for sleeping alloc
If we end up sleeping due to running out of requests, we should
update the hardware and software queues in the map ctx structure.
Otherwise we could end up having rq->mq_ctx point to the pre-sleep
context, and risk corrupting ctx->rq_list since we'll be
grabbing the wrong lock when inserting the request.
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Reported-by: Chris Mason <clm@fb.com>
Tested-by: Chris Mason <clm@fb.com>
Fixes: 63581af3f31e ("blk-mq: remove non-blocking pass in blk_mq_map_request")
Signed-off-by: Jens Axboe <axboe@fb.com>
Mike Snitzer [Tue, 25 Oct 2016 14:46:23 +0000 (08:46 -0600)]
brd: remove support for BLKFLSBUF
Discontinue having the brd driver destructively free all pages in the
ramdisk in response to the BLKFLSBUF ioctl. Doing so allows a BLKFLSBUF
ioctl issued to a logical partition to destroy pages of the parent brd
device (and all other partitions of that brd device).
This change breaks compatibility - but in this case the compatibility
breaks more than it helps.
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Jan Kara [Tue, 25 Oct 2016 11:53:27 +0000 (13:53 +0200)]
brd: Switch rd_size to unsigned long
Currently rd_size is an int, which leads to overflow and a bogus device size
once the requested ramdisk size is 1 TB or more. Although these days
ramdisks with a size of 1 TB are mostly a mistake, the days when they are
useful are not far off.
Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
Arnd Bergmann [Fri, 21 Oct 2016 15:32:24 +0000 (17:32 +0200)]
sd: fix uninitialized variable access in error handling
If sd_zbc_report_zones fails, the check for 'zone_blocks == 0'
later in the function accesses uninitialized data:
drivers/scsi/sd_zbc.c: In function ‘sd_zbc_read_zones’:
drivers/scsi/sd_zbc.c:520:7: error: ‘zone_blocks’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
This sets it to zero, which has the desired effect of leaving
sd_zbc_read_zones successfully with sdkp->zone_blocks = 0.
Fixes: 89d947561077 ("sd: Implement support for ZBC devices")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>