openwrt/staging/blogic.git
10 years ago  dm thin: suspend/resume active thin devices when reloading thin-pool
Mike Snitzer [Wed, 29 Oct 2014 00:58:45 +0000 (20:58 -0400)]
dm thin: suspend/resume active thin devices when reloading thin-pool

Before this change it was expected that userspace would first suspend
all active thin devices, reload/resize the thin-pool target, then resume
all active thin devices.  Now the thin-pool suspend/resume will trigger
the suspend/resume of all active thins via appropriate calls to
dm_internal_suspend and dm_internal_resume.

Store the mapped_device for each thin device in struct thin_c to make
these calls possible.
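
As a rough sketch, the cascade described above amounts to something
like the following (list and field names are taken from this text and
are assumptions, not verbatim kernel code):

  /* suspend/resume every active thin when the pool target does */
  static void pool_suspend_active_thins(struct pool *pool)
  {
          struct thin_c *tc;

          list_for_each_entry(tc, &pool->active_thins, list)
                  dm_internal_suspend(tc->thin_md);
  }

  static void pool_resume_active_thins(struct pool *pool)
  {
          struct thin_c *tc;

          list_for_each_entry(tc, &pool->active_thins, list)
                  dm_internal_resume(tc->thin_md);
  }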

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
10 years ago  dm: enhance internal suspend and resume interface
Mike Snitzer [Tue, 28 Oct 2014 22:34:52 +0000 (18:34 -0400)]
dm: enhance internal suspend and resume interface

Rename dm_internal_{suspend,resume} to dm_internal_{suspend,resume}_fast
-- dm-stats will continue using these methods to avoid all the extra
suspend/resume logic that is not needed in order to quickly flush IO.

Introduce dm_internal_suspend_noflush() variant that actually calls the
mapped_device's target callbacks -- otherwise target-specific hooks are
avoided (e.g. dm-thin's thin_presuspend and thin_postsuspend).  Common
code between dm_internal_{suspend_noflush,resume} and
dm_{suspend,resume} was factored out as __dm_{suspend,resume}.

Update dm_internal_{suspend_noflush,resume} to always take and release
the mapped_device's suspend_lock.  Also update dm_{suspend,resume} to be
aware of potential for DM_INTERNAL_SUSPEND_FLAG to be set and respond
accordingly by interruptibly waiting for the DM_INTERNAL_SUSPEND_FLAG to
be cleared.  Add lockdep annotation to dm_suspend() and dm_resume().

The existing DM_SUSPEND_FLAG remains unchanged.
DM_INTERNAL_SUSPEND_FLAG is set by dm_internal_suspend_noflush() and
cleared by dm_internal_resume().

Both DM_SUSPEND_FLAG and DM_INTERNAL_SUSPEND_FLAG may be set if a device
was already suspended when dm_internal_suspend_noflush() was called --
this can be thought of as a "nested suspend".  A "nested suspend" can
occur with legacy userspace dm-thin code that might suspend all active
thin volumes before suspending the pool for resize.

But otherwise, in the normal dm-thin-pool suspend case moving forward:
the thin-pool will have DM_SUSPEND_FLAG set and all active thins from
that thin-pool will have DM_INTERNAL_SUSPEND_FLAG set.

Also add DM_INTERNAL_SUSPEND_FLAG to status report.  This new
DM_INTERNAL_SUSPEND_FLAG state is being reported to assist with
debugging (e.g. 'dmsetup info' will report an internally suspended
device accordingly).
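
A compact sketch of the resulting flag states (flag names are from
this text; the exact helpers are assumptions):

  dm_suspend(md, flags);           /* sets DM_SUSPEND_FLAG */
  dm_internal_suspend_noflush(md); /* sets DM_INTERNAL_SUSPEND_FLAG too:
                                    * a "nested suspend" */
  dm_internal_resume(md);          /* clears only DM_INTERNAL_SUSPEND_FLAG,
                                    * so the device stays suspended */

  /* dm_suspend()/dm_resume() now wait interruptibly for
   * DM_INTERNAL_SUSPEND_FLAG to clear before proceeding. */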

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
10 years ago  dm thin: do not allow thin device activation while pool is suspended
Mike Snitzer [Fri, 7 Nov 2014 20:09:46 +0000 (15:09 -0500)]
dm thin: do not allow thin device activation while pool is suspended

Otherwise IO could be issued to the pool while it is suspended.

Care was taken to properly interlock between the thin and thin-pool
targets when accessing the pool's 'suspended' flag.  The thin_ctr will
not add a new thin device to the pool's active_thins list if the pool is
suspended.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
10 years ago  dm: add presuspend_undo hook to target_type
Mike Snitzer [Wed, 29 Oct 2014 00:13:31 +0000 (20:13 -0400)]
dm: add presuspend_undo hook to target_type

The DM thin-pool target now must undo the changes performed during
pool_presuspend(), so introduce a presuspend_undo hook in target_type.
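
A sketch of where the new hook sits in struct target_type
(neighbouring members elided; exact declaration style assumed):

  struct target_type {
          ...
          void (*presuspend)(struct dm_target *ti);
          void (*presuspend_undo)(struct dm_target *ti); /* new hook */
          void (*postsuspend)(struct dm_target *ti);
          ...
  };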

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
10 years ago  dm: return earlier from dm_blk_ioctl if target doesn't implement .ioctl
Mike Snitzer [Sun, 16 Nov 2014 19:21:47 +0000 (14:21 -0500)]
dm: return earlier from dm_blk_ioctl if target doesn't implement .ioctl

No point checking if the device is suspended if the current target
doesn't even implement .ioctl.
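
A sketch of the reordered checks in dm_blk_ioctl (error codes and
surrounding flow are assumptions):

  if (!tgt->type->ioctl)
          goto out;               /* r stays -ENOTTY */

  if (dm_suspended_md(md)) {
          r = -EAGAIN;            /* only reached for targets with .ioctl */
          goto out;
  }

  r = tgt->type->ioctl(tgt, cmd, arg);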

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: remove stale 'trim' message in block comment above pool_message
Mike Snitzer [Fri, 7 Nov 2014 20:27:56 +0000 (15:27 -0500)]
dm thin: remove stale 'trim' message in block comment above pool_message

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: fix a race in thin_dtr
Mikulas Patocka [Wed, 5 Nov 2014 22:00:13 +0000 (17:00 -0500)]
dm thin: fix a race in thin_dtr

As long as struct thin_c is in the list, anyone can grab a reference of
it.  Consequently, we must wait for the reference count to drop to zero
*after* we remove the structure from the list, not before.
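
A sketch of the corrected ordering (identifier names are assumptions
based on this description):

  spin_lock_irqsave(&tc->pool->lock, flags);
  list_del_rcu(&tc->list);                /* unlink first ... */
  spin_unlock_irqrestore(&tc->pool->lock, flags);
  synchronize_rcu();

  thin_put(tc);                           /* ... then drop our reference */
  wait_for_completion(&tc->can_destroy);  /* and wait for it to hit zero */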

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm cache: emit a warning message if there are a lot of cache blocks
Joe Thornber [Tue, 11 Nov 2014 11:58:32 +0000 (11:58 +0000)]
dm cache: emit a warning message if there are a lot of cache blocks

Loading and saving millions of block mappings takes time.  We may as
well explain what's going on, and encourage people to use a larger
cache block size.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm cache: improve discard support
Joe Thornber [Thu, 6 Nov 2014 10:18:04 +0000 (10:18 +0000)]
dm cache: improve discard support

Safely allow the discard blocksize to be larger than the cache blocksize
by using the bio prison's range locking support.  This also improves
discard performance considerably because larger discards are issued to the
dm-cache device.  The discard blocksize was always intended to be
greater than the cache blocksize.  But until now it wasn't implemented
safely.

Also, by safely restoring the ability to have discard blocksize larger
than cache blocksize we're able to significantly reduce the memory used
for the cache's discard bitset.  Before, with a small discard blocksize,
the discard bitset could get quite large because its size is a function
of the discard blocksize and the origin device's size.  For example,
previously, using a 32KB cache blocksize with a 40TB origin resulted in
1280MB of incore memory use for the discard bitset!  Now, the discard
blocksize is scaled up accordingly to ensure the discard bitset is
capped at 2**14 bits (16k bits, i.e. 2KB).
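
A sketch of the scaling described above (function and variable names
are assumptions):

  static sector_t calc_discard_block_size(sector_t cache_block_size,
                                          sector_t origin_size)
  {
          sector_t discard_block_size = cache_block_size;

          /* double the discard blocksize until the bitset fits the cap */
          while (origin_size &&
                 origin_size / discard_block_size > (1 << 14))
                  discard_block_size *= 2;

          return discard_block_size;
  }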

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm cache: revert "prevent corruption caused by discard_block_size > cache_block_size"
Joe Thornber [Thu, 6 Nov 2014 14:38:01 +0000 (14:38 +0000)]
dm cache: revert "prevent corruption caused by discard_block_size > cache_block_size"

This reverts commit d132cc6d9e92424bb9d4fd35f5bd0e55d583f4be because we
actually do want to allow the discard blocksize to be larger than the
cache blocksize.  Further dm-cache discard changes will make this
possible.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm cache: revert "remove remainder of distinct discard block size"
Joe Thornber [Fri, 7 Nov 2014 14:47:07 +0000 (14:47 +0000)]
dm cache: revert "remove remainder of distinct discard block size"

This reverts commit 64ab346a360a4b15c28fb8531918d4a01f4eabd9 because we
actually do want to allow the discard blocksize to be larger than the
cache blocksize.  Further dm-cache discard changes will make this
possible.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm bio prison: introduce support for locking ranges of blocks
Joe Thornber [Wed, 17 Sep 2014 09:17:39 +0000 (10:17 +0100)]
dm bio prison: introduce support for locking ranges of blocks

Ranges will be placed in the same cell if they overlap.

Range locking is a prerequisite for more efficient multi-block discard
support in both the cache and thin-provisioning targets.
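
A sketch of what range locking implies for the cell key (field layout
is an assumption based on this description):

  struct dm_cell_key {
          int virtual;
          dm_thin_id dev;
          dm_block_t block_begin, block_end;  /* half-open [begin, end) */
  };

  /* two keys are placed in the same cell when their ranges overlap */
  static bool ranges_overlap(struct dm_cell_key *a, struct dm_cell_key *b)
  {
          return a->block_begin < b->block_end &&
                 b->block_begin < a->block_end;
  }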

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm cache policy mq: simplify ability to promote sequential IO to the cache
Mike Snitzer [Thu, 30 Oct 2014 14:02:01 +0000 (10:02 -0400)]
dm cache policy mq: simplify ability to promote sequential IO to the cache

Before, if the user wanted sequential IO to be promoted to the cache
they'd have to set sequential_threshold to some nebulous large value.

Now, the user may easily disable sequential IO detection (and sequential
IO's implicit bypass of the cache) by setting sequential_threshold to 0.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm cache policy mq: tweak algorithm that decides when to promote a block
Joe Thornber [Wed, 22 Oct 2014 13:30:58 +0000 (14:30 +0100)]
dm cache policy mq: tweak algorithm that decides when to promote a block

Rather than maintaining a separate promote_threshold variable that we
periodically update we now use the hit count of the oldest clean
block.  Also add a fudge factor to discourage demoting dirty blocks.

In some tests this makes a sizeable difference, because the old code
was too eager to demote blocks.  For example, device-mapper-test-suite's
git_extract_cache_quick test goes from taking 190 seconds, to 142
(linear on spindle takes 250).

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: do not call dm_sync_table() when creating new devices
Hannes Reinecke [Wed, 5 Nov 2014 13:35:50 +0000 (14:35 +0100)]
dm: do not call dm_sync_table() when creating new devices

When creating new devices dm_sync_table() calls
synchronize_rcu_expedited(), causing _all_ pending RCU pointers to be
flushed. This causes a latency overhead that is especially noticeable
when creating lots of devices.

And all of this is pointless as there are no old maps to be
disconnected, and hence no stale pointers which would need to be
cleared up.
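
A sketch of the change this implies in the table-binding path (the
exact surroundings are assumptions):

  old_map = md->map;                 /* NULL when creating a new device */
  rcu_assign_pointer(md->map, new_map);
  if (old_map)
          dm_sync_table(md);         /* nothing to flush otherwise */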

Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: sparse: Annotate field with __rcu for checking
Pranith Kumar [Tue, 28 Oct 2014 22:09:57 +0000 (15:09 -0700)]
dm: sparse: Annotate field with __rcu for checking

Annotate the map field with __rcu since it is an RCU pointer, which
sparse can then check.
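
The annotation, sketched (other members elided):

  struct mapped_device {
          ...
          struct dm_table __rcu *map;  /* sparse now flags plain accesses */
          ...
  };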

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: Use rcu_dereference() for accessing rcu pointer
Pranith Kumar [Tue, 28 Oct 2014 22:09:56 +0000 (15:09 -0700)]
dm: Use rcu_dereference() for accessing rcu pointer

The map field in 'struct mapped_device' is an rcu pointer. Use rcu_dereference()
while accessing it.
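
A sketch of the read-side pattern this enforces (as in a fast-path
table lookup; surrounding code assumed):

  rcu_read_lock();
  map = rcu_dereference(md->map);   /* was a plain load before */
  /* ... use map ... */
  rcu_read_unlock();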

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: refactor requeue_io to eliminate spinlock bouncing
Mike Snitzer [Sun, 19 Oct 2014 11:52:44 +0000 (07:52 -0400)]
dm thin: refactor requeue_io to eliminate spinlock bouncing

Also refactor some other bio_list erroring helpers.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: optimize retry_bios_on_resume
Mike Snitzer [Sun, 19 Oct 2014 12:23:09 +0000 (08:23 -0400)]
dm thin: optimize retry_bios_on_resume

Eliminate redundant should_error_unserviceable_bio check and error
loop.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: sort the deferred cells
Joe Thornber [Fri, 10 Oct 2014 15:42:10 +0000 (16:42 +0100)]
dm thin: sort the deferred cells

Sort the cells in logical block order before processing each cell in
process_thin_deferred_cells().  This significantly improves the on-disk
layout on rotational storage, thereby improving read performance.
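
A sketch of the comparator such a sort needs (identifier names are
assumptions based on this description):

  static int cmp_cells(const void *lhs, const void *rhs)
  {
          struct dm_bio_prison_cell *a = *(struct dm_bio_prison_cell **) lhs;
          struct dm_bio_prison_cell *b = *(struct dm_bio_prison_cell **) rhs;

          /* order deferred cells by the sector of the holding bio */
          if (a->holder->bi_iter.bi_sector < b->holder->bi_iter.bi_sector)
                  return -1;
          if (a->holder->bi_iter.bi_sector > b->holder->bi_iter.bi_sector)
                  return 1;
          return 0;
  }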

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: direct dispatch when breaking sharing
Joe Thornber [Wed, 15 Oct 2014 13:46:58 +0000 (14:46 +0100)]
dm thin: direct dispatch when breaking sharing

This use of direct submission in process_shared_bio() reduces latency
for submitting bios in the shared cell by avoiding adding those bios to
the deferred list and waiting for the next iteration of the worker.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: remap the bios in a cell immediately
Joe Thornber [Fri, 10 Oct 2014 14:27:16 +0000 (15:27 +0100)]
dm thin: remap the bios in a cell immediately

This use of direct submission in process_prepared_mapping() reduces
latency for submitting bios in a cell by avoiding adding those bios to
the deferred list and waiting for the next iteration of the worker.

But this direct submission exposes the potential for a race between
releasing a cell and incrementing the deferred set.  Fix this by introducing
dm_cell_visit_release() and refactoring inc_remap_and_issue_cell()
accordingly.
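
A sketch of the new primitive's shape (prototype reconstructed from
this description, so treat it as approximate):

  /*
   * Visit the cell's contents and then release it, all under the
   * prison's lock, so no bio can slip in between the two steps.
   */
  void dm_cell_visit_release(struct dm_bio_prison *prison,
                             void (*visit_fn)(void *context,
                                              struct dm_bio_prison_cell *cell),
                             void *context,
                             struct dm_bio_prison_cell *cell);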

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: defer whole cells rather than individual bios
Joe Thornber [Fri, 10 Oct 2014 12:43:14 +0000 (13:43 +0100)]
dm thin: defer whole cells rather than individual bios

This avoids dropping the cell, so increases the probability that other
bios will collect within the cell, rather than being passed individually
to the worker.

Also add required process_cell and process_discard_cell error handling
wrappers and set associated pool-mode function pointers accordingly.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: factor out remap_and_issue_overwrite
Mike Snitzer [Thu, 9 Oct 2014 23:20:21 +0000 (19:20 -0400)]
dm thin: factor out remap_and_issue_overwrite

Purely cleanup of duplicated code, no functional change.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: performance improvement to discard processing
Joe Thornber [Fri, 12 Sep 2014 10:34:01 +0000 (11:34 +0100)]
dm thin: performance improvement to discard processing

When processing a discard bio, if the block is already quiesced do the
discard immediately rather than adding the mapping to a list for the
next iteration of the worker thread.

Discarding a fully provisioned 100G thin volume with 64k block size goes
from 860s to 95s with this change.

Clearly there's something wrong with the worker architecture, more
investigation needed.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: implement thin_merge
Mike Snitzer [Thu, 9 Oct 2014 19:24:12 +0000 (15:24 -0400)]
dm thin: implement thin_merge

Introduce thin_merge so that any additional constraints from the data
volume may be taken into account when determining the maximum number of
sectors that can be issued relative to the specified logical offset.

This is particularly important if/when the data volume is layered on top
of a more sophisticated device (e.g. dm-raid or some other DM target).
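
A sketch of how such a merge hook can delegate to the data device
(field names are assumptions based on this description):

  static int thin_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
                        struct bio_vec *biovec, int max_size)
  {
          struct thin_c *tc = ti->private;
          struct request_queue *q = bdev_get_queue(tc->pool_dev->bdev);

          if (!q->merge_bvec_fn)
                  return max_size;

          /* let the (possibly stacked) data device impose its limits */
          bvm->bi_bdev = tc->pool_dev->bdev;
          return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
  }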

Reviewed-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: improve documentation and code clarity in dm_merge_bvec
Mike Snitzer [Thu, 9 Oct 2014 23:32:22 +0000 (19:32 -0400)]
dm: improve documentation and code clarity in dm_merge_bvec

These code changes do not introduce a functional change.

But bio_add_page() will never attempt to build up a bio larger than
queue_max_sectors().  Similarly, bio_get_nr_vecs() is also bound by
queue_max_sectors().  Therefore, there is no point in allowing
dm_merge_bvec() to answer "how many sectors can a bio have at this
offset?" with anything larger than queue_max_sectors().  Using
queue_max_sectors() rather than BIO_MAX_SECTORS serves to more
accurately convey the limits that are being imposed.

Also, use unlikely() to clarify the fact that the defensive code in
dm_merge_bvec() relative to max_size going negative shouldn't ever
happen -- if it does happen there is a bug in the block layer for
requesting larger than dm_merge_bvec()'s initial response for a given
offset.  Also, update a comment in dm_merge_bvec() relative to
max_hw_sectors_kb.  And fix empty newline whitespace.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: adjust max_sectors_kb based on thinp blocksize
Mike Snitzer [Thu, 9 Oct 2014 22:43:25 +0000 (18:43 -0400)]
dm thin: adjust max_sectors_kb based on thinp blocksize

Allows for filesystems to submit bios that are a factor of the thinp
blocksize, improving dm-thinp efficiency (particularly when the data
volume is RAID).

Also set io_min to max_sectors_kb if it is a factor of the thinp
blocksize.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: throttle incoming IO
Joe Thornber [Mon, 6 Oct 2014 14:45:59 +0000 (15:45 +0100)]
dm thin: throttle incoming IO

Throttle IO based on the time it's taking the worker to do one loop.
There were reports of hung task timeouts occurring and it was observed
that the excessively long avgqu-sz (as reported by iostat) was
contributing to these hung tasks.

Throttling definitely helps dm-thinp perform better under heavy IO load
(without being detrimental by being overzealous).  It reduces avgqu-sz
drastically, e.g.: from 60K to ~6K, and even as low as 150 once metadata
is cached by bufio, when dirty_ratio=5, dirty_background_ratio=2.  And
avgqu-sz stays at or below 30K even with dirty_ratio=20,
dirty_background_ratio=10.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin: prefetch missing metadata pages
Joe Thornber [Mon, 6 Oct 2014 14:28:30 +0000 (15:28 +0100)]
dm thin: prefetch missing metadata pages

Prefetch metadata at the start of the worker thread and then again every
128th bio processed from the deferred list.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm transaction manager: add support for prefetching blocks of metadata
Joe Thornber [Mon, 6 Oct 2014 14:27:26 +0000 (15:27 +0100)]
dm transaction manager: add support for prefetching blocks of metadata

Introduce the dm_tm_issue_prefetches interface.  If you're using a
non-blocking clone the tm will build up a list of requested blocks that
weren't in core.  dm_tm_issue_prefetches will request those blocks to be
prefetched.
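
The interface's shape, as described (a sketch, not verbatim):

  /* request reads for the not-in-core blocks recorded by a
   * non-blocking clone */
  void dm_tm_issue_prefetches(struct dm_transaction_manager *tm);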

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm thin metadata: change dm_thin_find_block to allow blocking, but not issuing, IO
Joe Thornber [Mon, 6 Oct 2014 14:24:55 +0000 (15:24 +0100)]
dm thin metadata: change dm_thin_find_block to allow blocking, but not issuing, IO

This change is a prerequisite for allowing metadata to be prefetched.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm bio prison: switch to using a red black tree
Joe Thornber [Mon, 6 Oct 2014 20:30:06 +0000 (16:30 -0400)]
dm bio prison: switch to using a red black tree

Previously it was using a fixed-size hash table.  There are times
when very many concurrent cells are held (such as when processing a very
large discard).  When this happens the hash table performance becomes
very poor.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm bufio: evict buffers that are past the max age but retain some buffers
Joe Thornber [Thu, 9 Oct 2014 10:10:25 +0000 (11:10 +0100)]
dm bufio: evict buffers that are past the max age but retain some buffers

These changes help keep metadata backed by dm-bufio in-core longer which
fixes reports of metadata churn in the face of heavy random IO workloads.

Before, bufio evicted all buffers older than DM_BUFIO_DEFAULT_AGE_SECS.
Having a device (e.g. dm-thinp or dm-cache) lose all metadata just
because associated buffers had been idle for some time is unfriendly.

The user may now configure the number of bytes that bufio retains
using the 'retain_bytes' module parameter.  The default is 256K.

Also, the DM_BUFIO_WORK_TIMER_SECS and DM_BUFIO_DEFAULT_AGE_SECS
defaults were quite low so increase them (to 30 and 300 respectively).

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm bufio: switch from a huge hash table to an rbtree
Joe Thornber [Mon, 6 Oct 2014 12:48:51 +0000 (13:48 +0100)]
dm bufio: switch from a huge hash table to an rbtree

Converting over to using an rbtree eliminates a fixed 8MB allocation
from vmalloc space for the hash table.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm btree: fix a recursion depth bug in btree walking code
Joe Thornber [Mon, 10 Nov 2014 15:03:24 +0000 (15:03 +0000)]
dm btree: fix a recursion depth bug in btree walking code

The walk code was using a 'ro_spine' to hold its locked btree nodes.
But this data structure is designed for the rolling lock scheme, and
as such automatically unlocks blocks that are two steps up the call
chain.  This is not suitable for the simple recursive walk algorithm,
which retraces its steps.

This code is only used by the persistent array code, which in turn is
only used by dm-cache.  In order to trigger it you need to have a
mapping tree that is more than 2 levels deep; which equates to 8-16
million cache blocks.  For instance a 4T ssd with a very small block
size of 32k only just triggers this bug.

The fix just places the locked blocks on the stack, and stops using
the ro_spine altogether.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
10 years ago  dm thin: grab a virtual cell before looking up the mapping
Joe Thornber [Fri, 10 Oct 2014 08:41:09 +0000 (09:41 +0100)]
dm thin: grab a virtual cell before looking up the mapping

Avoids normal IO racing with discard.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
10 years ago  dm raid: fix inaccessible superblocks causing oops in configure_discard_support
Heinz Mauelshagen [Wed, 29 Oct 2014 18:02:27 +0000 (19:02 +0100)]
dm raid: fix inaccessible superblocks causing oops in configure_discard_support

Commit 48cf06bc5f ("dm raid: add discard support for RAID levels 4, 5
and 6") did not properly handle missing metadata device(s).  A failing
read of the superblock causes the metadata and data devices to be
removed from the dev array in struct raid_set, setting references to
both devices to NULL.  configure_discard_support() nonetheless tries to
access the data dev unconditionally causing an oops.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm raid: ensure superblock's size matches device's logical block size
Heinz Mauelshagen [Fri, 17 Oct 2014 11:38:50 +0000 (13:38 +0200)]
dm raid: ensure superblock's size matches device's logical block size

The dm-raid superblock (struct dm_raid_superblock) is padded to 512
bytes and that size is being used to read it in from the metadata
device into one preallocated page.

Reading or writing this on a 512-byte sector device works fine but on
a 4096-byte sector device this fails.

Set the dm-raid superblock's size to the logical block size of the
metadata device, because IO at that size is guaranteed to work.  Also
add a size check to avoid silent partial metadata loss in case the
superblock should ever grow past the logical block size or PAGE_SIZE.

[includes pointer math fix from Dan Carpenter]
Reported-by: "Liuhua Wang" <lwang@suse.com>
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
10 years ago  dm bufio: change __GFP_IO to __GFP_FS in shrinker callbacks
Mikulas Patocka [Thu, 16 Oct 2014 18:45:20 +0000 (14:45 -0400)]
dm bufio: change __GFP_IO to __GFP_FS in shrinker callbacks

The shrinker uses gfp flags to indicate what kind of operations the
driver may wait for. If the __GFP_IO flag is present, the driver can
wait for block I/O operations; if the __GFP_FS flag is present, the
driver can wait on
operations involving the filesystem.

dm-bufio tested for __GFP_IO. However, dm-bufio can run on a loop block
device that makes calls into the filesystem. If __GFP_IO is present and
__GFP_FS isn't, dm-bufio could still block on filesystem operations if it
runs on a loop block device.

The change from __GFP_IO to __GFP_FS supposedly fixes one observed (though
unreproducible) deadlock involving dm-bufio and loop device.
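
A sketch of the changed test (state bits and flow are assumptions):

  static bool __try_evict_buffer(struct dm_buffer *b, gfp_t gfp)
  {
          if (!(gfp & __GFP_FS)) {        /* was: gfp & __GFP_IO */
                  /* waiting here could reenter the filesystem when
                   * bufio sits on a loop device */
                  if (test_bit(B_READING, &b->state) ||
                      test_bit(B_WRITING, &b->state) ||
                      test_bit(B_DIRTY, &b->state))
                          return false;
          }

          /* ... otherwise safe to evict ... */
          return true;
  }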

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
10 years ago  dm stripe: fix potential for leak in stripe_ctr error path
Pavitra Kumar [Fri, 10 Oct 2014 15:19:46 +0000 (15:19 +0000)]
dm stripe: fix potential for leak in stripe_ctr error path

Fix a potential struct stripe_c leak that would occur if the
chunk_size exceeded the maximum allowed by dm_set_target_max_io_len
(UINT_MAX).  However, in practice there is no possibility of this
occurring given that chunk_size is of type uint32_t.  But it is good to
fix this to future-proof in case dm_set_target_max_io_len's
implementation were to change.
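
A sketch of the fixed error path (surrounding code assumed):

  r = dm_set_target_max_io_len(ti, chunk_size);
  if (r) {
          kfree(sc);      /* previously leaked on this path */
          return r;
  }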

Signed-off-by: Pavitra Kumar <pavitrak@nvidia.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm log userspace: fix memory leak in dm_ulog_tfr_init failure path
Alexey Khoroshilov [Wed, 1 Oct 2014 20:58:35 +0000 (22:58 +0200)]
dm log userspace: fix memory leak in dm_ulog_tfr_init failure path

If cn_add_callback() fails in dm_ulog_tfr_init(), it does not
deallocate prealloced memory but calls cn_del_callback().
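
A sketch of the fixed failure path (identifier names are assumptions):

  r = cn_add_callback(&ulog_cn_id, ...);
  if (r) {
          kfree(prealloced_cn_msg);   /* previously leaked */
          return r;
  }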

Found by Linux Driver Verification project (linuxtesting.org).

Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
Reviewed-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
10 years ago  dm bufio: when done scanning return from __scan immediately
Mikulas Patocka [Wed, 1 Oct 2014 17:29:48 +0000 (13:29 -0400)]
dm bufio: when done scanning return from __scan immediately

When __scan frees the required number of buffer entries that the
shrinker requested (nr_to_scan becomes zero) it must return.  Before
this fix the __scan code exited only the inner loop and continued in the
outer loop -- which could result in reduced performance due to extra
buffers being freed (e.g. unnecessarily evicted thinp metadata needing
to be synchronously re-read into bufio's cache).

Also, move dm_bufio_cond_resched to __scan's inner loop, so that
iterating the bufio client's lru lists doesn't result in scheduling
latency.

Reported-by: Joe Thornber <thornber@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.2+
10 years ago  dm bufio: update last_accessed when relinking a buffer
Joe Thornber [Tue, 30 Sep 2014 08:32:46 +0000 (09:32 +0100)]
dm bufio: update last_accessed when relinking a buffer

The 'last_accessed' member of the dm_buffer structure was only set when
the buffer was created.  This led to each buffer being discarded
after dm_bufio_max_age time even if it was used recently.  In practice
this resulted in all thinp metadata being evicted soon after being read
-- this is particularly problematic for metadata intensive workloads
like multithreaded small random IO.

'last_accessed' is now updated each time the buffer is moved to the head
of the LRU list, so the buffer is now properly discarded if it was not
used in dm_bufio_max_age time.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # v3.2+
10 years ago  dm raid: add discard support for RAID levels 4, 5 and 6
Heinz Mauelshagen [Wed, 24 Sep 2014 15:47:19 +0000 (17:47 +0200)]
dm raid: add discard support for RAID levels 4, 5 and 6

In case of RAID levels 4, 5 and 6 we have to verify each RAID member's
ability to zero data on discards to avoid stripe data corruption -- if
discard_zeroes_data is not set for each RAID member discard support must
be disabled.  But given the uncertainty of whether or not a RAID member
properly supports zeroing data on discard we require the user to
explicitly allow discard support on RAID levels 4, 5, and 6 by setting
a dm-raid module parameter, e.g.: dm-raid.devices_handle_discard_safely=Y
Otherwise, discards could cause data corruption on RAID4/5/6.
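
A sketch of the opt-in knob (the description string is an assumption):

  static bool devices_handle_discard_safely = false;
  module_param(devices_handle_discard_safely, bool, 0644);
  MODULE_PARM_DESC(devices_handle_discard_safely,
                   "Set to Y if all devices in each array reliably return zeroes on reads from discarded regions");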

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm raid: add discard support for RAID levels 1 and 10
Heinz Mauelshagen [Wed, 24 Sep 2014 15:47:18 +0000 (17:47 +0200)]
dm raid: add discard support for RAID levels 1 and 10

Discard support is not enabled for RAID levels 4, 5, and 6 at this time
due to concerns about unreliable discard_zeroes_data support on some
hardware.  Otherwise, discards could cause stripe data corruption
(classic example of bad apples spoiling the bunch).

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: allow active and inactive tables to share dm_devs
Benjamin Marzinski [Wed, 13 Aug 2014 18:53:43 +0000 (13:53 -0500)]
dm: allow active and inactive tables to share dm_devs

Until this change, when loading a new DM table, DM core would re-open
all of the devices in the DM table.  Now, DM core will avoid redundant
device opens (and closes when destroying the old table) if the old
table already has a device open using the same mode.  This is achieved
by managing reference counts on the table_devices that DM core now
stores in the mapped_device structure (rather than in the dm_table
structure).  So a mapped_device's active and inactive dm_tables' dm_dev
lists now just point to the dm_devs stored in the mapped_device's
table_devices list.

This improvement in DM core's device reference counting has the
side-effect of fixing a long-standing limitation of the multipath
target: a DM multipath table couldn't include any paths that were unusable
(failed).  For example: if all paths have failed and you add a new,
working, path to the table; you can't use it since the table load would
fail due to it still containing failed paths.  Now a re-load of a
multipath table can include failed devices and when those devices become
active again they can be used instantly.

The device list code in dm.c isn't a straight copy/paste from the code in
dm-table.c, but it's very close (aside from some variable renames).  One
subtle difference is that find_table_device for the tables_devices list
will only match devices with the same name and mode.  This is because we
don't want to upgrade a device's mode in the active table when an
inactive table is loaded.

Access to the mapped_device structure's tables_devices list requires a
mutex (tables_devices_lock), so that tables cannot be created and
destroyed concurrently.

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm mpath: stop queueing IO when no valid paths exist
Benjamin Marzinski [Wed, 13 Aug 2014 18:53:42 +0000 (13:53 -0500)]
dm mpath: stop queueing IO when no valid paths exist

'queue_io' is set so that IO is queued while paths are being
initialized.  Clear queue_io in __choose_pgpath if there are no valid
paths, since there are obviously no paths that can be initialized.
Otherwise IOs to the device will back up.

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: use bioset_create_nobvec()
Junichi Nomura [Fri, 3 Oct 2014 11:55:26 +0000 (11:55 +0000)]
dm: use bioset_create_nobvec()

Since DM core uses bio_clone_fast() for both bio-based and request-based
DM devices there is no need for DM's bioset to have a bvec mempool.

With this patch, on arch with 4KB page for example, memory usage will be
reduced by 64KB for each bio-based DM device and 1MB for each
request-based DM device.

For example, when you create 10,000 bio-based DM devices and 1,000
request-based DM devices, memory usage of biovec under no load is:
  # grep biovec /proc/slabinfo

  biovec-256        418068 418068   4096  ...
  biovec-128             0      0   2048  ...
  biovec-64              0      0   1024  ...
  biovec-16              0      0    256  ...

With this patch series applied, the usage becomes:
  # grep biovec /proc/slabinfo

  biovec-256           116    116   4096  ...
  biovec-128             0      0   2048  ...
  biovec-64              0      0   1024  ...
  biovec-16              0      0    256  ...

So 4096 * (418068 - 116) = 1.6GB of memory is saved in this example.

Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  dm: remove nr_iovecs parameter from alloc_tio()
Junichi Nomura [Fri, 3 Oct 2014 11:55:16 +0000 (11:55 +0000)]
dm: remove nr_iovecs parameter from alloc_tio()

alloc_tio() uses bio_alloc_bioset() to allocate a clone-bio for a bio.
alloc_tio() takes the number of bvecs to allocate for the clone-bio.
However, with v3.14's immutable biovec changes DM now uses
__bio_clone_fast() and no longer needs to allocate bvecs.

In practice, the 'nr_iovecs' passed to alloc_tio() is always effectively
0.  __clone_and_map_simple_bio() looked like it was passing non-zero
nr_iovecs, but its value was always within the range of inline bvecs and
no allocation actually happened.  If allocation happened, the BUG_ON() in
__bio_clone_fast() would've triggered.

Remove the nr_iovecs parameter from alloc_tio() to prevent possible
future bio_alloc_bioset() mis-use of a new bioset interface that will no
longer allow bvecs to be allocated.

Also fix extra whitespace before the __bio_clone_fast() call in
__clone_and_map_simple_bio().

Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
10 years ago  block: add bioset_create_nobvec()
Junichi Nomura [Fri, 3 Oct 2014 21:27:12 +0000 (17:27 -0400)]
block: add bioset_create_nobvec()

Users of bio_clone_fast() do not want bios with their own bvecs.
Allocating a bvec mempool as part of the bioset intended for such users
is a waste of memory.

bioset_create_nobvec() creates a bioset that doesn't have the bvec
mempool.
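
The constructor's shape, sketched from this description:

  /* like bioset_create(), but without allocating the bvec mempool */
  struct bio_set *bioset_create_nobvec(unsigned int pool_size,
                                       unsigned int front_pad);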

Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: use bio_clone_fast() in blk_rq_prep_clone()
Junichi Nomura [Fri, 3 Oct 2014 21:27:11 +0000 (17:27 -0400)]
block: use bio_clone_fast() in blk_rq_prep_clone()

Request cloning clones bios in the request to track the completion
of each bio.
For that purpose, we can use bio_clone_fast() instead of bio_clone()
to avoid unnecessary allocation and copy of bvecs.

This patch reduces memory footprint of request-based device-mapper
(about 1-4KB for each request) and is a preparation for further
reduction of memory usage by removing unused bvec mempool.

Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: misplaced rq_complete tracepoint
Hannes Reinecke [Wed, 1 Oct 2014 12:32:31 +0000 (14:32 +0200)]
block: misplaced rq_complete tracepoint

The rq_complete tracepoint was never issued for empty requests,
causing the resulting blktrace information to never show any
completion for those requests.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  sd: Honor block layer integrity handling flags
Martin K. Petersen [Fri, 26 Sep 2014 23:20:08 +0000 (19:20 -0400)]
sd: Honor block layer integrity handling flags

A set of flags introduced in the block layer enable better control over
how protection information is handled. These flags are useful for both
error injection and data recovery purposes. Checking can be enabled and
disabled for controller and disk, and the guard tag format is now a
per-I/O property.

Update sd_protect_op to communicate the relevant information to the
low-level device driver via a set of flags in scsi_cmnd.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Replace strnicmp with strncasecmp
Rasmus Villemoes [Tue, 16 Sep 2014 20:51:16 +0000 (22:51 +0200)]
block: Replace strnicmp with strncasecmp

The kernel used to contain two functions for length-delimited,
case-insensitive string comparison, strnicmp with correct semantics
and a slightly buggy strncasecmp. The latter is the POSIX name, so
strnicmp was renamed to strncasecmp, and strnicmp made into a wrapper
for the new strncasecmp to avoid breaking existing users.

To allow the compat wrapper strnicmp to be removed at some point in
the future, and to avoid the extra indirection cost, do
s/strnicmp/strncasecmp/g.

Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Add T10 Protection Information functions
Martin K. Petersen [Fri, 26 Sep 2014 23:20:07 +0000 (19:20 -0400)]
block: Add T10 Protection Information functions

The T10 Protection Information format is also used by some devices that
do not go through the SCSI layer (virtual block devices, NVMe). Relocate
the relevant functions to a block layer library that can be used without
involving SCSI.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Don't merge requests if integrity flags differ
Martin K. Petersen [Fri, 26 Sep 2014 23:20:06 +0000 (19:20 -0400)]
block: Don't merge requests if integrity flags differ

We'd occasionally merge requests with conflicting integrity flags.
Introduce a merge helper which checks that the requests have compatible
integrity payloads.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Integrity checksum flag
Martin K. Petersen [Fri, 26 Sep 2014 23:20:05 +0000 (19:20 -0400)]
block: Integrity checksum flag

Make the choice of checksum a per-I/O property by introducing a flag
that can be inspected by the SCSI layer. There are several reasons for
this:

 1. It allows us to switch choice of checksum without unloading and
    reloading the HBA driver.

 2. During error recovery we need to be able to tell the HBA that
    checksums read from disk should not be verified and converted to IP
    checksums.

 3. For error injection purposes we need to be able to write a bad guard
    tag to storage. Since the storage device only supports T10 CRC we
    need to be able to disable IP checksum conversion on the HBA.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Relocate bio integrity flags
Martin K. Petersen [Fri, 26 Sep 2014 23:20:04 +0000 (19:20 -0400)]
block: Relocate bio integrity flags

Move flags affecting the integrity code out of the bio bi_flags and into
the block integrity payload.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Add a disk flag to block integrity profile
Martin K. Petersen [Fri, 26 Sep 2014 23:20:03 +0000 (19:20 -0400)]
block: Add a disk flag to block integrity profile

So far we have relied on the app tag size to determine whether a disk
has been formatted with T10 protection information or not. However, not
all target devices provide application tag storage.

Add a flag to the block integrity profile that indicates whether the
disk has been formatted with protection information.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Add prefix to block integrity profile flags
Martin K. Petersen [Fri, 26 Sep 2014 23:20:02 +0000 (19:20 -0400)]
block: Add prefix to block integrity profile flags

Add a BLK_ prefix to the integrity profile flags. Also rename the flags
to be more consistent with the generate/verify terminology in the rest
of the integrity code.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Clean up the code used to generate and verify integrity metadata
Martin K. Petersen [Fri, 26 Sep 2014 23:20:01 +0000 (19:20 -0400)]
block: Clean up the code used to generate and verify integrity metadata

Instead of the "operate" parameter we pass in a seed value and a pointer
to a function that can be used to process the integrity metadata. The
generation function is changed to have a return value to fit into this
scheme.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Make protection interval calculation generic
Martin K. Petersen [Fri, 26 Sep 2014 23:20:00 +0000 (19:20 -0400)]
block: Make protection interval calculation generic

Now that the protection interval has been detached from the sector size
we need to be able to handle sizes that are different from 4K and
512. Make the interval calculation generic.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Deprecate the use of the term sector in the context of block integrity
Martin K. Petersen [Fri, 26 Sep 2014 23:19:59 +0000 (19:19 -0400)]
block: Deprecate the use of the term sector in the context of block integrity

The protection interval is not necessarily tied to the logical block
size of a block device. Stop using the terms "sector" and "sectors".

Going forward we will use the term "seed" to describe the initial
reference tag value for a given I/O. "Interval" will be used to describe
the portion of the data buffer that a given piece of protection
information is associated with.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Remove bip_buf
Martin K. Petersen [Fri, 26 Sep 2014 23:19:58 +0000 (19:19 -0400)]
block: Remove bip_buf

bip_buf is not really needed so we can remove it.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Remove integrity tagging functions
Martin K. Petersen [Fri, 26 Sep 2014 23:19:57 +0000 (19:19 -0400)]
block: Remove integrity tagging functions

None of the filesystems appear interested in using the integrity tagging
feature. Potentially because very few storage devices actually permit
using the application tag space.

Remove the tagging functions.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Replace bi_integrity with bi_special
Martin K. Petersen [Fri, 26 Sep 2014 23:19:56 +0000 (19:19 -0400)]
block: Replace bi_integrity with bi_special

For commands like REQ_COPY we need a way to pass extra information along
with each bio. Like integrity metadata this information must be
available at the bottom of the stack so bi_private does not suffice.

Rename the existing bi_integrity field to bi_special and make it a union
so we can have different bio extensions for each class of command.

We previously used bi_integrity != NULL as a way to identify whether a
bio had integrity metadata or not. Introduce a REQ_INTEGRITY to be the
indicator now that bi_special can contain different things.

In addition, bio_integrity(bio) will now return a pointer to the
integrity payload (when applicable).

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: Get rid of bdev_integrity_enabled()
Martin K. Petersen [Fri, 26 Sep 2014 23:19:55 +0000 (19:19 -0400)]
block: Get rid of bdev_integrity_enabled()

bdev_integrity_enabled() is only used by bio_integrity_enabled().
Combine these two functions.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: support per-dispatch_queue flush machinery
Ming Lei [Thu, 25 Sep 2014 15:23:47 +0000 (23:23 +0800)]
blk-mq: support per-dispatch_queue flush machinery

This patch supports running one flush machinery instance per
blk-mq dispatch queue, so that:

- the existing init_request and exit_request callbacks can cover
flush requests too, fixing the buggy copying approach used to
initialize a flush request's pdu

- flush performance improves in the multi hw-queue case

In a fio sync write test over virtio-blk (4 hw queues, ioengine=sync,
iodepth=64, numjobs=4, bs=4K), throughput increases substantially in
my test environment:
- throughput: +70% in case of virtio-blk over null_blk
- throughput: +30% in case of virtio-blk over SSD image

The multi virtqueue feature isn't merged to QEMU yet, and patches for
the feature can be found in below tree:

git://kernel.ubuntu.com/ming/qemu.git   v2.1.0-mq.4

And simply passing 'num_queues=4 vectors=5' should be enough to
enable the multi-queue (quad queue) feature for QEMU virtio-blk.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: introduce 'blk_mq_ctx' parameter to blk_get_flush_queue
Ming Lei [Thu, 25 Sep 2014 15:23:46 +0000 (23:23 +0800)]
block: introduce 'blk_mq_ctx' parameter to blk_get_flush_queue

This patch adds a 'blk_mq_ctx' parameter to blk_get_flush_queue(),
so that the function can find the corresponding blk_flush_queue
bound to the current mq context, since the flush queue will become
per hw-queue.

For legacy queue, the parameter can be simply 'NULL'.

For multiqueue case, the parameter should be set as the context
from which the related request is originated. With this context
info, the hw queue and related flush queue can be found easily.
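
A sketch of the lookup this enables (flow reconstructed from this
description, so treat it as approximate):

  static struct blk_flush_queue *
  blk_get_flush_queue(struct request_queue *q, struct blk_mq_ctx *ctx)
  {
          struct blk_mq_hw_ctx *hctx;

          if (!q->mq_ops)
                  return q->fq;           /* legacy: one flush queue */

          hctx = q->mq_ops->map_queue(q, ctx->cpu);
          return hctx->fq;                /* per hw-queue flush queue */
  }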

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: flush: avoid figuring out the flush queue unnecessarily
Ming Lei [Thu, 25 Sep 2014 15:23:45 +0000 (23:23 +0800)]
block: flush: avoid figuring out the flush queue unnecessarily

Figure out the flush queue once, at the entry points of the flush
machinery and of the request completion handler, then pass it
through.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: remove blk_init_flush() and its pair
Ming Lei [Thu, 25 Sep 2014 15:23:44 +0000 (23:23 +0800)]
block: remove blk_init_flush() and its pair

The mission of the two helpers is now over, so just call
blk_alloc_flush_queue() and blk_free_flush_queue() directly.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: introduce blk_flush_queue to drive flush machinery
Ming Lei [Thu, 25 Sep 2014 15:23:43 +0000 (23:23 +0800)]
block: introduce blk_flush_queue to drive flush machinery

This patch introduces 'struct blk_flush_queue' and puts all
flush machinery related fields into this structure, so that

- flush implementation details aren't exposed to driver
- it is easy to convert to per dispatch-queue flush machinery

This patch is basically a mechanical replacement.
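
A sketch of the structure's contents (field list reconstructed from
the flush machinery, so treat it as approximate):

  struct blk_flush_queue {
          unsigned int            flush_queue_delayed:1;
          unsigned int            flush_pending_idx:1;
          unsigned int            flush_running_idx:1;
          unsigned long           flush_pending_since;
          struct list_head        flush_queue[2];
          struct list_head        flush_data_in_flight;
          struct request          *flush_rq;
          spinlock_t              mq_flush_lock;
  };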

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: avoid using q->flush_rq directly
Ming Lei [Thu, 25 Sep 2014 15:23:42 +0000 (23:23 +0800)]
block: avoid using q->flush_rq directly

This patch uses a local variable to access the flush request, so
that we can convert to per-queue flush machinery a bit more easily.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: move flush initialization to blk_flush_init
Ming Lei [Thu, 25 Sep 2014 15:23:41 +0000 (23:23 +0800)]
block: move flush initialization to blk_flush_init

These fields are always used with the flush request, so
initialize them together.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: introduce blk_init_flush and its pair
Ming Lei [Thu, 25 Sep 2014 15:23:40 +0000 (23:23 +0800)]
block: introduce blk_init_flush and its pair

These two temporary functions are introduced to hold flush
initialization and de-initialization, so that we can introduce the
'flush queue' more easily in the following patch.  Once the 'flush
queue' and its allocation/free functions are ready, they will be
removed for the sake of code readability.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: allocate flush_rq in blk_mq_init_flush()
Ming Lei [Thu, 25 Sep 2014 15:23:39 +0000 (23:23 +0800)]
blk-mq: allocate flush_rq in blk_mq_init_flush()

It is reasonable to allocate the flush request in blk_mq_init_flush().

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: handle failure path for initializing hctx
Ming Lei [Thu, 25 Sep 2014 15:23:38 +0000 (23:23 +0800)]
blk-mq: handle failure path for initializing hctx

Failure to initialize one hctx isn't handled, so this patch
introduces blk_mq_init_hctx() and its pair to handle it explicitly.
This also makes the code cleaner.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  scsi: move blk_mq_start_request call earlier
Christoph Hellwig [Mon, 22 Sep 2014 13:59:31 +0000 (15:59 +0200)]
scsi: move blk_mq_start_request call earlier

Some ATA drivers need the dma drain size workaround, and thus need to
call blk_mq_start_request before the S/G mapping.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  block: fix blk_abort_request on blk-mq
Christoph Hellwig [Mon, 22 Sep 2014 16:21:48 +0000 (10:21 -0600)]
block: fix blk_abort_request on blk-mq

Signed-off-by: Christoph Hellwig <hch@lst.de>
Moved blk_mq_rq_timed_out() definition to the private blk-mq.h header.

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-timeout: fix blk_add_timer
Ming Lei [Fri, 19 Sep 2014 13:53:46 +0000 (21:53 +0800)]
blk-timeout: fix blk_add_timer

Commit 8cb34819cdd5d (blk-mq: unshared timeout handler) introduced
blk-mq's own timeout handler and removed the following line:

blk_queue_rq_timed_out(q, blk_mq_rq_timed_out);

which then causes blk_add_timer() to bypass adding the timer,
since blk-mq no longer has q->rq_timed_out_fn defined.

This patch fixes the problem by bypassing the check for blk-mq,
so that both request deadlines are still set and the rolling
timer updated.
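
A sketch of the resulting check in blk_add_timer():

  /* blk-mq carries its own handler, so only legacy queues need one */
  if (!q->mq_ops && !q->rq_timed_out_fn)
          return;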

Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: limit memory consumption if a crash dump is active
Jens Axboe [Wed, 17 Sep 2014 14:27:03 +0000 (08:27 -0600)]
blk-mq: limit memory consumption if a crash dump is active

It's not uncommon for crash dump kernels to be limited to 128MB or
something low in that area. This is normally not a problem for
devices as we don't use that much memory, but for some shared SCSI
setups with huge queue depths, it can potentially fill most of
memory with tons of request allocations. blk-mq does scale back
when it fails to allocate memory, but it scales back just enough
so that blk-mq succeeds. This could still leave the system with
not enough memory to make any real progress.

Check if we are in a kdump environment and limit the hardware
queues and tag depth.
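
A sketch of the clamp (placement in blk_mq_alloc_tag_set and the
exact depth value are assumptions):

  if (is_kdump_kernel()) {
          set->nr_hw_queues = 1;
          set->queue_depth = min(64U, set->queue_depth);
  }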

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: remove unnecessary blk_clear_rq_complete()
Ming Lei [Wed, 17 Sep 2014 09:47:58 +0000 (17:47 +0800)]
blk-mq: remove unnecessary blk_clear_rq_complete()

This patch removes two unnecessary blk_clear_rq_complete(),
the REQ_ATOM_COMPLETE flag is cleared inside blk_mq_start_request(),
so:

- The blk_clear_rq_complete() in blk_flush_restore_request() isn't
needed because the request will be freed later, and clearing it
here may open a small race window with the timeout.

- The blk_clear_rq_complete() in blk_mq_requeue_request() isn't
necessary either: even though REQ_ATOM_STARTED is cleared in
__blk_mq_requeue_request(), in theory it may still cause a small
race window with the timeout since the two clear_bit() calls may be
reordered.

Signed-off-by: Ming Lei <ming.lei@canoical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: pass a reserved argument to the timeout handler
Christoph Hellwig [Sat, 13 Sep 2014 23:40:13 +0000 (16:40 -0700)]
blk-mq: pass a reserved argument to the timeout handler

Allow blk-mq to pass an argument to the timeout handler to indicate
if we're timing out a reserved or regular command.  For many drivers
those need to be handled differently.
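
The handler's shape after this change, sketched:

  typedef enum blk_eh_timer_return (timeout_fn)(struct request *rq,
                                                bool reserved);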

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: unshared timeout handler
Christoph Hellwig [Sat, 13 Sep 2014 23:40:12 +0000 (16:40 -0700)]
blk-mq: unshared timeout handler

Duplicate the (small) timeout handler in blk-mq so that we can pass
arguments more easily to the driver timeout handler.  This enables
the next patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: fix and simplify tag iteration for the timeout handler
Christoph Hellwig [Sat, 13 Sep 2014 23:40:11 +0000 (16:40 -0700)]
blk-mq: fix and simplify tag iteration for the timeout handler

Don't do a kmalloc from the timer handler to handle timeouts; chances are we could be
under heavy load or similar and thus just miss out on the timeouts.
Fortunately it is very easy to just iterate over all in use tags, and doing
this properly actually cleans up the blk_mq_busy_iter API as well, and
prepares us for the next patch by passing a reserved argument to the
iterator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: rename blk_mq_end_io to blk_mq_end_request
Christoph Hellwig [Sat, 13 Sep 2014 23:40:10 +0000 (16:40 -0700)]
blk-mq: rename blk_mq_end_io to blk_mq_end_request

Now that we've changed the driver API on the submission side use the
opportunity to fix up the name on the completion side to fit into the
general scheme.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: call blk_mq_start_request from ->queue_rq
Christoph Hellwig [Sat, 13 Sep 2014 23:40:09 +0000 (16:40 -0700)]
blk-mq: call blk_mq_start_request from ->queue_rq

When we call blk_mq_start_request from the core blk-mq code before calling into
->queue_rq there is a racy window where the timeout handler can hit before we've
fully set up the driver specific part of the command.

Move the call to blk_mq_start_request into the driver so the driver can start
the request only once it is fully set up.
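
A sketch of a driver's ->queue_rq after this change (the my_* helpers
are hypothetical stand-ins for driver-private code):

  static int my_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq,
                         bool last)
  {
          my_prepare_cmd(rq);        /* private setup; timer can't fire yet */

          blk_mq_start_request(rq);  /* arms the request deadline */

          if (my_issue_cmd(rq, last))
                  return BLK_MQ_RQ_QUEUE_ERROR;
          return BLK_MQ_RQ_QUEUE_OK;
  }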

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years ago  blk-mq: remove REQ_END
Christoph Hellwig [Sat, 13 Sep 2014 23:40:08 +0000 (16:40 -0700)]
blk-mq: remove REQ_END

Pass an explicit parameter for the last request in a batch to ->queue_rq
instead of using a request flag.  Besides being a cleaner and non-stateful
interface, this is also required for the next patch, which fixes the
blk-mq I/O submission code to not start the timeout too early.
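
In rough terms, the ->queue_rq prototype changes like this:

  /* old: drivers tested a stateful flag on the request */
  int (*queue_rq)(struct blk_mq_hw_ctx *, struct request *);

  /* new: the core passes the batch state explicitly, so a driver can
   * defer e.g. its doorbell write until last is true */
  int (*queue_rq)(struct blk_mq_hw_ctx *, struct request *, bool last);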

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoMerge branch 'for-linus' into for-3.18/core
Jens Axboe [Mon, 22 Sep 2014 17:57:32 +0000 (11:57 -0600)]
Merge branch 'for-linus' into for-3.18/core

We are moving patches from for-linus to for-3.18 instead, so pull in
the changes that will go to Linus today.

10 years agoblk-mq: use blk_mq_start_hw_queues() when running requeue work
Jens Axboe [Fri, 19 Sep 2014 19:10:29 +0000 (13:10 -0600)]
blk-mq: use blk_mq_start_hw_queues() when running requeue work

When requests are retried due to hw or sw resource shortages,
we often stop the associated hardware queue. So ensure that we
restart the queues when running the requeue work; otherwise the
queue run will be a no-op.
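
A rough sketch of the requeue work with the restart in place (handler
name hypothetical, bookkeeping elided):

  static void my_requeue_work(struct work_struct *work)
  {
          struct request_queue *q =
                  container_of(work, struct request_queue, requeue_work);

          /* ... re-insert everything on the requeue list ... */

          /* clear the stopped state before running the hw queues,
           * otherwise the run would be a no-op */
          blk_mq_start_hw_queues(q);
  }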

Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: fix potential oops on out-of-memory in __blk_mq_alloc_rq_maps()
Jens Axboe [Fri, 19 Sep 2014 14:04:53 +0000 (08:04 -0600)]
blk-mq: fix potential oops on out-of-memory in __blk_mq_alloc_rq_maps()

__blk_mq_alloc_rq_maps() can be invoked multiple times, since we scale
back the queue depth when we are low on memory.  So don't clear
set->tags when we fail; this is handled directly in the parent
function, blk_mq_alloc_tag_set().
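
The retry loop in blk_mq_alloc_tag_set() looks roughly like this
(simplified), which is why the helper must not clear set->tags itself:

  do {
          if (!__blk_mq_alloc_rq_maps(set))
                  break;                  /* success */

          set->queue_depth >>= 1;         /* low on memory: scale back */
  } while (set->queue_depth);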

Reported-by: Robert Elliott <Elliott@hp.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: avoid infinite recursion with the FUA flag
Christoph Hellwig [Tue, 16 Sep 2014 21:44:07 +0000 (14:44 -0700)]
blk-mq: avoid infinite recursion with the FUA flag

We should not insert requests into the flush state machine from
blk_mq_insert_request.  All incoming flush requests come through
blk_{m,s}q_make_request and are handled there, while blk_execute_rq_nowait
should only be called for BLOCK_PC requests.  All other callers
deal with requests that already went through the flush state machine
and shouldn't be reinserted into it.

Reported-by: Robert Elliott <Elliott@hp.com>
Debugged-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: Avoid race condition with uninitialized requests
David Hildenbrand [Thu, 18 Sep 2014 09:04:31 +0000 (11:04 +0200)]
blk-mq: Avoid race condition with uninitialized requests

This patch should fix the bug reported in
https://lkml.org/lkml/2014/9/11/249.

We have to initialize at least the atomic_flags and the cmd_flags when
allocating storage for the requests.

Otherwise blk_mq_timeout_check() might dereference uninitialized
pointers when racing with the creation of a request.

Also move the reset of cmd_flags from the initialization code to the
point where a request is freed, so we will never end up with pending
flush-request indicators that might trigger dereferences of invalid
pointers in blk_mq_timeout_check().
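
A minimal sketch of the two changes (placement simplified):

  /* at allocation time, before the request is visible to
   * blk_mq_timeout_check(): */
  rq->atomic_flags = 0;
  rq->cmd_flags = 0;

  /* and again when a request is freed, so no pending-flush indicator
   * survives into the next user: */
  rq->cmd_flags = 0;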

Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reported-by: Paulo De Rezende Pinatti <ppinatti@linux.vnet.ibm.com>
Tested-by: Paulo De Rezende Pinatti <ppinatti@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoblk-mq: request deadline must be visible before marking rq as started
Jens Axboe [Tue, 16 Sep 2014 16:37:37 +0000 (10:37 -0600)]
blk-mq: request deadline must be visible before marking rq as started

When we start the request, we set the deadline and flip the bits
marking the request as started and non-complete. However, it's
important that the deadline store is ordered before flipping the
bits, otherwise we could have a small window where the request is
marked started but with an invalid deadline. This can confuse the
timeout handling.
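
A minimal sketch of the ordering requirement; the barrier shown is
illustrative (for atomic bitops the kernel also offers
smp_mb__before_atomic()):

  rq->deadline = jiffies + q->rq_timeout;

  /* make the deadline visible before the state bits flip */
  smp_wmb();

  set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
  clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);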

Suggested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
10 years agoMerge tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2...
Linus Torvalds [Tue, 16 Sep 2014 14:47:04 +0000 (07:47 -0700)]
Merge tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes

Pull gfs2 fixes from Steven Whitehouse:
 "Here are a number of small fixes for GFS2.

  There is a fix for FIEMAP on large sparse files, a negative dentry
  hashing fix, a fix for flock, and a bug fix relating to d_splice_alias
  usage.

  There are also (patches 1 and 5) a couple of updates which are less
  critical, but small and low risk"

* tag 'gfs2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes:
  GFS2: fix d_splice_alias() misuses
  GFS2: Don't use MAXQUOTAS value
  GFS2: Hash the negative dentry during inode lookup
  GFS2: Request demote when a "try" flock fails
  GFS2: Change maxlen variables to size_t
  GFS2: fs/gfs2/super.c: replace seq_printf by seq_puts

10 years agovfs: workaround gcc <4.6 build error in link_path_walk()
James Hogan [Tue, 16 Sep 2014 12:07:35 +0000 (13:07 +0100)]
vfs: workaround gcc <4.6 build error in link_path_walk()

Commit d6bb3e9075bb ("vfs: simplify and shrink stack frame of
link_path_walk()") introduced build problems with GCC versions older
than 4.6 due to the initialisation of a member of an anonymous union in
struct qstr without enclosing braces.

This hits GCC bug 10676 [1] (which was fixed in GCC 4.6 by [2]), and
causes the following build error:

  fs/namei.c: In function 'link_path_walk':
  fs/namei.c:1778: error: unknown field 'hash_len' specified in initializer

This is worked around by adding explicit braces.
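
The shape of the workaround, simplified from fs/namei.c:

  /* fails on gcc < 4.6: hash_len is a member of an anonymous union */
  struct qstr this = { .hash_len = hash_len, .name = name };

  /* builds everywhere once the union member is explicitly braced */
  struct qstr this = { { .hash_len = hash_len }, .name = name };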

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=10676
[2] https://gcc.gnu.org/viewcvs/gcc?view=revision&revision=159206

Fixes: d6bb3e9075bb (vfs: simplify and shrink stack frame of link_path_walk())
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-metag@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
10 years agoMerge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty...
Linus Torvalds [Tue, 16 Sep 2014 03:35:32 +0000 (20:35 -0700)]
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux

Pull virtio fixes from Rusty Russell:
 "virtio-rng corner case fixes, with cc:stable.

  Survived a few days in linux-next"

* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
  virtio-rng: skip reading when we start to remove the device
  virtio-rng: fix stuck of hot-unplugging busy device

10 years agoMerge tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie...
Linus Torvalds [Mon, 15 Sep 2014 23:20:56 +0000 (16:20 -0700)]
Merge tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap

Pull regmap fix from Mark Brown:
 "Fix registers file in debugfs

  Ensure that the mode reported for the registers file in debugfs is
  accurate by marking it as read only when the define to enable writes
  has not been set.  This is on the edge of being a bug fix but it's
  debugfs and it makes it much easier for users to spot what's going
  wrong when they forget to enable writeability"

* tag 'regmap-v3.17-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: Fix debugfs-file 'registers' mode

10 years agoMerge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Linus Torvalds [Mon, 15 Sep 2014 22:12:01 +0000 (15:12 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input

Pull input updates from Dmitry Torokhov:
 "A few quirks for i8042/AT keyboards and a small device tree doc fix
  for Atmel Touchscreens"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
  Input: atmel_mxt_ts - fix merge in DT documentation
  Input: i8042 - also set the firmware id for MUXed ports
  Input: i8042 - add nomux quirk for Avatar AVIU-145A6
  Input: i8042 - add Fujitsu U574 to no_timeout dmi table
  Input: atkbd - do not try 'deactivate' keyboard on any LG laptops