James Smart [Tue, 14 May 2019 21:58:05 +0000 (14:58 -0700)]
lpfc: add support to generate RSCN events for nport
This patch adds general RSCN support:
- The ability to transmit an RSCN to the port on the other end of
the link (regular port if pt2pt, or fabric controller if fabric).
- And general recognition of an RSCN ELS when an ELS is received.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Arun Easi <aeasi@marvell.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Tue, 14 May 2019 21:58:04 +0000 (14:58 -0700)]
nvme-fcloop: add support for nvmet discovery_event op
Update fcloop to support the discovery_event operation and
invoke an nvme rescan. In a real FC adapter, this would generate an
RSCN, which the host would receive and convert into an nvme rescan
on the remote port specified in the RSCN payload.
Signed-off-by: James Smart <jsmart2021@gmail.com>
[kbuild-bot: fcloop_tgt_discovery_evt can be static]
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Arun Easi <aeasi@marvell.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Tue, 14 May 2019 21:58:03 +0000 (14:58 -0700)]
nvmet-fc: add transport discovery change event callback support
This patch adds support for the nvmet discovery_change transport op.
In turn, the transport adds its own LLDD API callback, discovery_event,
to request the LLDD to generate an RSCN for the discovery change.
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Arun Easi <aeasi@marvell.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Tue, 14 May 2019 21:58:02 +0000 (14:58 -0700)]
nvmet: add transport discovery change op
Some transports, such as FC-NVME, support discovery controller change
events without the use of a persistent discovery controller. FC receives
events via RSCN from the FC Fabric Controller or subsystem FC port.
This patch adds an nvmet transport op that is called whenever a
discovery change event occurs in the nvmet layer.
To facilitate the callback without adding another layer to cross into
core.c to reference the transport ops, the port structure snapshots
the transport ops when the port is enabled and clears them when it is disabled.
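A minimal sketch of that shape; the field and op names used here (tr_ops,
discovery_chg) are assumptions for illustration, not taken from the merged patch:

	struct nvmet_port;

	struct nvmet_fabrics_ops {
		/* existing ops elided */
		void (*discovery_chg)(struct nvmet_port *port);
	};

	struct nvmet_port {
		/* existing fields elided */
		const struct nvmet_fabrics_ops *tr_ops;	/* snapshot while the port is enabled */
	};

	/* core.c side: notify the transport only while the port is enabled */
	static void nvmet_port_disc_changed_notify(struct nvmet_port *port)
	{
		if (port->tr_ops && port->tr_ops->discovery_chg)
			port->tr_ops->discovery_chg(port);
	}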
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Arun Easi <aeasi@marvell.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Fri, 21 Jun 2019 08:22:48 +0000 (10:22 +0200)]
cgroup: export css_next_descendant_pre for bfq
The bfq scheduler now uses css_next_descendant_pre directly, after
the stats functionality depending on it was moved from the core
blk-cgroup code to bfq. Export the symbol so that bfq can still
be built modular.
Fixes: d6258980daf2 ("bfq-iosched: move bfq_stat_recursive_sum into the only caller")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 21 Jun 2019 05:37:20 +0000 (23:37 -0600)]
Merge branch 'md-next' of https://github.com/liu-song-6/linux into for-5.3/block
Pull MD changes from Song.
* 'md-next' of https://github.com/liu-song-6/linux:
md: add bitmap_abort label in md_run
md-bitmap: create and destroy wb_info_pool with the change of bitmap
md-bitmap: create and destroy wb_info_pool with the change of backlog
md: introduce mddev_create/destroy_wb_pool for the change of member device
md/raid1: fix potential data inconsistency issue with write behind device
Guoqing Jiang [Fri, 14 Jun 2019 09:10:39 +0000 (17:10 +0800)]
md: add bitmap_abort label in md_run
Now there are two places that need to handle a failure to destroy the
bitmap, so move the common part into a bitmap_abort label placed before
the abort label.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Guoqing Jiang [Fri, 14 Jun 2019 09:10:38 +0000 (17:10 +0800)]
md-bitmap: create and destroy wb_info_pool with the change of bitmap
The write-behind attribute is part of bitmap, since bitmap
can be added/removed dynamically with the following.
1. mdadm --grow /dev/md0 --bitmap=none
2. mdadm --grow /dev/md0 --bitmap=internal --write-behind
So we need to destroy wb_info_pool in md_bitmap_destroy,
and create the pool before loading the bitmap.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Guoqing Jiang [Fri, 14 Jun 2019 09:10:37 +0000 (17:10 +0800)]
md-bitmap: create and destroy wb_info_pool with the change of backlog
Since write-behind mode can be enabled by writing to the backlog node,
create wb_info_pool when the mode is enabled, and call
md_bitmap_update_sb to make the user aware that the write-behind
mode is enabled. Conversely, wb_info_pool should be destroyed
when write-behind mode is disabled.
Besides the above, it is better to update the bitmap sb if we change
the number of max_write_behind.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Guoqing Jiang [Fri, 14 Jun 2019 09:10:36 +0000 (17:10 +0800)]
md: introduce mddev_create/destroy_wb_pool for the change of member device
Previously, we called rdev_init_wb to avoid potential data
inconsistency when the array is created.
Now, we need to call the function and create the mempool if a
device is added or is newly flagged as "writemostly". So
mddev_create_wb_pool is introduced and called accordingly.
For safety, we mark an implicit GFP_NOIO allocation
scope while creating the mempool during mddev_suspend/mddev_resume.
Conversely, the mempool should be removed after removing a
member device or its "writemostly" flag, which is done
by calling mddev_destroy_wb_pool.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Guoqing Jiang [Wed, 19 Jun 2019 09:30:46 +0000 (17:30 +0800)]
md/raid1: fix potential data inconsistency issue with write behind device
For write-behind mode, we consider a write IO complete once it has
reached all the non-writemostly devices. That works fine for
single-queue devices.
But for a multiqueue device, if lots of IOs come from the upper
layer, the write-behind device could issue them to different
queues and, depending on each queue's delay, there is no guarantee
that those IOs arrive in order.
To address the issue, we need to check for collisions among write-behind
IOs: we can only continue when there is no collision, otherwise we wait
for the completion of the previous colliding IO.
WBCollision is introduced for multiqueue devices operating
in write-behind mode.
This patch doesn't handle the cases below, which could have the data
inconsistency issue as well; they will be handled in later
patches.
1. modify max_write_behind by write backlog node.
2. add or remove array's bitmap dynamically.
3. the change of member disk.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Chaitanya Kulkarni [Thu, 20 Jun 2019 17:59:19 +0000 (10:59 -0700)]
f2fs: use block layer helper for show_bio_op macro
Adjust the f2fs tracing code to use the newly introduced block layer
function blk_op_str(), which converts REQ_OP_XXX into the string
XXX.
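In practice this presumably reduces the tracing macro to a thin wrapper; a
sketch (not necessarily the exact hunk):

	#define show_bio_op(op)		blk_op_str(op)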
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Thu, 20 Jun 2019 17:59:18 +0000 (10:59 -0700)]
block: update print_req_error()
Improve print_req_error() with additional request fields that are
helpful for debugging. Use the newly introduced blk_op_str() to print
REQ_OP_XXX in string format.
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Thu, 20 Jun 2019 17:59:17 +0000 (10:59 -0700)]
block: use blk_op_str() in blk-mq-debugfs.c
Now that we have a helper function blk_op_str() to convert
REQ_OP_XXX to the string XXX, adjust the code to use it. Get rid of
the duplicate op_name array, which is now present in blk-core.c
(renamed to "blk_op_name"), and of the open coding in
blk-mq-debugfs.c.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Thu, 20 Jun 2019 17:59:16 +0000 (10:59 -0700)]
block: add centralize REQ_OP_XXX to string helper
In order to centralize the REQ_OP_XXX to string conversion so it can be
used in the block layer and in different places in the kernel like f2fs,
this patch adds a new helper function along with an array similar to the
one present in blk-mq-debugfs.c.
We keep this helper functionality centralized in blk-core.c instead of
blk-mq-debugfs.c, since blk-core.c is built under CONFIG_BLOCK and
does not depend on blk-mq-debugfs.c, which is built under
CONFIG_BLK_DEBUG_FS.
The next patch adjusts the code in blk-mq-debugfs.c to use the newly
introduced helper.
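A rough sketch of how such a centralized helper could look (illustrative only;
just a few array entries are shown, the real table covers every REQ_OP_XXX):

	static const char *const blk_op_name[] = {
		[REQ_OP_READ]		= "READ",
		[REQ_OP_WRITE]		= "WRITE",
		[REQ_OP_FLUSH]		= "FLUSH",
		[REQ_OP_DISCARD]	= "DISCARD",
	};

	/* returns "UNKNOWN" for holes in the table or out-of-range ops */
	const char *blk_op_str(unsigned int op)
	{
		const char *op_str = "UNKNOWN";

		if (op < ARRAY_SIZE(blk_op_name) && blk_op_name[op])
			op_str = blk_op_name[op];

		return op_str;
	}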
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 20 Jun 2019 17:59:15 +0000 (10:59 -0700)]
block: improve print_req_error
Print the calling function instead of print_req_error as a prefix, and
print the operation and op_flags separately instead of the whole field.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:26:24 +0000 (12:26 +0200)]
block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG
This option is entirely bfq specific, give it an appropriate name.
Also make it depend on CONFIG_BFQ_GROUP_IOSCHED in Kconfig, as all
the functionality already does so anyway.
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:26:23 +0000 (12:26 +0200)]
bfq-iosched: move bfq_stat_recursive_sum into the only caller
This function was moved from core block code and is way too generic.
Fold it into the only caller and simplify it based on the actually
passed arguments.
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:26:22 +0000 (12:26 +0200)]
blk-cgroup: move struct blkg_stat to bfq
This structure and assorted infrastructure is only used by the bfq I/O
scheduler. Move it there instead of bloating the common code.
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:26:21 +0000 (12:26 +0200)]
blk-cgroup: introduce a new struct blkg_rwstat_sample
When sampling the blkcg counts we don't need atomics or per-cpu
variables. Introduce a new structure just containing plain u64
counters.
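The new structure is likely little more than an array of plain counters; a
sketch, assuming the existing BLKG_RWSTAT_NR index count (read/write/sync/
async/discard):

	struct blkg_rwstat_sample {
		u64 cnt[BLKG_RWSTAT_NR];
	};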
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:26:20 +0000 (12:26 +0200)]
blk-cgroup: pass blkg_rwstat structures by reference
Returning a structure generates rather bad code, so switch to passing
by reference. Also don't require the structure to be zeroed and add
to the 0-initialized counters, but actually set the counters to the
calculated value.
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:26:19 +0000 (12:26 +0200)]
blk-cgroup: factor out a helper to read rwstat counter
Break up the crazy statements into something readable.
Also switch to an unsigned counter as it can't ever turn negative.
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:29:04 +0000 (12:29 +0200)]
block: mark blk_rq_bio_prep as inline
This function just has a few trivial assignments and has two callers,
one of them in the fast path.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:29:03 +0000 (12:29 +0200)]
block: untangle the end of blk_bio_segment_split
Now that we don't need to assign the front/back segment sizes, we can
duplicate the segs assignment for the split vs no-split cases and
remove a whole chunk of boilerplate code.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:29:02 +0000 (12:29 +0200)]
block: simplify blk_recalc_rq_segments
Return the segment count and let the callers assign it, which makes the
code a little more obvious. Also pass the request instead of q plus bio
chain, allowing for the use of rq_for_each_bvec.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:29:01 +0000 (12:29 +0200)]
block: remove the bi_phys_segments field in struct bio
We only need the number of segments in the blk-mq submission path.
Remove the field from struct bio and return it from a variant of
blk_queue_split instead, so that it can be passed as an argument to
those functions that need the value.
This also means we stop recounting segments except for cloning
and partial segments.
To keep the number of arguments in this hot path down, remove the
pointless struct request_queue argument from any of the functions
that had it and grew a nr_segs argument.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:29:00 +0000 (12:29 +0200)]
block: remove blk_init_request_from_bio
lightnvm should have never used this function, as it is sending
passthrough requests, so switch it to blk_rq_append_bio like all the
other passthrough request users. Inline blk_init_request_from_bio into
the only remaining caller.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 6 Jun 2019 10:28:59 +0000 (12:28 +0200)]
block: initialize the write priority in blk_rq_bio_prep
The priority field also makes sense for passthrough requests, so
initialize it in blk_rq_bio_prep.
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dennis Zhou [Thu, 23 May 2019 20:10:18 +0000 (16:10 -0400)]
blk-iolatency: only account submitted bios
As is, iolatency recognizes done_bio and cleanup as ending paths. If a
request is marked REQ_NOWAIT and fails to get a request, the bio is
cleaned up via rq_qos_cleanup() and ended in bio_wouldblock_error().
This results in underflowing the inflight counter. Fix this by only
accounting bios that were actually submitted.
Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Greg Kroah-Hartman [Tue, 18 Jun 2019 15:45:49 +0000 (17:45 +0200)]
block: drbd: no need to check return value of debugfs_create functions
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Cc: Philipp Reisner <philipp.reisner@linbit.com>
Cc: Lars Ellenberg <lars.ellenberg@linbit.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: drbd-dev@lists.linbit.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Pavel Begunkov [Thu, 23 May 2019 15:43:11 +0000 (18:43 +0300)]
blk-core: Remove blk_end_request*() declarations
Commit a1ce35fa49852db60fc6e268 ("block: remove dead elevator code")
deleted blk_end_request() and friends, but some declarations are still
left. Purge them.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Wed, 19 Jun 2019 22:01:50 +0000 (15:01 -0700)]
block: code cleanup queue_poll_stat_show()
This is a pure code cleanup patch and doesn't change any functionality.
Having multiple coding styles in the code creates confusion when
someone tries to add new code.
Make queue_poll_stat_show() consistent by adding spaces around binary
operators with the rest of the code.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Wed, 19 Jun 2019 22:01:49 +0000 (15:01 -0700)]
block: use right format specifier for op
In function __blk_mq_debugfs_rq_show variable op has unsigned int type.
Since op can never be negative use %u format specifier to match the
variable type.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Wed, 19 Jun 2019 22:01:48 +0000 (15:01 -0700)]
block: get rid of redundant else
This is a pure code cleanup patch and doesn't change any functionality.
This removes the redundant else in the code, which is not needed since
we are returning from the function anyway.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Thu, 20 Jun 2019 04:56:58 +0000 (21:56 -0700)]
null_blk: remove duplicate 0 initialization
In null_add_dev() the struct nullb *nullb member is allocated
using kzalloc_node(), which returns zeroed memory.
In setup_queues(), which is called from null_add_dev(), we set
nullb->nr_queues = 0 on successful queue allocation, which is not
needed due to the earlier use of kzalloc_node().
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Minwoo Im [Sat, 8 Jun 2019 20:15:51 +0000 (05:15 +0900)]
block: move tag field position in struct request
__data_len and __sector are internal fields which should not be accessed
directly at the driver level, as the comment above them says. But the tag
field can be accessed directly by drivers, so we need to make the comment
right by moving tag somewhere else.
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Arnd Bergmann [Wed, 19 Jun 2019 13:19:44 +0000 (15:19 +0200)]
floppy: fix harmless clang build warning
clang warns about unusual code in floppy.c that looks like it
was intended to be a bit mask operation, checking for a specific
bit in the UDP->cmos variable (FLOPPY1_TYPE expands to '4' on
ARM):
drivers/block/floppy.c:3902:17: error: use of logical '&&' with constant operand [-Werror,-Wconstant-logical-operand]
if (!UDP->cmos && FLOPPY1_TYPE)
^ ~~~~~~~~~~~~
drivers/block/floppy.c:3902:17: note: use '&' for a bitwise operation
if (!UDP->cmos && FLOPPY1_TYPE)
The check here is redundant anyway: if FLOPPY1_TYPE is zero, then
assigning it to a zero UDP->cmos field does not change anything,
so removing the extra check has no effect other than shutting
up the warning.
On x86, this will no longer read a hardware register, as the
FLOPPY1_TYPE macro is not expanded if UDP->cmos is already
zero, but the result is the same.
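Roughly, the change removes the redundant test (a sketch of the before/after
shape, not the exact hunk):

	/* before */
	if (!UDP->cmos && FLOPPY1_TYPE)
		UDP->cmos = FLOPPY1_TYPE;

	/* after */
	if (!UDP->cmos)
		UDP->cmos = FLOPPY1_TYPE;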
Cc: Robert Elliott <elliott@hpe.com>
Cc: Keith Busch <kbusch@kernel.org>
Link: https://patchwork.kernel.org/patch/10851841/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Thu, 13 Jun 2019 14:14:21 +0000 (07:14 -0700)]
block: use req_op() to maintain consistency
This is a pure code cleanup patch and doesn't change any functionality.
In the block layer the req_op() macro is used to identify the request
operation, so replace the open coding of req_op() in blk-mq-debugfs.c.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tejun Heo [Thu, 13 Jun 2019 22:30:41 +0000 (15:30 -0700)]
blkcg, writeback: dead memcgs shouldn't contribute to writeback ownership arbitration
wbc_account_io() collects information on cgroup ownership of writeback
pages to determine which cgroup should own the inode. Pages can stay
associated with dead memcgs but we want to avoid attributing IOs to
dead blkcgs as much as possible as the association is likely to be
stale. However, currently, pages associated with dead memcgs
contribute to the accounting delaying and/or confusing the
arbitration.
Fix it by ignoring pages associated with dead memcgs.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tejun Heo [Thu, 13 Jun 2019 22:30:40 +0000 (15:30 -0700)]
blkcg: blkcg_activate_policy() should initialize ancestors first
When blkcg_activate_policy() is creating blkg_policy_data for existing
blkgs, it did so in the wrong order - descendants first. Fix it. None
of the existing controllers seem affected by this.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tejun Heo [Thu, 13 Jun 2019 22:30:39 +0000 (15:30 -0700)]
blkcg: perpcu_ref init/exit should be done from blkg_alloc/free()
blkg alloc is performed as a separate step from the rest of blkg
creation so that GFP_KERNEL allocations can be used when creating
blkgs from configuration file writes because otherwise user actions
may fail due to failures of opportunistic GFP_NOWAIT allocations.
While making blkgs use percpu_ref, 7fcf2b033b84 ("blkcg: change blkg
reference counting to use percpu_ref") incorrectly added an unconditional
opportunistic percpu_ref_init() to blkg_create(), breaking this
guarantee.
This patch moves percpu_ref_init() to blkg_alloc() so that it uses the
@gfp_mask that blkg_alloc() is called with. Also, percpu_ref_exit()
is moved to blkg_free() for consistency.
Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 7fcf2b033b84 ("blkcg: change blkg reference counting to use percpu_ref")
Cc: Dennis Zhou <dennis@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tejun Heo [Thu, 13 Jun 2019 22:30:38 +0000 (15:30 -0700)]
blkcg: update blkcg_print_stat() to handle larger outputs
Depending on the number of devices, blkcg stats can go over the
default seqfile buf size. seqfile normally retries with a larger
buffer but since the ->pd_stat() addition, blkcg_print_stat() doesn't
tell seqfile that overflow has happened and the output gets printed
truncated. Fix it by calling seq_commit() w/ -1 on possible
overflows.
Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 903d23f0a354 ("blk-cgroup: allow controllers to output their own stats")
Cc: stable@vger.kernel.org # v4.19+
Cc: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tejun Heo [Thu, 13 Jun 2019 22:30:37 +0000 (15:30 -0700)]
blk-iolatency: clear use_delay when io.latency is set to zero
If use_delay was non-zero when the latency target of a cgroup was set
to zero, it will stay stuck until io.latency is enabled on the cgroup
again. This keeps readahead disabled for the cgroup impacting
performance negatively.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Josef Bacik <jbacik@fb.com>
Fixes: d70675121546 ("block: introduce blk-iolatency io controller")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Gustavo A. R. Silva [Mon, 10 Jun 2019 15:04:12 +0000 (10:04 -0500)]
block: bio: Use struct_size() in kmalloc()
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct bio_map_data {
	...
	struct iovec iov[];
};

instance = kmalloc(sizeof(struct bio_map_data) + sizeof(struct iovec) * count,
		   GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = kmalloc(struct_size(instance, iov, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Gustavo A. R. Silva [Fri, 31 May 2019 18:47:54 +0000 (13:47 -0500)]
block: genhd: Use struct_size() helper
Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes, in particular in the
context in which this code is being used.
So, replace the following form:
sizeof(*new_ptbl) + target * sizeof(new_ptbl->part[0])
with:
struct_size(new_ptbl, part, target)
Also, notice that variable size is unnecessary, hence it is removed.
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Bob Liu [Sat, 15 Jun 2019 07:43:48 +0000 (01:43 -0600)]
block: null_blk: fix race condition for null_del_dev
A duplicate call of null_del_dev() will trigger a null pointer dereference
like the one below.
The reason is a race condition between nullb_device_power_store() and
nullb_group_drop_item().

CPU#0                              CPU#1
----------------                   -----------------
do_rmdir()
 >configfs_rmdir()
  >client_drop_item()
   >nullb_group_drop_item()
                                   nullb_device_power_store()
                                    >null_del_dev()
                                     >test_and_clear_bit(NULLB_DEV_FL_UP
    >null_del_dev()
     ^^^^^
     duplicated null_del_dev() triggers the null pointer error
                                     >clear_bit(NULLB_DEV_FL_UP

The fix is to keep the sequence of clearing NULLB_DEV_FL_UP and calling
null_del_dev().
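A sketch of the shape of the fix (illustrative, not the exact hunk): make the
flag test and the teardown a single atomic pair on both paths, so only the path
that actually clears the bit tears the device down:

	if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags))
		null_del_dev(dev->nullb);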
[ 698.613600] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 698.613608] #PF error: [normal kernel read fault]
[ 698.613611] PGD 0 P4D 0
[ 698.613619] Oops: 0000 [#1] SMP PTI
[ 698.613627] CPU: 3 PID: 6382 Comm: rmdir Not tainted 5.0.0+ #35
[ 698.613631] Hardware name: LENOVO 20LJS2EV08/20LJS2EV08, BIOS R0SET33W (1.17 ) 07/18/2018
[ 698.613644] RIP: 0010:null_del_dev+0xc/0x110 [null_blk]
[ 698.613649] Code: 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 0f 0b eb 97 e8 47 bb 2a e8 0f 1f 80 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41 54 53 <8b> 77 18 48 89 fb 4c 8b 27 48 c7 c7 40 57 1e c1 e8 bf c7 cb e8 48
[ 698.613654] RSP: 0018:ffffb887888bfde0 EFLAGS: 00010286
[ 698.613659] RAX: 0000000000000000 RBX: ffff9d436d92bc00 RCX: ffff9d43a9184681
[ 698.613663] RDX: ffffffffc11e5c30 RSI: 0000000068be6540 RDI: 0000000000000000
[ 698.613667] RBP: ffffb887888bfdf0 R08: 0000000000000001 R09: 0000000000000000
[ 698.613671] R10: ffffb887888bfdd8 R11: 0000000000000f16 R12: ffff9d436d92bc08
[ 698.613675] R13: ffff9d436d94e630 R14: ffffffffc11e5088 R15: ffffffffc11e5000
[ 698.613680] FS:  00007faa68be6540(0000) GS:ffff9d43d14c0000(0000) knlGS:0000000000000000
[ 698.613685] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 698.613689] CR2: 0000000000000018 CR3: 000000042f70c002 CR4: 00000000003606e0
[ 698.613693] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 698.613697] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 698.613700] Call Trace:
[ 698.613712] nullb_group_drop_item+0x50/0x70 [null_blk]
[ 698.613722] client_drop_item+0x29/0x40
[ 698.613728] configfs_rmdir+0x1ed/0x300
[ 698.613738] vfs_rmdir+0xb2/0x130
[ 698.613743] do_rmdir+0x1c7/0x1e0
[ 698.613750] __x64_sys_rmdir+0x17/0x20
[ 698.613759] do_syscall_64+0x5a/0x110
[ 698.613768] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Pavel Begunkov [Fri, 14 Jun 2019 11:39:26 +0000 (14:39 +0300)]
blk-mq/debugfs: Fix improper print qualifier
struct blk_rq_stat::mean is a u64 value, so use %llu
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Guoqing Jiang [Fri, 14 Jun 2019 22:41:11 +0000 (15:41 -0700)]
md/raid10: read balance chooses idlest disk for SSD
Andy reported that a raid10 array with SSD disks has poor read
performance: compared with raid10, raid1 can sometimes be 3x faster [1].
The thing is that raid10 chooses the lowest-distance disk for a read
request; however, that approach doesn't work well for SSDs, since they
have no spindle like HDDs. We should just read from the SSD with the
fewest pending IOs, as commit 9dedf60313fa4 ("md/raid1: read balance
chooses idlest disk for SSD") does for raid1.
So this commit selects the idlest disk for reads if the array has no
rotational disk; otherwise, read_balance uses the previous distance
priority algorithm. With the change, raid10 read performance increases
greatly per Andy's test [2].
[1]. https://marc.info/?l=linux-raid&m=155915890004761&w=2
[2]. https://marc.info/?l=linux-raid&m=155990654223786&w=2
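A rough sketch of the read-balance idea described above (illustrative only,
not the md/raid10 code):

	/*
	 * When every mirror is non-rotational, prefer the replica with the
	 * fewest pending I/Os instead of the one with the shortest distance.
	 */
	static int pick_read_disk(const unsigned int *nr_pending, int ndisks,
				  int all_nonrot, int closest_by_distance)
	{
		int i, best = 0;

		if (!all_nonrot)
			return closest_by_distance;	/* keep the distance heuristic for HDDs */

		for (i = 1; i < ndisks; i++)
			if (nr_pending[i] < nr_pending[best])
				best = i;

		return best;
	}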
Tested-by: Andy Smith <andy@strugglers.net>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Marcos Paulo de Souza [Fri, 14 Jun 2019 22:41:10 +0000 (15:41 -0700)]
md: raid1-10: Unify r{1,10}bio_pool_free
Avoid duplicated code, since both just execute a kfree.
Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Gustavo A. R. Silva [Fri, 14 Jun 2019 22:41:09 +0000 (15:41 -0700)]
md: raid10: Use struct_size() in kmalloc()
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
	int stuff;
	struct boo entry[];
};

size = sizeof(struct foo) + count * sizeof(struct boo);
instance = kmalloc(size, GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = kmalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yufen Yu [Fri, 14 Jun 2019 22:41:08 +0000 (15:41 -0700)]
md/raid1: get rid of extra blank line and space
This patch gets rid of extra blank lines and spaces, and
adds necessary spaces in the code.
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yufen Yu [Fri, 14 Jun 2019 22:41:07 +0000 (15:41 -0700)]
md: fix spelling typo and add necessary space
This patch fixes a spelling typo and adds a necessary space in the code.
In addition, the patch gets rid of an unnecessary 'if'.
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Marcos Paulo de Souza [Fri, 14 Jun 2019 22:41:06 +0000 (15:41 -0700)]
md: md.c: Return -ENODEV when mddev is NULL in rdev_attr_show
Commit c42d3240990814eec1e4b2b93fa0487fc4873aed
("md: return -ENODEV if rdev has no mddev assigned") changed
rdev_attr_store to return -ENODEV when rdev->mddev is NULL, now do the
same to rdev_attr_show.
Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Xiao Ni [Fri, 14 Jun 2019 22:41:05 +0000 (15:41 -0700)]
raid5-cache: Need to do start() part job after adding journal device
Commit d5d885fd514f ("md: introduce new personality funciton start()")
splits the init job into two parts. The first part, run(), does the jobs
that do not require the md threads. The second part, start(), does the
jobs that require the md threads.
Currently, adding a new journal device only does run(); it needs to do
the second part, start(), too.
Fixes: d5d885fd514f ("md: introduce new personality funciton start()")
Cc: stable@vger.kernel.org #v4.9+
Reported-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Marcos Paulo de Souza [Fri, 14 Jun 2019 22:41:04 +0000 (15:41 -0700)]
drivers: md: Unify common definitions of raid1 and raid10
These definitions are being moved to raid1-10.c.
Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linus Torvalds [Sat, 15 Jun 2019 01:41:18 +0000 (15:41 -1000)]
Merge tag 'for-linus-20190614' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
- Remove references to old schedulers for the scheduler switching and
blkio controller documentation (Andreas)
- Kill duplicate check for report zone for null_blk (Chaitanya)
- Two bcache fixes (Coly)
- Ensure that mq-deadline is selected if zoned block device is enabled,
as we need that to support them (Damien)
- Fix io_uring memory leak (Eric)
- ps3vram fallout from LBDAF removal (Geert)
- Redundant blk-mq debugfs debugfs_create return check cleanup (Greg)
- Extend NOPLM quirk for ST1000LM024 drives (Hans)
- Remove error path warning that can now trigger after the queue
removal/addition fixes (Ming)
* tag 'for-linus-20190614' of git://git.kernel.dk/linux-block:
block/ps3vram: Use %llu to format sector_t after LBDAF removal
libata: Extend quirks for the ST1000LM024 drives with NOLPM quirk
bcache: only set BCACHE_DEV_WB_RUNNING when cached device attached
bcache: fix stack corruption by PRECEDING_KEY()
blk-mq: remove WARN_ON(!q->elevator) from blk_mq_sched_free_requests
blkio-controller.txt: Remove references to CFQ
block/switching-sched.txt: Update to blk-mq schedulers
null_blk: remove duplicate check for report zone
blk-mq: no need to check return value of debugfs_create functions
io_uring: fix memory leak of UNIX domain socket inode
block: force select mq-deadline for zoned block devices
Linus Torvalds [Sat, 15 Jun 2019 01:25:27 +0000 (15:25 -1000)]
Merge branch 'i2c/for-current' of git://git./linux/kernel/git/wsa/linux
Pull i2c fixes from Wolfram Sang:
"I2C has two simple but wanted driver fixes for you"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: pca-platform: Fix GPIO lookup code
i2c: acorn: fix i2c warning
Casey Schaufler [Fri, 31 May 2019 10:53:33 +0000 (11:53 +0100)]
Smack: Restore the smackfsdef mount option and add missing prefixes
The 5.1 mount system rework changed the smackfsdef mount option to
smackfsdefault. This fixes the regression by making smackfsdef treated
the same way as smackfsdefault.
Also fix the smack_param_specs[] to have "smack" prefixes on all the
names. This isn't visible to a user unless they either:
(a) Try to mount a filesystem that's converted to the internal mount API
and that implements the ->parse_monolithic() context operation - and
only then if they call security_fs_context_parse_param() rather than
security_sb_eat_lsm_opts().
There are no examples of this upstream yet, but nfs will probably want
to do this for nfs2 or nfs3.
(b) Use fsconfig() to configure the filesystem - in which case
security_fs_context_parse_param() will be called.
The issue is that smack_sb_eat_lsm_opts() checks for the "smack" prefix
on the options, but smack_fs_context_parse_param() does not.
Fixes: c3300aaf95fb ("smack: get rid of match_token()")
Fixes: 2febd254adc4 ("smack: Implement filesystem context security hooks")
Cc: stable@vger.kernel.org
Reported-by: Jose Bollo <jose.bollo@iot.bzh>
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Fri, 14 Jun 2019 16:16:47 +0000 (06:16 -1000)]
Merge tag 'arm64-fixes' of git://git./linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Here are some arm64 fixes for -rc5.
The only non-trivial change (in terms of the diffstat) is fixing our
SVE ptrace API for big-endian machines, but the majority of this is
actually the addition of much-needed comments and updates to the
documentation to try to avoid this mess biting us again in future.
There are still a couple of small things on the horizon, but nothing
major at this point.
Summary:
- Fix broken SVE ptrace API when running in a big-endian configuration
- Fix performance regression due to off-by-one in TLBI range checking
- Fix build regression when using Clang"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64/sve: Fix missing SVE/FPSIMD endianness conversions
arm64: tlbflush: Ensure start/end of address range are aligned to stride
arm64: Don't unconditionally add -Wno-psabi to KBUILD_CFLAGS
Linus Torvalds [Fri, 14 Jun 2019 16:08:46 +0000 (06:08 -1000)]
Merge branch 'akpm' (patches from Andrew)
Merge misc fixes from Andrew Morton:
"16 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm/devm_memremap_pages: fix final page put race
PCI/P2PDMA: track pgmap references per resource, not globally
lib/genalloc: introduce chunk owners
PCI/P2PDMA: fix the gen_pool_add_virt() failure path
mm/devm_memremap_pages: introduce devm_memunmap_pages
drivers/base/devres: introduce devm_release_action()
mm/vmscan.c: fix trying to reclaim unevictable LRU page
coredump: fix race condition between collapse_huge_page() and core dumping
mm/mlock.c: change count_mm_mlocked_page_nr return type
mm: mmu_gather: remove __tlb_reset_range() for force flush
fs/ocfs2: fix race in ocfs2_dentry_attach_lock()
mm/vmscan.c: fix recent_rotated history
mm/mlock.c: mlockall error for flag MCL_ONFAULT
scripts/decode_stacktrace.sh: prefix addr2line with $CROSS_COMPILE
mm/list_lru.c: fix memory leak in __memcg_init_list_lru_node
mm: memcontrol: don't batch updates of local VM stats and events
Linus Torvalds [Fri, 14 Jun 2019 15:49:35 +0000 (05:49 -1000)]
Merge tag 'iommu-fixes-v5.2-rc4' of git://git./linux/kernel/git/joro/iommu
Pull iommu fixes from Joerg Roedel:
- three fixes for Intel VT-d to fix a potential dead-lock, a formatting
fix and a bit setting fix
- one fix for the ARM-SMMU to make it work on some platforms with
sub-optimal SMMU emulation
* tag 'iommu-fixes-v5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/arm-smmu: Avoid constant zero in TLBI writes
iommu/vt-d: Set the right field for Page Walk Snoop
iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock
iommu: Add missing new line for dma type
Linus Torvalds [Fri, 14 Jun 2019 15:48:29 +0000 (05:48 -1000)]
Merge tag 'gpio-v5.2-3' of git://git./linux/kernel/git/linusw/linux-gpio
Pull GPIO fix from Linus Walleij:
"A single fix for the PCA953x driver affecting some fringe variants of
the chip"
* tag 'gpio-v5.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
gpio: pca953x: hack to fix 24 bit gpio expanders
Linus Torvalds [Fri, 14 Jun 2019 15:37:06 +0000 (05:37 -1000)]
Merge tag 'sound-5.2-rc5' of git://git./linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"It might feel like deja vu to receive a bulk of changes at rc5, and it
happens again; we've got a collection of fixes for ASoC. Most of the fixes
are targeted at the newly merged SOF (Sound Open Firmware) stuff and
the relevant fixes for Intel platforms.
Other than that, there are a few regression fixes for the recent ASoC
core changes and HD-audio quirk, as well as a couple of FireWire fixes
and for other ASoC codecs"
* tag 'sound-5.2-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (54 commits)
Revert "ALSA: hda/realtek - Improve the headset mic for Acer Aspire laptops"
ALSA: ice1712: Check correct return value to snd_i2c_sendbytes (EWS/DMX 6Fire)
ALSA: oxfw: allow PCM capture for Stanton SCS.1m
ALSA: firewire-motu: fix destruction of data for isochronous resources
ASoC: Intel: sst: fix kmalloc call with wrong flags
ASoC: core: Fix deadlock in snd_soc_instantiate_card()
SoC: rt274: Fix internal jack assignment in set_jack callback
ALSA: hdac: fix memory release for SST and SOF drivers
ASoC: SOF: Intel: hda: use the defined ppcap functions
ASoC: core: move DAI pre-links initiation to snd_soc_instantiate_card
ASoC: Intel: cht_bsw_rt5672: fix kernel oops with platform_name override
ASoC: Intel: cht_bsw_nau8824: fix kernel oops with platform_name override
ASoC: Intel: bytcht_es8316: fix kernel oops with platform_name override
ASoC: Intel: cht_bsw_max98090: fix kernel oops with platform_name override
ASoC: sun4i-i2s: Add offset to RX channel select
ASoC: sun4i-i2s: Fix sun8i tx channel offset mask
ASoC: max98090: remove 24-bit format support if RJ is 0
ASoC: da7219: Fix build error without CONFIG_I2C
ASoC: SOF: Intel: hda: Fix COMPILE_TEST build error
ASoC: SOF: fix DSP oops definitions in FW ABI
...
Hui Wang [Fri, 14 Jun 2019 08:44:12 +0000 (16:44 +0800)]
Revert "ALSA: hda/realtek - Improve the headset mic for Acer Aspire laptops"
This reverts commit 9cb40eb184c4220d244a532bd940c6345ad9dbd9.
That commit introduces noise and a headphone playback issue after
rebooting or suspending/resuming, so let us revert it.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=203831
Fixes: 9cb40eb184c4 ("ALSA: hda/realtek - Improve the headset mic for Acer Aspire laptops")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Dan Williams [Thu, 13 Jun 2019 22:56:33 +0000 (15:56 -0700)]
mm/devm_memremap_pages: fix final page put race
Logan noticed that devm_memremap_pages_release() kills the percpu_ref,
drops all the page references that were acquired at init, and then
immediately proceeds to unplug, via arch_remove_memory(), the backing
pages for the pagemap. If for some reason device shutdown actually
collides with a busy / elevated-ref-count page, then arch_remove_memory()
should be deferred until after that reference is dropped.
As it stands the "wait for last page ref drop" happens *after*
devm_memremap_pages_release() returns, which is obviously too late and
can lead to crashes.
Fix this situation by assigning the responsibility to wait for the
percpu_ref to go idle to devm_memremap_pages() with a new ->cleanup()
callback. Implement the new cleanup callback for all
devm_memremap_pages() users: pmem, devdax, hmm, and p2pdma.
Link: http://lkml.kernel.org/r/155727339156.292046.5432007428235387859.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 41e94a851304 ("add devm_memremap_pages")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Williams [Thu, 13 Jun 2019 22:56:30 +0000 (15:56 -0700)]
PCI/P2PDMA: track pgmap references per resource, not globally
In preparation for fixing a race between devm_memremap_pages_release()
and the final put of a page from the device-page-map, allocate a
percpu-ref per p2pdma resource mapping.
Link: http://lkml.kernel.org/r/155727338646.292046.9922678317501435597.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Williams [Thu, 13 Jun 2019 22:56:27 +0000 (15:56 -0700)]
lib/genalloc: introduce chunk owners
The p2pdma facility enables a provider to publish a pool of dma
addresses for a consumer to allocate. A genpool is used internally by
p2pdma to collect dma resources, 'chunks', to be handed out to
consumers. Whenever a consumer allocates a resource it needs to pin the
'struct dev_pagemap' instance that backs the chunk selected by
pci_alloc_p2pmem().
Currently that reference is taken globally on the entire provider
device. That sets up a lifetime mismatch whereby the p2pdma core needs
to maintain hacks to make sure the percpu_ref is not released twice.
This lifetime mismatch also stands in the way of a fix to
devm_memremap_pages() whereby devm_memremap_pages_release() must wait for
the percpu_ref ->release() callback to complete before it can proceed to
teardown pages.
So, towards fixing this situation, introduce the ability to store a 'chunk
owner' at gen_pool_add() time, and a facility to retrieve the owner at
gen_pool_{alloc,free}() time. For p2pdma this will be used to store and
recall individual dev_pagemap reference counter instances per-chunk.
Link: http://lkml.kernel.org/r/155727338118.292046.13407378933221579644.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Williams [Thu, 13 Jun 2019 22:56:24 +0000 (15:56 -0700)]
PCI/P2PDMA: fix the gen_pool_add_virt() failure path
The pci_p2pdma_add_resource() implementation immediately frees the pgmap
if gen_pool_add_virt() fails. However, that means that when @dev
triggers a devres release devm_memremap_pages_release() will crash
trying to access the freed @pgmap.
Use the new devm_memunmap_pages() to manually free the mapping in the
error path.
Link: http://lkml.kernel.org/r/155727337603.292046.13101332703665246702.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Fixes: 52916982af48 ("PCI/P2PDMA: Support peer-to-peer memory")
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Williams [Thu, 13 Jun 2019 22:56:21 +0000 (15:56 -0700)]
mm/devm_memremap_pages: introduce devm_memunmap_pages
Use the new devm_release_action() facility to allow
devm_memremap_pages_release() to be manually triggered.
Link: http://lkml.kernel.org/r/155727337088.292046.5774214552136776763.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Williams [Thu, 13 Jun 2019 22:56:18 +0000 (15:56 -0700)]
drivers/base/devres: introduce devm_release_action()
Patch series "mm/devm_memremap_pages: Fix page release race", v2.
Logan audited the devm_memremap_pages() shutdown path and noticed that
it was possible to proceed to arch_remove_memory() before all potential
page references have been reaped.
Introduce a new ->cleanup() callback to do the work of waiting for any
straggling page references and then perform the percpu_ref_exit() in
devm_memremap_pages_release() context.
For p2pdma this involves some deeper reworks to reference count
resources on a per-instance basis rather than a per pci-device basis. A
modified genalloc api is introduced to convey a driver-private pointer
through gen_pool_{alloc,free}() interfaces. Also, a
devm_memunmap_pages() api is introduced since p2pdma does not
auto-release resources on a setup failure.
The dax and pmem changes pass the nvdimm unit tests, and the p2pdma
changes should now pass testing with the pci_p2pdma_release() fix.
Jérôme, how does this look for HMM?
This patch (of 6):
The devm_add_action() facility allows a resource allocation routine to
add custom devm semantics. One such user is devm_memremap_pages().
There is now a need to manually trigger
devm_memremap_pages_release(). Introduce devm_release_action() so the
release action can be triggered via a new devm_memunmap_pages() api in a
follow-on change.
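A minimal usage sketch of the new pairing (the function and variable names
here are hypothetical, not from the patch):

	#include <linux/device.h>

	/* hypothetical teardown action; "data" is whatever the setup produced */
	static void my_teardown(void *data)
	{
		/* undo the paired setup work here */
	}

	static int my_setup(struct device *dev, void *data)
	{
		/* register the custom release action at allocation time */
		return devm_add_action_or_reset(dev, my_teardown, data);
	}

	static void my_early_teardown(struct device *dev, void *data)
	{
		/* run my_teardown(data) now and drop it from the devres list,
		 * instead of waiting for the device to be unbound
		 */
		devm_release_action(dev, my_teardown, data);
	}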
Link: http://lkml.kernel.org/r/155727336530.292046.2926860263201336366.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Thu, 13 Jun 2019 22:56:15 +0000 (15:56 -0700)]
mm/vmscan.c: fix trying to reclaim unevictable LRU page
There was the below bug report from Wu Fangsuo.
On the CMA allocation path, isolate_migratepages_range() could isolate
unevictable LRU pages and reclaim_clean_page_from_list() can try to
reclaim them if they are clean file-backed pages.
page: ffffffbf02f33b40 count:86 mapcount:84 mapping: ffffffc08fa7a810 index:0x24
flags: 0x19040c(referenced|uptodate|arch_1|mappedtodisk|unevictable|mlocked)
raw: 000000000019040c ffffffc08fa7a810 0000000000000024 0000005600000053
raw: ffffffc009b05b20 ffffffc009b05b20 0000000000000000 ffffffc09bf3ee80
page dumped because: VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page))
page->mem_cgroup: ffffffc09bf3ee80
------------[ cut here ]------------
kernel BUG at /home/build/farmland/adroid9.0/kernel/linux/mm/vmscan.c:1350!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 7125 Comm: syz-executor Tainted: G S 4.14.81 #3
Hardware name: ASR AQUILAC EVB (DT)
task: ffffffc00a54cd00 task.stack: ffffffc009b00000
PC is at shrink_page_list+0x1998/0x3240
LR is at shrink_page_list+0x1998/0x3240
pc : [<ffffff90083a2158>] lr : [<ffffff90083a2158>] pstate: 60400045
sp : ffffffc009b05940
..
shrink_page_list+0x1998/0x3240
reclaim_clean_pages_from_list+0x3c0/0x4f0
alloc_contig_range+0x3bc/0x650
cma_alloc+0x214/0x668
ion_cma_allocate+0x98/0x1d8
ion_alloc+0x200/0x7e0
ion_ioctl+0x18c/0x378
do_vfs_ioctl+0x17c/0x1780
SyS_ioctl+0xac/0xc0
Wu found it's due to commit ad6b67041a45 ("mm: remove SWAP_MLOCK in
ttu"). Before that, unevictable pages went to cull_mlocked so that we
couldn't reach the VM_BUG_ON_PAGE line.
To fix the issue, this patch filters out unevictable LRU pages in
reclaim_clean_pages_from_list() on the CMA path.
Link: http://lkml.kernel.org/r/20190524071114.74202-1-minchan@kernel.org
Fixes: ad6b67041a45 ("mm: remove SWAP_MLOCK in ttu")
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Wu Fangsuo <fangsuowu@asrmicro.com>
Debugged-by: Wu Fangsuo <fangsuowu@asrmicro.com>
Tested-by: Wu Fangsuo <fangsuowu@asrmicro.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Pankaj Suryawanshi <pankaj.suryawanshi@einfochips.com>
Cc: <stable@vger.kernel.org> [4.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrea Arcangeli [Thu, 13 Jun 2019 22:56:11 +0000 (15:56 -0700)]
coredump: fix race condition between collapse_huge_page() and core dumping
When fixing the race conditions between the coredump and the mmap_sem
holders outside the context of the process, we focused on
mmget_not_zero()/get_task_mm() callers in 04f5866e41fb70 ("coredump: fix
race condition between mmget_not_zero()/get_task_mm() and core dumping"),
but those aren't the only cases where the mmap_sem can be
taken outside of the context of the process as Michal Hocko noticed
while backporting that commit to older -stable kernels.
If mmgrab() is called in the context of the process, but then the
mm_count reference is transferred outside the context of the process,
that can also be a problem if the mmap_sem has to be taken for writing
through that mm_count reference.
khugepaged registration calls mmgrab() in the context of the process,
but the mmap_sem for writing is taken later in the context of the
khugepaged kernel thread.
collapse_huge_page() after taking the mmap_sem for writing doesn't
modify any vma, so it's not obvious that it could cause a problem to the
coredump, but it happens to modify the pmd in a way that breaks an
invariant that pmd_trans_huge_lock() relies upon. collapse_huge_page()
needs the mmap_sem for writing just to block concurrent page faults that
call pmd_trans_huge_lock().
Specifically the invariant that "!pmd_trans_huge()" cannot become a
"pmd_trans_huge()" doesn't hold while collapse_huge_page() runs.
The coredump will call __get_user_pages() without mmap_sem for reading,
which eventually can invoke a lockless page fault which will need a
functional pmd_trans_huge_lock().
So collapse_huge_page() needs to use mmget_still_valid() to check it's
not running concurrently with the coredump... as long as the coredump
can invoke page faults without holding the mmap_sem for reading.
This has "Fixes: khugepaged" to facilitate backporting, but in my view
it's more a bug in the coredump code that will eventually have to be
rewritten to stop invoking page faults without the mmap_sem for reading.
So the long term plan is still to drop all mmget_still_valid().
Link: http://lkml.kernel.org/r/20190607161558.32104-1-aarcange@redhat.com
Fixes: ba76149f47d8 ("thp: khugepaged")
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
swkhack [Thu, 13 Jun 2019 22:56:08 +0000 (15:56 -0700)]
mm/mlock.c: change count_mm_mlocked_page_nr return type
On a 64-bit machine the value of "vma->vm_end - vma->vm_start" may be
negative when using 32 bit ints and the "count >> PAGE_SHIFT"'s result
will be wrong. So change the local variable and return value to
unsigned long to fix the problem.
Link: http://lkml.kernel.org/r/20190513023701.83056-1-swkhack@gmail.com
Fixes: 0cf2f6f6dc60 ("mm: mlock: check against vma for actual mlock() size")
Signed-off-by: swkhack <swkhack@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yang Shi [Thu, 13 Jun 2019 22:56:05 +0000 (15:56 -0700)]
mm: mmu_gather: remove __tlb_reset_range() for force flush
A few new fields were added to mmu_gather to make the TLB flush smarter for
huge pages by tracking which levels of the page table have changed.
__tlb_reset_range() is used to reset all of this page-table state back to
"unchanged"; it is called by the TLB flush code when parallel mapping changes
are made to the same range under a non-exclusive lock (i.e. read mmap_sem).
Before commit
dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in
munmap"), the syscalls (e.g. MADV_DONTNEED, MADV_FREE) which may update
PTEs in parallel did not remove page tables. But the aforementioned
commit may do munmap() under the read mmap_sem and free page tables. This
can result in a program hang on aarch64, as reported by Jan Stancek. The
problem can be reproduced with his test program, slightly modified below.
---8<---
/* Includes and the DISTANT_MMAP_SIZE value below were not part of the
 * original reproducer; they are added here (value assumed) so the test
 * compiles standalone. */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/mman.h>

#define DISTANT_MMAP_SIZE (64UL * 1024 * 1024)	/* assumed */

static int map_size = 4096;
static int num_iter = 500;
static long threads_total;
static void *distant_area;

void *map_write_unmap(void *ptr)
{
	int *fd = ptr;
	unsigned char *map_address;
	int i, j = 0;

	for (i = 0; i < num_iter; i++) {
		map_address = mmap(distant_area, (size_t) map_size,
				   PROT_WRITE | PROT_READ,
				   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
		if (map_address == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}

		for (j = 0; j < map_size; j++)
			map_address[j] = 'b';

		if (munmap(map_address, map_size) == -1) {
			perror("munmap");
			exit(1);
		}
	}

	return NULL;
}

void *dummy(void *ptr)
{
	return NULL;
}

int main(void)
{
	pthread_t thid[2];

	/* hint for mmap in map_write_unmap() */
	distant_area = mmap(0, DISTANT_MMAP_SIZE, PROT_WRITE | PROT_READ,
			    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	munmap(distant_area, (size_t)DISTANT_MMAP_SIZE);
	distant_area += DISTANT_MMAP_SIZE / 2;

	while (1) {
		pthread_create(&thid[0], NULL, map_write_unmap, NULL);
		pthread_create(&thid[1], NULL, dummy, NULL);

		pthread_join(thid[0], NULL);
		pthread_join(thid[1], NULL);
	}
}
---8<---
The program may bring in parallel execution like below:
        t1                                      t2
munmap(map_address)
  downgrade_write(&mm->mmap_sem);
  unmap_region()
    tlb_gather_mmu()
      inc_tlb_flush_pending(tlb->mm);
    free_pgtables()
      tlb->freed_tables = 1
      tlb->cleared_pmds = 1

                                        pthread_exit()
                                        madvise(thread_stack, 8M, MADV_DONTNEED)
                                          zap_page_range()
                                            tlb_gather_mmu()
                                              inc_tlb_flush_pending(tlb->mm);

    tlb_finish_mmu()
      if (mm_tlb_flush_nested(tlb->mm))
        __tlb_reset_range()
__tlb_reset_range() would reset the freed_tables and cleared_* bits, but this
may cause inconsistency for munmap(), which does free page tables. As a
result, some architectures, e.g. aarch64, may not flush the TLB as completely
as expected, leaving stale TLB entries behind.
Use a fullmm flush instead, since it yields much better performance on
aarch64 and non-fullmm doesn't yield a significant difference on x86.
The original proposed fix came from Jan Stancek, who did most of the
debugging of this issue; I just wrapped everything up together.
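A sketch of the approach in tlb_finish_mmu() (simplified; the exact upstream hunk may differ):
	if (mm_tlb_flush_nested(tlb->mm)) {
		/*
		 * A racing munmap() may have freed page tables for this
		 * range; don't throw the freed_tables/cleared_* hints away,
		 * force a full-mm flush instead.
		 */
		tlb->fullmm = 1;
		__tlb_reset_range(tlb);
		tlb->freed_tables = 1;
	}
	tlb_flush_mmu(tlb);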
Jan's testing results:
v5.2-rc2-24-gbec7550cca10
--------------------------
        mean     stddev
real    37.382   2.780
user     1.420   0.078
sys     54.658   1.855

v5.2-rc2-24-gbec7550cca10 + "mm: mmu_gather: remove __tlb_reset_range() for force flush"
-----------------------------------------------------------------------------------------
        mean     stddev
real    37.119   2.105
user     1.548   0.087
sys     55.698   1.357
[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/1558322252-113575-1-git-send-email-yang.shi@linux.alibaba.com
Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Suggested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org> [4.20+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Wengang Wang [Thu, 13 Jun 2019 22:56:01 +0000 (15:56 -0700)]
fs/ocfs2: fix race in ocfs2_dentry_attach_lock()
ocfs2_dentry_attach_lock() can be executed in parallel threads against the
same dentry. Make that race safe. The race is like this:
thread A                                thread B

(A1) enter ocfs2_dentry_attach_lock,
     seeing dentry->d_fsdata is NULL,
     and no alias found by
     ocfs2_find_local_alias, so kmalloc
     a new ocfs2_dentry_lock structure
     to local variable "dl", dl1
                .....
                                        (B1) enter ocfs2_dentry_attach_lock,
                                             seeing dentry->d_fsdata is NULL,
                                             and no alias found by
                                             ocfs2_find_local_alias, so kmalloc
                                             a new ocfs2_dentry_lock structure
                                             to local variable "dl", dl2.
                                                        ......
(A2) set dentry->d_fsdata with dl1,
     call ocfs2_dentry_lock() and increase
     dl1->dl_lockres.l_ro_holders to 1 on
     success.
                ......
                                        (B2) set dentry->d_fsdata with dl2,
                                             call ocfs2_dentry_lock() and increase
                                             dl2->dl_lockres.l_ro_holders to 1 on
                                             success.
                                                        ......
(A3) call ocfs2_dentry_unlock()
     and decrease
     dl2->dl_lockres.l_ro_holders to 0
     on success.
                ....
                                        (B3) call ocfs2_dentry_unlock(),
                                             decreasing
                                             dl2->dl_lockres.l_ro_holders, but
                                             see it's zero now, panic
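One way to close the window, sketched below, is to re-check dentry->d_fsdata under dentry_attach_lock and drop the duplicate dl; this is an illustration of the idea, not necessarily the exact upstream hunk:
	spin_lock(&dentry_attach_lock);
	if (unlikely(dentry->d_fsdata && !alias)) {
		/* a racing thread attached its own dl first; drop ours */
		spin_unlock(&dentry_attach_lock);
		ocfs2_lock_res_free(&dl->dl_lockres);
		kfree(dl);
		return 0;
	}
	dentry->d_fsdata = dl;
	spin_unlock(&dentry_attach_lock);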
Link: http://lkml.kernel.org/r/20190529174636.22364-1-wen.gang.wang@oracle.com
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Reported-by: Daniel Sobe <daniel.sobe@nxp.com>
Tested-by: Daniel Sobe <daniel.sobe@nxp.com>
Reviewed-by: Changwei Ge <gechangwei@live.cn>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kirill Tkhai [Thu, 13 Jun 2019 22:55:58 +0000 (15:55 -0700)]
mm/vmscan.c: fix recent_rotated history
Johannes pointed out that after commit
886cf1901db9 ("mm: move
recent_rotated pages calculation to shrink_inactive_list()") we lost all
zone_reclaim_stat::recent_rotated history.
This fixes it.
Link: http://lkml.kernel.org/r/155905972210.26456.11178359431724024112.stgit@localhost.localdomain
Fixes: 886cf1901db9 ("mm: move recent_rotated pages calculation to shrink_inactive_list()")
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reported-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Potyra, Stefan [Thu, 13 Jun 2019 22:55:55 +0000 (15:55 -0700)]
mm/mlock.c: mlockall error for flag MCL_ONFAULT
If mlockall() is called with only MCL_ONFAULT as flag, it removes any
previously applied lockings and does nothing else.
This behavior is counter-intuitive and doesn't match the Linux man page.
For mlockall():
EINVAL Unknown flags were specified or MCL_ONFAULT was specified
without either MCL_FUTURE or MCL_CURRENT.
Consequently, return the error EINVAL, if only MCL_ONFAULT is passed.
That way, applications will at least detect that they are calling
mlockall() incorrectly.
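A sketch of the flag check as it would sit at the top of the mlockall() syscall (surrounding code omitted):
	if (!flags || (flags & ~(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT)) ||
	    flags == MCL_ONFAULT)
		return -EINVAL;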
Link: http://lkml.kernel.org/r/20190527075333.GA6339@er01809n.ebgroup.elektrobit.com
Fixes: b0f205c2a308 ("mm: mlock: add mlock flags to enable VM_LOCKONFAULT usage")
Signed-off-by: Stefan Potyra <Stefan.Potyra@elektrobit.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Manuel Traut [Thu, 13 Jun 2019 22:55:52 +0000 (15:55 -0700)]
scripts/decode_stacktrace.sh: prefix addr2line with $CROSS_COMPILE
At least for ARM64 kernels compiled with the crosstoolchain from
Debian/stretch or with the toolchain from kernel.org the line number is
not decoded correctly by 'decode_stacktrace.sh':
$ echo "[ 136.513051] f1+0x0/0xc [kcrash]" | \
CROSS_COMPILE=/opt/gcc-8.1.0-nolibc/aarch64-linux/bin/aarch64-linux- \
./scripts/decode_stacktrace.sh /scratch/linux-arm64/vmlinux \
/scratch/linux-arm64 \
/nfs/debian/lib/modules/4.20.0-devel
[ 136.513051] f1 (/linux/drivers/staging/kcrash/kcrash.c:68) kcrash
If addr2line from the toolchain is used the decoded line number is correct:
[ 136.513051] f1 (/linux/drivers/staging/kcrash/kcrash.c:57) kcrash
Link: http://lkml.kernel.org/r/20190527083425.3763-1-manut@linutronix.de
Signed-off-by: Manuel Traut <manut@linutronix.de>
Acked-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Shakeel Butt [Thu, 13 Jun 2019 22:55:49 +0000 (15:55 -0700)]
mm/list_lru.c: fix memory leak in __memcg_init_list_lru_node
Syzbot reported the following memory leak:
BUG: memory leak
unreferenced object 0xffff888114f26040 (size 32):
comm "syz-executor626", pid 7056, jiffies
4294948701 (age 39.410s)
hex dump (first 32 bytes):
40 60 f2 14 81 88 ff ff 40 60 f2 14 81 88 ff ff @`......@`......
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
slab_post_alloc_hook mm/slab.h:439 [inline]
slab_alloc mm/slab.c:3326 [inline]
kmem_cache_alloc_trace+0x13d/0x280 mm/slab.c:3553
kmalloc include/linux/slab.h:547 [inline]
__memcg_init_list_lru_node+0x58/0xf0 mm/list_lru.c:352
memcg_init_list_lru_node mm/list_lru.c:375 [inline]
memcg_init_list_lru mm/list_lru.c:459 [inline]
__list_lru_init+0x193/0x2a0 mm/list_lru.c:626
alloc_super+0x2e0/0x310 fs/super.c:269
sget_userns+0x94/0x2a0 fs/super.c:609
sget+0x8d/0xb0 fs/super.c:660
mount_nodev+0x31/0xb0 fs/super.c:1387
fuse_mount+0x2d/0x40 fs/fuse/inode.c:1236
legacy_get_tree+0x27/0x80 fs/fs_context.c:661
vfs_get_tree+0x2e/0x120 fs/super.c:1476
do_new_mount fs/namespace.c:2790 [inline]
do_mount+0x932/0xc50 fs/namespace.c:3110
ksys_mount+0xab/0x120 fs/namespace.c:3319
__do_sys_mount fs/namespace.c:3333 [inline]
__se_sys_mount fs/namespace.c:3330 [inline]
__x64_sys_mount+0x26/0x30 fs/namespace.c:3330
do_syscall_64+0x76/0x1a0 arch/x86/entry/common.c:301
entry_SYSCALL_64_after_hwframe+0x44/0xa9
This is a simple off-by-one bug on the error path.
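The class of bug, sketched in a generic form (the identifiers below are illustrative, not the exact upstream code): when allocation fails at index i, entries begin..i-1 were allocated, so the cleanup must run up to i (exclusive), not i - 1.
	for (i = begin; i < end; i++) {
		table[i] = kmalloc(sizeof(*table[i]), GFP_KERNEL);
		if (!table[i])
			goto fail;
	}
	return 0;
fail:
	free_entries(table, begin, i);	/* was: begin, i - 1 -> leaked one entry */
	return -ENOMEM;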
Link: http://lkml.kernel.org/r/20190528043202.99980-1-shakeelb@google.com
Fixes: 60d3fd32a7a9 ("list_lru: introduce per-memcg lists")
Reported-by: syzbot+f90a420dfe2b1b03cb2c@syzkaller.appspotmail.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: <stable@vger.kernel.org> [4.0+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner [Thu, 13 Jun 2019 22:55:46 +0000 (15:55 -0700)]
mm: memcontrol: don't batch updates of local VM stats and events
The kernel test robot noticed a 26% will-it-scale pagefault regression
from commit
42a300353577 ("mm: memcontrol: fix recursive statistics
correctness & scalabilty"). This appears to be caused by bouncing the
additional cachelines from the new hierarchical statistics counters.
We can fix this by getting rid of the batched local counters instead.
Originally, there were *only* group-local counters, and they were fully
maintained per cpu. A reader of a stats file high up in the cgroup tree
would have to walk the entire subtree and collect each level's per-cpu
counters to get the recursive view. This was prohibitively expensive,
and so we switched to per-cpu batched updates of the local counters
during
a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
memory.stat reporting"), reducing the complexity from nr_subgroups *
nr_cpus to nr_subgroups.
With growing machines and cgroup trees, the tree walk itself became too
expensive for monitoring top-level groups, and this is when the culprit
patch added hierarchy counters on each cgroup level. When the per-cpu
batch size would be reached, both the local and the hierarchy counters
would get batch-updated from the per-cpu delta simultaneously.
This makes local and hierarchical counter reads blazingly fast, but it
unfortunately makes the write-side too cache line intense.
Since local counter reads were never a problem - we only centralized
them to accelerate the hierarchy walk - and use of the local counters
is becoming rarer due to replacement with hierarchical views (ongoing
rework in the page reclaim and workingset code), we can make those local
counters unbatched per-cpu counters again.
The scheme will then be as such:
when a memcg statistic changes, the writer will:
- update the local counter (per-cpu)
- update the batch counter (per-cpu). If the batch is full:
- spill the batch into the group's atomic_t
- spill the batch into all ancestors' atomic_ts
- empty out the batch counter (per-cpu)
when a local memcg counter is read, the reader will:
- collect the local counter from all cpus
when a hierarchy memcg counter is read, the reader will:
- read the atomic_t
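A condensed sketch of the writer path in this scheme (field and helper names are assumptions, not the exact upstream code):
	void mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
	{
		struct mem_cgroup *mi;
		long x;

		/* local counter: plain, unbatched per-cpu add */
		__this_cpu_add(memcg->vmstats_local->stat[idx], val);

		/* recursive counters: per-cpu batch, spilled up the hierarchy */
		x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]);
		if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) {
			for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
				atomic_long_add(x, &mi->vmstats[idx]);
			x = 0;
		}
		__this_cpu_write(memcg->vmstats_percpu->stat[idx], x);
	}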
We might be able to simplify this further and make the recursive
counters unbatched per-cpu counters as well (batch upward propagation,
but leave per-cpu collection to the readers), but that will require a
more in-depth analysis and testing of all the callsites. Deal with the
immediate regression for now.
Link: http://lkml.kernel.org/r/20190521151647.GB2870@cmpxchg.org
Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: kernel test robot <rong.a.chen@intel.com>
Tested-by: kernel test robot <rong.a.chen@intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Thu, 13 Jun 2019 15:59:05 +0000 (05:59 -1000)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid
Pull HID fixes from Jiri Kosina:
- regression fixes (reverts) for module loading changes that turned out
to be incompatible with some userspace, from Benjamin Tissoires
- regression fix for special Logitech unifying receiver 0xc52f, from
Hans de Goede
- a few device ID additions to logitech driver, from Hans de Goede
- fix for Bluetooth support on 2nd-gen Wacom Intuos Pro, from Jason
Gerecke
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
HID: logitech-dj: Fix 064d:c52f receiver support
Revert "HID: core: Call request_module before doing device_add"
Revert "HID: core: Do not call request_module() in async context"
Revert "HID: Increase maximum report size allowed by hid_field_extract()"
HID: a4tech: fix horizontal scrolling
HID: hyperv: Add a module description line
HID: logitech-hidpp: Add support for the S510 remote control
HID: multitouch: handle faulty Elo touch device
HID: wacom: Sync INTUOSP2_BT touch state after each frame if necessary
HID: wacom: Correct button numbering 2nd-gen Intuos Pro over Bluetooth
HID: wacom: Send BTN_TOUCH in response to INTUOSP2_BT eraser contact
HID: wacom: Don't report anything prior to the tool entering range
HID: wacom: Don't set tool type until we're in range
HID: rmi: Use SET_REPORT request on control endpoint for Acer Switch 3 and 5
HID: logitech-hidpp: add support for the MX5500 keyboard
HID: logitech-dj: add support for the Logitech MX5500's Bluetooth Mini-Receiver
HID: i2c-hid: add iBall Aer3 to descriptor override
Takashi Iwai [Thu, 13 Jun 2019 15:33:34 +0000 (17:33 +0200)]
Merge tag 'asoc-fix-v5.2-rc4' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fixes for v5.2
There's an awful lot of fixes here, almost all for the newly introduced
SoF DSP drivers (including a few things it turned up in shared code).
This is a large and complex piece of code so it's not surprising that
there have been quite a few issues here, fortunately things seem to have
mostly calmed down now. Otherwise there's just a smattering of small fixes.
Geert Uytterhoeven [Thu, 13 Jun 2019 07:30:06 +0000 (09:30 +0200)]
block/ps3vram: Use %llu to format sector_t after LBDAF removal
The removal of CONFIG_LBDAF changed the type of sector_t from "unsigned
long" to "u64" aka "unsigned long long" on 64-bit platforms, leading to
a compiler warning regression:
drivers/block/ps3vram.c: In function ‘ps3vram_probe’:
drivers/block/ps3vram.c:770:23: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 4 has type ‘sector_t {aka long long unsigned int}’ [-Wformat=]
Fix this by using "%llu" instead.
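A sketch of the resulting format string (the message text and shift are illustrative; the cast keeps the statement safe however sector_t is typedef'd):
	dev_info(&dev->core, "%s: using %llu MiB of GPU memory\n",
		 gendisk->disk_name,
		 (unsigned long long)(get_capacity(gendisk) >> 11));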
Fixes: 72deb455b5ec619f ("block: remove CONFIG_LBDAF")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans de Goede [Tue, 11 Jun 2019 14:32:59 +0000 (16:32 +0200)]
libata: Extend quirks for the ST1000LM024 drives with NOLPM quirk
We've received a bug report that using LPM with ST1000LM024 drives leads
to system lockups, so it seems these models are buggy in more than one
way. Add the NOLPM quirk to the existing quirks entry for BROKEN_FPDMA_AA.
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1571330
Cc: stable@vger.kernel.org
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Sun, 9 Jun 2019 22:13:35 +0000 (06:13 +0800)]
bcache: only set BCACHE_DEV_WB_RUNNING when cached device attached
When people set a writeback percent via sysfs file,
/sys/block/bcache<N>/bcache/writeback_percent
current code directly sets BCACHE_DEV_WB_RUNNING to dc->disk.flags
and schedules kworker dc->writeback_rate_update.
If there is no cache set attached, the writeback kernel thread is not
actually running, so running dc->writeback_rate_update does not make sense
and may cause a NULL pointer dereference when the cache set pointer is
referenced inside update_writeback_rate().
This patch checks whether the cache set pointer (dc->disk.c) is NULL in the
sysfs interface handler, and only sets BCACHE_DEV_WB_RUNNING and schedules
dc->writeback_rate_update when dc->disk.c is not NULL (i.e. the cached
device is attached to a cache set).
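A sketch of the guarded sysfs path (simplified from the description above; the exact surrounding code is an assumption):
	if (attr == &sysfs_writeback_percent)
		if ((dc->disk.c != NULL) &&
		    !test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags))
			schedule_delayed_work(&dc->writeback_rate_update,
					dc->writeback_rate_update_seconds * HZ);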
This problem might have been introduced by the initial bcache commit, but
commit 3fd47bfe55b0 ("bcache: stop dc->writeback_rate_update properly")
changed part of the original code, so I add 'Fixes: 3fd47bfe55b0' to
indicate from which commit this patch can be applied.
Fixes: 3fd47bfe55b0 ("bcache: stop dc->writeback_rate_update properly")
Reported-by: Bjørn Forsman <bjorn.forsman@gmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Bjørn Forsman <bjorn.forsman@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Sun, 9 Jun 2019 22:13:34 +0000 (06:13 +0800)]
bcache: fix stack corruption by PRECEDING_KEY()
Recently people reported that bcache code compiled with gcc9 is broken; one
of the buggy behaviors I observed is that two adjacent 4KB I/Os that should
merge into one do not. This finally turned out to be a stack corruption
caused by the macro PRECEDING_KEY().
See how PRECEDING_KEY() is defined in bset.h,
437 #define PRECEDING_KEY(_k) \
438 ({ \
439 struct bkey *_ret = NULL; \
440 \
441 if (KEY_INODE(_k) || KEY_OFFSET(_k)) { \
442 _ret = &KEY(KEY_INODE(_k), KEY_OFFSET(_k), 0); \
443 \
444 if (!_ret->low) \
445 _ret->high--; \
446 _ret->low--; \
447 } \
448 \
449 _ret; \
450 })
At line 442, _ret points to the address of an on-stack variable built by
KEY(); the lifetime of this on-stack variable is limited to lines 442-446.
Once _ret is returned to bch_btree_insert_key(), the returned address points
to an invalid stack location, and that location is overwritten by the
following call to bch_btree_iter_init(). The argument 'search' of
bch_btree_iter_init() then points to some address inside the stack frame of
bch_btree_iter_init(); the exact address depends on how the compiler
allocates stack space. Now the stack is corrupted.
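A sketch of the safe pattern: have the caller provide the storage, so no address of a compound-statement temporary ever escapes (an illustration of the approach, not necessarily the exact upstream replacement):
	static inline void preceding_key(struct bkey *k, struct bkey **preceding_key_p)
	{
		if (KEY_INODE(k) || KEY_OFFSET(k)) {
			(**preceding_key_p) = KEY(KEY_INODE(k), KEY_OFFSET(k), 0);

			if (!(*preceding_key_p)->low)
				(*preceding_key_p)->high--;
			(*preceding_key_p)->low--;
		} else {
			(*preceding_key_p) = NULL;
		}
	}
The caller then declares a struct bkey on its own stack and passes its address, so the storage outlives the whole insertion.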
Fixes: 0eacac22034c ("bcache: PRECEDING_KEY()")
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Rolf Fokkens <rolf@rolffokkens.nl>
Reviewed-by: Pierre JUHEN <pierre.juhen@orange.fr>
Tested-by: Shenghui Wang <shhuiw@foxmail.com>
Tested-by: Pierre JUHEN <pierre.juhen@orange.fr>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Nix <nix@esperi.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dave Martin [Wed, 12 Jun 2019 16:00:32 +0000 (17:00 +0100)]
arm64/sve: Fix missing SVE/FPSIMD endianness conversions
The in-memory representation of SVE and FPSIMD registers is
different: the FPSIMD V-registers are stored as single 128-bit
host-endian values, whereas SVE registers are stored in an
endianness-invariant byte order.
This means that the two representations differ when running on a
big-endian host. But we blindly copy data from one representation
to another when converting between the two, resulting in the
register contents being unintentionally byteswapped in certain
situations. Currently this can be triggered by the first SVE
instruction after a syscall, for example (though the potential
trigger points may vary in future).
So, fix the conversion functions fpsimd_to_sve(), sve_to_fpsimd()
and sve_sync_from_fpsimd_zeropad() to swab where appropriate.
There is no common swahl128() or swab128() that we could use here.
Maybe it would be worth making this generic, but for now add a
simple local hack.
Since the byte order differences are exposed in ABI, also clarify
the documentation.
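A sketch of such a local hack, assuming __uint128_t support and the kernel's swab64() (names are illustrative): swap the two 64-bit halves and byte-swap each.
	static inline __uint128_t arm64_cpu_to_le128(__uint128_t x)
	{
	#ifdef __AARCH64EB__
		u64 a = swab64(x);
		u64 b = swab64(x >> 64);

		return ((__uint128_t)a << 64) | b;
	#else
		return x;
	#endif
	}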
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Alan Hayward <alan.hayward@arm.com>
Cc: Julien Grall <julien.grall@arm.com>
Fixes: bc0ee4760364 ("arm64/sve: Core task context handling")
Fixes: 8cd969d28fd2 ("arm64/sve: Signal handling support")
Fixes: 43d4da2c45b2 ("arm64/sve: ptrace and ELF coredump support")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
[will: Fix typos in comments and docs spotted by Julien]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ming Lei [Tue, 11 Jun 2019 09:31:53 +0000 (17:31 +0800)]
blk-mq: remove WARN_ON(!q->elevator) from blk_mq_sched_free_requests
blk_mq_sched_free_requests() may be called on a failure path on which
q->elevator is not set up yet, so remove WARN_ON(!q->elevator) from
blk_mq_sched_free_requests() to avoid the false positive.
This function is actually safe to call in case of !q->elevator because
hctx->sched_tags is checked.
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Yi Zhang <yi.zhang@redhat.com>
Fixes: c3e2219216c9 ("block: free sched's request pool in blk_cleanup_queue")
Reported-by: syzbot+b9d0d56867048c7bcfde@syzkaller.appspotmail.com
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Herrmann [Wed, 12 Jun 2019 06:17:32 +0000 (08:17 +0200)]
blkio-controller.txt: Remove references to CFQ
CFQ is gone. No need anymore to document its "proportional weight time
based division of disk policy".
Signed-off-by: Andreas Herrmann <aherrmann@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Andreas Herrmann [Wed, 12 Jun 2019 06:50:09 +0000 (08:50 +0200)]
block/switching-sched.txt: Update to blk-mq schedulers
Remove references to CFQ and legacy block layer which are gone.
Update example with what's available under blk-mq.
Signed-off-by: Andreas Herrmann <aherrmann@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Chaitanya Kulkarni [Tue, 11 Jun 2019 22:10:17 +0000 (15:10 -0700)]
null_blk: remove duplicate check for report zone
This patch removes the check in null_blk_zoned for the report zone
command, where it checked dev->zoned before executing the report zone.
The null_zone_report() function is a block_device operation callback
which is initialized in the null_blk_main.c and gets called as a part
of blkdev for report zone IOCTL (BLKREPORTZONE).
blkdev_ioctl()
blkdev_report_zones_ioctl()
blkdev_report_zones()
blk_report_zones()
disk->fops->report_zones()
nullb_zone_report();
null_zone_report() will never get executed on a non-zoned block device: for
a non-zoned block device blk_queue_is_zoned() is always false, and it is
checked in blkdev_report_zones_ioctl() before the actual low-level driver
report zone callback is executed.
Here is the detailed scenario:
1. modprobe null_blk
   null_init
     null_alloc_dev
       dev->zoned = 0
     null_add_dev
       dev->zoned == 0
         so we don't set q->limits.zoned = BLK_ZONED_HM
2. blkzone report /dev/nullb0
   blkdev_ioctl()
     blkdev_report_zones_ioctl()
       blk_queue_is_zoned()
         q->limits.zoned == 0
         return false
       if (!blk_queue_is_zoned(q)) <--- true
         return -ENOTTY;
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Greg Kroah-Hartman [Wed, 12 Jun 2019 12:30:19 +0000 (14:30 +0200)]
blk-mq: no need to check return value of debugfs_create functions
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
When all of these checks are cleaned up, lots of the functions used in
the blk-mq-debugfs code can now return void, as no need to check the
return value of them either.
Overall, this ends up cleaning up the code and making it smaller, always
a nice win.
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Eric Biggers [Wed, 12 Jun 2019 21:58:43 +0000 (14:58 -0700)]
io_uring: fix memory leak of UNIX domain socket inode
Opening and closing an io_uring instance leaks a UNIX domain socket
inode. This is because the ->file of the io_uring instance's internal
UNIX domain socket is set to point to the io_uring file, but then
sock_release() sees the non-NULL ->file and assumes the inode reference
is held by the file so doesn't call iput(). That's not the case here,
since the reference is still meant to be held by the socket; the actual
inode of the io_uring file is different.
Fix this leak by NULL-ing out ->file before releasing the socket.
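A sketch of the teardown fix (the context field names are assumptions):
	if (ctx->ring_sock) {
		/* the inode reference belongs to the socket, not the file */
		ctx->ring_sock->file = NULL;
		sock_release(ctx->ring_sock);
	}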
Reported-by: syzbot+111cb28d9f583693aefa@syzkaller.appspotmail.com
Fixes: 2b188cc1bb85 ("Add io_uring IO interface")
Cc: <stable@vger.kernel.org> # v5.1+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Tue, 4 Jun 2019 07:23:40 +0000 (16:23 +0900)]
block: force select mq-deadline for zoned block devices
In most use cases of zoned block devices (aka SMR disks), the
mq-deadline scheduler is mandatory as it implements sequential write
command processing guarantees with zone write locking. So make sure that
this scheduler is always enabled if CONFIG_BLK_DEV_ZONED is selected.
Tested-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linus Torvalds [Thu, 13 Jun 2019 02:10:57 +0000 (16:10 -1000)]
Merge tag 'selinux-pr-20190612' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux
Pull selinux fixes from Paul Moore:
"Three patches for v5.2.
One fixes a problem where we weren't correctly logging raw SELinux
labels, the other two fix problems where we weren't properly checking
calls to kmemdup()"
* tag 'selinux-pr-20190612' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
selinux: fix a missing-check bug in selinux_sb_eat_lsm_opts()
selinux: fix a missing-check bug in selinux_add_mnt_opt( )
selinux: log raw contexts as untrusted strings
Gen Zhang [Wed, 12 Jun 2019 13:55:38 +0000 (21:55 +0800)]
selinux: fix a missing-check bug in selinux_sb_eat_lsm_opts()
In selinux_sb_eat_lsm_opts(), 'arg' is allocated by kmemdup_nul(), which
returns NULL when it fails, so 'arg' should be checked. 'mnt_opts' should
also be freed on error.
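A sketch of the added check (the error label is an assumption):
	arg = kmemdup_nul(arg, len, GFP_KERNEL);
	if (!arg) {
		rc = -ENOMEM;
		goto free_opt;
	}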
Signed-off-by: Gen Zhang <blackgod016574@gmail.com>
Fixes: 99dbbb593fe6 ("selinux: rewrite selinux_sb_eat_lsm_opts()")
Cc: <stable@vger.kernel.org>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Linus Torvalds [Wed, 12 Jun 2019 15:57:05 +0000 (05:57 -1000)]
Merge tag 'media/v5.2-2' of git://git./linux/kernel/git/mchehab/linux-media
Pull media fixes from Mauro Carvalho Chehab:
- a debug warning for satellite tuning at dvb core was producing too
much noise
- a regression at hfi_parser on Venus driver
* tag 'media/v5.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
media: venus: hfi_parser: fix a regression in parser
media: dvb: warning about dvb frequency limits produces too much noise
Gen Zhang [Wed, 12 Jun 2019 13:28:21 +0000 (21:28 +0800)]
selinux: fix a missing-check bug in selinux_add_mnt_opt( )
In selinux_add_mnt_opt(), 'val' is allocated by kmemdup_nul(), which returns
NULL when it fails, so 'val' should be checked. 'mnt_opts' should also be
freed on error.
Signed-off-by: Gen Zhang <blackgod016574@gmail.com>
Fixes: 757cbe597fe8 ("LSM: new method: ->sb_add_mnt_opt()")
Cc: <stable@vger.kernel.org>
[PM: fixed some indenting problems]
Signed-off-by: Paul Moore <paul@paul-moore.com>
Will Deacon [Tue, 11 Jun 2019 11:47:34 +0000 (12:47 +0100)]
arm64: tlbflush: Ensure start/end of address range are aligned to stride
Since commit
3d65b6bbc01e ("arm64: tlbi: Set MAX_TLBI_OPS to
PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to
perform more than PTRS_PER_PTE invalidation instructions in a single
call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather
code does not ensure that the end address of the range is rounded-up
to the stride when freeing intermediate page tables in pXX_free_tlb(),
which defeats our range checking.
Align the bounds passed into __flush_tlb_range().
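A sketch of that alignment at the top of __flush_tlb_range() (the exact placement is an assumption):
	start = round_down(start, stride);
	end = round_up(end, stride);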
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Linus Walleij [Thu, 30 May 2019 20:24:24 +0000 (22:24 +0200)]
i2c: pca-platform: Fix GPIO lookup code
The devm_gpiod_get() call will add "-gpios" to
any passed connection ID before looking it up.
I do not think the reset GPIO on this platform is named
"reset-gpios-gpios" but rather "reset-gpios" in the device
tree, so fix this up so that we get a proper reset GPIO
handle.
Also drop the inclusion of the legacy GPIO header.
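A sketch of the corrected lookup (gpiolib appends the "-gpios" suffix itself; the struct field and flag argument are assumptions):
	i2c->gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW);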
Fixes: 0e8ce93bdceb ("i2c: pca-platform: add devicetree awareness")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>