sbitmap: fix improper use of smp_mb__before_atomic()
Author:     Andrea Parri <andrea.parri@amarulasolutions.com>
AuthorDate: Mon, 20 May 2019 17:23:57 +0000 (19:23 +0200)
Commit:     Jens Axboe <axboe@kernel.dk>
CommitDate: Thu, 23 May 2019 16:25:26 +0000 (10:25 -0600)
smp_mb__before_atomic() only provides ordering for the read-modify-write
atomic operations that follow it; in particular, it gives no guarantee
when followed by atomic_set(), which is a plain store rather than a
read-modify-write primitive.

Replace the barrier with a full smp_mb(), so that the preceding update
of the wake batch is ordered before the wait count resets.
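
For illustration, a minimal sketch of the three cases (data and flag are
made-up names, not sbitmap fields):

    static int data;
    static atomic_t flag = ATOMIC_INIT(0);

    WRITE_ONCE(data, 1);
    smp_mb__before_atomic();  /* ordered: atomic_inc() is a RMW, so the
                               * store to data is visible before it     */
    atomic_inc(&flag);

    WRITE_ONCE(data, 1);
    smp_mb__before_atomic();  /* NOT ordered: atomic_set() is a plain
                               * store, not a RMW, so this barrier gives
                               * no guarantee here                      */
    atomic_set(&flag, 1);

    WRITE_ONCE(data, 1);
    smp_mb();                 /* ordered: a full barrier orders the
                               * store to data before the plain store   */
    atomic_set(&flag, 1);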

Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
Cc: stable@vger.kernel.org
Reported-by: "Paul E. McKenney" <paulmck@linux.ibm.com>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: linux-block@vger.kernel.org
Cc: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 155fe38756ecfda251f26fa8616a325dddd8d455..4a7fc4915dfc6206c350372de067a54431391246 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
                 * to ensure that the batch size is updated before the wait
                 * counts.
                 */
-               smp_mb__before_atomic();
+               smp_mb();
                for (i = 0; i < SBQ_WAIT_QUEUES; i++)
                        atomic_set(&sbq->ws[i].wait_cnt, 1);
        }
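
The intended pairing, sketched from the surrounding code (the wake_batch
store sits just above this hunk and is paraphrased here, not quoted from
the diff):

    /* Updater side (sbitmap_queue_update_wake_batch()), after this patch: */
    sbq->wake_batch = wake_batch;                /* publish new batch size */
    smp_mb();                                    /* full barrier           */
    for (i = 0; i < SBQ_WAIT_QUEUES; i++)
            atomic_set(&sbq->ws[i].wait_cnt, 1); /* reset wait counts      */

    /*
     * Waker side (sbitmap_queue_wake_up()): the matching barrier there
     * ensures that a waker observing a reset wait count also observes
     * the updated batch size.
     */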