md/raid1: clean up request counts properly in close_sync()
author	NeilBrown <neilb@suse.de>
Thu, 4 Sep 2014 06:30:38 +0000 (16:30 +1000)
committer	NeilBrown <neilb@suse.de>
Mon, 22 Sep 2014 01:26:01 +0000 (11:26 +1000)
If there are outstanding writes when close_sync is called,
the change to ->start_next_window might cause them to
decrement the wrong counter when they complete.  Fix this
by merging the two counters into the one that will be decremented.

Having an incorrect value in a counter can cause raise_barrier()
to hang, so this is suitable for -stable.

Fixes: 79ef3a8aa1cb1523cc231c9a90a278333c21f761
Cc: stable@vger.kernel.org (v3.13+)
Signed-off-by: NeilBrown <neilb@suse.de>
drivers/md/raid1.c

index ad0468c42d23c3cccb739874874322531903b38b..a31c92bbcfc9ff7f573d19ac8fba9750e76bf2a7 100644
@@ -1545,8 +1545,13 @@ static void close_sync(struct r1conf *conf)
        mempool_destroy(conf->r1buf_pool);
        conf->r1buf_pool = NULL;
 
+       spin_lock_irq(&conf->resync_lock);
        conf->next_resync = 0;
        conf->start_next_window = MaxSector;
+       conf->current_window_requests +=
+               conf->next_window_requests;
+       conf->next_window_requests = 0;
+       spin_unlock_irq(&conf->resync_lock);
 }
 
 static int raid1_spare_active(struct mddev *mddev)
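
For readers unfamiliar with the window accounting, here is a minimal,
self-contained sketch of the scheme the commit message describes.  It is
NOT the kernel source: the struct, the function names (toy_wait_barrier,
toy_allow_barrier, toy_close_sync) and the single-threaded main() are
illustrative stand-ins, with locking, barriers and the real I/O path all
elided.  It only shows why merging next_window_requests into
current_window_requests keeps the books balanced once start_next_window
jumps to MaxSector.

	/* Toy model of the window counters; not drivers/md/raid1.c itself. */
	#include <assert.h>

	typedef unsigned long long sector_t;
	#define MaxSector ((sector_t)~0ULL)

	struct toy_conf {
		sector_t start_next_window;   /* first sector of the "next" window    */
		int current_window_requests;  /* writes counted in the current window */
		int next_window_requests;     /* writes counted in the next window    */
	};

	/* A starting write is counted against whichever window its sector falls
	 * in, and it remembers the start_next_window value it saw at that time. */
	static sector_t toy_wait_barrier(struct toy_conf *conf, sector_t sector)
	{
		if (sector >= conf->start_next_window)
			conf->next_window_requests++;
		else
			conf->current_window_requests++;
		return conf->start_next_window;
	}

	/* A completing write picks the counter to decrement by comparing the
	 * value it remembered with the value conf holds now. */
	static void toy_allow_barrier(struct toy_conf *conf,
				      sector_t remembered, sector_t sector)
	{
		if (remembered == conf->start_next_window &&
		    sector >= conf->start_next_window)
			conf->next_window_requests--;
		else
			conf->current_window_requests--;
	}

	/* close_sync() as patched: fold the "next" count into "current" at the
	 * same time start_next_window is moved, so outstanding writes that were
	 * counted in the next window are still accounted for correctly. */
	static void toy_close_sync(struct toy_conf *conf)
	{
		conf->start_next_window = MaxSector;
		conf->current_window_requests += conf->next_window_requests;
		conf->next_window_requests = 0;
	}

	int main(void)
	{
		struct toy_conf conf = { .start_next_window = 1024 };

		/* A write lands in the "next" window while resync is active. */
		sector_t remembered = toy_wait_barrier(&conf, 2048);

		/* Resync finishes before the write completes. */
		toy_close_sync(&conf);

		/* The write completes; with the merge both counters end at 0. */
		toy_allow_barrier(&conf, remembered, 2048);
		assert(conf.current_window_requests == 0);
		assert(conf.next_window_requests == 0);
		return 0;
	}

Without the merge (the two '+' counter lines in the hunk above), the same
sequence ends with current_window_requests == -1 and
next_window_requests == 1, which is the stuck state that leaves
raise_barrier() waiting forever.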