locking/ww_mutex: Fix runtime warning in the WW mutex selftest
author Guenter Roeck <linux@roeck-us.net>
Tue, 2 Oct 2018 21:48:49 +0000 (14:48 -0700)
committer Ingo Molnar <mingo@kernel.org>
Wed, 3 Oct 2018 06:56:31 +0000 (08:56 +0200)
If CONFIG_WW_MUTEX_SELFTEST=y is enabled, booting an image
in an arm64 virtual machine with 8 CPUs results in the
following traceback:

  DEBUG_LOCKS_WARN_ON(__owner_task(owner) != current)
  WARNING: CPU: 2 PID: 537 at kernel/locking/mutex.c:1033 __mutex_unlock_slowpath+0x1a8/0x2e0
  ...
  Call trace:
   __mutex_unlock_slowpath()
   ww_mutex_unlock()
   test_cycle_work()
   process_one_work()
   worker_thread()
   kthread()
   ret_from_fork()

If the request for b_mutex fails with -EDEADLK, a_mutex is
released and the error variable is overwritten with the return
value of the retried ww_mutex_lock() on a_mutex. If that retry
also fails, a_mutex is not locked, yet it is unconditionally
unlocked afterwards, which triggers the warning shown above.
Fix the problem by using two separate error variables.
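
For reference, the pre-patch sequence in test_cycle_work()
looks like this (reconstructed from the diff below):

  err = ww_mutex_lock(cycle->b_mutex, &ctx);
  if (err == -EDEADLK) {
          /* back off: drop a_mutex, take b_mutex, then retry a_mutex */
          ww_mutex_unlock(&cycle->a_mutex);
          ww_mutex_lock_slow(cycle->b_mutex, &ctx);
          err = ww_mutex_lock(&cycle->a_mutex, &ctx);
  }

  if (!err)
          ww_mutex_unlock(cycle->b_mutex);
  /* runs even if the retry above failed to lock a_mutex */
  ww_mutex_unlock(&cycle->a_mutex);

Because the retry's result is stored in the same 'err', a failed
retry leaves a_mutex unlocked while the final ww_mutex_unlock()
still runs, producing the DEBUG_LOCKS_WARN_ON above.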

With this change, the selftest still fails as follows:

  cyclic deadlock not resolved, ret[7/8] = -35

However, the traceback is gone (the remaining error, -35, is
-EDEADLK).

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: d1b42b800e5d0 ("locking/ww_mutex: Add kselftests for resolving ww_mutex cyclic deadlocks")
Link: http://lkml.kernel.org/r/1538516929-9734-1-git-send-email-linux@roeck-us.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/locking/test-ww_mutex.c

index 0be047dbd8971dcd4ee1282936d3b3056f6ebfea..65a3b7e55b9fcd2b289e09d194a179f1cc8accc5 100644 (file)
@@ -260,7 +260,7 @@ static void test_cycle_work(struct work_struct *work)
 {
        struct test_cycle *cycle = container_of(work, typeof(*cycle), work);
        struct ww_acquire_ctx ctx;
-       int err;
+       int err, erra = 0;
 
        ww_acquire_init(&ctx, &ww_class);
        ww_mutex_lock(&cycle->a_mutex, &ctx);
@@ -270,17 +270,19 @@ static void test_cycle_work(struct work_struct *work)
 
        err = ww_mutex_lock(cycle->b_mutex, &ctx);
        if (err == -EDEADLK) {
+               err = 0;
                ww_mutex_unlock(&cycle->a_mutex);
                ww_mutex_lock_slow(cycle->b_mutex, &ctx);
-               err = ww_mutex_lock(&cycle->a_mutex, &ctx);
+               erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
        }
 
        if (!err)
                ww_mutex_unlock(cycle->b_mutex);
-       ww_mutex_unlock(&cycle->a_mutex);
+       if (!erra)
+               ww_mutex_unlock(&cycle->a_mutex);
        ww_acquire_fini(&ctx);
 
-       cycle->result = err;
+       cycle->result = err ?: erra;
 }
 
 static int __test_cycle(unsigned int nthreads)
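
A note on the result assignment: 'err ?: erra' uses the GNU C
conditional with an omitted middle operand and is equivalent to

  cycle->result = err ? err : erra;

so the error from the initial lock attempt takes precedence over
the error from the retry.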