drm/i915: Tighten atomicity of i915_active_acquire vs i915_active_release
author Chris Wilson <chris@chris-wilson.co.uk>
Sun, 26 Jan 2020 10:23:43 +0000 (10:23 +0000)
committer Jani Nikula <jani.nikula@intel.com>
Wed, 12 Feb 2020 11:24:44 +0000 (13:24 +0200)
As we use a mutex to serialise the first acquire (as it may be a lengthy
operation), but only an atomic decrement for the release, we have to
be careful in case a second thread races and completes both
acquire/release as the first finishes its acquire.

Thread A                        Thread B
i915_active_acquire             i915_active_acquire
  atomic_read() == 0              atomic_read() == 0
  mutex_lock()                    mutex_lock()
                                  atomic_read() == 0
                                    ref->active();
                                  atomic_inc()
                                  mutex_unlock()
  atomic_read() == 1
                                i915_active_release
                                  atomic_dec_and_test() -> 0
                                    ref->retire()
  atomic_inc() -> 1
  mutex_unlock()

So thread A has taken a reference (incremented ref->count) but, since
the ref still appeared active at the time, it never initialised it. By
switching the check inside the mutex to an atomic increment only if
already active, we close the race.
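
For reference, the i915_active_acquire_if_busy() helper relied on by
the fix amounts to an increment-unless-zero on ref->count; a simplified
sketch (the in-tree version also carries a debug assertion):

    static inline bool i915_active_acquire_if_busy(struct i915_active *ref)
    {
            /* Take a reference only if the ref is already active. */
            return atomic_add_unless(&ref->count, 1, 0);
    }

Because the check and the increment are one atomic operation, thread A
either takes its reference while the count is still non-zero (so thread
B's release drops the count 2 -> 1 and never calls ref->retire()), or
the increment fails and thread A runs the initialisation path itself.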

Fixes: c9ad602feabe ("drm/i915: Split i915_active.mutex into an irq-safe spinlock for the rbtree")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200126102346.1877661-3-chris@chris-wilson.co.uk
(cherry picked from commit ac0e331a628b5ded087eab09fad2ffb082ac61ba)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
drivers/gpu/drm/i915/i915_active.c

index f3da5c06f331a5cf7df35ac3c59a7a587ccbc81c..4fcd567ff818d49d3b87be0b823445eaf6cbabfe 100644
@@ -416,13 +416,15 @@ int i915_active_acquire(struct i915_active *ref)
        if (err)
                return err;
 
-       if (!atomic_read(&ref->count) && ref->active)
-               err = ref->active(ref);
-       if (!err) {
-               spin_lock_irq(&ref->tree_lock); /* vs __active_retire() */
-               debug_active_activate(ref);
-               atomic_inc(&ref->count);
-               spin_unlock_irq(&ref->tree_lock);
+       if (likely(!i915_active_acquire_if_busy(ref))) {
+               if (ref->active)
+                       err = ref->active(ref);
+               if (!err) {
+                       spin_lock_irq(&ref->tree_lock); /* __active_retire() */
+                       debug_active_activate(ref);
+                       atomic_inc(&ref->count);
+                       spin_unlock_irq(&ref->tree_lock);
+               }
        }
 
        mutex_unlock(&ref->mutex);
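
The same pattern can be modelled outside the kernel. A hypothetical
userspace C11 sketch of the increment-unless-zero check (names invented
for illustration, not kernel code):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int count;

    /* Succeed, and take a reference, only while count is non-zero. */
    static bool acquire_if_busy(void)
    {
            int old = atomic_load(&count);

            do {
                    if (old == 0)
                            return false;
            } while (!atomic_compare_exchange_weak(&count, &old, old + 1));

            return true;
    }

With this in place of the bare atomic_read(), the window shown in the
diagram closes: a racing release either observes the extra reference
and skips ref->retire(), or completes first, in which case
acquire_if_busy() fails and the slow path re-runs ref->active().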