For locking semantics it really doesn't matter when we grab the
ticket. But for lockdep validation it does: the acquire ctx is a fake
lockdep lock, and other drivers might want to do a full multi-lock
dance in their fault handler, not just lock a single dma_resv.
Therefore we must init the acquire_ctx only after we've done all the
copy_*_user calls or anything else that might trigger a pagefault. For
msm this means we need to move it past submit_lookup_objects.
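To make the ordering concrete, here is a minimal sketch (not the
actual msm code; the function name and parameters are made up for
illustration) of grabbing the ww ticket only after everything that can
fault:

#include <linux/dma-resv.h>
#include <linux/errno.h>
#include <linux/uaccess.h>
#include <linux/ww_mutex.h>

static int example_submit(struct dma_resv *resv, void *buf,
			  const void __user *uptr, size_t len)
{
	struct ww_acquire_ctx ticket;
	int ret;

	/*
	 * A fault taken here can end up in another driver's fault
	 * handler, which may legitimately do its own multi-lock
	 * dma_resv dance; holding a ww acquire ctx across that is
	 * exactly what lockdep complains about.
	 */
	if (copy_from_user(buf, uptr, len))
		return -EFAULT;

	/* Only now set up the ww ticket and grab the reservation. */
	ww_acquire_init(&ticket, &reservation_ww_class);

	ret = dma_resv_lock(resv, &ticket);
	if (ret)
		goto out_fini;

	/* ... actual submit work under the reservation ... */

	dma_resv_unlock(resv);
out_fini:
	ww_acquire_fini(&ticket);
	return ret;
}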
Aside: Why is msm still using struct_mutex? It seems to be using
dma_resv_lock for general buffer state protection.
v2:
- Add comment to explain why the ww ticket setup is separate (Rob)
- Fix up error handling: we need to make sure we don't call
  ww_acquire_fini without a preceding ww_acquire_init.
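For reference, the shape of that error handling as a standalone sketch
(made-up names; the real change is in the diff below): since the
ticket is now only initialized partway through the ioctl, the common
unwind path has to check a flag before calling ww_acquire_fini.

#include <linux/dma-resv.h>
#include <linux/errno.h>
#include <linux/ww_mutex.h>

struct example_submit {
	struct dma_resv *resv;
	struct ww_acquire_ctx ticket;
};

static int example_ioctl(struct example_submit *submit, bool lookup_fails)
{
	bool has_ww_ticket = false;
	int ret = 0;

	/* Stand-in for submit_lookup_objects(): can fail before any ticket exists. */
	if (lookup_fails) {
		ret = -EFAULT;
		goto out;
	}

	/* copy_*_user is done, so the ww ticket can be set up now. */
	ww_acquire_init(&submit->ticket, &reservation_ww_class);
	has_ww_ticket = true;

	ret = dma_resv_lock(submit->resv, &submit->ticket);
	if (ret)
		goto out;

	/* ... queue the submit ... */

	dma_resv_unlock(submit->resv);
out:
	/* Only fini a ticket that was actually initialized. */
	if (has_ww_ticket)
		ww_acquire_fini(&submit->ticket);
	return ret;
}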
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-and-tested-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Link: https://patchwork.freedesktop.org/patch/msgid/20191120105607.3023-1-daniel.vetter@ffwll.ch
INIT_LIST_HEAD(&submit->node);
INIT_LIST_HEAD(&submit->bo_list);
- ww_acquire_init(&submit->ticket, &reservation_ww_class);
return submit;
}
list_del_init(&msm_obj->submit_entry);
drm_gem_object_put(&msm_obj->base);
}
-
- ww_acquire_fini(&submit->ticket);
}
int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
struct msm_ringbuffer *ring;
int out_fence_fd = -1;
struct pid *pid = get_pid(task_pid(current));
+ bool has_ww_ticket = false;
unsigned i;
int ret, submitid;
if (!gpu)
if (ret)
goto out;
+ /* copy_*_user while holding a ww ticket upsets lockdep */
+ ww_acquire_init(&submit->ticket, &reservation_ww_class);
+ has_ww_ticket = true;
ret = submit_lock_objects(submit);
if (ret)
goto out;
out:
submit_cleanup(submit);
+ if (has_ww_ticket)
+ ww_acquire_fini(&submit->ticket);
if (ret)
msm_gem_submit_free(submit);
out_unlock: