Otherwise we might run into a use-after-free during the bulk move.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
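---
For reviewers, here is a stand-alone sketch of the lifetime bug this patch
closes. It is not the amdgpu/TTM code: obj_cache, cache_rebuild, bulk_move
and friends are hypothetical names standing in for vm->lru_bulk_move (which
caches BO positions on the LRU) and vm->bulk_moveable (the flag cleared in
the hunk below). The point is that a bulk move replays cached pointers
without revalidating them, so whoever frees an object must invalidate the
cache first.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define CACHE_SIZE 4

struct obj {
	int id;
};

/* Hypothetical stand-in for vm->lru_bulk_move + vm->bulk_moveable. */
struct obj_cache {
	struct obj *objs[CACHE_SIZE];	/* cached raw pointers */
	int count;
	bool valid;			/* plays the role of bulk_moveable */
};

/* Slow path: rebuild the cache by walking the live objects. */
static void cache_rebuild(struct obj_cache *cache, struct obj **live, int n)
{
	cache->count = 0;
	for (int i = 0; i < n && cache->count < CACHE_SIZE; i++)
		if (live[i])
			cache->objs[cache->count++] = live[i];
	cache->valid = true;
}

/* Fast path: replay the cached pointers without revalidating them. */
static void bulk_move(struct obj_cache *cache, struct obj **live, int n)
{
	if (!cache->valid)
		cache_rebuild(cache, live, n);

	/* A stale entry here is exactly the use-after-free. */
	for (int i = 0; i < cache->count; i++)
		printf("moving obj %d\n", cache->objs[i]->id);
}

/* The fix: invalidate the cache before freeing, as the patch does. */
static void obj_remove(struct obj_cache *cache, struct obj **live, int idx)
{
	cache->valid = false;	/* mirrors "vm->bulk_moveable = false" */
	free(live[idx]);
	live[idx] = NULL;
}

int main(void)
{
	struct obj *live[2] = { malloc(sizeof(*live[0])),
				malloc(sizeof(*live[1])) };
	struct obj_cache cache = { .valid = false };

	if (!live[0] || !live[1])
		return 1;
	live[0]->id = 0;
	live[1]->id = 1;

	bulk_move(&cache, live, 2);	/* builds and replays the cache */
	obj_remove(&cache, live, 0);	/* invalidates before freeing */
	bulk_move(&cache, live, 2);	/* safe: rebuilds, no stale replay */

	free(live[1]);
	return 0;
}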
 void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
		       struct amdgpu_bo_va *bo_va)
 {
 	struct amdgpu_bo_va_mapping *mapping, *next;
+	struct amdgpu_bo *bo = bo_va->base.bo;
 	struct amdgpu_vm *vm = bo_va->base.vm;
 
+	if (bo && bo->tbo.resv == vm->root.base.bo->tbo.resv)
+		vm->bulk_moveable = false;
+
 	list_del(&bo_va->base.bo_list);
 	spin_lock(&vm->invalidated_lock);
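
A note on the condition: everything the VM tracks for the bulk move shares
the reservation object of the root page directory, so the resv comparison
matches exactly the BOs that can appear in vm->lru_bulk_move, and removing
an unrelated BO (one with its own resv) does not throw the cached state
away. Once bulk_moveable is false, the next bulk-move pass rebuilds
vm->lru_bulk_move from scratch instead of replaying cached positions that
could still reference the freed BO.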