drm/i915: Limit the backpressure for i915_request allocation
author Chris Wilson <chris@chris-wilson.co.uk>
Fri, 14 Sep 2018 08:00:15 +0000 (09:00 +0100)
committer Chris Wilson <chris@chris-wilson.co.uk>
Fri, 14 Sep 2018 10:54:59 +0000 (11:54 +0100)
If we try and fail to allocate an i915_request, we apply some
backpressure on the clients to throttle the memory allocations coming
from i915.ko. Currently, we wait until the GPU is completely idle, but
this is far too heavy and leads to situations where the only escape is
to declare a client hung and reset the GPU. The intent is only to
ratelimit the allocation requests and to allow ourselves to recycle
requests and memory from any long queues built up by a client hog.

Although system memory is inherently a global resource, we don't want
to overly penalize an unlucky client by making it pay the price of
reaping a hog. To reduce the influence of one client on another,
instead of waiting for the entire GPU to idle we can impose a barrier
on the local client alone. (One end goal for request allocation is
scalability to many concurrent allocators; simultaneous execbufs.)

To prevent ourselves from getting caught out by long-running requests
(requests that may never finish without intervention from the very
userspace we are blocking), we need to impose a finite timeout, ideally
shorter than hangcheck. A long time ago, Paul McKenney suggested that
RCU users should ratelimit themselves with judicious use of
cond_synchronize_rcu(). This gives us the opportunity to reduce our
indefinite wait for the GPU to idle to a wait for the RCU grace period
of the previous allocation along this timeline to expire, satisfying
both the local and finite properties we desire for our ratelimiting.
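
As a minimal, generic sketch of that pattern (illustrative only, not
the i915 code itself; struct obj, obj_cache and obj_alloc() are
hypothetical names), each allocation records the current grace-period
cookie with get_state_synchronize_rcu(), and a later allocation that
fails under mempressure throttles with cond_synchronize_rcu(),
blocking only until the previous allocation's grace period has elapsed:

#include <linux/init.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct obj {
	unsigned long rcustate;	/* RCU cookie captured at allocation */
};

static struct kmem_cache *obj_cache;

static int __init obj_cache_init(void)
{
	/* Objects are freed back to the slab under RCU, as i915 requests are. */
	obj_cache = kmem_cache_create("obj", sizeof(struct obj), 0,
				      SLAB_TYPESAFE_BY_RCU, NULL);
	return obj_cache ? 0 : -ENOMEM;
}

static struct obj *obj_alloc(const struct obj *prev)
{
	struct obj *obj;

	/* Fast path: prefer to fail rather than stall in direct reclaim. */
	obj = kmem_cache_alloc(obj_cache,
			       GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
	if (unlikely(!obj)) {
		/*
		 * Ratelimit ourselves: if the grace period begun by the
		 * previous allocation has already expired, this returns
		 * immediately; otherwise we block for its remainder,
		 * giving RCU a chance to return our slabs to the system.
		 */
		if (prev)
			cond_synchronize_rcu(prev->rcustate);

		obj = kmem_cache_alloc(obj_cache, GFP_KERNEL);
		if (!obj)
			return NULL;
	}

	/* Record the epoch the next allocation will throttle against. */
	obj->rcustate = get_state_synchronize_rcu();
	return obj;
}

The wait is bounded by a single RCU grace period and keyed only to the
caller's own previous allocation, giving exactly the finite and local
properties described above.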

There are still a few global steps (reclaim not least amongst them!)
when we exhaust the immediate slab pool, but at least now the wait is
itself decoupled from struct_mutex for our glorious highly parallel
future!

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106680
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914080017.30308-1-chris@chris-wilson.co.uk
drivers/gpu/drm/i915/i915_request.c
drivers/gpu/drm/i915/i915_request.h

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 09ed48833b5496a9c5b6501ed128cf4bfd504167..a492385b2089252d1a1b18daf5af68a9587b5760 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -732,13 +732,13 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
        rq = kmem_cache_alloc(i915->requests,
                              GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
        if (unlikely(!rq)) {
+               i915_retire_requests(i915);
+
                /* Ratelimit ourselves to prevent oom from malicious clients */
-               ret = i915_gem_wait_for_idle(i915,
-                                            I915_WAIT_LOCKED |
-                                            I915_WAIT_INTERRUPTIBLE,
-                                            MAX_SCHEDULE_TIMEOUT);
-               if (ret)
-                       goto err_unreserve;
+               rq = i915_gem_active_raw(&ce->ring->timeline->last_request,
+                                        &i915->drm.struct_mutex);
+               if (rq)
+                       cond_synchronize_rcu(rq->rcustate);
 
                /*
                 * We've forced the client to stall and catch up with whatever
@@ -758,6 +758,8 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
                }
        }
 
+       rq->rcustate = get_state_synchronize_rcu();
+
        INIT_LIST_HEAD(&rq->active_list);
        rq->i915 = i915;
        rq->engine = engine;
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 9898301ab7ef5e3dc096e13cc0d24ddcef35cb41..7fa94b0249683b476142fc1213f86e0d6abac013 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -100,6 +100,14 @@ struct i915_request {
        struct i915_timeline *timeline;
        struct intel_signal_node signaling;
 
+       /*
+        * The rcu epoch of when this request was allocated. Used to judiciously
+        * apply backpressure on future allocations to ensure that under
+        * mempressure there are sufficient RCU ticks for us to reclaim our
+        * RCU-protected slabs.
+        */
+       unsigned long rcustate;
+
        /*
         * Fences for the various phases in the request's lifetime.
         *