udp: do rmem bulk free even if the rx sk queue is empty
author Paolo Abeni <pabeni@redhat.com>
Tue, 19 Sep 2017 10:11:43 +0000 (12:11 +0200)
committer David S. Miller <davem@davemloft.net>
Wed, 20 Sep 2017 21:28:52 +0000 (14:28 -0700)
Commit 6b229cf77d68 ("udp: add batching to udp_rmem_release()")
greatly reduced the cacheline contention between the BH and the
user-space (US) reader by batching the rmem updates in most scenarios.

Such batching is explicitly avoided when the US reader is faster
than the BH processing, i.e. when the reader queue is empty at
release time.

That was my fault: I initially suggested this behavior out of concern
about possible regressions with small sk_rcvbuf values. Tests showed
such concerns were misplaced, so this commit relaxes the condition
for rmem bulk updates, obtaining a small but measurable performance
gain in the scenario described above.
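
For illustration, below is a minimal user-space sketch of the
deficit-batching pattern this commit relaxes. struct sock_model and
rmem_release are hypothetical stand-ins for the kernel's struct
udp_sock and udp_rmem_release(); the plain int fields model what are
atomic counters in the kernel, and the threshold mirrors the
sk_rcvbuf >> 2 check from the diff.

/* Simplified, self-contained model of forward_deficit batching;
 * NOT the kernel implementation. */
#include <stdio.h>
#include <stdbool.h>

struct sock_model {
	int sk_rcvbuf;          /* receive buffer limit */
	int forward_deficit;    /* bytes freed but not yet returned */
	int rmem_alloc;         /* charged receive memory */
};

/* Release @size bytes of rmem. On a partial release, defer the
 * shared-counter update until at least a quarter of sk_rcvbuf has
 * accumulated, so BH and US reader touch the counter less often. */
static void rmem_release(struct sock_model *sk, int size, bool partial)
{
	if (partial) {
		sk->forward_deficit += size;
		size = sk->forward_deficit;
		/* With this commit, batch even when the reader has
		 * already drained the queue: no queue-empty check. */
		if (size < (sk->sk_rcvbuf >> 2))
			return;
	} else {
		size += sk->forward_deficit;
	}
	sk->forward_deficit = 0;
	sk->rmem_alloc -= size; /* atomic in the kernel */
	printf("released %d bytes in bulk\n", size);
}

int main(void)
{
	struct sock_model sk = { .sk_rcvbuf = 212992, .rmem_alloc = 212992 };

	/* Many small partial releases accumulate in forward_deficit
	 * and are flushed once the quarter-of-rcvbuf threshold hits. */
	for (int i = 0; i < 40; i++)
		rmem_release(&sk, 1500, true);
	return 0;
}

With the old condition, a fast reader keeping reader_queue empty would
force an immediate (contended) update on every release; dropping the
queue-empty check lets those updates batch regardless.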

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/ipv4/udp.c

index ef29df8648e4d388547269fe6f972e8ab473419e..784ced0b915018c8fd63d951d5d1b3f3c519fa09 100644 (file)
@@ -1212,8 +1212,7 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
        if (likely(partial)) {
                up->forward_deficit += size;
                size = up->forward_deficit;
-               if (size < (sk->sk_rcvbuf >> 2) &&
-                   !skb_queue_empty(&up->reader_queue))
+               if (size < (sk->sk_rcvbuf >> 2))
                        return;
        } else {
                size += up->forward_deficit;