vhost_net: conditionally enable tx polling
author Jason Wang <jasowang@redhat.com>
Mon, 13 Nov 2017 03:45:34 +0000 (11:45 +0800)
committer David S. Miller <davem@davemloft.net>
Wed, 15 Nov 2017 04:50:58 +0000 (13:50 +0900)
We always poll tx for the socket. This is suboptimal since it slightly
increases waitqueue traversal time and, more importantly, prevents
vhost from benefiting from commit 9e641bdcfa4e ("net-tun: restructure
tun_do_read for better sleep/wakeup efficiency"): even though we stop
rx polling during handle_rx(), the tx poll entry is still left in the
waitqueue.

Pktgen from a remote host to a VM over mlx4 on two 2.00GHz Xeon E5-2650s
shows an 11.7% improvement in rx PPS (from 1.28Mpps to 1.44Mpps).

Cc: Wei Xu <wexu@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/vhost/net.c

index 68677d930e20197fdd651b0f23d29835f5102305..8d626d7c2e7e79db8d243278e805c96ad563bb3d 100644 (file)
@@ -471,6 +471,7 @@ static void handle_tx(struct vhost_net *net)
                goto out;
 
        vhost_disable_notify(&net->dev, vq);
+       vhost_net_disable_vq(net, vq);
 
        hdr_size = nvq->vhost_hlen;
        zcopy = nvq->ubufs;
@@ -556,6 +557,7 @@ static void handle_tx(struct vhost_net *net)
                                        % UIO_MAXIOV;
                        }
                        vhost_discard_vq_desc(vq, 1);
+                       vhost_net_enable_vq(net, vq);
                        break;
                }
                if (err != len)
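The two hunks above implement a simple pattern: take the tx poll entry off the socket waitqueue at the top of handle_tx(), and re-arm it only on the path where sending fails and the descriptor is discarded. A minimal userspace sketch of that pattern (the toy_* names and struct are hypothetical illustrations, not the real vhost API):

```c
#include <stdbool.h>

/* Toy model of conditional tx polling: keep the poll entry off the
 * waitqueue while handle_tx() is actively draining the queue, and
 * re-enable polling only when the "socket" pushes back and we must
 * wait for a wakeup. */
struct toy_vq {
	bool polling;	/* is our entry on the socket waitqueue? */
	int pending;	/* descriptors left to transmit */
	int budget;	/* sends the socket will accept before -EAGAIN */
};

static void toy_enable_poll(struct toy_vq *vq)	{ vq->polling = true; }
static void toy_disable_poll(struct toy_vq *vq)	{ vq->polling = false; }

/* Mirrors the patched handle_tx(): disable polling up front, send
 * until the socket pushes back, and re-enable only on that path. */
static int toy_handle_tx(struct toy_vq *vq)
{
	int sent = 0;

	toy_disable_poll(vq);
	while (vq->pending > 0) {
		if (vq->budget == 0) {
			/* sendmsg() would fail: discard the descriptor
			 * and re-arm polling so a wakeup restarts us */
			toy_enable_poll(vq);
			break;
		}
		vq->pending--;
		vq->budget--;
		sent++;
	}
	return sent;
}
```

When everything fits, the queue finishes with polling still disabled, so tx wakeups no longer traverse this entry; only socket backpressure re-arms it.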