
Displaying 15 results from an estimated 15 matches for "1d3e45f".

2016 Jun 07
1
[PATCH V3 2/2] vhost_net: conditionally enable tx polling
...vement on guest rx pps. > > Before: ~1350000 > After: ~1460000 > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vhost/net.c | 3 +++ > 1 file changed, 3 insertions(+) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index 1d3e45f..e75ffcc 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -378,6 +378,7 @@ static void handle_tx(struct vhost_net *net) > goto out; > > vhost_disable_notify(&net->dev, vq); > + vhost_net_disable_vq(net, vq); > > hdr_size = nvq->vhos...
2016 Jun 07
1
[PATCH V3 2/2] vhost_net: conditionally enable tx polling
...vement on guest rx pps. > > Before: ~1350000 > After: ~1460000 > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vhost/net.c | 3 +++ > 1 file changed, 3 insertions(+) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index 1d3e45f..e75ffcc 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -378,6 +378,7 @@ static void handle_tx(struct vhost_net *net) > goto out; > > vhost_disable_notify(&net->dev, vq); > + vhost_net_disable_vq(net, vq); > > hdr_size = nvq->vhos...
2016 Jun 01
7
[PATCH V3 0/2] vhost_net polling optimization
Hi: This series tries to optimize vhost_net polling at two points: - Stop rx polling for reducing the unnecessary wakeups during handle_rx(). - Conditionally enable tx polling for reducing the unnecessary traversing and spinlock touching. Test shows about 17% improvement on rx pps. Please review. Changes from V2: - Don't enable rx vq if we meet an err or rx vq is empty Changes from V1:
2016 Jun 01
7
[PATCH V3 0/2] vhost_net polling optimization
Hi: This series tries to optimize vhost_net polling at two points: - Stop rx polling for reducing the unnecessary wakeups during handle_rx(). - Conditionally enable tx polling for reducing the unnecessary traversing and spinlock touching. Test shows about 17% improvement on rx pps. Please review. Changes from V2: - Don't enable rx vq if we meet an err or rx vq is empty Changes from V1:
2016 Jun 01
0
[PATCH V3 2/2] vhost_net: conditionally enable tx polling
...only when -EAGAIN were met. Test shows about 8% improvement on guest rx pps. Before: ~1350000 After: ~1460000 Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 1d3e45f..e75ffcc 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -378,6 +378,7 @@ static void handle_tx(struct vhost_net *net) goto out; vhost_disable_notify(&net->dev, vq); + vhost_net_disable_vq(net, vq); hdr_size = nvq->vhost_hlen; zcopy = nvq->ubufs; @@ -459,6 +4...
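The hunk above only shows the unconditional vhost_net_disable_vq() call at the top of handle_tx(); the conditional re-enable on -EAGAIN sits in the truncated part. The same pattern can be modelled in plain userspace C with epoll, which may make the intent easier to see: keep the descriptor out of the interest set while transmitting, and only re-arm polling when a non-blocking send hits EAGAIN. This is an illustrative analogy, not the driver code; flush_tx() and the "polled" flag are made-up names.

/* Userspace analogy (not driver code) of conditional tx polling:
 * the socket is kept out of the epoll interest set while we are
 * transmitting, and only added back when a non-blocking send()
 * reports EAGAIN, mirroring vhost_net_disable_vq() up front and
 * re-enabling polling only on -EAGAIN.
 */
#include <errno.h>
#include <stdbool.h>
#include <sys/epoll.h>
#include <sys/socket.h>

static bool polled;	/* is the fd currently in the epoll set? */

static int flush_tx(int epfd, int fd, const char *buf, size_t len)
{
	while (len) {
		ssize_t n = send(fd, buf, len, MSG_DONTWAIT);

		if (n < 0) {
			int err = errno;

			if (err == EAGAIN && !polled) {
				/* Socket full: re-arm polling so EPOLLOUT
				 * wakes us up when there is room again. */
				struct epoll_event ev = {
					.events = EPOLLOUT,
					.data.fd = fd,
				};

				epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
				polled = true;
			}
			return -err;
		}
		buf += n;
		len -= n;
	}

	if (polled) {
		/* Backlog drained: stop polling to avoid useless wakeups. */
		epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
		polled = false;
	}
	return 0;
}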
2016 Jun 01
0
[PATCH V3 1/2] vhost_net: stop polling socket during rx processing
...240000 pkt/s after: ~1350000 pkt/s Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 64 +++++++++++++++++++++++++++-------------------------- 1 file changed, 33 insertions(+), 31 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f744eeb..1d3e45f 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -301,6 +301,32 @@ static bool vhost_can_busy_poll(struct vhost_dev *dev, !vhost_has_work(dev); } +static void vhost_net_disable_vq(struct vhost_net *n, + struct vhost_virtqueue *vq) +{ + struct vhost_net_virtqueue *nvq =...
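Patch 1/2 adds the vhost_net_disable_vq() helper visible in the hunk above and stops polling the socket while handle_rx() is already draining it, so arriving packets do not keep re-queuing the handler. A rough userspace analogy with epoll, again with a made-up name (drain_rx()) and not the actual driver code:

/* Userspace analogy of "stop polling socket during rx processing":
 * once woken up we already know there is data, so the fd is dropped
 * from the epoll set for the duration of the drain loop and re-added
 * when the loop exits.
 */
#include <sys/epoll.h>
#include <sys/socket.h>

static void drain_rx(int epfd, int fd)
{
	char buf[2048];

	/* We were woken up, so stop polling while we work. */
	epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);

	for (;;) {
		ssize_t n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);

		if (n <= 0)
			break;	/* queue empty (EAGAIN) or error: stop */
		/* ...forward the packet to the guest ring here... */
	}

	/* Re-arm polling so the next packet wakes us up again. */
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };

	epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}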
2016 Jun 30
0
[PATCH net-next V3 6/6] tun: switch to use skb array for tx
...eanup(void) { misc_deregister(&tun_miscdev); rtnl_link_unregister(&tun_link_ops); + unregister_netdevice_notifier(&tun_notifier_block); } /* Get an underlying socket object from tun file. Returns error unless file is diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 1d3e45f..e032ca3 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -481,10 +481,14 @@ out: static int peek_head_len(struct sock *sk) { + struct socket *sock = sk->sk_socket; struct sk_buff *head; int len = 0; unsigned long flags; + if (sock->ops->peek_len) + return sock->...
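The hunk is cut off mid-statement; the shape of the new peek_head_len() fast path is roughly as follows. This is a sketch reconstructed from the visible context (a fragment of drivers/vhost/net.c, not standalone code): if the underlying socket implements the peek_len op, which the tun socket gains once it is backed by an skb array, ask it for the next packet length directly instead of taking the sk_receive_queue lock and peeking the head skb.

/* Sketch of peek_head_len() after the change, reconstructed from the
 * truncated hunk above; details of the pre-existing slow path may
 * differ from the actual file.
 */
static int peek_head_len(struct sock *sk)
{
	struct socket *sock = sk->sk_socket;
	struct sk_buff *head;
	int len = 0;
	unsigned long flags;

	/* Fast path: ask the backend (tun's skb array) directly. */
	if (sock->ops->peek_len)
		return sock->ops->peek_len(sock);

	/* Slow path: peek the head skb under the receive-queue lock. */
	spin_lock_irqsave(&sk->sk_receive_queue.lock, flags);
	head = skb_peek(&sk->sk_receive_queue);
	if (head) {
		len = head->len;
		if (skb_vlan_tag_present(head))
			len += VLAN_HLEN;
	}
	spin_unlock_irqrestore(&sk->sk_receive_queue.lock, flags);

	return len;
}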
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch to use skb array in tun. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is to keep the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch to use skb array in tun. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is to keep the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch to use skb array in tun. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is to keep the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all: This series tries to switch to use skb array in tun. This is used to eliminate the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is to keep the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
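The core of the conversion these cover letters describe, replacing sk_receive_queue with a fixed-size tx skb array so producer and consumer no longer contend on one spinlock, can be sketched with the skb_array helpers from <linux/skb_array.h>. This is an illustrative sketch only, not code from the series; the txq structure and function names are made up.

/* Illustrative producer/consumer split using skb_array (a ptr_ring of
 * sk_buff pointers).  Not code from the series; "txq" and the helpers
 * below are made-up names.
 */
#include <linux/skb_array.h>

struct txq {
	struct skb_array ring;
};

static int txq_init(struct txq *q, unsigned int len)
{
	/* Sized from tx_queue_len, matching the behaviour the cover
	 * letter says the series preserves. */
	return skb_array_init(&q->ring, len, GFP_KERNEL);
}

/* Producer side: what used to append to sk_receive_queue. */
static int txq_xmit(struct txq *q, struct sk_buff *skb)
{
	if (skb_array_produce(&q->ring, skb))
		return -ENOBUFS;	/* ring full; caller drops or requeues */
	return 0;
}

/* Consumer side: what used to dequeue from sk_receive_queue. */
static struct sk_buff *txq_recv(struct txq *q)
{
	return skb_array_consume(&q->ring);
}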
2016 Jun 23
3
[PATCH V2 0/3] basic device IOTLB support for vhost_net
This patch tries to implement a device IOTLB for vhost. This could be used for co-operation with a userspace IOMMU implementation (qemu) for a secure DMA environment (DMAR) in the guest. The idea is simple. When vhost meets an IOTLB miss, it will request the assistance of userspace to do the translation. This is done through: - when there's an IOTLB miss, it will notify userspace through
2016 Jun 23
3
[PATCH V2 0/3] basic device IOTLB support for vhost_net
This patch tries to implement a device IOTLB for vhost. This could be used for co-operation with a userspace IOMMU implementation (qemu) for a secure DMA environment (DMAR) in the guest. The idea is simple. When vhost meets an IOTLB miss, it will request the assistance of userspace to do the translation. This is done through: - when there's an IOTLB miss, it will notify userspace through
2016 Jun 22
4
[PATCH 0/3] basic device IOTLB support
This patch tries to implement a device IOTLB for vhost. This could be used for co-operation with a userspace IOMMU implementation (qemu) for a secure DMA environment (DMAR) in the guest. The idea is simple. When vhost meets an IOTLB miss, it will request the assistance of userspace to do the translation. This is done through: - when there's an IOTLB miss, it will notify userspace through
2016 Jun 22
4
[PATCH 0/3] basic device IOTLB support
This patch tries to implement a device IOTLB for vhost. This could be used for co-operation with a userspace IOMMU implementation (qemu) for a secure DMA environment (DMAR) in the guest. The idea is simple. When vhost meets an IOTLB miss, it will request the assistance of userspace to do the translation. This is done through: - when there's an IOTLB miss, it will notify userspace through
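The excerpt breaks off just as the cover letter starts enumerating the miss-handling protocol. For orientation, the message shape that was eventually merged for this exchange (struct vhost_iotlb_msg in the mainline uapi headers) looks roughly like the sketch below; the struct name and field layout here are an illustration based on the merged code and may differ from this V2 posting.

/* Hedged illustration of the vhost device IOTLB exchange described
 * above, modelled on the uapi that was eventually merged; layout is
 * illustrative, not necessarily what this V2 series proposed.
 *
 * Flow: on an IOTLB miss vhost sends a MISS message to userspace
 * (qemu), which looks up its IOMMU state and replies with an UPDATE
 * carrying the iova -> uaddr translation; guest IOMMU invalidations
 * are forwarded to vhost as INVALIDATE messages.
 */
#include <linux/types.h>

struct iotlb_msg_sketch {
	__u64 iova;	/* guest I/O virtual address that missed */
	__u64 size;	/* length of the mapping */
	__u64 uaddr;	/* userspace virtual address it translates to */
	__u8  perm;	/* access permission: RO / WO / RW */
	__u8  type;	/* MISS / UPDATE / INVALIDATE */
};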