search for: vhost_sock_zcopy

Displaying 20 results from an estimated 60 matches for "vhost_sock_zcopy".

2018 Nov 23
5
[PATCH net-next 0/3] basic in order support for vhost_net
Hi: This series implements basic in order feature support for vhost_net. This feature requires both driver and device to use descriptors in order, which can simplify the implementation and optimization on both sides. The series also implements a simple optimization that avoids reading the available ring. Tests show a 10% performance improvement. More optimizations could be done on top. Jason Wang (3):
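A hedged illustration of why in order helps (the names below are simplified stand-ins, not the vhost data structures): once VIRTIO_F_IN_ORDER is negotiated, descriptors are consumed in ring order, so the consuming side can derive the next head from its own running counter instead of reading avail->ring[]:

    /* Illustration only: with in order, the next head is the slot right
     * after the last one consumed, so no lookup in guest-written avail
     * ring memory (and no associated cache miss) is needed.
     */
    static inline unsigned int in_order_next_head(unsigned int next_avail_idx,
                                                  unsigned int num)
    {
            /* vring sizes are powers of two, so masking wraps the index */
            return next_avail_idx & (num - 1);
    }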
2018 Nov 23
0
[PATCH net-next 2/3] vhost_net: support in order feature
This makes vhost_net support the in order feature. This is as simple as using the datacopy path when it was negotiated. An alternative is not to advertise in order when zerocopy is enabled, which tends to be suboptimal considering zerocopy may suffer from e.g. HOL issues. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 6 ++++-- 1 file changed, 4 insertions(+), 2
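A minimal sketch of the shape such a gate could take (tx_can_zcopy() is a hypothetical wrapper, not the actual two-line diff; vhost_sock_zcopy() and vhost_has_feature() are existing helpers in drivers/vhost):

    /* Sketch: only try the zerocopy TX path when in order was NOT
     * negotiated; otherwise fall back to plain datacopy.
     */
    static bool tx_can_zcopy(struct vhost_virtqueue *vq, struct socket *sock)
    {
            return vhost_sock_zcopy(sock) &&
                   !vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
    }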
2018 Nov 23
1
[PATCH net-next 2/3] vhost_net: support in order feature
On Fri, Nov 23, 2018 at 11:00:15AM +0800, Jason Wang wrote: > This makes vhost_net support the in order feature. This is as simple as > using the datacopy path when it was negotiated. An alternative is not to > advertise in order when zerocopy is enabled, which tends to be > suboptimal considering zerocopy may suffer from e.g. HOL issues. Well IIRC vhost_zerocopy_signal_used is used to actually
2011 Jul 17
3
[PATCHv9] vhost: experimental tx zero-copy support
From: Shirley Ma <mashirle at us.ibm.com> This adds experimental zero copy support in vhost-net, disabled by default. To enable, set the zerocopytx module option to 1. This patch maintains the outstanding userspace buffers in the sequence they are delivered to vhost. The outstanding userspace buffers will be marked as done once the lower device's DMA on them has finished. This is monitored
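A sketch of the ordering bookkeeping (field names as in later kernels, where they live in struct vhost_net_virtqueue; buf_is_done() is a hypothetical stand-in for the real completion check):

    /* Two indices delimit the in-flight zerocopy buffers: upend_idx
     * advances as buffers are handed to the lower device, and done_idx
     * chases it as DMA completions arrive. Used-ring signalling to the
     * guest only happens up to done_idx, preserving delivery order.
     */
    nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV;    /* on submit */

    while (nvq->done_idx != nvq->upend_idx &&
           buf_is_done(nvq, nvq->done_idx)) {              /* hypothetical */
            /* ... add heads[done_idx] to the used ring ... */
            nvq->done_idx = (nvq->done_idx + 1) % UIO_MAXIOV;
    }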
2011 Jul 18
1
[PATCHv10] vhost: vhost TX zero-copy support
This adds experimental zero copy support in vhost-net, disabled by default. To enable, set the zerocopytx module option to 1. This patch maintains the outstanding userspace buffers in the sequence they are delivered to vhost. The outstanding userspace buffers will be marked as done once the lower device's DMA on them has finished. This is monitored through the last reference of the kfree_skb callback. Two
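The completion hook hangs off the skb; a sketch of the destructor idea (the real callback is vhost_zerocopy_callback() in drivers/vhost/net.c, though its exact signature has varied across kernel versions):

    /* Runs when the last reference to the skb is dropped (kfree_skb),
     * i.e. once the lower device's DMA is done with the user pages.
     */
    static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
    {
            struct vhost_net_ubuf_ref *ubufs = ubuf->ctx;

            /* mark the buffer slot at ubuf->desc done (or failed), so the
             * TX loop can signal it used to the guest in order */
            vhost_net_ubuf_put(ubufs);    /* helper name from later kernels */
    }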
2018 Sep 06
2
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlying sockets
On Thu, Sep 06, 2018 at 12:05:26PM +0800, Jason Wang wrote: > This patch implements XDP batching for vhost_net. The idea is to first > try to do the userspace copy and build the XDP buff directly in vhost. > Instead of submitting the packet immediately, vhost_net will batch them > in an array and submit every 64 (VHOST_NET_BATCH) packets to the > underlying sockets through msg_control of
2011 Jul 18
1
[PATCHv11] vhost: vhost TX zero-copy support
From: Shirley Ma <mashirle at us.ibm.com> This adds experimental zero copy support in vhost-net, disabled by default. To enable, set the experimental_zcopytx module option to 1. This patch maintains the outstanding userspace buffers in the sequence they are delivered to vhost. The outstanding userspace buffers will be marked as done once the lower device's DMA on them has finished. This is
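For reference, the gate the search term above points at: the module option plus the per-socket check, essentially as they appear (modulo version drift) in drivers/vhost/net.c:

    static int experimental_zcopytx;
    module_param(experimental_zcopytx, int, 0444);
    MODULE_PARM_DESC(experimental_zcopytx, "Enable Experimental Zero Copy TX");

    /* Zerocopy TX is attempted only when the module option is set and
     * the backend socket advertises SOCK_ZEROCOPY.
     */
    static bool vhost_sock_zcopy(struct socket *sock)
    {
            return unlikely(experimental_zcopytx) &&
                   sock_flag(sock->sk, SOCK_ZEROCOPY);
    }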
2018 Sep 06
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlying sockets
This patch implements XDP batching for vhost_net. The idea is to first try to do the userspace copy and build the XDP buff directly in vhost. Instead of submitting the packet immediately, vhost_net will batch them in an array and submit every 64 (VHOST_NET_BATCH) packets to the underlying sockets through msg_control of sendmsg(). When XDP is enabled on the TUN/TAP, TUN/TAP can process XDP inside a loop
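A hedged sketch of the batching shape (tun_msg_ctl and TUN_MSG_PTR are the names from the merged series; the surrounding details are simplified):

    #define VHOST_NET_BATCH 64

    /* Flush a full batch of prebuilt XDP buffs to the backend socket in
     * a single sendmsg() call by pointing msg_control at the array.
     */
    if (nvq->batched_xdp == VHOST_NET_BATCH) {
            struct tun_msg_ctl ctl = {
                    .type = TUN_MSG_PTR,
                    .num  = nvq->batched_xdp,
                    .ptr  = nvq->xdp,        /* array of struct xdp_buff */
            };

            msg.msg_control = &ctl;
            err = sock->ops->sendmsg(sock, &msg, 0);
            nvq->batched_xdp = 0;            /* batch consumed by TUN/TAP */
    }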
2018 Sep 12
0
[PATCH net-next V2 11/11] vhost_net: batch submitting XDP buffers to underlying sockets
This patch implements XDP batching for vhost_net. The idea is to first try to do the userspace copy and build the XDP buff directly in vhost. Instead of submitting the packet immediately, vhost_net will batch them in an array and submit every 64 (VHOST_NET_BATCH) packets to the underlying sockets through msg_control of sendmsg(). When XDP is enabled on the TUN/TAP, TUN/TAP can process XDP inside a loop
2018 Sep 07
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlying sockets
On 2018/09/07 00:46, Michael S. Tsirkin wrote: > On Thu, Sep 06, 2018 at 12:05:26PM +0800, Jason Wang wrote: >> This patch implements XDP batching for vhost_net. The idea is to first >> try to do the userspace copy and build the XDP buff directly in vhost. >> Instead of submitting the packet immediately, vhost_net will batch them >> in an array and submit every 64
2015 Feb 04
2
[PATCH v3 17/18] vhost: don't bother copying iovecs in handle_rx(), kill memcpy_toiovecend()
From: Al Viro <viro at zeniv.linux.org.uk> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: kvm at vger.kernel.org Cc: virtualization at lists.linux-foundation.org Signed-off-by: Al Viro <viro at zeniv.linux.org.uk> --- drivers/vhost/net.c | 82 +++++++++++++++-------------------------------------- include/linux/uio.h | 3 -- lib/iovec.c | 26 ----------------- 3 files
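The direction of the conversion, sketched with the generic iov_iter helpers that replace the hand-rolled copy (offsets and variable names simplified; the real change is in handle_rx(), fixing up the num_buffers field):

    /* Build an iov_iter over the guest buffers and let copy_to_iter()
     * scatter the value, instead of calling memcpy_toiovecend().
     */
    struct iov_iter fixup;

    iov_iter_init(&fixup, READ, vq->iov, in, vhost_hlen);
    if (copy_to_iter(&num_buffers, sizeof(num_buffers), &fixup) !=
        sizeof(num_buffers))
            vq_err(vq, "Failed to write num_buffers");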
2017 Jan 26
2
[BUG/RFC] vhost: net: big endian vring access despite virtio 1
Hi! Recently I have been investigating some strange migration problems on s390x. It turned out that under certain circumstances vhost_net corrupts avail.idx by using the wrong endianness. I managed to track the problem down (I'm pretty sure). It boils down to the following. When stopping vhost, userspace (QEMU) calls vhost_net_set_backend with the fd argument set to -1; this leads to is_le being
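For context, the endianness decision that gets clobbered, approximately as it reads in drivers/vhost/vhost.c: is_le must reflect the negotiated features, but the set_backend(fd = -1) stop path reset it to the legacy default, after which ring accesses use the wrong byte order on big endian hosts:

    static void vhost_init_is_le(struct vhost_virtqueue *vq)
    {
            /* virtio 1.0 rings are always little endian; legacy rings
             * follow the host's native byte order.
             */
            vq->is_le = vhost_has_feature(vq, VIRTIO_F_VERSION_1) ||
                        virtio_legacy_is_little_endian();
    }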
2018 Sep 09
7
[PATCH net-next v9 0/6] net: vhost: improve performance when enabling busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve the guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For more performance reports, see patches 4, 5, and 6. Tonghao Zhang (6): net: vhost: lock the vqs one by one net: vhost: replace magic number of lock annotation net: vhost: factor out
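A sketch of the TX-side check (shape as merged, slightly simplified): while busy-polling in handle_tx, peek at the socket's receive queue so pending RX is noticed without waiting for another wakeup:

    /* True if the backend socket has packets waiting to be received. */
    static bool sock_has_rx_data(struct socket *sock)
    {
            if (unlikely(!sock))
                    return false;

            if (sock->ops->peek_len)            /* TUN/TAP provide this */
                    return sock->ops->peek_len(sock);

            return !skb_queue_empty(&sock->sk->sk_receive_queue);
    }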