search for: datacopy

Displaying 15 results from an estimated 47 matches for "datacopy".

2018 May 21
1
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
On Mon, 21 May 2018 17:04:25 +0800 Jason wrote: > Instead of mixing zerocopy and datacopy logic, this patch tries to > split the datacopy logic out. This results in more compact code, and > specific optimizations can be done on top more easily. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vhost/net.c | 111 +++++++++++++++++++++++++++++...
2018 May 21
0
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
Instead of mixing zerocopy and datacopy logic, this patch tries to split the datacopy logic out. This results in more compact code, and specific optimizations can be done on top more easily. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 111 +++++++++++++++++++++++++++++++++++++++++++++++----- 1 fil...
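A minimal sketch of the shape of such a split, in the context of drivers/vhost/net.c; the helper names and signatures are assumptions based on the patch subject, not the actual diff:

    /* Sketch: handle_tx() becomes a thin dispatcher so each datapath
     * can be tuned independently of the other. */
    static void handle_tx(struct vhost_net *net)
    {
            struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
            struct vhost_virtqueue *vq = &nvq->vq;
            struct socket *sock;

            mutex_lock(&vq->mutex);
            sock = vq->private_data;
            if (!sock)
                    goto out;

            if (vhost_sock_zcopy(sock))
                    handle_tx_zerocopy(net, sock);  /* per-buffer ubuf tracking */
            else
                    handle_tx_copy(net, sock);      /* plain copy, batch-friendly */
    out:
            mutex_unlock(&vq->mutex);
    }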
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi: This series implements batched updating of the used ring for TX. This helps to reduce cache contention on the used ring. The idea is to first split the datacopy path from zerocopy, and do batching only for datacopy. This is because zerocopy already supports its own batching. TX PPS increased by 25.8%, and Netperf TCP does not show obvious differences. The split of the datapath will also be helpful for future implementations like in-order completion. Plea...
2018 Jul 22
0
[PATCH net-next 0/9] TX used ring batched updating for vhost
On Fri, Jul 20, 2018 at 08:15:12AM +0800, Jason Wang wrote: > Hi: > > This series implements batched updating of the used ring for TX. This helps to > reduce cache contention on the used ring. The idea is to first split the > datacopy path from zerocopy, and do batching only for datacopy. This > is because zerocopy already supports its own batching. > > TX PPS increased by 25.8%, and Netperf TCP does not show obvious > differences. > > The split of the datapath will also be helpful for future implementations &...
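A rough illustration of the batching this series describes; names such as VHOST_NET_BATCH and done_idx are assumptions, not necessarily the patch's own:

    /* Stage completed descriptors in vq->heads[] and flush them to the
     * used ring in one go, so the used ring's cache lines are touched
     * once per batch instead of once per packet. */
    #define VHOST_NET_BATCH 64

    static void vhost_tx_batch_flush(struct vhost_net *net,
                                     struct vhost_net_virtqueue *nvq)
    {
            if (nvq->done_idx < VHOST_NET_BATCH)
                    return;
            vhost_add_used_and_signal_n(&net->dev, &nvq->vq,
                                        nvq->vq.heads, nvq->done_idx);
            nvq->done_idx = 0;
    }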
2018 Nov 23
1
[PATCH net-next 2/3] vhost_net: support in order feature
On Fri, Nov 23, 2018 at 11:00:15AM +0800, Jason Wang wrote: > This makes vhost_net support the in order feature. This is as simple as > using the datacopy path when it is negotiated. An alternative is not to > advertise in order when zerocopy is enabled, which tends to be > suboptimal considering zerocopy may suffer from e.g. HOL issues. Well, IIRC vhost_zerocopy_signal_used is used to actually reorder the used ring to match the available ring. So with a b...
2019 Jun 17
2
[PATCH net-next] vhost_net: disable zerocopy by default
Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstreams disable the feature by default. What's more, the datapath was split and the datacopy path recently gained batching and XDP support, which makes it faster than the zerocopy path for small packet transmission. It looks to me that disabling zerocopy by default is more appropriate. It could be enabled by default again in the future if we fix the above issues. [1] https://patchwo...
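For reference, the knob involved is vhost_net's experimental_zcopytx module parameter, and the change amounts to flipping its default; the sketch below reflects my reading of the commit message (quoted in full in the AUTOSEL entry further down), and the exact code may differ:

    /* drivers/vhost/net.c: zerocopy TX becomes opt-in. */
    static int experimental_zcopytx;  /* previously initialized to 1 */
    module_param(experimental_zcopytx, int, 0444);
    MODULE_PARM_DESC(experimental_zcopytx,
                     "Enable Zero Copy TX; 1 -Enable; 0 - Disable");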
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
...es tries to remove this limitation by: - introducing a TUN-specific msg_control (sketched below) that can hold a pointer to an array of XDP buffs - trying to copy and build the XDP buff in vhost_net - storing XDP buffs in an array and submitting them once for every N packets from vhost_net - since TUN can only do native XDP for datacopy packets, to simplify the logic, splitting the datacopy logic out and doing batching only for datacopy. With this series, TX PPS improves by about 34%, from 2.9 Mpps to 3.9 Mpps, when doing xdp_redirect_map between TAP and ixgbe. Thanks Jason Wang (12): vhost_net: introduce helper to initialize tx iov iter...
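A sketch of the TUN-specific msg_control mentioned in the first bullet; the mainline version lives in include/linux/if_tun.h, and the field names here are assumptions:

    #define TUN_MSG_UBUF 1  /* msg_control carries a zerocopy ubuf_info */
    #define TUN_MSG_PTR  2  /* msg_control carries an array of XDP buffs */

    struct tun_msg_ctl {
            unsigned short type;  /* TUN_MSG_UBUF or TUN_MSG_PTR */
            unsigned short num;   /* entries in the array below */
            void *ptr;            /* for TUN_MSG_PTR: struct xdp_buff *[] */
    };

With something of this shape, vhost_net can accumulate up to N XDP buffs and hand the whole array to TUN in a single sendmsg() call instead of one call per packet.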
2014 Apr 10
1
[PATCH RFC V2 4/4] tools: virtio: add a top-like utility for displaying vhost statistics
...1215215 0
> vhost_work_queue_wakeup            986808   0
> vhost_virtio_signal                811601   0
> vhost_net_tx                       611457   0
> vhost_net_rx                       603758   0
> vhost_net_tx(datacopy)             601903   0
> vhost_work_queue_wakeup(rx_net)    565081   0
> vhost_virtio_signal(rx)            461603   0
> vhost_work_queue_wakeup(tx_kick)   421718   0
> vhost_virtio_update_avail_event    417346   0
> ...
2014 Mar 21
5
[PATCH RFC V2 0/4] Adding tracepoints to vhost/net
...get_vq_desc                      1215215   0
vhost_work_queue_wakeup              986808   0
vhost_virtio_signal                  811601   0
vhost_net_tx                         611457   0
vhost_net_rx                         603758   0
vhost_net_tx(datacopy)               601903   0
vhost_work_queue_wakeup(rx_net)      565081   0
vhost_virtio_signal(rx)              461603   0
vhost_work_queue_wakeup(tx_kick)     421718   0
vhost_virtio_update_avail_event      417346   0
vhost_virtio_signal(t...
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...>> >>>> 2) tx napi is used for virtio-net >>> >>> I am not aware of any issue specific to the use of tx-napi? > > > Might not be clear here, I mean e.g. virtio_net (tx-napi) in guest + > vhost_net (zerocopy) in host. In this case, even if we switch to datacopy when > ubuf counts exceed vq->num >> 1, we still complete tx buffers in order, so the tx > interrupt could be delayed for an indefinite time. Copied buffers are completed immediately in handle_tx. Do you mean when a process sends fewer packets than vq->num >> 1, so that all are queue...
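The threshold being discussed is the cap on outstanding zerocopy buffers; roughly, following the thread's description (names and the exact constant are assumptions):

    /* Fall back to datacopy once too many zerocopy buffers are pending:
     * outstanding = produced (upend_idx) - completed (done_idx), modulo
     * the ring of ubuf slots. The ">> 1" vs ">> 2" choice is exactly
     * what this thread is debating. */
    static bool vhost_exceeds_maxpend(struct vhost_net *net)
    {
            struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
            struct vhost_virtqueue *vq = &nvq->vq;

            return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
                   (vq->num >> 1);
    }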
2018 Nov 23
5
[PATCH net-next 0/3] basic in order support for vhost_net
Hi: This series implements basic in order feature support for vhost_net. This feature requires both driver and device to use descriptors in order, which can simplify the implementation and optimization on both sides. The series also implements a simple optimization that avoids reading the available ring. Tests show a 10% performance improvement. More optimizations could be done on top. Jason Wang (3):
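The avail-ring optimization presumably works roughly like this; a sketch under the assumption that in-order descriptors are consumed sequentially, with illustrative helper names:

    /* With VIRTIO_F_IN_ORDER the driver hands out descriptors
     * sequentially, so the next head is just last_avail_idx modulo the
     * ring size; the avail ring itself never has to be read, saving a
     * cache miss per descriptor. */
    static int vhost_get_avail_head(struct vhost_virtqueue *vq)
    {
            __virtio16 head;

            if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
                    return vq->last_avail_idx & (vq->num - 1);

            if (vhost_get_avail(vq, head,
                                &vq->avail->ring[vq->last_avail_idx &
                                                 (vq->num - 1)]))
                    return -EFAULT;
            return vhost16_to_cpu(vq, head);
    }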
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
>>> This is not a 50/50 split, which implies that some packets from the >>> large >>> packet flow are still converted to copying. Without the change the rate >>> without queue was 80k zerocopy vs 80k copy, so this choice of >>> (vq->num >> 2) appears too conservative. >>> >>> However, testing with (vq->num >> 1) was
2014 Mar 21
0
[PATCH RFC V2 4/4] tools: virtio: add a top-like utility for displaying vhost statistics
...get_vq_desc                      1215215   0
vhost_work_queue_wakeup              986808   0
vhost_virtio_signal                  811601   0
vhost_net_tx                         611457   0
vhost_net_rx                         603758   0
vhost_net_tx(datacopy)               601903   0
vhost_work_queue_wakeup(rx_net)      565081   0
vhost_virtio_signal(rx)              461603   0
vhost_work_queue_wakeup(tx_kick)     421718   0
vhost_virtio_update_avail_event      417346   0
vhost_virtio_signal(t...
2018 Nov 23
0
[PATCH net-next 2/3] vhost_net: support in order feature
This makes vhost_net support the in order feature. This is as simple as using the datacopy path when it is negotiated. An alternative is not to advertise in order when zerocopy is enabled, which tends to be suboptimal considering zerocopy may suffer from e.g. HOL issues. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 6 ++++-- 1 file changed, 4 insertion...
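In other words, the dispatch between the two TX paths presumably gains an in-order check, something like the following (assumed shape, not the exact diff):

    static void handle_tx(struct vhost_net *net, struct socket *sock)
    {
            struct vhost_virtqueue *vq = &net->vqs[VHOST_NET_VQ_TX].vq;

            /* Zerocopy completes buffers as the NIC finishes them,
             * potentially out of order, so when VIRTIO_F_IN_ORDER is
             * negotiated the copy path is used unconditionally. */
            if (vhost_sock_zcopy(sock) &&
                !vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
                    handle_tx_zerocopy(net, sock);
            else
                    handle_tx_copy(net, sock);
    }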
2019 Jul 15
0
[PATCH AUTOSEL 5.2 119/249] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com> [ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ] Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstreams disable the feature by default. What's more, the datapath was split and the datacopy path recently gained batching and XDP support, which makes it faster than the zerocopy path for small packet transmission. It looks to me that disabling zerocopy by default is more appropriate. It could be enabled by default again in the future if we fix the above issues. [1] https://patchwo...