Displaying 20 results from an estimated 3000 matches similar to: "[PATCH AUTOSEL 4.14 056/105] vhost_net: disable zerocopy by default"

2019 Jul 15
0
[PATCH AUTOSEL 5.1 105/219] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com> [ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ] Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstream distributions disable the feature by default. What's more, the datapath was split and the datacopy path recently got batching and XDP support, which makes it faster than the zerocopy path for small
2019 Jul 15
0
[PATCH AUTOSEL 5.2 119/249] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com> [ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ] Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstream distributions disable the feature by default. What's more, the datapath was split and the datacopy path recently got batching and XDP support, which makes it faster than the zerocopy path for small
2019 Jul 15
0
[PATCH AUTOSEL 4.19 079/158] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com> [ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ] Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstream distributions disable the feature by default. What's more, the datapath was split and the datacopy path recently got batching and XDP support, which makes it faster than the zerocopy path for small
2019 Jul 15
0
[PATCH AUTOSEL 4.9 41/73] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com> [ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ] Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstream distributions disable the feature by default. What's more, the datapath was split and the datacopy path recently got batching and XDP support, which makes it faster than the zerocopy path for small
2019 Jul 15
0
[PATCH AUTOSEL 4.4 33/53] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com> [ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ] Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstream distributions disable the feature by default. What's more, the datapath was split and the datacopy path recently got batching and XDP support, which makes it faster than the zerocopy path for small
2019 Jun 17
2
[PATCH net-next] vhost_net: disable zerocopy by default
Vhost_net was known to suffer from HOL[1] issues which are not easy to fix. Several downstream distributions disable the feature by default. What's more, the datapath was split and the datacopy path recently got batching and XDP support, which makes it faster than the zerocopy path for small packet transmission. It looks to me that disabling zerocopy by default is more appropriate. It could be enabled
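The excerpt above is cut off right at the knob, but the change described here (and backported by the AUTOSEL entries above) is a default flip on vhost_net's experimental_zcopytx module parameter. A minimal sketch of what that looks like in drivers/vhost/net.c, with the parameter description paraphrased rather than quoted:

#include <linux/module.h>
#include <linux/moduleparam.h>

/*
 * Sketch of the default flip described above: zerocopy TX in vhost_net
 * is gated by this module parameter, and the patch changes its default
 * from enabled to disabled.  Description text paraphrased.
 */
static int experimental_zcopytx;        /* previously initialised to 1 */
module_param(experimental_zcopytx, int, 0444);
MODULE_PARM_DESC(experimental_zcopytx,
                 "Enable zerocopy TX (disabled by default; set to 1 to enable)");

Loading the module with experimental_zcopytx=1 (e.g. modprobe vhost_net experimental_zcopytx=1) restores the old behaviour for setups that still want zerocopy.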
2018 Nov 23
1
[PATCH net-next 2/3] vhost_net: support in order feature
On Fri, Nov 23, 2018 at 11:00:15AM +0800, Jason Wang wrote: > This makes vhost_net support the in order feature. This is as simple as > using the datacopy path when it is negotiated. An alternative is not to > advertise in order when zerocopy is enabled, which tends to be > suboptimal considering zerocopy may suffer from e.g. HOL issues. Well, IIRC vhost_zerocopy_signal_used is used to actually
2017 Sep 04
0
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
On 2017-09-02 00:17, Willem de Bruijn wrote: >>>> This is not a 50/50 split, which implies that some packets from the >>>> large >>>> packet flow are still converted to copying. Without the change the rate >>>> without queue was 80k zerocopy vs 80k copy, so this choice of >>>> (vq->num >> 2) appears too conservative.
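The (vq->num >> 2) being questioned in this thread corresponds to the cap vhost_net puts on outstanding zerocopy TX buffers before new packets stop going through the zerocopy path. A sketch of that bound, roughly in the shape it has in drivers/vhost/net.c (details vary by kernel version):

/*
 * Sketch of the pending-zerocopy cap discussed in this thread.
 * upend_idx advances when a zerocopy buffer is submitted, done_idx when
 * its completion fires; once more than min(VHOST_MAX_PEND, vq->num >> 2)
 * buffers are in flight, new packets are not sent via zerocopy.
 */
static bool vhost_exceeds_maxpend(struct vhost_net *net)
{
        struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
        struct vhost_virtqueue *vq = &nvq->vq;

        return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
               min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2);
}

The thread is essentially debating whether that last term is too conservative, i.e. whether vq->num >> 1 (half the ring) would be a better cutoff.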
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
On Mon, Sep 4, 2017 at 5:03 AM, Jason Wang <jasowang at redhat.com> wrote: > > > On 2017-09-02 00:17, Willem de Bruijn wrote: >>>>> >>>>> This is not a 50/50 split, which implies that some packets from the >>>>> large >>>>> packet flow are still converted to copying. Without the change the rate >>>>>
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
On Mon, Sep 4, 2017 at 5:03 AM, Jason Wang <jasowang at redhat.com> wrote: > > > On 2017-09-02 00:17, Willem de Bruijn wrote: >>>>> >>>>> This is not a 50/50 split, which implies that some packets from the >>>>> large >>>>> packet flow are still converted to copying. Without the change the rate >>>>>
2018 Nov 23
0
[PATCH net-next 2/3] vhost_net: support in order feature
This makes vhost_net support the in order feature. This is as simple as using the datacopy path when it is negotiated. An alternative is not to advertise in order when zerocopy is enabled, which tends to be suboptimal considering zerocopy may suffer from e.g. HOL issues. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 6 ++++-- 1 file changed, 4 insertions(+), 2
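The hunk itself is truncated in the excerpt, but the mechanism described is just an extra condition on the zerocopy decision: when VIRTIO_F_IN_ORDER has been negotiated, always take the datacopy path. An illustrative sketch (the helper name is hypothetical and the real patch may place the check elsewhere):

#include <uapi/linux/virtio_config.h>   /* VIRTIO_F_IN_ORDER */

/*
 * Hypothetical helper illustrating the idea: an in order guest always
 * gets the (batched, XDP-capable) datacopy path, everyone else keeps
 * the existing zerocopy eligibility test.
 */
static bool vhost_net_tx_may_zcopy(struct vhost_net *net, struct socket *sock)
{
        struct vhost_virtqueue *vq = &net->vqs[VHOST_NET_VQ_TX].vq;

        if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
                return false;                   /* in order => datacopy */

        return vhost_sock_zcopy(sock);          /* existing check in net.c */
}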
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
>>> This is not a 50/50 split, which implies that some packets from the >>> large >>> packet flow are still converted to copying. Without the change the rate >>> without queue was 80k zerocopy vs 80k copy, so this choice of >>> (vq->num >> 2) appears too conservative. >>> >>> However, testing with (vq->num >> 1) was
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
>>> This is not a 50/50 split, which implies that some packets from the >>> large >>> packet flow are still converted to copying. Without the change the rate >>> without queue was 80k zerocopy vs 80k copy, so this choice of >>> (vq->num >> 2) appears too conservative. >>> >>> However, testing with (vq->num >> 1) was
2018 May 21
0
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
Instead of mixing zerocopy and datacopy logic, this patch tries to split the datacopy logic out. This results in more compact code, and specific optimizations can be done on top more easily. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 111 +++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 102 insertions(+), 9 deletions(-) diff --git
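A rough sketch of the dispatch this split produces: one TX handler per datapath instead of a single loop full of per-packet zerocopy branches. The function names follow what later landed in drivers/vhost/net.c, but backend socket lookup, locking, and notification handling are omitted:

/*
 * Simplified sketch of handle_tx() after the split: pick one of two
 * self-contained loops instead of branching on zerocopy per packet.
 * The real function also fetches the backend socket and takes the
 * vq mutex; that is left out for brevity.
 */
static void handle_tx(struct vhost_net *net, struct socket *sock)
{
        if (vhost_sock_zcopy(sock))
                handle_tx_zerocopy(net, sock);  /* zerocopy-specific loop */
        else
                handle_tx_copy(net, sock);      /* plain datacopy loop */
}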
2018 May 21
1
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
On Mon, 21 May 2018 17:04:25 +0800 Jason wrote: > Instead of mixing zerocopy and datacopy logic, this patch tries to > split the datacopy logic out. This results in more compact code, and > specific optimizations can be done on top more easily. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vhost/net.c | 111
2018 Nov 23
5
[PATCH net-next 0/3] basic in order support for vhost_net
Hi: This series implements basic in order feature support for vhost_net. This feature requires both driver and device to use descriptors in order, which can simplify the implementation and optimization on both sides. The series also implements a simple optimization that avoids reading the available ring. Tests show a 10% performance improvement. More optimizations could be done on top. Jason Wang (3):
2017 Sep 01
0
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
On Thu, Aug 31, 2017 at 11:25 PM, Jason Wang <jasowang at redhat.com> wrote: > > > On 2017-08-31 22:30, Willem de Bruijn wrote: >>> >>> Incomplete results at this stage, but I do see this correlation between >>> flows. It occurs even while not running out of zerocopy descriptors, >>> which I cannot yet explain. >>> >>> Running
2013 Aug 16
2
[PATCH 6/6] vhost_net: remove the max pending check
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: > We used to limit the max pending DMAs to prevent the guest from pinning too many > pages. But this could be removed since: > > - We have the sk_wmem_alloc check in both tun/macvtap to do the same work > - This max pending check was almost useless since it was only done when there are > no new buffers coming from
2013 Aug 16
2
[PATCH 6/6] vhost_net: remove the max pending check
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: > We used to limit the max pending DMAs to prevent the guest from pinning too many > pages. But this could be removed since: > > - We have the sk_wmem_alloc check in both tun/macvtap to do the same work > - This max pending check was almost useless since it was only done when there are > no new buffers coming from
2013 Aug 16
0
[PATCH 6/6] vhost_net: remove the max pending check
We used to limit the max pending DMAs to prevent the guest from pinning too many pages. But this could be removed since: - We have the sk_wmem_alloc check in both tun/macvtap to do the same work - This max pending check was almost useless since it was only done when there are no new buffers coming from the guest. The guest can easily exceed the limitation. - We already check upend_idx != done_idx
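For reference, the bookkeeping behind both the removed limit and the remaining upend_idx != done_idx check: each TX virtqueue tracks its in-flight zerocopy buffers with two indices into a UIO_MAXIOV-sized array. A small illustrative helper (hypothetical name; the real driver open-codes this arithmetic):

#include <linux/uio.h>  /* UIO_MAXIOV: size of the per-vq pending-buffer array */

/*
 * Hypothetical helper showing the arithmetic: upend_idx advances when a
 * zerocopy buffer is handed to the device, done_idx when its completion
 * callback runs, both modulo UIO_MAXIOV.  Their distance is the number
 * of pending DMAs that the removed "max pending" check used to bound.
 */
static inline unsigned int tx_pending_zcopy(unsigned int upend_idx,
                                            unsigned int done_idx)
{
        return (upend_idx + UIO_MAXIOV - done_idx) % UIO_MAXIOV;
}

With the explicit check gone, backpressure comes from the sk_wmem_alloc accounting in tun/macvtap instead, which is the first point the changelog makes.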