Displaying 20 results from an estimated 4000 matches similar to: "[PATCH net-next] vhost_net: disable zerocopy by default"
2019 Jul 15
0
[PATCH AUTOSEL 5.2 119/249] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com>
[ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ]
Vhost_net was known to suffer from HOL[1] issues which are not easy to
fix. Several downstream distributions disable the feature by default. What's
more, the datapath was split and the datacopy path recently gained batching
and XDP support, which makes it faster than the zerocopy path for
small
2019 Jul 15
0
[PATCH AUTOSEL 5.1 105/219] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com>
[ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ]
Vhost_net was known to suffer from HOL[1] issues which are not easy to
fix. Several downstream distributions disable the feature by default. What's
more, the datapath was split and the datacopy path recently gained batching
and XDP support, which makes it faster than the zerocopy path for
small
2019 Jul 15
0
[PATCH AUTOSEL 4.19 079/158] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com>
[ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ]
Vhost_net was known to suffer from HOL[1] issues which are not easy to
fix. Several downstream distributions disable the feature by default. What's
more, the datapath was split and the datacopy path recently gained batching
and XDP support, which makes it faster than the zerocopy path for
small
2019 Jul 15
0
[PATCH AUTOSEL 4.14 056/105] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com>
[ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ]
Vhost_net was known to suffer from HOL[1] issues which are not easy to
fix. Several downstream distributions disable the feature by default. What's
more, the datapath was split and the datacopy path recently gained batching
and XDP support, which makes it faster than the zerocopy path for
small
2019 Jul 15
0
[PATCH AUTOSEL 4.9 41/73] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com>
[ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ]
Vhost_net was known to suffer from HOL[1] issues which are not easy to
fix. Several downstream distributions disable the feature by default. What's
more, the datapath was split and the datacopy path recently gained batching
and XDP support, which makes it faster than the zerocopy path for
small
2019 Jul 15
0
[PATCH AUTOSEL 4.4 33/53] vhost_net: disable zerocopy by default
From: Jason Wang <jasowang at redhat.com>
[ Upstream commit 098eadce3c622c07b328d0a43dda379b38cf7c5e ]
Vhost_net was known to suffer from HOL[1] issues which are not easy to
fix. Several downstream distributions disable the feature by default. What's
more, the datapath was split and the datacopy path recently gained batching
and XDP support, which makes it faster than the zerocopy path for
small
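For context on the entries above: the referenced upstream commit essentially flips the default of vhost_net's experimental_zcopytx module parameter so zerocopy TX is off unless explicitly enabled. A minimal sketch of that default, paraphrased from the commit summary rather than quoted from the actual diff:

#include <linux/module.h>

/* Zerocopy TX is now opt-in: load vhost_net with experimental_zcopytx=1
 * to re-enable it. */
static int experimental_zcopytx;	/* previously defaulted to 1 */
module_param(experimental_zcopytx, int, 0444);
MODULE_PARM_DESC(experimental_zcopytx,
		 "Enable zero-copy TX (1 to enable, 0 to disable)");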
2018 Nov 23
1
[PATCH net-next 2/3] vhost_net: support in order feature
On Fri, Nov 23, 2018 at 11:00:15AM +0800, Jason Wang wrote:
> This makes vhost_net support the in order feature. This is as simple as
> using the datacopy path when it is negotiated. An alternative is not to
> advertise in order when zerocopy is enabled, which tends to be
> suboptimal considering zerocopy may suffer from e.g. HOL issues.
Well IIRC vhost_zerocopy_signal_used is used to
actually
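A hedged sketch of the trade-off being discussed, written as standalone C rather than the patch's actual code (the helper name is made up; VIRTIO_F_IN_ORDER is the in order feature bit from the virtio spec): once in order is negotiated, skip zerocopy and use the datacopy path, since zerocopy completions can land out of order and risk HOL blocking.

#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_F_IN_ORDER 35	/* feature bit number from the virtio spec */

static bool use_zerocopy_tx(uint64_t negotiated_features, bool zcopy_enabled)
{
	/* With in order negotiated, always fall back to datacopy. */
	if (negotiated_features & (1ULL << VIRTIO_F_IN_ORDER))
		return false;
	return zcopy_enabled;
}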
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
On Mon, Sep 4, 2017 at 5:03 AM, Jason Wang <jasowang at redhat.com> wrote:
>
>
> On 2017-09-02 00:17, Willem de Bruijn wrote:
>>>>>
>>>>> This is not a 50/50 split, which implies that some packets from the
>>>>> large
>>>>> packet flow are still converted to copying. Without the change the rate
>>>>>
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
>>> This is not a 50/50 split, which implies that some packets from the
>>> large
>>> packet flow are still converted to copying. Without the change the rate
>>> without queue was 80k zerocopy vs 80k copy, so this choice of
>>> (vq->num >> 2) appears too conservative.
>>>
>>> However, testing with (vq->num >> 1) was
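The threshold under debate is easy to model in isolation. A standalone sketch (function and parameter names are illustrative, not vhost_net's): zerocopy is allowed only while fewer than a quarter of the ring's descriptors are outstanding; the thread asks whether half the ring (vq->num >> 1) would be a better cut-off.

#include <stdbool.h>
#include <stdint.h>

static bool tx_may_zerocopy(uint32_t vq_num, uint32_t outstanding)
{
	/* Fall back to copying once a quarter of the ring is in flight;
	 * the alternative discussed above is vq_num >> 1 (half the ring). */
	return outstanding < (vq_num >> 2);
}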
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batched updating of the used ring for TX. This helps
to reduce cache contention on the used ring. The idea is to first split the
datacopy path from zerocopy and do batching only for datacopy, since
zerocopy already supports its own batching.
TX PPS increased by 25.8% and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for
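A standalone model of the batching idea in this cover letter (names and the batch size are illustrative, not the series' actual code): completed TX descriptors are queued locally and written to the shared used ring in one flush, so its cache lines are touched far less often.

#include <stddef.h>
#include <stdint.h>

#define TX_BATCH 64			/* illustrative batch size */

struct used_elem { uint32_t id; uint32_t len; };

struct tx_batch {
	struct used_elem heads[TX_BATCH];
	size_t done;
};

static void add_used_and_maybe_flush(struct tx_batch *b, uint32_t id, uint32_t len,
				     void (*flush)(const struct used_elem *, size_t))
{
	b->heads[b->done].id = id;
	b->heads[b->done].len = len;
	if (++b->done == TX_BATCH) {
		flush(b->heads, b->done);	/* single update of the used ring */
		b->done = 0;
	}
}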
2018 Nov 23
5
[PATCH net-next 0/3] basic in order support for vhost_net
Hi:
This series implements basic in order feature support for
vhost_net. This feature requires both driver and device to use
descriptors in order, which can simplify the implementation and
optimization on both sides. The series also implements a simple
optimization that avoids reading the available ring. Tests show a 10%
performance improvement.
More optimizations could be done on top.
Jason Wang (3):
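A hedged sketch of the "avoid reading the available ring" optimization mentioned in the cover letter, as a standalone helper rather than the series' code: with in order negotiated, descriptors are consumed sequentially, so the next head follows from the running index and the avail->ring[] lookup can be skipped (assuming the power-of-two ring size that split virtqueues use).

#include <stdint.h>

static uint16_t next_head_in_order(uint16_t last_avail_idx, uint16_t ring_size)
{
	/* No avail->ring[] read: the head is implied by the running index. */
	return last_avail_idx & (ring_size - 1);
}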
2014 Mar 21
5
[PATCH RFC V2 0/4] Adding tracepoints to vhost/net
Recent debugging of vhost net zerocopy shows the need for
tracepoints. So, to help with vhost{net} debugging and performance
analysis, the following series adds basic tracepoints to
vhost. Operations of both vhost and vhost_net are traced in the current
implementation.
A top-like statistics display script is introduced to help with
troubleshooting:
vhost statistics
vhost_virtio_update_used_idx
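A hedged sketch of what one such tracepoint could look like; the event name is taken from the statistics output above, but the fields are guesses rather than the series' actual definition, and the usual TRACE_SYSTEM header boilerplate is omitted.

#include <linux/tracepoint.h>

TRACE_EVENT(vhost_virtio_update_used_idx,
	TP_PROTO(void *vq, u16 used_idx),
	TP_ARGS(vq, used_idx),
	TP_STRUCT__entry(
		__field(void *, vq)
		__field(u16, used_idx)
	),
	TP_fast_assign(
		__entry->vq = vq;
		__entry->used_idx = used_idx;
	),
	TP_printk("vq %p used_idx %u", __entry->vq, __entry->used_idx)
);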
2018 May 21
1
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
On Mon, 21 May 2018 17:04:25 +0800 Jason wrote:
> Instead of mixing the zerocopy and datacopy logic, this patch tries to
> split the datacopy logic out. This results in more compact code, and
> specific optimizations could be done on top more easily.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
> ---
> drivers/vhost/net.c | 111
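A minimal standalone model of the split the quoted patch describes (names are illustrative; the real code operates on struct vhost_net): a single dispatcher picks a dedicated handler per mode, so the datacopy loop no longer carries zerocopy bookkeeping and each path can be optimized independently.

#include <stdbool.h>

struct tx_ctx { bool zerocopy; };

static void handle_tx_copy(struct tx_ctx *ctx)     { (void)ctx; /* datacopy-only loop */ }
static void handle_tx_zerocopy(struct tx_ctx *ctx) { (void)ctx; /* zerocopy loop */ }

static void handle_tx(struct tx_ctx *ctx)
{
	if (ctx->zerocopy)
		handle_tx_zerocopy(ctx);
	else
		handle_tx_copy(ctx);
}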
2013 Aug 16
2
[PATCH 6/6] vhost_net: remove the max pending check
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
> We used to limit the max pending DMAs to prevent the guest from pinning too
> many pages. But this can be removed since:
>
> - We have the sk_wmem_alloc check in both tun/macvtap to do the same work
> - This max pending check was almost useless since it was only done when there are
> no new buffers coming from
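A standalone model of the check being removed (the limit and the index arithmetic are illustrative, not the driver's actual values): new zerocopy DMAs stop being submitted once too many are still in flight, bounding how many guest pages stay pinned.

#include <stdbool.h>

#define MAX_PEND 128			/* illustrative in-flight limit */

static bool too_many_pending(unsigned int upend_idx, unsigned int done_idx,
			     unsigned int ring_size)
{
	/* Indices are assumed to live in [0, ring_size). */
	unsigned int in_flight = (upend_idx + ring_size - done_idx) % ring_size;

	return in_flight >= MAX_PEND;
}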
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
On 2017-08-31 22:30, Willem de Bruijn wrote:
>> Incomplete results at this stage, but I do see this correlation between
>> flows. It occurs even while not running out of zerocopy descriptors,
>> which I cannot yet explain.
>>
>> Running two threads in a guest, each with a UDP socket, each
>> sending up to 100 datagrams, or until EAGAIN, every msec.
>>
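A hypothetical reconstruction of the guest-side load generator described above, as self-contained C (the destination address, port, and payload size are made up for illustration): a nonblocking UDP socket sends up to 100 datagrams per millisecond, ending a burst early on EAGAIN.

#define _GNU_SOURCE
#include <arpa/inet.h>
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	char payload[1400] = { 0 };
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port   = htons(9000),
	};
	int fd;

	inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr);
	fd = socket(AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, 0);
	if (fd < 0)
		return 1;

	for (;;) {
		int i;

		for (i = 0; i < 100; i++) {
			if (sendto(fd, payload, sizeof(payload), 0,
				   (struct sockaddr *)&dst, sizeof(dst)) < 0 &&
			    errno == EAGAIN)
				break;	/* socket buffer full: end this burst */
		}
		usleep(1000);		/* next burst one millisecond later */
	}
}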