Search results for "zerocopy" in the vhost-net archives:
2011 Jul 22
3
[PULL net] vhost-net: zerocopy mode fixes
The following includes vhost-net fixes, both in the
experimental zero copy mode.
Please pull for 3.1.
Thanks!
2012 Dec 03
1
[PATCH] vhost-net: initialize zcopy packet counters
These packet counters are used to drive the zerocopy
selection heuristic, so nothing too bad happens if they are off a bit,
and they are also reset once in a while.
But it's cleaner to clear them when the backend is set so that
we start in a known state.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
drivers/vhost/net.c | 3 +++
1...
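For context, here is a minimal sketch of the heuristic these counters feed. The field and function names follow my reading of drivers/vhost/net.c (the tx_packets / tx_zcopy_err pair checked by vhost_net_tx_select_zcopy()); treat the names and the 1-in-64 threshold as illustrative, not authoritative.

#include <stdbool.h>

struct net_stats {
	unsigned long tx_packets;   /* packets sent since the backend was set */
	unsigned long tx_zcopy_err; /* zerocopy sends that had to fall back   */
};

/* Zerocopy stays selected while the observed failure rate is below
 * roughly 1 in 64 packets, so stale counters left over from a
 * previous backend could skew the first decisions after a switch. */
bool tx_select_zcopy(const struct net_stats *s)
{
	return s->tx_packets / 64 >= s->tx_zcopy_err;
}

/* What the patch does, in spirit: reset both counters to a known
 * state whenever a new backend is attached. */
void on_set_backend(struct net_stats *s)
{
	s->tx_packets = 0;
	s->tx_zcopy_err = 0;
}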
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...ver XDP was used. I was going to dig into it but not yet.
>
> Right, just to confirm this. This is expected.
>
> In tuntap, we do native XDP only for small and non-zerocopy packets. See
> tun_can_build_skb(). The reason is that XDP may adjust the packet header,
> which is not supported by zerocopy. We can only use XDP generic for
> zerocopy in this case.
I think I understand when driver XDP can be used. What I'm not sure about,
and was going to narrow down, is why zerocopy is mostly not applied.
--
Toshiaki Makita
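For reference, a simplified paraphrase of the tun_can_build_skb() condition discussed in this thread. The real check in drivers/net/tun.c also looks at the device type, socket sndbuf and blocking mode; the size math and names below are illustrative stand-ins.

#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE_SKETCH 4096 /* stand-in for the arch page size */

/* Native XDP gets a linear, page-backed skb it can rewrite in place,
 * so zerocopy buffers and large packets are excluded and take the
 * XDP generic path instead. */
bool can_build_skb(size_t len, size_t pad, bool zerocopy)
{
	if (zerocopy)
		return false; /* user pages must not be modified by XDP */
	if (len + pad > PAGE_SIZE_SKETCH)
		return false; /* must fit one page for build_skb()      */
	return true;
}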
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...dig into it but not yet.
>>> Right, just to confirm this. This is expected.
>>>
>>> In tuntap, we do native XDP only for small and non-zerocopy packets. See
>>> tun_can_build_skb(). The reason is that XDP may adjust the packet header,
>>> which is not supported by zerocopy. We can only use XDP generic for
>>> zerocopy in this case.
>> I think I understand when driver XDP can be used. What I'm not sure
>> about, and was going to narrow down, is why zerocopy is mostly not applied.
>>
>
> I see, any touch to the zerocopy packet (clone, head...
2017 Sep 27
2
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
...nd for simplicity, batched
> > > processing was simply disabled by only fetching and processing one
> > > descriptor at a time; this could be optimized in the future.
> > >
> > > XDP_DROP (without touching skb) on tun (with Moongen in guest) with
> > > zerocopy disabled:
> > >
> > > Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz:
> > > Before: 3.20Mpps
> > > After: 3.90Mpps (+22%)
> > >
> > > No differences were seen with zerocopy enabled.
> > >
> > > Signed-off-by: Jason Wang <jasow...
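To illustrate the effect being measured, here is a runnable sketch with stubbed helpers; fetch_descs() and the pending counter are hypothetical, not the vhost API. The datacopy path pulls up to a batch of descriptors per used-ring update, while zerocopy keeps the one-at-a-time loop because its completions arrive asynchronously from the device.

#include <stdbool.h>
#include <stdio.h>

#define VHOST_TX_BATCH 64

static int pending = 200; /* stub queue: pretend 200 descriptors wait */

static int fetch_descs(int batch)
{
	int n = pending < batch ? pending : batch;
	pending -= n;
	return n;
}

static int handle_tx(bool zerocopy)
{
	int batch = zerocopy ? 1 : VHOST_TX_BATCH;
	int updates = 0;

	while (fetch_descs(batch) > 0)
		updates++; /* one used-ring update + signal per batch */
	return updates;
}

int main(void)
{
	printf("datacopy: %d used-ring updates\n", handle_tx(false));
	pending = 200;
	printf("zerocopy: %d used-ring updates\n", handle_tx(true));
	return 0;
}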
2017 Sep 26
2
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
...ing
> more batching on top. For the zerocopy case and for simplicity, batched
> processing was simply disabled by only fetching and processing one
> descriptor at a time; this could be optimized in the future.
>
> XDP_DROP (without touching skb) on tun (with Moongen in guest) with
> zerocopy disabled:
>
> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz:
> Before: 3.20Mpps
> After: 3.90Mpps (+22%)
>
> No differences were seen with zerocopy enabled.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
So where is the speedup coming from? I'd guess the ri...
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batch updating of the used ring for TX. This helps
reduce cache contention on the used ring. The idea is to first split the
datacopy path from zerocopy, and do batching only for datacopy. This
is because zerocopy already supports its own batching.
TX PPS increased by 25.8%, and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for future implementations
like in-order completion.
Please review.
Thanks
Jason Wang (9):
vhost_net: drop unnecessary parameter
v...
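A minimal sketch of the used-ring batching this series describes, assuming a shadow array that is flushed in one call; in vhost this would map to something like vhost_add_used_and_signal_n(), but the structure, VHOST_NET_BATCH value and helper names below are illustrative.

#include <stdio.h>

#define VHOST_NET_BATCH 64

struct used_elem { unsigned int id, len; };

static struct used_elem heads[VHOST_NET_BATCH]; /* pending used entries */
static int done_idx;

static void flush(void)
{
	if (!done_idx)
		return;
	/* One write to the shared used ring (and at most one guest
	 * notification) covers the whole batch. */
	printf("publish %d used entries\n", done_idx);
	done_idx = 0;
}

/* Called once per transmitted packet on the datacopy path. */
static void tx_done(unsigned int id, unsigned int len)
{
	heads[done_idx].id = id;
	heads[done_idx].len = len;
	if (++done_idx == VHOST_NET_BATCH)
		flush();
}

int main(void)
{
	for (unsigned int i = 0; i < 150; i++)
		tx_done(i, 1500);
	flush(); /* publish the tail of the last, partial batch */
	return 0;
}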
2018 Jul 02
1
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...expected.
>>>>>
>>>>> In tuntap, we do native XDP only for small and non-zerocopy
>>>>> packets. See tun_can_build_skb(). The reason is that XDP may
>>>>> adjust the packet header, which is not supported by zerocopy.
>>>>> We can only use XDP generic for zerocopy in this case.
>>>> I think I understand when driver XDP can be used. What I'm not sure
>>>> about, and was going to narrow down, is why zerocopy is mostly not applied.
>>>>
>>> I see, any touch t...
2017 Sep 27
0
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
...ing on top. For the zerocopy case and for simplicity, batched
>> processing was simply disabled by only fetching and processing one
>> descriptor at a time; this could be optimized in the future.
>>
>> XDP_DROP (without touching skb) on tun (with Moongen in guest) with
>> zerocopy disabled:
>>
>> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz:
>> Before: 3.20Mpps
>> After: 3.90Mpps (+22%)
>>
>> No differences were seen with zerocopy enabled.
>>
>> Signed-off-by: Jason Wang <jasowang at redhat.com>
> So where is the speedup...
2018 Jul 22
0
[PATCH net-next 0/9] TX used ring batched updating for vhost
...20, 2018 at 08:15:12AM +0800, Jason Wang wrote:
> Hi:
>
> This series implements batch updating of the used ring for TX. This helps
> reduce cache contention on the used ring. The idea is to first split the
> datacopy path from zerocopy, and do batching only for datacopy. This
> is because zerocopy already supports its own batching.
>
> TX PPS increased by 25.8%, and Netperf TCP does not show obvious
> differences.
>
> The split of the datapath will also be helpful for future implementations
> like in-order completion.
>
> Please review.
>
> Thanks
Acked-by:...
2018 Jul 02
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...sed. I was going to dig into it but not yet.
>> Right, just to confirm this. This is expected.
>>
>> In tuntap, we do native XDP only for small and non-zerocopy packets. See
>> tun_can_build_skb(). The reason is that XDP may adjust the packet header,
>> which is not supported by zerocopy. We can only use XDP generic for
>> zerocopy in this case.
> I think I understand when driver XDP can be used. What I'm not sure about,
> and was going to narrow down, is why zerocopy is mostly not applied.
>
I see, any touch to the zerocopy packet (clone, header expansion or
segmentat...
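A toy illustration of the fallback being described here, loosely modeled on the skb_orphan_frags()/skb_copy_ubufs() path; the struct and helper names below are simplified stand-ins, not kernel API. Because a zerocopy skb still references guest pages, any operation that would share or rewrite those pages must first copy them, at which point the send is no longer zerocopy.

#include <stdbool.h>
#include <stdio.h>

struct fake_skb { bool zerocopy; };

/* Copy the user/guest pages into kernel pages and complete the
 * zerocopy request as "copied" (roughly what skb_copy_ubufs() does). */
static void copy_ubufs(struct fake_skb *skb)
{
	skb->zerocopy = false;
	printf("zerocopy send downgraded to a copy\n");
}

/* Any "touch" - clone, header expansion, segmentation - lands here. */
static void touch(struct fake_skb *skb)
{
	if (skb->zerocopy)
		copy_ubufs(skb);
	/* ... proceed with the clone/expand/segment operation ... */
}

int main(void)
{
	struct fake_skb skb = { .zerocopy = true };
	touch(&skb);
	return 0;
}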
2018 Jul 02
0
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...yet.
>>>> Right, just to confirm this. This is expected.
>>>>
>>>> In tuntap, we do native XDP only for small and non-zerocopy packets. See
>>>> tun_can_build_skb(). The reason is that XDP may adjust the packet header,
>>>> which is not supported by zerocopy. We can only use XDP generic for
>>>> zerocopy in this case.
>>> I think I understand when driver XDP can be used. What I'm not sure
>>> about, and was going to narrow down, is why zerocopy is mostly not applied.
>>>
>> I see, any touch to the zerocopy packe...
2019 Jul 17
0
[PATCH V3 00/15] Packed virtqueue support for vhost
...big patchset.
Should be done by Tuesday.
-next material anyway.
> More optimizations (e.g IN_ORDER) is on the road.
>
> Please review.
>
> [1] https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-610007
>
> This version was tested with:
> - zerocopy/datacopy
> - mergeable buffer on/off
> - TCP stream & virtio-user
>
> Changes from V2:
> - rebase on top of the vhost metadata acceleration series
> - introduce shadow used ring API
> - new SET_VRING_BASE/GET_VRING_BASE that takes care of the wrap counter
> and index for b...
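For reference, a compact sketch of the packed-ring wrap counter that last bullet refers to, following the virtio 1.1 spec linked above. The flag values are written as masks here for brevity (the kernel headers define VRING_PACKED_DESC_F_AVAIL/USED as bit numbers 7 and 15), and the helpers are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DESC_F_AVAIL (1u << 7)  /* VRING_PACKED_DESC_F_AVAIL as a mask */
#define DESC_F_USED  (1u << 15) /* VRING_PACKED_DESC_F_USED as a mask  */

/* A packed-ring descriptor is available to the device when its AVAIL
 * bit matches the wrap counter and its USED bit does not. */
static bool desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = flags & DESC_F_AVAIL;
	bool used  = flags & DESC_F_USED;

	return avail == wrap_counter && used != wrap_counter;
}

/* Advance a ring position; the counter flips on every wrap-around,
 * which is why GET/SET_VRING_BASE must save and restore it along
 * with the index. */
static void advance(uint16_t *idx, bool *wrap, uint16_t num)
{
	if (++*idx >= num) {
		*idx = 0;
		*wrap = !*wrap;
	}
}

int main(void)
{
	uint16_t idx = 0;
	bool wrap = true; /* both sides start with the counter set */

	for (int i = 0; i < 300; i++)
		advance(&idx, &wrap, 256);
	printf("idx=%d wrap=%d avail=%d\n", idx, wrap,
	       desc_is_avail(DESC_F_AVAIL, wrap));
	return 0;
}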