Displaying 20 results from an estimated 3000 matches similar to: "[PATCH] vhost-net: initialize zcopy packet counters"
2012 Dec 27
3
[PATCH 1/2] vhost_net: correct error handling in vhost_net_set_backend()
Fix the leak of oldubufs and the fd refcount when initialization of the used ring fails.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/net.c | 14 +++++++++++---
1 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index ebd08b2..629d6b5 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -834,8 +834,10 @@ static
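The leak being fixed is a partial-initialization unwind problem: once the new ubufs are set up and the backend fd is referenced, a failure while initializing the used ring must release both. Below is a minimal userspace C sketch of the goto-unwind shape such a fix takes; it models the general pattern rather than the exact vhost objects, and ubufs_alloc, ubufs_put, fd_put and init_used_ring are hypothetical stand-ins, not the actual vhost symbols.

#include <stdio.h>
#include <stdlib.h>

struct ubufs { int refs; };

/* Hypothetical stand-ins for the helpers involved in the fix. */
static struct ubufs *ubufs_alloc(void) { return calloc(1, sizeof(struct ubufs)); }
static void ubufs_put(struct ubufs *u) { free(u); }
static void fd_put(int fd) { (void)fd; /* drop the backend fd reference */ }
static int init_used_ring(void) { return -1; /* simulate the failure path */ }

static int set_backend(int fd, struct ubufs **oldubufs)
{
    struct ubufs *ubufs = ubufs_alloc();
    int err;

    if (!ubufs) {
        fd_put(fd);
        return -1;
    }

    err = init_used_ring();
    if (err)
        goto err_used;      /* the unwind the patch adds */

    *oldubufs = ubufs;      /* success: caller disposes of the old state */
    return 0;

err_used:
    ubufs_put(ubufs);       /* without these two calls, the ubufs and the */
    fd_put(fd);             /* fd refcount would leak on failure */
    return err;
}

int main(void)
{
    struct ubufs *old = NULL;
    printf("set_backend returned %d\n", set_backend(3, &old));
    return 0;
}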
2013 Apr 27
0
[PATCH] vhost: Move vhost-net zerocopy support fields to net.c
On top of 'vhost: Allow device specific fields per vq', we can move the
device-specific fields from the vhost virtqueue into the device's own virtqueue.
Signed-off-by: Asias He <asias at redhat.com>
---
drivers/vhost/net.c | 164 +++++++++++++++++++++++++++++++++++++++++++-------
drivers/vhost/vhost.c | 57 +-----------------
drivers/vhost/vhost.h | 22 -------
3 files changed, 142
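The mechanical pattern behind such a move is the usual one: embed the generic vhost_virtqueue inside a net-specific wrapper and recover the wrapper with container_of, so zerocopy bookkeeping like upend_idx/done_idx can live in net.c. A simplified userspace sketch of that layout, with struct definitions abbreviated rather than copied from the kernel:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Generic virtqueue state shared by all vhost devices (abbreviated). */
struct vhost_virtqueue {
    int num;
};

/* Net-specific wrapper: zerocopy fields live here, not in vhost.h. */
struct vhost_net_virtqueue {
    struct vhost_virtqueue vq;
    int upend_idx;   /* first in-flight zerocopy descriptor */
    int done_idx;    /* first descriptor with a completed zerocopy send */
};

static void handle_tx(struct vhost_virtqueue *vq)
{
    struct vhost_net_virtqueue *nvq =
        container_of(vq, struct vhost_net_virtqueue, vq);
    printf("zerocopy window: %d..%d\n", nvq->done_idx, nvq->upend_idx);
}

int main(void)
{
    struct vhost_net_virtqueue nvq = { .vq = { .num = 256 },
                                       .upend_idx = 5, .done_idx = 2 };
    handle_tx(&nvq.vq);   /* generic code only ever sees &nvq.vq */
    return 0;
}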
2017 Jan 26
0
[BUG/RFC] vhost: net: big endian vring access despite virtio 1
On Thu, Jan 26, 2017 at 06:39:14PM +0100, Halil Pasic wrote:
>
> Hi!
>
> Recently I have been investigating some strange migration problems on
> s390x.
>
> It turned out that, under certain circumstances, vhost_net corrupts avail.idx by
> using the wrong endianness.
>
> I managed to track the problem down (I'm pretty sure). It boils down to
> the following.
>
>
2018 Jul 02
1
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
On 2018/07/02 16:52, Jason Wang wrote:
> On 2018年07月02日 15:11, Toshiaki Makita wrote:
>> On 2018/07/02 15:17, Jason Wang wrote:
>>> On 2018年07月02日 12:37, Toshiaki Makita wrote:
>>>> On 2018/07/02 11:54, Jason Wang wrote:
>>>>> On 2018年07月02日 10:45, Toshiaki Makita wrote:
>>>>>> Hi Jason,
>>>>>>
2013 Jan 06
2
[PATCH V3 0/2] handle polling errors
This is an updated version of the previous series, fixing the handling of polling errors
in vhost/vhost_net.
Currently, vhost and vhost_net ignore polling errors, which can crash the kernel
when it tries to remove itself from the waitqueue after a polling
failure. Fix this by checking poll->wqh before the removal and by reporting an
error when a polling error occurs.
Changes from v2:
- check poll->wqh
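The crash mode described is an unconditional detach from a waitqueue head that was never recorded because poll setup failed. Here is a userspace C sketch of the guard the series describes; the struct and helpers mirror the vhost_poll idea but are simplified stand-ins, not the kernel API:

#include <stdio.h>

struct wait_queue_head { int dummy; };

struct poll_ctx {
    struct wait_queue_head *wqh;  /* NULL until polling is actually set up */
};

static int poll_start(struct poll_ctx *p, int fail)
{
    static struct wait_queue_head head;
    if (fail) {
        p->wqh = NULL;    /* record that we never got onto a waitqueue */
        return -1;        /* report the polling error to the caller */
    }
    p->wqh = &head;
    return 0;
}

static void poll_stop(struct poll_ctx *p)
{
    /* The fix: only detach if we are actually attached. */
    if (p->wqh) {
        /* remove_wait_queue(p->wqh, ...) would go here in the kernel */
        p->wqh = NULL;
    }
}

int main(void)
{
    struct poll_ctx p;
    if (poll_start(&p, 1))
        fprintf(stderr, "poll failed, error reported\n");
    poll_stop(&p);        /* safe even after the failure */
    return 0;
}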
2017 Jan 26
2
[BUG/RFC] vhost: net: big endian vring access despite virtio 1
Hi!
Recently I have been investigating some strange migration problems on
s390x.
It turned out that, under certain circumstances, vhost_net corrupts avail.idx by
using the wrong endianness.
I managed to track the problem down (I'm pretty sure). It boils down to
the following.
When stopping vhost userspace (QEMU) calls vhost_net_set_backend with
the fd argument set to -1, this leads to is_le being
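What goes wrong hinges on the is_le flag: for a virtio 1 device the ring is always little-endian, so every index access must be converted, and if the flag is cleared while the ring is still live, a big-endian host such as s390x writes avail.idx in native byte order. A small userspace sketch of the conversion helpers, loosely modeled on vhost's vhost16_to_cpu/cpu_to_vhost16 and using Linux <endian.h>:

#include <endian.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Loosely modeled on vhost's helpers: when the ring is little-endian
 * (virtio 1, or a legacy LE guest), convert; otherwise pass through. */
static uint16_t vhost16_to_cpu(bool is_le, uint16_t v)
{
    return is_le ? le16toh(v) : v;
}

static uint16_t cpu_to_vhost16(bool is_le, uint16_t v)
{
    return is_le ? htole16(v) : v;
}

int main(void)
{
    uint16_t idx = 0x0102;
    /* On a little-endian host both results match; on a big-endian host
     * (s390x) they differ by a byte swap -- the avail.idx corruption
     * described above when is_le is wrongly cleared. */
    printf("is_le=1: %#06x  is_le=0: %#06x\n",
           (unsigned)cpu_to_vhost16(true, idx),
           (unsigned)cpu_to_vhost16(false, idx));
    printf("round trip: %#06x\n",
           (unsigned)vhost16_to_cpu(true, cpu_to_vhost16(true, idx)));
    return 0;
}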
2017 Sep 26
2
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
On Fri, Sep 22, 2017 at 04:02:35PM +0800, Jason Wang wrote:
> This patch implements basic batched processing of the tx virtqueue by
> prefetching desc indices and updating the used ring in a batch. For the
> non-zerocopy case, vq->heads is used to store the prefetched
> indices and update the used ring. It is also a requirement for doing
> more batching on top. For the zerocopy case and for
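Batched processing here means accumulating the consumed descriptor heads (the role vq->heads plays in the patch) and publishing them with a single used-index update instead of one per packet. A reduced userspace sketch of that idea; a real vring additionally needs memory barriers and the guest-visible layout, both omitted:

#include <stdint.h>
#include <stdio.h>

#define VQ_NUM 8

struct used_elem { uint32_t id; uint32_t len; };

struct vring_used {
    uint16_t idx;                     /* published to the guest */
    struct used_elem ring[VQ_NUM];
};

/* Collect one completed descriptor head into the pending batch. */
static int batch_add(struct used_elem *heads, int n, uint32_t id, uint32_t len)
{
    heads[n].id = id;
    heads[n].len = len;
    return n + 1;
}

/* Flush the batch: copy all heads, then bump the index exactly once.
 * In the kernel, a write barrier would precede the idx store. */
static void batch_flush(struct vring_used *used,
                        const struct used_elem *heads, int n)
{
    for (int i = 0; i < n; i++)
        used->ring[(used->idx + i) % VQ_NUM] = heads[i];
    used->idx += n;
}

int main(void)
{
    struct vring_used used = { 0 };
    struct used_elem heads[VQ_NUM];
    int n = 0;

    n = batch_add(heads, n, 3, 1500);
    n = batch_add(heads, n, 5, 60);
    batch_flush(&used, heads, n);  /* one visible index update, two packets */
    printf("used.idx = %u\n", (unsigned)used.idx);
    return 0;
}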
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batched updating of the used ring for TX. This helps to
reduce cache contention on the used ring. The idea is to first split the
datacopy path from zerocopy, and to batch only the datacopy path. This
is because zerocopy already supports its own batching.
TX PPS increased by 25.8%, and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for
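The enabling refactor is the datapath split itself: one decision up front selects a copy-only TX handler, where batching stays simple, or the zerocopy handler, which keeps its existing batching. A toy sketch of that dispatch; the handler names are illustrative, not the series' exact function names:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the two TX paths the series separates. */
static void handle_tx_copy(void)     { puts("copy path: batch used-ring updates"); }
static void handle_tx_zerocopy(void) { puts("zerocopy path: keeps its own batching"); }

static void handle_tx(bool zerocopy_enabled)
{
    /* One decision at the top, instead of per-packet branches
     * inside a single mixed handler. */
    if (zerocopy_enabled)
        handle_tx_zerocopy();
    else
        handle_tx_copy();
}

int main(void)
{
    handle_tx(false);
    handle_tx(true);
    return 0;
}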
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
On 2018/07/02 15:17, Jason Wang wrote:
> On 2018年07月02日 12:37, Toshiaki Makita wrote:
>> On 2018/07/02 11:54, Jason Wang wrote:
>>> On 2018年07月02日 10:45, Toshiaki Makita wrote:
>>>> Hi Jason,
>>>>
>>>> On 2018/06/29 18:30, Jason Wang wrote:
>>>>> On 2018年06月29日 16:09, Toshiaki Makita wrote:
>>>> ...
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
>>> This is not a 50/50 split, which implies that some packets from the
>>> large
>>> packet flow are still converted to copying. Without the change the rate
>>> without queue was 80k zerocopy vs 80k copy, so this choice of
>>> (vq->num >> 2) appears too conservative.
>>>
>>> However, testing with (vq->num >> 1) was
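The (vq->num >> 2) versus (vq->num >> 1) debate is about where to cap zerocopy: attempt it only while in-flight zerocopy buffers stay below a fraction of the ring size, falling back to copying above that. A hedged sketch of such a headroom check; the real vhost condition is more involved, the shift is merely the knob under discussion:

#include <stdbool.h>
#include <stdio.h>

/* Use zerocopy only while in-flight buffers stay under vq_num >> shift. */
static bool select_zerocopy(unsigned in_flight, unsigned vq_num, unsigned shift)
{
    return in_flight < (vq_num >> shift);
}

int main(void)
{
    unsigned num = 256;
    /* Compare how aggressively the two proposed thresholds fall back. */
    for (unsigned in_flight = 0; in_flight <= num; in_flight += 64)
        printf("in_flight=%3u  >>2: %s  >>1: %s\n", in_flight,
               select_zerocopy(in_flight, num, 2) ? "zerocopy" : "copy",
               select_zerocopy(in_flight, num, 1) ? "zerocopy" : "copy");
    return 0;
}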
2017 Sep 30
2
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Fri, Sep 29, 2017 at 3:38 PM, Michael S. Tsirkin <mst at redhat.com> wrote:
> On Wed, Sep 27, 2017 at 08:25:56PM -0400, Willem de Bruijn wrote:
>> From: Willem de Bruijn <willemb at google.com>
>>
>> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
>> When reached, transmission stalls. Stalls cause latency, as well as
>>
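The behavioral change proposed is straightforward: when the in-flight zerocopy limit is hit, degrade the packet to a data copy instead of stalling transmission until a completion frees a slot. A minimal sketch contrasting the two behaviors; ZC_LIMIT and the helper names are illustrative, not vhost's:

#include <stdbool.h>
#include <stdio.h>

#define ZC_LIMIT 4   /* illustrative hard limit on in-flight zerocopy skbs */

static unsigned in_flight;

/* Old behavior: depletion stalls the queue until a completion arrives. */
static bool tx_stall(void)
{
    if (in_flight >= ZC_LIMIT)
        return false;          /* nothing sent; TX stalls, latency grows */
    in_flight++;
    return true;
}

/* Fixed behavior: depletion degrades to a data copy, TX keeps moving. */
static bool tx_fallback(void)
{
    if (in_flight >= ZC_LIMIT) {
        puts("limit reached: sending by copy instead of stalling");
        return true;           /* sent, just not zerocopy */
    }
    in_flight++;
    return true;
}

int main(void)
{
    for (int i = 0; i < 6; i++)
        if (!tx_stall())
            puts("stalled");
    in_flight = 0;
    for (int i = 0; i < 6; i++)
        tx_fallback();
    return 0;
}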