Displaying 20 results from an estimated 80 matches for "virtqueue_get_buf_ctx_packed".
2020 Aug 02
0
[PATCH -next v2] virtio_net: Avoid loop in virtnet_poll
On Sun, Aug 02, 2020 at 01:56:33PM +0800, Mao Wenan wrote:
> A loop can occur when vq->broken is true:
> virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
> will return NULL, so virtnet_poll keeps rescheduling napi to
> receive packets, driving softirq (si) CPU usage to 100%.
>
> The call trace is as follows:
> virtnet_poll
> virtnet_receive
> virtqueue_get_buf_ctx
> virtqueue_get_buf_ctx_packed
> vi...
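The fix this thread converged on (upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e, backported by the AUTOSEL entries below) is to make virtqueue_poll fail fast on a broken vq so NAPI stops being re-armed. A minimal sketch of that guard, assuming the helper names in drivers/virtio/virtio_ring.c of that era:

bool virtqueue_poll(struct virtqueue *_vq, unsigned int last_used_idx)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	/* A broken vq will never return a buffer; reporting "no work
	 * pending" here keeps virtnet_poll from rescheduling napi forever. */
	if (unlikely(vq->broken))
		return false;

	virtio_mb(vq->weak_barriers);
	return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) :
				 virtqueue_poll_split(_vq, last_used_idx);
}

With the early return in place, virtqueue_napi_complete sees no pending work on a broken vq and napi stays completed instead of looping.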
2020 Aug 02
0
[PATCH -next v2] virtio_net: Avoid loop in virtnet_poll
Just noticed the subject is wrong: this is no longer
a virtio_net patch.
On Sun, Aug 02, 2020 at 01:56:33PM +0800, Mao Wenan wrote:
> A loop can occur when vq->broken is true:
> virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
> will return NULL, so virtnet_poll keeps rescheduling napi to
> receive packets, driving softirq (si) CPU usage to 100%.
>
> The call trace is as follows:
> virtnet_poll
> virtnet_receive
> virtqueue_get_buf_ctx
> virtqueue_get_buf_ctx_packed
> vi...
2020 Aug 04
0
[PATCH -next v3] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
On 2020/8/2 at 3:44 PM, Mao Wenan wrote:
> A loop can occur when vq->broken is true:
> virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
> will return NULL, so virtnet_poll keeps rescheduling napi to
> receive packets, driving softirq (si) CPU usage to 100%.
>
> The call trace is as follows:
> virtnet_poll
> virtnet_receive
> virtqueue_get_buf_ctx
> virtqueue_get_buf_ctx_packed
> vir...
2020 Aug 20
0
[PATCH AUTOSEL 5.8 21/27] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2020 Aug 20
0
[PATCH AUTOSEL 5.7 19/24] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2020 Aug 20
0
[PATCH AUTOSEL 5.4 17/22] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2020 Aug 20
0
[PATCH AUTOSEL 4.19 14/18] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2020 Aug 20
0
[PATCH AUTOSEL 4.14 11/13] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2020 Aug 20
0
[PATCH AUTOSEL 4.9 09/11] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2020 Aug 20
0
[PATCH AUTOSEL 4.4 08/10] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
From: Mao Wenan <wenan.mao at linux.alibaba.com>
[ Upstream commit 481a0d7422db26fb63e2d64f0652667a5c6d0f3e ]
A loop can occur when vq->broken is true:
virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split
will return NULL, so virtnet_poll keeps rescheduling napi to
receive packets, driving softirq (si) CPU usage to 100%.
The call trace is as follows:
virtnet_poll
virtnet_receive
virtqueue_get_buf_ctx
virtqueue_get_buf_ctx_packed
virtqueue_get_buf_ctx_split
virtqueue_napi_com...
2018 May 16
0
[RFC v4 4/5] virtio_ring: add event idx support in packed ring
...time_valid = false;
#endif
- needs_kick = (flags != VRING_EVENT_F_DISABLE);
+ if (flags == VRING_EVENT_F_DESC)
+ needs_kick = vring_need_event(event_idx, new, old);
+ else
+ needs_kick = (flags != VRING_EVENT_F_DISABLE);
END_USE(vq);
return needs_kick;
}
@@ -1098,7 +1111,7 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
void **ctx)
{
struct vring_virtqueue *vq = to_vvq(_vq);
- u16 last_used, id;
+ u16 wrap_counter, last_used, id;
void *ret;
START_USE(vq);
@@ -1138,6 +1151,19 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
ret = vq->desc_state[id].dat...
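The wrap_counter introduced above is what tells the driver whether a descriptor has really been used: in a packed ring, a descriptor is used when its AVAIL and USED flag bits equal each other and equal the driver's used wrap counter. A sketch of that test, using the later mainline field names (vq->packed.vring) rather than this RFC's vq->vring_packed:

static bool is_used_desc_packed(const struct vring_virtqueue *vq,
				u16 idx, bool used_wrap_counter)
{
	bool avail, used;
	u16 flags = le16_to_cpu(vq->packed.vring.desc[idx].flags);

	avail = !!(flags & (1 << VRING_PACKED_DESC_F_AVAIL));
	used = !!(flags & (1 << VRING_PACKED_DESC_F_USED));

	/* The device toggles both bits as it consumes descriptors; a match
	 * against the driver's wrap counter means this entry is used. */
	return avail == used && used == used_wrap_counter;
}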
2018 May 02
2
[RFC v3 4/5] virtio_ring: add event idx support in packed ring
...I wonder whether the math is correct. Both new and event are in
units of the descriptor ring size, but old does not appear to be.
Thanks
> + else
> + needs_kick = (flags != VRING_EVENT_F_DISABLE);
> END_USE(vq);
> return needs_kick;
> }
> @@ -1116,6 +1124,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> if (vq->last_used_idx >= vq->vring_packed.num)
> vq->last_used_idx -= vq->vring_packed.num;
>
> + /* If we expect an interrupt for the next entry, tell host
> + * by writing event index and flush out the write before
> + * the re...
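For reference, the test being questioned here is the standard event-index check from include/uapi/linux/virtio_ring.h; its free-running u16 arithmetic is only sound if event_idx, new_idx and old are all counted in the same units:

static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
{
	/* Kick iff event_idx falls within the window of indices
	 * (old, new_idx] published since the last kick. */
	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
}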
2018 May 16
8
[RFC v4 0/5] virtio: support packed ring
Hello everyone,
This RFC implements packed ring support in virtio driver.
Some simple functional tests have been done with Jason's
packed ring implementation in vhost:
https://lkml.org/lkml/2018/4/23/12
Both ping and netperf worked as expected (with EVENT_IDX
disabled).
TODO:
- Refinements (for code and commit log);
- More tests;
- Bug fixes;
RFC v3 -> RFC v4:
- Make ID allocation
2018 May 02
2
[RFC v3 4/5] virtio_ring: add event idx support in packed ring
...regards,
> Tiwei Bie
>
>
> >
> > Thanks
> >
> > > + else
> > > + needs_kick = (flags != VRING_EVENT_F_DISABLE);
> > > END_USE(vq);
> > > return needs_kick;
> > > }
> > > @@ -1116,6 +1124,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> > > if (vq->last_used_idx >= vq->vring_packed.num)
> > > vq->last_used_idx -= vq->vring_packed.num;
> > > + /* If we expect an interrupt for the next entry, tell host
> > > + * by writing event index and flush out t...
2023 Mar 28
2
9p regression (Was: [PATCH v2] virtio_ring: don't update event idx on get_buf)
..._shadow & VRING_AVAIL_F_NO_INTERRUPT) &&
> + !vq->event_triggered))
> virtio_store_mb(vq->weak_barriers,
> &vring_used_event(&vq->split.vring),
> cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
> @@ -1744,7 +1745,8 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> * by writing event index and flush out the write before
> * the read in the next get_buf call.
> */
> - if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC)
> + if (unlikely(vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG...
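The hunks above gate the event-index write in get_buf on the interrupt not having already fired. A sketch of how the packed-ring branch reads after that change, assuming the mainline field names:

	/* If we expect an interrupt for the next entry, tell host
	 * by writing event index and flush out the write before
	 * the read in the next get_buf call. */
	if (unlikely(vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC &&
		     !vq->event_triggered))
		virtio_store_mb(vq->weak_barriers,
				&vq->packed.vring.driver->off_wrap,
				cpu_to_le16(vq->last_used_idx));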
2018 May 16
2
[RFC v4 4/5] virtio_ring: add event idx support in packed ring
...= (flags != VRING_EVENT_F_DISABLE);
> + if (flags == VRING_EVENT_F_DESC)
> + needs_kick = vring_need_event(event_idx, new, old);
> + else
> + needs_kick = (flags != VRING_EVENT_F_DISABLE);
> END_USE(vq);
> return needs_kick;
> }
> @@ -1098,7 +1111,7 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> void **ctx)
> {
> struct vring_virtqueue *vq = to_vvq(_vq);
> - u16 last_used, id;
> + u16 wrap_counter, last_used, id;
> void *ret;
>
> START_USE(vq);
> @@ -1138,6 +1151,19 @@ static void *virtqueue_get_buf_ctx_packed(struc...
2018 Sep 07
1
[PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring
...k = (flags != VRING_EVENT_F_DISABLE);
> + if (flags == VRING_EVENT_F_DESC)
> + needs_kick = vring_need_event(event_idx, new, old);
> + else
> + needs_kick = (flags != VRING_EVENT_F_DISABLE);
> END_USE(vq);
> return needs_kick;
> }
> @@ -1185,6 +1198,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> ret = vq->desc_state_packed[id].data;
> detach_buf_packed(vq, id, ctx);
>
> + /* If we expect an interrupt for the next entry, tell host
> + * by writing event index and flush out the write before
> + * the read in the next get_buf call. */
>...