search for: virtqueue_get_buf_ctx

Displaying 20 results from an estimated 71 matches for "virtqueue_get_buf_ctx".

2023 Jun 22
1
[PATCH vhost v10 10/10] virtio_net: support dma premapped
...+ virtnet_generic_unmap(vq, &cursor); > + > + return buf; > +} > + > +static void *virtnet_get_buf_ctx(struct virtqueue *vq, bool premapped, u32 *len, void **ctx) > +{ > + struct virtqueue_detach_cursor cursor; > + void *buf; > + > + if (!premapped) > + return virtqueue_get_buf_ctx(vq, len, ctx); > + > + buf = virtqueue_get_buf_premapped(vq, len, ctx, &cursor); > + if (buf) > + virtnet_generic_unmap(vq, &cursor); > + > + return buf; > +} > + > +#define virtnet_rq_get_buf(rq, plen, pctx) \ > +({ \ > + typeof(rq) _rq = (rq); \ > + vi...
2017 Apr 06
2
[bug report] virtio_net: rework mergeable buffer handling
...struct virtnet_stats *stats = this_cpu_ptr(vi->stats); 1036 1037 if (vi->mergeable_rx_bufs) { 1038 void *ctx; ^^^ 1039 1040 while (received < budget && 1041 (buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx))) { ^^^^ 1042 bytes += receive_buf(vi, rq, buf, len, ctx); ^^^ It's possible that this cod...
2017 Mar 29
2
[PATCH 3/6] virtio: allow extra context per descriptor
...[head].indir_desc; } } @@ -660,7 +697,8 @@ static inline bool more_used(const struct vring_virtqueue *vq) * Returns NULL if there are no used buffers, or the "data" token * handed to virtqueue_add_*(). */ -void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) +void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len, + void **ctx) { struct vring_virtqueue *vq = to_vvq(_vq); void *ret; @@ -698,7 +736,7 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len) /* detach_buf clears data, so grab it now. */ ret = vq->desc_state[i].data; - deta...
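For reference, the compatibility shim this change implies ended up as a trivial wrapper in mainline: virtqueue_get_buf() simply forwards to virtqueue_get_buf_ctx() with a NULL ctx, roughly:

void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
{
	return virtqueue_get_buf_ctx(_vq, len, NULL);
}
EXPORT_SYMBOL_GPL(virtqueue_get_buf);

so existing callers that do not care about a per-descriptor context are unaffected.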
2023 Jun 22
1
[PATCH vhost v10 05/10] virtio_ring: split-detach: support return dma info to driver
...; kfree(indir_desc); > vq->split.desc_state[head].indir_desc = NULL; > - } else if (ctx) { > - *ctx = vq->split.desc_state[head].indir_desc; > } > } > > @@ -812,7 +897,8 @@ static bool more_used_split(const struct vring_virtqueue *vq) > > static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq, > unsigned int *len, > - void **ctx) > + void **ctx, > + struct virtqueue_detach_cursor *cursor) > { > struct vring_virtqueue *vq = to_vvq(_vq); > void *ret; > @@ -852,7 +938,15 @@ static void *virtqueue_get_buf_ctx_spl...
2018 Feb 23
0
[PATCH RFC 2/2] virtio_ring: support packed ring
...no data leakage in the case of short - * writes. - * - * Caller must ensure we don't call this with other virtqueue - * operations at the same time (except where noted). - * - * Returns NULL if there are no used buffers, or the "data" token - * handed to virtqueue_add_*(). - */ -void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len, - void **ctx) +static inline bool more_used_packed(const struct vring_virtqueue *vq) +{ + u16 last_used, flags; + bool avail, used; + + if (vq->vq.num_free == vq->vring.num) + return false; + + last_used = vq->last_used_idx; + flags = virtio...
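The more_used_packed() helper being added here decides whether the descriptor at last_used_idx has been completed by the device. In the packed ring layout a descriptor counts as used when its AVAIL and USED flag bits equal each other and the driver's used wrap counter; a minimal sketch of that check, modeled on the later mainline helper rather than on this RFC (field names may differ):

static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
				       u16 idx, bool used_wrap_counter)
{
	bool avail, used;
	u16 flags;

	/* Flags of the descriptor the driver expects to complete next. */
	flags = le16_to_cpu(vq->packed.vring.desc[idx].flags);
	avail = !!(flags & (1 << VRING_PACKED_DESC_F_AVAIL));
	used = !!(flags & (1 << VRING_PACKED_DESC_F_USED));

	/* Used once both bits match the driver's used wrap counter. */
	return avail == used && used == used_wrap_counter;
}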
2023 Jun 02
12
[PATCH vhost v10 00/10] virtio core prepares for AF_XDP
## About DMA APIs Now, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM. 1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices. 2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2018 Feb 23
5
[PATCH RFC 0/2] Packed ring for virtio
Hello everyone, This RFC implements a subset of packed ring which is described at https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd08.pdf The code was tested with DPDK vhost (testpmd/vhost-PMD) implemented by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html Minor changes are needed for the vhost code, e.g. to kick the guest. It's not a complete
2018 Mar 16
0
[PATCH RFC 2/2] virtio_ring: support packed ring
...ler must ensure we don't call this with other virtqueue > > - * operations at the same time (except where noted). > > - * > > - * Returns NULL if there are no used buffers, or the "data" token > > - * handed to virtqueue_add_*(). > > - */ > > -void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len, > > - void **ctx) > > +static inline bool more_used_packed(const struct vring_virtqueue *vq) > > +{ > > + u16 last_used, flags; > > + bool avail, used; > > + > > + if (vq->vq.num_free == vq->vring.num) &...
2018 Mar 16
2
[PATCH RFC 2/2] virtio_ring: support packed ring
...writes. > - * > - * Caller must ensure we don't call this with other virtqueue > - * operations at the same time (except where noted). > - * > - * Returns NULL if there are no used buffers, or the "data" token > - * handed to virtqueue_add_*(). > - */ > -void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len, > - void **ctx) > +static inline bool more_used_packed(const struct vring_virtqueue *vq) > +{ > + u16 last_used, flags; > + bool avail, used; > + > + if (vq->vq.num_free == vq->vring.num) > + return false; > + > +...
2017 Mar 29
0
[PATCH 5/6] virtio_net: rework mergeable buffer handling
...ng)ctx; head_skb = page_to_skb(vi, rq, page, offset, len, truesize); curr_skb = head_skb; @@ -648,7 +634,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, while (--num_buf) { int num_skb_frags; - ctx = (unsigned long)virtqueue_get_buf(rq->vq, &len); + buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx); if (unlikely(!ctx)) { pr_debug("%s: rx error: %d buffers out of %d missing\n", dev->name, num_buf, @@ -658,8 +644,14 @@ static struct sk_buff *receive_mergeable(struct net_device *dev, goto err_buf; } - buf = mergeable_ctx_to_buf_a...
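The ctx retrieved here is simply whatever opaque pointer the driver handed to the core when it queued the buffer, so the mergeable path no longer has to encode metadata into the buffer address itself. The pairing looks roughly like this (simplified sketch; error handling and the real virtio_net context encoding omitted):

/* When posting a receive buffer, stash per-buffer metadata as ctx ... */
err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, GFP_ATOMIC);

/* ... and the same ctx comes back once the device has used the buffer. */
buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx);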
2023 Mar 28
2
9p regression (Was: [PATCH v2] virtio_ring: don't update event idx on get_buf)
...> buffer is relatively small. > > Because event_triggered is true. Therefore, VRING_AVAIL_F_NO_INTERRUPT or > VRING_PACKED_EVENT_FLAG_DISABLE will not be set. So we update > vring_used_event(&vq->split.vring) or vq->packed.vring.driver->off_wrap > every time we call virtqueue_get_buf_ctx. This will bring more interruptions. > > To summarize: > 1) event_triggered was set to true in vring_interrupt() > 2) after this nothing will happen for virtqueue_disable_cb() so > VRING_AVAIL_F_NO_INTERRUPT is not set in avail_flags_shadow > 3) virtqueue_get_buf_ctx_split() w...
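The update being discussed sits at the tail of virtqueue_get_buf_ctx_split(): unless the driver managed to set VRING_AVAIL_F_NO_INTERRUPT, the used event index is rewritten for every buffer reclaimed, roughly:

	/* If we expect an interrupt for the next entry, tell host
	 * by writing event index and flush out the write before
	 * the read in the next get_buf call. */
	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
		virtio_store_mb(vq->weak_barriers,
				&vring_used_event(&vq->split.vring),
				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));

and, as described above, the event_triggered short-cut in virtqueue_disable_cb() is what keeps that flag from being set in the first place.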
2023 Mar 24
1
[External] Re: [PATCH] virtio_ring: Suppress tx interrupt when napi_tx disable
...} > > > Because event_triggered is true. Therefore, VRING_AVAIL_F_NO_INTERRUPT or > > > VRING_PACKED_EVENT_FLAG_DISABLE will not be set. So we update > > > vring_used_event(&vq->split.vring) or vq->packed.vring.driver->off_wrap > > > every time we call virtqueue_get_buf_ctx. This will bring more interruptions. > > > > Can you please post how to test with the performance numbers? > > > > iperf3 tcp stream: > vm1 -----------------> vm2 > vm2 just receive tcp data stream from vm1, and send the ack to vm1, > there are so > many tx int...
2020 Aug 02
0
[PATCH -next v2] virtio_net: Avoid loop in virtnet_poll
On Sun, Aug 02, 2020 at 01:56:33PM +0800, Mao Wenan wrote: > The loop may exist if vq->broken is true, > virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split > will return NULL, so virtnet_poll will reschedule napi to > receive packet, it will lead cpu usage(si) to 100%. > > call trace as below: > virtnet_poll > virtnet_receive > virtqueue_get_buf_ctx > virtqueue_get_buf_ctx_packed >...
2020 Aug 02
0
[PATCH -next v2] virtio_net: Avoid loop in virtnet_poll
Just noticed the subject is wrong: this is no longer a virtio_net patch. On Sun, Aug 02, 2020 at 01:56:33PM +0800, Mao Wenan wrote: > The loop may exist if vq->broken is true, > virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split > will return NULL, so virtnet_poll will reschedule napi to > receive packet, it will lead cpu usage(si) to 100%. > > call trace as below: > virtnet_poll > virtnet_receive > virtqueue_get_buf_ctx > virtqueue_get_buf_ctx_packed >...
2020 Aug 04
0
[PATCH -next v3] virtio_ring: Avoid loop when vq is broken in virtqueue_poll
On 2020/8/2 3:44, Mao Wenan wrote: > The loop may exist if vq->broken is true, > virtqueue_get_buf_ctx_packed or virtqueue_get_buf_ctx_split > will return NULL, so virtnet_poll will reschedule napi to > receive packet, it will lead cpu usage(si) to 100%. > > call trace as below: > virtnet_poll > virtnet_receive > virtqueue_get_buf_ctx > virtqueue_get_buf_ctx_packed >...
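The approach this v3 takes is to put the broken check into virtqueue_poll() itself, so a broken ring reports "nothing pending" instead of making virtnet_poll re-arm NAPI forever; approximately:

bool virtqueue_poll(struct virtqueue *_vq, unsigned int last_used_idx)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	/* A broken ring will never complete a buffer; report no work so
	 * callers such as virtnet_poll stop rescheduling themselves. */
	if (unlikely(vq->broken))
		return false;

	virtio_mb(vq->weak_barriers);
	return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) :
				 virtqueue_poll_split(_vq, last_used_idx);
}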