search for: avail_used_flags

Displaying 20 results from an estimated 27 matches for "avail_used_flags".

2023 Mar 22
1
[PATCH vhost v4 02/11] virtio_ring: packed: separate dma codes
...tic inline int virtqueue_add_packed(struct virtqueue *_vq, struct vring_virtqueue *vq = to_vvq(_vq); struct vring_packed_desc *desc; struct scatterlist *sg; - unsigned int i, n, c, descs_used, err_idx; + unsigned int i, n, c, descs_used; __le16 head_flags, flags; - u16 head, id, prev, curr, avail_used_flags; + u16 head, id, prev, curr; int err; START_USE(vq); @@ -1461,7 +1461,6 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq, } head = vq->packed.next_avail_idx; - avail_used_flags = vq->packed.avail_used_flags; WARN_ON_ONCE(total_sg > vq->packed.vring.num &...
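The hunk above drops the avail_used_flags snapshot because, once the DMA-mapping code is split into its own pass, a mapping failure can no longer occur after ring state has been written. A minimal sketch of that two-pass shape (hypothetical names, not the actual kernel code):

#include <errno.h>

struct add_ctx {
	unsigned int total_sg;
	unsigned long long dma_addr[16];	/* filled by the mapping pass */
};

/* Pass 1: DMA mapping only; nothing in the ring is touched, so an error
 * here needs no rollback of avail_used_flags. */
static int map_all_sgs(struct add_ctx *ctx)
{
	return ctx->total_sg <= 16 ? 0 : -EINVAL;
}

/* Pass 2: write descriptors and the avail/used flag bits; it cannot fail
 * because every address was already mapped in pass 1. */
static void fill_descriptors(struct add_ctx *ctx)
{
	/* ... write ctx->dma_addr[] into descriptors ... */
}

static int add_packed_sketch(struct add_ctx *ctx)
{
	int err = map_all_sgs(ctx);

	if (err)
		return err;		/* early exit, no ring state to unwind */
	fill_descriptors(ctx);
	return 0;
}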
2023 Mar 02
1
[PATCH vhost v1 06/12] virtio_ring: packed: separate DMA codes
...q, @@ -1498,15 +1483,14 @@ static inline int virtqueue_add_vring_packed(struct vring_virtqueue *vq, { struct vring_packed_desc *desc; struct scatterlist *sg; - unsigned int i, n, c, descs_used, err_idx; + unsigned int i, n, c, descs_used; __le16 head_flags, flags; - u16 head, id, prev, curr, avail_used_flags; + u16 head, id, prev, curr; desc = vq->packed.vring.desc; head = vq->packed.next_avail_idx; i = head; descs_used = total_sg; - avail_used_flags = vq->packed.avail_used_flags; id = vq->free_head; BUG_ON(id == vq->packed.vring.num); @@ -1515,11 +1499,6 @@ static inline...
2018 Dec 07
0
[RFC 3/3] virtio_ring: use new vring flags
...+1038,9 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq, vq->packed.desc_extra[id].addr = addr; vq->packed.desc_extra[id].len = total_sg * sizeof(struct vring_packed_desc); - vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT | - vq->packed.avail_used_flags; + vq->packed.desc_extra[id].flags = + BIT(VRING_PACKED_DESC_F_INDIRECT) | + vq->packed.avail_used_flags; } /* @@ -1035,8 +1049,9 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq, * the list are made available. */ virtio_wmb(vq->weak_barriers); -...
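The switch to BIT(VRING_PACKED_DESC_F_INDIRECT) only makes sense if the flag macros are defined as bit positions rather than pre-shifted masks, which is what this RFC proposes. A small illustration of that convention (the AVAIL/USED shifts match the virtio spec; the INDIRECT shift and helper name here are assumptions, not the kernel headers):

#include <stdint.h>

#define BIT(n)				(1u << (n))
#define VRING_PACKED_DESC_F_AVAIL	7	/* shift, per the spec */
#define VRING_PACKED_DESC_F_USED	15	/* shift, per the spec */
#define VRING_PACKED_DESC_F_INDIRECT	2	/* assumed shift, illustrative */

/* Build the flag word for an indirect packed descriptor by OR-ing the
 * INDIRECT bit into the ring's current avail/used polarity, mirroring the
 * "BIT(VRING_PACKED_DESC_F_INDIRECT) | vq->packed.avail_used_flags" hunk. */
static inline uint16_t indirect_flags(uint16_t avail_used_flags)
{
	return (uint16_t)(BIT(VRING_PACKED_DESC_F_INDIRECT) | avail_used_flags);
}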
2018 Dec 07
7
[RFC 0/3] virtio_ring: define flags as shifts consistently
This is a follow-up to the discussion in this thread: https://patchwork.ozlabs.org/patch/1001015/#2042353 Tiwei Bie (3): virtio_ring: define flags as shifts consistently virtio_ring: add VIRTIO_RING_NO_LEGACY virtio_ring: use new vring flags drivers/virtio/virtio_ring.c | 100 ++++++++++++++++++------------- include/uapi/linux/virtio_ring.h | 61 +++++++++++++------ 2 files changed,
2023 Mar 02
12
[PATCH vhost v1 00/12] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver; its zero-copy performance is very good. ENV: Qemu with vhost. vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS -----------------------------|---------------|------------------|------------ xmit by sockperf: 90% | 100%
2023 Mar 21
11
[PATCH vhost v3 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver; its zero-copy performance is very good. ENV: Qemu with vhost. vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS -----------------------------|---------------|------------------|------------ xmit by sockperf: 90% | 100%
2023 Mar 22
11
[PATCH vhost v4 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver; its zero-copy performance is very good. ENV: Qemu with vhost. vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS -----------------------------|---------------|------------------|------------ xmit by sockperf: 90% | 100%
2023 Mar 24
11
[PATCH vhost v5 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver; its zero-copy performance is very good. ENV: Qemu with vhost. vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS -----------------------------|---------------|------------------|------------ xmit by sockperf: 90% | 100%
2023 Mar 27
11
[PATCH vhost v6 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver; its zero-copy performance is very good. ENV: Qemu with vhost. vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS -----------------------------|---------------|------------------|------------ xmit by sockperf: 90% | 100%
2023 May 17
12
[PATCH vhost v9 00/12] virtio core prepares for AF_XDP
## About DMA APIs Currently, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM. 1. I tried to let the DMA APIs return the physical address for a virtio device. But the DMA APIs only work with "real" devices. 2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 May 09
12
[PATCH vhost v8 00/12] virtio core prepares for AF_XDP
## About DMA APIs Currently, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM. 1. I tried to let the DMA APIs return the physical address for a virtio device. But the DMA APIs only work with "real" devices. 2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 May 17
2
[PATCH vhost v9 01/12] virtio_ring: put mapping error check in vring_map_one_sg
...g(vq, sg, n < out_sgs ? - DMA_TO_DEVICE : DMA_FROM_DEVICE); - if (vring_mapping_error(vq, addr)) + dma_addr_t addr; + + if (vring_map_one_sg(vq, sg, n < out_sgs ? + DMA_TO_DEVICE : DMA_FROM_DEVICE, &addr)) goto unmap_release; flags = cpu_to_le16(vq->packed.avail_used_flags | -- 2.32.0.3.g01195cf9f
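The hunk changes the calling convention: the mapping helper now reports failure through its return value and hands the DMA address back via an out-parameter, so the separate vring_mapping_error() check disappears. A simplified, self-contained sketch of that pattern (stand-in names, not the kernel source):

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Hypothetical stand-in for vring_map_one_sg() after the change: returns
 * 0 on success and fills *addr, or a negative errno on mapping failure. */
static int map_one_sg(const void *buf, size_t len, dma_addr_t *addr)
{
	if (!buf || !len)
		return -ENOMEM;
	*addr = (dma_addr_t)(uintptr_t)buf;	/* placeholder "mapping" */
	return 0;
}

static int add_buf(const void *buf, size_t len)
{
	dma_addr_t addr;

	if (map_one_sg(buf, len, &addr))
		goto unmap_release;	/* same early-exit shape as the diff */
	/* ... write addr into the descriptor, set flags ... */
	return 0;

unmap_release:
	/* ... unmap anything mapped so far ... */
	return -ENOMEM;
}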
2023 Apr 25
12
[PATCH vhost v7 00/11] virtio core prepares for AF_XDP
## About DMA APIs Currently, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM. 1. I tried to let the DMA APIs return the physical address for a virtio device. But the DMA APIs only work with "real" devices. 2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Feb 14
11
[PATCH vhost 00/10] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver; its zero-copy performance is very good. ENV: Qemu with vhost. vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS -----------------------------|---------------|------------------|------------ xmit by sockperf: 90% | 100%
2023 Jun 02
12
[PATCH vhost v10 00/10] virtio core prepares for AF_XDP
## About DMA APIs Currently, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM. 1. I tried to let the DMA APIs return the physical address for a virtio device. But the DMA APIs only work with "real" devices. 2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 May 26
1
[PATCH] virtio_ring: validate used buffer length
...+ if (vq->packed.buflen) + vq->packed.buflen[id] = buflen; + vq->num_added += 1; pr_debug("Added buffer head %i to %p\n", head, vq); @@ -1416,6 +1470,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq, __le16 head_flags, flags; u16 head, id, prev, curr, avail_used_flags; int err; + u32 buflen = 0; START_USE(vq); @@ -1498,6 +1553,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq, 1 << VRING_PACKED_DESC_F_AVAIL | 1 << VRING_PACKED_DESC_F_USED; } + if (n >= out_sgs) + buflen += sg->length; } } @@ -...
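The idea in this hunk is to record, at add time, how many bytes the device is allowed to write for each buffer id, so the used length it later reports can be checked. A toy version of that bookkeeping (array size, names, and the validation helper are illustrative, not the patch itself):

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4

static uint32_t buflen[RING_SIZE];	/* stands in for vq->packed.buflen[] */

/* Sum the lengths of the device-writable (in) sgs only, matching the
 * "if (n >= out_sgs) buflen += sg->length;" hunk, and remember the total. */
static void record_buflen(uint16_t id, const uint32_t *sg_len,
			  unsigned int n_sgs, unsigned int out_sgs)
{
	uint32_t total = 0;

	for (unsigned int n = out_sgs; n < n_sgs; n++)
		total += sg_len[n];
	buflen[id] = total;
}

/* On completion, a used length larger than what was added is a device bug. */
static int validate_used_len(uint16_t id, uint32_t used_len)
{
	if (used_len > buflen[id]) {
		fprintf(stderr, "id %u: used len %u > max %u\n",
			id, used_len, buflen[id]);
		return -1;
	}
	return 0;
}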
2018 Nov 21
19
[PATCH net-next v3 00/13] virtio: support packed ring
Hi, This patch set implements packed ring support in the virtio driver. A performance test between pktgen (pktgen_sample03_burst_single_flow.sh) and DPDK vhost (testpmd/rxonly/vhost-PMD) has been done; I saw a ~30% performance gain with the packed ring in this case. To make this patch set work with the below patch set for vhost, some hacks are needed to set the _F_NEXT flag in indirect descriptors (this should