search for: vring_unmap_extra_packed

Displaying 20 results from an estimated 21 matches for "vring_unmap_extra_packed".

2023 Mar 15
2
[PATCH v2 3/3] virtio_ring: Use const to annotate read-only pointer params
...truct vring_virtqueue *vq, u32 num) */
 static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
-					    struct vring_desc *desc)
+					    const struct vring_desc *desc)
 {
 	u16 flags;
@@ -1183,7 +1183,7 @@ static u16 packed_last_used(u16 last_used_idx)
 }

 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
-				     struct vring_desc_extra *extra)
+				     const struct vring_desc_extra *extra)
 {
 	u16 flags;
@@ -1206,7 +1206,7 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 }

 static void vring_unmap_desc_packed(const struct vring_vir...
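The hunks above only add const qualifiers; the function bodies are unchanged. As a rough, standalone illustration of what the annotation buys (the struct and field names below are simplified stand-ins, not the kernel's vring_desc_extra), a const-qualified pointer parameter documents the read-only contract and lets the compiler reject accidental writes:

/* Illustration only: simplified stand-in for vring_desc_extra. */
struct desc_extra_example {
	unsigned short flags;
	unsigned int len;
};

/* Read-only access through a const pointer compiles fine... */
static unsigned short peek_flags(const struct desc_extra_example *extra)
{
	return extra->flags;
	/* ...while a write such as "extra->len = 0;" here would fail to
	 * compile with "assignment of member 'len' in read-only object". */
}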
2023 Mar 10
0
[PATCH v2 3/3] virtio_ring: Use const to annotate read-only pointer params
...truct vring_virtqueue *vq, u32 num) */
 static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
-					    struct vring_desc *desc)
+					    const struct vring_desc *desc)
 {
 	u16 flags;
@@ -1183,7 +1183,7 @@ static u16 packed_last_used(u16 last_used_idx)
 }

 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
-				     struct vring_desc_extra *extra)
+				     const struct vring_desc_extra *extra)
 {
 	u16 flags;
@@ -1206,7 +1206,7 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 }

 static void vring_unmap_desc_packed(const struct vring_vir...
2023 Mar 22
1
[PATCH vhost v4 02/11] virtio_ring: packed: separate dma codes
...4,6 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	END_USE(vq);
 	return 0;
-
-unmap_release:
-	err_idx = i;
-	i = head;
-	curr = vq->free_head;
-
-	vq->packed.avail_used_flags = avail_used_flags;
-
-	for (n = 0; n < total_sg; n++) {
-		if (i == err_idx)
-			break;
-		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
-		curr = vq->packed.desc_extra[curr].next;
-		i++;
-		if (i >= vq->packed.vring.num)
-			i = 0;
-	}
-
-	END_USE(vq);
-	return -EIO;
 }

 static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
--
2.32.0.3.g01195cf9f
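The block removed here is the DMA unwind path of virtqueue_add_packed(); per the patch subject, the DMA handling (including this unwind) is being split out of virtqueue_add_packed() rather than dropped. Read in isolation, the loop walks the descriptor-extra chain from the first allocated entry, unmapping each one until the failing index is reached, wrapping around the ring as it goes. A standalone sketch of that control flow (names and types are illustrative stand-ins, not the kernel's):

/* Standalone sketch of the removed unwind loop's control flow.
 * unmap_entry() stands in for vring_unmap_extra_packed(). */
struct extra_entry {
	unsigned short next;	/* index of the next entry in the chain */
};

static void unmap_entry(const struct extra_entry *extra)
{
	(void)extra;		/* would undo the DMA mapping here */
}

static void unwind_mappings(struct extra_entry *extra, unsigned int ring_size,
			    unsigned int head, unsigned int free_head,
			    unsigned int err_idx, unsigned int total_sg)
{
	unsigned int i = head;		/* ring slot being examined */
	unsigned int curr = free_head;	/* position in the extra chain */
	unsigned int n;

	for (n = 0; n < total_sg; n++) {
		if (i == err_idx)	/* stop at the entry that failed */
			break;
		unmap_entry(&extra[curr]);
		curr = extra[curr].next;
		i++;
		if (i >= ring_size)	/* the ring index wraps around */
			i = 0;
	}
}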
2023 May 17
12
[PATCH vhost v9 00/12] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Mar 02
1
[PATCH vhost v1 06/12] virtio_ring: packed: separate DMA codes
...ring_virtqueue *vq,
 	pr_debug("Added buffer head %i to %p\n", head, vq);
 	return 0;
-
-unmap_release:
-	err_idx = i;
-	i = head;
-	curr = vq->free_head;
-
-	vq->packed.avail_used_flags = avail_used_flags;
-
-	for (n = 0; n < total_sg; n++) {
-		if (i == err_idx)
-			break;
-		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
-		curr = vq->packed.desc_extra[curr].next;
-		i++;
-		if (i >= vq->packed.vring.num)
-			i = 0;
-	}
-
-	return -EIO;
 }

 static inline int virtqueue_add_packed(struct virtqueue *_vq,
@@ -1621,6 +1581,10 @@ static inline int virtqueue_add_packed(...
2023 Mar 07
3
[PATCH 0/3] virtio_ring: Clean up code for virtio ring and pci
This patch series performs a clean-up of the code in virtio_ring and virtio_pci, modifying it to conform to the Linux kernel coding style guidance [1]. The modifications make the code easier to read and understand. This small series does a few short cleanups in the code.

Patch-1: Remove the unnecessary num zero check, which is already performed in power_of_2.
Patch-2: Avoid using inline for small functions.
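The point of Patch-1 is that a separate zero check before the power-of-2 test is redundant, because the test itself already rejects zero. The kernel's is_power_of_2() helper is essentially the following one-liner (sketch of the include/linux/log2.h definition), so a guard like "if (!num || !is_power_of_2(num))" collapses to "if (!is_power_of_2(num))":

#include <stdbool.h>

/* Essentially the kernel's is_power_of_2() from include/linux/log2.h:
 * the n != 0 term already handles zero, making a separate zero check
 * before the call unnecessary. */
static inline bool is_power_of_2(unsigned long n)
{
	return (n != 0 && ((n & (n - 1)) == 0));
}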
2023 Mar 21
11
[PATCH vhost v3 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good.

ENV: Qemu with vhost.

                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 22
11
[PATCH vhost v4 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good.

ENV: Qemu with vhost.

                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 24
11
[PATCH vhost v5 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good.

ENV: Qemu with vhost.

                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 27
11
[PATCH vhost v6 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good.

ENV: Qemu with vhost.

                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 15
4
[PATCH v2 0/3] virtio_ring: Clean up code for virtio ring and pci
This patch series performs a clean-up of the code in virtio_ring and virtio_pci, modifying it to conform to the Linux kernel coding style guidance [1]. The modifications make the code easier to read and understand. This small series does a few short cleanups in the code.

Patch-1: Allow non-power-of-2 sizes for packed virtqueues.
Patch-2: Avoid using inline for small functions.
Patch-3: Use const to
2023 Mar 10
4
[PATCH v2 0/3] virtio_ring: Clean up code for virtio ring and pci
This patch series performs a clean-up of the code in virtio_ring and virtio_pci, modifying it to conform to the Linux kernel coding style guidance [1]. The modifications make the code easier to read and understand. This small series does a few short cleanups in the code.

Patch-1: Allow non-power-of-2 sizes for virtqueues.
Patch-2: Avoid using inline for small functions.
Patch-3: Use const to annotate
2023 May 09
12
[PATCH vhost v8 00/12] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Apr 25
12
[PATCH vhost v7 00/11] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Feb 14
11
[PATCH vhost 00/10] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good.

ENV: Qemu with vhost.

                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 02
12
[PATCH vhost v1 00/12] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good.

ENV: Qemu with vhost.

                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Jul 10
10
[PATCH vhost v11 00/10] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Aug 10
12
[PATCH vhost v13 00/12] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Aug 10
12
[PATCH vhost v13 00/12] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use
2023 Jun 02
12
[PATCH vhost v10 00/10] virtio core prepares for AF_XDP
## About DMA APIs

Now, virtio may not be able to work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.

1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the dma address. But the maintainers of xsk may want to use