similar to: [PATCH vhost 00/10] virtio core prepares for AF_XDP

Displaying 20 results from an estimated 6000 matches similar to: "[PATCH vhost 00/10] virtio core prepares for AF_XDP"

2023 Mar 02
12
[PATCH vhost v1 00/12] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good. ENV: Qemu with vhost.
                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
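For context, the driver-side zero-copy transmit loop that xsk expects looks roughly like the sketch below. This is an illustration, not code from this patch set; the helpers are the standard ones from include/net/xdp_sock_drv.h, while the function itself is hypothetical.

#include <net/xdp_sock_drv.h>

/* Hypothetical sketch of a zero-copy TX poll loop; not from this series. */
static int xsk_xmit_sketch(struct xsk_buff_pool *pool, int budget)
{
	struct xdp_desc desc;
	int sent = 0;

	while (sent < budget && xsk_tx_peek_desc(pool, &desc)) {
		/* The umem was DMA-mapped up front, so this is only an
		 * address lookup -- no per-packet dma_map_single(). */
		dma_addr_t dma = xsk_buff_raw_get_dma(pool, desc.addr);

		xsk_buff_raw_dma_sync_for_device(pool, dma, desc.len);
		/* hand (dma, desc.len) to the device TX queue here */
		sent++;
	}
	if (sent)
		xsk_tx_release(pool);
	return sent;
}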
2023 Apr 25
12
[PATCH vhost v7 00/11] virtio core prepares for AF_XDP
## About DMA APIs
Now, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.
1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the DMA address. But the maintainers of xsk may want to use
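The feature check the cover letter refers to is, in simplified form, the one below: a paraphrase of vring_use_dma_api() in drivers/virtio/virtio_ring.c, trimmed to the branch that matters here (the real function has more cases).

#include <linux/virtio_config.h>

/* Simplified paraphrase of vring_use_dma_api(); not the full function. */
static bool vring_use_dma_api_sketch(const struct virtio_device *vdev)
{
	/* virtio_has_dma_quirk() is true when the device lacks
	 * VIRTIO_F_ACCESS_PLATFORM. */
	if (!virtio_has_dma_quirk(vdev))
		return true;	/* DMA APIs are usable */

	/* Without VIRTIO_F_ACCESS_PLATFORM the device consumes guest
	 * physical addresses, so the core bypasses the DMA APIs --
	 * which is exactly what an external mapper like xsk cannot
	 * know about. */
	return false;
}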
2023 Mar 21
11
[PATCH vhost v3 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good. ENV: Qemu with vhost.
                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 22
11
[PATCH vhost v4 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good. ENV: Qemu with vhost.
                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 24
11
[PATCH vhost v5 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good. ENV: Qemu with vhost.
                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 Mar 27
11
[PATCH vhost v6 00/11] virtio core prepares for AF_XDP
XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero-copy feature of xsk (XDP socket) needs to be supported by the driver, and the performance of zero copy is very good. ENV: Qemu with vhost.
                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
------------------------------|---------------|-------------------|------------
xmit by sockperf:         90% |          100%
2023 May 09
12
[PATCH vhost v8 00/12] virtio core prepares for AF_XDP
## About DMA APIs
Now, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.
1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the DMA address. But the maintainers of xsk may want to use
2023 May 17
12
[PATCH vhost v9 00/12] virtio core prepares for AF_XDP
## About DMA APIs
Now, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.
1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the DMA address. But the maintainers of xsk may want to use
2023 Mar 02
1
[PATCH vhost v1 06/12] virtio_ring: packed: separate DMA codes
DMA-related logic is separated out of virtqueue_add_vring_packed() to prepare for the subsequent premapped support. The DMA address is saved as sg->dma_address, and virtqueue_add_vring_packed() then uses it directly. In the premapped case, the transmitted sgs should already carry the DMA address in dma_address, and the virtio core can skip virtqueue_map_sgs(). Signed-off-by: Xuan
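Condensed, the flow this commit message describes is roughly the following (hypothetical helper names, not the literal patch code):

#include <linux/virtio.h>
#include <linux/scatterlist.h>

/* Hypothetical stand-ins for virtqueue_map_sgs() and
 * virtqueue_add_vring_packed() from the series. */
static int map_sgs_sketch(struct virtqueue *vq, struct scatterlist *sg,
			  unsigned int num);
static int add_vring_packed_sketch(struct virtqueue *vq,
				   struct scatterlist *sg, unsigned int num);

/* Sketch only: the core maps sgs into sg->dma_address unless the
 * caller premapped them; the vring-filling step then reads
 * sg->dma_address and performs no DMA calls of its own. */
static int add_packed_sketch(struct virtqueue *vq, struct scatterlist *sg,
			     unsigned int num, bool premapped)
{
	int err;

	if (!premapped) {
		/* dma_map each sg, storing the result in sg->dma_address */
		err = map_sgs_sketch(vq, sg, num);
		if (err)
			return err;
	}

	/* write descriptors straight from sg->dma_address */
	return add_vring_packed_sketch(vq, sg, num);
}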
2023 Feb 20
2
[PATCH vhost 01/10] virtio_ring: split: refactor virtqueue_add_split() for premapped
On Tue, Feb 14, 2023 at 3:27 PM Xuan Zhuo <xuanzhuo at linux.alibaba.com> wrote:
>
> DMA-related logic is separated from virtqueue_add_split() to prepare
> for subsequent support for premapped.

The patch seems to do more than what is described here. To simplify review, I'd suggest splitting this patch into three: 1) virtqueue_add_split_prepare() (could we have a better
2023 Mar 22
1
[PATCH vhost v4 04/11] virtio_ring: split: support premapped
The virtio core only supports virtual addresses; DMA is completed in the virtio core. In some scenarios (such as AF_XDP), the memory is allocated and the DMA mapping is completed in advance, so it is necessary for us to support passing the DMA address to the virtio core. Drivers can use sg->dma_address to pass the mapped DMA address to the virtio core. If one sg->dma_address is used then all sgs must use
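Under the convention described here, the driver side might look like the sketch below. The function is hypothetical, and it assumes the core honors a pre-filled sg->dma_address, which is the behavior this series adds; virtqueue_add_outbuf() itself is the existing virtio API.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

/* Hypothetical premapped send path; not code from the series. */
static int send_premapped_sketch(struct virtqueue *vq, struct device *dev,
				 void *buf, size_t len, void *data)
{
	struct scatterlist sg;
	dma_addr_t dma;

	/* The driver performs the mapping itself, in advance. */
	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	sg_init_one(&sg, buf, len);
	sg.dma_address = dma;	/* premapped: core uses this as-is */

	return virtqueue_add_outbuf(vq, &sg, 1, data, GFP_ATOMIC);
}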
2023 Mar 21
1
[PATCH vhost v3 04/11] virtio_ring: split: support premapped
On Tue, Mar 21, 2023 at 05:34:59PM +0800, Xuan Zhuo wrote:
> virtio core only supports virtual addresses; DMA is completed in the
> virtio core.
>
> In some scenarios (such as AF_XDP), the memory is allocated
> and DMA mapping is completed in advance, so it is necessary for us to
> support passing the DMA address to the virtio core.
>
> Drivers can use sg->dma_address to
2023 Feb 20
1
[PATCH vhost 04/10] virtio_ring: split: introduce virtqueue_add_split_premapped()
On Mon, 20 Feb 2023 13:38:13 +0800, Jason Wang <jasowang at redhat.com> wrote:
> On Tue, Feb 14, 2023 at 3:27 PM Xuan Zhuo <xuanzhuo at linux.alibaba.com> wrote:
> >
> > virtqueue_add_split() only supports virtual addresses; DMA is completed
> > in virtqueue_add_split().
> >
> > In some scenarios (such as the AF_XDP scenario), the memory is allocated
2023 Mar 02
1
[PATCH vhost v1 01/12] virtio_ring: split: refactor virtqueue_add_split() for premapped
This commit splits virtqueue_add_split() into two functions; the purpose of the split is to separate the DMA operations. The first function includes all the code that may fail before the DMA operation; the remaining part becomes the second function. In this way, we can perform the DMA operations between the two functions, and if the first function fails, we do not need to perform DMA
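The resulting shape is roughly the following (hypothetical names; the real patch keeps everything inside drivers/virtio/virtio_ring.c):

#include <linux/virtio.h>
#include <linux/scatterlist.h>

/* Hypothetical stand-ins for the two halves and the mapping step. */
static int add_split_prepare_sketch(struct virtqueue *vq, unsigned int num);
static int map_sgs_sketch(struct virtqueue *vq, struct scatterlist *sg,
			  unsigned int num);
static void add_split_vring_sketch(struct virtqueue *vq,
				   struct scatterlist *sg, unsigned int num);

/* Sketch of the two-phase split described above.  Everything that can
 * fail runs before the mapping, so an early failure never leaves DMA
 * mappings to undo. */
static int add_split_sketch(struct virtqueue *vq, struct scatterlist *sg,
			    unsigned int num)
{
	int err;

	/* Phase 1: checks and allocations that may fail. */
	err = add_split_prepare_sketch(vq, num);
	if (err)
		return err;		/* nothing mapped yet */

	/* The DMA mapping happens between the two phases. */
	err = map_sgs_sketch(vq, sg, num);
	if (err)
		return err;

	/* Phase 2: fill the descriptor ring; this part cannot fail. */
	add_split_vring_sketch(vq, sg, num);
	return 0;
}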
2023 Mar 07
2
[PATCH vhost v1 03/12] virtio_ring: split: introduce virtqueue_add_split_premapped()
On Thu, Mar 2, 2023 at 7:59 PM Xuan Zhuo <xuanzhuo at linux.alibaba.com> wrote:
>
> virtqueue_add_split() only supports virtual addresses; DMA is completed
> in virtqueue_add_split().
>
> In some scenarios (such as the AF_XDP scenario), the memory is allocated
> and DMA is completed in advance, so it is necessary for us to support
> passing the DMA address to virtio
2023 Jun 02
12
[PATCH vhost v10 00/10] virtio core prepares for AF_XDP
## About DMA APIs
Now, virtio may not work with the DMA APIs when the virtio features do not include VIRTIO_F_ACCESS_PLATFORM.
1. I tried to let the DMA APIs return the physical address for the virtio device. But the DMA APIs only work with "real" devices.
2. I tried to let xsk support callbacks to get the physical address from the virtio-net driver as the DMA address. But the maintainers of xsk may want to use
2018 Nov 21
19
[PATCH net-next v3 00/13] virtio: support packed ring
Hi, This patch set implements packed ring support in the virtio driver. A performance test between pktgen (pktgen_sample03_burst_single_flow.sh) and DPDK vhost (testpmd/rxonly/vhost-PMD) has been done; I saw a ~30% performance gain with the packed ring in this case. To make this patch set work with the vhost patch set below, some hacks are needed to set the _F_NEXT flag in indirect descriptors (this should
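For reference, the packed-ring descriptor this set implements is the fixed layout from the VIRTIO 1.1 specification; it matches struct vring_packed_desc in include/uapi/linux/virtio_ring.h:

struct vring_packed_desc {
	__le64 addr;	/* buffer guest-physical/DMA address */
	__le32 len;	/* buffer length */
	__le16 id;	/* buffer ID, echoed back in the used descriptor */
	__le16 flags;	/* VRING_DESC_F_NEXT/WRITE plus the packed-ring
			 * AVAIL/USED wrap markers */
};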
2023 Mar 07
1
[PATCH vhost v1 03/12] virtio_ring: split: introduce virtqueue_add_split_premapped()
On Tue, 7 Mar 2023 14:43:42 +0800, Jason Wang <jasowang at redhat.com> wrote:
> On Thu, Mar 2, 2023 at 7:59 PM Xuan Zhuo <xuanzhuo at linux.alibaba.com> wrote:
> >
> > virtqueue_add_split() only supports virtual addresses; DMA is completed
> > in virtqueue_add_split().
> >
> > In some scenarios (such as the AF_XDP scenario), the memory is allocated