search for: ring_dma_addr

Displaying 10 results from an estimated 10 matches for "ring_dma_addr".

2019 Oct 29
2
[RFC PATCH 0/2] virtio: allow per vq DMA domain
We used to use a single parent for all DMA operations. This tends to complicate mdev-based hardware virtio datapath offloading, which may not implement the control path over the datapath (like the ctrl vq in the case of virtio-net). So this series tries to introduce a per-vq DMA domain by allowing the transport to specify the parent device for each virtqueue. Then, for the case of a virtio-mdev device, it can
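The cover letter's idea, roughly: instead of mapping every ring against one global parent, each virtqueue would carry its own DMA parent chosen by the transport. Below is a minimal, hypothetical C sketch of that shape; the struct, field, and function names are illustrative and are not taken from the actual series.

/* Hypothetical illustration only -- names are made up, not from the RFC. */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/virtio.h>

struct vring_virtqueue_sketch {
	struct virtqueue vq;
	struct device *dma_dev;	/* per-vq DMA parent picked by the transport */
	/* ring state elided */
};

/* Map a buffer against the per-vq parent instead of a single global one. */
static dma_addr_t vring_map_single_sketch(struct vring_virtqueue_sketch *vq,
					  void *cpu_addr, size_t size,
					  enum dma_data_direction dir)
{
	return dma_map_single(vq->dma_dev, cpu_addr, size, dir);
}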
2023 May 26
1
[PATCH] virtio_ring: validate used buffer length
...ze_t queue_size_in_bytes; @@ -145,6 +151,9 @@ struct vring_virtqueue_packed { struct vring_desc_state_packed *desc_state; struct vring_desc_extra *desc_extra; + /* Maximum in buffer length, NULL means no used validation */ + u32 *buflen; + /* DMA address and size information */ dma_addr_t ring_dma_addr; dma_addr_t driver_event_dma_addr; @@ -552,6 +561,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, unsigned int i, n, avail, descs_used, prev, err_idx; int head; bool indirect; + u32 buflen = 0; START_USE(vq); @@ -635,6 +645,7 @@ static inline int virtqueue_add_split(...
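For context, the excerpted diff adds an optional per-virtqueue bound on the device-writable length ("NULL means no used validation"). A rough sketch of the check that bound enables, with invented names rather than the patch's own helpers, assuming buflen is indexed by descriptor head:

/* Illustrative only -- not the actual patch. */
#include <linux/types.h>

/*
 * buflen[] records, per descriptor head, the total device-writable ("in")
 * length queued at add time; a NULL array means validation is disabled.
 * On dequeue, the device-reported used length must not exceed what was
 * actually queued.
 */
static bool used_len_is_valid_sketch(const u32 *buflen, unsigned int head,
				     u32 used_len)
{
	if (!buflen)
		return true;	/* validation disabled for this vq */
	return used_len <= buflen[head];
}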
2018 Nov 21
19
[PATCH net-next v3 00/13] virtio: support packed ring
Hi, This patch set implements packed ring support in the virtio driver. A performance test between pktgen (pktgen_sample03_burst_single_flow.sh) and DPDK vhost (testpmd/rxonly/vhost-PMD) has been done; I saw a ~30% performance gain with the packed ring in this case. To make this patch set work with the vhost patch set below, some hacks are needed to set the _F_NEXT flag in indirect descriptors (this should
2018 Nov 21
19
[PATCH net-next v3 00/13] virtio: support packed ring
Hi, This patch set implements packed ring support in the virtio driver. A performance test between pktgen (pktgen_sample03_burst_single_flow.sh) and DPDK vhost (testpmd/rxonly/vhost-PMD) has been done; I saw a ~30% performance gain with the packed ring in this case. To make this patch set work with the vhost patch set below, some hacks are needed to set the _F_NEXT flag in indirect descriptors (this should
2023 May 31
1
[PATCH] virtio_ring: validate used buffer length
... > > > > > > > > + /* Maximum in buffer length, NULL means no used validation */ > > > > > > > + u32 *buflen; > > > > > > > + > > > > > > > /* DMA address and size information */ > > > > > > > dma_addr_t ring_dma_addr; > > > > > > > dma_addr_t driver_event_dma_addr; > > > > > > > @@ -552,6 +561,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, > > > > > > > unsigned int i, n, avail, descs_used, prev, err_idx; > > > ...
2023 May 31
1
[PATCH] virtio_ring: validate used buffer length
...ximum in buffer length, NULL means no used validation */ > > > > > > > > + u32 *buflen; > > > > > > > > + > > > > > > > > /* DMA address and size information */ > > > > > > > > dma_addr_t ring_dma_addr; > > > > > > > > dma_addr_t driver_event_dma_addr; > > > > > > > > @@ -552,6 +561,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, > > > > > > > > unsigned int i, n, avail, descs_used, prev, err_idx...
2023 Jun 01
1
[PATCH] virtio_ring: validate used buffer length
...ximum in buffer length, NULL means no used validation */ > > > > > > > > + u32 *buflen; > > > > > > > > + > > > > > > > > /* DMA address and size information */ > > > > > > > > dma_addr_t ring_dma_addr; > > > > > > > > dma_addr_t driver_event_dma_addr; > > > > > > > > @@ -552,6 +561,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, > > > > > > > > unsigned int i, n, avail, descs_used, prev, err_idx...
2023 Jun 01
1
[PATCH] virtio_ring: validate used buffer length
...th, NULL means no used validation */ > > > > > > > > > + u32 *buflen; > > > > > > > > > + > > > > > > > > > /* DMA address and size information */ > > > > > > > > > dma_addr_t ring_dma_addr; > > > > > > > > > dma_addr_t driver_event_dma_addr; > > > > > > > > > @@ -552,6 +561,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, > > > > > > > > > unsigned int i, n, avail, descs_used...
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
These patches are not ready to be merged because I was unable to measure a performance improvement. I'm publishing them so they are archived in case someone picks up this work again in the future. The goal of these patches is to allocate virtqueues and driver state from the device's NUMA node for optimal memory access latency. Only guests with a vNUMA topology and virtio devices spread
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
These patches are not ready to be merged because I was unable to measure a performance improvement. I'm publishing them so they are archived in case someone picks up this work again in the future. The goal of these patches is to allocate virtqueues and driver state from the device's NUMA node for optimal memory access latency. Only guests with a vNUMA topology and virtio devices spread
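The approach this cover letter describes maps naturally onto the kernel's node-aware allocators. A minimal sketch, assuming the standard dev_to_node() and kzalloc_node() APIs rather than the series' actual helpers:

/* Minimal sketch, not the patches themselves. */
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/virtio.h>

/* Allocate per-vq driver state on the NUMA node of the underlying device. */
static void *alloc_vq_state_on_dev_node(struct virtio_device *vdev, size_t size)
{
	int node = dev_to_node(vdev->dev.parent);

	/* The allocator falls back to other nodes if this one is exhausted. */
	return kzalloc_node(size, GFP_KERNEL, node);
}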