Displaying 10 results from an estimated 10 matches for "device_event_dma_addr".
2019 Oct 29
2
[RFC PATCH 0/2] virtio: allow per vq DMA domain
We used to use a single parent device for all DMA operations. This tends
to complicate mdev-based hardware virtio datapath offloading, where the
device may not implement the control path (e.g. the ctrl vq in the case
of virtio-net) the way it implements the datapath.
So this series tries to introduce per-virtqueue DMA domains by allowing the
transport to specify the parent device for each virtqueue. Then, for the
case of a virtio-mdev device, it can
2023 May 26
1
[PATCH] virtio_ring: validate used buffer length
...buffer len %u\n",
+ *len, vq->packed.buflen[id]);
+ return NULL;
+ }
/* detach_buf_packed clears data, so grab it now. */
ret = vq->packed.desc_state[id].data;
@@ -1937,6 +2003,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
vring_packed->device_event_dma_addr,
dma_dev);
+ kfree(vring_packed->buflen);
kfree(vring_packed->desc_state);
kfree(vring_packed->desc_extra);
}
@@ -1988,6 +2055,14 @@ static int vring_alloc_queue_packed(struct vring_virtqueue_packed *vring_packed,
vring_packed->vring.num = num;
+ if (vring_needs_used_...
2023 May 31
1
[PATCH] virtio_ring: validate used buffer length
...*/
> > > > > > > ret = vq->packed.desc_state[id].data;
> > > > > > > @@ -1937,6 +2003,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
> > > > > > > vring_packed->device_event_dma_addr,
> > > > > > > dma_dev);
> > > > > > >
> > > > > > > + kfree(vring_packed->buflen);
> > > > > > > kfree(vring_packed->desc_state);
> > > > > > >...
2023 May 31
1
[PATCH] virtio_ring: validate used buffer length
...t; > > > > > ret = vq->packed.desc_state[id].data;
> > > > > > > > @@ -1937,6 +2003,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
> > > > > > > > vring_packed->device_event_dma_addr,
> > > > > > > > dma_dev);
> > > > > > > >
> > > > > > > > + kfree(vring_packed->buflen);
> > > > > > > > kfree(vring_packed->desc_state);
> > > ...
2023 Jun 01
1
[PATCH] virtio_ring: validate used buffer length
...t; > > > > > ret = vq->packed.desc_state[id].data;
> > > > > > > > @@ -1937,6 +2003,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
> > > > > > > > vring_packed->device_event_dma_addr,
> > > > > > > > dma_dev);
> > > > > > > >
> > > > > > > > + kfree(vring_packed->buflen);
> > > > > > > > kfree(vring_packed->desc_state);
> > > ...
2023 Jun 01
1
[PATCH] virtio_ring: validate used buffer length
...t; > > > ret = vq->packed.desc_state[id].data;
> > > > > > > > > @@ -1937,6 +2003,7 @@ static void vring_free_packed(struct vring_virtqueue_packed *vring_packed,
> > > > > > > > > vring_packed->device_event_dma_addr,
> > > > > > > > > dma_dev);
> > > > > > > > >
> > > > > > > > > + kfree(vring_packed->buflen);
> > > > > > > > > kfree(vring_packed->desc_state...
2018 Nov 21
19
[PATCH net-next v3 00/13] virtio: support packed ring
Hi,
This patch set implements packed ring support in virtio driver.
A performance test between pktgen (pktgen_sample03_burst_single_flow.sh)
and DPDK vhost (testpmd/rxonly/vhost-PMD) has been done; I saw a
~30% performance gain with the packed ring in this case.
To make this patch set work with the vhost patch set below,
some hacks are needed to set the _F_NEXT flag in indirect
descriptors (this should
2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
These patches are not ready to be merged because I was unable to measure a
performance improvement. I'm publishing them so they are archived in case
someone picks up this work again in the future.
The goal of these patches is to allocate virtqueues and driver state from the
device's NUMA node for optimal memory access latency. Only guests with a vNUMA
topology and virtio devices spread