search for: virtio_queue_get_desc_addr

2013 May 29
1
[RFC 7/11] virtio_pci: new, capability-aware driver.
...e field, so it doesn't > matter if the accesses aren't atomic for that. > > Cheers, > Rusty. I mean the struct should have separate _lo and _hi fields. Otherwise I have to do: + case offsetof(struct virtio_pci_common_cfg, queue_desc): + assert(size == 4); + return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) & low; + case offsetof(struct virtio_pci_common_cfg, queue_desc) + 4: + assert(size == 4); + return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) >> 32; Would be nicer as: + case offsetof(struct virtio_pci_common_cfg, queue_desc_lo): + ass...
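
The objection in this message is about the shape of struct virtio_pci_common_cfg itself: with a single 64-bit queue_desc field, every read handler needs an extra "offsetof(...) + 4" case plus manual masking and shifting, whereas separate 32-bit _lo/_hi fields give each access a named member. A small self-contained sketch of the two styles being compared (the structs and readers below are simplified stand-ins, not the actual QEMU code):

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified stand-ins for the real common config structure. */
    struct cfg_packed {
        uint16_t queue_sel;
        uint64_t queue_desc;                  /* single 64-bit field */
    };

    struct cfg_split {
        uint16_t queue_sel;
        uint32_t queue_desc_lo;               /* explicit 32-bit halves */
        uint32_t queue_desc_hi;
    };

    /* Packed layout: each 64-bit register needs a "+ 4" case and manual
     * splitting of the 64-bit address. */
    uint32_t read_packed(size_t offset, uint64_t desc_addr)
    {
        switch (offset) {
        case offsetof(struct cfg_packed, queue_desc):
            return desc_addr & 0xffffffffu;
        case offsetof(struct cfg_packed, queue_desc) + 4:
            return desc_addr >> 32;
        }
        return 0;
    }

    /* Split layout: every 32-bit access lands on a named member, so the
     * dispatch is a plain offsetof() per field. */
    uint32_t read_split(size_t offset, uint64_t desc_addr)
    {
        switch (offset) {
        case offsetof(struct cfg_split, queue_desc_lo):
            return desc_addr & 0xffffffffu;
        case offsetof(struct cfg_split, queue_desc_hi):
            return desc_addr >> 32;
        }
        return 0;
    }
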
2013 May 28
2
[RFC 7/11] virtio_pci: new, capability-aware driver.
On Mon, Dec 12, 2011 at 01:49:13PM +0200, Michael S. Tsirkin wrote: > On Mon, Dec 12, 2011 at 09:15:03AM +1030, Rusty Russell wrote: > > On Sun, 11 Dec 2011 11:42:56 +0200, "Michael S. Tsirkin" <mst at redhat.com> wrote: > > > On Thu, Dec 08, 2011 at 09:09:33PM +1030, Rusty Russell wrote: > > > > +/* There is no iowrite64. We use two 32-bit ops. */
2013 May 29
0
[PATCH RFC] virtio-pci: new config layout: using memory BAR
...e); /* TODO */ return 0; case offsetof(struct virtio_pci_common_cfg, queue_notify_off): + assert(size == sizeof cfg.queue_notify_off); return vdev->queue_sel; case offsetof(struct virtio_pci_common_cfg, queue_desc): + assert(size == 4); return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) & low; case offsetof(struct virtio_pci_common_cfg, queue_desc) + 4: + assert(size == 4); return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) >> 32; case offsetof(struct virtio_pci_common_cfg, queue_avail): + assert(size ==...
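
The asserts added in this version just pin each access to the width of the register it targets: 16-bit registers must be read with size 2, and each half of a 64-bit register with size 4. A rough illustration of that check, with made-up names for the sketch:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Simplified stand-in, only to show the size check. */
    struct cfg_sketch {
        uint16_t queue_notify_off;            /* 2-byte register */
        uint64_t queue_desc;                  /* read as two 4-byte halves */
    };

    uint32_t cfg_read(size_t offset, unsigned size,
                      uint16_t notify_off, uint64_t desc_addr)
    {
        switch (offset) {
        case offsetof(struct cfg_sketch, queue_notify_off):
            assert(size == 2);                /* sizeof cfg.queue_notify_off */
            return notify_off;
        case offsetof(struct cfg_sketch, queue_desc):
            assert(size == 4);
            return desc_addr & 0xffffffffu;
        case offsetof(struct cfg_sketch, queue_desc) + 4:
            assert(size == 4);
            return desc_addr >> 32;
        }
        return 0;
    }
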
2013 May 28
3
[PATCH RFC] virtio-pci: new config layout: using memory BAR
...vdev->queue_sel); + case offsetof(struct virtio_pci_common_cfg, queue_enable): + /* TODO */ + return 0; + case offsetof(struct virtio_pci_common_cfg, queue_notify_off): + return vdev->queue_sel; + case offsetof(struct virtio_pci_common_cfg, queue_desc): + return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) & low; + case offsetof(struct virtio_pci_common_cfg, queue_desc) + 4: + return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) >> 32; + case offsetof(struct virtio_pci_common_cfg, queue_avail): + return virtio_queue_get_avail_addr(vdev, v...
2013 May 29
6
[PATCH RFC] virtio-pci: new config layout: using memory BAR
Anthony Liguori <aliguori at us.ibm.com> writes: > "Michael S. Tsirkin" <mst at redhat.com> writes: >> + case offsetof(struct virtio_pci_common_cfg, device_feature_select): >> + return proxy->device_feature_select; > > Oh dear no... Please use defines like the rest of QEMU. It is pretty ugly. Yet the structure definitions are descriptive,
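
The disagreement here is only about how the register offsets are spelled in the dispatch: derived from the structure with offsetof(), or listed as flat defines like the rest of the QEMU virtio-pci code. Roughly, the two conventions look like this (the names below are illustrative, not the actual QEMU macros):

    #include <stddef.h>
    #include <stdint.h>

    struct common_cfg_sketch {
        uint32_t device_feature_select;
        uint32_t device_feature;
    };

    /* Style used in the patch: case labels derived from the structure,
     * so the layout is stated exactly once. */
    uint32_t read_offsetof_style(size_t addr, uint32_t feature_select)
    {
        switch (addr) {
        case offsetof(struct common_cfg_sketch, device_feature_select):
            return feature_select;
        }
        return 0;
    }

    /* Style being requested: a flat list of register defines. */
    #define SKETCH_COMMON_DFSELECT 0

    uint32_t read_define_style(size_t addr, uint32_t feature_select)
    {
        switch (addr) {
        case SKETCH_COMMON_DFSELECT:
            return feature_select;
        }
        return 0;
    }
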
2013 May 28
0
[PATCH RFC] virtio-pci: new config layout: using memory BAR
...e offsetof(struct virtio_pci_common_cfg, queue_enable): > + /* TODO */ > + return 0; > + case offsetof(struct virtio_pci_common_cfg, queue_notify_off): > + return vdev->queue_sel; > + case offsetof(struct virtio_pci_common_cfg, queue_desc): > + return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) & low; > + case offsetof(struct virtio_pci_common_cfg, queue_desc) + 4: > + return virtio_queue_get_desc_addr(vdev, vdev->queue_sel) >> 32; > + case offsetof(struct virtio_pci_common_cfg, queue_avail): > + return virtio_queue_ge...
2015 May 12
2
[Qemu-devel] [PATCH RFC 4/7] vhost: set vring endianness for legacy virtio
...virtio_is_big_endian(vdev), > + vhost_vq_index); > + if (r) { > + return -errno; > + } > + } > + > s = l = virtio_queue_get_desc_size(vdev, idx); > a = virtio_queue_get_desc_addr(vdev, idx); > vq->desc = cpu_physical_memory_map(a, &l, 0);
2015 May 06
0
[PATCH RFC 4/7] vhost: set vring endianness for legacy virtio
...ng_endian_legacy(dev, + virtio_is_big_endian(vdev), + vhost_vq_index); + if (r) { + return -errno; + } + } + s = l = virtio_queue_get_desc_size(vdev, idx); a = virtio_queue_get_desc_addr(vdev, idx); vq->desc = cpu_physical_memory_map(a, &l, 0); @@ -747,8 +780,9 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev, struct vhost_virtqueue *vq, unsigned idx) { + int vhost_vq_index = idx - de...
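
Paraphrasing the hunk above: for a legacy (pre-virtio-1.0) device the vring endianness has to be pushed to the vhost backend before the descriptor ring is mapped and its address handed over, because the backend cannot deduce it from the feature bits. A compilable sketch of that ordering, using toy stand-ins rather than the real vhost_virtqueue_set_vring_endian_legacy() / virtio_queue_get_desc_addr() helpers:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Toy stand-ins so the sketch compiles on its own. */
    struct vdev_sketch { bool legacy; bool big_endian; uint64_t desc_addr; };

    static int set_vring_endian_legacy(bool big_endian, int vhost_vq_index)
    {
        (void)big_endian; (void)vhost_vq_index;
        return 0;    /* in the real code this ends up as a vhost ioctl */
    }

    /* Ordering the patch establishes: endianness first, ring addresses second. */
    int vq_start(struct vdev_sketch *v, int vhost_vq_index)
    {
        if (v->legacy) {
            int r = set_vring_endian_legacy(v->big_endian, vhost_vq_index);
            if (r) {
                return -errno;
            }
        }
        /* ...only now map v->desc_addr and pass it to the backend... */
        return 0;
    }
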
2015 May 12
0
[Qemu-devel] [PATCH RFC 4/7] vhost: set vring endianness for legacy virtio
...o_is_big_endian(vdev), > > + vhost_vq_index); > > + if (r) { > > + return -errno; > > + } > > + } > > + > > s = l = virtio_queue_get_desc_size(vdev, idx); > > a = virtio_queue_get_desc_addr(vdev, idx); > > vq->desc = cpu_physical_memory_map(a, &l, 0);
2016 Apr 21
0
[PATCH V2 RFC] fixup! virtio: convert to use DMA api
..._layout", _state, _field, \ > - VIRTIO_F_ANY_LAYOUT, true) > + VIRTIO_F_ANY_LAYOUT, true), \ > + DEFINE_PROP_BIT64("iommu_platform", _state, _field, \ > + VIRTIO_F_IOMMU_PLATFORM, false) > > hwaddr virtio_queue_get_desc_addr(VirtIODevice *vdev, int n); > hwaddr virtio_queue_get_avail_addr(VirtIODevice *vdev, int n); > diff --git a/include/standard-headers/linux/virtio_config.h b/include/standard-headers/linux/virtio_config.h > index bcc445b..3fcfbb1 100644 > --- a/include/standard-headers/linux/virtio_conf...
2016 Apr 18
2
[PATCH RFC] fixup! virtio: convert to use DMA api
...VIRTIO_F_ANY_LAYOUT, true), \ + DEFINE_PROP_BIT64("iommu_passthrough", _state, _field, \ + VIRTIO_F_IOMMU_PASSTHROUGH, false), \ + DEFINE_PROP_BIT64("iommu_platform", _state, _field, \ + VIRTIO_F_IOMMU_PLATFORM, false) hwaddr virtio_queue_get_desc_addr(VirtIODevice *vdev, int n); hwaddr virtio_queue_get_avail_addr(VirtIODevice *vdev, int n); diff --git a/include/standard-headers/linux/virtio_config.h b/include/standard-headers/linux/virtio_config.h index bcc445b..5564dab 100644 --- a/include/standard-headers/linux/virtio_config.h +++ b/include/s...
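
For readers not steeped in the QEMU property macros: DEFINE_PROP_BIT64() exposes a single bit of a 64-bit field as a named on/off device property, which is how the iommu_platform (and, in this version, iommu_passthrough) feature bit becomes user-controllable. A toy model of the mechanism, not the real macro:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A "bit property": a name, the bit it controls in a 64-bit feature
     * word, and its default value. */
    struct bit_prop {
        const char *name;
        unsigned    bit;
        bool        defval;
    };

    /* VIRTIO_F_IOMMU_PLATFORM is feature bit 33 in the virtio spec. */
    static const struct bit_prop iommu_platform_prop = {
        "iommu_platform", 33, false
    };

    static void apply_prop(uint64_t *features, const struct bit_prop *p, bool on)
    {
        if (on) {
            *features |= UINT64_C(1) << p->bit;
        } else {
            *features &= ~(UINT64_C(1) << p->bit);
        }
    }

    int main(void)
    {
        uint64_t host_features = 0;
        /* e.g. the user passed iommu_platform=on on the command line */
        apply_prop(&host_features, &iommu_platform_prop, true);
        printf("features: 0x%016llx\n", (unsigned long long)host_features);
        return 0;
    }
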
2015 May 06
9
[PATCH RFC 0/7] vhost: cross-endian support (vhost-net only)
Hi, This series allows QEMU to use vhost with legacy virtio devices when host and target don't have the same endianness. Only network devices are covered for the moment. I had already posted a series some months ago but it never got reviewed. Moreover, the underlying kernel support was entirely re-written and is still waiting to be applied by Michael. I hence post as RFC. The corresponding
2016 Apr 21
4
[PATCH V2 RFC] fixup! virtio: convert to use DMA api
...DEFINE_PROP_BIT64("any_layout", _state, _field, \ - VIRTIO_F_ANY_LAYOUT, true) + VIRTIO_F_ANY_LAYOUT, true), \ + DEFINE_PROP_BIT64("iommu_platform", _state, _field, \ + VIRTIO_F_IOMMU_PLATFORM, false) hwaddr virtio_queue_get_desc_addr(VirtIODevice *vdev, int n); hwaddr virtio_queue_get_avail_addr(VirtIODevice *vdev, int n); diff --git a/include/standard-headers/linux/virtio_config.h b/include/standard-headers/linux/virtio_config.h index bcc445b..3fcfbb1 100644 --- a/include/standard-headers/linux/virtio_config.h +++ b/include/s...
2011 May 04
4
[PATCH 0/3] virtio-net: 64 bit features, event index
OK, here's a patch that implements the virtio spec update that I sent earlier. It supersedes the PUBLISH_USED_IDX patches I sent out earlier. Support is added in both userspace and vhost-net. I see nice performance improvements: e.g. from 12 to 18 Gbit/s host to guest with netperf, but did not spend a lot of time testing performance. I hope others will try this out and report. Note: there
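
The "event index" mentioned here is the notification-suppression scheme: each side publishes the ring index at which it next wants to be signalled, and the other side only kicks or interrupts when it crosses that index. The core of it is a wrap-safe comparison along these lines (a sketch of the idea; the virtio ring header provides an equivalent helper):

    #include <stdint.h>
    #include <stdio.h>

    /* Signal the other side only if, while moving from old_idx to new_idx,
     * we stepped past the index it asked to be notified at (event_idx).
     * The unsigned 16-bit subtractions make the test wrap-safe. */
    static int need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
    {
        return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
    }

    int main(void)
    {
        /* The other side asked to be notified once index 5 has been consumed. */
        printf("%d\n", need_event(5, 6, 4));   /* crossed it -> 1 (notify)   */
        printf("%d\n", need_event(5, 5, 4));   /* not yet    -> 0 (suppress) */
        return 0;
    }
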