Displaying 13 results from an estimated 5000 matches similar to: "[PATCH v2 0/6] kvm: pci PORT IO MMIO and PV MMIO speed tests"

2013 Apr 04
1
[PATCH RFC] kvm: add PV MMIO EVENTFD
With KVM, MMIO is much slower than PIO, due to the need to do a page walk and emulation. But with EPT it does not have to be: we know the address from the VMCS, so if the address is unique, we can look up the eventfd directly, bypassing emulation. Add an interface for userspace to specify this per-address; we can use this e.g. for virtio. The implementation adds a separate bus internally. This …
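
(A minimal userspace sketch of the interface this RFC extends: KVM's existing KVM_IOEVENTFD ioctl attaches an eventfd to a guest-physical address, so a write there signals the fd instead of bouncing through emulation. The PV-MMIO fast-path flag is only proposed by the RFC, so it appears below as a comment, not an assumed constant.)

/* Sketch, not the RFC's code: register a wildcard ioeventfd for one
 * guest-physical MMIO address using the existing KVM API. */
#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

static int register_fast_mmio(int vm_fd, __u64 gpa)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    struct kvm_ioeventfd ioev = {
        .addr  = gpa,  /* unique guest-physical address from the VMCS exit */
        .len   = 4,
        .fd    = efd,
        .flags = 0,    /* no datamatch: any written value signals the fd;
                        * the RFC adds a PV-MMIO flag for the lookup fast path */
    };

    if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
        return -1;
    return efd;        /* poll this fd instead of emulating the write */
}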
2012 Apr 07
0
[PATCH 05/14] kvm tools: Add virtio-mmio support
From: Asias He <asias.hejun at gmail.com> This patch is based on Sasha's 'kvm tools: Add support for virtio-mmio' patch. ioeventfd support is added, which was missing in the previous one. VQ size/align is still not supported. It adds support for the new virtio-mmio transport layer added in 3.2-rc1. The purpose of this new layer is to allow virtio to work on systems which …
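
(For context, the guest side of a virtio-mmio notification is a single register write; the ioeventfd added here lets the host catch it without a full MMIO exit. A kernel-style sketch, with the 0x50 offset taken from the virtio-mmio register layout:)

/* Sketch: kick virtqueue `queue_index` on a virtio-mmio device by writing
 * its index to the QUEUE_NOTIFY register (offset 0x50 in the layout). */
#include <linux/io.h>
#include <linux/types.h>

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x50

static void virtio_mmio_kick(void __iomem *base, u32 queue_index)
{
    writel(queue_index, base + VIRTIO_MMIO_QUEUE_NOTIFY);
}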
2005 Jun 30
0
[PATCH][2/10] Extend the VMX intercept mechanism to include mmio as well as portio.
Extend the VMX intercept mechanism to include mmio as well as portio. Signed-off-by: Yunhong Jiang <yunhong.jiang@intel.com> Signed-off-by: Xiaofeng Ling <xiaofeng.ling@intel.com> Signed-off-by: Arun Sharma <arun.sharma@intel.com> diff -r febfcd0a1a0a -r 9a43d5c12b95 xen/include/asm-x86/vmx_platform.h --- a/xen/include/asm-x86/vmx_platform.h Thu Jun 30 03:20:48 2005 +++ …
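
(Illustrative only, not the Xen code: the patch's point is that port-I/O and MMIO intercepts can share one dispatch mechanism. A generic table-driven sketch of that idea:)

/* One intercept table for both access types; a trapped access either finds
 * a registered handler or falls back to the generic emulator. */
#include <stdbool.h>
#include <stdint.h>

enum intercept_type { INTERCEPT_PORTIO, INTERCEPT_MMIO };

struct intercept_range {
    enum intercept_type type;
    uint64_t start, size;
    void (*handler)(uint64_t addr, uint32_t *val, bool is_write);
};

static struct intercept_range ranges[16];
static unsigned nr_ranges;

static bool intercept(enum intercept_type type, uint64_t addr,
                      uint32_t *val, bool is_write)
{
    for (unsigned i = 0; i < nr_ranges; i++) {
        struct intercept_range *r = &ranges[i];
        if (r->type == type && addr >= r->start &&
            addr < r->start + r->size) {
            r->handler(addr, val, is_write);
            return true;   /* handled by a registered intercept */
        }
    }
    return false;          /* fall back to full emulation */
}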
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
On Fri, 2015-11-20 at 09:58 +0100, Paolo Bonzini wrote: > > On 20/11/2015 09:11, Ming Lin wrote: > > On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote: > >> > >> On 18/11/2015 06:47, Ming Lin wrote: > >>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val) > >>> } …
2012 Mar 19
1
[PATCHv2] virtio-pci: add MMIO property
Currently virtio-pci is specified so that configuration of the device is done through a PCI IO space (via BAR 0 of the virtual PCI device). However, Linux guests happen to use ioread/iowrite/iomap primitives for access, and these work uniformly across memory/io BARs. While PCI IO accesses are faster than MMIO on x86 kvm, MMIO might be helpful on other systems: for example IBM pSeries machines not …
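
(The guest-side uniformity claimed above comes from Linux's pci_iomap()/ioread32() pair, which hide whether a BAR is port I/O or memory. A driver-style sketch; the commented offset is the legacy virtio-pci features register, shown only as an example:)

/* Sketch: map BAR 0 and read it with accessors that work identically for
 * I/O-port and memory BARs. */
#include <linux/pci.h>
#include <linux/io.h>

static void __iomem *map_virtio_bar0(struct pci_dev *pdev)
{
    void __iomem *base = pci_iomap(pdev, 0, 0);   /* 0 = map the whole BAR */

    if (base) {
        /* same call regardless of BAR type */
        u32 features = ioread32(base /* + VIRTIO_PCI_HOST_FEATURES */);
        (void)features;
    }
    return base;
}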
2020 Jun 02
1
[PATCH 1/6] vhost: allow device that does not depend on vhost worker
On Fri, May 29, 2020 at 04:02:58PM +0800, Jason Wang wrote: > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > index d450e16c5c25..70105e045768 100644 > --- a/drivers/vhost/vhost.c > +++ b/drivers/vhost/vhost.c > @@ -166,11 +166,16 @@ static int vhost_poll_wakeup(wait_queue_entry_t *wait, unsigned mode, int sync, > void *key) > { > struct vhost_poll …
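
(A hedged sketch of the idea in the patch, with simplified names: a device that opts out of the vhost worker runs its work directly from the wait-queue wakeup callback instead of queueing it to the worker thread.)

#include <linux/kernel.h>
#include <linux/wait.h>

struct my_poll {
    wait_queue_entry_t wait;
    void (*fn)(void *arg);   /* the work this poll drives */
    void *arg;
    bool use_worker;         /* false: no vhost worker dependency */
};

static void queue_to_worker(struct my_poll *poll)
{
    /* placeholder for a vhost_work_queue()-style handoff */
    (void)poll;
}

static int my_poll_wakeup(wait_queue_entry_t *wait, unsigned mode,
                          int sync, void *key)
{
    struct my_poll *poll = container_of(wait, struct my_poll, wait);

    if (poll->use_worker)
        queue_to_worker(poll);  /* classic path: defer to the worker */
    else
        poll->fn(poll->arg);    /* no worker: run the work inline */
    return 0;
}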
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote: > On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote: >> >> On 18/11/2015 06:47, Ming Lin wrote: >>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val) >>> } >>> >>> start_sqs = nvme_cq_full(cq) ? 1 : 0; >>> - cq->head = new_head; …
2015 Nov 19
2
[PATCH -qemu] nvme: support Google vendor extension
On 18/11/2015 06:47, Ming Lin wrote: > @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val) > } > > start_sqs = nvme_cq_full(cq) ? 1 : 0; > - cq->head = new_head; > + /* When the mapped pointer memory area is setup, we don't rely on > + * the MMIO written values to update the head pointer. */ …
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote: > > On 18/11/2015 06:47, Ming Lin wrote: > > @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val) > > } > > > > start_sqs = nvme_cq_full(cq) ? 1 : 0; > > - cq->head = new_head; > > + /* When the mapped pointer memory area is setup, …
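
(What the hunk quoted in these messages is doing, restated as a standalone sketch with assumed field names: once the vendor extension maps a doorbell buffer in guest memory, the device reads the CQ head from that buffer instead of trusting each MMIO doorbell write.)

#include <stdint.h>

struct cq_state {
    uint32_t head;
    volatile uint32_t *db_buf;   /* mapped doorbell entry, or NULL */
};

static void cq_update_head(struct cq_state *cq, uint32_t mmio_val)
{
    if (cq->db_buf)
        cq->head = *cq->db_buf;  /* shared-memory doorbell is authoritative */
    else
        cq->head = mmio_val;     /* classic path: use the MMIO value */
}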
2014 Nov 05
1
[Qemu-devel] [RFC PATCH] virtio-mmio: support for multiple irqs
On 11/05/2014 03:12 AM, Shannon Zhao wrote: > Hi Rémy, > > On 2014/11/5 16:26, GAUGUEY Rémy 228890 wrote: >> Hi Shannon, >> >>> Type of backend bandwidth (GBytes/sec) >>> virtio-net 0.66 >>> vhost-net 1.49 >>> vhost-net with irqfd 2.01 >>> >>> Test cmd: ./iperf -c 192.168.0.2 -P 1 -i 10 …
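
(The "with irqfd" row means guest interrupts are injected via KVM_IRQFD, so the host side can signal the guest without a userspace round trip. A minimal registration sketch against the existing KVM API:)

#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

static int attach_irqfd(int vm_fd, __u32 gsi)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    struct kvm_irqfd irqfd = {
        .fd  = efd,
        .gsi = gsi,   /* guest interrupt line raised when the fd is signaled */
    };

    if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
        return -1;
    return efd;       /* eventfd_write(efd, 1) injects the interrupt */
}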
2012 Mar 19
2
[PATCH RFC] virtio-pci: add MMIO property
Currently virtio-pci is specified so that configuration of the device is done through a PCI IO space (via BAR 0 of the virtual PCI device). However, Linux guests happen to use ioread/iowrite/iomap primitives for access, and these work uniformly across memory/io BARs. While PCI IO accesses are faster than MMIO on x86 kvm, MMIO might be helpful on other systems which don't implement PIO or …
2015 Apr 27
0
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 2015-04-27 15:01, Stefan Hajnoczi wrote: > On Mon, Apr 27, 2015 at 1:55 PM, Jan Kiszka <jan.kiszka at siemens.com> wrote: >> On 2015-04-27 14:35, Jan Kiszka wrote: >>> On 2015-04-27 12:17, Stefan Hajnoczi wrote: >>>> On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote: >>>>> On 24 April 2015 at 15:22, Stefan …
2015 Apr 27
1
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 2015-04-27 12:17, Stefan Hajnoczi wrote: > On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote: >> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote: >>> >>> The motivation for making VM-to-VM fast is that while software >>> switches on the host are efficient today (thanks to vhost-user), there …