search for: ioeventfd

Displaying 20 results from an estimated 164 matches for "ioeventfd".

2013 Feb 26
4
[PATCH v3 0/5] kvm: Make ioeventfd usable on s390.
On Mon, Feb 25, 2013 at 04:27:45PM +0100, Cornelia Huck wrote: > Here's the latest version of my patch series enabling ioeventfds > on s390, again against kvm-next. > > Patches 1 and 2 (cleaning up initialization and exporting the virtio-ccw > api) would make sense even independent of the ioeventfd enhancements. > > Patches 3-5 are concerned with adding a new type of ioeventfds for > virtio-ccw notific...
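
As background for the series above, here is a minimal, hedged sketch of how an ioeventfd is registered with KVM on transports that notify through a PIO register (legacy virtio-pci). s390's virtio-ccw has neither PIO nor MMIO notifications, which is what motivates the new ioeventfd type added in patches 3-5. The port number, queue index, and error handling are illustrative and not taken from the patches.

#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: have KVM signal the returned eventfd whenever the guest writes
 * `queue` (16 bits) to the virtio-pci notify port, instead of exiting to
 * userspace for every notification. */
static int add_pio_notify_eventfd(int vm_fd, unsigned short notify_port,
                                  unsigned short queue)
{
    struct kvm_ioeventfd args;
    int efd = eventfd(0, EFD_CLOEXEC);

    if (efd < 0)
        return -1;

    memset(&args, 0, sizeof(args));
    args.addr      = notify_port;   /* I/O port, not an MMIO address */
    args.len       = 2;             /* 16-bit queue index write      */
    args.fd        = efd;
    args.datamatch = queue;         /* only fire for this queue      */
    args.flags     = KVM_IOEVENTFD_FLAG_PIO | KVM_IOEVENTFD_FLAG_DATAMATCH;

    if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0)
        return -1;
    return efd;                     /* host thread poll()s this fd   */
}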
2012 Apr 07
0
[PATCH 05/14] kvm tools: Add virtio-mmio support
From: Asias He <asias.hejun at gmail.com> This patch is based on Sasha's 'kvm tools: Add support for virtio-mmio' patch. It adds ioeventfd support, which was missing in the previous version. VQ size/align is still not supported. It adds support for the new virtio-mmio transport layer introduced in 3.2-rc1. The purpose of this new layer is to allow virtio to work on systems which don't necessarily support PCI, such as embedded sys...
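
Analogous to the PIO sketch above, a hedged sketch of the kind of registration this entry describes for virtio-mmio: one ioeventfd per virtqueue on the QueueNotify register, with datamatch on the queue index so a guest write of N only signals queue N's fd. The 0x50 offset is the virtio-mmio QueueNotify register; the function name and error handling are illustrative, not the kvm tools code itself.

#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x50   /* QueueNotify register offset */

/* Sketch: the guest writing `queue` to base+0x50 signals the returned
 * eventfd in-kernel; one such registration is made per virtqueue. */
static int add_mmio_queue_eventfd(int vm_fd, unsigned long long mmio_base,
                                  unsigned int queue)
{
    struct kvm_ioeventfd args;
    int efd = eventfd(0, EFD_CLOEXEC);

    if (efd < 0)
        return -1;

    memset(&args, 0, sizeof(args));
    args.addr      = mmio_base + VIRTIO_MMIO_QUEUE_NOTIFY;
    args.len       = 4;             /* 32-bit register write    */
    args.fd        = efd;
    args.datamatch = queue;         /* per-queue demultiplexing */
    args.flags     = KVM_IOEVENTFD_FLAG_DATAMATCH;

    return ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0 ? -1 : efd;
}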
2014 Nov 05
1
[Qemu-devel] [RFC PATCH] virtio-mmio: support for multiple irqs
...orm are you using and which GbE controller? > Sorry for not telling the test scenario. This test scenario is from Host to Guest. It just > compares the performance of different backends. I did this test on an ARM64 platform. > > The setup was based on: > 1) on host, kvm-arm should support ioeventfd and irqfd > The irqfd patch is from Eric "ARM: KVM: add irqfd support". > http://www.spinics.net/lists/kvm-arm/msg11014.html > > The ioeventfd patch is reworked by me from Antonios. > http://www.spinics.net/lists/kvm-arm/msg08413.html > > 2) qemu should enable ioev...
2014 Jun 02
3
[PATCH] block: virtio_blk: don't hold spin lock during world switch
Jens Axboe <axboe at kernel.dk> writes: > On 2014-05-30 00:10, Rusty Russell wrote: >> Jens Axboe <axboe at kernel.dk> writes: >>> If Rusty agrees, I'd like to add it for 3.16 with a stable marker. >> >> Really stable? It improves performance, which is nice. But every patch >> which goes into the kernel fixes a bug, improves clarity, improves
2015 Apr 27
5
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
...just the packet data but also the vring, let's call it the Shared Virtqueues BAR. The Shared Virtqueues BAR eliminates the need for vhost-net on the host because VM1 and VM2 communicate directly using virtqueue notify or polling vring memory. Virtqueue notify works by connecting an eventfd as ioeventfd in VM1 and irqfd in VM2. And VM2 would also have an ioeventfd that is irqfd for VM1 to signal completions. Stefan
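
A hedged sketch of the wiring described here: a single eventfd registered as an ioeventfd in VM1 (so VM1's virtqueue notify lands on the fd without a userspace exit) and as an irqfd in VM2 (so the same signal injects an interrupt into VM2). The notify address, access length, and GSI are placeholders, not values from the proposal; the completion direction would be wired symmetrically.

#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: VM1's write to `notify_addr` signals `efd`; the same `efd`
 * raises interrupt line `gsi` in VM2. */
static int wire_notify_to_irq(int vm1_fd, int vm2_fd,
                              unsigned long long notify_addr, unsigned int gsi)
{
    struct kvm_ioeventfd kick;
    struct kvm_irqfd irq;
    int efd = eventfd(0, EFD_CLOEXEC);

    if (efd < 0)
        return -1;

    memset(&kick, 0, sizeof(kick));
    kick.addr = notify_addr;        /* queue notify register in the shared BAR */
    kick.len  = 2;
    kick.fd   = efd;
    if (ioctl(vm1_fd, KVM_IOEVENTFD, &kick) < 0)
        return -1;

    memset(&irq, 0, sizeof(irq));
    irq.fd  = efd;
    irq.gsi = gsi;                  /* interrupt VM2 sees on each notification */
    if (ioctl(vm2_fd, KVM_IRQFD, &irq) < 0)
        return -1;

    return efd;
}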
2015 Apr 27
4
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
...;> Virtqueues BAR. >>> >>> The Shared Virtqueues BAR eliminates the need for vhost-net on the >>> host because VM1 and VM2 communicate directly using virtqueue notify >>> or polling vring memory. Virtqueue notify works by connecting an >>> eventfd as ioeventfd in VM1 and irqfd in VM2. And VM2 would also have >>> an ioeventfd that is irqfd for VM1 to signal completions. >> >> We had such a discussion before: >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/123014/focus=279658 >> >> Would be great to get thi...
2014 Nov 05
2
[RFC PATCH] virtio-mmio: support for multiple irqs
Hi Shannon,

>Type of backend          bandwidth (GBytes/sec)
>virtio-net               0.66
>vhost-net                1.49
>vhost-net with irqfd     2.01
>
>Test cmd: ./iperf -c 192.168.0.2 -P 1 -i 10 -p 5001 -f G -t 60

Impressive results! Could you please detail your setup? Which platform are you using and which GbE controller? As a reference, it would be good also to have
2015 Nov 19
2
[PATCH -qemu] nvme: support Google vendor extension
...= new_head; > + } You are still checking if (new_head >= cq->size) { return; } above. I think this is incorrect when the extension is present, and furthermore it's the only case where val is being used. If you're not using val, you could use ioeventfd for the MMIO. An ioeventfd cuts the MMIO cost by at least 55% and up to 70%. Here are quick and dirty measurements from kvm-unit-tests's vmexit.flat benchmark, on two very different machines:

                     Haswell-EP            Ivy Bridge i7
  MMIO memory write  5100 -> 2250 (55%)    7000 -> 3000 (58%)
  I/O po...
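
For context, a hedged sketch of what "use ioeventfd for the MMIO" typically looks like on the QEMU side: the doorbell offset is covered by an eventfd so the guest's write signals a notifier in the kernel instead of causing a heavyweight exit into the device model. The region, offset, and function names below are illustrative and not taken from the nvme patch under discussion.

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "qemu/event_notifier.h"

static EventNotifier cq_db_notifier;

/* Sketch: any 4-byte guest write at `cq_db_offset` inside `doorbell_mr`
 * signals cq_db_notifier (match_data = false, so the written head value is
 * not compared).  The device model then services the doorbell from its
 * event loop rather than in MMIO-exit context. */
static void nvme_add_cq_doorbell_eventfd(MemoryRegion *doorbell_mr,
                                         hwaddr cq_db_offset)
{
    event_notifier_init(&cq_db_notifier, 0);
    memory_region_add_eventfd(doorbell_mr, cq_db_offset, 4,
                              false, 0, &cq_db_notifier);
}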
2015 Apr 27
1
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
...the vring, let's call it the Shared > Virtqueues BAR. > > The Shared Virtqueues BAR eliminates the need for vhost-net on the > host because VM1 and VM2 communicate directly using virtqueue notify > or polling vring memory. Virtqueue notify works by connecting an > eventfd as ioeventfd in VM1 and irqfd in VM2. And VM2 would also have > an ioeventfd that is irqfd for VM1 to signal completions. We had such a discussion before: http://thread.gmane.org/gmane.comp.emulators.kvm.devel/123014/focus=279658 Would be great to get this ball rolling again. Jan
2015 Jul 09
1
[PATCH] KVM: Add Kconfig option to signal cross-endian guests
...s > >> that support cross-endian guests. > > > > I'm sure I misunderstand something, but what happens if we use QEMU with > > TCG instead of KVM, i.e. a big endian powerpc kernel guest on an x86_64 > > little endian host? > > TCG does not yet support irqfd/ioeventfd, so it cannot be used with vhost. > > Paolo vhost does not require irqfd anymore. I think ioeventfd actually works fine, though I didn't try; it would be easy to support. > > Do you forbid the use of vhost in this case? > > > >> Signed-off-by: Thomas Huth <th...
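
A hedged sketch of the point about vhost's requirements: per virtqueue, vhost only needs a kick eventfd and a call eventfd. Under KVM these are usually wired up as an ioeventfd and an irqfd, but an emulator without that support (e.g. TCG) can write and poll them itself. Device setup (owner, memory table, backend, features) is omitted and the names are illustrative.

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Sketch: hand vhost one eventfd it reads for guest->host notifications
 * (kick) and one it signals for host->guest interrupts (call). */
static int setup_vring_eventfds(int vhost_fd, unsigned int vq_index,
                                int *kick_fd, int *call_fd)
{
    struct vhost_vring_file file = { .index = vq_index };

    *kick_fd = eventfd(0, EFD_CLOEXEC);  /* TCG: emulator write()s this on queue notify    */
    *call_fd = eventfd(0, EFD_CLOEXEC);  /* TCG: emulator poll()s this and injects the IRQ */
    if (*kick_fd < 0 || *call_fd < 0)
        return -1;

    file.fd = *kick_fd;
    if (ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file) < 0)
        return -1;

    file.fd = *call_fd;
    return ioctl(vhost_fd, VHOST_SET_VRING_CALL, &file);
}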
2020 Jun 02
1
[PATCH 1/6] vhost: allow device that does not depend on vhost worker
...t;
>	return 0;
>
> -	vhost_poll_queue(poll);
> +	if (!poll->dev->use_worker)
> +		work->fn(work);
> +	else
> +		vhost_poll_queue(poll);
> +
>	return 0;
> }

So a wakeup function wakes up the eventfd directly. What if the user supplies e.g. the same eventfd as ioeventfd? Won't this cause infinite loops?

-- MST