Hello all,

On 06/17/2014 04:54 AM, Stefan Hajnoczi wrote:
> ivshmem has a performance disadvantage for guest-to-host
> communication. Since the shared memory is exposed as PCI BARs, the
> guest has to memcpy into the shared memory.
>
> vhost-user can access guest memory directly and avoid the copy inside
> the guest.

Actually, you can avoid this memory copy using frameworks like DPDK.

> Unless someone steps up and maintains ivshmem, I think it should be
> deprecated and dropped from QEMU.

Then I can maintain ivshmem for QEMU.
If this is ok, I will send a patch for the MAINTAINERS file.

--
David Marchand
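For concreteness, here is a minimal guest-side sketch of the copy Stefan
describes, assuming the ivshmem shared memory is BAR2 and is reachable
through its sysfs resource file; the PCI address and the region size
below are made-up examples, not values from this thread:

    /* Sketch: map the ivshmem shared-memory BAR from inside the guest
     * and copy a payload into it.  The memcpy at the end is the extra
     * copy under discussion. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical PCI address of the ivshmem device. */
        int fd = open("/sys/bus/pci/devices/0000:00:04.0/resource2",
                      O_RDWR);
        if (fd < 0)
            return 1;

        size_t shm_size = 1 << 20;  /* must match the size given to -device ivshmem */
        uint8_t *shm = mmap(NULL, shm_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        close(fd);
        if (shm == MAP_FAILED)
            return 1;

        /* Data produced in an ordinary application buffer has to be
         * copied into the BAR before the peer can see it. */
        const char msg[] = "hello from the guest";
        memcpy(shm, msg, sizeof(msg));

        munmap(shm, shm_size);
        return 0;
    }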
On 17/06/2014 11:03, David Marchand wrote:
>> Unless someone steps up and maintains ivshmem, I think it should be
>> deprecated and dropped from QEMU.
>
> Then I can maintain ivshmem for QEMU.
> If this is ok, I will send a patch for the MAINTAINERS file.

Typically, adding yourself to maintainers is done only after having
proved your ability to be a maintainer. :)

So, let's stop talking and go back to code! You can start doing what
was suggested elsewhere in the thread: get the server and uio driver
merged into the QEMU tree, document the protocol in
docs/specs/ivshmem_device_spec.txt, and start fixing bugs such as the
ones that Markus reported.

Since ivshmem is basically KVM-only (it has a soft dependency on
ioeventfd), CC the patches to kvm at vger.kernel.org and I'll merge
them via the KVM tree for now. I'll (more than) gladly give
maintainership away in due time.

Paolo
On Tue, Jun 17, 2014 at 11:44:11AM +0200, Paolo Bonzini wrote:
> On 17/06/2014 11:03, David Marchand wrote:
> >> Unless someone steps up and maintains ivshmem, I think it should be
> >> deprecated and dropped from QEMU.
> >
> > Then I can maintain ivshmem for QEMU.
> > If this is ok, I will send a patch for the MAINTAINERS file.
>
> Typically, adding yourself to maintainers is done only after having
> proved your ability to be a maintainer. :)
>
> So, let's stop talking and go back to code! You can start doing what
> was suggested elsewhere in the thread: get the server and uio driver
> merged into the QEMU tree, document the protocol in
> docs/specs/ivshmem_device_spec.txt, and start fixing bugs such as the
> ones that Markus reported.

One more thing to add to the list:

    static void ivshmem_read(void *opaque, const uint8_t *buf, int flags)

The "flags" argument should be "size". Size should be checked before
accessing buf.

Please also see the bug fixes in the following unapplied patch:
"[PATCH] ivshmem: fix potential OOB r/w access (#2)" by Sebastian Krahmer
https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg03538.html

Stefan
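A minimal sketch of the kind of fix being asked for, assuming the
IOReadHandler signature QEMU's chardev layer used at the time; the
validation details here are illustrative, not the patch that was
eventually applied:

    #include <stdint.h>
    #include <string.h>

    /* The third argument of a chardev read handler is the number of
     * bytes available in buf, not a flags word. */
    static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
    {
        long incoming_posn;

        /* The peer controls how many bytes arrive, so check size
         * before reading from buf to avoid an out-of-bounds access. */
        if (size != (int)sizeof(incoming_posn)) {
            return;  /* real code would log and drop the message */
        }

        memcpy(&incoming_posn, buf, sizeof(incoming_posn));

        /* incoming_posn must still be range-checked before being used
         * as an index into the peer table (the OOB issue addressed by
         * Sebastian Krahmer's patch). */
        (void)opaque;
        (void)incoming_posn;
    }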
On Tue, Jun 17, 2014 at 11:03:32AM +0200, David Marchand wrote:
> On 06/17/2014 04:54 AM, Stefan Hajnoczi wrote:
> > ivshmem has a performance disadvantage for guest-to-host
> > communication. Since the shared memory is exposed as PCI BARs, the
> > guest has to memcpy into the shared memory.
> >
> > vhost-user can access guest memory directly and avoid the copy
> > inside the guest.
>
> Actually, you can avoid this memory copy using frameworks like DPDK.

I guess it takes care to allocate all packets in the mmapped BAR?

That's fine if you can modify applications, but it doesn't work for
unmodified applications using regular networking APIs.

Stefan
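To make the zero-copy idea concrete: the application carves its packet
buffers directly out of the mmapped region, so each packet is written
once, in place, and no memcpy into the BAR is needed. A toy bump
allocator as a sketch (purely illustrative; DPDK's real mempool API is
more elaborate):

    #include <stddef.h>
    #include <stdint.h>

    struct shm_pool {
        uint8_t *base;  /* start of the mmapped ivshmem BAR */
        size_t   size;  /* total size of the region */
        size_t   off;   /* next free byte */
    };

    /* Hand out cache-line-aligned buffers from the shared region. */
    static void *shm_alloc(struct shm_pool *p, size_t len)
    {
        size_t off = (p->off + 63) & ~(size_t)63;
        if (off > p->size || len > p->size - off)
            return NULL;
        p->off = off + len;
        return p->base + off;
    }

    /* Usage: build the packet directly in shared memory, no memcpy:
     *
     *     struct shm_pool pool = { shm, shm_size, 0 };
     *     uint8_t *pkt = shm_alloc(&pool, 1500);
     *     // fill pkt in place; the peer sees it without a copy
     */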
On 06/18/2014 12:51 PM, Stefan Hajnoczi wrote:
>> Actually, you can avoid this memory copy using frameworks like DPDK.
>
> I guess it takes care to allocate all packets in the mmapped BAR?

Yes.

> That's fine if you can modify applications, but it doesn't work for
> unmodified applications using regular networking APIs.

If you have access to the source code, this should not be a problem.

--
David Marchand