Hi,

I am working on the jailhouse[1] project and am currently looking at
inter-VM communication. We want to connect guests directly with virtual
consoles based on shared memory. The code complexity in the hypervisor
should be minimal; it should just make the shared memory discoverable
and provide a signaling mechanism.

We would like to reuse virtio so that Linux guests will eventually just
work without having to be patched. Having looked at virtio, it seems to
be focused on host<->guest communication and does not consider direct
guest<->guest communication, i.e. the queues use guest-physical
addressing, which is only meaningful for the guest and the host.

In a first prototype I implemented an ivshmem[2] device for the
hypervisor. That way we can share memory between virtual machines.
Ivshmem is nice and simple but does not seem to be used anymore, and it
does not define higher-level devices, like a console.

At this point I could:
- define a console on top of ivshmem
- see how I can get a virtio console to work between guests on shared
  memory

Is anyone already using something like that? I guess zero-copy virtio
devices in Xen would be a similar case. I read a suggestion from May
2010 to introduce a virtio feature bit for shared memory
(VIRTIO_F_RING_SHMEM_ADDR), but that did not make it into the virtio
spec.

regards,
Henning

[1] jailhouse https://github.com/siemens/jailhouse
[2] ivshmem https://gitorious.org/nahanni
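For reference, the register interface such an ivshmem-style device
typically exposes looks roughly like the sketch below. Offsets and names
follow the classic QEMU/nahanni layout (BAR0 registers, BAR2 shared
memory); the Jailhouse prototype may well deviate from this, so treat it
as an illustration only.

    /* Sketch of an ivshmem-style register block (BAR0), assuming the
     * classic layout of the QEMU/nahanni device. BAR2 maps the shared
     * memory itself. */
    #include <stdint.h>

    struct ivshmem_regs {
            uint32_t intr_mask;    /* 0x00: interrupt mask */
            uint32_t intr_status;  /* 0x04: interrupt status */
            uint32_t iv_position;  /* 0x08: this peer's ID */
            uint32_t doorbell;     /* 0x0c: write (peer_id << 16) | vector
                                    *       to signal another guest */
    };

    /* Hypothetical helper: ring peer 'peer_id' on vector 'vector'. */
    static inline void ivshmem_notify(volatile struct ivshmem_regs *regs,
                                      uint16_t peer_id, uint16_t vector)
    {
            regs->doorbell = ((uint32_t)peer_id << 16) | vector;
    }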
On 10/06/2014 18:48, Henning Schild wrote:
> Hi,
> In a first prototype I implemented an ivshmem[2] device for the
> hypervisor. That way we can share memory between virtual machines.
> Ivshmem is nice and simple but does not seem to be used anymore, and it
> does not define higher-level devices, like a console.

FYI, ivshmem is used here:
http://dpdk.org/browse/memnic/tree/
http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c#n449

There are a few other references too, if needed.

Best regards,
Vincent
Henning Schild <henning.schild at siemens.com> writes:
> Hi,
>
> I am working on the jailhouse[1] project and am currently looking at
> inter-VM communication. We want to connect guests directly with virtual
> consoles based on shared memory. The code complexity in the hypervisor
> should be minimal; it should just make the shared memory discoverable
> and provide a signaling mechanism.

Hi Henning,

The virtio assumption was that the host can see all of guest memory.
This simplifies things significantly, and makes it efficient.

If you don't have this, *someone* needs to do a copy. Usually the guest
OS does a bounce buffer into your shared region. Goodbye performance.
Or you can play remapping tricks. Goodbye performance again.

My preferred model is to have a trusted helper (i.e. host) which
understands how to copy between virtio rings. The backend guest (to
steal Xen vocab) R/O maps the descriptor, avail ring and used rings in
the guest. It then asks the trusted helper to do various operations
(copy into writable descriptor, copy out of readable descriptor, mark
used). The virtio ring itself acts as a grant table.

Note: that helper mechanism is completely protocol-agnostic. It was
also explicitly designed into the virtio mechanism (with its 4k
boundaries for data structures and its 'len' field to indicate how much
was written into the descriptor).

It was also never implemented, and remains a thought experiment.
However, implementing it in lguest should be fairly easy.

Cheers,
Rusty.
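To make the trusted-helper model a bit more concrete, a rough sketch of
what such an interface could look like follows. Nothing like this exists
in virtio or lguest today (Rusty calls it a thought experiment), so every
name and signature below is made up for illustration.

    /* Hypothetical hypercall interface for the "trusted helper" model.
     * The backend guest maps the frontend's descriptor/avail/used rings
     * read-only and never touches the data buffers directly; instead it
     * asks the helper (hypervisor/host) to copy on its behalf. */
    #include <stddef.h>
    #include <stdint.h>

    /* Copy 'len' bytes from a local buffer into the (device-writable)
     * descriptor at index 'desc_idx' of the frontend's queue. */
    int helper_copy_to_desc(uint32_t frontend_id, uint16_t queue,
                            uint16_t desc_idx, const void *src, size_t len);

    /* Copy 'len' bytes out of the (device-readable) descriptor at index
     * 'desc_idx' into a local buffer. */
    int helper_copy_from_desc(uint32_t frontend_id, uint16_t queue,
                              uint16_t desc_idx, void *dst, size_t len);

    /* Push the descriptor chain starting at 'head' onto the used ring,
     * record how many bytes were written, and notify the frontend. */
    int helper_mark_used(uint32_t frontend_id, uint16_t queue,
                         uint16_t head, uint32_t written_len);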
On 2014-06-12 04:27, Rusty Russell wrote:
> Henning Schild <henning.schild at siemens.com> writes:
>> Hi,
>>
>> I am working on the jailhouse[1] project and am currently looking at
>> inter-VM communication. We want to connect guests directly with virtual
>> consoles based on shared memory. The code complexity in the hypervisor
>> should be minimal; it should just make the shared memory discoverable
>> and provide a signaling mechanism.
>
> Hi Henning,
>
> The virtio assumption was that the host can see all of guest memory.
> This simplifies things significantly, and makes it efficient.
>
> If you don't have this, *someone* needs to do a copy. Usually the guest
> OS does a bounce buffer into your shared region. Goodbye performance.
> Or you can play remapping tricks. Goodbye performance again.
>
> My preferred model is to have a trusted helper (i.e. host) which
> understands how to copy between virtio rings. The backend guest (to
> steal Xen vocab) R/O maps the descriptor, avail ring and used rings in
> the guest. It then asks the trusted helper to do various operations
> (copy into writable descriptor, copy out of readable descriptor, mark
> used). The virtio ring itself acts as a grant table.
>
> Note: that helper mechanism is completely protocol-agnostic. It was
> also explicitly designed into the virtio mechanism (with its 4k
> boundaries for data structures and its 'len' field to indicate how much
> was written into the descriptor).
>
> It was also never implemented, and remains a thought experiment.
> However, implementing it in lguest should be fairly easy.

The reason why a trusted helper, i.e. additional logic in the
hypervisor, is not our favorite solution is that we'd like to keep the
hypervisor as small as possible. I wouldn't exclude such an approach
categorically, but we have to weigh the costs (lines of code, additional
hypervisor interface) carefully against the gain (existing
specifications and guest driver infrastructure).

Back to VIRTIO_F_RING_SHMEM_ADDR (which you once brought up in an MCA
working group discussion): what speaks against introducing an
alternative encoding of addresses inside virtio data structures? The
idea of this flag was to replace guest-physical addresses with offsets
into a shared memory region associated with, or part of, a virtio
device. That would preserve zero-copy capabilities (as long as you can
work against the shared memory directly, e.g. doing DMA from a physical
NIC or storage device into it) and keep the hypervisor out of the loop.

Is it too invasive to existing infrastructure, or does it have some
other pitfalls?

Jan

--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
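As a rough illustration of what such an alternative encoding could mean
for a device implementation: the feature bit was never merged into the
spec, so everything below is speculative, but the idea is simply that
desc->addr carries an offset into the shared window rather than a
guest-physical address.

    /* Speculative sketch of offset-based descriptor addressing as the
     * proposed (never merged) VIRTIO_F_RING_SHMEM_ADDR feature might
     * have worked. Names other than the vring descriptor layout are
     * illustrative only. */
    #include <stdint.h>

    struct vring_desc {             /* unchanged virtio descriptor layout */
            uint64_t addr;          /* offset into shared region, not GPA */
            uint32_t len;
            uint16_t flags;
            uint16_t next;
    };

    struct shmem_window {
            void     *base;         /* local mapping of the shared region */
            uint64_t  size;
    };

    /* Resolve a descriptor to a local pointer. Either peer can do this
     * against its own mapping of the shared region, so no guest-physical
     * translation and no copy by the hypervisor is needed. */
    static inline void *desc_to_ptr(const struct shmem_window *win,
                                    const struct vring_desc *desc)
    {
            if (desc->addr >= win->size ||
                desc->len > win->size - desc->addr)
                    return (void *)0;  /* reject out-of-window descriptors */
            return (char *)win->base + desc->addr;
    }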
Vincent JARDIN <vincent.jardin at 6wind.com> writes:
> On 10/06/2014 18:48, Henning Schild wrote:
>> Hi,
>> In a first prototype I implemented an ivshmem[2] device for the
>> hypervisor. That way we can share memory between virtual machines.
>> Ivshmem is nice and simple but does not seem to be used anymore, and it
>> does not define higher-level devices, like a console.
>
> FYI, ivshmem is used here:
> http://dpdk.org/browse/memnic/tree/
> http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c#n449
>
> There are a few other references too, if needed.

It may be used, but that doesn't mean it's maintained or robust against
abuse. My advice is to steer clear of it.