Stefan Hajnoczi
2015-Apr-24 13:22 UTC
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Fri, Apr 24, 2015 at 1:17 PM, Luke Gorrie <luke at snabb.co> wrote:
> On 24 April 2015 at 11:47, Stefan Hajnoczi <stefanha at gmail.com> wrote:
>>
>> My concern is the overhead of the vhost_net component copying
>> descriptors between NICs.
>
> I see. So you would not have to reserve CPU resources for vswitches. Instead
> you would give all cores to the VMs and they would pay for their own
> networking. This would be especially appealing in the extreme case where all
> networking is "Layer 1" connectivity between local virtual machines.
>
> This would make VM<->VM links different to VM<->network links. I suppose
> that when you created VMs you would need to be conscious of whether or not
> you are placing them on the same host or NUMA node so that you can predict
> what network performance will be available.

The motivation for making VM-to-VM fast is that while software
switches on the host are efficient today (thanks to vhost-user), there
is no efficient solution if the software switch is a VM.

Have you had requests to run SnabbSwitch in a VM instead of on the
host?  For example, if someone wants to deploy it in a cloud
environment they will not be allowed to run arbitrary software on the
host.

Stefan
Luke Gorrie
2015-Apr-26 13:24 UTC
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote:
> The motivation for making VM-to-VM fast is that while software
> switches on the host are efficient today (thanks to vhost-user), there
> is no efficient solution if the software switch is a VM.

I see. This sounds like a noble goal indeed. I would love to run the
software switch as just another VM in the long term. It would make it much
easier for the various software switches to coexist in the world.

The main technical risk I see in this proposal is that eliminating the
memory copies might not have the desired effect. I might be tempted to keep
the copies but prevent the kernel from having to inspect the vrings (more
like vhost-user). But that is just a hunch and I suppose the first step
would be a prototype to check the performance anyway.

For what it is worth here is my view of networking performance on x86 in
the Haswell+ era:
https://groups.google.com/forum/#!topic/snabb-devel/aez4pEnd4ow

> Have you had requests to run SnabbSwitch in a VM instead of on the
> host?

This is not something we have discussed.

I can say that I am not satisfied with our installation process on the
host. I want this to be trivially easy, but it is not. On the one hand we
make some parts easy: we only require one executable file (~1.5MB) and it
works on any modern distro and kernel. On the other hand we require the
user to edit grub.conf to reserve cores and keep the IOMMU out of the way,
and to manually run a traffic process for each 10G port pinned to a
suitable core. That requires a bunch of downstream work. Gory details:
https://github.com/SnabbCo/snabb-nfv/wiki/Compute-node-requirements

This should be much simpler. I would quite like to be able to wrap this up
in a VM or a container. The risk is that then we become dependent on other
systems (e.g. OpenStack) pinning cores correctly, etc, and that might be
placing unrealistic expectations on the orchestration systems of the
present and near future (?). I mean: if we make this somebody else's
problem, we had better trust that they will do it right.

Cheers,
-Luke
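[For concreteness, the host tuning described above looks roughly like the
following. This is only an illustration, not the actual Snabb requirements
(those are in the wiki page linked above); the core numbers, hugepage
count, and command line are placeholder values.]

  # grub.conf kernel parameters (placeholder values): reserve cores for
  # traffic processes, pre-allocate hugepages, keep the IOMMU out of the way
  isolcpus=1-5 hugepages=4096 intel_iommu=off

  # one traffic process per 10G port, manually pinned to a reserved core
  taskset -c 1 <traffic process for port 0000:01:00.0>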
Stefan Hajnoczi
2015-Apr-27 10:17 UTC
[virtio-dev] Zerocopy VM-to-VM networking using virtio-net
On Sun, Apr 26, 2015 at 2:24 PM, Luke Gorrie <luke at snabb.co> wrote:
> On 24 April 2015 at 15:22, Stefan Hajnoczi <stefanha at gmail.com> wrote:
>>
>> The motivation for making VM-to-VM fast is that while software
>> switches on the host are efficient today (thanks to vhost-user), there
>> is no efficient solution if the software switch is a VM.
>
> I see. This sounds like a noble goal indeed. I would love to run the
> software switch as just another VM in the long term. It would make it much
> easier for the various software switches to coexist in the world.
>
> The main technical risk I see in this proposal is that eliminating the
> memory copies might not have the desired effect. I might be tempted to keep
> the copies but prevent the kernel from having to inspect the vrings (more
> like vhost-user). But that is just a hunch and I suppose the first step
> would be a prototype to check the performance anyway.
>
> For what it is worth here is my view of networking performance on x86 in
> the Haswell+ era:
> https://groups.google.com/forum/#!topic/snabb-devel/aez4pEnd4ow

Thanks.

I've been thinking about how to eliminate the VM <-> host <-> VM
switching and instead achieve just VM <-> VM.

The holy grail of VM-to-VM networking is an exitless I/O path. In other
words, packets can be transferred between VMs without any vmexits (this
requires a polling driver).

Here is how it works. QEMU gets "-device vhost-user" so that a VM can
act as the vhost-user server:

  VM1 (virtio-net guest driver) <-> VM2 (vhost-user device)

VM1 has a regular virtio-net PCI device. VM2 has a vhost-user device and
plays the host role instead of the normal virtio-net guest driver role.

The ugly thing about this is that VM2 needs to map all of VM1's guest
RAM so it can access the vrings and packet data. The solution to this is
something like the Shared Buffers BAR, but this time it contains not
just the packet data but also the vring; let's call it the Shared
Virtqueues BAR.

The Shared Virtqueues BAR eliminates the need for vhost-net on the host
because VM1 and VM2 communicate directly using virtqueue notify or by
polling vring memory. Virtqueue notify works by connecting an eventfd as
ioeventfd in VM1 and irqfd in VM2. And VM2 would also have an ioeventfd
that is irqfd for VM1 to signal completions.

Stefan
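[To make the notify wiring in the last paragraph concrete, here is a
minimal sketch, not code from this thread, of how a single eventfd can be
registered as an ioeventfd for one VM and an irqfd for another using the
standard KVM ioctls. The VM file descriptors, doorbell address, and GSI
number are placeholders for values the VMM would already know.]

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Wire one eventfd so that a write by VM1's guest driver to a virtio
 * notify ("doorbell") register is delivered to VM2 as an interrupt,
 * without a trip through host userspace.  vm1_fd and vm2_fd are assumed
 * to be the KVM VM file descriptors of the two guests; doorbell_gpa and
 * vm2_gsi are placeholders for the notify register address in VM1 and
 * the interrupt line in VM2. */
static int wire_kick(int vm1_fd, int vm2_fd,
                     uint64_t doorbell_gpa, uint32_t vm2_gsi)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    /* VM1 side: ioeventfd.  A guest write to the doorbell signals the
     * eventfd inside KVM instead of exiting to the VMM. */
    struct kvm_ioeventfd io = {
        .addr = doorbell_gpa,
        .len  = 2,          /* virtio notify writes a 16-bit queue index */
        .fd   = efd,
    };
    if (ioctl(vm1_fd, KVM_IOEVENTFD, &io) < 0)
        return -1;

    /* VM2 side: irqfd.  The same eventfd, when signalled, injects an
     * interrupt on the chosen GSI in VM2. */
    struct kvm_irqfd irq = {
        .fd  = efd,
        .gsi = vm2_gsi,
    };
    if (ioctl(vm2_fd, KVM_IRQFD, &irq) < 0)
        return -1;

    return efd;
}

[The completion direction mentioned above would just be a second call
with the roles swapped: an ioeventfd registered for VM2 whose eventfd is
an irqfd for VM1.]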