Anthony Liguori
2013-Aug-29 16:08 UTC
Re: Is fallback vhost_net to qemu for live migrate available?
Hi Qin,

On Mon, Aug 26, 2013 at 10:32 PM, Qin Chuanyu <qinchuanyu@huawei.com> wrote:
> Hi all
>
> I am participating in a project which is trying to port vhost_net onto Xen.

Neat!

> By changing the memory copy and notify mechanism, virtio-net with vhost_net
> can currently run on Xen with good performance.

I think the key in doing this would be to implement a proper ioeventfd and
irqfd interface in the driver domain kernel. Just hacking vhost_net with
Xen-specific knowledge would be pretty nasty IMHO.

Did you modify the front-end driver to do grant table mapping, or is this all
being done by mapping the domain's memory?

> TCP receive throughput of a single vnic went from 2.77Gbps up to 6Gbps. On
> the VM receive side, I replaced grant_copy with grant_map plus memcpy, which
> effectively reduces the cost of the grant_table spin_lock in dom0, so the
> whole server's TCP performance went from 5.33Gbps up to 9.5Gbps.
>
> Now I am considering live migration of vhost_net on Xen. vhost_net uses
> vhost_log for live migration on KVM, but QEMU on Xen doesn't manage the
> whole memory of the VM. So I am trying to fall the datapath back from
> vhost_net to qemu while doing live migration, and switch the datapath from
> qemu back to vhost_net again after the VM has migrated to the new server.

KVM and Xen represent memory in a very different way. KVM can only track when
guest-mode code dirties memory. It relies on QEMU to track when guest memory
is dirtied by QEMU. Since vhost is running outside of QEMU, vhost also needs
to tell QEMU when it has dirtied memory.

I don't think this is a problem with Xen though. I believe (although could be
wrong) that Xen is able to track when either the domain or dom0 dirties
memory. So I think you can simply ignore the dirty logging with vhost and it
should Just Work.

> My question is:
> Why doesn't vhost_net do the same fallback operation for live migration on
> KVM, instead of using vhost_log to mark the dirty pages?
> Is there any flaw in the mechanism of falling the datapath back from
> vhost_net to qemu for live migration?

No, we don't have a mechanism to fall back to QEMU for the datapath. It would
be possible, but I think it's a bad idea to mix and match the two.

Regards,

Anthony Liguori

> Any questions about the details of vhost_net on Xen are welcome.
>
> Thanks
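For readers coming to this thread without vhost background: the vhost_log mechanism
referred to above is, at its core, a bitmap with one bit per guest page that the vhost
worker sets whenever it writes into guest memory, and that QEMU scans during migration
to decide which pages to re-send. The sketch below is a simplified, user-space
illustration of that idea only; the names (dirty_log, log_write), the flat-bitmap
layout, and the 4 KiB page size are illustrative assumptions, not the actual in-kernel
implementation in drivers/vhost/vhost.c.

/*
 * Hypothetical, simplified user-space sketch of per-page dirty logging.
 * Real vhost keeps a similar bitmap supplied by QEMU and sets bits from
 * the kernel whenever the worker thread writes into guest memory; the
 * names and layout here are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT    12
#define BITS_PER_LONG (8 * sizeof(unsigned long))

struct dirty_log {
    unsigned long *bitmap;  /* one bit per guest page */
    uint64_t npages;
};

/* Mark every guest page touched by a write of 'len' bytes at 'gpa' as dirty. */
static void log_write(struct dirty_log *log, uint64_t gpa, uint64_t len)
{
    uint64_t first = gpa >> PAGE_SHIFT;
    uint64_t last = (gpa + len - 1) >> PAGE_SHIFT;

    for (uint64_t pfn = first; pfn <= last && pfn < log->npages; pfn++)
        log->bitmap[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
}

int main(void)
{
    struct dirty_log log = {
        .npages = 256,  /* pretend the guest has 1 MiB of RAM */
    };
    log.bitmap = calloc((log.npages + BITS_PER_LONG - 1) / BITS_PER_LONG,
                        sizeof(unsigned long));

    /* A device write that crosses a page boundary dirties two pages. */
    log_write(&log, 0x1ff0, 0x40);

    /* During migration, QEMU would scan (and clear) the bitmap like this. */
    for (uint64_t pfn = 0; pfn < log.npages; pfn++)
        if (log.bitmap[pfn / BITS_PER_LONG] & (1UL << (pfn % BITS_PER_LONG)))
            printf("page %llu is dirty\n", (unsigned long long)pfn);

    free(log.bitmap);
    return 0;
}

The point of the exchange above is that on KVM this extra bookkeeping is required
because the hypervisor cannot see vhost's writes, whereas on Xen the hypervisor is
believed to observe dom0's writes to guest memory itself, which is why the dirty
logging could likely be skipped there.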