I have an important question about XEN's migration operation related to the XEN block and network back/front drivers. If I migrate a domain from one machine to another in the middle of heavy disk/network activity, what happens to the back/front end XEN drivers? (I see they have suspend/resume operations, but I don't believe they are being used.) I see that the front end drivers are passing machine addresses to the back-end drivers -- are these pointers somehow still valid when a domain is moved to another machine? Shouldn't the drivers wait until there is no I/O activity before being migrated?

Thanks,
Eric
> I have an important question about XEN's migration operation related to the
> XEN block and network back/front drivers. If I migrate a domain from one
> machine to another in the middle of heavy disk/network activity, what
> happens to the back/front end XEN drivers?

First of all, bear in mind that Xend doesn't migrate storage for you, so the backend on the destination machine needs to somehow be able to access the original VBD. Assuming you have that, the migration works in the same way as a backend restart: messages sent by Xend allow the frontend to detect the broken device channel connection, reinitiate a connection with the new backend, and resend all requests that were pending in the ring to make sure they complete.

> (I see they have suspend/resume
> operations but I don't believe they are being used) I see that the front
> end drivers are passing machine addresses to the back-end drivers - are
> these pointers somehow still valid when a domain is moved to another
> machine? Shouldn't the drivers wait until there is no I/O activity before
> being migrated?

The old suspend/resume functionality used to require a domain to stop doing I/O before it was suspended. Since the backend restart support was developed for the driver domains project, it has been adopted instead, as it's a more general mechanism.

HTH,
Mark
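To make the resend-on-reconnect idea a bit more concrete, here is a minimal Python sketch of the bookkeeping involved. This is not the real frontend code (that lives in C inside the Linux blkfront/netfront drivers); every name below (FrontendRing, PendingRequest, the channel object) is invented purely for illustration.

# Illustrative sketch only -- the real logic is C inside the frontend drivers.
class PendingRequest:
    def __init__(self, req_id, payload):
        self.req_id = req_id
        self.payload = payload          # original request contents

class FrontendRing:
    """Keeps a shadow copy of every in-flight request so it can be
    replayed after the device channel to the backend is rebuilt."""

    def __init__(self, channel):
        self.channel = channel
        self.pending = {}               # req_id -> PendingRequest

    def submit(self, req_id, payload):
        self.pending[req_id] = PendingRequest(req_id, payload)
        self.channel.send(req_id, payload)

    def on_response(self, req_id):
        # Backend completed the request; forget the shadow copy.
        self.pending.pop(req_id, None)

    def on_backend_reconnect(self, new_channel):
        # Called after migration / backend restart: re-issue everything
        # that was in the ring but never acknowledged.
        self.channel = new_channel
        for req in self.pending.values():
            self.channel.send(req.req_id, req.payload)

if __name__ == "__main__":
    class FakeChannel:                  # stand-in for the device channel
        def send(self, req_id, payload):
            print("sending req %d: %s" % (req_id, payload))

    ring = FrontendRing(FakeChannel())
    ring.submit(1, "read sector 0")
    ring.submit(2, "write sector 8")
    ring.on_response(1)                        # req 1 completed before migration
    ring.on_backend_reconnect(FakeChannel())   # req 2 gets re-issued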
Hi Eric,

Migration will handle network interfaces and *network attached* block devices, even under I/O load.

For network devices, an ARP advertisement is sent indicating the new location of the VM. We've tested this a fair bit in cluster environments and are able to migrate heavily I/O-loaded servers very fast. You may see a few dropped packets, but TCP will take care of that.

Block devices will see the backend disconnect when they are suspended, and should reconnect and reissue any outstanding requests when they resume. In our migration tests, we generally use network storage. One approach that seems to work pretty well is to import GNBD mounts in domain 0 and export them to the migrating domains -- /dev/gnbd/my-dom-disk will have relevance on both ends of the migration.

Migration does not currently have explicit support for local block devices. It is possible to do, for instance by using the block tap to tunnel the request stream over TCP between the two hosts; it just isn't clear that it's all that useful -- most people seem to want to use migration to offload a running VM completely.

hth,
andy.

On Thu, 20 Jan 2005 12:44:40 -0800 (PST), Eric Tessler <maiden1134@yahoo.com> wrote:
> I have an important question about XEN's migration operation related to the
> XEN block and network back/front drivers. If I migrate a domain from one
> machine to another in the middle of heavy disk/network activity, what
> happens to the back/front end XEN drivers? (I see they have suspend/resume
> operations but I don't believe they are being used) I see that the front
> end drivers are passing machine addresses to the back-end drivers - are
> these pointers somehow still valid when a domain is moved to another
> machine? Shouldn't the drivers wait until there is no I/O activity before
> being migrated?
>
> Thanks,
> Eric
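As a rough illustration of the ARP advertisement Andy mentions, the snippet below builds and sends a gratuitous ARP reply from Python on Linux. It is not the code path Xen or Xend actually uses; the interface name, MAC and IP addresses are placeholders, and it needs root privileges to open a raw packet socket.

# Rough illustration of a gratuitous ARP advertisement, as sent after a
# migration so switches/peers learn the VM's new location.  NOT the code
# Xen itself uses; interface, MAC and IP below are placeholders.
import socket
import struct

def send_gratuitous_arp(iface, mac_str, ip_str):
    mac = bytes.fromhex(mac_str.replace(":", ""))
    ip = socket.inet_aton(ip_str)
    bcast = b"\xff" * 6

    # Ethernet header: dst, src, ethertype 0x0806 (ARP)
    eth = bcast + mac + struct.pack("!H", 0x0806)
    # ARP reply: htype=1, ptype=IPv4, hlen=6, plen=4, op=2;
    # sender and target are both the migrated VM, so peers update their caches.
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2) + mac + ip + bcast + ip

    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((iface, 0))
    s.send(eth + arp)
    s.close()

if __name__ == "__main__":
    send_gratuitous_arp("eth0", "00:16:3e:00:00:01", "192.168.1.50")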
Jacob Gorm Hansen
2005-Jan-20 21:56 UTC
Re: [Xen-devel] XEN migration architecture question
One thing I found fairly easy to set up for self-migration was to use software RAID-1 (mirroring) to iSCSI targets on the original and destination hosts. When it's time to migrate, I add the remote iSCSI disk to the RAID-1 array, and when that has synced up 100%, I self-migrate there. You can either run your iSCSI (or GNBD, haven't tried that) targets in dom0, or you can run them in separate domUs (in my implementation, you will first fire off a bootstrap of a small iSCSI-server domain to the remote host, as my dom0 is not allowed to include dangerous stuff such as a TCP/IP stack and an iSCSI server).

That gives me live disk-migration using a simple shell-script.

Jacob
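A rough outline of that workflow, assuming mdadm and open-iscsi style tooling (which may differ from what Jacob actually scripts), might look something like the Python sketch below; every device, target and host name is a placeholder.

# Rough outline of the RAID-1-over-iSCSI disk migration described above.
# Tool invocations and all device/target names are assumptions for
# illustration; the actual setup may use different tooling.
import subprocess
import time

MD_DEV = "/dev/md0"                       # mirrored root of the guest
TARGET = "iqn.2005-01.example:guest-disk" # target exported by the destination
PORTAL = "192.168.1.20:3260"              # destination host's iSCSI portal

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def resync_done():
    # /proc/mdstat shows a resync/recovery line while the new mirror
    # half is still catching up.
    with open("/proc/mdstat") as f:
        stat = f.read()
    return "resync" not in stat and "recovery" not in stat

# 1. Log in to the iSCSI target exported by the destination host.
run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login")

# 2. Add the remote disk as the second half of the RAID-1 array.
run("mdadm", MD_DEV, "--add", "/dev/sdb")   # placeholder device name

# 3. Wait until the mirror is 100% in sync.
while not resync_done():
    time.sleep(5)

# 4. Trigger the migration.  Jacob uses self-migration from inside the
#    guest; with stock Xen tools this would be something like `xm migrate`.
run("xm", "migrate", "--live", "mydomain", "desthost")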
> I have an important question about XEN's migration operation
> related to the XEN block and network back/front drivers. If
> I migrate a domain from one machine to another in the middle
> of heavy disk/network activity, what happens to the
> back/front end XEN drivers? (I see they have suspend/resume
> operations but I don't believe they are being used) I see
> that the front end drivers are passing machine addresses to
> the back-end drivers - are these pointers somehow still valid
> when a domain is moved to another machine? Shouldn't the
> drivers wait until there is no I/O activity before being migrated?

The front end drivers store enough information to be able to re-issue all unacknowledged I/Os when resuming from a migration.

Ian
Interesting...

So, what are you using in dom0 as a watchdog to make sure all the proper domU domains are up and functional? Or how do you go about this?

Brian.

On Thu, 2005-01-20 at 17:56, Jacob Gorm Hansen wrote:
> One thing I found fairly easy to set up for self-migration was to use
> software RAID-1 (mirroring) to iSCSI targets on the original and
> destination hosts. When it's time to migrate, I add the remote iSCSI disk
> to the RAID-1 array, and when that has synced up 100%, I self-migrate
> there. You can either run your iSCSI (or GNBD, haven't tried that)
> targets in dom0, or you can run them in separate domUs (in my
> implementation, you will first fire off a bootstrap of a small
> iSCSI-server domain to the remote host, as my dom0 is not allowed to
> include dangerous stuff such as a TCP/IP stack and an iSCSI server).
>
> That gives me live disk-migration using a simple shell-script.
>
> Jacob
Jacob Gorm Hansen
2005-Jan-20 23:47 UTC
Re: [Xen-devel] XEN migration architecture question
B.G. Bruce wrote:
> Interesting...
> So, what are you using in dom0 as a watchdog to make sure all the proper
> domU domains are up and functional? Or how do you go about this?

My setup is nontypical, in that I do not allow much remote administration; everything happens within the unprivileged domains. Basically, all you can do is bootstrap or migrate a domain into an existing domU, and you need to keep feeding it 'tokens' for it to stay alive, or it will be destroyed.

My payment model is like a Laundromat's: you feed it tokens, it solves your problem. If you don't like the laundromat (if it is too slow or too expensive), you move your laundry to a different one and start feeding that one your tokens instead. You don't have to call some superuser-person to move the laundry for you.

All bootstrap and migration, apart from the initial part where an almost empty VM gets instantiated, happens inside domUs. I have modified Linux to be self-migrating, so I do not use the standard Xen migration mechanism.

If you have hundreds or thousands of machines, you do not wish to periodically log in to each one's dom0 to see if things are going as expected. If a VM goes missing, you restart the VM or you restore it from a checkpoint, both of which can be handled remotely.

I do have some network-facing code in dom0, but that is only around 100 lines of C, mostly related to answering ARP and ICMP Echo requests. The rest (e.g. TCP/IP) runs within my domUs.

More info at http://www.diku.dk/~jacobg/self-migration/

Jacob
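Purely as a conceptual sketch of the token idea (not Jacob's actual implementation), a host-side reaper driven by keep-alive tokens could look something like this; the token lifetime, domain names and the use of `xm destroy` are all assumptions for illustration.

# Conceptual sketch of the "feed it tokens or it gets destroyed" model;
# none of this corresponds to real Xen management code.
import subprocess
import time

TOKEN_LIFETIME = 300          # seconds a single token keeps a domain alive
last_token = {}               # domain name -> time the last token arrived

def feed_token(domain):
    last_token[domain] = time.time()

def reap_expired():
    now = time.time()
    for domain, t in list(last_token.items()):
        if now - t > TOKEN_LIFETIME:
            # Out of tokens: the laundromat stops washing.
            subprocess.run(["xm", "destroy", domain], check=False)
            del last_token[domain]

if __name__ == "__main__":
    feed_token("guest-42")
    while True:
        reap_expired()
        time.sleep(10)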