How does live (or normal) migration handle the disks involved? Are they
transferred, or does the xfrd keep some sort of channel open?

And I assume a saved domain that is restored on another host must have all
its disk and swap space transferred to the new host as well, correct?

David
> How does live (or normal) migration handle the disks involved?
> Are they transferred, or does the xfrd keep some sort of channel open?

xend currently doesn't give you any help with this or do it automatically,
but it's something we might add in the future. For most of our VMs we
either use an NFS root to a central server, or iSCSI direct to a SAN, so
there's no block device as such to migrate.[*]

If you do want access back to a local drive, then you'll need to set this
up yourself. There are a bunch of network block device options available,
all of which should work: iSCSI, NBD, ENBD, GNBD.

We could add the equivalent of the vif-bridge script to xend for block
devices. This would provide simple hooks to enable the block device
export/import to be set up. However, chances are we'd want this to be
stateful and smarter, so it's probably best to do a xend plugin rather
than just use scripts.

> And I assume a saved domain that is restored on another host must have
> all its disk and swap space transferred to the new host as well, correct?

Yep. No auto magic, not yet ;-)

Ian

[*] Even using iSCSI, there are some situations where you'd prefer to have
the iSCSI driver in domain 0 and export the disk as a block device to
other domains -- for example, if you have a hardware iSCSI initiator in
the machine. In that case, you'd prefer to move the iSCSI disk as part of
the migrate.
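To make the plugin idea a little more concrete, here is a very rough
sketch of what such a block device export/import hook might look like as
a xend plugin. Nothing like this exists in xend today; the class name,
method names and the choice of plain NBD (nbd-server/nbd-client) are all
just illustrative assumptions, in the spirit of the vif-bridge script:

    # Hypothetical sketch only -- not part of xend; all names are made up.
    # Idea: export a domain's backing store over NBD when it migrates away,
    # and import it on the destination dom0 before the domain is resumed.

    import subprocess

    class BlockDeviceHook:
        """Rough block-device analogue of the vif-bridge hook (assumed API)."""

        def __init__(self, remote_host, port=2000, device="/dev/nbd0"):
            self.remote_host = remote_host  # dom0 holding the real storage
            self.port = port                # arbitrary example port
            self.device = device            # where the import appears locally

        def export(self, backing_file):
            # Run on the source dom0: serve the local backing store over NBD.
            subprocess.check_call(["nbd-server", str(self.port), backing_file])

        def import_(self):
            # Run on the destination dom0: attach the remote export as a
            # local block device, ready to hand to the migrated domain.
            subprocess.check_call(
                ["nbd-client", self.remote_host, str(self.port), self.device])

        def teardown(self):
            # Detach the network block device once the domain has gone.
            subprocess.check_call(["nbd-client", "-d", self.device])

A stateful plugin would also have to remember which host currently holds
the authoritative copy, which is exactly why a script alone probably
isn't enough.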
Lots of questions here...

On the subject of network block devices (iSCSI, NBD, ENBD and GNBD), do
you have any recommendations? How should they be set up for migration
purposes? Should dom0 be the initiator and export the devices to the
other domains, or should each domain be the client directly? I'd expect
the latter, but am unfamiliar with how migration should work.

Assuming the latter, are any of the available nbd technologies bootable
without using an initrd? I don't have any particular problem with
initrds, but they're just an extra thing to set up...

How does swap work over the nbds?

Is there any way of providing high availability with any nbd devices, so
that if one target goes down it can seamlessly fail over to another?
Maybe RAID1 over iSCSI?

A really nice Xen setup would be:

  Machine A hosting iSCSI devices
  Machine B hosting iSCSI devices
  Machine C hosting Xen domains
  Machine D hosting Xen domains

RAID1 over iSCSI would ensure that one of A or B could be down without
affecting availability, and Xen domains could wander between C and D
easily using migration. A and C could be the same physical machine, as
could B and D.

Lots of possibilities... wish I had time to play with them all...

James
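For what it's worth, under the dom0-as-initiator variant the guest side
of that A/B/C/D setup could stay very simple. Assuming dom0 on machines
C and D has already attached the two iSCSI targets and assembled them
into a RAID1 mirror at /dev/md0 (the device names, kernel path and domain
name below are only illustrative), the domain config might look roughly
like this:

    # Hypothetical Xen domain config fragment (xend configs are Python).
    # dom0 owns the iSCSI sessions and the MD mirror; the guest just sees
    # an ordinary virtual disk and knows nothing about machines A or B.

    kernel = "/boot/vmlinuz-2.6-xenU"
    memory = 128
    name   = "ha-guest"

    # Export the mirrored device into the guest as its root disk.
    disk = ['phy:md0,sda1,w']
    root = "/dev/sda1 ro"

The alternative, with each guest running its own initiator and its own
mirror, trades that simplicity for easier migration.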
> Lots of questions here...
>
> On the subject of network block devices (iSCSI, NBD, ENBD & GNBD), do
> you have any recommendations?

iSCSI is definitely the most complex and heavyweight, but it does have
the advantage of being the one that's getting the most development, plus
you can get h/w assistance from iSCSI initiator NICs too. RedHat seem to
care about GNBD for their cluster stuff, so maybe that's worth looking at
too.

> How should they be set up for migration purposes? Should dom0 be the
> initiator and export the devices to the other domains, or should each
> domain be the client directly? I'd expect the latter, but am unfamiliar
> with how migration should work.

Exporting directly to domains certainly makes migration easier, as you
don't have to do anything special to provide remote access to the block
devices. However, there are advantages to doing it in dom0:

 1) It's totally transparent to the domains -- they just see hda etc. as
    usual.
 2) If you have a h/w iSCSI NIC, it probably makes sense to use it...

> Assuming the latter, are any of the available nbd technologies bootable
> without using an initrd? I don't have any particular problem with
> initrds, but they're just an extra thing to set up...

To my knowledge, none work for root without an initrd, which is a pain.

> How does swap work over the nbds?

It should work just fine.

> Is there any way of providing high availability with any nbd devices,
> so that if one target goes down it can seamlessly fail over to another?
> Maybe RAID1 over iSCSI?

Yep, you should be able to use MD to set up a RAID1 configuration just
fine. Even if you're using a local disk, you can set up a network block
device to enable you to mirror writes remotely.

Ian
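As a minimal sketch of that RAID1-over-network-block-device idea
(assuming the two remote targets have already been attached and show up
as /dev/nbd0 and /dev/nbd1 -- substitute whatever device names your
iSCSI/NBD/ENBD/GNBD client actually creates):

    # Illustrative only: mirror two already-attached network block devices
    # with MD, run either in the guest or in dom0 depending on which end
    # is acting as the initiator.

    import subprocess

    def make_mirror(md_dev="/dev/md0", members=("/dev/nbd0", "/dev/nbd1")):
        """Assemble a RAID1 array over two network block devices."""
        subprocess.check_call(
            ["mdadm", "--create", md_dev, "--level=1", "--raid-devices=2"]
            + list(members))

    if __name__ == "__main__":
        make_mirror()
        # /dev/md0 can now carry a filesystem or swap; if one target goes
        # away, the array keeps running degraded on the surviving device.

Wrapping mdadm like this is only for illustration -- in practice the
mdadm command would just go in whatever script attaches the network block
devices at boot.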
Setting up dom0 as the initiator and re-exporting to other domains is
currently more complex to manage under migration: you'd have to work out
a way of getting dom0 on the new host to initiate a connection and do the
re-exporting. If you have the migrated domain talk directly to the target
then you don't need to explicitly support this... OTOH, you can boot
directly off disks exported by dom0 without requiring an initrd, which
would be useful.

If you choose to use NFS then you can use it (as you probably realise) as
a root filesystem without an initrd. I believe there's a patch available
somewhere that enables swapping over NFS (Adam Heath's Xen 1.2 Debian
packages included this).

The ideal way to do this sort of stuff would be to modify the xend
migration tools to export a domain's block devices over the network when
it is moved away from its "home" node. Domains would use the Xen disk
interface throughout: on the "home" node this would be backed directly by
storage in dom0, whereas on a "remote" node the remote dom0 would
additionally have to import this block device over the network and then
re-export it to the domain. This would all be transparent to the domain
in question, whilst avoiding network overheads when on the "home" node.

We've talked about this scenario and it would be interesting to try it
out. We may get round to implementing it for some later release, but
nobody (AFAIK) is working on it right now.

HTH,
Mark
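To illustrate the decision that scheme boils down to, here is a rough
sketch of what the backing-device choice might look like inside such a
(purely hypothetical) migration hook -- none of these names exist in
xend, and the NBD port and device path are arbitrary examples:

    # Hypothetical sketch of the "home node" scheme described above.
    import subprocess

    def attach_backing(home_node, this_node, local_dev, nbd_dev="/dev/nbd0"):
        """Return the dom0 device to re-export to the guest via the Xen
        disk interface."""
        if this_node == home_node:
            # On the home node the guest's virtual disk is backed directly
            # by local storage in dom0 -- no network in the data path.
            return local_dev
        # On a remote node, dom0 first imports the home node's storage over
        # a network block device, then re-exports that to the guest.
        subprocess.check_call(["nbd-client", home_node, "2000", nbd_dev])
        return nbd_dev

The guest sees the same virtual disk either way, which is the whole
point: the network hop only appears when the domain is actually away from
home.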