Kraska, Joe A (US SSA)
2007-Feb-13 22:43 UTC
[Xen-users] Live Migration... shortest path
In a prior message I documented my woes in getting an NFS_ROOT xen going. I haven't resolved those yet; I want to try a different tack at this:

If the group were to recommend a path of least resistance to showing migration/live migration, which configuration would it be?

Joe.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On Tue, Feb 13, 2007 at 02:43:15PM -0800, Kraska, Joe A (US SSA) wrote:
> In a prior message I documented my woes in getting an NFS_ROOT xen
> going. I haven't resolved those yet; I want to try a different tack at
> this:
>
> If the group were to recommend a path of least resistance
> to showing migration/live migration, which configuration would it be?

Create a file-based disk image stored on an NFS partition shared between your two Dom0s. File-based images on NFS are certainly slow, but it's the minimal effort required to demo migration. For more serious production use you'd want either a cluster filesystem like GFS, a network block device like iSCSI, or a shared SAN with cluster LVM.

Dan.

--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
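Dan's suggestion can be sketched as a minimal Xen guest config, assuming the NFS share is mounted at the same path (here, the hypothetical /mnt/xen-images) on both dom0s; the kernel paths and guest name are made up for illustration:

```
# /etc/xen/demo-guest -- identical copy on both dom0s (paths hypothetical)
kernel  = "/boot/vmlinuz-2.6-xen"
ramdisk = "/boot/initrd-2.6-xen.img"
memory  = 256
name    = "demo-guest"
vif     = [ '' ]
# file-backed VBD living on the shared NFS mount
disk    = [ 'file:/mnt/xen-images/demo-guest.img,xvda,w' ]
root    = "/dev/xvda ro"
```

With xend's relocation server enabled on both hosts ((xend-relocation-server yes) in xend-config.sxp), the demo itself is just `xm migrate --live demo-guest otherdom0`.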
Kraska, Joe A (US SSA)
2007-Feb-14 00:10 UTC
RE: [Xen-users] Live Migration... shortest path
> Create a file-based disk image stored on an NFS partition shared
> between your two Dom0s. File-based images on NFS are certainly slow,
> but it's the minimal effort required to demo migration. For more
> serious production use you'd want either a cluster filesystem like
> GFS, a network block device like iSCSI, or a shared SAN with cluster
> LVM.

Thank you for your fast response. I hadn't seen a xen domU config recipe to do this, but I inferred that the approach using dd, mkfs.ext3, and an NFS store might get me there. I was a bit unclear in part because I don't know how the dom0 goes about knowing that the shared image on both sides is something that will make the migration happy, whereas with the nfs_root technique, dom0 is unambiguously notified of the common store.

From trawling the list archives, I'd thought that NFS ROOT was the preferred way on several levels, insofar as a SAN fs wasn't around.

Joe.
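The dd/mkfs.ext3 recipe Joe infers might look like the sketch below. The image path is hypothetical; on a real setup it would sit on the NFS mount shared by both dom0s (e.g. /mnt/xen-images/) rather than in /tmp:

```shell
#!/bin/sh
set -e

# Hypothetical path; in practice this lives on the shared NFS mount.
IMG=/tmp/demo-guest.img

# Create a 2 GiB sparse file: count=0 writes nothing, seek=2048
# extends the file to 2048 * 1 MiB without filling it with zeros.
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=2048

# Put an ext3 filesystem directly on the file; -F skips the
# "not a block device" prompt, -q suppresses the usual chatter.
mkfs.ext3 -F -q "$IMG"
```

Nothing special tells dom0 that the image is shared; the migrated domain's config simply references the same file path on the destination, which is why both dom0s must mount the NFS export at the same place.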
Joe,

I'd agree with Daniel's previous post that using a file-backed VBD on NFS is clearly the easiest and fastest way to get you going in terms of being able to demonstrate live migration. I've seen it used by vendors at trade shows and expos. Just make sure you use the same mount point for your NFS share in both dom0s.

If, however, you'd like to demonstrate live migration where your VBDs are backed by "physical" (that is, dom0 block) devices, and you don't have access to an iSCSI or FC SAN infrastructure, you might want to use DRBD (www.drbd.org), which enables you to keep the block device in sync over the wire, kind of like networked RAID 1. Starting with version 8, you can use DRBD in an active/active configuration. This is normally used for cluster file systems such as GFS and OCFS2, but it might come in handy for Xen live migration as well. That approach may be faster (in terms of performance) than file-backed VBDs on NFS. I'll do some testing on this over the next couple of weeks or so and let you know the results.

Cheers,
Florian

--
: Florian G. Haas Tel +43-1-8178292-60 :
: LINBIT Information Technologies GmbH Fax +43-1-8178292-82 :
: Vivenotgasse 48, A-1120 Vienna, Austria http://www.linbit.com :
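The active/active DRBD 8 setup Florian describes might be sketched like this (hostnames, backing devices, and addresses are all made up for illustration):

```
# Hypothetical /etc/drbd.conf fragment for DRBD 8 active/active
resource xen-vbd {
  protocol C;
  net {
    allow-two-primaries;   # lets both dom0s hold the device Primary,
                           # which live migration's handover requires
  }
  on dom0-a {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on dom0-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

The guest would then use the replicated device as a physical VBD, e.g. disk = [ 'phy:/dev/drbd0,xvda,w' ], so each dom0 reads and writes a local disk rather than going over NFS.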
On Tue, 2007-02-13 at 14:43 -0800, Kraska, Joe A (US SSA) wrote:
> In a prior message I documented my woes in getting an NFS_ROOT xen
> going. I haven't resolved those yet; I want to try a different tack at
> this:
>
> If the group were to recommend a path of least resistance
> to showing migration/live migration, which configuration would it be?

I like AoE personally because it's easy, route-less, and stable. All of my servers, however, have a secondary gig-e NIC and switch just for AoE and migration/interconnect traffic. I put guest filesystems on a NAS and their swap locally on the node supporting them (it's foolish to swap to network storage).

Half of your battle is just keeping your naming convention correlated to the available AoE block devices so you can tell them apart at a glance, e.g. /dev/etherd/e1.0 would be something dom0 on node 1 is using, and e3.4 would be something VM #4 on node 3 is using. Then name your domUs and vifnames appropriately so bandwidth accounting can follow a guest as it migrates around. Most of my guests have names using this nomenclature: x-y-z, where:

x = location ID
y = node ID
z = VM ID

So migrating 6-8-32 over to 7-9-21 means I migrated VM #32 on node 8 at location 6 over to node 9 at location 7 (locations being geographical), with iSCSI bridging in between to move from location to location. I have "helpers" set up to auto-increment and rename IDs so they inherit their bandwidth accounting from the previous location or node, whichever. So a VM named 1-2-3 would have 'vifname=1-2-3.0', meaning eth0 on vif 1-2-3. Since AoE remains a local phenomenon for me at each location, the major/minor numbers always point to the right place.

This *really* makes automation simpler, especially when working with distributed applications. It takes a little hammering and scripting to do, but it's a worthwhile effort. Otherwise you'll soon have a bowl of block and Ethernet device spaghetti to sort out, if your network is of any substantial size.
I migrate both locally and geographically based on this nomenclature, and it's worked out pretty well. Planning is as important as what you use to accomplish it. Hope I didn't just confuse you even more.

Best,

--Tim
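Tim's x-y-z nomenclature could be captured in a small helper; a sketch under his stated mapping (AoE shelf = node ID, slot = VM ID — the function names themselves are hypothetical):

```shell
#!/bin/sh
# Derive the AoE block device for a guest named location-node-vm,
# e.g. "6-8-32" = VM 32 on node 8 at location 6 -> /dev/etherd/e8.32
aoe_dev_for() {
    node=$(echo "$1" | cut -d- -f2)
    vm=$(echo "$1" | cut -d- -f3)
    echo "/dev/etherd/e${node}.${vm}"
}

# eth0 of guest 1-2-3 gets vifname "1-2-3.0" so per-interface
# bandwidth accounting survives migration.
vifname_for() {
    echo "$1.0"
}

aoe_dev_for 6-8-32   # -> /dev/etherd/e8.32
vifname_for 1-2-3    # -> 1-2-3.0
```

Since the name encodes location, node, and VM, scripts like this can resolve a guest's storage and accounting identifiers from the name alone, which is the automation payoff Tim describes.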