Igor Morgado
2006-Jun-19 18:41 UTC
[Xen-users] Looking for tips about Physical Migration on XEN
Hi people. I'm new to Xen and I'm looking into how to do a physical migration on Xen. I know that there are a lot of choices (that is the first problem).

My environment is simple: 2 physical servers, each one running one instance of Xen. Each host has 2 gigabit cards: one to talk to the world, the other to talk between themselves.

I want to run every VM on both hosts, so that if one fails the other can keep working (something like heartbeat), but I also want to be able to migrate every VM from one host to the other with the least downtime possible.

How can I proceed? If there is some URL about this, please point me to it. I'm looking for environments already tested in production and with good performance/speed on migration.

Best regards.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
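[For reference, the migration step itself is a single command in the Xen 3.x `xm` toolstack, provided xend's relocation server is enabled on the target in `xend-config.sxp`. A minimal sketch; the guest name (`vm1`) and target host (`xen2`) are placeholders, and the script defaults to printing the commands rather than running them:]

```shell
#!/bin/sh
# Sketch: live-migrating a Xen guest (Xen 3.x "xm" toolstack).
# "vm1" and "xen2" are hypothetical names. The target's xend must allow
# relocation, e.g. in /etc/xen/xend-config.sxp:
#   (xend-relocation-server yes)
#   (xend-relocation-hosts-allow '^xen1$ ^xen2$')

DRY_RUN=${DRY_RUN:-1}                       # default: print commands only
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run xm migrate --live vm1 xen2              # copies memory iteratively while vm1 runs
run xm list                                 # vm1 should no longer appear on this host
```

[Without `--live`, `xm migrate` suspends the guest, transfers it, and resumes it on the target, which is the higher-downtime variant.]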
Chris de Vidal
2006-Jun-19 22:42 UTC
Re: [Xen-users] Looking for tips about Physical Migration on XEN
--- Igor Morgado <igormorgado.listas@gmail.com> wrote:
> Hi people.

Hi person! ;-)

> I'm new to Xen and I'm looking into how to do a physical migration on
> Xen. I know that there are a lot of choices (that is the first problem).
>
> My environment is simple: 2 physical servers, each one running one
> instance of Xen. Each host has 2 gigabit cards: one to talk to the
> world, the other to talk between themselves.
>
> I want to run every VM on both hosts, so that if one fails the other
> can keep working (something like heartbeat), but I also want to be able
> to migrate every VM from one host to the other with the least downtime
> possible.
>
> How can I proceed? If there is some URL about this, please point me to
> it. I'm looking for environments already tested in production and with
> good performance/speed on migration.

I don't have URLs or tested environments or performance/speed migration results, but I am planning on implementing a similar setup, so I can offer tips.

It sounds as if you want high availability along with live migration: two slightly different goals, but both should be reachable.

I've learned from the OpenVZ message boards that TCP has a timeout of about 2 minutes, so live migration isn't always necessary. Because of this, I am probably going to install DRBD. In order to fail over with DRBD the partition must be unmounted, meaning the virtual machine must be suspended, which precludes the use of live migration. But because TCP gives you 2 minutes, this is acceptable for me. The advantage of this setup is that you only need two nodes (important for me).

If you still require live migration (as in the case of a game server), it seems you must have an external NFS or iSCSI or AoE server. This is because both Xen node servers need to access the storage at the same time. I think AoE is the simplest and best performing, and with the vblade daemon it's free and works on any server. I'd use 2 servers and install Heartbeat+DRBD on them for the ultimate in HA.
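[The two-node DRBD replication described above is configured per resource in `/etc/drbd.conf`. A minimal sketch with hypothetical node names, disks, and addresses; the exact stanza syntax varies between DRBD versions:]

```
# /etc/drbd.conf -- illustrative two-node resource; all names are placeholders
resource r0 {
  protocol C;                       # synchronous replication
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda3;            # local backing partition
    address   10.0.0.1:7788;        # the crossover gigabit link
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

[Only the node currently holding the primary role may mount `/dev/drbd0`, which is exactly why this setup forces a suspend/unmount on failover.]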
As an alternative you can use shared SCSI storage.

So for live migration and high availability: four nodes. Two are the front-end Xen hosts and two are back-end storage hosts, with Heartbeat on the front-end nodes and Heartbeat+DRBD on the back. Or shared SCSI storage.

Earlier this month I thought I'd figured out how to do 2-node HA + live migration (read the archives) using AoE in place of DRBD and software RAID inside each Xen host. The problem with this is that a slight network interruption will result in a RAID resync. You could install the "Fast RAID" patch if your Xen guest is running Linux, or you could just deal with it. Perhaps network interruptions are infrequent enough not to worry.

You should be very careful to avoid split-brain situations; it is for this reason that I'm probably going to forgo Heartbeat and just use a "meatware" heartbeat (that is, if a node dies I log in and manually bring up the Xen guest on the other node). I'll monitor health with something like Nagios.

Hope that helps!

CD

TenThousandDollarOffer.com
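[The "meatware" failover described above amounts to a handful of commands an admin runs on the surviving node once monitoring reports the other node dead. A sketch with hypothetical resource, mount point, and guest names; it defaults to printing the commands rather than running them:]

```shell
#!/bin/sh
# Sketch: manual ("meatware") failover onto the surviving DRBD node.
# r0, /xen/vm1, and vm1.cfg are placeholders.

DRY_RUN=${DRY_RUN:-1}                       # default: print commands only
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run drbdadm primary r0                      # take over the replicated disk
run mount /dev/drbd0 /xen/vm1               # make the guest's storage available
run xm create /etc/xen/vm1.cfg              # cold-start the guest on this node
```

[The cold start is what TCP's roughly 2-minute grace period has to cover; running these by hand instead of via Heartbeat is the trade that avoids automated split-brain.]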
tbrown@baremetal.com
2006-Jun-19 23:01 UTC
Re: [Xen-users] Looking for tips about Physical Migration on XEN
> I don't have URLs or tested environments or performance/speed migration
> results, but I am planning on implementing a similar setup, so I can
> offer tips.

I have done some testing...

> If you still require live migration (as in the case of a game server),
> it seems you must have an external NFS or iSCSI or AoE server. This is
> because both Xen node servers need to access the storage at the same
> time. I think AoE is the simplest and best performing, and with the
> vblade daemon it's free and works on any server. I'd use 2 servers and
> install Heartbeat+DRBD on them for the ultimate in HA. As an
> alternative you can use shared SCSI storage.

NFS performance sucks. AFAIK, this is because your kernel can't cache anything... this causes latency on every file open. That said, NFS is trivial to set up and works fine for live migration (been there).

AoE exports a block device; AFAIK, this means you can _not_ have two nodes accessing (mounting) it at the same time, or you are basically guaranteed to corrupt either your file system or the kernel's view of it (unless all mounts are read-only).

Also, at least with the version I tested, vblade write performance sucked (5 MByte/s write vs 40 MByte/s read)... and the Coraid docs showed similar numbers, 5 MByte/s read/write per drive. That may be perfectly acceptable for you; it isn't bad. I tried nbd and it was much more symmetric (40 or more MByte/sec both ways).

That said, I haven't put it all together and into production yet. Going to have to do that darn soon though.

-Tom
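[For readers who haven't used vblade: the export is one command on the storage server, and the Xen hosts see the device after loading the `aoe` initiator module. A sketch with hypothetical shelf/slot numbers, interface, and device, defaulting to printing the commands rather than running them:]

```shell
#!/bin/sh
# Sketch: exporting a block device over AoE with vblade (aoetools).
# Shelf 0 / slot 1, eth1, and /dev/sdb are placeholders.

DRY_RUN=${DRY_RUN:-1}                       # default: print commands only
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

# on the storage server: export /dev/sdb as AoE shelf 0, slot 1, on eth1
run vbladed 0 1 eth1 /dev/sdb

# on each Xen host: load the initiator and discover exports
run modprobe aoe
run aoe-discover                            # device then appears as /dev/etherd/e0.1
```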
Javier Guerra
2006-Jun-19 23:26 UTC
Re: [Xen-users] Looking for tips about Physical Migration on XEN
On Monday 19 June 2006 6:01 pm, tbrown@baremetal.com wrote:
> AoE exports a block device, AFAIK, this means you can _not_ have two
> nodes accessing (mounting) it at the same time or you are basically
> guaranteed

That's precisely the point of it. AoE (or NBD, or iSCSI, or FC) gives you a block device. You then partition it (GPT, LVM, EVMS) and _DON'T_ mount those LVs on dom0; just give them to the domUs. Only one domU would mount each LV, so no problem there. When migrating, the 'new' domU must have access to the same LV, but at that time the 'old' domU isn't running anymore. So at no moment is any LV used by more than one domU.

> Also, at least with the version I tested, the vblade write performance
> sucked (5 Mbyte vs 40 Mbyte read) ... and the coraid docs showed similar
> numbers 5 Mbyte/s read/write per drive. That may be perfectly acceptable
> for you. It isn't bad. I tried nbd and it was much more symmetric (40 or
> more Mbyte/sec both ways).

I haven't tested vblade yet, but the Coraid 15-bay SATA box easily gives me over 45-50 MB/sec either read or write on GbE, with no jumbo frames (yet).

--
Javier
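[As an illustration of handing an LV straight to a guest without mounting it in dom0, a Xen 3-era domU config might look like the following sketch. All names (the volume group `vg_aoe`, the LV `vm1-root`, kernel path) are hypothetical:]

```
# /etc/xen/vm1.cfg -- illustrative domU config; all names are placeholders
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256
name   = "vm1"
vif    = [ '' ]
# LV carved in dom0 from the AoE-backed device; dom0 itself never mounts it
disk   = [ 'phy:/dev/vg_aoe/vm1-root,sda1,w' ]
root   = "/dev/sda1 ro"
```

[Because both hosts can see the same `/dev/vg_aoe/vm1-root` through AoE, the same config works on either node after a migration.]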
Javier Guerra
2006-Jun-20 17:33 UTC
Re: [Xen-users] Looking for tips about Physical Migration on XEN
On Tuesday 20 June 2006 9:41 am, Igor Morgado wrote:
> My hardware is simple as I said:
>
> Each host has a 0,5T physical (PERC-4) raid-5 that is partitioned and
> shared as lvm to the local domUs. Export the LVM partition as AoE should

Although I believe it's possible to get DRBD working with your configuration, it's much simpler and more scalable if you separate storage from processing. Once you have that, it's dead easy to add either more storage or more processing, letting you grow as you need without interrupting the systems.

--
Javier