Hi,

Let me introduce our setup so you can get a better idea of my situation.

We have 2 physical hosts. Both run Xen for the VMs, and for the disks we have DRBD syncing between the two hosts. We would like to be able to do migration or live migration between the hosts. This is possible, as we have successfully done it.

From what we can read in the documentation, migration and live migration are done with the --allow-two-primaries option on DRBD. Unfortunately, and for obvious reasons, this option is too dangerous to be left on, and we would like to avoid using it at all costs.

I've heard from a post somewhere that a Secondary/Primary migration can be done. Our best way to do this would be to pause the VM, put the DRBD disk on Secondary, migrate the VM to the second host, then put the DRBD disk on the second host to Primary and unpause the VM. I've searched on Google and I can't find anything on this method.

Is there a way to do this? Is there another way you are thinking of?

Thanks for your help,

--
Eric Laflamme
[iWeb] IT Architecture Specialist / Spécialiste de l'Architecture TI
http://www.iWeb.com/
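In concrete terms, the sequence proposed above would look roughly like the following. The hostnames (xen1/xen2) and the resource/domU name (vm01) are placeholders, and whether the migrate step will accept a disk whose DRBD resource is not yet Primary on the destination is exactly the open question here, so treat this as a sketch of the idea rather than a tested procedure:

    # on xen1 (current host) -- the order as proposed
    xm pause vm01              # 1. freeze the guest
    drbdadm secondary vm01     # 2. demote locally (DRBD will normally refuse this
                               #    while the backend still holds the device open)
    xm migrate vm01 xen2       # 3. move the paused domU to the second host

    # on xen2 (destination)
    drbdadm primary vm01       # 4. promote the resource
    xm unpause vm01            # 5. resume the guest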
> I've heard from a post somewhere that a migration Secondary-Primary can be
> done. Our best way to do this will be to put de VM on pause. Put the drbd disk
> on Secondary. Then migrate the vm to the second host, then put the drbd disk
> on second host to primary and unpause the vm. I've searched on google and I
> can't find anything on this method.
>
> Is there a way to do this? Is there another way you are thinking of?

What about a drbd device per vm, both set active/active? Migration already ensures that only one vm at a time can write to the underlying device through its pause/unpause mechanics.

John

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
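For anyone unfamiliar with the layout John suggests, a per-VM resource in /etc/drbd.conf (or a file under /etc/drbd.d/) would look something like this; the hostnames, devices, backing LVs and addresses are made-up placeholders:

    resource vm01 {
      protocol C;
      net {
        allow-two-primaries;        # only needed if you go the dual-primary route
      }
      on xen1 {
        device    /dev/drbd1;
        disk      /dev/vg0/vm01-disk;
        address   192.168.10.1:7789;
        meta-disk internal;
      }
      on xen2 {
        device    /dev/drbd1;
        disk      /dev/vg0/vm01-disk;
        address   192.168.10.2:7789;
        meta-disk internal;
      }
    }

Each additional guest gets its own resource stanza (vm02 on /dev/drbd2, and so on), so the dual-primary window only ever covers one guest's disk at a time.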
Eric Laflamme sent a missive on 2010-08-10:

> Hi,
>
> Let me introduce our setup so you can have a better idea of my situation.
>
> We have 2 physical hosts. Both have Xen for the vm and as for the disk,
> we have DRBD that syncs between the two hosts. We would like to be able
> to do migration or live migration between hosts. This is possible as we
> have successfully done it.
>
> As we can read in the documentation, migration and live migration are
> done with the --allow-two-primaries option on DRBD. Unfortunately and
> for obvious reasons this option is too dangerous to be left on and we
> would like to avoid using this at all cost.
>
> I've heard from a post somewhere that a migration Secondary-Primary can
> be done. Our best way to do this will be to put the VM on pause. Put
> the drbd disk on Secondary. Then migrate the vm to the second host,
> then put the drbd disk on second host to primary and unpause the vm.
> I've searched on google and I can't find anything on this method.
>
> Is there a way to do this? Is there another way you are thinking of?
>
> Thanks for your help,

I think that doing a live migration is very simple with DRBD, and provided you're not attempting to write to the same drbd resource from both machines at the same time I _think_ you'll be fine. I've live migrated back and forth between hosts with no problem using drbd storage in primary/primary mode. I allocate a drbd resource for each domU. I have also duplicated the domU configs between hosts so that I can fail over if one of the hosts fails.

It would be possible for you to write a little shell script that does the following, if you didn't want to do it manually: parse the config file of the domU for the drbd resource, log into the destination host and bring the required drbd resource from secondary to primary, then issue the live migration command to move the domU to the destination host. It would then need to check that the domU has moved to the other machine, by looking at, say, the output of xm list (depends on how you are managing the domUs), and once the domU has moved over, take the drbd resource on the originating host to secondary.

To do this with a paused domain requires that you copy between the hosts the state file that is created when you pause the domain - but I've not done this, so I am not 100% sure whether it actually works or what the pitfalls are.

HTH

Simon.
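A rough sketch of the wrapper script Simon describes might look like this. It is untested, and it assumes the domU config lives in /etc/xen/<name>, that the disk line references the resource as 'drbd:<resource>', and that root ssh between the hosts is already set up - adjust to taste:

    #!/bin/sh
    # migrate-domu.sh <domU> <destination-host>
    DOMU=$1
    DEST=$2

    # pull the resource name out of a disk line such as
    #   disk = [ 'drbd:vm01,xvda,w' ]
    RES=$(sed -n "s/.*drbd:\([^,']*\).*/\1/p" /etc/xen/$DOMU | head -1)
    [ -n "$RES" ] || { echo "no drbd resource found for $DOMU"; exit 1; }

    # bring the resource to Primary on the destination first
    ssh root@$DEST "drbdadm primary $RES" || exit 1

    # live-migrate the domU
    xm migrate --live $DOMU $DEST || exit 1

    # once it is no longer running here, drop our side back to Secondary
    if ! xm list $DOMU >/dev/null 2>&1; then
        drbdadm secondary $RES
    fi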
> To do this with a paused domain requires that you copy between the hosts
> the state file that is created when you pause the domain - but I've not
> done this so I am not 100% sure as to if this actually works or what the
> pitfalls are.

That sounds like it should work. If I recall, internally both pause and migrate use xc_save() and xc_restore(). The only difference in the way they are called is that during a migrate the file descriptor passed to each happens to be a socket, whereas it's a regular file during pause and unpause.

- jonathan
> > I've heard from a post somewhere that a migration Secondary-Primary can be
> > done. Our best way to do this will be to put de VM on pause. Put the drbd disk
> > on Secondary. Then migrate the vm to the second host, then put the drbd disk
> > on second host to primary and unpause the vm. I've searched on google and I
> > can't find anything on this method.
> >
> > Is there a way to do this? Is there another way you are thinking of?
>
> What about a drbd device per vm, both set active/active? Migration
> already ensures that only one vm at a time can write to the underlying
> device through its pause/unpause mechanics.

I can't speak for the OP, but one of the attractions of primary/secondary drbd for me is that you can't accidentally start the vm on two nodes at once, ever. It would be nice to have the migration handle the volume state transition automatically, as I don't think there is ever actually a need to have the drbd volume writeable on both nodes at once, although a drbd volume isn't even open-able unless it's in the primary state.

James
On Wednesday 11 August 2010 01:40:07 James Harper wrote:

> > > I've heard from a post somewhere that a migration Secondary-Primary can be
> > > done. Our best way to do this will be to put de VM on pause. Put the drbd disk
> > > on Secondary. Then migrate the vm to the second host, then put the drbd disk
> > > on second host to primary and unpause the vm. I've searched on google and I
> > > can't find anything on this method.
> > >
> > > Is there a way to do this? Is there another way you are thinking of?
> >
> > What about a drbd device per vm, both set active/active? Migration
> > already ensures that only one vm at a time can write to the underlying
> > device through its pause/unpause mechanics.
>
> I can't speak for the OP but one of the attractions about
> primary/secondary drbd for me is that you can't accidentally start the
> vm on two nodes at once, ever. It would be nice to have the migration
> handle the volume state transition automatically as I don't think there
> is ever actually a need to have the drbd volume writeable on both nodes
> at once, although a drbd volume isn't even open-able unless it's in the
> primary state.
>
> James

What we basically do is have an iSCSI target on top of DRBD. It does take two more servers, but you get a nice HA solution where one server is passive and the other is active. Of course, nothing stops you from having more than one DRBD resource running on the different nodes, connected to different IP addresses, sort of to distribute the load.

Migration works flawlessly on a setup like that.

B.
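As an illustration of that layout, the active storage node exports the DRBD device as an iSCSI LUN. The snippet below uses iSCSI Enterprise Target purely as an example; the IQN, device and credentials are made up, and in practice a heartbeat/pacemaker resource group would move the DRBD Primary role, the service IP and the target daemon together on failover:

    # /etc/ietd.conf on whichever storage node is currently active
    Target iqn.2010-08.local.san:xen-vols
        Lun 0 Path=/dev/drbd0,Type=blockio
        IncomingUser xenuser secretpass

The Xen hosts then log in to the floating service IP as ordinary iSCSI initiators and never see the DRBD layer at all.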
Hi James,

Yes, this is exactly our point: to do migration without allowing two primaries.

We have tested the other ways to migrate, but none seems to work. Saving the vm doesn't stop it from being listed in xm list. If I destroy the saved vm, then the restore fails. So our conclusion is that a saved vm can't be restored on another host.

Also, the save feature takes a lot of time to complete, since our VMs have 32 GB of RAM, so it creates a 22 GB save file...

I don't know why migration and even live migration need two primaries. Why can't the system just switch to secondary for less than a second and put the other host in primary to start the vm?

If you have other ideas, let me know so I can test them.

--
Eric Laflamme
[iWeb] IT Architecture Specialist / Spécialiste de l'Architecture TI
http://www.iWeb.com/

On 2010-08-10, at 19:40, James Harper wrote:

> > > I've heard from a post somewhere that a migration Secondary-Primary can be
> > > done. Our best way to do this will be to put de VM on pause. Put the drbd disk
> > > on Secondary. Then migrate the vm to the second host, then put the drbd disk
> > > on second host to primary and unpause the vm. I've searched on google and I
> > > can't find anything on this method.
> > >
> > > Is there a way to do this? Is there another way you are thinking of?
> >
> > What about a drbd device per vm, both set active/active? Migration
> > already ensures that only one vm at a time can write to the underlying
> > device through its pause/unpause mechanics.
>
> I can't speak for the OP but one of the attractions about
> primary/secondary drbd for me is that you can't accidentally start the
> vm on two nodes at once, ever. It would be nice to have the migration
> handle the volume state transition automatically as I don't think there
> is ever actually a need to have the drbd volume writeable on both nodes
> at once, although a drbd volume isn't even open-able unless it's in the
> primary state.
>
> James
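In outline, the save-file approach being discussed would be something like the following (the domain name, resource name and paths are placeholders; as Eric notes above, the restore on the second host has not actually been made to work, so treat it only as a sketch of the attempt):

    # on xen1
    xm save vm01 /var/lib/xen/save/vm01.state    # suspends the domU and dumps its memory
    scp /var/lib/xen/save/vm01.state xen2:/var/lib/xen/save/
    drbdadm secondary vm01                       # once the domU no longer holds the device

    # on xen2
    drbdadm primary vm01
    xm restore /var/lib/xen/save/vm01.state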
Hi,

> Yes, this is exactly our point: to do migration without allowing two
> primaries.
>
> We have tested the other ways to migrate, but none seems to work. Saving
> the vm doesn't stop it from being listed in xm list. If I destroy the
> saved vm, then the restore fails. So our conclusion is that a saved vm
> can't be restored on another host.

You're running into issues because of the way Xen stores the configurations and what happens when you do an xm list. For example:

    Name              ID   Mem VCPUs   State   Time(s)
    My_DomU                2048     1             34.1
    Domain-0           0  12073     4  r-----   2622.7

My_DomU is not running (notice the lack of state info) but is still in the xm list - this means that if I move the configuration file and the paused state file to the new machine, putting them into the correct locations, put the drbd resource into secondary mode on the originating host and into primary on the destination host, I will then be able to issue xm create My_DomU on the destination host and all will be fine.

> Also, the save feature takes a lot of time to complete, since our VMs
> have 32 GB of RAM, so it creates a 22 GB save file...
>
> I don't know why migration and even live migration need two primaries.
> Why can't the system just switch to secondary for less than a second
> and put the other host in primary to start the vm?

Migration doesn't need to have primary/primary drbd resources. If the domU in question is paused or shut down, you can move it between hosts without needing dual primaries. During the move the drbd resources can be in secondary/secondary mode, as you are not actually moving the disk data, only the configuration (the first time around, or on changes to the config) and the paused state file.

If I want to do a migration between hosts of a domU that is shut down, I copy the config file from the originating server to the destination server, shut down the domU, put the drbd resource on the originating host to secondary, go to the destination host and put the drbd resource into primary, and then start the domU on the destination host. For this to work without having different local domU configs, I make sure that the drbd resource names are exactly the same on both drbd hosts and that the /dev/drbd(minor) numbers also match.

During live migration you only need to have the drbd resources in primary/primary whilst the migration is taking place. This, I would assume, is because when you issue the command to migrate the domain, the resources are checked on the other side, and if they are not ready (which would be the case when the drbd resource is in secondary mode) the migration would not take place. Without the resource being primary there is no way for the originating host to determine that all will be well when it switches over to the destination host.

I understand your concerns regarding dual primaries - I have them too - but I think they can be overcome with understanding and, if needed, some small scripting of the migration process. Of course, this presumes that you have a drbd resource per domU and that drbd resources are not shared between different domUs (migration of a domU is considered to be the same domU).

Rgds

Simon
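Put as a sequence of commands, the shutdown migration Simon describes is roughly the following (host, resource and domU names are placeholders, and the config-file path assumes the stock /etc/xen layout):

    # on xen1 (originating host)
    scp /etc/xen/vm01 xen2:/etc/xen/vm01   # only needed the first time, or when the config changes
    xm shutdown -w vm01                    # -w waits until the domU has actually shut down
    drbdadm secondary vm01

    # on xen2 (destination host)
    drbdadm primary vm01
    xm create vm01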
---- Original Message ----
From: "Bart Coninckx" <bart.coninckx@telenet.be>
To: <xen-users@lists.xensource.com>
Sent: Thursday, August 05, 2010 9:53 AM
Subject: [Xen-users] DRBD iSCSI failover while running Xen guests

> Hi,
>
> has anyone running Xen on top of iSCSI on top of a DRBD active/passive
> cluster?

Yes, I have two Openfiler systems on drbd devices, exporting iSCSI to Xen machines, using heartbeat for HA services.

> Have you tested failovers?

Not as thoroughly as I would like.

> Do Xen guests survive?

It depends. A Win2000 HVM crashes (BSOD); a low-load HVM Linux 2.6.19.7 survives, with some kernel messages about disks going up and down; a heavy-load PV Linux 2.6.31.12-0.2-xen (openSUSE) works flawlessly. A Win2003 with PV drivers survives, logging complaints about slow disk responses.

--
ValeRyo
XT600 "Katoki Pajama" - http://www.slimmit.com/go.asp?7Y9
GamerTag: http://card.mygamercard.net/IT/nxe/ValeRyo76.png
On Friday 13 August 2010 02:12:10 Valerio Granato wrote:

> ---- Original Message ----
> From: "Bart Coninckx" <bart.coninckx@telenet.be>
> To: <xen-users@lists.xensource.com>
> Sent: Thursday, August 05, 2010 9:53 AM
> Subject: [Xen-users] DRBD iSCSI failover while running Xen guests
>
> > Hi,
> >
> > has anyone running Xen on top of iSCSI on top of a DRBD active/passive
> > cluster?
>
> Yes, I have two Openfiler systems on drbd devices, exporting iSCSI to
> Xen machines, using heartbeat for HA services.
>
> > Have you tested failovers?
>
> Not as thoroughly as I would like.
>
> > Do Xen guests survive?
>
> It depends. A Win2000 HVM crashes (BSOD); a low-load HVM Linux 2.6.19.7
> survives, with some kernel messages about disks going up and down; a
> heavy-load PV Linux 2.6.31.12-0.2-xen (openSUSE) works flawlessly.
> A Win2003 with PV drivers survives, logging complaints about slow disk
> responses.

We do the same and my experiences are similar. I haven't done a deliberate failover, but I do notice that Windows HVM guests are way more prone to corruption when ANYTHING happens to their hard disks (which are in fact iSCSI targets).

What we do as an added precaution is daily disk dumps of the guests' OS drives, including memory state (xm save), so we can always revert to these states in case of a catastrophic failure. Remember, DRBD is a high availability solution, not a permanent availability solution. ;-)

But it works well, especially for live migration and the like.

B.
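A rough sketch of that kind of daily dump is below. All names and paths are placeholders, and note that xm save suspends the guest until the matching xm restore, so the guest is down while its disk is being copied:

    #!/bin/sh
    # nightly-dump.sh -- illustration only
    DOMU=vm01
    DISK=/dev/vg0/${DOMU}-disk               # the LV backing the guest's OS drive
    DEST=/backup/$(date +%Y%m%d)
    mkdir -p "$DEST"

    xm save $DOMU "$DEST/$DOMU.state"        # suspend the guest and dump its memory state
    dd if=$DISK of="$DEST/$DOMU.img" bs=1M   # copy the OS disk while the guest is quiesced
    xm restore "$DEST/$DOMU.state"           # resume the guest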