Hi,

I'm experimenting with the migration function of virsh. I'm doing the
migration with the following commands:

  virsh migrate --live --persistent --copy-storage-all --verbose \
      --abort-on-error domain qemu+ssh://root@destination/system
  virsh migrate-setmaxdowntime domain 20000

However, sometimes at the end of the migration the guest isn't started
on the destination host, so I'm left with a machine that had an
improper shutdown...

If a live migration isn't reliable, I'm thinking of migrating like this:
- guest is still running
- copy or rsync the disk image to the destination
- shut down the guest
- rsync the disk image again (to minimise the offline period)
- dumpxml the guest and copy the XML to the destination
- define and start the guest on the destination

With the live migration method, the machine will be unresponsive for at
most 20 s = 20000 ms (or less if I choose to). The migration itself will
take up to 5 hours, but I guess I'll plan these nightly.

With the other method the guest will be really offline for the duration
of the (second) rsync. In my case, with disk images from 50 to 150 GB,
the rsync will take at least 30 minutes and every connection will be
closed...

Does anybody have good experience with live migrations (without shared
storage)? Any tips?

Any thoughts on the second method of migrating?

Greetings,

Dominique.
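P.S. Roughly what I have in mind for the manual method; the image path,
domain name and hostname below are only placeholders for my setup:

  # first pass while the guest is still running, to pre-seed the image
  rsync -avP /var/lib/libvirt/images/domain.qcow2 \
      root@destination:/var/lib/libvirt/images/

  # clean shutdown, and wait until the domain is really off
  virsh shutdown domain
  while virsh domstate domain | grep -q running; do sleep 5; done

  # second, much smaller pass while the guest is offline
  rsync -avP /var/lib/libvirt/images/domain.qcow2 \
      root@destination:/var/lib/libvirt/images/

  # copy the definition and bring the guest up on the destination
  virsh dumpxml domain > domain.xml
  scp domain.xml root@destination:/tmp/
  ssh root@destination 'virsh define /tmp/domain.xml && virsh start domain'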
On Thu, Jun 11, 2015 at 07:28:10PM +0200, Dominique Ramaekers wrote:

[...]

> Does anybody have good experience with live migrations (without
> shared storage)? Any tips?
>
> Any thoughts on the second method of migrating?

While I have no immediate answer on why your guest doesn't start on the
destination host, I've previously successfully tested with a qemu+tcp
URI and these three variants, without shared storage:

(1) Native migration, client to two libvirtd servers:

    $ virsh migrate --verbose --copy-storage-all \
        --live cvm1 qemu+tcp://kashyapc@desthost/system

(2) Native migration, peer2peer between two libvirtd servers:

    $ virsh migrate --verbose --copy-storage-all \
        --p2p --live cvm1 qemu+tcp://kashyapc@desthost/system

(3) Tunnelled migration, peer2peer between two libvirtd servers:

    $ virsh migrate --verbose --copy-storage-all \
        --p2p --tunnelled --live cvm1 qemu+tcp://kashyapc@desthost/system

Related notes here:

  https://bugzilla.redhat.com/show_bug.cgi?id=1202453#c19

--
/kashyap
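A prerequisite for the qemu+tcp variants above (assuming libvirtd on the
destination isn't already set up for TCP): libvirtd has to be listening
for TCP connections, which roughly means the following in
/etc/libvirt/libvirtd.conf, plus starting libvirtd with --listen (how
that flag gets passed differs per distro):

  listen_tls = 0
  listen_tcp = 1
  auth_tcp = "sasl"    # or "none", but only on a trusted, isolated network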
I've tried options 1 and 2. Both failed. Unless someone has an idea
about why I have this problem, I'll file a bug report...

________________________________________
From: Kashyap Chamarthy [kchamart@redhat.com]
Sent: Friday, 12 June 2015 13:37
To: Dominique Ramaekers
CC: libvirt-users@redhat.com
Subject: Re: [libvirt-users] Migrating guests

> While I have no immediate answer on why your guest doesn't start on
> the destination host, I've previously successfully tested with a
> qemu+tcp URI and these three variants, without shared storage:
[...]
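One thing that may help with the bug report: capturing libvirtd debug
logs from both hosts around a failed migration, e.g. with something like
this in /etc/libvirt/libvirtd.conf (then restart libvirtd and attach
/var/log/libvirt/libvirtd.log):

  # example debug settings; adjust the filter list as needed
  log_filters="1:qemu 1:libvirt 1:conf 1:security"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"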