Displaying 20 results from an estimated 30000 matches similar to: "Live migration without block copy"
2012 Jul 10
2
Live Block Migration with additional attached storage
Dear all,
I am planning to use live block migration with one VM running on a local disk
that also has an additional disk attached from iSCSI or other shared storage.
During block migration, not only is the local VM disk copied to the destination,
but so is the additional disk attached from shared storage. That is not
desired in this situation.
I just want the local VM disk to be copied. Is there any way to do this
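Later libvirt releases grew an option for exactly this; a minimal sketch
(domain and host names are assumptions; requires libvirt >= 1.2.17):

    # Copy only the local vda during block migration; the shared iSCSI
    # disk is left alone:
    virsh migrate --live --copy-storage-all --migrate-disks vda \
        myguest qemu+ssh://desthost/system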
2010 Feb 24
0
live migration not working correctly
I have two servers running Ubuntu 9.10, with shared disk from an iSCSI SAN
and OCFS2, identical network configurations, shared ssh keys, and PKI
keys. My VM will boot from either machine and run correctly. When I
attempt to do a live migration with "migrate --live base32 qemu
+[ssh/tcp]://vm1/system", it initiates the VM on the other server, but
leaves the VM on the current server in the
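A fuller form of that command also persists the domain on the destination
and removes the stale definition from the source (same names as above):

    virsh migrate --live --persistent --undefinesource base32 qemu+ssh://vm1/system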
2018 Feb 07
0
Samba Migration and AD integration
Hi Rowland,
Following https://wiki.samba.org/index.php/Changing_the_DNS_Back_End_of_a_Samba_AD_DC, I ran some tests migrating from BIND9 to the Samba internal DNS backend, with the following results:
Stopped the BIND and Samba-AD-DC services
samba_upgradedns --dns-backend=SAMBA_INTERNAL
Reading domain information
DNS accounts already exist
Reading records from zone file
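As a runnable sketch of those steps (service names vary by distro and are
assumptions here):

    systemctl stop named samba-ad-dc
    samba_upgradedns --dns-backend=SAMBA_INTERNAL
    systemctl start samba-ad-dc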
2009 Nov 11
2
Lost raid when server reboots
Hi all,
I have set up a RAID1 between two iSCSI disks, and the mdadm command goes well.
The problems start when I reboot the server (CentOS 5.4, fully updated): the
RAID is lost, and I don't understand why.
"mdadm --detail --scan" doesn't return any output. "mdadm --examine --scan"
returns:
ARRAY /dev/md0 level=raid1 num-devices=2
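A common cause is that the array definition was never persisted to
/etc/mdadm.conf; a sketch of the usual fix (with iSCSI members, the initiator
must also be up before assembly is attempted):

    mdadm --examine --scan >> /etc/mdadm.conf
    mdadm --assemble --scan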
2014 Nov 05
0
Excluding block device from mdadm scan at boot
Using CentOS 6, how do I prevent mdadm from assembling arrays from
specific block devices at boot?
Background:
Due to an accident, one of my servers went down and on reboot got
stuck first at NFS statd, then at automount after I disabled NFS in
single-user mode. Only after disabling autofs was the server able to
complete booting into runlevel 3.
On investigation, it seems that mdadm added two
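One approach (a sketch, not necessarily the fix the thread settled on):
restrict which devices mdadm scans in /etc/mdadm.conf, then rebuild the
initramfs so it applies at boot:

    # Scan only these devices instead of the default "DEVICE partitions":
    echo 'DEVICE /dev/sda* /dev/sdb*' >> /etc/mdadm.conf
    dracut -f    # regenerate the initramfs on CentOS 6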
2006 Apr 05
0
Hack Attack: Moving domU's between hosts without shared disk
So I have two hosts that I want to move domU's between, but they do
not have any shared disk. I am willing to suffer some downtime to move
the domU's, but not a lot (like maybe the time needed to do a reboot,
but certainly not the time required to copy the virtual disks over).
This is what I did.
WARNING: THIS IS A HACK AND YOU SHOULD BE CAREFUL DOING THIS.
But it worked for me
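One classic shape for this kind of no-shared-disk move (a sketch with assumed
paths and names, not necessarily the author's exact hack): copy the disk image
while the domU is running, then pause it and re-copy only the changed blocks
before restoring on the destination:

    rsync -a /var/lib/xen/images/domU1.img otherhost:/var/lib/xen/images/
    xm save domU1 /tmp/domU1.save        # pauses the domU and snapshots RAM
    rsync -a /var/lib/xen/images/domU1.img otherhost:/var/lib/xen/images/
    scp /tmp/domU1.save otherhost:/tmp/
    ssh otherhost xm restore /tmp/domU1.save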
2019 Oct 18
0
Re: [libvirt] Some questions about live migration
On Fri, Oct 18, 2019 at 15:00:19 +0800, Luyao Zhong wrote:
> Hi libvirt experts,
>
> I have some questions about live migration.
I'm assuming you are not asking about post-copy migration, since it is a
bit more complicated: the current state is split between the source and
destination hosts, and neither of them can keep running without the
other until migration finishes.
> * If a
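For reference, post-copy is opt-in and two-step in virsh (the flags are real;
domain and host names are assumptions):

    virsh migrate --live --postcopy guest qemu+ssh://dest/system
    # once pre-copy is running, flip the job over to post-copy:
    virsh migrate-postcopy guest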
2008 Apr 01
1
RAID1 migration - /dev/md1 is not there
I am trying to convert an existing IDE one-disk system to RAID1 using the
general strategy found here:
http://lists.centos.org/pipermail/centos/2005-March/003813.html
But I am stuck on one thing - when I went to create the second md device with
mdadm,
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb2 missing
mdadm: error opening /dev/md1: No such file or directory
And indeed,
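On systems of that era the fix was usually to create the missing device node
by hand (major 9 is the md driver, minor 1 for md1):

    mknod /dev/md1 b 9 1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb2 missing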
2006 May 17
1
Response to query re: calculating intraclass correlations
Karl,
If you use one of the specialized packages to calculate your ICC, make sure that you know what you're getting. (I haven't checked the packages out myself, so I don't know either.)
You might want to read David Futrell's article in the May 1995 issue of Quality Progress where he describes six different ways to calculate ICCs from the same data set, all with different
2014 Nov 24
2
Re: Libvirt Live Migration
Thanks for your answer,
1) In this case I'm not using shared storage; the migration happens
between two hosts without shared storage, with a full disk copy.
2) I already created the destination file with the same size, and the VM
works properly after a restart; my problem is that it remains paused and I
can't resume it until it's rebooted.
3) I'm using qemu-kvm 1.0
On 24 November 2014 at 10:22, Michal
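For context, pre-creating the destination image is the usual prerequisite for
--copy-storage-all on older libvirt; a sketch (path, format, and size are
assumptions, and must match the source disk):

    qemu-img create -f qcow2 /var/lib/libvirt/images/guest.qcow2 20G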
2008 Apr 12
0
Problems with xm migrate --live
Hello,
I have 2 Dell 1955 blade servers running RHEL5-Xen. I'm testing
the migrate functionality from one blade to another. I can start the
domain, move it to one blade (minor delay/packet loss), and everything
is fine. When I try to move it back to the original blade, the
migration fails and the DomU crashes.
c1b1 = Blade 1 (192.168.131.201)
c1b2 = Blade 2
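For reference, the xm-era command under test looks like this (the domU name is
an assumption; xend-relocation-server must be enabled in xend-config.sxp on
both blades):

    xm migrate --live mydomU 192.168.131.201    # back to c1b1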
2015 Aug 18
1
Live migration & storage copy broken since 1.2.17
Hi,
It seems that live migration using storage copy is broken since libvirt
1.2.17.
Here is the command line used to do the migration using virsh:
virsh migrate --live --p2p --persistent --undefinesource --copy-storage-all \
    d2b545d3-db32-48d3-b7fa-f62ff3a7fa18 qemu+tcp://dest/system
XML dump of my storage:
<pool type='logical'>
<name>local</name>
2017 Jun 04
1
vm live migration memory copy
Hi All,
I am wondering: when we do live migration, how much memory is transferred? I
guess it must be one of the three below, but I'm not sure which one:
1. The amount of memory allocated to VM
2. The memory currently used by VM (actual Mem usage in guest)
3. The memory used by the qemu process (RSS).
Best Regards,
Hui
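For what it's worth, the actual transfer can be observed rather than guessed:
virsh reports migration job statistics while the job runs (the domain name
below is an assumption):

    virsh domjobinfo guest
    # during a live migration this reports "Memory processed",
    # "Memory remaining", and "Memory total"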
2007 Sep 04
0
Keep hard disks (volumes) synchronized over the network for live migration on Linux
Hi,
I have 5 identical servers running Xen 3.0.3 (Debian etch standard
packages) with several DomUs.
Here's my question: is it possible to keep their LVM volume groups or
single logical volumes synchronized over the network?
Example:
dom01 contains:
domU1
on volume group xenserver1
with logical volume domU1
and LV domU1-swp
domU2
on volume group xenserver1
with logical volume domU2
and LV
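The classic era-appropriate answer is DRBD layered under the Xen disks; a
minimal sketch of one replicated volume (hostnames, addresses, and the second
host are assumptions):

    resource domU1 {
        protocol C;                          # synchronous replication
        on dom01 {
            device    /dev/drbd0;
            disk      /dev/xenserver1/domU1;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on dom02 {
            device    /dev/drbd0;
            disk      /dev/xenserver1/domU1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }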
2011 Oct 19
2
Live CD boot for KVM guest. How?
Hi,
My host and guest are CentOS 6. The guest is going to be a web server in
production. I am trying to resize (extend) the base partition of my
guest. I could of course start the installation of the CentOS 6 guest all
over again with a larger image size. However, just for the sake of
better understanding, I am trying to solve this in a way that won't end
up in a dead end after some years.
1. I
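A common live-CD route for this (a sketch; the image path is an assumption):
grow the backing image from the host, then boot the guest from a live ISO to
grow the partition and filesystem inside:

    qemu-img resize /var/lib/libvirt/images/guest.img +10G
    # then point the guest's cdrom at a live ISO, put it first in the boot
    # order (virsh edit), and grow the partition from the live environment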
2019 Oct 18
1
[libvirt]Some questions about live migration
Hi libvirt experts,
I have some questions about live migration.
* If a live migration fails while migrating, will the domain exist on the
destination host?
* Does the flag VIR_MIGRATE_PAUSED make sense for live migration? It's a little
confusing to me. Does it indicate that if I set this flag, the domain on
the destination will not disappear even if the migration fails, and it will
in
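For reference, VIR_MIGRATE_PAUSED is exposed in virsh as --suspend: on success
the domain is left paused on the destination instead of being resumed (domain
and host names are assumptions):

    virsh migrate --live --suspend guest qemu+ssh://dest/system
    virsh resume guest    # run on the destination when ready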
2020 Jul 30
0
Re: libvirt segfaults with "internal error: Missing monitor reply object" during block live-migration
On Thu, Jul 30, 2020 at 16:13:09 +0200, Alex Walender wrote:
> Dear libvirt community,
>
>
> Using recent Ubuntu Stein Cloud Packages, we are observing random
> libvirtd live-migration crashes on the target host.
> Libvirt hits a SEGFAULT in the QEMU driver. Transferring block
> devices usually works without issues.
> However, the following memory transfer is
2020 Jul 30
0
Re: libvirt segfaults with "internal error: Missing monitor reply object" during block live-migration
On 7/30/20 4:13 PM, Alex Walender wrote:
> Dear libvirt community,
> libvirt-daemon 5.0.0-1ubuntu2.6~cloud0
Also, this is oldish libvirt. Is there a way you could check something
more recent (if not the current HEAD)?
It's likely that the bug has already been fixed.
Michal
2019 Jun 21
0
Intermittent live migration hang with ceph RBD attached volume
Software in use:
Source hypervisor: QEMU stable-2.12 branch, libvirt v3.2-maint branch, OS: CentOS 6
Destination hypervisor: QEMU stable-2.12 branch, libvirt v4.9-maint branch, OS: CentOS 7
I'm experiencing an intermittent live migration hang of a virtual machine
(KVM) with a ceph RBD volume attached.
At a high level, what I see when this does happen is that the virtual
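When chasing this kind of hang, the live migration state can be queried
straight from QEMU (the domain name is an assumption):

    virsh qemu-monitor-command guest --pretty '{"execute":"query-migrate"}'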
2011 Oct 28
1
live migration error without shared storage
command:
virsh migrate --live --copy-storage-all vm qemu+ssh://destinationHost/system
error: unexpected failure
PS: the log file is in the attached JPG
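"unexpected failure" from virsh is usually explained in more detail in the
per-domain QEMU log on one of the two hosts (the default log path, using this
poster's domain name "vm"):

    tail -n 50 /var/log/libvirt/qemu/vm.log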