shyu
2014-Jul-22 04:01 UTC
[libvirt-users] Problem about Disk Size of Destination and Source File after Doing Blockcopy
Hi There,

There is a case I met concerning the destination file's disk size after doing a blockcopy. Details below.

Env:

# rpm -q libvirt qemu-kvm-rhev
libvirt-1.1.1-29.el7.x86_64
qemu-kvm-rhev-1.5.3-60.el7ev_0.2.x86_64

1. Check source file

# qemu-img info /var/lib/libvirt/images/rhel6.img
image: /var/lib/libvirt/images/rhel6.img
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 1.2G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

# ll -lash /var/lib/libvirt/images/rhel6.img
1.2G -rw-r--r--. 1 qemu qemu 1.2G Jul 22 11:04 /var/lib/libvirt/images/rhel6.img

Here: Disk Size is "1.2G"

2. Do blockcopy

# virsh domblklist rhel6
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/rhel6.img

# time virsh blockcopy rhel6 vda /var/lib/libvirt/images/copy.img --wait --verbose
Block Copy: [100 %]
Now in mirroring phase

real    0m21.285s
user    0m0.038s
sys     0m0.018s

3. Check destination file's disk size

# qemu-img info /var/lib/libvirt/images/copy.img
image: /var/lib/libvirt/images/copy.img
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 2.0G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

# ll -lash /var/lib/libvirt/images/copy.img
2.1G -rw-------. 1 qemu qemu 1.2G Jul 22 11:10 /var/lib/libvirt/images/copy.img

Here: Disk Size is "2.0G"

So my question is: why is the "disk size" different between the source file and the destination file? (Actually, the destination's size changes every time I try this; sometimes it is the same size as the source, sometimes it is larger than the source.)

4. Do --pivot

After --pivot, the disk size is the same as right after the blockcopy.

-- 
Regards
shyu
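The gap between "virtual size" and "disk size" seen above is ordinary sparse-file behavior and can be reproduced without qemu at all. A minimal sketch using only coreutils (the file name sparse.img is arbitrary):

```shell
# Apparent size vs. allocated ("disk") size of a sparse file.
# qemu-img's "disk size" is the allocated figure for the image file.
truncate -s 100M sparse.img    # 100 MiB apparent size, nothing allocated yet

apparent=$(stat -c %s sparse.img)                                     # bytes
allocated=$(( $(stat -c %b sparse.img) * $(stat -c %B sparse.img) ))  # bytes on disk
echo "before write: apparent=$apparent allocated=$allocated"

# Write 1 MiB of real data; only now does the filesystem allocate blocks.
dd if=/dev/urandom of=sparse.img bs=1M count=1 conv=notrunc,fsync 2>/dev/null
allocated=$(( $(stat -c %b sparse.img) * $(stat -c %B sparse.img) ))
echo "after write:  apparent=$apparent allocated=$allocated"
```

On ext4-like filesystems the first line typically reports allocated=0, and after the write roughly 1 MiB is allocated while the apparent size stays at 100 MiB throughout.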
Eric Blake
2014-Jul-22 12:52 UTC
Re: [libvirt-users] Problem about Disk Size of Destination and Source File after Doing Blockcopy
On 07/21/2014 10:01 PM, shyu wrote:
>
> # rpm -q libvirt qemu-kvm-rhev
> libvirt-1.1.1-29.el7.x86_64
> qemu-kvm-rhev-1.5.3-60.el7ev_0.2.x86_64

These are downstream builds. Can you reproduce your situation with
upstream libvirt 1.2.6 and qemu 2.1-rc2? It may be that you are hitting
behavior that was introduced by downstream backports.

> 1. Check source file
> # qemu-img info /var/lib/libvirt/images/rhel6.img
> image: /var/lib/libvirt/images/rhel6.img
> file format: qcow2
> virtual size: 5.0G (5368709120 bytes)
> disk size: 1.2G

Disk size tracks how much of the qcow2 file has been allocated, NOT how
much guest data has been allocated.

> 3. Check destination file's disk size
>
> # qemu-img info /var/lib/libvirt/images/copy.img
> image: /var/lib/libvirt/images/copy.img
> file format: qcow2
> virtual size: 5.0G (5368709120 bytes)
> disk size: 2.0G

The thing to remember here is that blockcopy defaults to doing a cluster
at a time, even if the guest has not yet touched every sector within the
cluster. It may be that you are hitting cases where the copy operation
ends up writing an entire cluster in the destination where only a
partial cluster had been allocated in the source. But that does not
necessarily mean the copy is flawed, only that the default granularity
was large enough to inflate the destination with redundant all-zero
sectors in the interest of speeding up the operation, or that the
destination is not as sparse as the source. Qemu offers the
'granularity' parameter to the 'drive-mirror' command to alter the
granularity, but libvirt is not (currently) exposing this knob to the
user, so for now libvirt is just relying on qemu defaults.

It may also be a factor of how much copy-on-write dirtying is happening.
If the guest is actively hammering on the disk during the copy
operation, the same cluster may be marked dirty multiple times; if qemu
allocates a new destination cluster for each pass through the dirty
bitmap, it may result in some inflation in size due to clusters that are
written early, then discarded as they are later rewritten in a new
allocation. I'm not familiar enough with qemu block handling to know if
this is happening, or even whether qemu could be patched to do better
garbage collection of clusters left unused if it is happening.

There is nothing that libvirt can do about this. I don't think it is a
bug, but you may want to ask on the qemu list, since it is up to qemu
whether or not the copy will be inflated in host size. But inflation is
not a bad thing in itself - the real question is whether the copy
contains the same guest contents as the original at the time the copy
completed. As long as that is the case, even if the host sizes are
different, then the copy is reliable.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
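The cluster-granularity inflation described above has a simple filesystem analogue: writing a single byte still consumes a whole block on disk, just as drive-mirror copying whole 64 KiB clusters can allocate more in the destination than the guest ever wrote. A minimal sketch (assuming a typical filesystem with a 4 KiB block size):

```shell
# One byte of data still costs a full filesystem block on disk --
# the same rounding that inflates a blockcopy destination when
# qemu mirrors whole 64 KiB clusters.
printf 'x' > one_byte.img
sync                                    # flush so the allocation is final
bytes=$(stat -c %s one_byte.img)        # apparent size: 1
on_disk=$(( $(stat -c %b one_byte.img) * $(stat -c %B one_byte.img) ))
echo "apparent=${bytes} on_disk=${on_disk}"
```

On a 4 KiB-block filesystem this reports on_disk=4096 for a single byte of data, a 4096x inflation from the same rounding effect, only at block rather than cluster granularity.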
shyu
2014-Jul-27 02:28 UTC
Re: [libvirt-users] Problem about Disk Size of Destination and Source File after Doing Blockcopy
Hi Eric,

On 07/22/2014 08:52 PM, Eric Blake wrote:
> On 07/21/2014 10:01 PM, shyu wrote:
>
>> # rpm -q libvirt qemu-kvm-rhev
>> libvirt-1.1.1-29.el7.x86_64
>> qemu-kvm-rhev-1.5.3-60.el7ev_0.2.x86_64
>
> These are downstream builds. Can you reproduce your situation with
> upstream libvirt 1.2.6 and qemu 2.1-rc2? It may be that you are hitting
> behavior that was introduced by downstream backports.

I tried with qemu-kvm-2.1.0-0.5.rc3.fc20.x86_64 and libvirt built from
the latest libvirt.git, and I am able to reproduce it.

>> 1. Check source file
>> # qemu-img info /var/lib/libvirt/images/rhel6.img
>> image: /var/lib/libvirt/images/rhel6.img
>> file format: qcow2
>> virtual size: 5.0G (5368709120 bytes)
>> disk size: 1.2G
>
> Disk size tracks how much of the qcow2 file has been allocated, NOT how
> much guest data has been allocated.
>
>> 3. Check destination file's disk size
>>
>> # qemu-img info /var/lib/libvirt/images/copy.img
>> image: /var/lib/libvirt/images/copy.img
>> file format: qcow2
>> virtual size: 5.0G (5368709120 bytes)
>> disk size: 2.0G

Thanks very much for your explanation.

> The thing to remember here is that blockcopy defaults to doing a cluster
> at a time, even if the guest has not yet touched every sector within the
> cluster. It may be that you are hitting cases where the copy operation
> ends up writing an entire cluster in the destination where only a
> partial cluster had been allocated in the source. But that does not
> necessarily mean the copy is flawed, only that the default granularity
> was large enough to inflate the destination with redundant all-zero
> sectors in the interest of speeding up the operation, or that the
> destination is not as sparse as the source. Qemu offers the
> 'granularity' parameter to the 'drive-mirror' command to alter the
> granularity, but libvirt is not (currently) exposing this knob to the
> user, so for now libvirt is just relying on qemu defaults.
>
> It may also be a factor of how much copy-on-write dirtying is happening.
> If the guest is actively hammering on the disk during the copy
> operation, the same cluster may be marked dirty multiple times; if qemu
> allocates a new destination cluster for each pass through the dirty
> bitmap, it may result in some inflation in size due to clusters that are
> written early, then discarded as they are later rewritten in a new
> allocation. I'm not familiar enough with qemu block handling to know if
> this is happening, or even whether qemu could be patched to do better
> garbage collection of clusters left unused if it is happening.
>
> There is nothing that libvirt can do about this. I don't think it is a
> bug, but you may want to ask on the qemu list, since it is up to qemu
> whether or not the copy will be inflated in host size. But inflation is
> not a bad thing in itself - the real question is whether the copy
> contains the same guest contents as the original at the time the copy
> completed. As long as that is the case, even if the host sizes are
> different, then the copy is reliable.

-- 
Regards
shyu
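The closing point of the thread, that matching guest contents rather than matching host size is what makes the copy reliable, can be checked mechanically. For qcow2 images, `qemu-img compare` performs a content-level comparison; the sketch below shows the same idea on raw files, where `cmp` proves the contents equal while `du` reports different allocations (file names are arbitrary):

```shell
# Two files with identical contents can occupy different amounts of
# disk: allocation (du / "disk size") is not a correctness signal.
truncate -s 10M src.img                 # sparse 10 MiB source
dd if=/dev/urandom of=src.img bs=1M count=1 conv=notrunc,fsync 2>/dev/null
cp --sparse=never src.img dst.img       # fully allocate the copy
sync

cmp src.img dst.img && echo "contents identical"
du -h src.img dst.img                   # ~1M for src, ~10M for dst
```

`cmp` exits 0, confirming the copy is byte-for-byte faithful even though `du` shows the destination holding several times the source's allocation.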