Displaying 20 results from an estimated 20000 matches similar to: "can I undo the snapshot apply of a qcow2 image"
2013 Dec 10
2
virsh attach makes qcow2 format disk to raw format
Hi all
I have a problem: when I use `virsh attach-disk` to attach a qcow2 disk to a VM without the argument --subdriver=qcow2,
the qcow2 disk is treated as a raw-format disk, and the data on this disk is missing in the guest OS.
I have also used guestmount to confirm this, with the same result.
Is there any way to recover the data on this disk?
The versions of libvirt and kvm are:
Compiled
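A minimal sketch of the attach with the format made explicit, so the disk is not probed as raw (the domain name vm1 and target vdb are assumptions):
  virsh attach-disk vm1 /path/to/data.qcow2 vdb --subdriver qcow2 --persistent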
2013 Apr 12
1
after snapshot-delete, the qcow2 image file size doesn't decrease
After snapshot-delete, the qcow2 image file size doesn't decrease; isn't
that a waste of disk space?
Would someone please tell me how to shrink the file after
snapshot-delete, if that's possible?
The image file name of my virtual machine is d0.qcow
As follows:
[root@test1 ]# virsh list
Id Name State
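One commonly used way to reclaim the space is to copy the image, which writes out only the clusters still in use (a sketch, assuming the VM is shut down; d0.qcow is the file name from the post):
  qemu-img convert -O qcow2 d0.qcow d0-compact.qcow   # rewrites only allocated clusters
  mv d0-compact.qcow d0.qcow                          # replace the original after verifying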
2010 May 28
2
Cannot create qcow2 images with libvirt 0.8.1
Hi,
After upgrading to libvirt 0.8.1, I can no longer create empty volumes with no
backing store and an explicit format of qcow2.
This XML volume definition:
<volume>
<name>testserverb-data2.img</name>
<allocation>0</allocation>
<capacity units='G'>20</capacity>
<target>
<format type='qcow2'/>
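For reference, the complete volume definition of this shape, fed to vol-create, would look roughly like this (a sketch built from the quoted fragment; the pool name 'default' and the XML file name are assumptions):
  <volume>
    <name>testserverb-data2.img</name>
    <allocation>0</allocation>
    <capacity units='G'>20</capacity>
    <target>
      <format type='qcow2'/>
    </target>
  </volume>

  virsh vol-create default testserverb-data2.xml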
2014 Apr 11
2
libvirt with glusterfs problem
Hi
I have two nodes, hcg3 and hcg4, which run glusterfs.
libvirtd also runs on each node.
The problem comes when I use `virsh attach-device` to attach a glusterfs-type disk.
My device XML looks like:
<disk type="network" device="disk">
<driver name="qemu" type="raw" cache="none"/>
<source protocol="gluster"
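A complete gluster disk element would look roughly like this (a sketch; the volume name gvol, the image name, and the target dev are assumptions):
  <disk type="network" device="disk">
    <driver name="qemu" type="raw" cache="none"/>
    <source protocol="gluster" name="gvol/image.raw">
      <host name="hcg3" port="24007"/>
    </source>
    <target dev="vdb" bus="virtio"/>
  </disk>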
2013 Dec 12
0
Re: virsh attach makes qcow2 format disk to raw format
On 12/09/2013 09:03 PM, lyz_pro@163.com wrote:
> Hi all
>
> I have a problem: when I use `virsh attach-disk` to attach a qcow2 disk to a VM without the argument --subdriver=qcow2,
> the qcow2 disk is treated as a raw-format disk, and the data on this disk is missing in the guest OS.
> I have also used guestmount to confirm this, with the same result.
The format of a file is important. By
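The real format can always be verified from the host side (a sketch; the path is an assumption):
  qemu-img info /path/to/disk.img   # prints 'file format: qcow2' even if the disk was attached as raw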
2013 Oct 31
1
Undo changes from removed class (ENC)
Hey guys,
I hope someone has an idea for me. At the moment I'm using Puppet Dashboard
as an ENC for a masterless Puppet infrastructure. Now I'm searching for a best
practice to undo the changes from a removed class.
Example: in the ENC I've added the class auth::test to a node. The class
is applied on the next puppet apply, which works fine. But now, if I remove
the class from the node in the ENC, how
2020 Jul 16
1
Cannot pass secret id for backing file after taking external snapshot on encrypted qcow2 file
Hi,
I used 'virsh snapshot-create' to create an encrypted external snapshot. When I try to run 'qemu-img check' on the top file, I find no way to pass the backing file's secret ID.
1. Version
centos-release-8.2-2.2004.0.1.el8.x86_64
libvirt.x86_64 6.0.0-17.el8
qemu-kvm.x86_64
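For a single encrypted image the secret can be passed like this (a sketch using flags qemu-img does accept; file names are assumptions). The poster's point is that there is no equally obvious knob for the backing file of a snapshot chain:
  qemu-img check \
    --object secret,id=sec0,file=passphrase.txt \
    --image-opts driver=qcow2,file.filename=top.qcow2,encrypt.key-secret=sec0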
2006 Feb 27
2
Multiple Undo History
I need to implement multiple-undo functionality in a Rails app (my
first one). Essentially, every .update and .save needs to record the
previous data set and save it in a history table along with some
identifying stamps, so each db action can be rolled back at any time.
From what I can tell, I could implement this manually using
observers perhaps; but this is Rails, so I wonder if
2013 Jan 31
1
Managing Live Snapshots with Libvirt 1.0.1
Hello,
I recently compiled libvirt 1.0.1 and qemu 1.3.0 on Ubuntu 12.04. I have performed live snapshots on VMs using "virsh snapshot-create-as" and then later re-merged the images using "virsh blockpull". I am wondering how I can do a couple of other operations on the images while the VM is running. For example, VM1 is running from the snap3 image, with the following
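The two operations mentioned look roughly like this (a sketch; the snapshot name snap4 and disk vda are assumptions):
  virsh snapshot-create-as VM1 snap4 --disk-only --atomic   # take a live external snapshot
  virsh blockpull VM1 vda --wait --verbose                  # merge the backing chain into the active image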
2012 Feb 15
2
a problem with using qcow2 format image files as virtual disks
Hi,
In most cases we use image files as virtual disks in full virtualization. On Xen 4.1.2, I run the following commands:
############################################################
dd if=/dev/zero of=myraw.img bs=1M seek=8K count=1
# Before the command below, I installed the operating system in myraw.img
qemu-img-xen create -b myraw.img -f qcow2 myqcow1.img 20000M
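The backing relationship can be checked before booting the overlay (a sketch, assuming qemu-img-xen is Xen's build of qemu-img and supports the usual info subcommand):
  qemu-img-xen info myqcow1.img   # should report 'backing file: myraw.img'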
2018 Jun 19
3
Re: Reintroduce "allocate entire disk" checkbox on virt-manager
On 19-06-2018 22:16 Cole Robinson wrote:
> Sorry, I misunderstood. You can still achieve what you want but it's
> more clicks: new vm, manage storage, add volume, and select raw volume
> with whatever capacity you want but with 0 allocation.
Sure, but the automatic disk creation is very handy and much less error
prone.
As it is now, if using a fallocate-less filesystem (eg:
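Fully allocating a raw image by hand is still possible, even on a fallocate-less filesystem, since preallocation=full writes zeros instead of calling fallocate (a sketch; size and path are assumptions):
  qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/disk.img 20G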
2018 Jun 19
2
Re: Reintroduce "allocate entire disk" checkbox on virt-manager
On 19-06-2018 20:14 Cole Robinson wrote:
> If you change the disk image format from qcow2 to raw in
> Edit->Preferences, then new disk images are set to fully allocated raw.
> Check the image details with 'qemu-img info $filename' to confirm. So I
> think by default we are doing what you want?
>
> - Cole
Er, the point is that I would really like to have a
2016 Apr 26
2
Re: stream finish throws exception via python API
On 04/26/2016 09:35 AM, Shahar Havivi wrote:
> On 26.04.16 15:30, Shahar Havivi wrote:
>> On 26.04.16 14:14, Shahar Havivi wrote:
>>> On 25.04.16 09:11, Cole Robinson wrote:
>>>> On 04/25/2016 08:10 AM, Shahar Havivi wrote:
>>>>> On 17.04.16 15:41, Shahar Havivi wrote:
>>>>>> Hi,
>>>>>> The following snippet works
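The quoted snippet is truncated; for context, a minimal volume download over a stream that ends with finish() looks roughly like this in the libvirt python API (a sketch; pool and volume names are assumptions):
  import libvirt

  conn = libvirt.open('qemu:///system')
  pool = conn.storagePoolLookupByName('default')   # pool name is an assumption
  vol = pool.storageVolLookupByName('disk.img')    # volume name is an assumption
  st = conn.newStream()
  vol.download(st, 0, 0)                           # offset 0, length 0 = whole volume

  def handler(stream, data, fobj):
      return fobj.write(data)                      # return bytes consumed

  with open('out.img', 'wb') as f:
      st.recvAll(handler, f)
  st.finish()                                      # raises libvirtError if the transfer failed
  conn.close()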
2015 Nov 24
2
Any risk in sparsifying a base image (that has a snapshot on top of it)
Assuming the VM is not running, and we have a base (raw, sparse) with a
snapshot (qcow2) on top of it.
Is there any issue with running virt-sparsify on the base image? I assume
deleted blocks in the base can be sparsified, since they are either still
deleted on the snap (which is fine) or were written in the snap (which is
also fine and does not change or matter for the base image).
Can I assume
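A sketch of the cautious way to do this without breaking the overlay's backing reference (assuming the guest is shut down):
  virt-sparsify --in-place base.img   # keeps the same file, so the snap's backing path stays valid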
2015 Feb 13
4
libvirt live migration, qcow2 image, nbd server
Hi all,
When I live migrate a vm using
"migrate --live --copy-storage-all mig-vm qemu+ssh://192.168.1.3/system
tcp://192.168.1.3"
I got the following error
WARNING: Image format was not specified for
'nbd://node2:49155/drive-virtio-disk0' and probing guessed raw.
Automatically detecting the format is dangerous for raw
images, write operations on
2013 Sep 11
2
question about backing file path of a qcow2 image
Hi all,
Images in qcow2 format with backing files have two path infos for the backing file (an absolute path and a relative path). How can I make libguestfs use the absolute path rather than the relative one? In some cases the relative path leads to errors.
Thanks
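The recorded backing path can be rewritten to an absolute one without touching the data (a sketch; the paths are assumptions):
  qemu-img rebase -u -b /absolute/path/base.qcow2 overlay.qcow2   # -u only updates the header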
2014 Apr 14
1
libvirt and glusterfs integrate problem
Hi everyone
I have a problem when using libvirt and glusterfs.
When I use libvirt to start a VM with a glusterfs disk, the operation blocks.
After I Ctrl+C the 'virsh start domainxx' command,
`virsh list` shows the VM in a strange state, something like the following:
Id Name State
----------------------------------------------------
20 vm10
2011 Jun 21
2
function to undo the DIFF command in ARIMA command
Hi users.
I'm a new user of R.
I'm working with time series and I would like to know how to undo
the diff(X) command, for example:
If I have the model m = arima(X, order=c(0,1,1),
seasonal=list(order=c(0,0,1))) (note that it has d=1, one difference), then to find,
on the same scale, the original numbers (a kind of "unDiff") after the
forecast, do I need to develop some function, or in
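For a series differenced once, the inverse is just a cumulative sum anchored at the last observed value (a sketch of the algebra, not of any particular R call):
  Y_t = X_t - X_{t-1}, \qquad \hat{X}_{T+h} = X_T + \sum_{i=1}^{h} \hat{Y}_{T+i}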
2010 Dec 05
1
How to set image format to 'qcow2'
Hi, all
My XML file 'dom1.xml' is as follows:
<domain type='kvm' id='1'>
<name>kvm-xp</name>
<description>xp kvm</description>
<memory>524288</memory>
<currentMemory>524288</currentMemory>
<os>
<type>hvm</type>
<!--loader>/usr/lib/xen/boot/hvmloader</loader-->
<boot dev="hd"/>
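The format is set on the <driver> element of the disk inside <devices> (a sketch; the image path and target dev are assumptions):
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/kvm-xp.qcow2'/>
    <target dev='hda' bus='ide'/>
  </disk>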
2019 Feb 26
1
IO rate of tar-in, what can we expect on a qcow2 image?
Hi all,
We have done several speed tests on a qcow2 Linux image to test how fast tar-in with a big
tarball can be. Virtio seems to be active, and we get transfers in the range of 100-160 MB/sec,
independent of the disk speed on the host.
For example, we had a 20-core host system with 900 MB/s for serial writes and 350 MB/s for mixed
read/write on the native filesystem. We expected a faster
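For reference, the kind of test being described looks roughly like this (a sketch; the image, tarball, and target directory are assumptions, and the directory must already exist in the guest):
  guestfish --rw -a disk.qcow2 -i tar-in /tmp/big.tar /data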