similar to: snapshot-create-as for a single disk not all disks

Displaying 20 results from an estimated 1000 matches similar to: "snapshot-create-as for a single disk not all disks"

2013 Jun 25
2
Re: snapshot-create-as for a single disk not all disks
Thanks for your reply! First, I'm very sorry I forgot to introduce the scenario in my experiments. Suppose I have a virtual machine with two disks. One is mounted as the root partition and the other is a data partition; the second disk is an iSCSI LUN, that is to say, not a local disk or image. Now the result I want is to create a snapshot for the root disk but not for the data
2013 Jun 26
2
Re: snapshot-create-as for a single disk not all disks
Try snapshot-create-as like below: virsh snapshot-create-as vm --disk-only --diskspec "vda,snapshot=external" 2013/6/25 cmcc.dylan <dx10years@126.com> > > Hi, everyone, > I have found the API snapshotCreateXML() can create a snapshot for a > virtual machine, and the XML configuration file - snapshot.xml as follows: > <domainsnapshot> >
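Building on that suggestion, --diskspec can be given once per disk, so a minimal sketch for snapshotting only the root disk while explicitly skipping the iSCSI data disk might look like this (the domain name vm, the disk targets vda/vdb and the overlay path are assumptions):

    virsh snapshot-create-as vm rootsnap --disk-only \
        --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/vm-root.overlay \
        --diskspec vdb,snapshot=no

Marking the data disk with snapshot=no keeps it out of the snapshot entirely; a disk left unlisted would otherwise get the default external disk snapshot.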
2013 Jun 25
0
Re: snapshot-create-as for a single disk not all disks
Hi, everyone, I have found the API snapshotCreateXML() can create a snapshot for a virtual machine, and the XML configuration file - snapshot.xml as follows: <domainsnapshot> <name>snapshot01</name> <description>Snapshot of OS install and updates</description> <disks> <disk name='vda' snapshot='external'> <source
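The excerpt above is cut off; per the libvirt domain snapshot XML format, each <disk> element can carry its own snapshot mode, so a sketch of a snapshot.xml that snapshots only vda and explicitly skips vdb (the domain name, the file paths and the vdb target are assumptions) could be fed to virsh snapshot-create:

    cat > snapshot.xml <<'EOF'
    <domainsnapshot>
      <name>snapshot01</name>
      <description>Snapshot of OS install and updates</description>
      <disks>
        <disk name='vda' snapshot='external'>
          <source file='/var/lib/libvirt/images/vm-root.snap'/>
        </disk>
        <disk name='vdb' snapshot='no'/>
      </disks>
    </domainsnapshot>
    EOF
    virsh snapshot-create vm snapshot.xml --disk-only

This is the command-line equivalent of calling snapshotCreateXML() with the same XML.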
2013 Jun 26
0
Re: snapshot-create-as for a single disk not all disks
Thank you! But snapshot-create-as will traverse all disks of the virtual machine and create snapshots for all of them. In my case, I want to create a snapshot of the root disk only. What's more, I understand "--diskspec" is a description of the disk. Do you mean it's used to specify a single disk to snapshot? On 2013-06-26 09:10:31, "Gao Yongwei" <itxx00@gmail.com> wrote:
2013 Jun 24
0
Re: snapshot-create-as for a single disk not all disks
I think what you're looking for is LVM snapshots. The whole purpose of taking VM snapshots is to have a consistent image of the machine as a whole. There are lots of articles out there on LVM snapshots; here's one: http://www.tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html From: libvirt-users-bounces@redhat.com [mailto:libvirt-users-bounces@redhat.com] On Behalf Of cmcc.dylan Sent:
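For the LVM route, a rough sketch (the volume group vg0 and logical volume vm-root are placeholders) of a copy-on-write snapshot taken around a backup:

    lvcreate --snapshot --size 5G --name vm-root-snap /dev/vg0/vm-root
    # ... mount or copy /dev/vg0/vm-root-snap somewhere and back it up ...
    lvremove -f /dev/vg0/vm-root-snap

The --size value only needs to cover the writes that happen while the snapshot exists.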
2013 May 30
2
Question: how to correlate snapshot IDs to the files that they represent?
Hi folks, first post :) I'm running Red Hat 6 x64 with libvirt-0.10.2-18 and qemu-img-rhev-0.12.1.2-2.355 My question is, if I do something like the following... [root@testbox ~]# virsh snapshot-list STIGtest Name Creation Time State ------------------------------------------------------------ 1369421485 2013-05-24 13:51:25 -0500 disk-snapshot
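One way to map a snapshot name like 1369421485 back to files is to dump its XML, which lists the overlay each disk uses; qemu-img then shows the backing file behind a given overlay (the image path here is a placeholder):

    virsh snapshot-dumpxml STIGtest 1369421485 | grep -i 'source file'
    qemu-img info /var/lib/libvirt/images/STIGtest-overlay.img   # "backing file:" names the parent image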
2015 Apr 29
3
unable to edit existing snapshot
Greetings, due to hardware failure I had to replace my workstation which has a different CPU. I have a VM with several snapshots and I need to revert to a specific one. While reverting to it, I get an error due to unsupported CPU features. Therefore, I try to edit the snapshot XML through the command: virsh snapshot-edit <domain_name> <snapshot_name> When I save the changes I get
2023 Jul 18
2
Installation of R-4.3.1 with intel 2022
Note that 'intel 2022' is a bit vague. The current version is 2023.1.0, and that has both the 'classic' (icc/icpc/ifort, which it seems you used) and new (icx/icpx/ifx) compilers -- the former are slated to be discontinued later this year. R did not know about ifx so did not build with the new set. The parts of the manual Tomas referred to were about the old
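Assuming the goal is to try the new compiler set, the usual autoconf approach is to name the compilers explicitly when configuring R (the flags here are only illustrative, and at the time of this thread R's configure reportedly did not yet recognize ifx):

    ./configure CC=icx CXX=icpx FC=ifx CFLAGS='-O2' CXXFLAGS='-O2' FCFLAGS='-O2'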
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi, thank you very much for your support and explanation. If I understand correctly, the ctime xlator feature is not present in the current Gluster package but will be in a future release, right? Thank you again, Mauro > On 02 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote: > > I think it is safe to ignore it. The problem exists due to the
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro, What version of Gluster are you running and what is your volume configuration? IIRC, this was seen because of mismatches in the ctime returned to the client. I don't think there were issues with the files but I will leave it to Ravi and Raghavendra to comment. Regards, Nithya On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Hi All,
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, any news about this issue? Can I ignore this kind of error message, or do I have to do something to correct it? Thank you in advance and sorry for my insistence. Regards, Mauro > On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Nithya, > > thank you very much for your support and sorry for the late reply. > Below
2017 Sep 18
6
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Dear All, I just implemented a 6x(4+2) DISTRIBUTED DISPERSED Gluster (v3.10) volume based on the following hardware: - 3 Gluster servers (each server with 2 CPUs of 10 cores, 64GB RAM, 12 SAS 12Gb/s hard disks, 10GbE storage network) Now we need to add 3 new servers with the same hardware configuration, respecting the current volume topology. If I'm right, we will obtain a DISTRIBUTED
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to the minor difference in file time stamps on the backend bricks of the same subvolume (for a given file); during the course of tar, the timestamp can be served from different bricks, causing it to complain. The ctime xlator[1] feature, once ready, should fix this issue by storing time stamps as xattrs on the bricks, i.e. all bricks
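Until that lands, one workaround (not a fix) is GNU tar's warning control; note that tar still exits with status 1 when a file changed while being read, so scripts may want to treat that code as non-fatal. A sketch with placeholder paths:

    tar --warning=no-file-changed -czf backup.tar.gz /path/to/data
    rc=$?
    [ "$rc" -eq 1 ] && rc=0   # exit status 1 only means some files changed or differed during the run
    exit $rc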
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
After adding 3 more nodes you will have 6 nodes, with 2 HDs on each node. It depends on the way you are going to add new bricks to the existing volume 'vol'. I think you should remember that in a given EC subvolume of 4+2, at any point in time 2 bricks can be down. When you grow 6 x (4+2) to 12 x (4+2) you have to provide the paths of the bricks you want to add. Suppose you want to add 6
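A sketch of what one such add-brick step could look like (hostnames and brick paths are placeholders). For a disperse count of 6, bricks are added in multiples of 6, each group of 6 forming a new 4+2 sub-volume; the order of the bricks decides the grouping, and gluster may ask for 'force' if it notices several bricks of one sub-volume on the same server:

    gluster volume add-brick vol \
        server4:/bricks/brick1 server4:/bricks/brick2 \
        server5:/bricks/brick1 server5:/bricks/brick2 \
        server6:/bricks/brick1 server6:/bricks/brick2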
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya, thank you very much for your support and sorry for the late reply. Below you can find the output of the "gluster volume info tier2" command and the Gluster software stack version: gluster volume info Volume Name: tier2 Type: Distributed-Disperse Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c Status: Started Snapshot Count: 0 Number of Bricks: 6 x (4 + 2) = 36 Transport-type: tcp Bricks:
2023 Jun 20
1
Installation of R-4.3.1 with intel 2022
Hi all, I have the issue: icc -std=c99 -std=gnu11 -I../../src/extra -I../../src/extra/xdr -I. -I../../src/include -I../../src/include -I/usr/local/include -I../../src/nmath -DHAVE_CONFIG_H -fopenmp -fpic -g -O3 -wd188 -ip -mp -c eval.c -o eval.o arithmetic.c(66): warning #274: declaration is not visible outside of function int matherr(struct exception *exc) ^
2013 Sep 13
3
Regarding libvirt usage
Hi Team, I am using the libvirt module to retrieve the configuration of virtual machines. Can you please tell me how to retrieve the disk space of the virtual machines? On my KVM hypervisor I am running two virtual machines. Regards Manzoor
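For a quick look from the shell, virsh can report per-disk sizes; a sketch (the domain name vm1 and the target vda are placeholders):

    virsh domblklist vm1          # list the disk targets (vda, vdb, ...) and their source files
    virsh domblkinfo vm1 vda      # Capacity / Allocation / Physical, in bytes

The same numbers are available programmatically through the domain blockInfo() call of the libvirt bindings.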
2011 Aug 09
7
Disk IDs and DD
Hiya, Is there any reason (and anything to worry about) if disk target IDs don't start at 0 (zero)? For some reason mine are like this (3 controllers - 1 onboard and 2 PCIe); AVAILABLE DISK SELECTIONS: 0. c8t0d0 <ATA -ST9160314AS -SDM1 cyl 19454 alt 2 hd 255 sec 63> /pci@0,0/pci10de,cb84@5/disk@0,0 1. c8t1d0 <ATA -ST9160314AS -SDM1
2015 Sep 15
2
libvirt 1.19: could not open drive file (permission denied)
Hi, With libvirt 1.18 everything works fine: I can open my Win7 VM on Arch Linux without problems. But I cannot use it with libvirt 1.19: it could not open the drive file (permission denied). In /etc/libvirt/qemu.conf, I have: - user: root - group: root And the drive file has root:root as owner. Why does this configuration work with libvirt 1.18 and not with libvirt 1.19? Many thanks for
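A minimal sketch of the settings the poster describes, plus the daemon restart needed for them to take effect (whether the regression between the two libvirt versions is really tied to these ownership settings is only a guess here):

    # /etc/libvirt/qemu.conf
    user = "root"
    group = "root"

    systemctl restart libvirtd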
2006 Sep 13
10
Snapshots and backing store
Hi, There's something really bizarre in the ZFS snapshot specs: "Uses no separate backing store." Hmm... if I want to dedicate one physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there any chance of having a "backing-store-file" option in a future release? Along the same lines, it would be great to