similar to: Problem with the use of domfsfreeze mountpoint option

Displaying 20 results from an estimated 9000 matches similar to: "Problem with the use of domfsfreeze mountpoint option"

2014 Nov 12
3
Re: Problem with the use of domfsfreeze mountpoint option
On 11.11.2014 22:51, Eric Blake wrote: > On 11/11/2014 01:58 PM, Payes Anand wrote: >> Hi everybody, >> >> I am having a problem with the use of the domfsfreeze command. >> >> It is freezing all the filesystems present on the domain, >> >> instead of freezing just the mountpoints provided. >> >> I am issuing the command-- >> >> #
2014 Nov 11
0
Re: Problem with the use of domfsfreeze mountpoint option
On 11/11/2014 01:58 PM, Payes Anand wrote: > Hi everybody, > > I am having a problem with the use of the domfsfreeze command. > > It is freezing all the filesystems present on the domain, > > instead of freezing just the mountpoints provided. > > I am issuing the command-- > > # virsh domfsfreeze <domain> --mountpoint <mountpoint> > > Output
2014 Dec 11
2
Freeze Windows Guests For Consistent Storage Snapshots
Hi, Is it possible to freeze Windows guests for a consistent storage level snapshot. I am using OpenStack Icehouse on CentOS 6.6 Hypervisor: KVM Libvirt: 0.10.2 Qemu: 0.10.2 Guest OS: Windows 7 and Windows Server 2008 I was able to freeze CentOS guests by issuing the command: virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-freeze"}' For CentOS
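For reference, the raw agent calls pair a freeze with a thaw around the storage-level snapshot. A minimal sketch, assuming the guest agent is installed and its channel is connected (<guest_ID> is a placeholder, and the snapshot step itself is backend-specific):
  virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-freeze"}'
  # ... take the storage-level snapshot here ...
  virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-thaw"}'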
2014 Nov 14
2
Re: Problem with the use of domfsfreeze mountpoint option
On 11/12/14 22:17, Eric Blake wrote: > On 11/12/2014 10:24 AM, Michal Privoznik wrote: ... >> Although, I see a way that we could get something reasonable here. If >> qemu would tell us whenever somebody (dis-)connects (from)to the virtio >> channel. That way we could query the qemu-ga capabilities and make good >> decisions. And whenever we see a disconnect, we may
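The capability query discussed there can already be done by hand against a running agent. A sketch, assuming the agent is reachable; guest-info reports the agent's version and its supported_commands list:
  virsh qemu-agent-command <domain> '{"execute":"guest-info"}'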
2020 Feb 17
2
RE: guest-fsfreeze-freeze freezes all mounted block devices
Hi Peter, Should I assume that virsh domfsfreeze does not require the qemu-agent service in the guest? PS: I couldn't find the result. AFAIK it returns the number of frozen/thawed filesystems. -----Original Message----- Cc: libvirt-users Subject: Re: guest-fsfreeze-freeze freezes all mounted block devices On Fri, Feb 14, 2020 at 22:14:55 +0100, Marc Roos
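The return value the poster is guessing at can be observed directly from virsh. A sketch, with <domain> as a placeholder and the exact output wording assumed from virsh's usual message format:
  virsh domfsfreeze <domain>
  Froze 2 filesystem(s)
  virsh domfsthaw <domain>
  Thawed 2 filesystem(s)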
2020 Feb 14
2
guest-fsfreeze-freeze freezes all mounted block devices
I wondered if anyone here can confirm that virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}' freezes the filesystems on all mounted block devices. So if I use 4 block devices, are they all frozen for snapshotting, or just the root fs?
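One way to probe this: guest-fsfreeze-freeze itself replies with the number of filesystems it froze, and guest-fsfreeze-status reports the overall state. A sketch, with the reply values illustrative:
  virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'
  {"return":4}
  virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-status"}'
  {"return":"frozen"}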
2014 Nov 17
0
Re: Problem with the use of domfsfreeze mountpoint option
As stated in the below link from libvirt-list, https://www.redhat.com/archives/libvir-list/2014-May/msg00033.html > OpenStack cinder provides these storages' snapshot feature, but it cannot > quiesce the guest filesystems automatically for now. > > This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh > domfsfreeze/domfsthaw commands to enable the users to
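The API pair from that patch series surfaces in virsh as domfsfreeze/domfsthaw with optional mountpoints. A minimal sketch, with the domain name and mountpoint hypothetical:
  virsh domfsfreeze <domain> --mountpoint /var/lib/mysql
  # ... take the storage snapshot here ...
  virsh domfsthaw <domain> --mountpoint /var/lib/mysql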
2014 Nov 12
0
Re: Problem with the use of domfsfreeze mountpoint option
On 11/12/2014 10:24 AM, Michal Privoznik wrote: >> What version of qemu-guest-agent is running in the guest? >> qemu-guest-agent doesn't support per-mountpoint freezing until the >> introduction of guest-fsfreeze-freeze-list in qemu 2.2 (still >> unreleased). >> >>> >>> --Upgraded libvirt to 1.2.10, but that also didn't solve the problem.
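Once a guest agent from qemu 2.2 is in place, the per-mountpoint freeze can be exercised as a raw call. A sketch, with the mountpoint list illustrative:
  virsh qemu-agent-command <domain> '{"execute":"guest-fsfreeze-freeze-list","arguments":{"mountpoints":["/mnt/data"]}}'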
2020 Feb 17
2
RE: guest-fsfreeze-freeze freezes all mounted block devices
Hmmm, using 'virsh domfsinfo testdom' gives me a crash in win2008r2 (using software from virtio-win-0.1.171.iso) Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: qemu-ga.exe P2: 100.0.0.0 P3: 5c473543 P4: KERNELBASE.dll P5: 6.1.7601.24545 P6: 5e0eb6bd P7: c0000005 P8: 000000000000c4d2 P9: P10: Attached files: These files may be
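For comparison, on a guest whose agent behaves, domfsinfo prints one row per mounted filesystem. A sketch of the expected shape (column layout and values assumed, not taken from the crashing guest):
  virsh domfsinfo testdom
  Mountpoint   Name   Type   Target
  -----------------------------------
  C:\          ...    NTFS   vda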
2014 Nov 23
3
Live Disk Snapshot Not Supported
# virsh snapshot-create-as small snap1 --disk-only --atomic error: Operation not supported: live disk snapshot not supported with this QEMU binary OS used: CentOS 7 # virsh version Compiled against library: libvirt 1.1.1 Using library: libvirt 1.1.1 Using API: QEMU 1.1.1 Running hypervisor: QEMU 1.5.3 Any help would be greatly appreciated. Best Regards, Payes
2014 Dec 11
0
Re: Freeze Windows Guests For Consistent Storage Snapshots
On 12/10/2014 11:40 PM, Payes Anand wrote: > Hi, > Is it possible to freeze windows guests for a consistent storage level > snapshot. Yes, if you install qemu-guest-agent in the guest, and wire up your libvirt XML to have the guest-agent channel available. Once you have done that, use the --quiesce flag as part of creating your snapshots. > I was able to freeze CentOS guests by
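The "wire up" step means adding a virtio-serial channel for the agent to the domain XML, after which --quiesce works. A minimal sketch of the usual stanza and a quiesced snapshot (snap1 is a placeholder name; libvirt fills in the channel's host-side socket path):
  <channel type='unix'>
    <target type='virtio' name='org.qemu.guest_agent.0'/>
  </channel>

  virsh snapshot-create-as <domain> snap1 --disk-only --atomic --quiesce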
2020 Feb 17
0
Re: guest-fsfreeze-freeze freezes all mounted block devices
On Mon, Feb 17, 2020 at 10:03:27 +0100, Marc Roos wrote: > Hi Peter, > > Should I assume that the virsh domfsfreeze, does not require the > qemu-agent service in the guest? No. That's the official way how to drive the "guest-fsfreeze-freeze" agent command via libvirt, thus you must have the guest agent set up the same way as you used it before. Using qemu-agent-command is a
2020 Feb 17
0
Re: guest-fsfreeze-freeze freezes all mounted block devices
On Fri, Feb 14, 2020 at 22:14:55 +0100, Marc Roos wrote: > > I wondered if anyone here can confirm that > > virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}' Note that libvirt implements this directly via 'virsh domfsfreeze'. This is the corresponding man page entry: domfsfreeze domain [[--mountpoint] mountpoint...]
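Per that synopsis, more than one mountpoint can be passed. A sketch, with hypothetical paths:
  virsh domfsfreeze domain --mountpoint /mnt/a --mountpoint /mnt/b
  virsh domfsthaw domain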
2014 Oct 29
2
Re: KVM incremental backup using CBT
On 10/29/2014 01:07 PM, Thomas Stein wrote: > About the --quiesce option. Do I need to do something inside the VM? The most > common case would probably be an SQL server running inside the VM. Do I need to > tell the SQL server something about the --quiesce option I use? I read this > article here which suggests such a procedure. Okay, it's VMware, but... Is > that right? For
2015 Dec 03
3
Re: snapshot of running vm's
> -----Original Message----- > From: Lentes, Bernd [mailto:bernd.lentes@helmholtz-muenchen.de] > Sent: Thursday, 3 December 2015 13:54 > To: libvirt-ML > CC: Dominique Ramaekers > Subject: RE: snapshot of running vm's > > > ... > > > > > > Hi, > > > > > > I have inserted: > > > > > > <channel
2014 Nov 26
1
Re: Live Disk Snapshot Not Supported
Hi, The package installed is qemu-kvm-1.5.3-60.el7_0.10.x86_64 on CentOS 7. I have upgraded my libvirt and qemu also, still I am getting the same error ( Operation not supported: live disk snapshot not supported with this QEMU binary ) # virsh version Compiled against library: libvirt 1.2.10 Using library: libvirt 1.2.10 Using API: QEMU 1.2.10 Running hypervisor: QEMU 2.0.0 On Mon, Nov 24,
2014 Nov 21
1
Re: some problem with snapshot by libvirt
Eric Blake <eblake@...> writes: > > On 05/27/2012 06:39 PM, xingxing gao wrote: > > Hi all, I am using libvirt to manage my VMs; these days I am testing > > libvirt snapshots, but have met some problems: > > > > the snapshot was created from this command: > > snapshot-create-as win7 --disk-only --diskspec > > vda,snapshot=external --diskspec
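Written out in full, that external disk-only snapshot takes one --diskspec per disk. A sketch, with the overlay file path hypothetical:
  virsh snapshot-create-as win7 snap1 --disk-only \
    --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/win7-snap1.qcow2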
2007 Dec 23
7
Help with error "undefined method `downcase' for nil:NilClass" after migration
Hi all, I have a rails 1.5.2 application. I've frozen the application via the "rake rails:freeze:gems" command. This application worked well on a server I previously had it installed on. My server was getting really slow, and I requested that I be moved to a new server. When I perform a "gem list rails" command on my new host, I receive only version 1.2.6. Since
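The freeze workflow in question pins Rails into vendor/rails, keyed off the version the app requests. A sketch of the usual checks, with the version value hypothetical:
  gem list rails                 # see which Rails versions the new host provides
  # in config/environment.rb, pin the version the app expects:
  #   RAILS_GEM_VERSION = '1.2.6'
  rake rails:freeze:gems         # copy the activated Rails into vendor/rails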
2023 Apr 05
1
backup-begin
The reason given is shut off (crashed). So something virsh backup-begin does is causing the guest to crash? On 2023-04-04 at 16:58, Peter Krempa wrote: > On Tue, Apr 04, 2023 at 16:28:18 +0200, André Malm wrote: >> Hello, >> >> For some vms the virsh backup-begin sometimes shuts off the vm and returns >> "error: operation failed: domain is not running"
2008 Oct 07
4
gluster over infiniband....
Hey guys, I am running gluster over infiniband, and I have a couple of questions. We have four servers, each with 1 disk that I am trying to access over infiniband using gluster. The servers look like they start okay, here are the last 10 or so lines of a client log (they are all identical): 2008-10-07 07:18:40 D [spec.y:196:section_sub] parser: child:stripe0->remote1 2008-10-07 07:18:40 D
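The stripe0->remote1 parser lines suggest a legacy-style client spec roughly like the following. A sketch only, with hostnames and volume names hypothetical and the exact transport-type string dependent on the glusterfs version in use:
  volume remote1
    type protocol/client
    option transport-type ib-verbs
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume stripe0
    type cluster/stripe
    subvolumes remote1 remote2 remote3 remote4
  end-volume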