similar to: Snapshot error "command savevm not found"

Displaying 20 results from an estimated 400 matches similar to: "Snapshot error "command savevm not found""

2011 Jul 28
0
Snapshot error "command savevm not found"
Attempting to take snapshots of a VM using virsh with the following command: # virsh -c qemu:///system snapshot-create CentOS6-x86-001 Results in the following error: error: internal error unable to execute QEMU command 'savevm': The command savevm has not been found The VM's virtual disks are qcow2. Below is the XML file for this VM ------------ <domain type='kvm'
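A note for anyone hitting the same error: some distribution builds of qemu-kvm, including the CentOS 6 packages, ship with the internal savevm/loadvm monitor commands compiled out, so libvirt's internal snapshots cannot work there. A minimal sketch of how to confirm this and sidestep it with an external snapshot, assuming a reasonably recent libvirt and the domain name from the post:

# Ask the human monitor whether savevm exists at all:
virsh -c qemu:///system qemu-monitor-command CentOS6-x86-001 --hmp 'help savevm'
# External (disk-only) snapshots never call savevm, so they work even
# when the internal command is compiled out:
virsh -c qemu:///system snapshot-create-as CentOS6-x86-001 snap1 --disk-only --atomic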
2011 Sep 27
2
kvm-qemu: unable to execute QEMU command savevm (monitor missing?)
System: CentOS Linux release 6.0 (Final) Kernel: 2.6.32-71.23.1.el6.x86_64 KVM: QEMU PC emulator version 0.12.1 (qemu-kvm-0.12.1.2) Libvirt: libvirtd (libvirt) 0.8.1 Hi everyone, I only recently subscribed to this list and hope you can shed some light on the following error. I created a VM on my CentOS 6 KVM machine, used a qcow2 image and wanted to create a snapshot via 'virsh
2014 Sep 20
2
savevm and qemu 2.1.1
hello. I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do: "savevm $domain" the domain is gone. Console says "unable to connect to monitor". Libvirt.log says: qemu-system-x86_64: /var/tmp/portage/app- emulation/qemu-2.1.1/work/qemu-2.1.1/hw/net/virtio-net.c:1348: virtio_net_save: Assertion `!n->vhost_started' failed. Googling for this error
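A workaround sometimes suggested for this assertion (an assumption on my part, not taken from this excerpt): it fires because vhost is still active while virtio-net state is being saved, so pinning the NIC to the userspace virtio backend avoids starting vhost at all, at some network-performance cost:

virsh edit $domain
# then, inside the affected <interface> element, force the userspace
# backend so vhost is never started:
#   <model type='virtio'/>
#   <driver name='qemu'/>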
2014 Sep 22
1
Re: savevm and qemu 2.1.1
On 22.09.14 10:42, Daniel P. Berrange wrote: > On Sat, Sep 20, 2014 at 02:13:16PM +0200, Thomas Stein wrote: >> hello. >> >> I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do: >> >> "savevm $domain" >> >> the domain is gone. Console says "unable to connect to monitor". Libvirt.log >> says: >>
2015 Feb 05
4
QEMU 2.2.0 managedsave: Unknown savevm section type 5
Hello, I am running into issues restoring VMs during reboot for some of my XP VMs - the environment is QEMU 2.2.0, libvirt 1.2.12 on CentOS 6.5 with KVM, and libvirt-guests is set to suspend at shutdown. The weird part is that Windows 7 is restored properly from the managedsave, but XP is not; doing a start from virsh shows this: virsh # start xp-check error: Failed to start domain xp-check
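If the managed-save image itself is what trips the "Unknown savevm section type" error, one recovery path is to discard it and cold-boot the guest; the suspended state is lost, exactly as with a pulled power cord. A sketch using the domain name from the post:

# Drop the broken managed-save image, then start fresh:
virsh managedsave-remove xp-check
virsh start xp-check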
2014 Sep 22
0
Re: savevm and qemu 2.1.1
On Sat, Sep 20, 2014 at 02:13:16PM +0200, Thomas Stein wrote: > hello. > > I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do: > > "savevm $domain" > > the domain is gone. Console says "unable to connect to monitor". Libvirt.log > says: > > qemu-system-x86_64: /var/tmp/portage/app- >
2015 Feb 05
0
Re: QEMU 2.2.0 managedsave: Unknown savevm section type 5
On 05.02.2015 01:54, Paul Apostolescu wrote: > Hello, > > I am running into issues restoring VMs during reboot for some of my XP > VMs - the environment is QEMU 2.2.0, libvirt 1.2.12 on CentOS 6.5 with > KVM, and libvirt-guests is set to suspend at shutdown. The weird part is > that Windows 7 is restored properly from the managedsave, but XP is not; > doing a start from virsh
2018 Feb 08
5
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
>> In short: there is no (live) migration support for nested VMX yet. So as >> soon as your guest is using VMX itself ("nVMX"), this is not expected to >> work. > > Hi David, thanks for getting back to us on this. Hi Florian, (somebody please correct me if I'm wrong) > > I see your point, except the issue Kashyap and I are describing does > not
2018 Feb 08
4
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
> Sure, I do understand that Red Hat (or any other vendor) is taking no > support responsibility for this. At this point I'd just like to > contribute to a better understanding of what's expected to definitely > _not_ work, so that people don't bloody their noses on that. :) Indeed. Nesting is nice to enable as it works in 99% of all cases. It just doesn't work when
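For context (not from the thread itself), whether nesting is enabled at L0 and exposed to L1 can be checked like this on Intel hosts:

# On the L0 host: is the nested parameter on? (prints Y or 1 when enabled)
cat /sys/module/kvm_intel/parameters/nested
# Inside the L1 guest: is the vmx CPU flag visible?
grep -c vmx /proc/cpuinfo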
2011 Dec 23
2
[help] QEMUFile's format
Hi, Is anyone clear about the format of the QEMUFile used by savevm and loadvm? bruce
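For anyone else digging into this: the savevm stream is QEMU's migration stream format, implemented in savevm.c in the QEMU source. A hedged sanity check, assuming a domain named mydomain and a libvirt recent enough to pass HMP commands through:

# Write the raw migration stream to a file via the human monitor
# (this pauses the guest when the migration completes)...
virsh qemu-monitor-command mydomain --hmp 'migrate "exec:cat > /tmp/state.bin"'
# ...then inspect the header: the stream opens with the 4-byte magic
# "QEVM" (0x5145564d) followed by a 32-bit file-version field.
head -c 8 /tmp/state.bin | xxd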
2011 May 05
3
converting save/dump output into physical memory image
A lot of people in the security community, myself included, are interested in memory forensics these days. Virtualization is a natural fit with memory forensics because it allows one to get access to a guest's memory without having to introduce any extra software into the guest or otherwise interfere with it. Incident responders are particularly interested in getting memory dumps from
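Worth noting for this use case: newer libvirt can write guest physical memory directly as an ELF core, which common memory-forensics tools consume without conversion. A sketch, assuming a domain named guest1:

# Dump only guest memory (no device state) as an ELF core, without
# stopping the guest; the result can be fed to tools like Volatility.
virsh dump guest1 /tmp/guest1.core --memory-only --live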
2016 Nov 21
1
blockcommit and gluster network disk path
Hi, I'm running into problems with blockcommit and gluster network disks - wanted to check how to pass the path for network disks. How are the protocol and host parameters specified? For a backing volume chain as below, executing virsh blockcommit fioo5 vmstore/912d9062-3881-479b-a6e5-7b074a252cb6/images/27b0cbcb-4dfd-4eeb-8ab0-8fda54a6d8a4/027a3b37-77d4-4fa9-8173-b1fedba1176c --base
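For network disks the protocol and host are not passed on the blockcommit command line at all: they live in the <source> element of the domain XML, and --base/--top name the image exactly as it appears in the backing chain. A hedged sketch, with the disk target (vda) assumed for illustration:

# See how libvirt itself names each disk source first:
virsh domblklist fioo5
# Then commit the active layer down, naming images as the chain prints them:
virsh blockcommit fioo5 vda --active --pivot --verbose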
2017 Nov 17
0
Help with reconnecting a faulty brick
On 11/17/2017 03:41 PM, Daniel Berteaud wrote: > On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote: > >> On 11/16/2017 12:54 PM, Daniel Berteaud wrote: >>> Any way in this situation to check which file will be healed from >>> which brick before reconnecting ? Using some getfattr tricks ? >> Yes, there are afr
2017 Nov 17
2
Help with reconnecting a faulty brick
On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote: > On 11/16/2017 12:54 PM, Daniel Berteaud wrote: > > Any way in this situation to check which file will be healed from > > which brick before reconnecting ? Using some getfattr tricks ? > Yes, there are afr xattrs that determine the heal direction for each > file. The good copy
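The getfattr trick referred to above, sketched under the assumption of a replica pair and a brick path like the one in this thread: run it against the same file on each brick and compare the trusted.afr.* counters; non-zero pending counters on one copy, naming the other brick, indicate the direction the heal will take.

# On each node, against that brick's copy of the file:
getfattr -d -m . -e hex /mnt/bricks/vmstore/path/to/file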
2017 Nov 13
0
Help with reconnecting a faulty brick
On 13/11/2017 at 10:04, Daniel Berteaud wrote: > > Could I just remove the content of the brick (including the .glusterfs > directory) and reconnect ? > In fact, what would be the difference between reconnecting the brick with a wiped FS, and using gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore gluster volume add-brick myvol replica 2
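Spelled out, the alternative being weighed there looks roughly like the sketch below, using the vmstore volume and brick from the quote (the quoted add-brick line says myvol, presumably a slip); remove-brick down to replica 1 typically requires force:

# Drop the faulty brick, shrinking to a single replica...
gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore force
# ...then re-add a clean brick, return to replica 2, and heal everything
# back from the surviving copy:
gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore force
gluster volume heal vmstore full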
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote: > > On 13/11/2017 at 10:04, Daniel Berteaud wrote: >> >> Could I just remove the content of the brick (including the >> .glusterfs directory) and reconnect ? >> > > In fact, what would be the difference between reconnecting the brick > with a wiped FS, and using > > gluster volume remove-brick vmstore
2011 Oct 15
2
SELinux triggered during Libvirt snapshots
I recently began getting periodic emails from sealert that SELinux is preventing /usr/libexec/qemu-kvm "getattr" access on the directory where I store all my virtual machines for KVM. All VMs are stored under /vmstore , which is its own mount point, and every file and folder under /vmstore currently has the correct context, set by doing the following: semanage fcontext -a -t
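The quoted command is cut off mid-flag; for reference, a typical labelling for a dedicated VM store looks like the sketch below, with virt_image_t being the usual SELinux type for qemu disk images (the exact type used in the original post is not shown):

# Register the context for the mount point and everything under it...
semanage fcontext -a -t virt_image_t "/vmstore(/.*)?"
# ...then apply it to the existing files:
restorecon -R -v /vmstore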
2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote: > > > > On 13/11/2017 at 21:07, Daniel Berteaud wrote: >> >> On 13/11/2017 at 10:04, Daniel Berteaud wrote: >>> >>> Could I just remove the content of the brick (including the >>> .glusterfs directory) and reconnect ? >>> >> If it is only the brick that is faulty on the bad node,
2016 Apr 19
4
Re: Create multiple domains from single saved domain state (is UUID/name fixed?)
(please don't top-post. Put your responses inline, in context) On 04/19/2016 01:09 PM, Jonas Finnemann Jensen wrote: > virt-builder looks like some fancy guest/host interaction related to > building VM images. > > What I'm looking for is more like: > virsh save running_domain saved-domain-A.img > cp saved-domain-A.img saved-domain-B.img > virsh save-image-edit
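The sequence being discussed, completed into a runnable sketch; the save-image-edit step is where the copied image must be given a new name and UUID so both domains can coexist:

virsh save running_domain saved-domain-A.img
cp saved-domain-A.img saved-domain-B.img
# Edit the domain XML embedded in copy B: change <name> and <uuid>
virsh save-image-edit saved-domain-B.img
virsh restore saved-domain-A.img
virsh restore saved-domain-B.img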
2017 Nov 13
0
Prevent total volume size reduction
I have a question regarding the total volume size of a mounted GlusterFS volume. At least in a simple replicated volume (2x1), the size of the volume is that of the smallest brick. We can extend it live by extending the corresponding bricks, and the GlusterFS volume will immediately appear bigger, up to the size of the smallest brick. Now, I had a problem on my setup; long story short, an LVM