Displaying 20 results from an estimated 200 matches similar to: "Snapshot error "command savevm not found""
2011 Aug 02
1
Snapshot error "command savevm not found"
Attempting to take snapshots of VM using virsh with the following command,
# virsh -c qemu:///system snapshot-create CentOS6-x86-001
Results in the following error,
error: internal error unable to execute QEMU command 'savevm': The command
savevm has not been found
The VM's virtual disks are qcow2. Below is the XML file for this VM:
------------
<domain type='kvm'
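The error above suggests the qemu-kvm build in use simply does not provide the 'savevm' monitor command that internal snapshots rely on. A quick way to confirm, assuming a libvirt recent enough to offer qemu-monitor-command and using the domain name from the report, is to query the human monitor directly:
# virsh -c qemu:///system qemu-monitor-command CentOS6-x86-001 --hmp 'help savevm'
If the command has been compiled out of this build, the monitor should report it as unknown instead of printing the savevm help text.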
2011 Sep 27
2
kvm-qemu: unable to execute QEMU command savevm (monitor missing?)
System: CentOS Linux release 6.0 (final)
Kernel: 2.6.32-71.23.1.el6.x86_64
KVM: QEMU PC emulator version 0.12.1 (qemu-kvm-0.12.1.2)
Libvirt: libvirtd (libvirt) 0.8.1
Hi everyone,
I only recently subscribed to this list and hope you can shed some
light on the following error. I created a VM on my CentOS 6 KVM
machine, used a qcow2 image and wanted to create a snapshot via 'virsh
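Since the qemu-kvm shipped on this host apparently lacks savevm, internal snapshots cannot succeed. A minimal sketch of the external, disk-only alternative, which needs a newer libvirt than the 0.8.1 listed above and where the domain and snapshot names are placeholders:
# virsh snapshot-create-as myvm snap1 --disk-only --atomic
This writes the snapshot out as a new external overlay file instead of asking QEMU for the missing savevm command.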
2014 Sep 20
2
savevm and qemu 2.1.1
hello.
I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do:
"savevm $domain"
the domain is gone. Console says "unable to connect to monitor". Libvirt.log
says:
qemu-system-x86_64: /var/tmp/portage/app-
emulation/qemu-2.1.1/work/qemu-2.1.1/hw/net/virtio-net.c:1348:
virtio_net_save: Assertion `!n->vhost_started' failed.
Googling for this error
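Assuming the default log layout, the assertion quoted above would also land in the per-domain QEMU log, which is usually the quickest place to check after the monitor connection drops ($domain as in the report):
# tail -n 50 /var/log/libvirt/qemu/$domain.log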
2014 Sep 22
1
Re: savevm and qemu 2.1.1
On 22.09.14 10:42, Daniel P. Berrange wrote:
> On Sat, Sep 20, 2014 at 02:13:16PM +0200, Thomas Stein wrote:
>> hello.
>>
>> I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do:
>>
>> "savevm $domain"
>>
>> the domain is gone. Console says "unable to connect to monitor". Libvirt.log
>> says:
>>
2014 Sep 22
0
Re: savevm and qemu 2.1.1
On Sat, Sep 20, 2014 at 02:13:16PM +0200, Thomas Stein wrote:
> hello.
>
> I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do:
>
> "savevm $domain"
>
> the domain is gone. Console says "unable to connect to monitor". Libvirt.log
> says:
>
> qemu-system-x86_64: /var/tmp/portage/app-
>
2015 Feb 05
0
Re: QEMU 2.2.0 managedsave: Unknown savevm section type 5
On 05.02.2015 01:54, Paul Apostolescu wrote:
> Hello,
>
> I am running into issues restoring VMs during reboot for some of my XP
> VMs - the environment is QEMU 2.2.0, libvirt 1.2.12 on CentOS 6.5 with
> KVM and libvirt-guests is set to suspend at shutdown. The weird part is
> Windows 7 is restored properly from the managedsave however XP does not,
> doing a start from virsh
2015 Feb 05
4
QEMU 2.2.0 managedsave: Unknown savevm section type 5
Hello,
I am running into issues restoring VMs during reboot for some of my XP VMs
- the environment is QEMU 2.2.0, libvirt 1.2.12 on CentOS 6.5 with KVM and
libvirt-guests is set to suspend at shutdown. The weird part is Windows 7
is restored properly from the managedsave however XP does not, doing a
start from virsh shows this:
virsh # start xp-check
error: Failed to start domain xp-check
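If the managed save image produced by the older QEMU really is unreadable for the XP guests under 2.2.0, one possible recovery path, at the cost of losing the suspended state, is to discard the image and boot the guest fresh (domain name as in the report):
virsh # managedsave-remove xp-check
virsh # start xp-check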
2016 Nov 21
1
blockcommit and gluster network disk path
Hi,
I'm running into problems with blockcommit and gluster network disks and
wanted to check how to pass the path for network disks. How are the protocol
and host parameters specified?
For a backing volume chain as below, executing
virsh blockcommit fioo5
vmstore/912d9062-3881-479b-a6e5-7b074a252cb6/images/27b0cbcb-4dfd-4eeb-8ab0-8fda54a6d8a4/027a3b37-77d4-4fa9-8173-b1fedba1176c
--base
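One way to see how libvirt itself names a gluster-backed disk and its backing chain, and therefore what blockcommit expects for --base and --top, is sketched below; 'fioo5' is the domain from the report, 'vda' is an assumed target name, and the index notation only works if the libvirt in use supports it:
# virsh domblklist fioo5 --details
# virsh dumpxml fioo5 | grep -A 4 'source protocol'
# virsh blockcommit fioo5 vda --base 'vda[2]' --top 'vda[1]' --wait --verbose
Referring to chain elements by disk target (or target plus backing index) sidesteps having to reproduce the protocol/host path by hand.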
2017 Nov 17
0
Help with reconnecting a faulty brick
On 11/17/2017 03:41 PM, Daniel Berteaud wrote:
> On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote:
>
>> On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
>>> Any way in this situation to check which file will be healed from
>>> which brick before reconnecting ? Using some getfattr tricks ?
>> Yes, there are afr
2017 Nov 17
2
Help with reconnecting a faulty brick
On Thursday, November 16, 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote:
> On 11/16/2017 12:54 PM, Daniel Berteaud wrote:
> > Any way in this situation to check which file will be healed from
> > which brick before reconnecting ? Using some getfattr tricks ?
> Yes, there are afr xattrs that determine the heal direction for each
> file. The good copy
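A sketch of the getfattr inspection being referred to, run directly against the file on each brick; the brick mount point /mnt/bricks/vmstore and the file path are placeholders here:
getfattr -d -m . -e hex /mnt/bricks/vmstore/<path-inside-brick>   # dump all xattrs in hex
The trusted.afr.* entries in the output are the xattrs that record which copy is considered good and hence which way the heal will go.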
2017 Nov 13
0
Help with reconnecting a faulty brick
On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>
> Could I just remove the content of the brick (including the .glusterfs
> directory) and reconnect ?
>
In fact, what would be the difference between reconnecting the brick
with a wiped FS, and using
gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore
gluster volume add-brick myvol replica 2
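For comparison, a hedged sketch of the complete remove/re-add cycle the quoted commands gesture at, keeping the volume and brick names from the snippet; the exact invocation (in particular the trailing 'force' and the explicit heal) depends on the Gluster version in use:
gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore force
gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
gluster volume heal vmstore full   # trigger a full self-heal onto the re-added brick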
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>
> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>
>> Could I just remove the content of the brick (including the
>> .glusterfs directory) and reconnect ?
>>
>
> In fact, what would be the difference between reconnecting the brick
> with a wiped FS, and using
>
> gluster volume remove-brick vmstore
2011 Oct 15
2
SELinux triggered during Libvirt snapshots
I recently began getting periodic emails from SEalert saying that SELinux is
preventing /usr/libexec/qemu-kvm "getattr" access to the directory where I
store all my KVM virtual machines.
All VMs are stored under /vmstore, which is its own mount point, and every
file and folder under /vmstore currently has the correct context, which was
set by doing the following:
semanage fcontext -a -t
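The command is cut off above; a hedged reconstruction of the usual pattern for labelling a dedicated VM image directory, where the virt_image_t type is an assumption about this setup, would be:
semanage fcontext -a -t virt_image_t "/vmstore(/.*)?"   # persistent file-context rule
restorecon -R -v /vmstore                               # apply it to existing files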
2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote:
>
>
>
> On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>>
>> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>>
>>> Could I just remove the content of the brick (including the
>>> .glusterfs directory) and reconnect ?
>>>
>>
If it is only the brick that is faulty on the bad node,
2017 Nov 13
0
Prevent total volume size reduction
I have a question regarding total volume size of a mounted GlusterFS
volume. At least in a simple replicated volume (2x1) the size of the
volume is the one of the smallest brick. We can extend it live by
extending the corresponding bricks, and the GlusterFS volume will
immediately appear bigger, up to the size of the smallest brick.
Now, I had a problem on my setup, long story short, an LVM
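As a concrete illustration of "extending the corresponding bricks", assuming each brick lives on its own LVM logical volume with an XFS filesystem (all names below are hypothetical), growing the volume live would mean running something like this on every replica:
lvextend -L +100G /dev/vg_bricks/lv_vmstore   # grow the logical volume
xfs_growfs /mnt/bricks/vmstore                # grow the filesystem to match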
2018 Feb 08
0
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
Hi David,
thanks for the added input! I'm taking the liberty of snipping a few
paragraphs to trim this email down a bit.
On Thu, Feb 8, 2018 at 1:07 PM, David Hildenbrand <david@redhat.com> wrote:
>> Just to give an example,
>> https://www.redhat.com/en/blog/inception-how-usable-are-nested-kvm-guests
>> from just last September talks explicitly about how "guests can
2018 Feb 08
0
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
On Thu, Feb 8, 2018 at 2:47 PM, David Hildenbrand <david@redhat.com> wrote:
>> Again, I'm somewhat struggling to understand this vs. live migration —
>> but it's entirely possible that I'm sorely lacking in my knowledge of
>> kernel and CPU internals.
>
> (savevm/loadvm is also called "migration to file")
>
> When we migrate to a file, it
2018 Feb 08
4
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
> Sure, I do understand that Red Hat (or any other vendor) is taking no
> support responsibility for this. At this point I'd just like to
> contribute to a better understanding of what's expected to definitely
> _not_ work, so that people don't bloody their noses on that. :)
Indeed. Nesting is nice to enable, as it works in 99% of all cases. It
just doesn't work when
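For reference, a quick way to check whether nesting is currently enabled on an Intel L0 host, plus the usual way to make it persistent; the AMD equivalent is the kvm_amd module, and the modprobe.d file name is an assumption:
cat /sys/module/kvm_intel/parameters/nested                           # Y or 1 means nested is on
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf   # takes effect once the module is reloaded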
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled:
[root at dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2:
2018 Feb 08
5
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
>
> Hi David, thanks for getting back to us on this.
Hi Florian,
(somebody please correct me if I'm wrong)
>
> I see your point, except the issue Kashyap and I are describing does
> not