Displaying 20 results from an estimated 4000 matches similar to: "savevm and qemu 2.1.1"
2014 Sep 22 · 1 · Re: savevm and qemu 2.1.1
On 22.09.14 10:42, Daniel P. Berrange wrote:
> On Sat, Sep 20, 2014 at 02:13:16PM +0200, Thomas Stein wrote:
>> hello.
>>
>> I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do:
>>
>> "savevm $domain"
>>
>> the domain is gone. Console says "unable to connect to monitor". Libvirt.log
>> says:
>>
2014 Sep 22 · 0 · Re: savevm and qemu 2.1.1
On Sat, Sep 20, 2014 at 02:13:16PM +0200, Thomas Stein wrote:
> hello.
>
> I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do:
>
> "savevm $domain"
>
> the domain is gone. Console says "unable to connect to monitor". Libvirt.log
> says:
>
> qemu-system-x86_64: /var/tmp/portage/app-
>
2011 Sep 27 · 2 · kvm-qemu: unable to execute QEMU command savevm (monitor missing?)
System: CentOS Linux release 6.0 (final)
Kernel: 2.6.32-71.23.1.el6.x86_64
KVM: QEMU PC emulator version 0.12.1 (qemu-kvm-0.12.1.2)
Libvirt: libvirtd (libvirt) 0.8.1
Hi everyone,
I only recently subscribed to this list and hope you can shed some
light on the following error. I created a VM on my Centos 6 KVM
machine, used a qcow2 image and wanted to create a snapshot via 'virsh
2011 Aug 02 · 1 · Snapshot error "command savevm not found"
Attempting to take snapshots of VM using virsh with the following command,
# virsh -c qemu:///system snapshot-create CentOS6-x86-001
Results in the following error,
error: internal error unable to execute QEMU command 'savevm': The command
savevm has not been found
The VM's virtual disks are qcow2. Below is the XML file for this vm
------------
<domain type='kvm'
2015 Feb 05 · 4 · QEMU 2.2.0 managedsave: Unknown savevm section type 5
Hello,
I am running into issues restoring VMs during reboot for some of my XP VMs
- the environment is QEMU 2.2.0, libvirt 1.2.12 on CentOS 6.5 with KVM and
libvirt-guests is set to suspend at shutdown. The weird part is Windows 7
is restored properly from the managedsave however XP does not, doing a
start from virsh shows this:
virsh # start xp-check
error: Failed to start domain xp-check
2015 Feb 05 · 0 · Re: QEMU 2.2.0 managedsave: Unknown savevm section type 5
On 05.02.2015 01:54, Paul Apostolescu wrote:
> Hello,
>
> I am running into issues restoring VMs during reboot for some of my XP
> VMs - the environment is QEMU 2.2.0, libvirt 1.2.12 on CentOS 6.5 with
> KVM and libvirt-guests is set to suspend at shutdown. The weird part is
> Windows 7 is restored properly from the managedsave however XP does not,
> doing a start from virsh
2018 Feb 08 · 5 · Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
>
> Hi David, thanks for getting back to us on this.
Hi Florian,
(somebody please correct me if I'm wrong)
>
> I see your point, except the issue Kashyap and I are describing does
> not
2018 Feb 08 · 4 · Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
> Sure, I do understand that Red Hat (or any other vendor) is taking no
> support responsibility for this. At this point I'd just like to
> contribute to a better understanding of what's expected to definitely
> _not_ work, so that people don't bloody their noses on that. :)
Indeed. Nesting is nice to enable, as it works in 99% of all cases. It
just doesn't work when
2011 Dec 23 · 2 · [help] QEMUFile's format
Hi,
Is anyone clear about the format of qemu file for savevm or loadvm?
bruce
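The on-disk savevm/migration stream that the question asks about begins with an 8-byte header followed by one-byte section-type tags; the constants below come from QEMU's migration/savevm.c (2.x era), and the exact set of section types is an assumption, since it has grown over QEMU versions. Note that type 0x05, QEMU_VM_SUBSECTION, is the "section type 5" seen in the managedsave error elsewhere in these results. A minimal header-parsing sketch:

```python
import struct

# Constants from QEMU's migration/savevm.c (QEMU 2.x era); the section-type
# table is an assumption -- it has been extended over time.
QEMU_VM_FILE_MAGIC = 0x5145564D   # big-endian "QEVM"
QEMU_VM_FILE_VERSION = 0x00000003

SECTION_TYPES = {
    0x00: "QEMU_VM_EOF",
    0x01: "QEMU_VM_SECTION_START",
    0x02: "QEMU_VM_SECTION_PART",
    0x03: "QEMU_VM_SECTION_END",
    0x04: "QEMU_VM_SECTION_FULL",
    0x05: "QEMU_VM_SUBSECTION",
    0x06: "QEMU_VM_VMDESCRIPTION",
    0x07: "QEMU_VM_CONFIGURATION",
}

def parse_header(stream):
    """Check the 8-byte savevm stream header and name the first
    section-type byte that follows it."""
    magic, version = struct.unpack(">II", stream[:8])
    if magic != QEMU_VM_FILE_MAGIC:
        raise ValueError("not a QEMU savevm stream: magic %#x" % magic)
    if version != QEMU_VM_FILE_VERSION:
        raise ValueError("unsupported stream version %d" % version)
    section = stream[8]
    return SECTION_TYPES.get(section, "unknown (%#x)" % section)

# Synthetic 9-byte stream: header plus one QEMU_VM_SECTION_START tag.
demo = struct.pack(">II", QEMU_VM_FILE_MAGIC, QEMU_VM_FILE_VERSION) + b"\x01"
print(parse_header(demo))  # -> QEMU_VM_SECTION_START
```

For real files, note that libvirt's save/managedsave images wrap this stream in libvirt's own header plus the domain XML, so the QEVM magic is not at offset 0 there.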
2012 Jul 06 · 5 · [RFC V3 0/5] Multiqueue support for tap and virtio-net/vhost
Hello all:
This series is an update of the last version of multiqueue support, adding
multiqueue capability to both tap and virtio-net.
Some kinds of tap backends have (macvtap in Linux) or would (tap) support
multiqueue. In such a tap backend, each file descriptor of a tap is a
queue, and ioctls were provided to attach an existing tap file descriptor
to the tun/tap device. So the patch lets qemu
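The attach-an-fd-as-a-queue mechanism the series describes boils down to two ioctls from <linux/if_tun.h>. A minimal sketch that recomputes the request numbers rather than performing the ioctls (which would need /dev/net/tun and CAP_NET_ADMIN); the constants are taken from a mainline kernel on x86-64, not from the patch series itself:

```python
import struct

# Recompute tun/tap ioctl request numbers from <linux/if_tun.h> and
# <asm-generic/ioctl.h> (x86-64 _IOC layout). These values are
# assumptions checked against a mainline kernel header.
_IOC_WRITE = 1
def _IOW(type_chr, nr, size):
    return (_IOC_WRITE << 30) | (size << 16) | (ord(type_chr) << 8) | nr

TUNSETIFF   = _IOW("T", 202, struct.calcsize("i"))  # create/open a tap queue
TUNSETQUEUE = _IOW("T", 217, struct.calcsize("i"))  # attach/detach a queue fd

IFF_TAP          = 0x0002
IFF_NO_PI        = 0x1000
IFF_MULTI_QUEUE  = 0x0100   # one queue per open fd on the same device
IFF_ATTACH_QUEUE = 0x0200   # ifr_flags values for TUNSETQUEUE
IFF_DETACH_QUEUE = 0x0400

def multiqueue_ifreq(name):
    """Build the struct ifreq payload for TUNSETIFF with multiqueue on:
    16-byte interface name followed by a short flags field."""
    flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE
    return struct.pack("16sH", name.encode(), flags)

print(hex(TUNSETIFF), hex(TUNSETQUEUE))  # -> 0x400454ca 0x400454d9
```

In actual use, each queue is an fd: open /dev/net/tun N times, issue TUNSETIFF with the same name and IFF_MULTI_QUEUE on each, then enable or disable a given fd with TUNSETQUEUE and IFF_ATTACH_QUEUE/IFF_DETACH_QUEUE.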
2011 May 05 · 3 · converting save/dump output into physical memory image
A lot of people in the security community, myself included, are
interested in memory forensics these days. Virtualization is a natural
fit with memory forensics because it allows one to get access to a
guest's memory without having to introduce any extra software into the
guest or otherwise interfere with it. Incident responders are
particularly interested in getting memory dumps from
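One concrete route to what this thread wants, under the assumption of a newer libvirt: `virsh dump --memory-only` writes an ELF core whose PT_LOAD segments carry the guest's physical RAM, so walking the program headers recovers a physical memory image. A sketch of that segment walk (ELF64 little-endian assumed; the function name is mine):

```python
import struct

PT_LOAD = 1

def load_segments(core):
    """Yield (guest_phys_addr, file_offset, size) for each PT_LOAD
    segment of an ELF64 little-endian core file, such as the output of
    'virsh dump --memory-only'. Concatenating the segments in physical-
    address order reconstructs the guest's RAM image."""
    assert core[:4] == b"\x7fELF" and core[4] == 2  # ELF64 only
    e_phoff, = struct.unpack_from("<Q", core, 32)           # program header table
    e_phentsize, e_phnum = struct.unpack_from("<HH", core, 54)
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        # p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz
        p_type, _flags, p_offset, _vaddr, p_paddr, p_filesz = \
            struct.unpack_from("<IIQQQQ", core, off)
        if p_type == PT_LOAD:
            yield p_paddr, p_offset, p_filesz
```

A forensics tool would then seek to each file_offset and copy size bytes to guest_phys_addr in a flat output file, leaving holes for unmapped ranges.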
2012 Jun 25 · 4 · [RFC V2 PATCH 0/4] Multiqueue support for tap and virtio-net/vhost
Hello all:
This series is an update of the last version of multiqueue support, adding
multiqueue capability to both tap and virtio-net.
Some kinds of tap backends have (macvtap in Linux) or would (tap) support
multiqueue. In such a tap backend, each file descriptor of a tap is a
queue, and ioctls were provided to attach an existing tap file descriptor
to the tun/tap device. So the patch lets qemu
2016 Apr 19 · 4 · Re: Create multiple domains from single saved domain state (is UUID/name fixed?)
(please don't top-post. Put your responses inline, in context)
On 04/19/2016 01:09 PM, Jonas Finnemann Jensen wrote:
> virt-builder looks like some fancy guest/host interaction related to
> building VM images.
>
> What I'm looking for is more like:
> virsh save running_domain saved-domain-A.img
> cp saved-domain-A.img saved-domain-B.img
> virsh save-image-edit
2006 May 17 · 1 · Documentation for taper in spec.taper (PR#8871)
Full_Name: Michael Stein
Version: Version 2.1.1
OS: linux
Submission from: (NULL) (128.135.149.112)
The documentation for spec.taper says
p: The total proportion to be tapered, either a scalar or a
vector of the length of the number of series.
Details:
The cosine-bell taper is applied to the first and last 'p[i]/2'
observations of time series 'x[,
2018 Feb 08 · 0 · Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
Hi David,
thanks for the added input! I'm taking the liberty to snip a few
paragraphs to trim this email down a bit.
On Thu, Feb 8, 2018 at 1:07 PM, David Hildenbrand <david@redhat.com> wrote:
>> Just to give an example,
>> https://www.redhat.com/en/blog/inception-how-usable-are-nested-kvm-guests
>> from just last September talks explicitly about how "guests can
2016 Apr 20 · 2 · Re: Create multiple domains from single saved domain state (is UUID/name fixed?)
On Tue, Apr 19, 2016 at 03:22:18PM -0700, Jonas Finnemann Jensen wrote:
>>
>> You'll also need to change the name and uuid of the domain at the very
>> least.
>
>Agree, but is that possible with libvirt?
>
Not in a supported way. But you can, technically, edit the save file
(not using virsh), change the name and uuid and restore it. But don't
seek help when
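The unsupported edit this reply describes amounts to changing <name> and <uuid> in the domain XML embedded in the copied save image, for example via `virsh save-image-edit` on the copy. A minimal sketch of just that XML rewrite, on a hypothetical stripped-down domain definition (a real one also carries devices, memory, and so on, and cloned disks and MAC addresses would need attention too):

```python
import uuid
import xml.etree.ElementTree as ET

def clone_domain_xml(xml_text, new_name):
    """Return a copy of a libvirt domain XML with a fresh <name> and
    <uuid> -- the minimum needed for libvirt to treat the restored
    state as a distinct domain."""
    root = ET.fromstring(xml_text)
    root.find("name").text = new_name
    root.find("uuid").text = str(uuid.uuid4())
    return ET.tostring(root, encoding="unicode")

# Hypothetical minimal XML standing in for the definition embedded in
# saved-domain-B.img.
original = ("<domain type='kvm'><name>domain-A</name>"
            "<uuid>11111111-2222-3333-4444-555555555555</uuid></domain>")
print(clone_domain_xml(original, "domain-B"))
```

As the reply warns, this is not a supported path: the two restored guests would still share disk image paths and MAC addresses unless those are changed as well.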
2018 Feb 08 · 0 · Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
On Thu, Feb 8, 2018 at 2:47 PM, David Hildenbrand <david@redhat.com> wrote:
>> Again, I'm somewhat struggling to understand this vs. live migration —
>> but it's entirely possible that I'm sorely lacking in my knowledge of
>> kernel and CPU internals.
>
> (savevm/loadvm is also called "migration to file")
>
> When we migrate to a file, it