Displaying 20 results from an estimated 22 matches for "nvmx".

2013 Feb 21 (2 replies)
[PATCH v3] x86/nhvm: properly clean up after failure to set up all vCPU-s
...called for a vCPU
that the corresponding init function was never run on.
Once at it, also remove a redundant check from the corresponding
parameter validation code.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Make sure we fully tear down nHVM when the parameter gets set to 0.
v2: nVMX fixes required by 26486:7648ef657fe7 and 26489:83a3fa9c8434.
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3918,18 +3918,20 @@ long do_hvm_op(unsigned long op, XEN_GUE
}
if ( a.value > 1 )
rc = -EINVAL;
- if (...

2013 Jan 21 (6 replies)
[PATCH v3 0/4] nested vmx: enable VMCS shadowing feature
Changes from v2 to v3:
- Use pfn_to_paddr() to get the address from the frame number instead of doing the shift directly.
- Remove some unnecessary initialization code and add "static" to vmentry_fields and gpdptr_fields.
- Enable the VMREAD/VMWRITE bitmap only if nested hvm is enabled.
- Use clear_page() to zero the page instead of memset().
- Use domheap to allocate the

2018 Feb 08 (1 reply)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...Kashyap Chamarthy wrote:
>
> [...]
>
>> Sounds like a similar problem as in
>> https://bugzilla.kernel.org/show_bug.cgi?id=198621
>>
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
>
> Actually, live migration with nVMX _does_ work insofar as you have
> _identical_ CPUs on both source and destination — i.e. use the QEMU
> '-cpu host' for the L1 guests. At least that's been the case in my
> experience....

2018 Feb 07 (5 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
....
>
> My L0 CPU is: Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz.
>
> Thoughts?
Sounds like a similar problem as in
https://bugzilla.kernel.org/show_bug.cgi?id=198621
In short: there is no (live) migration support for nested VMX yet. So as
soon as your guest is using VMX itself ("nVMX"), this is not expected to
work.
--
Thanks,
David / dhildenb
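(Not from David's message: two standard checks to confirm whether you are in this situation. The sysfs path and CPU flag below are the stock KVM ones.)

  # On the L0 host: is nested VMX enabled in kvm_intel at all?
  $ cat /sys/module/kvm_intel/parameters/nested
  # Inside the L1 guest: is VMX exposed to it, i.e. is it an "nVMX" user?
  $ grep -c -w vmx /proc/cpuinfo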

2018 Feb 08 (1 reply)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...ap Chamarthy wrote:
>
> [...]
>
> > Sounds like a similar problem as in
> > https://bugzilla.kernel.org/show_bug.cgi?id=198621
> >
> > In short: there is no (live) migration support for nested VMX yet. So as
> > soon as your guest is using VMX itself ("nVMX"), this is not expected to
> > work.
>
> Actually, live migration with nVMX _does_ work insofar as you have
> _identical_ CPUs on both source and destination — i.e. use the QEMU
> '-cpu host' for the L1 guests. At least that's been the case in my
> experience...

2012 Aug 23 (2 replies)
[PATCH] nvmx: fix resource relinquish for nested VMX
The previous order of relinquishing resources is:
relinquish_domain_resources() -> vcpu_destroy() -> nvmx_vcpu_destroy().
However, some L1 resources such as nv_vvmcx and io_bitmaps are freed in
nvmx_vcpu_destroy(), so relinquish_domain_resources() will not reduce
the refcnt of the domain to 0, and the later vcpu release functions
will not be called.
To fix this issue, we need to release t...

2018 Feb 08 (5 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
>
> Hi David, thanks for getting back to us on this.
Hi Florian,
(somebody please correct me if I'm wrong)
>
> I see your point, except the issue Kashyap and I are describing does
> not occur with live migration, it occurs with...

2013 Aug 22 (9 replies)
[PATCH v3 0/4] Nested VMX: APIC-v related bug fixing
From: Yang Zhang <yang.z.zhang@Intel.com>
The following patches fix an issue where an L2 guest fails to boot on an APIC-v
capable machine. The main problem is that with APIC-v, virtual interrupt injection into
L1 is done entirely through APIC-v. But if a virtual interrupt arrives while L2 is running,
L1 will detect the interrupt through a vmexit with reason "external interrupt". If this happens,
we should update

2018 Feb 12 (1 reply)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
On Fri, Feb 09, 2018 at 12:02:25PM +0100, Florian Haas wrote:
> On Fri, Feb 9, 2018 at 11:48 AM, Kashyap Chamarthy <kchamart@redhat.com> wrote:
[...]
> > I've made some minor edits to clarify a bunch of bits, and added a link to the
> > Kernel doc about Intel nVMX. (Hope that looks fine.)
>
> I'm sure it does, but just so you know I currently don't see any
> edits from you on the Nested Guests page. Are you sure you
> saved/published your changes?
Thanks for catching that. _Now_ it's updated.
https://www.linux-kvm.org/pa...

2018 Feb 09 (2 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...ree edits I've submitted to the wiki:
> https://www.linux-kvm.org/page/Special:Contributions/Fghaas
>
> Feel free to ruthlessly edit/roll back anything that is inaccurate.
> Thanks!
I've made some minor edits to clarify a bunch of bits, and added a link to the
Kernel doc about Intel nVMX. (Hope that looks fine.)
You wrote: "L2...which does no further virtualization". Not quite true
— "under right circumstances" (read: sufficiently huge machine with tons
of RAM), L2 _can_ in turn run an L3. :-)
Last time I checked (this morning), Rich W.M. Jones had 4 levels of
nest...

2018 Feb 08 (4 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...e migrate to a file, it really is the same migration stream. You
"dump" the VM state into a file, instead of sending it over to another
(running) target.
Once you load your VM state from that file, it is a completely fresh
VM/KVM environment. So you have to restore all the state. Now, as nVMX
state is not contained in the migration stream, you cannot restore that
state. The L1 state is therefore "damaged" or incomplete.
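(Illustration, not part of the original mail: "migrate to a file" really is the ordinary migration stream pointed at a file, which is easy to see from the libvirt side. The domain name 'l1guest' and the file path are made up.)

  $ virsh save l1guest /var/tmp/l1guest.sav                   # same stream, written to a file
  $ virsh migrate --live l1guest qemu+ssh://otherhost/system  # same stream, sent to a peer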
[...]
>>> Kashyap, can you think of any other limitations that would benefit
>>> from improved documentation?
>>
>> We should c...

2012 Dec 10 (26 replies)
[PATCH 00/11] Add virtual EPT support Xen.
...ess case
EPT: Make ept data structure or operations neutral
nEPT: Try to enable EPT paging for L2 guest.
nEPT: Sync PDPTR fields if L2 guest in PAE paging mode
nEPT: Use minimal permission for nested p2m.
nEPT: handle invept instruction from L1 VMM
nEPT: expose EPT capability to L1 VMM
nVMX: Expose VPID capability to nested VMM.
xen/arch/x86/hvm/hvm.c | 7 +-
xen/arch/x86/hvm/svm/nestedsvm.c | 31 +++
xen/arch/x86/hvm/svm/svm.c | 3 +-
xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
xen/arch/x86/hvm/vmx/vmx.c | 76 +++++-...

2018 Feb 08 (0 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...d wrote:
> On 07.02.2018 16:31, Kashyap Chamarthy wrote:
[...]
> Sounds like a similar problem as in
> https://bugzilla.kernel.org/show_bug.cgi?id=198621
>
> In short: there is no (live) migration support for nested VMX yet. So as
> soon as your guest is using VMX itself ("nVMX"), this is not expected to
> work.
Actually, live migration with nVMX _does_ work insofar as you have
_identical_ CPUs on both source and destination — i.e. use the QEMU
'-cpu host' for the L1 guests. At least that's been the case in my
experience. FWIW, I frequently use that...
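(A minimal sketch of "use the QEMU '-cpu host' for the L1 guests"; the disk image and memory size are arbitrary, and the libvirt equivalent is <cpu mode='host-passthrough'/>.)

  $ qemu-system-x86_64 -enable-kvm -cpu host -m 4096 \
        -drive file=l1-guest.qcow2,if=virtio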

2018 Feb 08 (0 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...ng? I'd be happy to summarize those and
>> add them to the linux-kvm.org FAQ so others are less likely to hit
>> their head on this issue. In particular:
>
> The general problem is that migration of an L1 will not work when it is
> running L2, so when L1 is using VMX ("nVMX").
>
> Migrating an L2 should work as before.
>
> The problem is, in order for L1 to make use of VMX to run L2, we have to
> run L2 in L0, simulating VMX -> nested VMX a.k.a. nVMX . This requires
> additional state information about L1 ("nVMX" state), which is no...

2018 Feb 08 (0 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...really is the same migration stream. You
> "dump" the VM state into a file, instead of sending it over to another
> (running) target.
>
> Once you load your VM state from that file, it is a completely fresh
> VM/KVM environment. So you have to restore all the state. Now, as nVMX
> state is not contained in the migration stream, you cannot restore that
> state. The L1 state is therefore "damaged" or incomplete.
*lightbulb* Thanks a lot, that's a perfectly logical explanation. :)
>> Now, here's a bit more information on my continued testing. A...
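(For reference, the failing sequence in libvirt terms is roughly the following; 'l1guest' is a made-up domain name with an L2 guest running inside it.)

  $ virsh managedsave l1guest   # dump the VM state to a file and stop the guest
  $ virsh start l1guest         # fresh VM/KVM environment restored from that file; nVMX state is missing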

2013 Feb 20 (8 replies)
crash in nvmx_vcpu_destroy
...-live domU localhost;do sleep 1;done" I
just got the crash shown below. And it can be reproduced.
The guest has 2 vcpus and 512 MB; it runs pvops 3.7.9
(XEN) ----[ Xen-4.3.26579-20130219.172714 x86_64 debug=n Not tainted ]----
(XEN) CPU: 14
(XEN) RIP: e008:[<ffff82c4c01dd197>] nvmx_vcpu_destroy+0xb7/0x150
(XEN) RFLAGS: 0000000000010282 CONTEXT: hypervisor
(XEN) rax: 0000000000000000 rbx: ffff830084309000 rcx: 0000000000000060
(XEN) rdx: 0000000000000000 rsi: 0000000000000003 rdi: fffffffffffffff8
(XEN) rbp: ffff8300843096e0 rsp: ffff83036ff37e40 r8: 00000000000...

2013 May 14 (1 reply)
guestfish runs w/ a nested guest
# Ref: http://libguestfs.org/guestfs-performance.1.html
Run the below command:
$ time guestfish -a /dev/null run
NOTE: Discard the first few results, to get a hot cache. (Thanks Rich.)
1/ L0. with L1 running.
----------------------------------------------------------------------
$ for i in {1..10}; do time guestfish -a /dev/null run; done
real 0m28.277s
user 0m11.028s

2018 Feb 09 (0 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...to the wiki:
>> https://www.linux-kvm.org/page/Special:Contributions/Fghaas
>>
>> Feel free to ruthlessly edit/roll back anything that is inaccurate.
>> Thanks!
>
> I've made some minor edits to clarify a bunch of bits, and added a link to the
> Kernel doc about Intel nVMX. (Hope that looks fine.)
I'm sure it does, but just so you know I currently don't see any
edits from you on the Nested Guests page. Are you sure you
saved/published your changes?
> You wrote: "L2...which does no further virtualization". Not quite true
> — "unde...

2018 Feb 08 (0 replies)
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
...) Xeon(R) CPU E5-2609 v3 @ 1.90GHz.
>>
>> Thoughts?
>
> Sounds like a similar problem as in
> https://bugzilla.kernel.org/show_bug.cgi?id=198621
>
> In short: there is no (live) migration support for nested VMX yet. So as
> soon as your guest is using VMX itself ("nVMX"), this is not expected to
> work.
Hi David, thanks for getting back to us on this.
I see your point, except the issue Kashyap and I are describing does
not occur with live migration, it occurs with savevm/loadvm (virsh
managedsave/virsh start in libvirt terms, nova suspend/resume in
Open...

2013 Jun 13 (3 replies)
Haswell 4770 misidentified as Sandy Bridge
Hi,
I'm running libvirt on a Debian 7 system. I have upgraded libvirt and qemu
from source (v1.06 and 1.5.0 respectively) and the problem persists. The
guest OS is also a Debian 7 system running a non-SMP kernel. The error
message from virt-manager is
Error starting domain: unsupported configuration: guest and host CPU are
not compatible: Host CPU does not provide required features: rtm,
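(Not part of the original report: two quick host-side checks for this class of error. 'rtm' and 'hle' are the TSX flags the Haswell CPU model expects.)

  $ grep -c -w rtm /proc/cpuinfo                   # 0 means the host does not expose rtm at all
  $ virsh capabilities | grep -E 'model|rtm|hle'   # what libvirt thinks the host CPU provides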