search for: vmentri

Displaying 20 results from an estimated 56 matches for "vmentri".

Did you mean: vmentry
2013 Oct 30
3
[PATCH 4/4] XSA-60 security hole: flush cache when vmentry back to UC guest
From 159251a04afcdcd8ca08e9f2bdfae279b2aa5471 Mon Sep 17 00:00:00 2001 From: Liu Jinsong <jinsong.liu@intel.com> Date: Thu, 31 Oct 2013 06:38:15 +0800 Subject: [PATCH 4/4] XSA-60 security hole: flush cache when vmentry back to UC guest This patch flushes the cache when vmentry returns to a UC guest, to prevent the cache from being polluted by hypervisor accesses to guest memory during UC mode. The elegant way to do this
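For context, the mechanism this snippet describes can be sketched as follows, assuming a hypothetical per-vCPU flag that records whether the guest is running with caching disabled; the type and function names are illustrative, not the actual Xen code:

    #include <stdbool.h>

    struct vcpu_state {
        bool uc_mode;   /* hypothetical flag: guest runs with caching disabled (CR0.CD set) */
    };

    /* Illustrative sketch, not the actual XSA-60 patch: flush the caches before
     * resuming a guest in uncacheable (UC) mode, so that lines the hypervisor
     * pulled in while accessing guest memory cannot later be written back over
     * the guest's UC view of that memory. */
    static void flush_cache_before_vmentry(const struct vcpu_state *v)
    {
        if (v->uc_mode)
            __asm__ volatile ("wbinvd" ::: "memory");  /* write back and invalidate all caches */
    }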
2013 Nov 25
14
[PATCH] VMX: wbinvd when vmentry under UC
From e2d47e2f75bac6876b7c2eaecfe946966bf27516 Mon Sep 17 00:00:00 2001 From: Liu Jinsong <jinsong.liu@intel.com> Date: Tue, 26 Nov 2013 04:53:17 +0800 Subject: [PATCH] VMX: wbinvd when vmentry under UC This patch flushes the cache when vmentry returns to a UC guest, to prevent the cache from being polluted by hypervisor accesses to guest memory during UC mode. However, wbinvd is a _very_ time consuming operation, so 1.
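As a hedged sketch of how that cost could be reduced, assuming the hypervisor tracks (via a hypothetical dirty flag) whether it actually touched guest memory while the guest was in UC mode; this is an illustration of the trade-off being discussed, not the submitted patch:

    #include <stdbool.h>

    struct vcpu_state {
        bool uc_mode;           /* guest runs with caching disabled */
        bool hv_dirtied_guest;  /* hypothetical: hypervisor touched guest memory during UC mode */
    };

    /* Sketch only: issue the expensive wbinvd on vmentry only when the hypervisor
     * actually accessed guest memory while the guest was in UC mode, instead of
     * unconditionally on every entry. */
    static void maybe_flush_before_vmentry(struct vcpu_state *v)
    {
        if (v->uc_mode && v->hv_dirtied_guest) {
            __asm__ volatile ("wbinvd" ::: "memory");
            v->hv_dirtied_guest = false;
        }
    }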
2007 Oct 29
4
Avoiding VmEntry/VmExit.
Hi All, I am trying to provide services to guest VMs where I wish to run guest VMs in a loop. I wish to use a core to schedule a guest VM, service it (e.g. execute an ISR), and then return to the context of Xen on that core, so that I can then schedule the next VM on that core. In doing all this, the goal is to avoid the calls to VMEntry and VMExit. Is there a workaround for this to be done or
2006 Apr 13
0
minor patch for tracing VMEXIT/VMENTRY for 64 bit system
Attached. -Himanshu
2006 Apr 14
1
[PATCH][VT] minor patch for tracing VMEXIT/VMENTRY for 64 bit systems
This patch enables tracing VMEXIT/ENTRY for 64-bit systems (are there any 32-bit VT enabled systems out there?) Signed-off-by: Himanshu Raj (rhim.list@nosuchaddr.com)
2010 Aug 18
4
RE: [PATCH 05/15] Nested Virtualization: core
> + > +/* The exitcode is in native SVM/VMX format. The forced exitcode > + * is in generic format. > + */ Introducing a 3rd format of exitcode is over-complicated IMO. > +enum nestedhvm_vmexits > +nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs, > + uint64_t exitcode) > +{ I doubt the necessity of this kind of wrapper. In single layer
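To make the objection concrete, the "generic format" refers to an exitcode namespace shared by SVM and VMX. A minimal sketch of such a translation is below; the enum and function names are illustrative, not the actual Xen nested-HVM code (only the two VMX basic exit reason numbers are taken from the Intel SDM):

    #include <stdint.h>

    /* Hypothetical vendor-neutral ("generic") exit reasons shared by SVM and VMX. */
    enum generic_exit {
        GEN_EXIT_CPUID,
        GEN_EXIT_HLT,
        GEN_EXIT_UNKNOWN,
    };

    #define VMX_EXIT_REASON_CPUID 10  /* basic exit reason numbers from the Intel SDM */
    #define VMX_EXIT_REASON_HLT   12

    /* Sketch: translate a native VMX exit reason into the generic namespace. */
    static enum generic_exit vmx_to_generic(uint64_t exitcode)
    {
        switch (exitcode) {
        case VMX_EXIT_REASON_CPUID: return GEN_EXIT_CPUID;
        case VMX_EXIT_REASON_HLT:   return GEN_EXIT_HLT;
        default:                    return GEN_EXIT_UNKNOWN;
        }
    }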
2017 Sep 25
10
[PATCH v1 0/4] Enable LBR for the guest
This patch series enables the Last Branch Recording feature for the guest. Instead of trapping each LBR stack MSR access, the MSRs are passed through to the guest. Those MSRs are switched (i.e. loaded and saved) on VMExit and VMEntry. Test: Try "perf record -b ./test_program" on guest. Wei Wang (4): KVM/vmx: re-write the msr auto switch feature KVM/vmx: auto switch
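For reference, a sketch of the VM-entry/VM-exit MSR auto-switch mechanism being used here; the 16-byte entry layout is from the Intel SDM, while the list structure and helper are illustrative rather than KVM's actual code:

    #include <stddef.h>
    #include <stdint.h>

    /* VM-entry/VM-exit MSR-load/store area entry, per the Intel SDM:
     * 16 bytes per MSR -- index, reserved, 64-bit value. */
    struct vmx_msr_entry {
        uint32_t index;
        uint32_t reserved;
        uint64_t value;
    };

    #define MAX_AUTOLOAD_MSRS 512

    struct msr_autoload_list {
        struct vmx_msr_entry entries[MAX_AUTOLOAD_MSRS];
        size_t count;
    };

    /* Sketch: append an MSR to the list the CPU loads automatically on VM entry
     * (and stores/loads symmetrically on VM exit), so accesses to it never need
     * to be trapped individually. */
    static int autoload_add(struct msr_autoload_list *list, uint32_t msr, uint64_t val)
    {
        if (list->count >= MAX_AUTOLOAD_MSRS)
            return -1;
        list->entries[list->count].index = msr;
        list->entries[list->count].reserved = 0;
        list->entries[list->count].value = val;
        list->count++;
        return 0;
    }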
2008 Mar 14
4
[PATCH] vmx: fix debugctl handling
I recently realized that the original way of dealing with the DebugCtl MSR on VMX failed to make use of the dedicated guest VMCS field. This is being fixed with this patch. What is puzzling me to a certain degree is that while there is a guest VMCS field for this MSR, there's no equivalent host load field, but there's also no indication that the MSR would be cleared during a
2013 Apr 09
39
[PATCH 0/4] Add posted interrupt supporting
From: Yang Zhang <yang.z.zhang@Intel.com> The following patches add Posted Interrupt support to Xen: Posted Interrupt allows vAPIC interrupts to be injected into the guest directly without any vmexit. - When delivering an interrupt to the guest, if the target vcpu is running, update the Posted-interrupt requests bitmap and send a notification event to the vcpu. Then the vcpu will handle this
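A rough sketch of the delivery path described above, under the assumption of a simplified posted-interrupt descriptor (a 256-bit request bitmap plus an outstanding-notification bit); real code uses atomic operations and the hardware-defined descriptor layout, and the helper names here are stand-ins:

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified posted-interrupt descriptor: a 256-bit posted-interrupt request
     * (PIR) bitmap, one bit per vector, plus an outstanding-notification bit. */
    struct pi_desc {
        uint64_t pir[4];
        bool     on;
    };

    /* Stubbed helpers standing in for scheduler state and the notification IPI. */
    static bool vcpu_is_running(int vcpu_id) { (void)vcpu_id; return true; }
    static void send_notification_ipi(int vcpu_id) { (void)vcpu_id; }

    /* Sketch: record a pending vector and, if the target vCPU is running, kick it
     * with a notification so the CPU injects it into the vAPIC without a vmexit. */
    static void post_interrupt(struct pi_desc *pi, int vcpu_id, uint8_t vector)
    {
        pi->pir[vector / 64] |= 1ULL << (vector % 64);
        if (vcpu_is_running(vcpu_id) && !pi->on) {
            pi->on = true;
            send_notification_ipi(vcpu_id);
        }
    }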
2009 Feb 10
7
hang on restore in 3.3.1
I am having problems with save/restore under 3.3.1 in the GPLPV drivers. I call hvm_shutdown(xpdd, SHUTDOWN_suspend), but as soon as I lower IRQL (enabling interrupts), qemu goes to 100% CPU and the DomU load goes right up too. Xentrace is showing a whole lot of this going on: CPU0 200130258143212 (+ 770) hypercall [ rip = 0x000000008020632a, eax = 0xffffffff ] CPU0 200130258151107 (+
2017 Sep 25
2
[PATCH v1 1/4] KVM/vmx: re-write the msr auto switch feature
On 25/09/2017 06:44, Wei Wang wrote: > > +static void update_msr_autoload_count_max(void) > +{ > + u64 vmx_msr; > + int n; > + > + /* > + * According to the Intel SDM, if Bits 27:25 of MSR_IA32_VMX_MISC is > + * n, then (n + 1) * 512 is the recommended max number of MSRs to be > + * included in the VMExit and VMEntry MSR auto switch list. > + */ > +
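For clarity, the calculation the quoted comment refers to, written out as a standalone sketch; the MSR number and the 27:25 bit field are from the SDM, while the rdmsr wrapper is illustrative (kernel code would use its own helpers):

    #include <stdint.h>

    #define MSR_IA32_VMX_MISC 0x485

    /* Illustrative rdmsr wrapper. */
    static uint64_t rdmsr64(uint32_t msr)
    {
        uint32_t lo, hi;
        __asm__ volatile ("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Bits 27:25 of IA32_VMX_MISC report a value n; the recommended maximum
     * number of MSRs on the VM-entry/VM-exit auto-switch lists is 512 * (n + 1). */
    static unsigned int msr_autoload_count_max(void)
    {
        uint64_t misc = rdmsr64(MSR_IA32_VMX_MISC);
        unsigned int n = (unsigned int)((misc >> 25) & 0x7);
        return (n + 1) * 512;
    }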
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 11:01-0400, Waiman Long: > From: Peter Zijlstra <peterz at infradead.org> > > Because the qspinlock needs to touch a second cacheline; add a pending > bit and allow a single in-word spinner before we punt to the second > cacheline. I think there is an unwanted scenario on virtual machines: 1) VCPU sets the pending bit and starts spinning. 2) Pending VCPU gets
2017 Sep 25
0
[PATCH v1 1/4] KVM/vmx: re-write the msr auto switch feature
This patch clarifies a vague statement in the SDM: the recommended maximum number of MSRs that can be automatically switched by the CPU during VMExit and VMEntry is 512, rather than 512 bytes of MSRs. Depending on the CPU implementation, it may also support more than 512 MSRs to be auto switched. This can be calculated by (MSR_IA32_VMX_MISC[27:25] + 1) * 512. Signed-off-by: Wei Wang <wei.w.wang at
2007 Feb 26
4
[PATCH][xentrace][HVM] introduce HVM tracing to unify SVM and VMX tracing
Hello, this patch introduces HVM tracing: one tracing class for both SVM and VMX. It adds several new trace events, so we can differentiate between them in the xentrace formats file and format each event's data items appropriately. With this patch the xentrace_format output is much more informative. The previous simple tracing in SVM and VMX is completely replaced. Unfortunately I
2017 Sep 25
1
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
On 25/09/2017 06:44, Wei Wang wrote: > Passthrough the LBR stack to the guest, and auto switch the stack MSRs > upon VMEntry and VMExit. > > Signed-off-by: Wei Wang <wei.w.wang at intel.com> This has to be enabled separately for each guest, because it may prevent live migration to hosts with a different family/model. Paolo > --- > arch/x86/kvm/vmx.c | 50
2020 Sep 14
0
Re: [ovirt-users] Re: Testing ovirt 4.4.1 Nested KVM on Skylake-client (core i5) does not work
On Mon, Sep 14, 2020 at 8:42 AM Yedidyah Bar David <didi@redhat.com> wrote: > > On Mon, Sep 14, 2020 at 12:28 AM wodel youchi <wodel.youchi@gmail.com> wrote: > > > > Hi, > > > > Thanks for the help, I think I found the solution using this link : https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/ > > > >