Displaying results for "x86_cr4_vmxe" (an estimated 13 matches in the archive).
2006 Sep 29
1
[PATCH] hvm: clear vmxe if vmxoff
The current Xen code keeps X86_CR4_VMXE set even if VMXON has not been
executed. The stop_vmx() code assumes that it is possible to call VMXOFF
whenever X86_CR4_VMXE is set, which is not always true. Calling VMXOFF without
a prior VMXON results in an illegal-opcode trap, so to avoid this condition the
patch makes sure that X86_CR4_VMXE is only set...
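For context, a minimal sketch of the ordering constraint the description spells
out, assuming hypothetical __vmxon()/__vmxoff() wrappers and a "VMXON executed"
flag; none of this is the patch's actual code:

/* Sketch only: CR4.VMXE is set together with VMXON and cleared only
 * after a matching VMXOFF, so VMXOFF is never executed without VMXON. */
static bool vmx_is_on;   /* would need to be per-CPU in a real hypervisor */

static void enter_vmx_sketch(uint64_t vmxon_region_pa)
{
    write_cr4(read_cr4() | X86_CR4_VMXE);   /* CR4.VMXE must be set first  */
    __vmxon(vmxon_region_pa);               /* hypothetical VMXON wrapper  */
    vmx_is_on = true;
}

static void stop_vmx_sketch(void)
{
    if ( vmx_is_on )                        /* VMXOFF without VMXON => #UD */
    {
        __vmxoff();                         /* hypothetical VMXOFF wrapper */
        vmx_is_on = false;
    }
    write_cr4(read_cr4() & ~X86_CR4_VMXE);  /* safe to clear only now      */
}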
2011 Nov 24
0
[PATCH 6/6] X86: implement PCID/INVPCID for hvm
...v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PAE))
#define hvm_smep_enabled(v) \
@@ -334,6 +336,7 @@ static inline int hvm_do_pmu_interrupt(s
(cpu_has_fsgsbase ? X86_CR4_FSGSBASE : 0) | \
((nestedhvm_enabled((_v)->domain) && cpu_has_vmx)\
? X86_CR4_VMXE : 0) | \
+ (cpu_has_pcid ? X86_CR4_PCIDE : 0) | \
(xsave_enabled(_v) ? X86_CR4_OSXSAVE : 0))))
/* These exceptions must always be intercepted. */
diff -r c61a5ba8c972 xen/include/asm-x86/hvm/vmx/vmcs.h
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h Tue Nov 22 0...
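As a rough illustration of what the hunk above plugs into, here is a hedged
sketch of a feature-conditional CR4 mask. The function name
guest_cr4_valid_bits() is made up; the feature predicates (cpu_has_fsgsbase,
cpu_has_pcid, nestedhvm_enabled(), cpu_has_vmx) are the ones visible in the
excerpt:

/* Illustrative only: compose the CR4 bits a HVM guest may set from
 * host/guest feature checks, in the spirit of the hunk above. */
static unsigned long guest_cr4_valid_bits(const struct vcpu *v)
{
    unsigned long mask = X86_CR4_PAE | X86_CR4_PGE | X86_CR4_OSFXSR;

    if ( cpu_has_fsgsbase )
        mask |= X86_CR4_FSGSBASE;
    if ( cpu_has_pcid )                  /* the bit added by this patch */
        mask |= X86_CR4_PCIDE;
    if ( nestedhvm_enabled(v->domain) && cpu_has_vmx )
        mask |= X86_CR4_VMXE;

    return mask;
}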
2013 Apr 19
0
[PATCH] x86/HVM: move per-vendor function tables into .init.data
...repare,
.cpu_dead = vmx_cpu_dead,
@@ -1559,7 +1559,7 @@ static struct hvm_function_table __read_
.nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
};
-struct hvm_function_table * __init start_vmx(void)
+const struct hvm_function_table * __init start_vmx(void)
{
set_in_cr4(X86_CR4_VMXE);
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -199,8 +199,8 @@ extern bool_t hvm_enabled;
extern bool_t cpu_has_lmsl;
extern s8 hvm_port80_allowed;
-extern struct hvm_function_table *start_svm(void);
-extern struct hvm_function_table *start_vmx(void);
+extern co...
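Judging by the title and the constified return type, the vendor table only has
to survive until the generic layer copies it, so it can live in .init.data and
be discarded after boot. A hedged sketch of that pattern; hvm_enable_sketch()
and the .name initialiser are illustrative, only .cpu_dead and start_vmx()
appear in the excerpt:

/* Vendor table placed in init data; freed once boot completes. */
static struct hvm_function_table __initdata vmx_function_table = {
    .name     = "VMX",
    .cpu_dead = vmx_cpu_dead,
    /* ... */
};

const struct hvm_function_table * __init start_vmx(void)
{
    set_in_cr4(X86_CR4_VMXE);
    /* ... capability checks ... */
    return &vmx_function_table;
}

/* Generic caller (illustrative): keeps a by-value runtime copy, so no
 * pointer into .init.data survives past boot. */
static struct hvm_function_table hvm_funcs;

void __init hvm_enable_sketch(const struct hvm_function_table *fns)
{
    hvm_funcs = *fns;
}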
2013 Aug 23
2
[PATCH] Nested VMX: Allow to set CR4.OSXSAVE if guest has xsave feature
...int msr, u64 *msr_content)
{
struct vcpu *v = current;
u64 data = 0, host_data = 0;
+ unsigned int eax, ebx, ecx, edx;
int r = 1;
if ( !nestedhvm_enabled(v->domain) )
@@ -1925,8 +1926,13 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
data = X86_CR4_VMXE;
break;
case MSR_IA32_VMX_CR4_FIXED1:
+ data = 0x267ff;
+ /* Allow to set OSXSAVE if guest has xsave feature. */
+ hvm_cpuid(0x1, &eax, &ebx, &ecx, &edx);
+ if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
+ data |= X86_CR4_OSXSAV...
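Restated as a self-contained helper, the logic of the hunk looks roughly like
this; cr4_fixed1_sketch() is an illustrative wrapper, not the patch's function,
while hvm_cpuid(), cpufeat_mask() and the 0x267ff baseline come from the
excerpt:

/* Report CR4.OSXSAVE as settable in IA32_VMX_CR4_FIXED1 only when the
 * guest's CPUID advertises XSAVE. Illustrative restatement, not the
 * literal patch. */
static u64 cr4_fixed1_sketch(void)
{
    unsigned int eax, ebx, ecx, edx;
    u64 data = 0x267ff;                 /* baseline from the excerpt */

    hvm_cpuid(0x1, &eax, &ebx, &ecx, &edx);
    if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
        data |= X86_CR4_OSXSAVE;

    return data;
}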
2013 Sep 23
11
[PATCH v4 0/4] x86/HVM: miscellaneous improvements
The first and third patches are cleaned up versions of an earlier v3
submission by Yang.
1: Nested VMX: check VMX capability before read VMX related MSRs
2: VMX: clean up capability checks
3: Nested VMX: fix IA32_VMX_CR4_FIXED1 msr emulation
4: x86: make hvm_cpuid() tolerate NULL pointers
Signed-off-by: Jan Beulich <jbeulich@suse.com>
2013 Sep 23
57
[PATCH RFC v13 00/20] Introduce PVH domU support
This patch series is a reworking of a series developed by Mukesh
Rathor at Oracle. The entirety of the design and development was done
by him; I have only reworked, reorganized, and simplified things in a
way that I think makes more sense. The vast majority of the credit
for this effort therefore goes to him. This version is labelled v13
because it is based on his most recent series, v11.
2007 Apr 18
2
[PATCH] Clean up x86 control register and MSR macros (corrected)
...able */
+#define X86_CR4_PGE 0x00000080 /* enable global pages */
+#define X86_CR4_PCE 0x00000100 /* enable performance counters at ipl 3 */
+#define X86_CR4_OSFXSR 0x00000200 /* enable fast FPU save and restore */
+#define X86_CR4_OSXMMEXCPT 0x00000400 /* enable unmasked SSE exceptions */
+#define X86_CR4_VMXE 0x00002000 /* enable VMX virtualization */
+
+/*
+ * x86-64 Task Priority Register, CR8
+ */
+#define X86_CR8_TPR 0x00000007 /* task priority register */
+
+/*
+ * AMD and Transmeta use MSRs for configuration; see <asm/msr-index.h>
+ */
+
+/*
+ * NSC/Cyrix CPU configuration register inde...
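A small, hypothetical usage example for the macros above, testing a CR4 bit
from ring 0 (the helper name is made up):

/* Returns non-zero if CR4.VMXE is already set on the current CPU.
 * Must run at CPL 0, since reading CR4 is privileged. */
static int cr4_vmxe_set(void)
{
    unsigned long cr4;

    asm volatile ( "mov %%cr4, %0" : "=r" (cr4) );
    return (cr4 & X86_CR4_VMXE) != 0;
}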
2007 Apr 18
1
No subject
...able */
+#define X86_CR4_PGE 0x00000080 /* enable global pages */
+#define X86_CR4_PCE 0x00000100 /* enable performance counters at ipl 3 */
+#define X86_CR4_OSFXSR 0x00000200 /* enable fast FPU save and restore */
+#define X86_CR4_OSXMMEXCPT 0x00000400 /* enable unmasked SSE exceptions */
+#define X86_CR4_VMXE 0x00002000 /* enable VMX virtualization */
+
+/*
+ * x86-64 Task Priority Register, CR8
+ */
+#define X86_CR8_TPR 0x00000007 /* task priority register */
+
+/*
+ * AMD and Transmeta use MSRs for configuration; see <asm/msr-index.h>
+ */
+
+/*
+ * NSC/Cyrix CPU configuration register inde...
2013 Apr 09
39
[PATCH 0/4] Add posted interrupt supporting
From: Yang Zhang <yang.z.zhang@Intel.com>
The following patches add Posted Interrupt support to Xen:
Posted Interrupt allows vAPIC interrupts to be injected into the guest
directly, without any VM exit.
- When delivering an interrupt to the guest, if the target vcpu is running,
update the Posted-Interrupt Requests bitmap and send a notification event
to the vcpu. The vcpu will then handle this
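A heavily simplified sketch of the delivery step described in this cover
letter; the descriptor layout, helper names and notification vector constant
are all illustrative, not Xen's actual posted-interrupt code:

/* One request bitmap plus an "outstanding notification" flag per vCPU. */
struct pi_desc_sketch {
    unsigned long pir[256 / (8 * sizeof(unsigned long))]; /* pending vectors */
    unsigned long control;                                /* bit 0: ON bit   */
};

static void post_interrupt_sketch(struct pi_desc_sketch *pi,
                                  unsigned int vector, unsigned int target_cpu)
{
    /* 1. Record the pending vector in the posted-interrupt request bitmap. */
    set_bit(vector, pi->pir);

    /* 2. If no notification is outstanding yet, send the (hypothetical)
     *    notification vector to the pCPU running the target vCPU, so the
     *    CPU injects the interrupt into the guest without a VM exit. */
    if ( !test_and_set_bit(0, &pi->control) )
        send_notification_ipi(target_cpu, PI_NOTIFICATION_VECTOR_SKETCH);
}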
2019 Aug 09
117
[RFC PATCH v6 00/92] VM introspection
The KVM introspection subsystem provides a facility for applications running
on the host or in a separate VM to control the execution of other VMs
(pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs, etc.),
alter the page access bits in the shadow page tables (only for the
hardware-backed ones, e.g. Intel's EPT) and receive notifications when events
of interest have taken place
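Nothing below is the real KVMI wire protocol; it is only a hypothetical sketch
of the command/event pattern the cover letter describes, with an application
reading events from a channel and replying so the vCPU can continue:

#include <unistd.h>

/* Hypothetical event record and loop; the real subsystem uses a
 * versioned message protocol, not this struct. */
struct vmi_event_sketch {
    int vcpu;
    int type;      /* e.g. page-access violation, MSR write, breakpoint */
};

static void introspection_loop_sketch(int channel_fd)
{
    struct vmi_event_sketch ev;

    while ( read(channel_fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev) )
    {
        /* Inspect vCPU state, alter page access bits, pause/resume, ...
         * depending on ev.type, then acknowledge so the vCPU continues. */
        if ( write(channel_fd, &ev, sizeof(ev)) != (ssize_t)sizeof(ev) )
            break;
    }
}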