From: Yang Zhang <yang.z.zhang@Intel.com>

This patch fixes two issues:
1. The CR_READ_SHADOW should only cover the value that L2 writes to the CR
when L2 is running. But currently, L0 writes the wrong value to it during
virtual vmentry and during emulation of L2's CR accesses.
2. L2 changed cr0 in a way that did not change any of L1's shadowed bits,
but did change L0 shadowed bits. In this case, the effective cr0 value that
L1 would like to write into the hardware consists of the L2-owned bits from
the new value combined with the L1-owned bits from L1's guest cr0.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/hvm/hvm.c         |    6 ++++
 xen/arch/x86/hvm/vmx/vmx.c     |   51 ++++++++++++++++++++++++++++++++++++++-
 xen/arch/x86/hvm/vmx/vvmx.c    |    2 +
 xen/include/asm-x86/hvm/vcpu.h |    3 ++
 4 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index de81e45..77ff40e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1817,6 +1817,9 @@ int hvm_set_cr0(unsigned long value)
     }
 
     v->arch.hvm_vcpu.guest_cr[0] = value;
+    if ( !nestedhvm_vmswitch_in_progress(v) &&
+         nestedhvm_vcpu_in_guestmode(v) )
+        v->arch.hvm_vcpu.nvcpu.guest_cr[0] = value;
     hvm_update_guest_cr(v, 0);
 
     if ( (value ^ old_value) & X86_CR0_PG ) {
@@ -1899,6 +1902,9 @@ int hvm_set_cr4(unsigned long value)
     }
 
     v->arch.hvm_vcpu.guest_cr[4] = value;
+    if ( !nestedhvm_vmswitch_in_progress(v) &&
+         nestedhvm_vcpu_in_guestmode(v) )
+        v->arch.hvm_vcpu.nvcpu.guest_cr[4] = value;
     hvm_update_guest_cr(v, 4);
 
     hvm_memory_event_cr4(value, old_cr);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 9ca8632..279a00a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1096,6 +1096,30 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
             vmx_update_cpu_exec_control(v);
         }
 
+        if ( nestedhvm_vcpu_in_guestmode(v) )
+        {
+            if ( !nestedhvm_vmswitch_in_progress(v) )
+            {
+                /*
+                 * We get here when L2 changed cr0 in a way that did not change
+                 * any of L1's shadowed bits (see nvmx_n2_vmexit_handler),
+                 * but did change L0 shadowed bits. So we first calculate the
+                 * effective cr0 value that L1 would like to write into the
+                 * hardware. It consists of the L2-owned bits from the new
+                 * value combined with the L1-owned bits from L1's guest cr0.
+                 */
+                v->arch.hvm_vcpu.guest_cr[0] &=
+                    ~__get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR0_GUEST_HOST_MASK);
+                v->arch.hvm_vcpu.guest_cr[0] |=
+                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, GUEST_CR0) &
+                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR0_GUEST_HOST_MASK);
+            }
+            /* nvcpu.guest_cr[0] is what L2 actually wrote to cr0. */
+            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.nvcpu.guest_cr[0]);
+        }
+        else
+            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[0]);
+
         if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
         {
             if ( v != current )
@@ -1144,7 +1168,6 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
         v->arch.hvm_vcpu.hw_cr[0] =
             v->arch.hvm_vcpu.guest_cr[0] | hw_cr0_mask;
         __vmwrite(GUEST_CR0, v->arch.hvm_vcpu.hw_cr[0]);
-        __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[0]);
 
         /* Changing CR0 can change some bits in real CR4. */
         vmx_update_guest_cr(v, 4);
@@ -1169,6 +1192,31 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
         v->arch.hvm_vcpu.hw_cr[4] = HVM_CR4_HOST_MASK;
         if ( paging_mode_hap(v->domain) )
             v->arch.hvm_vcpu.hw_cr[4] &= ~X86_CR4_PAE;
+
+        if ( nestedhvm_vcpu_in_guestmode(v) )
+        {
+            if ( !nestedhvm_vmswitch_in_progress(v) )
+            {
+                /*
+                 * We get here when L2 changed cr4 in a way that did not change
+                 * any of L1's shadowed bits (see nvmx_n2_vmexit_handler),
+                 * but did change L0 shadowed bits. So we first calculate the
+                 * effective cr4 value that L1 would like to write into the
+                 * hardware. It consists of the L2-owned bits from the new
+                 * value combined with the L1-owned bits from L1's guest cr4.
+                 */
+                v->arch.hvm_vcpu.guest_cr[4] &=
+                    ~__get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR4_GUEST_HOST_MASK);
+                v->arch.hvm_vcpu.guest_cr[4] |=
+                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, GUEST_CR4) &
+                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR4_GUEST_HOST_MASK);
+            }
+            /* nvcpu.guest_cr[4] is what L2 actually wrote to cr4. */
+            __vmwrite(CR4_READ_SHADOW, v->arch.hvm_vcpu.nvcpu.guest_cr[4]);
+        }
+        else
+            __vmwrite(CR4_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[4]);
+
         v->arch.hvm_vcpu.hw_cr[4] |= v->arch.hvm_vcpu.guest_cr[4];
         if ( v->arch.hvm_vmx.vmx_realmode )
             v->arch.hvm_vcpu.hw_cr[4] |= X86_CR4_VME;
@@ -1188,7 +1236,6 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
             v->arch.hvm_vcpu.hw_cr[4] &= ~X86_CR4_SMEP;
         }
         __vmwrite(GUEST_CR4, v->arch.hvm_vcpu.hw_cr[4]);
-        __vmwrite(CR4_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[4]);
         break;
     default:
         BUG();
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 2b2de77..6b2d51e 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1042,6 +1042,8 @@ static void load_shadow_guest_state(struct vcpu *v)
     vvmcs_to_shadow_bulk(v, ARRAY_SIZE(vmcs_gstate_field),
                          vmcs_gstate_field);
 
+    nvcpu->guest_cr[0] = __get_vvmcs(vvmcs, CR0_READ_SHADOW);
+    nvcpu->guest_cr[4] = __get_vvmcs(vvmcs, CR4_READ_SHADOW);
     hvm_set_cr0(__get_vvmcs(vvmcs, GUEST_CR0));
     hvm_set_cr4(__get_vvmcs(vvmcs, GUEST_CR4));
     hvm_set_cr3(__get_vvmcs(vvmcs, GUEST_CR3));
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index e8b8cd7..ab5349a 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -100,6 +100,9 @@ struct nestedvcpu {
      */
     bool_t nv_ioport80;
     bool_t nv_ioportED;
+
+    /* L2's control registers, just as the L2 sees them. */
+    unsigned long guest_cr[5];
 };
 
 #define vcpu_nestedhvm(v) ((v)->arch.hvm_vcpu.nvcpu)
-- 
1.7.1
Acked-by: Eddie.dong@intel.com

-----Original Message-----
From: Zhang, Yang Z
Sent: Tuesday, October 08, 2013 3:30 PM
To: xen-devel@lists.xensource.com
Cc: JBeulich@suse.com; Dong, Eddie; Zhang, Yang Z
Subject: [PATCH] Nested VMX: CR emulation fix up

[...]
Dong, Eddie wrote on 2013-10-08:
> Acked-by: Eddie.dong@intel.com
>
> -----Original Message-----
> From: Zhang, Yang Z
> Sent: Tuesday, October 08, 2013 3:30 PM
> To: xen-devel@lists.xensource.com
> Cc: JBeulich@suse.com; Dong, Eddie; Zhang, Yang Z
> Subject: [PATCH] Nested VMX: CR emulation fix up
>
> From: Yang Zhang <yang.z.zhang@Intel.com>
>
> This patch fixes two issues:
> 1. The CR_READ_SHADOW should only cover the value that L2 writes to the CR
> when L2 is running. But currently, L0 writes the wrong value to it during
> virtual vmentry and during emulation of L2's CR accesses.
> 2. L2 changed cr0 in a way that did not change any of L1's shadowed
> bits, but did change L0 shadowed bits. In this case, the effective cr0
> value that L1 would like to write into the hardware consists of the
> L2-owned bits from the new value combined with the L1-owned bits from L1's
> guest cr0.

The same issue exists in cr4 emulation, and it is also fixed by this patch.

> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> [...]

Best regards,
Yang
>>> On 08.10.13 at 09:29, Yang Zhang <yang.z.zhang@intel.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1817,6 +1817,9 @@ int hvm_set_cr0(unsigned long value)
>      }
>  
>      v->arch.hvm_vcpu.guest_cr[0] = value;
> +    if ( !nestedhvm_vmswitch_in_progress(v) &&
> +         nestedhvm_vcpu_in_guestmode(v) )
> +        v->arch.hvm_vcpu.nvcpu.guest_cr[0] = value;
>      hvm_update_guest_cr(v, 0);
>  
>      if ( (value ^ old_value) & X86_CR0_PG ) {
> @@ -1899,6 +1902,9 @@ int hvm_set_cr4(unsigned long value)
>      }
>  
>      v->arch.hvm_vcpu.guest_cr[4] = value;
> +    if ( !nestedhvm_vmswitch_in_progress(v) &&
> +         nestedhvm_vcpu_in_guestmode(v) )
> +        v->arch.hvm_vcpu.nvcpu.guest_cr[4] = value;
>      hvm_update_guest_cr(v, 4);

Considering the redundancy - wouldn't all of the above now become the
body of a rather desirable helper function?

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1096,6 +1096,30 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>              vmx_update_cpu_exec_control(v);
>          }
>  
> +        if ( nestedhvm_vcpu_in_guestmode(v) )
> +        {
> +            if ( !nestedhvm_vmswitch_in_progress(v) )
> +            {
> +                /*
> +                 * We get here when L2 changed cr0 in a way that did not change
> +                 * any of L1's shadowed bits (see nvmx_n2_vmexit_handler),
> +                 * but did change L0 shadowed bits. So we first calculate the
> +                 * effective cr0 value that L1 would like to write into the
> +                 * hardware. It consists of the L2-owned bits from the new
> +                 * value combined with the L1-owned bits from L1's guest cr0.
> +                 */
> +                v->arch.hvm_vcpu.guest_cr[0] &=
> +                    ~__get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR0_GUEST_HOST_MASK);
> +                v->arch.hvm_vcpu.guest_cr[0] |=
> +                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, GUEST_CR0) &
> +                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR0_GUEST_HOST_MASK);
> +            }
> +            /* nvcpu.guest_cr[0] is what L2 actually wrote to cr0. */
> +            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.nvcpu.guest_cr[0]);
> +        }
> +        else
> +            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[0]);

Please re-phrase this into

    if ( !nestedhvm_vcpu_in_guestmode(v) )
        ...
    else if ( !nestedhvm_vmswitch_in_progress(v) )

For one that'll put the "normal" (non-nested) case first. And second
it'll reduce indentation on the main portion of your additions (at once
taking care of the otherwise over-long lines in there).

I'm btw also mildly concerned that the moving ahead of this VMCS
write might have other side effects. I did check that we don't read
the shadow value other than in debugging and nested code, but
I'm nevertheless not quite certain that this is indeed benign.

> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1042,6 +1042,8 @@ static void load_shadow_guest_state(struct vcpu *v)
>      vvmcs_to_shadow_bulk(v, ARRAY_SIZE(vmcs_gstate_field),
>                           vmcs_gstate_field);
>  
> +    nvcpu->guest_cr[0] = __get_vvmcs(vvmcs, CR0_READ_SHADOW);
> +    nvcpu->guest_cr[4] = __get_vvmcs(vvmcs, CR4_READ_SHADOW);

Given that the only time where these get read is in
vmx_update_guest_cr() (for writing CR<n>_READ_SHADOW),
are the writes above really needed? And if they are, aren't there
other updates to these two fields still missing?

> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -100,6 +100,9 @@ struct nestedvcpu {
>      */
>      bool_t nv_ioport80;
>      bool_t nv_ioportED;
> +
> +    /* L2's control registers, just as the L2 sees them. */
> +    unsigned long guest_cr[5];

Considering that this touches code common with nested SVM, I'd
expect the SVM maintainers to have to approve of the change in
any case.

In particular I wonder whether this addition isn't obsoleting
SVM's ns_cr0.

Jan
On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>> On 08.10.13 at 09:29, Yang Zhang <yang.z.zhang@intel.com> wrote:
>> [...]
>> --- a/xen/include/asm-x86/hvm/vcpu.h
>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>> @@ -100,6 +100,9 @@ struct nestedvcpu {
>>      */
>>      bool_t nv_ioport80;
>>      bool_t nv_ioportED;
>> +
>> +    /* L2's control registers, just as the L2 sees them. */
>> +    unsigned long guest_cr[5];

This should be prefixed with nv_: all members of this structure are. In
addition, struct hvm_vcpu has the exact same member.

> Considering that this touches code common with nested SVM, I'd
> expect the SVM maintainers to have to approve of the change in
> any case.
>
> In particular I wonder whether this addition isn't obsoleting
> SVM's ns_cr0.

I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would then
be updated in paths where it currently is not.

For example in nsvm_vmcb_prepare4vmrun():

    /* CR0 */
    svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
    cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
    v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
    rc = hvm_set_cr0(cr0);    <------ nv_guest_cr[0] will get set here.

-boris
>>> On 08.10.13 at 17:46, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>>> On 08.10.13 at 09:29, Yang Zhang <yang.z.zhang@intel.com> wrote:
>>> --- a/xen/include/asm-x86/hvm/vcpu.h
>>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>>> @@ -100,6 +100,9 @@ struct nestedvcpu {
>>>      */
>>>      bool_t nv_ioport80;
>>>      bool_t nv_ioportED;
>>> +
>>> +    /* L2's control registers, just as the L2 sees them. */
>>> +    unsigned long guest_cr[5];
>
> This should be prefixed with nv_: all members of this structure are. In
> addition, struct hvm_vcpu has the exact same member.

If anything, I think the pointless nv_ prefixes should be dropped
from the other members' names.

Jan
Jan Beulich wrote on 2013-10-08:
>>>> On 08.10.13 at 09:29, Yang Zhang <yang.z.zhang@intel.com> wrote:
>> [...]
>
> Considering the redundancy - wouldn't all of the above now become the
> body of a rather desirable helper function?

Sure.

> Please re-phrase this into
>
>     if ( !nestedhvm_vcpu_in_guestmode(v) )
>         ...
>     else if ( !nestedhvm_vmswitch_in_progress(v) )
>
> For one that'll put the "normal" (non-nested) case first. And second
> it'll reduce indentation on the main portion of your additions (at
> once taking care of the otherwise over-long lines in there).

Yes, this looks much tidier.

> I'm btw also mildly concerned that the moving ahead of this VMCS write
> might have other side effects. I did check that we don't read the
> shadow value other than in debugging and nested code, but I'm
> nevertheless not quite certain that this is indeed benign.

It should be okay. guest_cr[] holds the same value as this VMCS field,
and the hypervisor uses it, rather than touching the VMCS, when it
wants to check the guest's CR value. So merely changing the order of
the VMCS writes here should be fine.

>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> @@ -1042,6 +1042,8 @@ static void load_shadow_guest_state(struct vcpu *v)
>>      vvmcs_to_shadow_bulk(v, ARRAY_SIZE(vmcs_gstate_field),
>>                           vmcs_gstate_field);
>>  
>> +    nvcpu->guest_cr[0] = __get_vvmcs(vvmcs, CR0_READ_SHADOW);
>> +    nvcpu->guest_cr[4] = __get_vvmcs(vvmcs, CR4_READ_SHADOW);
>
> Given that the only time where these get read is in
> vmx_update_guest_cr() (for writing CR<n>_READ_SHADOW), are the writes
> above really needed? And if they are, aren't there other updates to
> these two fields still missing?

I am not sure I understand your meaning. There are two places where L0
needs to update the CR_READ_SHADOW for L2, and this patch already
handles both cases:
1. The first is during virtual vmentry. If L2's write to the CR causes
a vmexit, and L0 injects it to L1, then L1 will write the
CR_READ_SHADOW in the virtual vmcs. So during virtual vmentry, L0
needs to read it from the virtual vmcs and set it in hardware. This
hunk handles that case.
2. The second is when L2 writes the CR directly and does not change
any of L1's shadowed bits. Then L0 will not inject the vmexit to L1;
instead, L0 handles it by itself. The first chunk of this patch does
this.

>> --- a/xen/include/asm-x86/hvm/vcpu.h
>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>> @@ -100,6 +100,9 @@ struct nestedvcpu {
>>      */
>>      bool_t nv_ioport80;
>>      bool_t nv_ioportED;
>> +
>> +    /* L2's control registers, just as the L2 sees them. */
>> +    unsigned long guest_cr[5];
>
> Considering that this touches code common with nested SVM, I'd expect
> the SVM maintainers to have to approve of the change in any case.
>
> In particular I wonder whether this addition isn't obsoleting SVM's ns_cr0.
>
> Jan

Best regards,
Yang
Boris Ostrovsky wrote on 2013-10-08:
> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>>> On 08.10.13 at 09:29, Yang Zhang <yang.z.zhang@intel.com> wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -1817,6 +1817,9 @@ int hvm_set_cr0(unsigned long value)
>>>      }
>>>
>>>      v->arch.hvm_vcpu.guest_cr[0] = value;
>>> +    if ( !nestedhvm_vmswitch_in_progress(v) &&
>>> +         nestedhvm_vcpu_in_guestmode(v) )
>>> +        v->arch.hvm_vcpu.nvcpu.guest_cr[0] = value;
>>>      hvm_update_guest_cr(v, 0);
>>>
>>>      if ( (value ^ old_value) & X86_CR0_PG ) {
>>> @@ -1899,6 +1902,9 @@ int hvm_set_cr4(unsigned long value)
>>>      }
>>>
>>>      v->arch.hvm_vcpu.guest_cr[4] = value;
>>> +    if ( !nestedhvm_vmswitch_in_progress(v) &&
>>> +         nestedhvm_vcpu_in_guestmode(v) )
>>> +        v->arch.hvm_vcpu.nvcpu.guest_cr[4] = value;
>>>      hvm_update_guest_cr(v, 4);
>>
>> Considering the redundancy - wouldn't all of the above now become
>> the body of a rather desirable helper function?
>>
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -1096,6 +1096,30 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>>>              vmx_update_cpu_exec_control(v);
>>>          }
>>> +        if ( nestedhvm_vcpu_in_guestmode(v) )
>>> +        {
>>> +            if ( !nestedhvm_vmswitch_in_progress(v) )
>>> +            {
>>> +                /*
>>> +                 * We get here when L2 changed cr0 in a way that did not change
>>> +                 * any of L1's shadowed bits (see nvmx_n2_vmexit_handler),
>>> +                 * but did change L0 shadowed bits. So we first calculate the
>>> +                 * effective cr0 value that L1 would like to write into the
>>> +                 * hardware. It consists of the L2-owned bits from the new
>>> +                 * value combined with the L1-owned bits from L1's guest cr0.
>>> +                 */
>>> +                v->arch.hvm_vcpu.guest_cr[0] &=
>>> +                    ~__get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR0_GUEST_HOST_MASK);
>>> +                v->arch.hvm_vcpu.guest_cr[0] |=
>>> +                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, GUEST_CR0) &
>>> +                    __get_vvmcs(vcpu_nestedhvm(v).nv_vvmcx, CR0_GUEST_HOST_MASK);
>>> +            }
>>> +            /* nvcpu.guest_cr[0] is what L2 actually wrote to cr0. */
>>> +            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.nvcpu.guest_cr[0]);
>>> +        }
>>> +        else
>>> +            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[0]);
>>
>> Please re-phrase this into
>>
>>     if ( !nestedhvm_vcpu_in_guestmode(v) )
>>         ...
>>     else if ( !nestedhvm_vmswitch_in_progress(v) )
>>
>> For one that'll put the "normal" (non-nested) case first. And second
>> it'll reduce indentation on the main portion of your additions (at
>> once taking care of the otherwise over-long lines in there).
>>
>> I'm btw also mildly concerned that the moving ahead of this VMCS
>> write might have other side effects. I did check that we don't read
>> the shadow value other than in debugging and nested code, but I'm
>> nevertheless not quite certain that this is indeed benign.
>>
>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>> @@ -1042,6 +1042,8 @@ static void load_shadow_guest_state(struct vcpu *v)
>>>      vvmcs_to_shadow_bulk(v, ARRAY_SIZE(vmcs_gstate_field),
>>>                           vmcs_gstate_field);
>>> +    nvcpu->guest_cr[0] = __get_vvmcs(vvmcs, CR0_READ_SHADOW);
>>> +    nvcpu->guest_cr[4] = __get_vvmcs(vvmcs, CR4_READ_SHADOW);
>>
>> Given that the only time where these get read is in
>> vmx_update_guest_cr() (for writing CR<n>_READ_SHADOW), are the
>> writes above really needed? And if they are, aren't there other
>> updates to these two fields still missing?
>>
>>> --- a/xen/include/asm-x86/hvm/vcpu.h
>>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>>> @@ -100,6 +100,9 @@ struct nestedvcpu {
>>>       */
>>>      bool_t nv_ioport80;
>>>      bool_t nv_ioportED;
>>> +
>>> +    /* L2's control registers, just as L2 sees them. */
>>> +    unsigned long guest_cr[5];
>
> This should be prefixed with nv_: all members of this structure are.
> In addition, struct hvm_vcpu has the exact same member.
>
>> Considering that this touches code common with nested SVM, I'd
>> expect the SVM maintainers to have to approve of the change in any case.
>>
>> In particular I wonder whether this addition isn't obsoleting SVM's
>> ns_cr0.
>
> I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would then
> be updated in paths where it currently is not.
>
> For example in nsvm_vmcb_prepare4vmrun():
>
>     /* CR0 */
>     svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
>     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
>     v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
>     rc = hvm_set_cr0(cr0);   <------ nv_guest_cr[0] will get set here.

I am not familiar with SVM code. If you think this change may impact
nested SVM, then I will move the change to VMX-specific code.

Best regards,
Yang
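The combination the patch performs in vmx_update_guest_cr() can be sketched as a standalone helper (an illustrative sketch with a made-up function name, not the actual Xen code, which reads these fields from the virtual VMCS): the effective cr0 L1 wants in hardware is the L2-owned bits of the new value plus the L1-owned bits of L1's GUEST_CR0.

```c
#include <stdint.h>

/*
 * Sketch of the effective-cr0 calculation discussed above: bits clear
 * in L1's CR0_GUEST_HOST_MASK are L2-owned and taken from the value L2
 * just wrote; bits set in the mask are L1-owned and taken from L1's
 * vmcs12 GUEST_CR0.
 */
static uint64_t l1_effective_cr0(uint64_t new_cr0,    /* what L2 wrote */
                                 uint64_t vmcs12_cr0, /* L1's GUEST_CR0 */
                                 uint64_t l1_mask)    /* L1's CR0_GUEST_HOST_MASK */
{
    /* L2-owned bits (mask clear) from the new value... */
    uint64_t cr0 = new_cr0 & ~l1_mask;
    /* ...combined with L1-owned bits (mask set) from L1's guest cr0. */
    cr0 |= vmcs12_cr0 & l1_mask;
    return cr0;
}
```

This mirrors the two-step `&= ~mask; |= GUEST_CR0 & mask;` sequence in the patch, just written as a pure function over the three inputs.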
On 10/09/2013 03:28 AM, Zhang, Yang Z wrote:
> Boris Ostrovsky wrote on 2013-10-08:
>> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>
>>> Considering that this touches code common with nested SVM, I'd
>>> expect the SVM maintainers to have to approve of the change in any case.
>>>
>>> In particular I wonder whether this addition isn't obsoleting SVM's
>>> ns_cr0.
>>>
>> I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would then
>> be updated in paths where it currently is not.
>>
>> For example in nsvm_vmcb_prepare4vmrun():
>>
>>     /* CR0 */
>>     svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
>>     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
>>     v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
>>     rc = hvm_set_cr0(cr0);   <------ nv_guest_cr[0] will get set here.
>
> I am not familiar with SVM code. If you think this change may impact
> nested SVM, then I will move the change to VMX-specific code.

No, it doesn't affect SVM code. I was responding to Jan's suggestion
to replace SVM's ns_cr0 with the new guest_cr[0].

-boris
Boris Ostrovsky wrote on 2013-10-09:
> On 10/09/2013 03:28 AM, Zhang, Yang Z wrote:
>> Boris Ostrovsky wrote on 2013-10-08:
>>> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>
>>>> Considering that this touches code common with nested SVM, I'd
>>>> expect the SVM maintainers to have to approve of the change in any case.
>>>>
>>>> In particular I wonder whether this addition isn't obsoleting
>>>> SVM's ns_cr0.
>>>>
>>> I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would
>>> then be updated in paths where it currently is not.
>>>
>>> For example in nsvm_vmcb_prepare4vmrun():
>>>
>>>     /* CR0 */
>>>     svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
>>>     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
>>>     v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
>>>     rc = hvm_set_cr0(cr0);   <------ nv_guest_cr[0] will get set here.
>> I am not familiar with SVM code. If you think this change may impact
>> nested SVM, then I will move the change to VMX-specific code.
>
> No, it doesn't affect SVM code. I was responding to Jan's suggestion
> to replace SVM's ns_cr0 with the new guest_cr[0].

So is it ok to change the code according to Jan's suggestion?

Best regards,
Yang
On 10/09/2013 08:31 PM, Zhang, Yang Z wrote:
> Boris Ostrovsky wrote on 2013-10-09:
>> On 10/09/2013 03:28 AM, Zhang, Yang Z wrote:
>>> Boris Ostrovsky wrote on 2013-10-08:
>>>> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>>
>>>>> Considering that this touches code common with nested SVM, I'd
>>>>> expect the SVM maintainers to have to approve of the change in any case.
>>>>>
>>>>> In particular I wonder whether this addition isn't obsoleting
>>>>> SVM's ns_cr0.
>>>>>
>>>> I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would
>>>> then be updated in paths where it currently is not.
>>>>
>>>> For example in nsvm_vmcb_prepare4vmrun():
>>>>
>>>>     /* CR0 */
>>>>     svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
>>>>     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
>>>>     v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
>>>>     rc = hvm_set_cr0(cr0);   <------ nv_guest_cr[0] will get set here.
>>> I am not familiar with SVM code. If you think this change may impact
>>> nested SVM, then I will move the change to VMX-specific code.
>>
>> No, it doesn't affect SVM code. I was responding to Jan's suggestion
>> to replace SVM's ns_cr0 with the new guest_cr[0].
> So is it ok to change the code according to Jan's suggestion?

No. My point was that there may be unintended consequences to the
change and you should leave ns_cr0 alone.

-boris
Boris Ostrovsky wrote on 2013-10-10:
> On 10/09/2013 08:31 PM, Zhang, Yang Z wrote:
>> Boris Ostrovsky wrote on 2013-10-09:
>>> On 10/09/2013 03:28 AM, Zhang, Yang Z wrote:
>>>> Boris Ostrovsky wrote on 2013-10-08:
>>>>> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>>>
>>>>>> Considering that this touches code common with nested SVM, I'd
>>>>>> expect the SVM maintainers to have to approve of the change in any
>>>>>> case.
>>>>>>
>>>>>> In particular I wonder whether this addition isn't obsoleting
>>>>>> SVM's ns_cr0.
>>>>>>
>>>>> I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would
>>>>> then be updated in paths where it currently is not.
>>>>>
>>>>> For example in nsvm_vmcb_prepare4vmrun():
>>>>>
>>>>>     /* CR0 */
>>>>>     svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
>>>>>     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
>>>>>     v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
>>>>>     rc = hvm_set_cr0(cr0);   <------ nv_guest_cr[0] will get set here.
>>>> I am not familiar with SVM code. If you think this change may
>>>> impact nested SVM, then I will move the change to VMX-specific code.
>>>
>>> No, it doesn't affect SVM code. I was responding to Jan's
>>> suggestion to replace SVM's ns_cr0 with the new guest_cr[0].
>> So is it ok to change the code according to Jan's suggestion?
>
> No. My point was that there may be unintended consequences to the
> change and you should leave ns_cr0 alone.

Hi Jan,

Is it better to move the change to VMX-specific code or keep it as it
is now?

Best regards,
Yang
>>> On 11.10.13 at 03:01, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Boris Ostrovsky wrote on 2013-10-10:
>> On 10/09/2013 08:31 PM, Zhang, Yang Z wrote:
>>> Boris Ostrovsky wrote on 2013-10-09:
>>>> On 10/09/2013 03:28 AM, Zhang, Yang Z wrote:
>>>>> Boris Ostrovsky wrote on 2013-10-08:
>>>>>> On 10/08/2013 04:31 AM, Jan Beulich wrote:
>>>>>>
>>>>>>> Considering that this touches code common with nested SVM, I'd
>>>>>>> expect the SVM maintainers to have to approve of the change in any
>>>>>>> case.
>>>>>>>
>>>>>>> In particular I wonder whether this addition isn't obsoleting
>>>>>>> SVM's ns_cr0.
>>>>>>>
>>>>>> I am not sure whether ns_cr0 (replaced with nv_guest_cr[0]) would
>>>>>> then be updated in paths where it currently is not.
>>>>>>
>>>>>> For example in nsvm_vmcb_prepare4vmrun():
>>>>>>
>>>>>>     /* CR0 */
>>>>>>     svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
>>>>>>     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
>>>>>>     v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
>>>>>>     rc = hvm_set_cr0(cr0);   <------ nv_guest_cr[0] will get set here.
>>>>> I am not familiar with SVM code. If you think this change may
>>>>> impact nested SVM, then I will move the change to VMX-specific code.
>>>>
>>>> No, it doesn't affect SVM code. I was responding to Jan's
>>>> suggestion to replace SVM's ns_cr0 with the new guest_cr[0].
>>> So is it ok to change the code according to Jan's suggestion?
>>
>> No. My point was that there may be unintended consequences to the
>> change and you should leave ns_cr0 alone.
>
> Is it better to move the change to VMX-specific code or keep it as it
> is now?

Keep them as they're now as long as the SVM folks don't have an issue
with them (which apparently they don't as long as you don't try to do
the field replacement I had thought might be possible).

Jan