<suravee.suthikulpanit@amd.com>
2013-Jul-04 19:36 UTC
[PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Fix assertion in __virt_to_maddr when starting nested SVM guest
in debug mode. Investigation has shown that svm_vmsave/svm_vmload
make use of __pa() with invalid address.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/hvm/svm/svm.c        |    4 ++--
 xen/include/asm-x86/hvm/svm/svm.h |   14 ++++++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index acd2d49..944569a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1809,7 +1809,7 @@ svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
         goto inject;
     }
 
-    svm_vmload(nv->nv_vvmcx);
+    nestedsvm_vmload(nv->nv_vvmcxaddr);
     /* State in L1 VMCB is stale now */
     v->arch.hvm_svm.vmcb_in_sync = 0;
 
@@ -1845,7 +1845,7 @@ svm_vmexit_do_vmsave(struct vmcb_struct *vmcb,
         goto inject;
     }
 
-    svm_vmsave(nv->nv_vvmcx);
+    nestedsvm_vmsave(nv->nv_vvmcxaddr);
 
     __update_guest_eip(regs, inst_len);
     return;
diff --git a/xen/include/asm-x86/hvm/svm/svm.h b/xen/include/asm-x86/hvm/svm/svm.h
index 64e7e25..909e8a1 100644
--- a/xen/include/asm-x86/hvm/svm/svm.h
+++ b/xen/include/asm-x86/hvm/svm/svm.h
@@ -55,6 +55,20 @@ static inline void svm_vmsave(void *vmcb)
         : : "a" (__pa(vmcb)) : "memory" );
 }
 
+static inline void nestedsvm_vmload(uint64_t vmcb)
+{
+    asm volatile (
+        ".byte 0x0f,0x01,0xda" /* vmload */
+        : : "a" (vmcb) : "memory" );
+}
+
+static inline void nestedsvm_vmsave(uint64_t vmcb)
+{
+    asm volatile (
+        ".byte 0x0f,0x01,0xdb" /* vmsave */
+        : : "a" (vmcb) : "memory" );
+}
+
 static inline void svm_invlpga(unsigned long vaddr, uint32_t asid)
 {
     asm volatile (
-- 
1.7.10.4
Andrew Cooper
2013-Jul-04 19:42 UTC
Re: [PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
On 04/07/13 20:36, suravee.suthikulpanit@amd.com wrote:
> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>
> Fix assertion in __virt_to_maddr when starting nested SVM guest
> in debug mode. Investigation has shown that svm_vmsave/svm_vmload
> make use of __pa() with invalid address.
>
> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>  xen/arch/x86/hvm/svm/svm.c        |    4 ++--
>  xen/include/asm-x86/hvm/svm/svm.h |   14 ++++++++++++++
>  2 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index acd2d49..944569a 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1809,7 +1809,7 @@ svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
>          goto inject;
>      }
>
> -    svm_vmload(nv->nv_vvmcx);
> +    nestedsvm_vmload(nv->nv_vvmcxaddr);
>      /* State in L1 VMCB is stale now */
>      v->arch.hvm_svm.vmcb_in_sync = 0;
>
> @@ -1845,7 +1845,7 @@ svm_vmexit_do_vmsave(struct vmcb_struct *vmcb,
>          goto inject;
>      }
>
> -    svm_vmsave(nv->nv_vvmcx);
> +    nestedsvm_vmsave(nv->nv_vvmcxaddr);
>
>      __update_guest_eip(regs, inst_len);
>      return;
> diff --git a/xen/include/asm-x86/hvm/svm/svm.h b/xen/include/asm-x86/hvm/svm/svm.h
> index 64e7e25..909e8a1 100644
> --- a/xen/include/asm-x86/hvm/svm/svm.h
> +++ b/xen/include/asm-x86/hvm/svm/svm.h
> @@ -55,6 +55,20 @@ static inline void svm_vmsave(void *vmcb)
>          : : "a" (__pa(vmcb)) : "memory" );
>  }
>
> +static inline void nestedsvm_vmload(uint64_t vmcb)

unsigned long if this is actually an address.

But more importantly, if virt_to_maddr() fails an assertion because the
virtual address is not a persistent mapping, what is going to happen
when the virtual mapping (potentially) changes while the vvmcx is in use?

~Andrew

> +{
> +    asm volatile (
> +        ".byte 0x0f,0x01,0xda" /* vmload */
> +        : : "a" (vmcb) : "memory" );
> +}
> +
> +static inline void nestedsvm_vmsave(uint64_t vmcb)
> +{
> +    asm volatile (
> +        ".byte 0x0f,0x01,0xdb" /* vmsave */
> +        : : "a" (vmcb) : "memory" );
> +}
> +
>  static inline void svm_invlpga(unsigned long vaddr, uint32_t asid)
>  {
>      asm volatile (
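[Editorial note: the assertion Andrew refers to stems from __virt_to_maddr() only being able to translate addresses inside Xen's direct map, while the vvmcx is mapped via map_domain_page_global() into a separate mapping area. A simplified, self-contained model of that range check follows; all layout constants are invented for illustration and do not reflect Xen's real memory layout.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Invented layout constants, for illustration only. */
#define FAKE_DIRECTMAP_START 0xffff830000000000ULL
#define FAKE_DIRECTMAP_END   0xffff840000000000ULL
#define FAKE_GLOBALMAP_VA    0xffff820000001000ULL  /* outside the direct map */

/* Simplified stand-in for the debug-build check in __virt_to_maddr():
 * only direct-mapped virtual addresses may be translated with __pa(). */
static bool virt_is_translatable(uint64_t va)
{
    return va >= FAKE_DIRECTMAP_START && va < FAKE_DIRECTMAP_END;
}
```

A Xen-heap-allocated VMCB falls inside the direct map and passes; a globally mapped vvmcx does not, which is why only the nested path tripped the assertion.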
Tim Deegan
2013-Jul-04 21:48 UTC
Re: [PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
At 20:42 +0100 on 04 Jul (1372970576), Andrew Cooper wrote:
> On 04/07/13 20:36, suravee.suthikulpanit@amd.com wrote:
> > From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> >
> > Fix assertion in __virt_to_maddr when starting nested SVM guest
> > in debug mode. Investigation has shown that svm_vmsave/svm_vmload
> > make use of __pa() with invalid address.
> >
> > Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> > [...]
> > +static inline void nestedsvm_vmload(uint64_t vmcb)
>
> unsigned long if this is actually an address.

IIUC this is a physical address, so paddr_t is the correct type.
Also, it might be nicer to call these svm_vm{save,load}_by_paddr()
or similar to make it clear what they do.

> But more importantly, if virt_to_maddr() fails an assertion because the
> virtual address is not a persistent mapping, what is going to happen
> when the virtual mapping (potentially) changes while the vvmcx is in use?

I think the virtual mapping is ok from that point of view -- it's mapped
with map_domain_page_global(). I worry that we might run out of mapping
slots if we keep a lot of these permanent mappings around, though.

Cheers,

Tim
Egger, Christoph
2013-Jul-05 07:47 UTC
Re: [PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
On 04.07.13 23:48, Tim Deegan wrote:
> At 20:42 +0100 on 04 Jul (1372970576), Andrew Cooper wrote:
>> On 04/07/13 20:36, suravee.suthikulpanit@amd.com wrote:
>>> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>>>
>>> Fix assertion in __virt_to_maddr when starting nested SVM guest
>>> in debug mode. Investigation has shown that svm_vmsave/svm_vmload
>>> make use of __pa() with invalid address.
>>> [...]
>>> +static inline void nestedsvm_vmload(uint64_t vmcb)
>>
>> unsigned long if this is actually an address.
>
> IIUC this is a physical address, so paddr_t is the correct type.

Right.

> Also, it might be nicer to call these svm_vm{save,load}_by_paddr()
> or similar to make it clear what they do.

I agree.

But why did the assertion never trigger when called from elsewhere
in svm_vm{load,save}?

>> But more importantly, if virt_to_maddr() fails an assertion because the
>> virtual address is not a persistent mapping, what is going to happen
>> when the virtual mapping (potentially) changes while the vvmcx is in use?
>
> I think the virtual mapping is ok from that point of view -- it's mapped
> with map_domain_page_global(). I worry that we might run out of mapping
> slots if we keep a lot of these permanent mappings around, though.

The number of mappings =
    number of guest hypervisors
    * number of virtual cpus per guest hypervisor

number of guest hypervisors =
    number of domains, excluding all domains where nestedhvm is
    not used even when turned on

Christoph
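[Editorial note: Christoph's bound can be written out as a trivial helper; the function name and the numbers below are illustrative only.]

```c
#include <assert.h>

/* Upper bound on the global mappings kept around for nested SVM:
 * one vvmcx mapping per vCPU of each guest that actually runs a
 * nested hypervisor. */
static unsigned int nested_vvmcx_mappings(unsigned int guest_hypervisors,
                                          unsigned int vcpus_per_guest)
{
    return guest_hypervisors * vcpus_per_guest;
}
```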
Jan Beulich
2013-Jul-05 07:53 UTC
Re: [PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
>>> On 04.07.13 at 23:48, Tim Deegan <tim@xen.org> wrote:
> At 20:42 +0100 on 04 Jul (1372970576), Andrew Cooper wrote:
>> On 04/07/13 20:36, suravee.suthikulpanit@amd.com wrote:
>>> +static inline void nestedsvm_vmload(uint64_t vmcb)
>>
>> unsigned long if this is actually an address.
>
> IIUC this is a physical address, so paddr_t is the correct type. Also,
> it might be nicer to call these svm_vm{save,load}_by_paddr() or similar
> to make it clear what they do.

So would I think. And the existing functions then could simply
wrap the new ones.

However, looking at the call sites of svm_vmexit_do_vm*(), I don't
think this is a host physical address in all cases: At least the uses
from svm_vmexit_do_vm*() in svm.c suggest that these are GPAs,
and hence can't be passed to vmload/vmsave without translation.

>> But more importantly, if virt_to_maddr() fails an assertion because the
>> virtual address is not a persistent mapping, what is going to happen
>> when the virtual mapping (potentially) changes while the vvmcx is in use?
>
> I think the virtual mapping is ok from that point of view -- it's mapped
> with map_domain_page_global().

And anyway, the virtual mapping isn't being used in the resulting
code.

> I worry that we might run out of mapping
> slots if we keep a lot of these permanent mappings around, though.

Afaict there's a single such mapping per vCPU, so not that much to
worry about I think.

Jan
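[Editorial note: Jan's point that the existing functions could simply wrap the new ones might look roughly like this. This is a sketch only: the `_pa` name follows Tim's naming suggestion, and the privileged VMLOAD instruction and Xen's __pa() are replaced by recording mocks so the delegation pattern can be demonstrated outside the hypervisor.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Mock: record the address VMLOAD would consume instead of executing it. */
static paddr_t last_vmload_pa;

/* New helper: takes a host physical address directly. In Xen this body
 * would be the inline asm ".byte 0x0f,0x01,0xda" with "a" (vmcb). */
static inline void svm_vmload_pa(paddr_t vmcb)
{
    last_vmload_pa = vmcb;
}

/* Mock of __pa(): pretend the direct map starts at this invented offset. */
#define MOCK_DIRECTMAP_BASE 0xffff830000000000ULL
static inline paddr_t mock_pa(const void *va)
{
    return (paddr_t)(uintptr_t)va - MOCK_DIRECTMAP_BASE;
}

/* Existing virtual-address helper, rewritten as a thin wrapper. */
static inline void svm_vmload(void *vmcb)
{
    svm_vmload_pa(mock_pa(vmcb));
}
```

This keeps a single copy of the instruction emission while making the address kind explicit at each call site.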
Jan Beulich
2013-Jul-05 07:54 UTC
Re: [PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
>>> On 05.07.13 at 09:47, "Egger, Christoph" <chegger@amazon.de> wrote:
> But why did the assertion never trigger when called from elsewhere
> in svm_vm{load,save}?

Because ordinary VMCBs get allocated from the Xen heap.

Jan
Suravee Suthikulpanit
2013-Jul-05 21:38 UTC
Re: [PATCH 1/1] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr
On 7/5/2013 2:53 AM, Jan Beulich wrote:
> [...]
>
> However, looking at the call sites of svm_vmexit_do_vm*(), I don't
> think this is a host physical address in all cases: At least the uses
> from svm_vmexit_do_vm*() in svm.c suggest that these are GPAs,
> and hence can't be passed to vmload/vmsave without translation.
>
> [...]
>
> Afaict there's a single such mapping per vCPU, so not that much to
> worry about I think.
>
> Jan

Thank you all for comments. I am sending out V2 in a separate thread.

Suravee