Dong, Eddie
2010-Aug-18 08:27 UTC
RE: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
> +
> +/* The exitcode is in native SVM/VMX format. The forced exitcode
> + * is in generic format.
> + */

Introducing a 3rd format of exitcode is over-complicated, IMO.

> +enum nestedhvm_vmexits
> +nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
> +    uint64_t exitcode)
> +{

I doubt the necessity of this kind of wrapper.

In single-layer virtualization, SVM and VMX each have their own handler for
each VM exit. Control goes from SVM/VMX to the common code only when a
certain common function is invoked, because the two sides have quite a few
differences: the savings from wrapping that function are really small, while
we pay with additional complexity on both the SVM and VMX sides, as well as
in readability and performance. Furthermore, it may limit the flexibility to
implement something new on either side.

Back to nested virtualization: I am not fully convinced we need a common
handler for the VM entry/exit, at least not for now. It is basically the
same situation as single-layer virtualization above. Rather, we would prefer
to jump from SVM/VMX to common code when a certain common service is
requested.

Will that be easier?

> + }
> +
> + /* host state has been restored */
> + }
> +
> + nestedsvm_vcpu_clgi(v);

This is SVM-specific; it is better called from the SVM code itself.

> +
> + /* Prepare for running the guest. Do some final SVM/VMX
> + * specific tweaks if necessary to make it work.
> + */
> + rc = hvm_nestedhvm_vcpu_vmexit(v, regs, exitcode);
> + hvm->nh_hostflags.fields.forcevmexit = 0;
> + if (rc) {
> + hvm->nh_hostflags.fields.vmentry = 0;
> + return NESTEDHVM_VMEXIT_FATALERROR;
> + }

Thx, Eddie
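To make the objection concrete, here is a minimal sketch of the pattern
under discussion: a common VM-exit wrapper that demuxes on a vendor-neutral
exitcode. Apart from enum nestedhvm_vmexits, nestedhvm_vcpu_vmexit() and
NESTEDHVM_VMEXIT_FATALERROR, which appear in the patch, every name below is
hypothetical.

/* Hypothetical vendor-neutral exit reasons -- the "3rd format"
 * of exitcode that the review objects to. */
enum generic_exitcode {
    GENERIC_EXIT_INTR,       /* external interrupt */
    GENERIC_EXIT_CR_ACCESS,  /* control-register access */
    GENERIC_EXIT_UNKNOWN,
};

enum nestedhvm_vmexits {
    NESTEDHVM_VMEXIT_DONE,        /* handled in common code */
    NESTEDHVM_VMEXIT_HOST,        /* let the L0 host handle it */
    NESTEDHVM_VMEXIT_FATALERROR,  /* unrecoverable */
};

enum nestedhvm_vmexits
nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
                      uint64_t exitcode)
{
    /* exitcode arrives in native SVM/VMX encoding, so the common
     * code must first translate it before it can demux at all. */
    enum generic_exitcode generic = exitcode_to_generic(v, exitcode);

    switch ( generic )
    {
    case GENERIC_EXIT_INTR:
        return NESTEDHVM_VMEXIT_HOST;
    case GENERIC_EXIT_CR_ACCESS:
        return nestedhvm_handle_cr(v, regs); /* common CR handling */
    default:
        return NESTEDHVM_VMEXIT_FATALERROR;
    }
}

The translation step is the cost Eddie points at: each vendor already has a
perfectly good native demux, so a generic format buys little.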
Keir Fraser
2010-Aug-18 08:36 UTC
Re: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
On 18/08/2010 09:27, "Dong, Eddie" <eddie.dong@intel.com> wrote:

>> +enum nestedhvm_vmexits
>> +nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
>> +    uint64_t exitcode)
>> +{
>
> I doubt the necessity of this kind of wrapper.
>
> [...]
>
> Back to nested virtualization: I am not fully convinced we need a common
> handler for the VM entry/exit, at least not for now. It is basically the
> same situation as single-layer virtualization above. Rather, we would
> prefer to jump from SVM/VMX to common code when a certain common service
> is requested.
>
> Will that be easier?

I'm sure there has to be conversion-and-demux anyway in SVM/VMX-specific
code. At which point you may as well break out to individual common handler
functions just where that makes sense, as you say. Also, I agree this model
fits better with what we do in the non-nested case.

 -- Keir
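A minimal sketch of the alternative Keir describes: the vendor exit handler
keeps its native demux and breaks out to common handlers only where that
makes sense. VMEXIT_VMRUN and VMEXIT_NPF are real SVM exitcodes and
svm_vmexit_handler() is the existing SVM entry point; the nestedhvm_*()
helpers are hypothetical.

static void svm_vmexit_handler(struct cpu_user_regs *regs)
{
    struct vcpu *v = current;
    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;

    switch ( vmcb->exitcode )
    {
    case VMEXIT_VMRUN:
        /* The L1 guest requested a nested entry; only here do we
         * enter the common nested-HVM service. */
        nestedhvm_vmrun(v, regs);
        break;
    case VMEXIT_NPF:
        /* Nested paging faults share their walk logic with the
         * VMX/EPT side, so call a common helper. */
        nestedhvm_nested_page_fault(v, regs);
        break;
    default:
        /* Everything else stays entirely SVM-specific, exactly as
         * in the non-nested case. */
        break;
    }
}

No exitcode translation is needed: the conversion happens implicitly at the
point where the vendor code decides to call a common service.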
Dong, Eddie
2010-Aug-19 02:46 UTC
RE: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
Keir Fraser wrote:
> On 18/08/2010 09:27, "Dong, Eddie" <eddie.dong@intel.com> wrote:
>> [...]
>> Will that be easier?
>
> I'm sure there has to be conversion-and-demux anyway in SVM/VMX-specific
> code. At which point you may as well break out to individual common
> handler functions just where that makes sense, as you say. Also, I agree
> this model fits better with what we do in the non-nested case.

Sounds reasonable :)

Moving those two generic entries into vendor-specific code makes it easier
for me in reading and rebasing, and leaves room for vendor-specific
optimization in the future. After that, we may revisit the necessity of the
remaining APIs in patches 4/5/7.

Thx, Eddie
Christoph Egger
2010-Aug-19 10:38 UTC
Re: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
On Thursday 19 August 2010 04:46:50 Dong, Eddie wrote:
> Keir Fraser wrote:
>> [...]
>>
>> I'm sure there has to be conversion-and-demux anyway in
>> SVM/VMX-specific code. At which point you may as well break out to
>> individual common handler functions just where that makes sense, as
>> you say. Also, I agree this model fits better with what we do in the
>> non-nested case.

I see the arch-specific code as the backend and the HVM code as the
frontend, not the other way around.

The vmentry/vmexit code is invoked from the arch-specific exit code; that
is not doable the other way around, due to the way the hardware works. The
vmentry/vmexit code then calls out to arch-specific code wherever access to
the VMCB/VMCS is needed.

Where I need Eddie's help is in finding the nuances in the common
vmentry/vmexit code that prevent him from making the VMX-specific code
work, from the algorithm point of view.

Christoph
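A sketch of the frontend/backend split Christoph describes, in the spirit
of the existing hvm_funcs table through which common HVM code already calls
into SVM/VMX. The hook table and all names below are hypothetical, not the
patch's actual interface; the NESTEDHVM_VMEXIT_* values reuse the enum
sketched earlier.

/* Backend hooks: every touch of vendor state (VMCB/VMCS) goes
 * through one of these. */
struct nestedhvm_backend_ops {
    int  (*save_l2_state)(struct vcpu *v);       /* read VMCB/VMCS */
    void (*restore_host_state)(struct vcpu *v);  /* write VMCB/VMCS */
};

/* Common frontend, invoked from the arch-specific exit handler. */
enum nestedhvm_vmexits
nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
                      uint64_t exitcode)
{
    const struct nestedhvm_backend_ops *ops = nestedhvm_ops(v);

    if ( ops->save_l2_state(v) )
        return NESTEDHVM_VMEXIT_FATALERROR;

    ops->restore_host_state(v);

    /* ... common bookkeeping; decide whether to reflect the exit
     * to the L1 guest or handle it in the L0 host ... */
    return NESTEDHVM_VMEXIT_DONE;
}

The open question in the thread is exactly how much of the algorithm can
live in this frontend before the VMX side hits nuances the common code
cannot express.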
Christoph Egger
2010-Aug-19 10:44 UTC
Re: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
On Thursday 19 August 2010 12:38:09 Christoph Egger wrote:
> On Thursday 19 August 2010 04:46:50 Dong, Eddie wrote:
>> [...]
>
> I see the arch-specific code as the backend and the HVM code as the
> frontend, not the other way around.
>
> [...]
>
> Where I need Eddie's help is in finding the nuances in the common
> vmentry/vmexit code that prevent him from making the VMX-specific code
> work, from the algorithm point of view.

Err... just to make it clear: the help needed from Eddie is not limited to
the common vmentry/vmexit code. It also includes the interfaces where they
don't fit VMX, etc.

Christoph