Keir,

I have a question about PV kernel ring arrangement:

arch/x86/x86_32/traps.c:

    void hypercall_page_initialise(struct domain *d, void *hypercall_page)
    {
        memset(hypercall_page, 0xCC, PAGE_SIZE);
        if ( is_hvm_domain(d) )
            hvm_hypercall_page_initialise(d, hypercall_page);
        else if ( supervisor_mode_kernel )
            hypercall_page_initialise_ring0_kernel(hypercall_page);
        else
            hypercall_page_initialise_ring1_kernel(hypercall_page);
    }

arch/x86/x86_64/traps.c:

    void hypercall_page_initialise(struct domain *d, void *hypercall_page)
    {
        memset(hypercall_page, 0xCC, PAGE_SIZE);
        if ( is_hvm_domain(d) )
            hvm_hypercall_page_initialise(d, hypercall_page);
        else if ( !is_pv_32bit_domain(d) )
            hypercall_page_initialise_ring3_kernel(hypercall_page);
        else
            hypercall_page_initialise_ring1_kernel(hypercall_page);
    }

My questions:

1. For the x86_32 hypervisor, what is the purpose of the
   supervisor_mode_kernel PV mode, which runs the kernel at ring 0, and
   what are its advantages and disadvantages?
2. For the x86_64 hypervisor, why is there no supervisor_mode_kernel PV
   mode? It seems PV could do the same there ...

Thanks,
Jinsong
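For context: each per-ring initialiser fills the hypercall page with one
32-byte stub per hypercall; the stub loads the hypercall number and then
enters Xen by whatever mechanism matches the ring the guest kernel runs
in. Below is a minimal C sketch of the ring-1 case, assuming the classic
int $0x82 hypercall gate used by 32-bit PV kernels. It is illustrative
only, not the verbatim Xen code, and the names are made up.

    /* Sketch only, not the actual Xen implementation. */
    #include <stdint.h>
    #include <string.h>

    #define SKETCH_PAGE_SIZE 4096
    #define SLOT_SIZE        32   /* one hypercall stub per 32-byte slot */

    /* Emit: mov $nr,%eax ; int $0x82 ; ret */
    static void write_ring1_stub(uint8_t *p, uint32_t nr)
    {
        p[0] = 0xb8;                      /* mov imm32, %eax */
        memcpy(&p[1], &nr, sizeof(nr));   /* imm32 = hypercall number */
        p[5] = 0xcd;                      /* int imm8 */
        p[6] = 0x82;                      /* the 32-bit PV hypercall gate */
        p[7] = 0xc3;                      /* ret */
    }

    static void sketch_ring1_kernel(void *hypercall_page)
    {
        for ( unsigned int i = 0; i < SKETCH_PAGE_SIZE / SLOT_SIZE; i++ )
            write_ring1_stub((uint8_t *)hypercall_page + i * SLOT_SIZE, i);
    }

The ring-3 variant for 64-bit PV kernels has the same shape but enters
via syscall (bytes 0x0f 0x05) rather than int $0x82. The preceding
memset(hypercall_page, 0xCC, PAGE_SIZE) fills unused bytes with int3, so
a stray jump into the page faults loudly instead of executing garbage.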
On Mon, 2011-05-30 at 18:23 +0100, Liu, Jinsong wrote:
> [...code snipped...]
>
> 1. For the x86_32 hypervisor, what is the purpose of the
>    supervisor_mode_kernel PV mode, which runs the kernel at ring 0,
>    and what are its advantages and disadvantages?

supervisor_mode_kernel was a proof-of-concept project from about five
years ago to run a Xen PV kernel on a thin "hypervisor" shim. It
provides no actual virtualisation features (i.e. no multiple domains),
and there is no protection between the kernel and the hypervisor shim.
It was mostly a stunt to see what the minimum amount of scaffolding
needed to support a PV kernel might be; it was kind of the skanky
opposite of the pvops approach, I guess. IOW, the disadvantages far
outweigh the advantages.

> 2. For the x86_64 hypervisor, why is there no supervisor_mode_kernel
>    PV mode? It seems PV could do the same there ...

I don't recall whether supervisor_mode_kernel ever worked for 64 bit
(and has since bit-rotted and been removed) or whether it was never
written for 64 bit in the first place. Either way it doesn't exist now,
and there would be very little point in writing it.

Ian.
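To make the "no protection" point concrete: when the kernel already
executes at ring 0, a hypercall stub needs no trap or privilege
transition at all; in principle it can be a plain near call straight
into the shim's dispatcher. Here is a hedged sketch under that
assumption; the direct-call layout and the dispatcher argument are
illustrative guesses, not the actual
hypercall_page_initialise_ring0_kernel code.

    #include <stdint.h>
    #include <string.h>

    /* Emit: mov $nr,%eax ; call dispatcher ; ret
     * No int/sysenter: caller and callee are both ring 0. */
    static void write_ring0_stub(uint8_t *p, uint32_t nr, void *dispatcher)
    {
        int32_t rel;

        p[0] = 0xb8;                      /* mov imm32, %eax */
        memcpy(&p[1], &nr, sizeof(nr));   /* imm32 = hypercall number */
        p[5] = 0xe8;                      /* call rel32 */
        rel = (int32_t)((uint8_t *)dispatcher - (p + 10));
        memcpy(&p[6], &rel, sizeof(rel)); /* rel32 from end of call insn */
        p[10] = 0xc3;                     /* ret */
    }

That a guest stub can be a bare call into hypervisor text is exactly why
there is no protection boundary in this mode: nothing stops the kernel
from jumping anywhere else in the shim, either.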
Ian Campbell wrote:
> supervisor_mode_kernel was a proof-of-concept project from about five
> years ago to run a Xen PV kernel on a thin "hypervisor" shim.
> [...]
> Either way it doesn't exist now, and there would be very little point
> in writing it.

I see, thanks Ian! We just ran into a small issue where we needed to
confirm the PV kernel's ring ... :)

Jinsong
On Tue, 2011-05-31 at 10:21 +0100, Liu, Jinsong wrote:
> We just ran into a small issue where we needed to confirm the PV
> kernel's ring ... :)

How do you mean?
> From: Ian Campbell
> Sent: Tuesday, May 31, 2011 5:10 PM
>
> On Mon, 2011-05-30 at 18:23 +0100, Liu, Jinsong wrote:
> > [...code snipped...]
> >
> > 1. For the x86_32 hypervisor, what is the purpose of the
> >    supervisor_mode_kernel PV mode, which runs the kernel at ring 0,
> >    and what are its advantages and disadvantages?
>
> supervisor_mode_kernel was a proof-of-concept project from about five
> years ago to run a Xen PV kernel on a thin "hypervisor" shim. It
> provides no actual virtualisation features (i.e. no multiple domains),
> and there is no protection between the kernel and the hypervisor shim.
> It was mostly a stunt to see what the minimum amount of scaffolding
> needed to support a PV kernel might be; it was kind of the skanky
> opposite of the pvops approach, I guess. IOW, the disadvantages far
> outweigh the advantages.

Time to remove it then, since nobody actually uses it today and nobody
knows whether it still works, given its limited value?

> > 2. For the x86_64 hypervisor, why is there no supervisor_mode_kernel
> >    PV mode? It seems PV could do the same there ...
>
> I don't recall whether supervisor_mode_kernel ever worked for 64 bit
> (and has since bit-rotted and been removed) or whether it was never
> written for 64 bit in the first place. Either way it doesn't exist
> now, and there would be very little point in writing it.

That is a reasonable outcome. Once the basic verification had been done
on 32-bit Xen, there was no point in repeating the experiment on 64-bit,
since by then everyone understood the architecture well. :-)

Thanks,
Kevin