This patch changes the P2M code to work with 1GB pages.

Signed-off-by: Wei Huang <wei.huang2@amd.com>
Acked-by: Dongxiao Xu <dongxiao.xu@intel.com>
Tim Deegan
2010-Feb-23 10:07 UTC
Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page
At 17:18 +0000 on 22 Feb (1266859128), Wei Huang wrote:
> This patch changes the P2M code to work with 1GB pages.
>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> Acked-by: Dongxiao Xu <dongxiao.xu@intel.com>

> @@ -1064,6 +1093,19 @@
>      if ( unlikely(d->is_dying) )
>          goto out_fail;
>
> +    /* Because PoD does not have cache list for 1GB pages, it has to remap
> +     * 1GB region to 2MB chunks for a retry. */
> +    if ( order == 18 )
> +    {
> +        gfn_aligned = (gfn >> order) << order;
> +        for( i = 0; i < (1 << order); i += (1 << 9) )
> +            set_p2m_entry(d, gfn_aligned + i, _mfn(POPULATE_ON_DEMAND_MFN), 9,
> +                          p2m_populate_on_demand);

I think you only need one set_p2m_entry call here - it will split the
1GB entry without needing another 511 calls.

Was the decision not to implement populate-on-demand for 1GB pages based
on not thinking it's a good idea or not wanting to do the work? :)
How much performance do PoD guests lose by not having it?

> +        audit_p2m(d);
> +        p2m_unlock(p2md);
> +        return 0;
> +    }
> +
>      /* If we're low, start a sweep */
>      if ( order == 9 && page_list_empty(&p2md->pod.super) )
>          p2m_pod_emergency_sweep_super(d);

> @@ -1196,6 +1238,7 @@
>      l1_pgentry_t *p2m_entry;
>      l1_pgentry_t entry_content;
>      l2_pgentry_t l2e_content;
> +    l3_pgentry_t l3e_content;
>      int rv=0;
>
>      if ( tb_init_done )

> @@ -1222,18 +1265,44 @@
>          goto out;
>  #endif
>      /*
> +     * Try to allocate 1GB page table if this feature is supported.
> +     *
>       * When using PAE Xen, we only allow 33 bits of pseudo-physical
>       * address in translated guests (i.e. 8 GBytes). This restriction
>       * comes from wanting to map the P2M table into the 16MB RO_MPT hole
>       * in Xen's address space for translated PV guests.
>       * When using AMD's NPT on PAE Xen, we are restricted to 4GB.
>       */

Please move this comment closer to the code it describes.

Also maybe a BUG_ON(CONFIG_PAGING_LEVELS == 3) in the order-18 case
would be useful, since otherwise it looks like order-18 allocations are
exempt from the restriction.

Actually, I don't see where you enforce that - do you?

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
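For illustration, here is a minimal sketch of what the order-18 path might
look like with both of Tim's suggestions folded in: the single
set_p2m_entry() call and the PAE guard. It assumes, as Tim says, that one
order-9 set_p2m_entry() call is enough to make the P2M code split the
existing 1GB entry; this is a sketch of the proposed change, not code from
the patch itself.

    /* Because PoD has no cache list for 1GB pages, remap the 1GB region
     * as 2MB PoD chunks and let the fault be retried at 2MB size. */
    if ( order == 18 )
    {
        /* 1GB mappings need 4-level paging; an order-18 request should
         * never reach this point on 3-level (PAE) Xen. */
        BUG_ON(CONFIG_PAGING_LEVELS == 3);

        gfn_aligned = (gfn >> order) << order;
        /* A single order-9 entry: installing it splits the 1GB PoD
         * entry, so the other 511 2MB slots inherit the PoD type
         * without a further 511 calls. */
        set_p2m_entry(d, gfn_aligned, _mfn(POPULATE_ON_DEMAND_MFN), 9,
                      p2m_populate_on_demand);
        audit_p2m(d);
        p2m_unlock(p2md);
        return 0;
    }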
Huang2, Wei
2010-Feb-23 16:37 UTC
RE: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page
I was hoping that someone else would pick up the 1GB PoD task. :P I can
try to implement this feature if deemed necessary.

-Wei

-----Original Message-----
From: Tim Deegan [mailto:Tim.Deegan@citrix.com]
Sent: Tuesday, February 23, 2010 4:07 AM
To: Huang2, Wei
Cc: 'xen-devel@lists.xensource.com'; Keir Fraser; Xu, Dongxiao
Subject: Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page

[snip]
Tim Deegan
2010-Feb-24 09:13 UTC
Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page
At 16:37 +0000 on 23 Feb (1266943030), Huang2, Wei wrote:
> I was hoping that someone else would pick up the 1GB PoD task. :P I
> can try to implement this feature if deemed necessary.

I think it will be OK - PoD is only useful with balloon drivers, which
currently don't even maintain 2MB superpages, so it's probably not worth
engineering up 1GB PoD.

Tim.

> [snip]

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
George Dunlap
2010-Feb-25 10:30 UTC
Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page
On Wed, Feb 24, 2010 at 9:13 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:
> At 16:37 +0000 on 23 Feb (1266943030), Huang2, Wei wrote:
>> I was hoping that someone else would pick up the 1GB PoD task. :P I
>> can try to implement this feature if deemed necessary.
>
> I think it will be OK - PoD is only useful with balloon drivers, which
> currently don't even maintain 2MB superpages, so it's probably not worth
> engineering up 1GB PoD.

Agreed. As long as everything still functions properly when PoD is
turned on, not having 1G PoD entries shouldn't be a big priority.

 -George