Change page allocation code in Xen tools. The allocation request now
starts with 1GB; if that fails, it falls back to 2MB and then 4KB.

Signed-off-by: Wei Huang <wei.huang2@amd.com>
Acked-by: Dongxiao Xu <dongxiao.xu@intel.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
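The fallback order the patch implements (try 1GB, then 2MB, then plain 4KB pages) depends on alignment as well as on the populate hypercall succeeding. A minimal standalone sketch of the order-selection rule, using the shift values from the patch; the helper name `pick_extent_order` is ours for illustration and is not part of libxc:

```c
/* Shift values as defined in the patch: a 2MB superpage covers 2^9
 * 4KB pages, a 1GB superpage covers 2^18 of them. */
#define SUPERPAGE_2MB_SHIFT 9
#define SUPERPAGE_1GB_SHIFT 18

/* Hypothetical helper (not in the patch): pick the largest extent
 * order usable at pfn 'cur_pages' with 'remaining' 4KB pages left.
 * An order is usable only if the current pfn is aligned to it and
 * at least one full extent of that size still fits. */
static unsigned int pick_extent_order(unsigned long cur_pages,
                                      unsigned long remaining)
{
    if ( ((cur_pages & ((1UL << SUPERPAGE_1GB_SHIFT) - 1)) == 0) &&
         (remaining >= (1UL << SUPERPAGE_1GB_SHIFT)) )
        return SUPERPAGE_1GB_SHIFT;   /* try 1GB first */
    if ( ((cur_pages & ((1UL << SUPERPAGE_2MB_SHIFT) - 1)) == 0) &&
         (remaining >= (1UL << SUPERPAGE_2MB_SHIFT)) )
        return SUPERPAGE_2MB_SHIFT;   /* then 2MB */
    return 0;                         /* finally plain 4KB pages */
}
```

In the real loop this decision is interleaved with the hypercall itself: a failed 1GB `XENMEM_populate_physmap` attempt (done == 0) drops the loop to the 2MB path rather than a helper deciding everything up front.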
Tim Deegan
2010-Feb-23 09:38 UTC
Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
At 17:18 +0000 on 22 Feb (1266859100), Wei Huang wrote:
> Change page allocation code in Xen tools. The allocation request now
> starts with 1GB; if that fails, then falls back to 2MB and then 4KB.
>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> Acked-by: Dongxiao Xu <dongxiao.xu@intel.com>

Content-Description: 2-xen-hap-fix-tools.patch

> # HG changeset patch
> # User huangwei@huangwei.amd.com
> # Date 1266853449 21600
> # Node ID c9b45664b423e11003358944bb8e6e976e735301
> # Parent 1d166c5703256ab97225c6ae46ac87dd5bd07e89
> fix the tools to support 1GB. Create 1GB pages if possible; otherwise falls back to 2MB then 4KB.
>
> diff -r 1d166c570325 -r c9b45664b423 tools/libxc/xc_hvm_build.c
> --- a/tools/libxc/xc_hvm_build.c  Mon Feb 22 09:44:04 2010 -0600
> +++ b/tools/libxc/xc_hvm_build.c  Mon Feb 22 09:44:09 2010 -0600
> @@ -19,8 +19,10 @@
>
>  #include <xen/libelf/libelf.h>
>
> -#define SUPERPAGE_PFN_SHIFT   9
> -#define SUPERPAGE_NR_PFNS     (1UL << SUPERPAGE_PFN_SHIFT)
> +#define SUPERPAGE_2MB_SHIFT   9
> +#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
> +#define SUPERPAGE_1GB_SHIFT   18
> +#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
>
>  #define SPECIALPAGE_BUFIOREQ 0
>  #define SPECIALPAGE_XENSTORE 1
> @@ -117,6 +119,8 @@
>      uint64_t v_start, v_end;
>      int rc;
>      xen_capabilities_info_t caps;
> +    unsigned long stat_normal_pages = 0, stat_2mb_pages = 0,
> +        stat_1gb_pages = 0;
>      int pod_mode = 0;
>
>
> @@ -166,35 +170,43 @@
>
>      /*
>       * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
> -     * We allocate pages in batches of no more than 8MB to ensure that
> -     * we can be preempted and hence dom0 remains responsive.
> +     *
> +     * We attempt to allocate 1GB pages if possible. It falls back on 2MB
> +     * pages if 1GB allocation fails. 4KB pages will be used eventually if
> +     * both fail.
> +     *
> +     * Under 2MB mode, we allocate pages in batches of no more than 8MB to
> +     * ensure that we can be preempted and hence dom0 remains responsive.
>       */
>      rc = xc_domain_memory_populate_physmap(
>          xc_handle, dom, 0xa0, 0, 0, &page_array[0x00]);
>      cur_pages = 0xc0;
> +    stat_normal_pages = 0xc0;
>      while ( (rc == 0) && (nr_pages > cur_pages) )
>      {
>          /* Clip count to maximum 8MB extent. */

ITYM 1GB here.

>          unsigned long count = nr_pages - cur_pages;
> -        if ( count > 2048 )
> -            count = 2048;
> +        unsigned long max_pages = SUPERPAGE_1GB_NR_PFNS;
>
> -        /* Clip partial superpage extents to superpage boundaries. */
> -        if ( ((cur_pages & (SUPERPAGE_NR_PFNS-1)) != 0) &&
> -             (count > (-cur_pages & (SUPERPAGE_NR_PFNS-1))) )
> -            count = -cur_pages & (SUPERPAGE_NR_PFNS-1); /* clip s.p. tail */
> -        else if ( ((count & (SUPERPAGE_NR_PFNS-1)) != 0) &&
> -                  (count > SUPERPAGE_NR_PFNS) )
> -            count &= ~(SUPERPAGE_NR_PFNS - 1); /* clip non-s.p. tail */
> +        if ( count > max_pages )
> +            count = max_pages;
> +
> +        /* Take care the corner cases of super page tails */
> +        if ( ((cur_pages & (SUPERPAGE_1GB_NR_PFNS-1)) != 0) &&
> +             (count > (-cur_pages & (SUPERPAGE_1GB_NR_PFNS-1))) )
> +            count = -cur_pages & (SUPERPAGE_1GB_NR_PFNS-1);
> +        else if ( ((count & (SUPERPAGE_1GB_NR_PFNS-1)) != 0) &&
> +                  (count > SUPERPAGE_1GB_NR_PFNS) )
> +            count &= ~(SUPERPAGE_1GB_NR_PFNS - 1);

This logic is overkill since you allocate at most one 1GB page in each
pass. In fact, given that you test for <1GB immediately below, I think
you can just drop this 'tails' chunk entirely.

>
> -        /* Attempt to allocate superpage extents. */
> -        if ( ((count | cur_pages) & (SUPERPAGE_NR_PFNS - 1)) == 0 )
> +        /* Attemp to allocate 1GB super page */
> +        if ( ((count | cur_pages) & (SUPERPAGE_1GB_NR_PFNS - 1)) == 0 )
>          {
>              long done;
> -            xen_pfn_t sp_extents[count >> SUPERPAGE_PFN_SHIFT];
> +            xen_pfn_t sp_extents[count >> SUPERPAGE_1GB_SHIFT];
>              struct xen_memory_reservation sp_req = {
> -                .nr_extents   = count >> SUPERPAGE_PFN_SHIFT,
> -                .extent_order = SUPERPAGE_PFN_SHIFT,
> +                .nr_extents   = count >> SUPERPAGE_1GB_SHIFT,
> +                .extent_order = SUPERPAGE_1GB_SHIFT,
>                  .domid        = dom
>              };
>
> @@ -203,11 +215,12 @@
>
>              set_xen_guest_handle(sp_req.extent_start, sp_extents);
>              for ( i = 0; i < sp_req.nr_extents; i++ )
> -                sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_PFN_SHIFT)];
> +                sp_extents[i] = page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
>              done = xc_memory_op(xc_handle, XENMEM_populate_physmap, &sp_req);
>              if ( done > 0 )
>              {
> -                done <<= SUPERPAGE_PFN_SHIFT;
> +                stat_1gb_pages += done;
> +                done <<= SUPERPAGE_1GB_SHIFT;
>                  if ( pod_mode && target_pages > cur_pages )
>                  {
>                      int d = target_pages - cur_pages;
> @@ -218,12 +231,60 @@
>              }
>          }
>
> +        if ( count != 0 )
> +        {
> +            max_pages = 2048;

Call this (SUPERPAGE_2MB_NR_PFNS * 4)?

Cheers,

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
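An aside on the clipping code under discussion: the `/* clip s.p. tail */` expression relies on a standard power-of-two alignment identity — for an unsigned `cur_pages` and a power-of-two extent size `nr_pfns`, `-cur_pages & (nr_pfns - 1)` is the number of pages from `cur_pages` up to the next `nr_pfns`-aligned boundary (zero when already aligned). A small self-contained illustration; the function name is ours, not libxc's:

```c
/* The clipping expression from the patch: for a power-of-two extent
 * size 'nr_pfns', (-cur_pages & (nr_pfns - 1)) counts the 4KB pages
 * from cur_pages up to the next nr_pfns-aligned boundary. Unary minus
 * on an unsigned value is well-defined modular arithmetic in C. */
static unsigned long pages_to_next_boundary(unsigned long cur_pages,
                                            unsigned long nr_pfns)
{
    return -cur_pages & (nr_pfns - 1);
}
```

For example, guest allocation starts at pfn 0xc0 (192), so 320 more 4KB pages are needed before the first 2MB (512-page) boundary, which is why the first pass through the loop always allocates a 4KB tail.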
Tim Deegan
2010-Feb-23 13:00 UTC
Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
At 17:18 +0000 on 22 Feb (1266859100), Wei Huang wrote:
> Change page allocation code in Xen tools. The allocation request now
> starts with 1GB; if that fails, then falls back to 2MB and then 4KB.

Can we have an equivalent patch for the save/restore path please? That
took a while to catch up when 2MB superpages were introduced.

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
Huang2, Wei
2010-Feb-23 16:22 UTC
RE: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
Tim,

Thanks for the comments. I will fix them according to your comments,
along with making save/restore work.

Best,
-Wei

-----Original Message-----
From: Tim Deegan [mailto:Tim.Deegan@citrix.com]
Sent: Tuesday, February 23, 2010 3:39 AM
To: Huang2, Wei
Cc: 'xen-devel@lists.xensource.com'; Keir Fraser; Xu, Dongxiao
Subject: Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page

At 17:18 +0000 on 22 Feb (1266859100), Wei Huang wrote:
> Change page allocation code in Xen tools. The allocation request now
> starts with 1GB; if that fails, then falls back to 2MB and then 4KB.
Dan Magenheimer
2010-Feb-23 17:19 UTC
RE: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
I'll bet save/restore/migration for 1GB pages will be a fun challenge.

At least please set the "no_migrate" flag automatically for any domain
that uses 1GB pages unless/until live migration is supported.

> -----Original Message-----
> From: Tim Deegan [mailto:Tim.Deegan@citrix.com]
> Sent: Tuesday, February 23, 2010 6:01 AM
> To: Wei Huang
> Cc: Keir@acsinet12.oracle.com; Xu, Dongxiao; xen-devel@lists.xensource.com; Fraser
> Subject: Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
>
> At 17:18 +0000 on 22 Feb (1266859100), Wei Huang wrote:
> > Change page allocation code in Xen tools. The allocation request now
> > starts with 1GB; if that fails, then falls back to 2MB and then 4KB.
>
> Can we have an equivalent patch for the save/restore path please? That
> took a while to catch up when 2MB superpages were introduced.
>
> Tim.
Keir Fraser
2010-Feb-23 18:09 UTC
Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
The use of 1GB extents is just a (small-ish) performance win. If you can
only back with 4kB or 2MB extents on the target host it doesn't matter
that much.

 -- Keir

On 23/02/2010 17:19, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> I'll bet save/restore/migration for 1GB pages will be a fun challenge.
>
> At least please set the "no_migrate" flag automatically for
> any domain that uses 1GB pages unless/until live migration
> is supported.
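For concreteness about why smaller backing works: with a 4KB base page, extent order 9 gives 2MB and order 18 gives 1GB, so any guest region populated as one 1GB extent on the source can equally be backed by 512 2MB extents or 262144 4KB pages on the receiving host. A quick check of that arithmetic (macro names ours, mirroring the patch's shift values):

```c
/* Extent sizes as multiples of the 4KB base page, matching the
 * SUPERPAGE_2MB_SHIFT (9) and SUPERPAGE_1GB_SHIFT (18) values. */
#define PAGE_SIZE_4KB 4096UL
#define PFNS_PER_2MB  (1UL << 9)    /* 512 base pages per 2MB extent */
#define PFNS_PER_1GB  (1UL << 18)   /* 262144 base pages per 1GB extent */
```

Only the guest-physical contents must match across save/restore; the host extent order backing each region is free to differ.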
Dan Magenheimer
2010-Feb-23 20:09 UTC
RE: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
Not sure what your point is. I was just suggesting that if
save/restore/migration doesn't work anyway, setting no_migrate
should result in a nicer error message than otherwise, and serves
as a clear "TO DO" marker for developers looking for a "fun" project.

> -----Original Message-----
> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
> Sent: Tuesday, February 23, 2010 11:10 AM
> To: Dan Magenheimer; Tim Deegan; Wei Huang
> Cc: Keir@acsinet12.oracle.com; Xu, Dongxiao; xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
>
> The use of 1GB extents is just a (small-ish) performance win. If you can
> only back with 4kB or 2MB extents on the target host it doesn't matter
> that much.
>
> -- Keir
Keir Fraser
2010-Feb-23 20:54 UTC
Re: [Xen-devel] [PATCH][2/4] Enable 1GB for Xen HVM host page
Save/restore/migrate will work just fine as is. You just won't get 1GB
pages at the receiving end.

 -- Keir

On 23/02/2010 20:09, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:

> Not sure what your point is. I was just suggesting that if
> save/restore/migration doesn't work anyway, setting no_migrate
> should result in a nicer error message than otherwise, and serves
> as a clear "TO DO" marker for developers looking for a
> "fun" project.