Allocate the memory for the HVM based on the scheme and the selection of nodes.

-dulloor

Signed-off-by: Dulloor <dulloor@gmail.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
George Dunlap
2010-Jul-05 09:55 UTC
Re: [Xen-devel] [XEN][vNUMA][PATCH 7/9] Build NUMA HVM
What's this line for:

>+        pod_mode = 1;
>+        mem_flags |= XENMEMF_populate_on_demand;
>+        IPRINTF("I SHOULDN'T BE HERE !!\n");

It's not clear what this patch does to the PoD logic... does it still need some work, or should I try harder to grok it? Have you tested it in PoD mode?

-George
With the NUMA allocator, PoD is simply disabled as of now. This debug statement seeped through when testing that. Will take care of it :) However, I did test PoD for any regressions.

-dulloor

On Mon, Jul 5, 2010 at 2:55 AM, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> What's this line for:
>
>>+        pod_mode = 1;
>>+        mem_flags |= XENMEMF_populate_on_demand;
>>+        IPRINTF("I SHOULDN'T BE HERE !!\n");
>
> It's not clear what this patch does to the PoD logic... does it still
> need some work, or should I try harder to grok it? Have you tested it
> in PoD mode?
>
> -George
George Dunlap
2010-Jul-06 10:09 UTC
Re: [Xen-devel] [XEN][vNUMA][PATCH 7/9] Build NUMA HVM
You mean, if NUMA is on, then PoD is disabled, but if NUMA is off, PoD still works? Or do you mean, this patch will break PoD functionality if accepted?

-George

On Tue, Jul 6, 2010 at 7:07 AM, Dulloor <dulloor@gmail.com> wrote:
> With the NUMA allocator, PoD is simply disabled as of now. This debug
> statement seeped through when testing that. Will take care of it :)
> However, I did test PoD for any regressions.
>
> -dulloor
>
> [...]
On Tue, Jul 6, 2010 at 3:09 AM, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> You mean, if NUMA is on, then PoD is disabled, but if NUMA is off, PoD
> still works?

Yes. PoD is disabled for a guest only if we choose a NUMA allocation strategy for it; otherwise things are the same as now. I plan to take care of PoD with NUMA allocation once this series is checked in.

> Or do you mean, this patch will break PoD functionality if accepted?
>
> -George
>
> [...]
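The gating Dulloor describes — request PoD only when the memory target is below maxmem and no NUMA allocation strategy is in use — can be sketched as below. This is an illustrative stand-alone sketch, not the actual libxc code: the enum, the helper name, and the flag value are hypothetical; only the `XENMEMF_populate_on_demand` name and the `nr_pages > target_pages` test come from the quoted diff.

```c
#include <stdint.h>

/* Hypothetical stand-in for the real Xen definition. */
#define XENMEMF_populate_on_demand (1U << 16)

/* Hypothetical NUMA allocation strategies for a guest. */
enum numa_strategy { NUMA_NONE, NUMA_CONFINE, NUMA_STRIPE, NUMA_SPLIT };

/* Decide the memory flags for guest build: PoD is requested only when
 * the boot target is smaller than maxmem AND no NUMA strategy is set. */
static unsigned int setup_mem_flags(uint64_t nr_pages, uint64_t target_pages,
                                    enum numa_strategy strategy, int *pod_mode)
{
    unsigned int mem_flags = 0;

    *pod_mode = 0;
    if (nr_pages > target_pages && strategy == NUMA_NONE) {
        *pod_mode = 1;
        mem_flags |= XENMEMF_populate_on_demand;
    }
    return mem_flags;
}
```

With this shape, a NUMA-placed guest silently falls back to fully populated memory, which matches the "PoD is simply disabled" behaviour described above; combining the two is left for a follow-up series.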
Allocate the memory for the HVM based on the scheme and the selection of nodes. Also, disable PoD for NUMA allocation schemes.

-dulloor

Signed-off-by: Dulloor <dulloor@gmail.com>
Dulloor wrote:
> Allocate the memory for the HVM based on the scheme and the selection
> of nodes. Also, disable PoD for NUMA allocation schemes.

Sorry for the delay, finally I found some time to play a bit with the code. To me it looks quite mature, so sometimes it is hard to see why things were done in a certain way, although it mostly gets clearer later. Some general comments:

1. I didn't manage to get striping to work. I tried several settings; it all ended up with an almost endless loop of:

xc: info: PHYSICAL MEMORY ALLOCATION (NODE {7,6,4,5}):
  4KB PAGES: 0x00000000000000c0
  2MB PAGES: 0x0000000000000000
  1GB PAGES: 0x0000000000000000

and then stopped creating the guest. I didn't investigate, though.

2. I don't like the limitation imposed on the guest's NUMA layout. Requiring the number of nodes and the number of VCPUs to be a power of 2 is too restrictive in my eyes. My older code could cope with a wild combination of memory, nodes and VCPUs; I remember testing a rather big matrix, including things like 3.5 GB of memory over 3 nodes and 5 VCPUs. As your patches 6 and 7 touch my work anyway, I'd also volunteer to fix this by basically rebasing my code onto your foundation. I left out the SLIT part for the first round, but I suppose this could be easily added at the end. I started to hack on this already and moved the "hole-punching" (VGA hole and PCI hole) from libxc into hvmloader. I then removed the limitation check and tried some setups, although there seems to still be an issue with the memory layout, as the guest Linux kernel crashes early (although the same guest setup works with QEMU).

3. Is it really necessary to clutter the hvm_info_table with so much information? Until now it is really small and static. I'd prefer to simply enter the values really needed: vCPU->vnode mapping, vnode memory size and SLIT information. AFAIK there is no compatibility promise for this interface between hvmloader and the Xen tools, so we could even make the arrays here statically declared at compile-time.

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12
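The slimmed-down interface suggested in point 3 could look roughly like the following. This is a sketch under assumptions, not the real hvm_info_table: the struct name, field names, and array bounds are all hypothetical; only the three kinds of information (vCPU->vnode mapping, per-vnode memory size, SLIT distances) come from the mail above, and the arrays are statically sized at compile time as suggested.

```c
#include <stdint.h>

/* Hypothetical compile-time bounds for the tools/hvmloader interface. */
#define HVM_MAX_VCPUS  128
#define HVM_MAX_VNODES 8

/* Minimal NUMA payload for hvm_info_table (illustrative only). */
struct hvm_numa_info {
    uint8_t  nr_vnodes;                              /* number of virtual nodes */
    uint8_t  vcpu_to_vnode[HVM_MAX_VCPUS];           /* vCPU -> vnode mapping */
    uint64_t vnode_mem_mb[HVM_MAX_VNODES];           /* memory per vnode, in MB */
    uint8_t  slit[HVM_MAX_VNODES * HVM_MAX_VNODES];  /* row-major node distances */
};
```

Since there is no cross-version compatibility promise between the tools and hvmloader, fixed-size arrays like these avoid variable-length encoding in the table; hvmloader would read this struct and emit the guest's SRAT/SLIT from it.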