Hi Dan,

Sorry for the late reply; we have been busy with our college academics.
We will be doing this project (support for hugepages in tmem) and will
soon submit our design to this community.

We have gone through the code for hugepage/superpage support in Xen.
We found that whenever a domain requests a page, the allocation path
only handles order = 0 or order = 9, i.e. it allocates either a
singleton 4 KB page or a 2 MB superpage; no code exists for
0 < order < 9.

In our study we also came across the point that if a domain requests a
2 MB page and gets one, then the domain will not receive 4 KB pages for
the rest of its lifetime, which would mean that a single domain cannot
use normal pages and superpages at the same time. Is it really so?

Some part of the code says that if it is not possible to allocate a
superpage, a linked list of 512 4 KB pages is allocated instead, i.e.
the PoD splitting path (1 GB -> 2 MB or 2 MB -> 4 KB). Huge pages
improve performance because of their contiguity, so in the case above,
does that mean the performance is degraded?

We think the design will need to address this splitting problem.
According to the code such splitting is done in HAP, so what exactly
happens in shadow mode?

--
With Regards,
Ashwin Vasani
B.E. (Fourth Year)
Computer Engineering,
Pune Institute of Computer Technology.
+91 9960405802
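P.S. To make clear what we mean by "splitting", here is a simplified
sketch in C. The names and helper functions below are our own invention
for illustration only; they are not the actual Xen PoD/allocator code:

    #include <stdlib.h>

    /* Simplified illustration only -- not the real Xen allocator.  If an
     * order-9 (2 MB) allocation fails, fall back to building a linked
     * list of 512 separate order-0 (4 KB) pages.  This fallback is the
     * "splitting" we are asking about. */
    #define SUPERPAGE_ORDER 9
    #define SUPERPAGE_PAGES (1U << SUPERPAGE_ORDER)   /* 512 */

    struct page_node {
        struct page_node *next;
        void *frame;                    /* one 4 KB frame */
    };

    /* Placeholder for the hypervisor's real allocator; returns NULL when
     * no contiguous run of the requested order is available. */
    extern void *alloc_pages_of_order(unsigned int order);

    static struct page_node *alloc_2mb_or_split(void)
    {
        void *sp = alloc_pages_of_order(SUPERPAGE_ORDER);
        struct page_node *head = NULL;

        if (sp != NULL) {
            /* Contiguous 2 MB region: one node covers all of it. */
            head = malloc(sizeof(*head));
            head->next = NULL;
            head->frame = sp;
            return head;
        }

        /* Fallback: 512 separate, possibly scattered 4 KB pages. */
        for (unsigned int i = 0; i < SUPERPAGE_PAGES; i++) {
            struct page_node *n = malloc(sizeof(*n));
            n->frame = alloc_pages_of_order(0);
            n->next = head;
            head = n;
        }
        return head;
    }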
Hi Ash --

> We have gone through the code for hugepage/superpage support in Xen.
> We found that whenever a domain requests a page, the allocation path
> only handles order = 0 or order = 9, i.e. it allocates either a
> singleton 4 KB page or a 2 MB superpage; no code exists for
> 0 < order < 9.

I believe that is correct, although I think there is also code for 1GB
pages, at least for HVM domains.

> In our study we also came across the point that if a domain requests
> a 2 MB page and gets one, then the domain will not receive 4 KB pages
> for the rest of its lifetime, which would mean that a single domain
> cannot use normal pages and superpages at the same time. Is it really
> so?

This doesn't sound correct, though I am not an expert in the superpage
code.

> Some part of the code says that if it is not possible to allocate a
> superpage, a linked list of 512 4 KB pages is allocated instead, i.e.
> the PoD splitting path (1 GB -> 2 MB or 2 MB -> 4 KB). Huge pages
> improve performance because of their contiguity, so in the case
> above, does that mean the performance is degraded?

PoD (populate-on-demand) is used only for HVM (fully-virtualized)
domains but, yes, if a 2MB guest-physical page is emulated in Xen by
512 4KB host-physical pages, there is a performance degradation.

> We think the design will need to address this splitting problem.
> According to the code such splitting is done in HAP, so what exactly
> happens in shadow mode?

I think Tim Deegan is much more of an expert in this area than I am.
I have cc'ed him.

Note that tmem is primarily used in PV domains. It can be used in an
HVM domain, but that requires additional patches (PV-on-HVM patches
from Stefano Stabellini) in the guest and, although I got this working
and tested once, I do not know if it is still working. In any case my
knowledge of the memory code supporting HVM domains is very limited.

My thoughts about working on 2MB pages for tmem were somewhat
different: (1) change the in-guest balloon driver to only
relinquish/reclaim 2MB pages -- this is a patch that Jeremy
Fitzhardinge has worked on, but I don't know its status; (2) change
tmem's memory allocation to obtain only contiguous 2MB physical pages
from the Xen TLSF memory allocator; and (3) have tmem manage ephemeral
4KB tmem pages inside those physical 2MB pages in such a way that
tmem_ensure_avail_pages() could easily evict a whole physical 2MB page
(including all of the ephemeral 4KB tmem pages inside it).

I have not thought through a complete design for this... it may be very
difficult or nearly impossible. But this is a rough description of what
I was thinking about when I said a few weeks ago that re-working tmem
to work with 2MB pages would be a good project.
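To make (3) a bit more concrete, here is a very rough sketch of the
kind of bookkeeping I have in mind. All of the structure and function
names below are made up for illustration; none of them exist in the
current tmem code, and I have not verified that this approach is
workable:

    #include <stdint.h>

    /* Hypothetical bookkeeping: group ephemeral 4KB tmem pages inside a
     * contiguous 2MB physical chunk obtained from the TLSF allocator,
     * so that tmem_ensure_avail_pages() could free a whole 2MB chunk
     * (and everything in it) in one eviction. */
    #define PAGES_PER_2MB 512

    struct tmem_2mb_chunk {
        void *base;                        /* contiguous 2MB region from TLSF */
        uint64_t inuse[PAGES_PER_2MB / 64]; /* which 4KB slots hold data */
        unsigned int nr_inuse;
        struct tmem_2mb_chunk *next;       /* eviction order, e.g. LRU */
    };

    /* Hypothetical helpers that would have to exist: */
    extern void tmem_invalidate_slot(struct tmem_2mb_chunk *c, unsigned int slot);
    extern void free_contiguous_2mb(void *base);  /* return region to TLSF */

    /* Evict an entire 2MB chunk: drop every ephemeral 4KB page it holds,
     * then return the contiguous region to the allocator. */
    static void evict_2mb_chunk(struct tmem_2mb_chunk *c)
    {
        for (unsigned int slot = 0; slot < PAGES_PER_2MB; slot++)
            if (c->inuse[slot / 64] & (1ULL << (slot % 64)))
                tmem_invalidate_slot(c, slot);
        free_contiguous_2mb(c->base);
    }

The point of grouping the 4KB pages this way is that eviction always
hands back a fully empty, contiguous 2MB region, rather than
fragmenting it.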
Hi,

At 01:51 +0100 on 25 Oct (1287971515), Dan Magenheimer wrote:
> > We think the design will need to address this splitting problem.
> > According to the code such splitting is done in HAP, so what
> > exactly happens in shadow mode?
>
> I think Tim Deegan is much more of an expert in this area than I am.
> I have cc'ed him.

Shadow mode always uses 4k mappings for everything. It should be
possible to update it to use 2M mappings when the underlying memory is
both contiguous and all of the same type, but it would take a bit of
cunning to do it without making refcounting very expensive.

HAP uses 2M mappings when it can - even the PoD code tries not to
fragment into 4k mappings. But if you're running PoD you're also using
balloon drivers, and they always operate on 4K pages. AFAIK, all
existing balloon drivers will tend to fragment memory.

Cheers,

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
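P.S. To give a feel for the "contiguous and all of the same type"
condition, here is a rough illustrative check. The helper functions are
placeholders, not existing shadow-code interfaces:

    /* Illustrative only -- not existing shadow code.  A 2M shadow
     * mapping could only be used if all 512 backing frames are
     * machine-contiguous and have the same type; otherwise the shadow
     * must fall back to 4k mappings. */
    extern unsigned long gfn_to_mfn_sketch(unsigned long gfn);  /* placeholder */
    extern unsigned int  mfn_type_sketch(unsigned long mfn);    /* placeholder */

    static int can_use_2m_shadow(unsigned long first_gfn)
    {
        unsigned long first_mfn = gfn_to_mfn_sketch(first_gfn);
        unsigned int type = mfn_type_sketch(first_mfn);

        for (unsigned int i = 1; i < 512; i++) {
            unsigned long mfn = gfn_to_mfn_sketch(first_gfn + i);
            if (mfn != first_mfn + i || mfn_type_sketch(mfn) != type)
                return 0;   /* fragmented or mixed types: use 4k mappings */
        }
        return 1;
    }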
At 11:10 +0000 on 25 Jan (1295953836), ashwin wasani wrote:
> Hi,
> Can we modify the PS bit of a PDE using the page_info structure?

Please don't top-post; it makes it harder to follow the thread of
conversation.

No, you can't change any pagetable bits using struct page_info. The
page_info structures correspond to physical/machine addresses, and
pagetables to virtual addresses.

Tim.

--
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
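P.S. A rough sketch of the distinction, in case it helps. The struct
below is a simplified stand-in, not Xen's actual page_info definition:

    #include <stdint.h>

    /* The PS (page size) bit lives in the pagetable entry itself: bit 7
     * of an x86 PDE selects a 2MB mapping.  It is chosen when the PDE
     * is written, for one particular virtual-address mapping. */
    #define PAGE_PSE (1ULL << 7)

    typedef uint64_t pde_t;

    static pde_t make_2mb_pde(uint64_t maddr_2mb_aligned, uint64_t flags)
    {
        return maddr_2mb_aligned | flags | PAGE_PSE;
    }

    /* struct page_info, by contrast, is per-machine-frame bookkeeping
     * (owner, type, reference counts).  There is no "PS bit" in it, and
     * flipping bits here cannot change how any pagetable maps the frame.
     * (Simplified stand-in, not the real Xen definition.) */
    struct page_info_sketch {
        unsigned long count_info;
        unsigned long type_info;
    };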