I think I asked this a while back already, but is it possible to map the
same page of memory to multiple pfns in an HVM DomU? That would resolve
the problems that occur when Windows hibernates my ballooned-out
memory...

Thanks

James
On 21/05/2011 10:41, "James Harper" <james.harper@bendigoit.com.au> wrote:

> I think I asked this a while back already, but is it possible to map the
> same page of memory to multiple pfns in an HVM DomU? That would resolve
> the problems that occur when Windows hibernates my ballooned-out
> memory...

I'm pretty sure it's not currently possible, but might not be hard to add
support for it. Tim will know better than me.

 -- Keir
> On 21/05/2011 10:41, "James Harper" <james.harper@bendigoit.com.au> wrote:
>
> > I think I asked this a while back already, but is it possible to map the
> > same page of memory to multiple pfns in an HVM DomU? That would resolve
> > the problems that occur when Windows hibernates my ballooned-out
> > memory...
>
> I'm pretty sure it's not currently possible, but might not be hard to add
> support for it. Tim will know better than me.
>

I think that was the answer last time :)

Should there be a performance impact if Windows tries to touch a page
that I have previously given back with decrease_reservation? Will that
invoke the PoD sweep?

James
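[For context, a minimal sketch of the hypercall under discussion: roughly how
a balloon driver hands a single pfn back to Xen with
XENMEM_decrease_reservation. The names follow the Xen public memory_op
interface; the include path and error handling vary by guest OS, and the
Windows-driver plumbing is omitted.]

#include <xen/interface/memory.h>   /* include path varies by guest OS */

/* Give one guest page back to Xen.  Once this succeeds the pfn has no
 * backing frame, so any later guest access to it (for example, by the
 * hibernate writer walking all of memory) cannot be satisfied from RAM. */
static int balloon_out_page(xen_pfn_t pfn)
{
    struct xen_memory_reservation reservation = {
        .nr_extents   = 1,
        .extent_order = 0,          /* a single 4k page */
        .mem_flags    = 0,
        .domid        = DOMID_SELF,
    };

    set_xen_guest_handle(reservation.extent_start, &pfn);

    /* Returns the number of extents actually released; 1 means success. */
    return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
}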
At 11:17 +0100 on 21 May (1305976662), James Harper wrote:

> > On 21/05/2011 10:41, "James Harper" <james.harper@bendigoit.com.au> wrote:
> >
> > > I think I asked this a while back already, but is it possible to map the
> > > same page of memory to multiple pfns in an HVM DomU? That would resolve
> > > the problems that occur when Windows hibernates my ballooned-out
> > > memory...
> >
> > I'm pretty sure it's not currently possible, but might not be hard to add
> > support for it. Tim will know better than me.
>
> I think that was the answer last time :)

The memory-sharing code would allow this, AFAICS, but it's not
super-mature just yet, and it relies on being able to undo the sharing on
writes, which defeats the purpose of ballooning. Since I'm already
tinkering in the p2m code I can look into it again.

Just allowing aliasing will break live migration (and probably
save/restore), since there's no easy way for the dom0 tools to know which
frames alias each other and all the accesses are done by PFN.

Is there no way to intercept this on the driver side? I think I mean: are
you writing out whole frames or does Windows compress them first?

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
> -----Original Message-----
[snip]
>
> Should there be a performance impact if Windows tries to touch a
> page that I have previously given back with decrease_reservation?
> Will that invoke the PoD sweep?
>

No. The p2m entry will be 'invalid', not 'PoD'. IIRC the sweep should
only be invoked if the cache is exhausted when trying to fix up a PoD
entry.

Paul
At 09:42 +0100 on 23 May (1306143771), Paul Durrant wrote:

> > -----Original Message-----
> [snip]
> >
> > Should there be a performance impact if Windows tries to touch a
> > page that I have previously given back with decrease_reservation?
> > Will that invoke the PoD sweep?
> >
>
> No. The p2m entry will be 'invalid', not 'PoD'. IIRC the sweep should
> only be invoked if the cache is exhausted when trying to fix up a PoD
> entry.

But yes, there will be a performance impact, because all accesses to the
missing page will be emulated by sending an ioreq to qemu, so it will
run _very_ slowly.

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
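[To make the distinction above concrete, a small illustrative toy model; this
is not the real Xen p2m code and the helper names are invented. A PoD entry
is fixed up from the PoD cache, with the sweep running only when that cache
is exhausted, while an 'invalid' entry has no frame at all, so every access
is bounced to the device model as an ioreq and emulated.]

#include <stdbool.h>
#include <stdio.h>

enum p2m_type { P2M_RAM, P2M_POD, P2M_INVALID };

/* Stand-ins for the real machinery, just to make the control flow visible. */
static bool pod_cache_populate(unsigned long gfn) { (void)gfn; return true; }
static void pod_emergency_sweep(void) { puts("PoD sweep: cache exhausted, reclaiming zero pages"); }
static void send_ioreq_to_qemu(unsigned long gfn) { printf("emulating access to gfn %#lx via qemu\n", gfn); }

static void handle_access(enum p2m_type type, unsigned long gfn)
{
    switch (type) {
    case P2M_RAM:
        break;                            /* backed by real RAM: fast path */
    case P2M_POD:
        if (!pod_cache_populate(gfn))     /* sweep only if the cache is empty */
            pod_emergency_sweep();
        break;
    case P2M_INVALID:
        send_ioreq_to_qemu(gfn);          /* the slow path hibernate keeps hitting */
        break;
    }
}

int main(void)
{
    handle_access(P2M_POD, 0x1000);       /* fixed up silently from the cache */
    handle_access(P2M_INVALID, 0x2000);   /* ballooned-out page: every access emulated */
    return 0;
}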
> -----Original Message-----
> From: Tim Deegan
> Sent: 23 May 2011 10:13
> To: Paul Durrant
> Cc: James Harper; Keir Fraser; xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] map memory holes with same page
>
> At 09:42 +0100 on 23 May (1306143771), Paul Durrant wrote:
> > > -----Original Message-----
> > [snip]
> > >
> > > Should there be a performance impact if Windows tries to touch a
> > > page that I have previously given back with decrease_reservation?
> > > Will that invoke the PoD sweep?
> > >
> >
> > No. The p2m entry will be 'invalid', not 'PoD'. IIRC the sweep should
> > only be invoked if the cache is exhausted when trying to fix up a PoD
> > entry.
>
> But yes, there will be a performance impact, because all accesses to
> the missing page will be emulated by sending an ioreq to qemu, so it
> will run _very_ slowly.
>

Good point. That would explain why a hibernate would be slow vs. a
crashdump (if it's going through and trying to compress ballooned-out
pages).

Paul
> At 09:42 +0100 on 23 May (1306143771), Paul Durrant wrote:
> > > -----Original Message-----
> > [snip]
> > >
> > > Should there be a performance impact if Windows tries to touch a
> > > page that I have previously given back with decrease_reservation?
> > > Will that invoke the PoD sweep?
> > >
> >
> > No. The p2m entry will be 'invalid', not 'PoD'. IIRC the sweep should
> > only be invoked if the cache is exhausted when trying to fix up a PoD
> > entry.
>
> But yes, there will be a performance impact because all accesses to the
> missing page will be emulated by sending an ioreq to qemu, so it will
> run _very_ slowly.
>

Thanks. That matches what I'm seeing. For hibernate, it appears that
Windows compresses pages into a buffer and the buffer is what I pass to
Dom0.

I guess the only way to solve this is to stop every access from hitting
emulation. Either I can map a common read-only page into every hole
(which doesn't sound like a workable solution based on feedback so far),
or Xen could keep a common read-only page and map it into a hole every
time it is accessed (and then move the page when another hole is
accessed), which would reduce the problem to emulation on the first
access to each different hole...

James
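[A toy model of the second idea above: one shared scratch frame that Xen
re-points at whichever hole was touched last. Nothing like this exists in
Xen; it only illustrates why just the first touch of each new hole would
still take the slow emulated path.]

#include <stdio.h>

#define NO_PFN (~0UL)

static unsigned long scratch_owner = NO_PFN;  /* hole currently mapping the scratch frame */

static void touch_ballooned_pfn(unsigned long pfn)
{
    if (pfn == scratch_owner) {
        /* Fast path: this hole already maps the shared read-only frame. */
        return;
    }
    /* Slow path (one emulated access): unmap the scratch frame from the
     * previous hole and map it read-only into this one. */
    printf("remap scratch frame from pfn %#lx to pfn %#lx\n", scratch_owner, pfn);
    scratch_owner = pfn;
}

int main(void)
{
    touch_ballooned_pfn(0x2000);   /* slow: first touch of this hole */
    touch_ballooned_pfn(0x2000);   /* fast: scratch frame already here */
    touch_ballooned_pfn(0x3000);   /* slow again: scratch frame moves */
    return 0;
}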
At 11:29 +0100 on 23 May (1306150145), James Harper wrote:

> I guess the only way to solve this is to stop every access from hitting
> emulation. Either I can map a common read-only page into every hole
> (which doesn't sound like a workable solution based on feedback so far),
> or Xen could keep a common read-only page and map it into a hole every
> time it is accessed (and then move the page when another hole is
> accessed), which would reduce the problem to emulation on the first
> access to each different hole...

That's pretty ugly, though. Is there really no way to tell Windows not
to bother hibernating your ballooned-out memory? Surely there must be
equivalent cases in real hardware: GART framebuffers and so on?

What happens when Windows tries to load all your ballooned memory back
in on resume, btw? Will it uncompress all those frames back onto
non-existent RAM? (i.e. would we have to have this scratch frame be
writable - and if so, would we need to properly discard writes to
correctly emulate missing RAM?)

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
On Mon, May 23, Tim Deegan wrote:

> At 09:42 +0100 on 23 May (1306143771), Paul Durrant wrote:
> > > -----Original Message-----
> > [snip]
> > >
> > > Should there be a performance impact if Windows tries to touch a
> > > page that I have previously given back with decrease_reservation?
> > > Will that invoke the PoD sweep?
> > >
> >
> > No. The p2m entry will be 'invalid', not 'PoD'. IIRC the sweep should
> > only be invoked if the cache is exhausted when trying to fix up a PoD
> > entry.
>
> But yes, there will be a performance impact because all accesses to the
> missing page will be emulated by sending an ioreq to qemu, so it will
> run _very_ slowly.

Isn't that what I just "fixed" for kdump, the new get_mem_type hvmop?
Could that be reused for this Windows issue?

Olaf
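[For reference, roughly what using the get_mem_type hvmop Olaf mentions could
look like from a guest driver, e.g. letting a kdump-style path skip pfns that
have no backing RAM. This assumes the Xen public headers are available; the
exact field names of xen_hvm_get_mem_type are from memory and may differ
slightly from the real header.]

#include <xen/interface/hvm/hvm_op.h>   /* include path varies by guest OS */

/* Returns nonzero if the pfn is plain RAM and can be read without every
 * access being bounced through qemu emulation. */
static int pfn_is_backed_by_ram(uint64_t pfn)
{
    struct xen_hvm_get_mem_type arg = {
        .domid = DOMID_SELF,            /* query our own domain */
        .pfn   = pfn,
    };

    if (HYPERVISOR_hvm_op(HVMOP_get_mem_type, &arg) != 0)
        return 0;                       /* be conservative on error */

    /* A ballooned-out hole is not reported as RAM, so touching it would
     * take the slow emulated path discussed above. */
    return arg.mem_type == HVMMEM_ram_rw || arg.mem_type == HVMMEM_ram_ro;
}

[As the reply below notes, this only helps where the driver actually sees the
pfns it is asked to write, which is not the case for hibernate's compressed
buffers.]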
Surprisingly, resume from hibernate is very fast. Maybe Windows knows the
pages are already scrubbed or something...

I've asked the "don't hibernate these pages" question on ntdev but I
think they're sick of my absurd questions :)

Sent from my iPhone

On 23/05/2011, at 20:37, "Tim Deegan" <Tim.Deegan@citrix.com> wrote:

> At 11:29 +0100 on 23 May (1306150145), James Harper wrote:
> > I guess the only way to solve this is to stop every access from hitting
> > emulation. Either I can map a common read-only page into every hole
> > (which doesn't sound like a workable solution based on feedback so far),
> > or Xen could keep a common read-only page and map it into a hole every
> > time it is accessed (and then move the page when another hole is
> > accessed), which would reduce the problem to emulation on the first
> > access to each different hole...
>
> That's pretty ugly, though. Is there really no way to tell Windows not
> to bother hibernating your ballooned-out memory? Surely there must be
> equivalent cases in real hardware: GART framebuffers and so on?
>
> What happens when Windows tries to load all your ballooned memory back
> in on resume, btw? Will it uncompress all those frames back onto
> non-existent RAM? (i.e. would we have to have this scratch frame be
> writable - and if so, would we need to properly discard writes to
> correctly emulate missing RAM?)
>
> Cheers,
>
> Tim.
At 12:03 +0100 on 23 May (1306152214), Olaf Hering wrote:

> On Mon, May 23, Tim Deegan wrote:
>
> > At 09:42 +0100 on 23 May (1306143771), Paul Durrant wrote:
> > > > -----Original Message-----
> > > [snip]
> > > >
> > > > Should there be a performance impact if Windows tries to touch a
> > > > page that I have previously given back with decrease_reservation?
> > > > Will that invoke the PoD sweep?
> > > >
> > >
> > > No. The p2m entry will be 'invalid', not 'PoD'. IIRC the sweep should
> > > only be invoked if the cache is exhausted when trying to fix up a PoD
> > > entry.
> >
> > But yes, there will be a performance impact because all accesses to the
> > missing page will be emulated by sending an ioreq to qemu, so it will
> > run _very_ slowly.
>
> Isn't that what I just "fixed" for kdump, the new get_mem_type hvmop?
> Could that be reused for this Windows issue?

Unfortunately, hibernate tries to compress the pages before handing them
over to the driver, so there isn't a point where the driver can avoid
the access. If there were such a point, the balloon driver could work
around it from its own records, without even needing the hypercall.

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
> > > But yes, there will be a performance impact because all accesses to the
> > > missing page will be emulated by sending an ioreq to qemu, so it will
> > > run _very_ slowly.
> >
> > Isn't that what I just "fixed" for kdump, the new get_mem_type hvmop?
> > Could that be reused for this Windows issue?
>
> Unfortunately, hibernate tries to compress the pages before handing them
> over to the driver, so there isn't a point where the driver can avoid
> the access. If there were such a point, the balloon driver could work
> around it from its own records, without even needing the hypercall.
>

A hypercall might be faster than wading through a list of pfns,
especially in the dump driver, where you are supposed to trust as little
of the system as possible.

James