In a situation where several domains have been created with overbooked
resources, would it not be nice to have some resource reclamation
capability, which requires cooperation between Xen and the supported
GuestOS?

I believe Xen already has similar primitives for requesting resources
from the GuestOS side. It seems reasonable and practical to add the
reverse functionality: rescinding resource usage back from the GuestOS.
Such an approach may not be possible with UML or VMware, but with Xen it
should be, since modifications to the GuestOS can be made.

Thanks!

Xuxian

---
Xuxian Jiang  (765)494-2957
Department of Computer Sciences
Purdue University
http://www.cs.purdue.edu/homes/jiangx
---

-------------------------------------------------------
This SF.net email is sponsored by: SF.net Giveback Program.
SourceForge.net hosts over 70,000 Open Source Projects.
See the people who have HELPED US provide better services:
Click here: http://sourceforge.net/supporters.php
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/xen-devel
> In one situation, where several domains have been created using overbooked
> resources, would it be nice to have some resource reclamation capability
> which requires the cooperation between Xen and supported GuestOS?

Yes; we have something like this for memory (via the balloon driver),
although it is currently initiated entirely by the guest OS. In our
'grand vision', there'll be an economic aspect to the use/reservation of
resources (of any sort), and hence guest OSes will have an incentive to
reduce their use to the minimum acceptable level.

> I believe Xen has the similar primitives for requesting resources from
> GuestOS side. And it seems reasonable and practical to add the reverse
> functionality rescinding resource usage back from GuestOS. Such approach
> may not be possible considering UML or VMWare, but with Xen, it should be
> possible since modification on GuestOS can be made.

Modification is not even required for temporally multiplexed resources
(e.g. CPU, network tx shaping) /unless/ the guest OS is re-exporting
'guarantees' or reservations to overlying processes. For spatially
multiplexed things (mainly memory and disk storage, but also e.g.
receive packet filters) it's a bit trickier, but it can be achieved
with sufficient guest OS co-operation.

We'd be keen to have people work on this area (since it'd allow
overbooking / yield management, and fit into our economic model once
we've got that all working).

cheers,

S.
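The guest-initiated balloon behaviour described above can be sketched as
a toy simulation. All class and method names here are made up for
illustration; this is the accounting idea, not the real Xen balloon
interface.

```python
# Hypothetical sketch of balloon-driver accounting: the guest "inflates"
# a balloon by handing pages back to the hypervisor, shrinking its own
# usable footprint, and "deflates" to reclaim them later.

class Hypervisor:
    def __init__(self, free_pages):
        self.free_pages = free_pages        # pages not owned by any domain

class Guest:
    def __init__(self, hv, reservation):
        self.hv = hv
        self.reservation = reservation      # pages currently owned
        self.balloon = 0                    # pages returned via the balloon

    def inflate(self, pages):
        """Guest-initiated: give pages back to the hypervisor."""
        pages = min(pages, self.reservation)
        self.reservation -= pages
        self.balloon += pages
        self.hv.free_pages += pages
        return pages

    def deflate(self, pages):
        """Reclaim previously ballooned pages, if the hypervisor has them."""
        pages = min(pages, self.balloon, self.hv.free_pages)
        self.balloon -= pages
        self.reservation += pages
        self.hv.free_pages -= pages
        return pages

hv = Hypervisor(free_pages=768)
g = Guest(hv, reservation=256)
g.inflate(64)    # guest voluntarily shrinks from 256 to 192 pages
```

Nothing here forces the guest to inflate; as the message says, the
incentive would come from an economic model layered on top.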
> I believe Xen has the similar primitives for requesting resources from
> GuestOS side. And it seems reasonable and practical to add the reverse
> functionality rescinding resource usage back from GuestOS. Such approach
> may not be possible considering UML or VMWare, but with Xen, it should be
> possible since modification on GuestOS can be made.

Absolutely -- that's one of the big wins of para-virtualization.

For example, I'm looking forward to being able to send a message from
domain 0 to another domain saying "I wish you to release 4MB of memory,
as I have another domain that is prepared to pay more for it. You have
100ms to comply, or be terminated with extreme prejudice." ;-)

> In one situation, where several domains have been created using overbooked
> resources, would it be nice to have some resource reclamation capability
> which requires the cooperation between Xen and supported GuestOS?

The general plan is to avoid overbooking "guaranteed" resources to
domains, but to allow domains some additional resources on a best-effort
proportional-share basis.

For example, we've got plans to add support for a shared buffer cache
using 'unused' memory, along with a 'swap cache' to speed up swapping.
Volunteers welcome ;-)

Best,
Ian
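The "comply within the deadline or be terminated" protocol above can be
sketched as follows. This is a hypothetical simulation of the control
logic only; the function and domain classes are invented for
illustration and are not a real Xen mechanism.

```python
# Toy model of a deadline-bounded memory rescind: domain 0 asks a domain
# to release memory; a domain that misses the deadline is terminated.

import time

def rescind_memory(domain, mb_requested, deadline_s=0.1):
    """Ask `domain` to release memory; terminate it if the deadline passes.

    `domain` is any object with release(mb) -> mb_released and terminate().
    """
    deadline = time.monotonic() + deadline_s
    released = 0
    while released < mb_requested and time.monotonic() < deadline:
        released += domain.release(mb_requested - released)
    if released < mb_requested:
        domain.terminate()          # "extreme prejudice"
        return False
    return True

class CooperativeDomain:
    def __init__(self):
        self.terminated = False
    def release(self, mb):
        return mb                   # complies immediately
    def terminate(self):
        self.terminated = True

class StubbornDomain(CooperativeDomain):
    def release(self, mb):
        return 0                    # never gives anything back
```

A cooperative domain survives the request; a stubborn one is terminated
once the deadline expires.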
On Fri, 10 Oct 2003, Ian Pratt wrote:

> > I believe Xen has the similar primitives for requesting resources from
> > GuestOS side. And it seems reasonable and practical to add the reverse
> > functionality rescinding resource usage back from GuestOS. Such approach
> > may not be possible considering UML or VMWare, but with Xen, it should be
> > possible since modification on GuestOS can be made.
>
> Absolutely -- that's one of the big wins of para-virtualization.
>
> For example, I'm looking forward to being able to send a message
> from domain 0 to another domain saying "I wish you to release 4MB
> of memory as I have another domain that is prepared to pay more
> for it. You have 100ms to comply, or be terminated with extreme
> prejudice." ;-)
>
> > In one situation, where several domains have been created using overbooked
> > resources, would it be nice to have some resource reclamation capability
> > which requires the cooperation between Xen and supported GuestOS?
>
> The general plan is to avoid overbooking "guaranteed" resources
> to domains, but allow domains some additional resources on a best
> effort proportional share basis.
>
> For example, we've got plans to add support for a shared buffer
> cache using 'unused' memory, along with a 'swap cache' to speed
> up swapping.

Nice thought. Another idea related to this is to provide one memory
pool which can be shared across multiple domains according to some
policy, such as proportional sharing. A quick look at Xen shows that it
provides *very* strong memory protection via prior reservation. Such
clear-cut *slicing* of memory may not achieve overall optimal
performance.

Another question relates to the scalability of the Xen approach. I
believe the most limiting factor for scalability is the amount of
memory available. Given a fixed memory size, could it be possible to
emulate GuestOS memory with disk files via an mmap-like mechanism? It
is not necessary for the whole of GuestOS memory to be emulated; even
partial emulation could provide a *nice* and *desirable* tradeoff
between performance and scalability.

I have experimented with Xen, creating 10 domains each with 60M on top
of one host node with 750M, and afterwards failed to create a new one
without destroying existing ones. In some cases, we may want to degrade
the performance of newly created domains rather than outright *reject*
the creation of new domains.

Thanks!

Xuxian

---
Xuxian Jiang  (765)494-2957
Department of Computer Sciences
Purdue University
http://www.cs.purdue.edu/homes/jiangx
---
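The mmap-like mechanism proposed above can be demonstrated in miniature:
back a region of "guest memory" with a disk file, so the host OS pages
it in and out on demand. This just shows the mechanism using Python's
standard `mmap` module; a real VMM would do the equivalent at the
page-table level, and the 4MB size is an arbitrary example.

```python
# Sketch: emulate a guest's memory with a file-backed mapping. Loads and
# stores to `mem` touch RAM only for resident pages; the rest live on disk.

import mmap
import os
import tempfile

GUEST_MEM_SIZE = 4 * 1024 * 1024       # pretend the guest has 4MB

# Create a backing file on disk sized to the guest's "memory".
fd, path = tempfile.mkstemp()
os.ftruncate(fd, GUEST_MEM_SIZE)

# Map the file: the host OS pages this region in and out transparently.
mem = mmap.mmap(fd, GUEST_MEM_SIZE)

mem[0:4] = b"\xde\xad\xbe\xef"         # a "guest" write near address 0
mem[GUEST_MEM_SIZE - 1:] = b"\x42"     # and one at the top of memory

mem.flush()                            # write dirty pages back to disk
assert mem[0:4] == b"\xde\xad\xbe\xef"

mem.close()
os.close(fd)
os.unlink(path)
```

As the thread notes, access latency through such a region is non-uniform:
resident pages are RAM-speed, evicted ones cost a disk fault.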
> Another idea related to this is to provide one memory pool, which can be
> shared across multiple domains according to some policies, like
> proportional sharing. Quick check of Xen shows that Xen is providing
> *very* strong memory protection via prior reservation. Such clear-cut
> *slicing* of memory may not achieve overall optimal performance.

Our intention is to expose real resources to domains and have them
optimise their behaviour accordingly. If you are 'charging' domains for
the memory they use, it's in their interest to use the balloon driver
to try to reduce their memory footprint and give pages back to the
system, or to buy more pages if they can usefully exploit them. Our
intention would be to run the system at something like 75% memory
utilization, making the remaining memory available for use as shared
buffer cache and swap cache (on a proportional-share basis).

> Another question related to the scalability of the Xen approach. I believe
> the most limiting factor for scalability would be the amount of memory
> available. Considering fixed memory size, could it be potentially possible
> to emulate the GuestOS memory with disk files with mmap-similar mechanism.
> It is not necessary that whole GuestOS memory be emulated, but even
> partial emulation could provide *nice* and *desirable* tradeoff between
> performance and scalability.

Our goal was to have Xen comfortably support 100 domains. Results in
the SOSP paper show that, using the balloon driver to swap out all
pageable memory, it's possible to get the memory footprint of quiescent
domains down to just 2-3MB. Hence, running 1000 domains is already a
possibility.

> I have experimented with Xen to create 10 domains each with 60M on top of
> one host node with 750M and afterwards failed to create a new one without
> destroying existing ones. In some cases, we may want to degrade the
> performance of newly created domains but not obvious *rejection* to create
> new domains.

Within Xen, it's our view that we want strong admission control rather
than going down the rat hole of implementing paging within the VMM.
It's down to domain 0 to request that other domains free up some memory
if it wants to create a new domain.

Ian
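The proportional-share policy mentioned for the spare memory could be
sketched as below: each domain gets a slice of the shared pool in
proportion to a weight. The weights, pool size, and rounding scheme are
illustrative assumptions, not Xen's actual allocator.

```python
# Sketch: split a shared page pool among domains by weight, using
# largest-remainder rounding so every page is handed out exactly once.

def proportional_share(pool_pages, weights):
    """Return {domain: pages}, proportional to `weights`, summing to pool_pages."""
    total = sum(weights.values())
    exact = {d: pool_pages * w / total for d, w in weights.items()}
    shares = {d: int(x) for d, x in exact.items()}
    leftover = pool_pages - sum(shares.values())
    # Hand leftover pages to the domains with the largest remainders.
    for d in sorted(exact, key=lambda d: exact[d] - shares[d], reverse=True):
        if leftover == 0:
            break
        shares[d] += 1
        leftover -= 1
    return shares

shares = proportional_share(1000, {"dom1": 2, "dom2": 1, "dom3": 1})
# dom1 has twice the weight, so it gets half the pool: 500/250/250 pages.
```

The same division could be recomputed whenever a domain is created or
destroyed, re-sizing each domain's best-effort slice of buffer cache.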
On Sat, 11 Oct 2003, Ian Pratt wrote:

> > Another idea related to this is to provide one memory pool, which can be
> > shared across multiple domains according to some policies, like
> > proportional sharing. Quick check of Xen shows that Xen is providing
> > *very* strong memory protection via prior reservation. Such clear-cut
> > *slicing* of memory may not achieve overall optimal performance.
>
> Our intention is to expose real resources to domains and have
> them optimise their behaviour accordingly. If you are 'charging'
> domains for the memory they use, it's in their interests to use
> the balloon driver to try to reduce their memory footprint and
> give pages back to the system, or buy more pages if they can
> usefully exploit it. Our intention would be to run the system at
> something like 75% memory utilization, making the remaining
> memory available for use as shared buffer cache and swap cache
> (on a proportional share basis).

This is exactly what I want. It is able to create strong resource
isolation if required, but still has the flexibility to accommodate
peak load with dynamic resource allocation from a shared pool. Good
job, Xen! :-)

> > Another question related to the scalability of the Xen approach. I believe
> > the most limiting factor for scalability would be the amount of memory
> > available. Considering fixed memory size, could it be potentially possible
> > to emulate the GuestOS memory with disk files with mmap-similar mechanism.
> > It is not necessary that whole GuestOS memory be emulated, but even
> > partial emulation could provide *nice* and *desirable* tradeoff between
> > performance and scalability.
>
> Our goal was to have Xen comfortably support 100 domains. Results
> in the SOSP paper show that using the balloon driver to swap out
> all pageable memory it's possible to get the memory footprint of
> quiescent domains down to just 2-3MB. Hence, running 1000 domains
> is already a possibility.

In some sense, swapping all pageable memory out to disk is the reverse
of mmap'ing a disk file as part of memory (UML adopted this approach?).
With mmap, every GuestOS can be assigned some amount of exclusive
memory, and is still able to create a *memory disk* as a complement. An
mmap-based approach has the potential to increase the memory available
to a GuestOS (unlimited, due to virtual virtual memory?), though that
memory is not uniform in terms of access latency and throughput. The
non-uniformity may not be desirable, it might complicate the
implementation, and performance is certain to suffer. But it may
satisfy some *unreasonable* memory demands. Honestly, I don't know
whether such applications exist; the motivation is similar to the
original virtual memory idea -- emulate memory with a disk file --
though there is potentially *two-level* virtual memory here. Anyway,
the balloon driver has been proven quite effective and achieves the
design goal. The mmap-based idea may seem weird, and I might be totally
wrong; I just want to bring it up. Any criticism and comments are
welcome.

> > I have experimented with Xen to create 10 domains each with 60M on top of
> > one host node with 750M and afterwards failed to create a new one without
> > destroying existing ones. In some cases, we may want to degrade the
> > performance of newly created domains but not obvious *rejection* to create
> > new domains.
>
> Within Xen, it's our view that we want strong admission control
> rather than going down the rat hole of implementing paging within
> the VMM. It's down to domain0 to request other domains to free up
> some memory if it wants to create a new domain.

Just as you mentioned above, resources can be partitioned into two
parts: a private pool and a shared pool. The private pool is reserved
for individual domains, and the shared pool can be used for the shared
buffer cache and swap cache. A best-effort or proportional policy can
be applied to the shared pool. Another *weird* thought would be to
emulate the shared pool with seemingly *unlimited* disk space, but at
degraded performance.

The above ideas are just for your reference; they may not be practical
and could even be invalid. I really like the Xen work! It is so solid
and cool :-)

Thanks!

Xuxian
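The private-pool/shared-pool split and the strong admission control
discussed in this thread can be combined in one small sketch. The class,
the 75%/25% split, and the MB figures are illustrative assumptions
echoing the numbers in the thread, not Xen's real interface.

```python
# Sketch: keep ~25% of memory back as a shared pool and admit a new
# domain only if its guaranteed (private) reservation fits -- no
# overbooking of guarantees, and no paging inside the VMM. In a real
# system, domain 0 would ask existing domains to balloon memory out
# before retrying a rejected creation.

class MemoryManager:
    def __init__(self, total_mb, shared_fraction=0.25):
        self.shared_mb = int(total_mb * shared_fraction)   # buffer/swap cache pool
        self.private_free_mb = total_mb - self.shared_mb   # for guarantees
        self.domains = {}

    def create_domain(self, name, reservation_mb):
        """Strong admission control: reject rather than overbook."""
        if reservation_mb > self.private_free_mb:
            return False
        self.private_free_mb -= reservation_mb
        self.domains[name] = reservation_mb
        return True

    def destroy_domain(self, name):
        self.private_free_mb += self.domains.pop(name)

# Echoing the 750MB / 10 x 60MB experiment from the thread: with a 25%
# shared pool held back, only nine 60MB guarantees fit.
mm = MemoryManager(total_mb=750)
created = [mm.create_domain(f"dom{i}", 60) for i in range(1, 11)]
assert created == [True] * 9 + [False]
```

The rejected tenth domain is exactly where the "degrade instead of
reject" question bites: here the answer in the thread is to reclaim
from existing domains (via ballooning) rather than admit it anyway.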