Hello,

On the xensource.com website we can read:

"Per VM Resource Guarantees
Xen provides superb resource partitioning, for CPU, memory, and block and network I/O. This resource protection model leads to improved security because guests and drivers are DoS-proof. Xen is fully open to scrutiny by the security community and its security is continuously tested. Xen is also the foundation for a Multi-Level Secure system architecture being developed by XenSource, IBM and Intel."

From my point of view, this paragraph gives the impression that Xen provides QoS mechanisms for managing VM resource allocation at all levels. However, I'm having a hard time figuring out how Xen provides, and allows for management of, those resource guarantees, particularly regarding block I/O. Any hints?

TIA,
r
On 11/30/06, Rodrigo Borges Pereira <rbp@netcanvas.com> wrote:
> From my point of view, this paragraph gives the impression that Xen provides QoS mechanisms for managing VM resource allocation at
> all levels. However, I'm having a hard time figuring out how Xen provides, and allows for management of, those resource guarantees,
> particularly regarding block I/O.

I am also interested in this topic, currently a bit more from the viewpoint of monitoring these things (as can be seen in my posts here some days ago and on xen-devel today), for example to tell when resources are exhausted or saturated, in order to drive things like deployment and migration decisions based on that data.

I found this paper, with a study and examples: http://www.hpl.hp.com/techreports/2006/HPL-2006-77.ps, which is mainly about net I/O. I didn't check whether the ShareGuard and SEDF-DC code is in the Xen unstable tree yet. I haven't yet found more information specifically on disk I/O.

Maybe some more information can be found on the Xen summit pages: http://xensource.com/xen/xensummit.html.

Finally, I think giving guarantees might be easier than the monitoring and measuring I intend - you can "simply" (much simplified, I have no ready solution, but I think it's easier) measure I/O throughput, and if a domain can't get to its guaranteed level (assuming it's actually trying to use the resources), the guarantee isn't met and measures need to be taken - other domains need to be scheduled down, or a migration to more powerful hardware is needed.

Henning
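To make the check Henning describes a bit more concrete, here is a minimal Python sketch. It is only a sketch: get_disk_rate() is a hypothetical placeholder that would have to be implemented against whatever counters your setup exposes (xentop output, blkback statistics in dom0, and so on), and the domain names and guarantee figures are invented.

#!/usr/bin/env python
# Minimal sketch of the guarantee check described above: compare each
# domain's measured block I/O rate against its guarantee and flag
# violations.  get_disk_rate() is a placeholder -- wire it up to
# whatever counters your setup exposes (xentop output, blkback
# statistics in dom0, ...).  The guarantee figures are invented.

import time

# guaranteed block I/O throughput per domain, in KB/s (example numbers)
GUARANTEES = {"web01": 5000, "db01": 20000}

def get_disk_rate(domain):
    """Return the measured block I/O rate for `domain` in KB/s."""
    raise NotImplementedError("implement against your monitoring source")

def check_guarantees():
    for domain, guaranteed in GUARANTEES.items():
        measured = get_disk_rate(domain)
        if measured < guaranteed:
            # NB: a real check would also verify that the domain is
            # actually trying to use its share -- an idle domain below
            # its guarantee is not a violation.
            print "%s below guarantee: %d < %d KB/s" % (domain, measured, guaranteed)
            # here: throttle other domains, or trigger a migration

while True:
    check_guarantees()
    time.sleep(10)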
On Thu, 2006-11-30 at 21:18 +0100, Henning Sprang wrote:
> Finally, I think giving guarantees might be easier than the monitoring
> and measuring I intend - you can "simply" (much simplified, I have no
> ready solution but think it's easier) measure I/O throughput, and if a
> domain can't get to its guaranteed level (assuming it's actually
> trying to use the resources), the guarantee isn't met and measures
> need to be taken - other domains need to be scheduled down, or a
> migration to more powerful hardware is needed.
>
> Henning

I've been playing with this too. The scenario you describe works well provided there is extra room in RAM and underutilized CPU credit from other guests.

I saw some work trying to make strides in open source disk QoS, but I'm not entirely sure how far it's come. It seems to me the only (responsible) way to manage this is with SAN/NAS storage, where the bulk of the I/O is handled elsewhere. Depending on how you access the SAN/NAS, conventional rate limiting can come into play.

If you virtualize a SAN (effectively making it several), then the fine tuning would be on the credit scheduling of the guest's corresponding attached storage, not on the guest itself. The two would have to work together. However, doing this you (mildly) shoot your NAS in the foot from the word go and horribly complicate things.

So many variables come into play, such as the speed of the disks (SCSI or SATA, whether the NAS has a SODIMM cache, a caching RAID controller, etc.), that it's nearly impossible to throw one "good" solution at it.

Henning, do you have a rough diagram of what you were thinking?

Best,
-Tim
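To make "conventional rate limiting" and "scheduled down" a bit more concrete, here is a rough sketch of two knobs dom0 already offers, assuming the Xen 3 credit scheduler (xm sched-credit) and the Linux traffic-control token-bucket filter (tc ... tbf). The domain name, vif name and numbers are only examples, and note that capping CPU or network only indirectly affects block I/O.

#!/usr/bin/env python
# Rough sketch of two knobs dom0 already offers for "scheduling a
# domain down": capping its CPU credit and rate-limiting its backend
# vif with a token-bucket filter.  Assumes the Xen 3 credit scheduler
# (xm sched-credit) and Linux tc; the domain name, vif name and numbers
# are only examples.

import os

def cap_cpu(domain, cap_percent):
    # cap is a hard limit in percent of one physical CPU (0 = no cap)
    os.system("xm sched-credit -d %s -c %d" % (domain, cap_percent))

def rate_limit_vif(vif, rate_mbit):
    # If the domain reaches its SAN/NAS storage over iSCSI or NFS
    # through this vif, limiting the vif also limits its storage traffic.
    os.system("tc qdisc replace dev %s root tbf rate %dmbit burst 32kbit latency 50ms"
              % (vif, rate_mbit))

# example: throttle a noisy neighbour so another domain can meet its guarantee
cap_cpu("web01", 30)          # at most 30% of one CPU
rate_limit_vif("vif5.0", 50)  # roughly 50 Mbit/s towards the NAS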
Tim Post wrote:
> Henning, do you have a rough diagram of what you were thinking?

I had some similar thoughts.

For disk I/O I also came to the conclusion that, quite differently from network I/O (where it's more or less some math - though don't forget it's influenced by the CPU), it's hard to calculate the maximum possible throughput, and so it's hard to really know about saturation for sure. The only possible thing would be to really measure the maximum output for each (hardware/software) configuration.

This is really about knowing about saturation - for load balancing, a similar approach to the one described in the XenMon papers might be possible. This sounds very interesting, but it is currently out of reach for my time resources.

Henning
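A minimal sketch of the per-configuration calibration Henning suggests, using nothing but plain Python and a sequential write test. Real benchmarks such as bonnie++ or iozone will give far better numbers; this only illustrates recording a rough ceiling for one configuration.

#!/usr/bin/env python
# Very rough calibration sketch: measure sequential write throughput
# once per (hardware/software) configuration and record it as the
# ceiling against which saturation is judged.  It ignores read caches,
# seeks and mixed workloads, so treat the result as an upper bound only.
# Point the path at the storage you actually want to calibrate.

import os, time

def sequential_write_mb_per_s(path, size_mb=256):
    block = "x" * (1024 * 1024)          # 1 MB of data
    start = time.time()
    f = open(path, "w")
    for i in range(size_mb):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                 # force the data out of the page cache
    f.close()
    elapsed = time.time() - start
    os.unlink(path)
    return size_mb / elapsed

if __name__ == "__main__":
    print "ceiling: %.1f MB/s" % sequential_write_mb_per_s("/var/tmp/calibration.dat")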
On Fri, 2006-12-01 at 17:16 +0100, Henning Sprang wrote:
> For disk I/O I also came to the conclusion that, quite differently
> from network I/O, where it's more or less some math, and don't forget
> it's influenced by the CPU, it's hard to calculate the maximum possible,
> and so it's hard to really know about saturation for sure.

I think, like any other kind of scheduler, one would have to be able to assign some kind of weight to each, but in this case it would be reversed from what you would expect: faster drives would get a lower weight, while slower drives would get a heavier weight. Ideally any SAN or NAS could contain both. This is surely one of those cases where talking about it is *much* easier than accomplishing it :)

> The only possible thing would be to really measure the maximum output for
> each (hardware/software) configuration.

Yes, measured:expected ratios could help auto-determine the weights I mentioned above based on the drive type. What's going to be hard to detect are caching controllers, PCI/SODIMM cache cards (on board or otherwise), etc. However, discover is usually pretty good at picking those up if the latest data is installed. Count on someone resurrecting an RLL controller and drive from their closet just to see what happens. Possible, yes; easy, hardly.

> This is really about knowing about saturation - for load balancing a
> similar approach to the one described in the XenMon papers might be possible.
> This sounds very interesting, but this is currently out of reach for my
> time resources.

I don't think *anyone* has time to tackle this unless they were paid to work on it full time, or could donate many, many hours to an open source initiative. You'd also need some rather expensive NAS/SAN sandboxes. I can tell you how it *should* work, and hope I haven't annoyed you to the point that you don't want to build it anymore. I'm an admin, not a programmer - I do much better adapting things other people make and suggesting improvements. Bash -> good. C -> not so good, I get by.

I do think we'll reach a point where it becomes a must-have, and most (GNU/GPL) things worthwhile are created out of necessity. My hope is that IBM or AMD picks this up and runs with it; they have much more money to throw at it than we do :) 10-15K would get it done if spent wisely (remember, you have about a dozen different RAID drivers to patch, just to start), and then there are the generic IDE and SATA drivers if you go that route.

It also dawned on me that inode allocation in ext3, JFS, OCFS2 as well as GFS could be scheduled and delayed a few milliseconds. That would also accomplish this, provided you sent dirty pages somewhere other than the volume being written... but now you're going and modifying file systems. I think this would be easier in cluster file systems such as GFS or OCFS2 - especially OCFS2, where the voting mechanism may lend itself well to it. I have yet to play with CLVM enough to comment on it. I have not kept up on XenFS and its capabilities; this may be in the works?

Now make the above tolerant of some nitwit hitting the reset button, or a UPS dying under load, etc. ext3 (and other journaling file systems - I'm not picking on ext3) under the best of circumstances is unpredictable when that happens (again, depending on the media type and controller). No matter how you slice it, this is major surgery.

There are just too many ideas that can be thrown into it, and with them come too many egos.
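One possible, speculative reading of the measured:expected idea as code. The device names, baseline figures and the weighting formula are all invented for illustration; no such disk scheduler exists in Xen at this point, so this only shows how such weights might be derived.

#!/usr/bin/env python
# One possible reading of the weight idea above: derive a per-device
# weight from a measured:expected throughput ratio, so that devices
# performing worse get a heavier weight and would be held back more
# aggressively by a hypothetical disk scheduler.  Device names and
# numbers are invented.

DEVICES = {
    # expected (baseline) and measured throughput in MB/s
    "fast-scsi-array": {"expected": 160.0, "measured": 150.0},
    "old-sata-disk":   {"expected": 60.0,  "measured": 35.0},
}

def weights(devices):
    result = {}
    for name, dev in devices.items():
        ratio = dev["measured"] / dev["expected"]   # 1.0 = performing as expected
        result[name] = 1.0 / max(ratio, 0.01)       # slower -> heavier weight
    return result

for name, weight in weights(DEVICES).items():
    print "%-16s weight %.2f" % (name, weight)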
Only a funded corporate project management effort could bring a release of anything useful to fruition any time soon - or perhaps a pack of bored Buddhist programmers. Anyone is, of course, welcome to prove me wrong with a release :D

Best as always,
-Tim

------------
Disclaimer: The above post contains run-on sentences. The author is not responsible should you find yourself cross-eyed, or riddled with sudden unexpected violent urges. Should this happen, please refrain from all interaction with pets, spouses and children until symptoms subside.
Hello,

How could CKRM (http://ckrm.sf.net) relate to this? In one of their presentations (http://ckrm.sourceforge.net/downloads/ckrm-ols04-slides.pdf), they refer to a usage scenario with UML/vserver. Maybe some contribution can derive from this work.

br,
r

> -----Original Message-----
> From: henning.sprang@gmail.com [mailto:henning.sprang@gmail.com] On Behalf Of Henning Sprang
> Sent: Thursday, 30 November 2006 20:18
> To: rbp@netcanvas.com
> Cc: xen-users@lists.xensource.com
> Subject: Re: [Xen-users] Xen resource guarantees