Hi,

I've been thinking about the following: since every domU has its own RAM, I cannot give each domU as much memory as I would like. So I gave them swap, which is LVM-backed. But now each domain that exceeds its memory has to swap pages in and out, even while other domains might not be using theirs.

The idea to solve this is as follows:

1. Give every domU as little memory as possible (16 MB?).
2. Give dom0 a huge, fast swap space (several GB?).
3. Create a "RAMDRIVE" in dom0 backed by that swap.
4. Create the domUs' swap partitions on dom0's ramdrive.

The reasoning is that dom0 could then swap pages to and from the hard drive only as *REALLY* needed, while the domUs would "only" swap to and from a partition that is actually RAM, and as such really fast.

Seems like a little crazy idea, but it might make swapping more sensible. What do you think?

Regards,
Steffen

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
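For concreteness, steps 2-4 could be sketched as a dry-run generator of the dom0-side commands. Everything here is an assumption for illustration (mount point, sizes, the domU disk line); tmpfs is swappable, which is what would let dom0's swap absorb the overflow, but this is a sketch of the proposal, not a tested recipe:

```python
# Sketch of the proposed dom0 setup: a tmpfs "ramdrive" holding one swap
# file per domU. Generates the commands instead of running them, since the
# real thing needs root and a Xen dom0. All names and sizes are hypothetical.

def ramdrive_swap_commands(domus, swap_mb=256, mount_point="/mnt/ramswap"):
    """Return the dom0 commands to back each domU's swap with a tmpfs file."""
    total_mb = swap_mb * len(domus)
    cmds = [f"mount -t tmpfs -o size={total_mb}m tmpfs {mount_point}"]
    for name in domus:
        swap_file = f"{mount_point}/{name}.swap"
        cmds.append(f"dd if=/dev/zero of={swap_file} bs=1M count={swap_mb}")
        cmds.append(f"mkswap {swap_file}")
        # the file would then be exported in the domU config, e.g.:
        #   disk = [..., 'file:%s,hda2,w' % swap_file]
    return cmds

for cmd in ramdrive_swap_commands(["domu1", "domu2"]):
    print(cmd)
```

Because the backing store is tmpfs rather than a fixed ramdisk, dom0's VM system decides which of those pages actually stay resident, which is the point of the idea (and also the source of the objections that follow in the thread).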
Hi Steffen,

> The idea to solve this is as follows:
> 1. Give every domU as little memory as possible (16 MB?).
> 2. Give dom0 a huge, fast swap space (several GB?).
> 3. Create a "RAMDRIVE" in dom0 backed by that swap.
> 4. Create the domUs' swap partitions on dom0's ramdrive.

- You no longer have any file system cache.
- Swapping to a ramdrive means double memory copies and a lot of overhead.
- You may swap dom0 to death, because the memory needed by tmpfs will push other important dom0 processes out to swap.

But feel free to try it and post your results here.

cu
cp
Crazy is a good word for it... but it would be interesting to see the results. I'd use a CF card or USB stick (as large as you can find) to do that with. Sounds rather adventurous, but I think it would prove problematic: the kernel frees swap very quickly after it becomes available, so you're basically preventing all services on the dom-u's from caching idle children, and they'd need to fork for every connection. It would be a significant performance degradation. If you used conventional ramdisks, you'd be able to fry an egg on the northbridge.

Good luck however :)

--Tim

On Thu, 2006-09-07 at 23:26 +0200, SH Solutions wrote:
> The idea to solve this is as follows:
>
> 1. Give every domU as little memory as possible (16 MB?).
> 2. Give dom0 a huge, fast swap space (several GB?).
> 3. Create a "RAMDRIVE" in dom0 backed by that swap.
> 4. Create the domUs' swap partitions on dom0's ramdrive.
Hi,

> I'd use a CF card or USB stick (as large as you can find) to do that
> with. Sounds rather adventurous, but I think it would prove problematic.

Flash memory is used for such purposes by Windows Vista with great success. Maybe I will give that a try soon.

Regards,
Steffen
On Friday 08 September 2006 18:13, Steffen Heil wrote:
> Flash memory is used for such purposes by Windows Vista with great
> success. Maybe I will give that a try soon.

Flash memory (as used in CF cards and USB sticks) has poor write performance; good luck finding anything above 30 MB/sec, while hard drives easily achieve 60 MB/sec. Furthermore, flash memory has a limited lifetime in terms of erase/write cycles, usually in the range of 10,000 - 100,000 cycles.

So using flash as swap is a quite stupid idea, unless you're OK with the degraded performance and the cost of buying a new device every few weeks.

/Ernst
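Ernst's "new device every few weeks" claim can be sanity-checked with back-of-the-envelope arithmetic: under ideal wear leveling, total writable data is capacity times erase/write cycles. The figures below (a 4 GB stick, 30 MB/s sustained writes) are illustrative assumptions, not measurements:

```python
# Rough flash wear-out estimate: with perfect wear leveling, the device can
# absorb (capacity x cycles) of writes before cells wear out. Real devices
# of the era had far worse wear leveling, so treat these as upper bounds.

def flash_lifetime_days(capacity_gb, cycles, write_mb_per_s):
    total_mb = capacity_gb * 1024 * cycles    # total MB writable before wear-out
    seconds = total_mb / write_mb_per_s       # time to write that much at full rate
    return seconds / 86400

for cycles in (10_000, 100_000):
    days = flash_lifetime_days(4, cycles, 30)
    print(f"{cycles} cycles: ~{days:.0f} days of continuous swapping")
```

At the low end of the cycle range this comes out to roughly two weeks of continuous swap traffic, which matches the "every few weeks" figure in the post.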
First off, let me say that I think Virtuozzo-style RAM oversubscription is a bad idea.

That said, if you wanted to implement Virtuozzo-style oversubscription in Xen, clearly the way to do it is with the balloon driver. With no code changes to Xen, you can do it like this: boot each domain serially with 512M RAM or whatever you want the max RAM to be, then use xm mem-set to bring it down to 32M or whatever you want the minimum to be, then boot the next domain and balloon it down, etc. You might be able to get the same effect by booting everything with memory=32 in the xm config file and a boot option of maxmem=512M, but I'm not as sure about that.

Next, somehow detect which domains need more RAM, and xm mem-set them up until you are utilizing all available RAM (maybe iostat on the swap partition? something like that?). I still think it's a bad idea, but it could be done with nothing more than a rewrite of your xendomains startup script and some sort of monitoring process to tweak the balloons.

The problems with this setup are unavoidable in a shared-RAM setup. The thing is, UNIX is designed to eat all available RAM. If you change this, performance will degrade. I guess if you had a nice, fast disk system, and your balloon script was only set up to take RAM from systems that had enough 'free' or 'buff' RAM to satisfy the request, it might work okay. (RAM used as write-through cache can be freed almost immediately, whereas RAM used to store other inactive memory pages, or write-back cache, must be flushed to disk before freeing, a slow process. This would have worked much better back in the day when everyone used write-through; modern ordered-metadata-write filesystems operate more like write-back caches, so this won't work as well anymore.)

The other problem is one of business model: if you give heavy users better service than light users, and you charge the same for both, then in a rational market all the light users would switch to a provider where they didn't have to subsidize the heavy users. You would be stuck with only the heavy users, with nobody left to subsidize the system.

On Thu, 7 Sep 2006, SH Solutions wrote:
> The idea to solve this is as follows:
>
> 1. Give every domU as little memory as possible (16 MB?).
> 2. Give dom0 a huge, fast swap space (several GB?).
> 3. Create a "RAMDRIVE" in dom0 backed by that swap.
> 4. Create the domUs' swap partitions on dom0's ramdrive.
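The monitoring process Luke describes could be sketched like this. `xm mem-set` is the real command; the policy itself (thresholds, step size, how swap activity is measured) is entirely a hypothetical illustration:

```python
# Sketch of a balloon-adjustment policy: grant a domU more memory only when
# it is visibly swapping and the host still has free RAM. The numbers and
# the swap-activity metric are assumptions, not a tuned policy.

import subprocess

def mem_target(current_mb, swapins_per_s, host_free_mb,
               step_mb=32, max_mb=512, swap_threshold=10):
    """Return a new memory target for one domU (unchanged if no pressure)."""
    if swapins_per_s <= swap_threshold:
        return current_mb                          # not under memory pressure
    grant = min(step_mb, host_free_mb, max_mb - current_mb)
    return current_mb + max(grant, 0)              # never shrink, never exceed cap

def balloon(domain, current_mb, swapins_per_s, host_free_mb):
    """Apply the policy to one domain via xm mem-set (requires a Xen dom0)."""
    target = mem_target(current_mb, swapins_per_s, host_free_mb)
    if target != current_mb:
        subprocess.run(["xm", "mem-set", domain, str(target)], check=True)
    return target
```

Keeping the policy in a pure function separate from the `xm` call makes it easy to reason about the failure mode Luke mentions: when `host_free_mb` hits zero, domains under pressure simply stay where they are.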
On Fri, 8 Sep 2006, Luke Crawford wrote:
> First off, let me say that I think Virtuozzo-style RAM oversubscription
> is a bad idea.

<snip> I'm not going to argue; I build all my Xen boxes with lots of RAM... but I made a similar proposal in response to a similar thread a while back, about using tmpfs and sparse swap files in dom0:

http://lists.xensource.com/archives/html/xen-users/2006-05/msg00961.html

> The problems with this setup are unavoidable in a shared-RAM setup. The
> thing is, UNIX is designed to eat all available RAM.

Yes, but UNIX knows the difference between RAM and swap: it will _not_ eat all available swap unless it has to, while it _will_ make use of whatever "real" memory it has available.

> The other problem is one of business model: if you give heavy users
> better service than light users, and you charge the same for both, then
> in a rational market all the light users would switch to a provider
> where they didn't have to subsidize the heavy users.

Yeah, but if there were a real business model behind this, there would be enough RAM. The challenge is interesting and worth thinking about, but it should not exist in a real business world... But that isn't to say people won't do it. There are idiots and a*holes all over the planet who will take your money and give you trash or trash service for it.

-Tom
> Yeah, but if there were a real business model behind this, there would
> be enough RAM. The challenge is interesting and worth thinking about,
> but it should not exist in a real business world...

I think a real business model for this is possible in theory, but perhaps impractical in practice. If the provider could charge per megabyte-minute, and assuming they had a box many times larger than what any one customer needed, and assuming all customers had peaky usage with different peaks, then the provider could squeeze slightly more out of its hardware with a system like this. Of course, at the cost of reliability: if too many other customers ran over their normal utilization, a customer with normal utilization couldn't burst.

This could work well for shell servers for guys like me who occasionally want to compile a kernel but usually don't use anything heavier than pine, or for people like one of my dedicated server customers who runs a normally almost-idle blog but occasionally gets linked by digg.

In fact, that might be more realistic than I thought: what if, instead of making the RAM allocation automatic, you had a simple web/billing interface to the balloon driver? Set it up so a customer can buy more RAM for a period of several days, up to the amount of RAM you have uncommitted. I bet most virtual server providers usually have at least half a server free; with live migration and this billing-to-balloon-driver interface, they could be making money off that, and allowing users with temporarily high resource needs to get what they want without committing to a long-term plan.
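The "charge per megabyte-minute" idea is just the integral of allocated RAM over time. A minimal sketch, with entirely made-up usage figures, shows how a bursty customer's bill would accrue:

```python
# Megabyte-minute billing in miniature: the bill is the sum of
# (minutes held) x (MB allocated) over each allocation interval.
# The sample usage pattern below is hypothetical.

def megabyte_minutes(samples):
    """samples: list of (minutes_held, allocated_mb) intervals."""
    return sum(minutes * mb for minutes, mb in samples)

# A mostly idle shell server that balloons up for a one-hour kernel build:
usage = [(23 * 60, 64),    # 23 hours at the 64 MB baseline
         (60, 512)]        # 1 hour ballooned to 512 MB
print(megabyte_minutes(usage))   # 23*60*64 + 60*512 = 119040
```

Under this scheme the burst hour costs eight times the baseline rate for that hour, which is exactly the incentive structure the post is after: light users pay little, bursts pay for themselves.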
On Friday 08 September 2006 11:58 am, Ernst Bachmann wrote:
> So using flash as swap is a quite stupid idea, unless you're OK with the
> degraded performance and the cost of buying a new device every few weeks.

Right, I think the "great success" is only "great media buzz". Hopefully no serious server will use that 'feature'.

OTOH, it might be a neat trick to reduce startup times, not to 'expand memory'.

-- Javier
On Friday 08 September 2006 12:28 pm, Luke Crawford wrote:
> The problems with this setup are unavoidable in a shared-RAM setup. The
> thing is, UNIX is designed to eat all available RAM. If you change this,
> performance will degrade.

I think this is one of the goals of XenFS. (How's that going? Any rough date estimates?) That "eat all available RAM" policy is driven in part (a big part, I think) by the filesystem code. If it had tight communication with the hypervisor, it might store the file cache in hypervisor (or dom0? I'm not sure) memory, keeping the memory requirements of each domU as low as possible.

In the meantime, yes, the balloon driver should be the best bet to reallocate memory from one domU to another.

-- Javier
Remember, Windows utilizes memory quite a bit differently :)

Again, worth the effort, and the results should be interesting... but I wouldn't attempt this on anything in production (Linux-based, anyway).

Good luck!! :)

- Tim

On Fri, 2006-09-08 at 18:13 +0200, Steffen Heil wrote:
> Flash memory is used for such purposes by Windows Vista with great
> success. Maybe I will give that a try soon.
One of the biggest advantages of using Xen is that malloc()'ing processes that need to spawn children are able to do so in cache. This gives the dom-u the performance that a non-virtualized server would enjoy. By purposefully having them skid to disk, no matter how fast that disk is, you're telling your kernel to release the "virtual" memory immediately, as it is file-based and treated just like dentry data. SQL, web, email: all services will need to fork upon every connection. This not only slows everything down, it shortens the life of your hardware, makes everything considerably warmer, and really opens your servers to denial-of-service attacks.

You also risk DB corruption (not to mention inode corruption [are you using ext3? I hope not, or you're looking to start grepping for your data using strings you hope exist in the files you lost]). Just wait until a dom-u is being hammered and dom-0 experiences a disorderly shutdown; hope you've polished up on your regex to find your data :)

So you're, in essence, shooting your services in the foot no matter how you go about it, unless you create your own swap system and teach your kernel the difference. Why not just have dom-0 look at slabs, loads, etc. on the dom-u's and balloon as needed, or set up a simple Xen-virtualized OpenSSI cluster? Why shoot your OS in the foot intentionally when other means exist to accomplish what you want to do? I just don't get it... All you're doing is not only retarding Xen, but also your guest OSes and their services... for what purpose?

--Tim

On Fri, 2006-09-08 at 10:47 -0700, Tom Brown wrote:
> I'm not going to argue; I build all my Xen boxes with lots of RAM... but
> I made a similar proposal in response to a similar thread a while back,
> about using tmpfs and sparse swap files in dom0:
>
> http://lists.xensource.com/archives/html/xen-users/2006-05/msg00961.html
:Confused:

If you want Virtuozzo-like RAM bursting, then why not use OpenVZ? Have a machine with Xen and one with OpenVZ. Host the friendly customers on OpenVZ, with lots of burst, and the not-so-friendly customers on Xen, with rigid RAM and swap. This is what we recommend to our customers.

Making something into what it was not originally meant to be is fun. But if you have real customers who are paying you real money, I would say that's pure madness.

--
:: Lxhelp :: lxhelp.at.lxlabs.com :: http://lxlabs.com ::
I think we are in agreement: if you want to use burst RAM in production (again, I think it's a bad idea), use a product that is designed to enable burst RAM usage. Xen can be used to make quick changes to hard RAM caps using xm mem-set, but it was not designed for automatic burst RAM.

On Sun, 10 Sep 2006, Ligesh wrote:
> If you want Virtuozzo-like RAM bursting, then why not use OpenVZ? Have a
> machine with Xen and one with OpenVZ. Host the friendly customers on
> OpenVZ, with lots of burst, and the not-so-friendly customers on Xen,
> with rigid RAM and swap.
Hi,

> One of the biggest advantages of using Xen is that malloc()'ing
> processes that need to spawn children are able to do so in cache. This
> gives the dom-u the performance that a non-virtualized server would
> enjoy.

Could you explain this in more detail, please?

> SQL, web, email: all services will need to fork upon every connection.

No, good current software doesn't. My SQL and web servers are threaded, so there is no need to fork; I'm still searching for a way to do that for email...

> You also risk DB corruption (not to mention inode corruption [are you
> using ext3? I hope not...]). Just wait until a dom-u is being hammered
> and dom-0 experiences a disorderly shutdown.

I don't understand that at all. First, if ext3 (which DOES have journaling) loses any data on an unclean shutdown, then it is faulty. And yes, I use it on several machines. Secondly, I think that largely depends on how you implement the domU partitions. Mine are LVM...

> Why shoot your OS in the foot intentionally when other means exist to
> accomplish what you want to do? I just don't get it... for what purpose?

Hey, come on. I wrote "crazy idea" myself, and I definitely did not plan to take this to production or customer domains... It was an idea, and I thought maybe it was worth some discussion (as I still do).

Remember that the main idea here has NOT been to do something like "RAM bursts" (if I understand that correctly as automatic changes of domU memory), but to give dom0 a better way to control disk caching, instead of relying on every single domain to keep its own cache. The idea arose from a situation where I had the same (READ-ONLY) partition mounted on several domains, which ALL had a lot of that data in cache memory... (I'm still working on problems with that machine, as I didn't find a way to stop that.)

Regards,
Steffen
On Sun, 10 Sep 2006, Steffen Heil wrote:
> I don't understand that at all. First, if ext3 (which DOES have
> journaling) loses any data on an unclean shutdown, then it is faulty.
> And yes, I use it on several machines. Secondly, I think that largely
> depends on how you implement the domU partitions. Mine are LVM...

Unless you are writing to the journal synchronously (which nobody does; that makes it just as slow as mounting the filesystem synchronously, or even slower, as you have two steps), the journal only protects filesystem integrity, not the integrity of files that are open. Essentially, it is a way of ordering metadata writes (just like softupdates on FreeBSD).
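Luke's distinction in miniature: the journal keeps the filesystem's metadata consistent, but data an application has written without flushing can still be lost in a crash. Making file contents durable is the application's job, via `fsync()`; a small sketch (the path is illustrative):

```python
# The journal orders metadata writes; file *data* sitting in kernel buffers
# is not covered. An application that needs its data on disk must flush and
# fsync explicitly, as below.

import os

def durable_write(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())   # force the kernel to push it to the disk
    # Without the fsync, a crash here could leave the file empty or stale,
    # even though the filesystem (metadata) itself stays consistent.

durable_write("/tmp/journal-demo.txt", b"committed")
```

This is why databases fsync their own write-ahead logs rather than trusting the filesystem journal, and why swapping to a RAM-backed device adds yet another layer where in-flight data can vanish.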
Steffen,

On Sun, 2006-09-10 at 20:33 +0200, Steffen Heil wrote:
> > One of the biggest advantages of using Xen is that malloc()'ing
> > processes that need to spawn children are able to do so in cache.
>
> Could you explain this in more detail, please?

When you start any daemon that accepts connections, that daemon reads a configuration file and learns how many idle servers it should launch. It then malloc()s and tries to take enough contiguous space in cache to do that. This avoids a child having to fork for every connection.

> > SQL, web, email: all services will need to fork upon every connection.
>
> No, good current software doesn't. My SQL and web servers are threaded,
> so there is no need to fork; I'm still searching for a way to do that
> for email...

Right, and they thread children in contiguous blocks of cache, as you instruct via your configurations. If you reduce the available RAM and intentionally send them to disk, they won't find contiguous blocks and won't cache children. Therefore, they must not only fork, but fork to disk when a connection comes in.

> I don't understand that at all. First, if ext3 (which DOES have
> journaling) loses any data on an unclean shutdown, then it is faulty.
> And yes, I use it on several machines.

The problem is that by using swap this way, you're using a type of memory that the kernel frees immediately, so if an interruption happens at the wrong moment, that data is simply gone. What is also going to happen is a 'clog' in I/O that is going to prevent the inodes from syncing as they normally would. You're putting applications into the swap space that's normally used when the server finds itself under load, and flooding dentry. It has nothing to do with the type or speed of storage; this is a kernel phenomenon on the dom-u itself. You are, in essence, reducing the size of a funnel and clogging the smaller end with bubble gum.

> Hey, come on. I wrote "crazy idea" myself, and I definitely did not plan
> to take this to production or customer domains... It was an idea, and I
> thought maybe it was worth some discussion (as I still do).

Don't take offense at my rather dry personality :) Remember, it's hard to convey tone of voice and diction through a mailing list. I'm just extremely curious what need (other than just to see if it would work) is fueling the temporary insanity you're experiencing.

> Remember that the main idea here has NOT been to do something like "RAM
> bursts", but to give dom0 a better way to control disk caching, instead
> of relying on every single domain to keep its own cache.

Now things are sounding a little more sane :) The previous explanations made it sound like you were trying to turn Xen into Virtuozzo.

> The idea arose from a situation where I had the same (READ-ONLY)
> partition mounted on several domains, which ALL had a lot of that data
> in cache memory... (I'm still working on problems with that machine, as
> I didn't find a way to stop that.)

Why would you want to stop that? You can adjust how quickly your kernel frees inactive cache rather easily, and tell your daemons not to keep as many idle children in memory by tweaking the maximum number of connections (or iterations) each child can have in its lifetime. If you need more contiguous blocks of cache available for other things, just split your over-malloc()'ing services into separate dom-u's. It sounds like you're putting way too much gray matter into solving what could be a really simple problem :)

HTH

--Tim