Luke S Crawford
2009-May-28 21:40 UTC
Re: Distro kernel and 'virtualization server' vs. 'server that sometimes runs virtual instances' rant (was: Re: [Xen-devel] Re: [GIT PULL] Xen APIC hooks (with io_apic_ops))
Dan Magenheimer <dan.magenheimer@oracle.com> writes:

> > I've been selling VPSs using Xen since 2005.
>
> This puts you squarely in the "data center" category I was
> referring to rather than in the "majority of users".  I agree
> you definitely should have a highly-available, crash-resistant
> "dom0".  I was just trying to say that, for many users (and
> you are clearly not one of those), a distro dom0 is
> preferable.  And I added that primarily so that Ingo didn't
> reply with "then keep your stinkin' xen dom0 patches out of
> my kernel and roll your own". :-)

This is the reason for my rant: the 'dedicated virtualization server' role that Xen shines in is something that small companies need, too.  It is not just something that specialized service providers need.  This 'take many ancient, power-hungry servers and run them all on one new server' job is the primary use case I see for Xen, at 5-employee and at 50,000-employee companies alike.  Many companies, perhaps even most, operate their own hardware, even if they use co-location centers rather than their own datacenter, so they face this problem every three years.

I may be incorrect, but I see the developers focusing on the 'desktop virtualization' market that KVM, qemu and virtualbox own.  I think compromising the primary 'dedicated virtualization server' role for that is a bad idea.

> > I'm not going to say memory overcommit is never useful for anyone;
> > but I can say it is never useful for me.  32GiB registered ecc ddr2
> > is around $600.  That's not very many billable hours.
>
> Up to a certain point in each physical machine, RAM is cheap.
> Beyond that, it is very expensive.  In a recent TPC-C disclosure
> which took the crown for lowest cost-per-transaction, an
> HP server required 72GB of RAM, which was 9x8GB DIMMs, which
> were $990 each (total nearly $9000!).  I think very few IT
> shops want to spend several times more on RAM for their server
> than on the motherboard+chipset.

4GB modules are much cheaper; I've got a supermicro board with 16 ram slots and 4GB in each.  It runs fine, and I think the total cost for the ram was around $1400.  Seeing as the thing only has 8 2.3GHz shanghai opteron cores, I'm shorting my customers a little on the CPU, but the ram wasn't ridiculous.

That was mostly an experiment; my new servers have 32GiB for every 8 cores, simply because my total cost comes to around $1200-$1400 each, and while my cost per gigabyte would be slightly lower with 64GiB of ram per server, it's not a dramatic savings for me, and the difference in CPU availability is enough to make it worth it.

My point is that if ram is cheap up to 64GiB per box and you need 128GiB of ram, buy two boxes.  If you need more ram in a virtual machine than you can fit in a physical one, then unless I am very confused, virtualization is not the correct tool for the job.

That 'transcendent memory' link you sent does look interesting, in that it's a safer way to use memory overcommit: the pathological case becomes giving heavy users more resources at the expense of light users, rather than crashing a user who asked for memory they thought they had.  But that same tradeoff is why I moved away from FreeBSD jails to Xen; I don't think it's possible to get much more efficient in terms of memory usage than a jail-like virtualization system with something like unionfs, but then you are back to giving your light users poor service because their disk cache has been flushed and re-used by your heavy users.
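
If anyone wants to check the cost-per-gigabyte arithmetic, here's a throwaway sketch using the figures quoted in this thread.  The prices are ballpark and move every month, so treat it as illustrative only:

    #!/usr/bin/env python
    # rough $/GB for the memory configurations mentioned above
    configs = [
        ("9 x 8GB DIMMs (HP TPC-C box)",   9 * 8,  9 * 990),
        ("16 x 4GB DIMMs (my supermicro)", 16 * 4, 1400),
        ("32GiB registered ECC DDR2",      32,     600),
    ]
    for name, gigs, dollars in configs:
        print("%-32s %3dGB  $%5d  ~$%3d/GB" % (name, gigs, dollars, dollars / gigs))

The point of the exercise is just that the big DIMMs cost roughly six times as much per gigabyte, which is why 'buy two boxes' wins for me long before memory overcommit does.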