Manfred.Herrmann@zipptec.de
2004-Jan-23 17:26 UTC
[Xen-devel] questions about production use
After a couple of months of exploring, testing, and trying to understand the alternatives in the "world of virtualization", I have to decide how to build a production server. I would like to use Xen as the mainframe-quality partitioning system for high-stability, high-performance virtual servers. In addition, it would be a dream to run vserver or UML for low-performance virtual servers within a Xen domain.

1. Is a current release of Xen stable enough for my production server?
2. What do you think about such a system, the pros and cons?
3. Which system is the better candidate to patch for use on Xen: vserver (a lot of Linux capabilities and security-context code) or UML (a lot of ptrace- and mmap-style code)?

Sorry for that monster mail :-)

Manni

xxxxxxxxxxxxxxxxxxxxx

The following technologies/architectures are on my research list, together with my thoughts about the differences.

xen -> pros:
- close to a "mainframe" architecture
- very high speed
- high scalability with very good resource isolation
- high stability (because of relatively low VMM complexity?)
- very high application compatibility
- open source
xen -> cons:
- ? production quality, even with limited features (megaraid arrays)
- ? practical experience with midrange servers (4-8 CPUs, 8-16 GB RAM)
- ? always-"on" developer community (security fixes)
- relatively large amount of code to patch into Linux kernels
- ? fewer resource-sharing/saving features like sparse files (UML)

virtuosso -> pros: (datasheet statements, not verified)
- very high speed
- high stability
- manageability through web applications (complexity/quality?)
- SMP for virtual servers (I can't remember ... up to 16 CPUs)
- up to 64 GB RAM
virtuosso -> cons:
- not open source (I don't know what's going on :-)
- customized Linux kernels (very complex???)
- fault tolerance between virtual servers (shared host OS)
- security between virtual servers (shared host OS)

user-mode-linux -> pros:
- open source
- production quality
- a relatively large user community
- an "official" Linux architecture
- good manageability (CoW files, sparse files, flexible device model)
- security between virtual servers (but shared host OS)
- fault tolerance between virtual machines (but shared host OS)
user-mode-linux -> cons:
- a very high context-switching rate
- relatively low-performance block devices (no "raw" access)
- resource consumption up to 80 percent and more (average ... 40?)
- no SMP for virtual servers
- performance bottlenecks on high-RAM servers
- small host OS patch required (only skas)

vmware -> pros:
- very stable system
- high-quality software for datacenter users
- SMP (for ESX Server)
- installation of different native OSes
- security between virtual machines (but shared host OS)
- fault tolerance between virtual servers (but shared host OS)
vmware -> cons:
- resource consumption up to 50 percent and more? (average ... 30?)
- special drivers required for high performance?
- very high price (GSX and ESX Server???)
- closed source

vserver -> pros:
- production quality
- a relatively large user community
- very high performance
- good manageability, many tools for production use
vserver -> cons:
- open source
- fault tolerance between virtual servers (shared host OS)
- security between virtual servers (shared host OS)
- relatively large amount of code to patch into Linux kernels

IBM mainframe Linux ... mainframe too expensive for me :-)
A couple of comments...

> The following technologies/architectures are on my research list,
> together with my thoughts about the differences.
>
> xen -> pros:
> - close to a "mainframe" architecture
> - very high speed
> - high scalability with very good resource isolation
> - high stability (because of relatively low VMM complexity?)
> - very high application compatibility
> - open source
> xen -> cons:
> - ? production quality, even with limited features (megaraid arrays)

I think we're pretty stable -- no one has complained about problems for a long time (other than developers doing whacky things with OSes other than the stock Linux). The supported hardware list is growing steadily.

> - ? practical experience with midrange servers (4-8 CPUs, 8-16 GB RAM)

The biggest machine we've run it on is a 4-CPU box (actually a 2x hyperthreaded Xeon). We should scale very nicely with the number of CPUs. However, we only have support for 4 GB of physical RAM. We probably won't fix this until the x86_64 port, as PAE36 is such a hack on current x86.

> - ? always-"on" developer community (security fixes)

We track the main 2.4 kernel pretty closely.

> - relatively large amount of code to patch into Linux kernels

Most of it is in arch/xeno, so it tends not to interfere with other patches.

> - ? fewer resource-sharing/saving features like sparse files (UML)

CoW block devices are under development (there's a rough sketch of the idea below).

> virtuosso -> pros: (datasheet statements, not verified)
> - very high speed
> - high stability
> - manageability through web applications (complexity/quality?)
> - SMP for virtual servers (I can't remember ... up to 16 CPUs)

We'll get around to this at some point...

> - up to 64 GB RAM
> virtuosso -> cons:
> - not open source (I don't know what's going on :-)
> - customized Linux kernels (very complex???)
> - fault tolerance between virtual servers (shared host OS)
> - security between virtual servers (shared host OS)
>
> user-mode-linux -> pros:
> - open source
> - production quality
> - a relatively large user community
> - an "official" Linux architecture
> - good manageability (CoW files, sparse files, flexible device model)
> - security between virtual servers (but shared host OS)
> - fault tolerance between virtual machines (but shared host OS)
> user-mode-linux -> cons:
> - a very high context-switching rate
> - relatively low-performance block devices (no "raw" access)
> - resource consumption up to 80 percent and more (average ... 40?)
> - no SMP for virtual servers
> - performance bottlenecks on high-RAM servers
> - small host OS patch required (only skas)
>
> vmware -> pros:
> - very stable system
> - high-quality software for datacenter users
> - SMP (for ESX Server)
> - installation of different native OSes
> - security between virtual machines (but shared host OS)
> - fault tolerance between virtual servers (but shared host OS)
> vmware -> cons:
> - resource consumption up to 50 percent and more? (average ... 30?)
> - special drivers required for high performance?

Networking really sucks without the vxnet driver.

> - very high price (GSX and ESX Server???)
> - closed source
>
> vserver -> pros:
> - production quality
> - a relatively large user community
> - very high performance
> - good manageability, many tools for production use
> vserver -> cons:
> - open source
> - fault tolerance between virtual servers (shared host OS)
> - security between virtual servers (shared host OS)
> - relatively large amount of code to patch into Linux kernels

Ian
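For readers who haven't met the technique: a copy-on-write block device layers a private, initially empty overlay on top of a shared read-only base image, so many guests can share one image and only their own modifications consume space. A minimal sketch in Python (illustration only, with made-up names; this is not Xen's implementation):

    BLOCK = 4096

    class CowDevice:
        def __init__(self, base_path):
            self.base = open(base_path, "rb")  # shared, read-only base image
            self.overlay = {}                  # block number -> private copy

        def read_block(self, n):
            if n in self.overlay:              # this guest wrote the block
                return self.overlay[n]
            self.base.seek(n * BLOCK)          # otherwise fall through to base
            return self.base.read(BLOCK)

        def write_block(self, n, data):
            assert len(data) == BLOCK
            self.overlay[n] = data             # the base image is never touched

On disk the overlay would be a sparse file or an extent in a disk pool, so unwritten blocks cost nothing, much like UML's sparse root filesystems.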
On Fri, Jan 23, 2004 at 06:26:15PM +0100, Manfred.Herrmann@zipptec.de wrote:
> After a couple of months of exploring, testing, and trying to understand
> the alternatives in the "world of virtualization", I have to decide
> how to build a production server.

We currently have hundreds of UMLs in production use; however, we keep a close eye on other technologies. We went through exactly the same exercise as you, but two years ago, so there are more choices now (like Xen!). If I were choosing today then I'd probably choose Xen - we may well swap in the future, but we've invested hugely in managing our current setup.

> xen -> pros:
> - close to a "mainframe" architecture
> - very high speed
> - high scalability with very good resource isolation
> - high stability (because of relatively low VMM complexity?)
> - very high application compatibility
> - open source

The most important one as far as I am concerned is IO scheduling. That is one thing we do notice with UML - one UML doing heavy disk IO can really affect the other UMLs. UML (and Linux) is good at managing CPU, memory, and disk space, but not IO. It is likely that 2.6 + the CBQ IO scheduler would fix this.

> user-mode-linux -> cons:
> - a very high context-switching rate

You are right that UML does impact performance. However, our experience is that UMLs run out of RAM before they run out of performance. RAM is relatively more expensive than CPU when you are trying to put 32 UMLs on a machine, each with a reasonable amount of RAM, whereas a 2.4 GHz P4 is masses of power for your average server!

> - relatively low-performance block devices (no "raw" access)

You can map UML partitions directly to block devices on the host if you want. I've never tried this though!

> - resource consumption up to 80 percent and more (average ... 40?)
> - no SMP for virtual servers

You can run SMP UMLs. Not sure exactly how well it works ;-)

> - performance bottlenecks on high-RAM servers

I'm not aware of this. Do you mean the (usual) HighMem performance problems on the host? Or are you talking about UML itself (which only goes to 512 MB at the moment, I think)?

[snip]

> IBM mainframe Linux ... mainframe too expensive for me :-)

Yes, that's the conclusion we came to ;-)

BTW, you missed FreeVSD and derivatives. Also Plex86.

--
Nick Craig-Wood
ncw1@axis.demon.co.uk
On Fri, Jan 23, 2004 at 05:37:38PM +0000, Ian Pratt wrote:
> > - ? fewer resource-sharing/saving features like sparse files (UML)
>
> CoW block devices are under development.

If Xen could run using files, that would be most interesting to us - it would mean the minimum change to our current setup.

Maybe these could be served from Domain0 in some efficient fashion, I don't know... I guess you could do it using nbd, but a more efficient mechanism would be nice. Something like iSCSI, but over a Xen transport.

The whole business of how you would manage a production host with (say) 32 virtual servers, each with different and changing demands for disk space (e.g. virtual server A needs another 2 GB), is the part of Xen that worries me. Maybe all this has been implemented since 1.0 and I missed it on the mailing list ;-)

--
Nick Craig-Wood
ncw1@axis.demon.co.uk
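To make the idea concrete: serving a file-backed disk image as a block device needs only a tiny request/response protocol, whatever transport carries it. A toy sketch in Python (the protocol is invented for illustration; it is not nbd's wire format or any real Xen interface):

    import os

    def serve(image_path, requests):
        """Handle block requests of the form ("read", offset, length) or
        ("write", offset, data) against a file-backed disk image."""
        fd = os.open(image_path, os.O_RDWR)
        replies = []
        for req in requests:
            if req[0] == "read":
                _, offset, length = req
                os.lseek(fd, offset, os.SEEK_SET)
                replies.append(os.read(fd, length))  # answer reads with data
            elif req[0] == "write":
                _, offset, data = req
                os.lseek(fd, offset, os.SEEK_SET)
                os.write(fd, data)
                replies.append(b"ok")                # acknowledge writes
        os.close(fd)
        return replies

Presumably the attraction of a Xen-native transport over something like iSCSI is skipping the network stack and passing such requests between domains directly.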
> If Xen could run using files, that would be most interesting to us - it
> would mean the minimum change to our current setup.

Currently, it's not possible to use a file as a block device for a guest domain. However, the new virtual disk management in 1.2 allows file-based disk images to be imported to Xen virtual disks and vice versa.

Virtual disks are allocated within a pool of free space made up of devices and partitions on your system that you have set aside for the purpose. You can extend this free pool when you want. Virtual disks can also be enlarged by allocating more space from the free pool. (Currently, they can't be enlarged whilst mounted by a domain and have that domain see the changes, but that wouldn't be hard to implement.)

> Maybe these could be served from Domain0 in some efficient fashion, I
> don't know... I guess you could do it using nbd, but a more
> efficient mechanism would be nice. Something like iSCSI, but over a
> Xen transport.

Right now, you probably could serve them from a domain using something like NBD, but that won't be necessary once the next-gen IO stuff is in place. That will allow lots of cool things, including efficiently serving out files as block devices AND having them appear as ordinary Xen virtual block devices to guests, so your guests don't have to run any weird network protocols and can easily use them as their root file system. There was a thread related to this a few days ago that you might also find interesting.

HTH,
Mark
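As a rough illustration of the pool model described above (all names are made up; this is not the actual interface of the 1.2 tools): a virtual disk is just a list of extents carved out of a shared free pool, and both extending the pool and growing a disk are simple list operations:

    class FreePool:
        def __init__(self):
            self.extents = []              # (device, start, length) in sectors

        def add_partition(self, device, length):
            self.extents.append((device, 0, length))  # extend pool at any time

        def allocate(self, size):
            for i, (dev, start, length) in enumerate(self.extents):
                if length >= size:                     # first fit
                    self.extents[i] = (dev, start + size, length - size)
                    return [(dev, start, size)]
            raise RuntimeError("free pool exhausted")

    class VirtualDisk:
        def __init__(self, pool, size):
            self.extents = pool.allocate(size)     # carve the initial space

        def grow(self, pool, extra):
            self.extents += pool.allocate(extra)   # enlarging = more extents

Nick's "virtual server A needs another 2 GB" case is then just a grow() call, provided the pool still has free extents.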
On Fri, 2004-01-23 at 12:26, Manfred.Herrmann@zipptec.de wrote:
> The following technologies/architectures are on my research list,
> together with my thoughts about the differences.
>
> xen -> pros:
> - close to a "mainframe" architecture

<snip>

> IBM mainframe Linux ... mainframe too expensive for me :-)

Comparing Xen to an IBM zSeries mainframe is not really fair. The IBM boxes are much more than just virtualization and resource partitioning; there's a whole support structure around these things. There's a reason they boast five nines of reliability -- because it actually happens. Try that with a single Intel machine and you'll find that it's fairly difficult, if not impossible. It's not always that easy, though.

With that said, if you directly compare a zSeries mainframe to a dual Xeon, you'll find that the dual Xeon is much more capable of handling the workloads you are most familiar with. Kernel compilation times are measured in a few minutes on the Xen guests, whereas our z800 takes on the order of 30 minutes. That's just one example; the top end of a fully loaded z800 (or even z900/z990) is far too expensive if you just compare performance expectations side by side. Our z800 can scale very well considering that it is only one machine, but for the cost of scaling it up you could have purchased a few hundred Intel machines and done a much better job, or more realistically a few dozen to keep support costs lower.

All in all, we've been very happy playing with Xen here. Our z800 is nice, but I'm not sure that we are the right kind of people to use it effectively. It always seems we push it outside of its designed specification.

Stephen
On Fri, Jan 23, 2004 at 08:26:15PM -0000, Williamson, Mark A wrote:
> > If Xen could run using files, that would be most interesting to us - it
> > would mean the minimum change to our current setup.
>
> Currently, it's not possible to use a file as a block device for a guest
> domain. However, the new virtual disk management in 1.2 allows
> file-based disk images to be imported to Xen virtual disks and
> vice versa. Virtual disks are allocated within a pool of free space
> made up of devices and partitions on your system that you have set aside
> for the purpose.

Sort of like LVM, but run by Xen? That sounds very interesting.

> You can extend this free pool when you want. Virtual disks can also
> be enlarged by allocating more space from the free pool. (Currently,
> they can't be enlarged whilst mounted by a domain and have that
> domain see the changes, but that wouldn't be hard to implement.)

That sounds like just the job. We'd also want to be able to access all the virtual disks from Domain0 for administrative purposes (backup / transfer to a new host etc.), but I guess that is possible.

> > Maybe these could be served from Domain0 in some efficient fashion, I
> > don't know... I guess you could do it using nbd, but a more
> > efficient mechanism would be nice. Something like iSCSI, but over a
> > Xen transport.
>
> Right now, you probably could serve them from a domain using something
> like NBD, but that won't be necessary once the next-gen IO stuff is in
> place. That will allow lots of cool things, including efficiently serving
> out files as block devices AND having them appear as ordinary Xen virtual
> block devices to guests, so your guests don't have to run any weird
> network protocols and can easily use them as their root file system.

Excellent! How much of this and the above is implemented now? Should I be checking out Xen 1.2 and reading the docs?

Cheers

Nick
--
Nick Craig-Wood
ncw1@axis.demon.co.uk