I have been using Xen for a number of years personally and for over a year in production here at work. Our network is growing and I am rethinking the way we have things laid out. We will also be implementing a few new HA setups for services like mail, MySQL, OpenLDAP, etc. I would like some opinions from the list if you don't mind.

Currently we have the following machines and configurations:

xen-test: Dell 2900; 8x 300GB in RAID6, 8GB RAM, 1x Intel 5405 quad-core 2.0GHz, 2x gigabit NICs
xen1: Dell T710; 8x 2TB in RAID6, 24GB RAM, 2x Intel 5504 quad-core 2.0GHz, 4x gigabit NICs
xen2: Dell R610; 3x 160GB in RAID5, 32GB RAM, 2x Intel 5506 quad-core 2.13GHz, 4x gigabit NICs
SAN: Dell MD3200i; 12x 2TB in RAID6, dual controllers

Currently all machines are running Xen 4.0.1 on CentOS 5.5. All three are set up with bonded interfaces, with the various VLANs passed over the bonded interface (a rough config sketch is in the P.S. below).

We have nearly all of our machines running on xen1 while xen2 sits virtually unused. xen-test has a couple of machines that are about to come out of testing into production, so I want to migrate them to either xen1 or xen2. The SAN is nearly unused as well. I do have some LUNs set up on it for an eventual migration, plus one 8TB partition for our backup solution to be migrated to. We currently use BackupPC; it resides on xen1, is using about 6TB of space, and is running out quickly, so I need to find a solution sooner rather than later.

One major feature I have been told to investigate by the higher-ups is a GUI, probably web based, so that in my absence my co-workers can manage machines more easily. I want to make the best possible use of all three of these machines and the SAN. Having a way to migrate machines live would be awesome too, although the mix of local and network storage makes that a little difficult. I have looked at other solutions like Proxmox and a few others, but I want to stay with Xen as I know it is capable of nearly everything I need to do. I just need guidance on the best way to implement it.

We run 95% Linux machines in a PV manner; the other 5% are Windows 2008 R2 servers. Making the domUs highly available between Xen dom0s is not majorly important for most of the servers, only a handful. I plan to make the individual services HA where possible, as mentioned at the beginning of this message. Still, it would be nice to be able to fail a domU over to another dom0 live with no problems.

Any pointers, input, or just general insight any of you may have on the best way to set this up would be much appreciated.

--
Donny B.
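P.S. For context, the bond-plus-VLAN layout on each dom0 is roughly the following (the device names, VLAN ID, and bridge name here are placeholders rather than our exact values, and this assumes the bridges are defined in the ifcfg files instead of via xend's default network-bridge script):

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond itself
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- one slave (eth1 is identical)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- a tagged VLAN on the bond
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr100

# /etc/sysconfig/network-scripts/ifcfg-xenbr100 -- the bridge the domUs attach to
DEVICE=xenbr100
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none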
If you can invest a week or a week and a half in the re-design to make it all fancy:

- Upgrade from CentOS 5 for better overall performance.
- Use OpenNebula in a small "private cloud" setup; this covers the GUI bit very well.
- Big "partitions" might waste a RAID array's performance (but that depends on too many factors. I like many small bits, others do not, and in general the cache sorts it out better than my preferences would ;)
- RAID6 definitely kills performance. Consider a RAID10.
- Live migration comes along easily with e.g. OpenNebula, and the same goes for load balancing via live migration between hosts, for which there are nice scripts these days. (A minimal plain-Xen sketch is at the end of this mail.)
- I have spent about half a year deploying an HA setup on Xen: old-style Heartbeat + DRBD in domUs. There were some caveats, mostly that you cannot see a link-down in the host from the point of view of the domU.
  Yes, you can bond in dom0. No, that does NOT solve the problem: if you have a double failure, or if the bridge in dom0 has an issue, your domU will NOT notice it. Using arp_monitor and bonding in the domU was not a solution either, since the arping did NOT work in CentOS 5.4.
  It doesn't get better with DRBD, which is more latency-sensitive than usual inside a domU. Oh, and did I mention on-the-wire corruption going unnoticed until you finally found every place where hardware offloading didn't work?
  Heartbeat v1 showed a general lack of gracefulness when dealing with such issues.
- My personal recommendation would be to get working Xen-ready NICs from Solarflare if you want to do anything that goes into clustering inside domUs, or need high LAN performance.

Alternatively, I wonder if Remus is not the one-and-best-ever solution. But so far I don't have it working :)

As for the redo-everything-with-a-GUI factor, you could also give Oracle VM 3 a test ride. Hmm, yeah: over the last 6 or 7 years in Xen GUI land, I'd say the thing sticking out has been OpenNebula (oh OK, and Eucalyptus when it was still vaporware with screenshots), and dom0-wise I haven't seen anything that is remotely as good as OVM. Sadly that is being locked down into an appliance now :)

Flo
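P.S. Even without OpenNebula, plain Xen live migration only needs shared storage for the domU disks plus the relocation server enabled on both dom0s. A minimal sketch, assuming hosts named xen1 and xen2 in example.com (adjust hostnames and the allow-list to your network):

# /etc/xen/xend-config.sxp on both dom0s -- enable the relocation server
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '^xen1\.example\.com$ ^xen2\.example\.com$')

# restart xend, then from the source dom0:
xm migrate --live mydomu xen2.example.com

Keep in mind the relocation port is unauthenticated, so keep it restricted to a management VLAN.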
correction.

2011/10/26 Florian Heigl <florian.heigl@gmail.com>:
> sticking out has been OpenNebula (oh OK, and Eucalyptus when it was
> still vaporware with screenshots), and dom0-wise I haven't seen

I meant to say "Enomalism"? Whatever turned into Enomaly inc. and strange applications that will be great if you need to launch 2000 AMI images on different clouds. Like, yeah, daily.

--
the purpose of libvirt is to provide an abstraction layer hiding all
xen features added since 2006 until they were finally understood and
copied by the kvm devs.
Thanks for the input. Please see replies inline below.

Donny B.

On 10/26/2011 3:34 PM, Florian Heigl wrote:
> If you can invest a week or a week and a half in the re-design to
> make it all fancy:
>
> - Upgrade from CentOS 5 for better overall performance.

Upgrade from CentOS 5.5 to what? The only direct upgrade path is to CentOS 5.7. I would actually like to move to something newer like CentOS 6 or Fedora 15 (16).

> - Use OpenNebula in a small "private cloud" setup; this covers the
> GUI bit very well.

I had never even heard of OpenNebula until now. Looking at it, it appears it will suit our needs very well. The fact that it can use Xen, KVM, and VMware as backends is a plus.

> - Big "partitions" might waste a RAID array's performance (but that
> depends on too many factors. I like many small bits, others do not,
> and in general the cache sorts it out better than my preferences would ;)

All of our domU disks are in an LVM setup. The only reason for the big 6TB or 8TB array is the way we have to keep backups. BackupPC, which we use for our backups, deduplicates and compresses files, so in that 6TB we currently have approximately:

* 856 full backups of total size 107743.45GB (prior to pooling and compression),
* 655 incremental backups of total size 1311.85GB (prior to pooling and compression).

> - RAID6 definitely kills performance. Consider a RAID10.

Understandable. The RAID6 was chosen for space and resilience; originally we only had xen1 and needed the extra space. If we can get a few add-on modules for the SAN, we can migrate to a RAID10 for performance.

> - Live migration comes along easily with e.g. OpenNebula, and the
> same goes for load balancing via live migration between hosts, for
> which there are nice scripts these days.

It does appear so. I am looking into this further.

> - I have spent about half a year deploying an HA setup on Xen:
> old-style Heartbeat + DRBD in domUs. There were some caveats, mostly
> that you cannot see a link-down in the host from the point of view
> of the domU.
> Yes, you can bond in dom0. No, that does NOT solve the problem: if
> you have a double failure, or if the bridge in dom0 has an issue,
> your domU will NOT notice it. Using arp_monitor and bonding in the
> domU was not a solution either, since the arping did NOT work in
> CentOS 5.4.
>
> It doesn't get better with DRBD, which is more latency-sensitive
> than usual inside a domU. Oh, and did I mention on-the-wire
> corruption going unnoticed until you finally found every place where
> hardware offloading didn't work?
>
> Heartbeat v1 showed a general lack of gracefulness when dealing with
> such issues.

I have looked at DRBD before and liked what I saw, but did not care to basically lose half my disk space. Using the SAN as a shared storage medium should help with that, though. My reasoning for bonding the interfaces was not only failover but also speed, although I cannot say for a fact that it has shown a speed improvement over a single link.
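The rough plan for the SAN piece, if I go that route, would be to map the same MD3200i LUN to both dom0s over iSCSI and carve the domU disks out of it with LVM. Something like this, where the portal IP and the VG/LV names are made up for illustration:

# on each dom0: discover the array's iSCSI portal and log in
iscsiadm -m discovery -t sendtargets -p 192.168.130.101
iscsiadm -m node -l

# on one dom0: put LVM on the (multipathed) LUN and carve out a domU disk
pvcreate /dev/mapper/mpath0
vgcreate vg_san /dev/mapper/mpath0
lvcreate -L 20G -n mail01-disk vg_san

# the domU config on both hosts then points at the shared LV:
#   disk = [ 'phy:/dev/vg_san/mail01-disk,xvda,w' ]

The part I know I have to be careful with is never starting the same domU on two hosts at once, and rescanning (vgscan) on the other dom0 after changing LVM metadata, since plain LVM does not coordinate metadata between hosts; CLVM would, but it drags in the whole cluster stack. And on the speed question: if the bonds are in active-backup mode, a single stream will never exceed one link anyway; only modes like 802.3ad (with matching switch support) aggregate bandwidth, and even then a single TCP flow stays on one slave.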
> - My personal recommendation would be to get working Xen-ready NICs
> from Solarflare if you want to do anything that goes into clustering
> inside domUs, or need high LAN performance.
>
> Alternatively, I wonder if Remus is not the one-and-best-ever
> solution. But so far I don't have it working :)
>
> As for the redo-everything-with-a-GUI factor, you could also give
> Oracle VM 3 a test ride. Hmm, yeah: over the last 6 or 7 years in
> Xen GUI land, I'd say the thing sticking out has been OpenNebula (oh
> OK, and Eucalyptus when it was still vaporware with screenshots),
> and dom0-wise I haven't seen anything that is remotely as good as
> OVM. Sadly that is being locked down into an appliance now :)

I have also looked into Enomaly and their offerings, but it seems geared more toward KVM now. I do think I will investigate OpenNebula further. Thanks for all the input.

> Flo