I just had a quick question about improving the performance of Xen VMs. I have installed Xen 3.0.2 on RHEL4 with one kernel compiled for all domains. When I access my VMs, the performance of domain U is slower than that of domain 0.

1. Are there any configuration changes I can make to improve the performance of domain U?

2. Domain U has no /boot entry in /etc/fstab; it is all a single partition in /. Are there any performance shortcomings to having just a / partition?

3. Can domain U be given swap space to improve performance? I am guessing the swap used is the one from domain 0 that was added during the initial installation.

4. Does the performance of domain U drop due to internal networking through bridging? Does anyone know how to fine-tune it to improve performance?

Thanks,
Naha.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
I have a similar question. When I get some load on my dom0, all the domUs more or less freeze and are impossible to work with (though the dom0 is as responsive as ever, unless I have some really heavy load, of course :) ). Is there some (easy) way to make sure that the dom0 doesn't "steal" the CPU, or is this working as intended?

On 9/3/06, saptarshi naha <naha80@gmail.com> wrote:
> I just had a quick question about improving the performance of Xen VMs.
> [...]
Hi,

3. You can pass a logical volume, a physical partition, or a file to the domUs for swap. From dom0's point of view there is no difference between a swap LV/partition/file and a data LV/partition/file.

Fabrice T.

2006/9/3, saptarshi naha <naha80@gmail.com>:
> 3. Can domain U be given swap space to improve performance? I am
> guessing the swap used is the one from domain 0 that was added
> during the initial installation.
> [...]
Hi,

You can adjust the priority of each domain (including dom0) with the "xm sched-sedf" command. Using the weight parameter: say you give domain X a weight of 512 and domain Y a weight of 1024; then, when both domains try to use the full CPU, domain Y will get twice as much CPU time as domain X. It's also possible to allow domains to use extra CPU time when other domains are idle (which doesn't hurt anyone, since those domains aren't using the CPU ;).

Fabrice T.

2006/9/3, Martin Svedin <martin.svedin@gmail.com>:
> Is there some (easy) way to make sure that the dom0 doesn't "steal"
> the CPU or is this working as intended?
> [...]
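Fabrice's weight example can be tried from dom0 roughly like this. The domain names are examples, and the exact flag syntax is an assumption (SEDF option names changed between Xen releases -- check `xm help sched-sedf` on your install). The `echo` prefix keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch of per-domain CPU weighting with the SEDF scheduler.
# Drop the "echo" below to execute for real on a Xen 3.x dom0.
XM="echo xm"

WEIGHT_X=512      # domX gets half the share of...
WEIGHT_Y=1024     # ...domY when both are CPU-bound
$XM sched-sedf domX -w $WEIGHT_X
$XM sched-sedf domY -w $WEIGHT_Y

# Let domX also soak up idle CPU time other domains are not using:
$XM sched-sedf domX -e 1
```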
Hi,

> 1. Are there any configuration changes I can make to improve the
> performance of domain U?

You will need to expose more of your setup. LVM devices are typically faster than loopback file devices. Other things might help as well. Give us more information.

> 2. Domain U has no /boot entry in /etc/fstab; it is all a single
> partition in /. Are there any performance shortcomings to having
> just a / partition?

I am also unsure about that point.

> 3. Can domain U be given swap space to improve performance?

Yes, just pass another device in and "swapon" it in the domU. You should use a real partition or LVM device for that. Swap for dom0 is NOT used inside domUs. Every domain has its own memory and needs to swap on its own. (Yes, in my view this is a shortcoming of Xen, as it can force one domain to swap out things it needs frequently while another domain uses memory for things it does not really need. However, it seems to be the only way to be fair; otherwise any domain could directly affect another domain's performance.)

> 4. Does the performance of domain U drop due to internal networking
> through bridging?

I am also unsure here. I don't think the bridge itself adds any noticeable load (there are performance benchmarks for netfilter with 3,500,000 concurrent connections and 60,000 connects per second), BUT the indirection of the virtual network interfaces inside the domUs means that any packet which is ready to send is only written to a buffer, and will only be sent when control is given back to dom0, which is the only domain that can really send...
So I think it might be useful to have a dedicated NIC in the domU using PCI passthrough; however, that's not possible in my environment (especially with a lot of domUs), so I didn't (or couldn't) try...

Regards,
Steffen
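Steffen's swap advice can be sketched as follows: carve out an LV in dom0, hand it to the guest as a second disk, and `swapon` it inside the domU. The volume group, domain name, and config path are made up for illustration, and the `echo` prefix keeps the privileged command a dry run:

```shell
#!/bin/sh
# Dry-run sketch: give a domU its own swap device on a dom0 LV.
# Names (vg0, domU1) are examples only; drop the "echo" on a real dom0.
LVCREATE="echo lvcreate"
CFG=/tmp/domU1.cfg            # normally /etc/xen/domU1

# dom0: create a 512 MB LV for the guest's swap
$LVCREATE -L 512M -n domU1-swap vg0

# export root and swap LVs to the guest in its config file
cat > "$CFG" <<'EOF'
disk = [ 'phy:vg0/domU1-root,sda1,w',
         'phy:vg0/domU1-swap,sda2,w' ]
EOF

# inside the domU, after (re)boot:
#   mkswap /dev/sda2 && swapon /dev/sda2
# and make it permanent in /etc/fstab:
#   /dev/sda2  none  swap  sw  0 0
```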
Hi,

Somewhere earlier in this thread, naha wrote that he had one kernel for all domains. I understood that the kernel configurations for dom0 and domU must be different (backend stuff for dom0 and frontend stuff for domU). If both are included, could that lead to bad performance?

Thanks,
Hans.
Thanks a lot Steffen and Fabrice, considering it's a weekend.

> You will need to expose more of your setup. LVM devices are
> typically faster than loopback file devices.

I am using LVM and then GNBD for the domain Us. Is there a better way of doing this to improve performance? I have Dell PowerEdges with SCSI drives.

> > 2. Domain U has no /boot entry in /etc/fstab. [...]
> I am also unsure about that point.

I am not SURE, but maybe it is to prevent the boot sector from being corrupted. But then that has NO connection to performance.

> Yes, just pass another device in and "swapon" it in the domU. You
> should use a real partition or LVM device for that. [...]

Great reply, thanks. I will create the swap using LVM and GNBD, and then pass the swap partition as /dev/sda2 through the configuration files. Correct me if I am wrong.

> I don't think the bridge itself adds any noticeable load [...]

Not SURE, but can giving the MAC and IP address in the Xen configuration file make any difference to internal networking performance? Right now, I have just made the changes in domain U using "netconfig", and that sticks to the domains. Also, when I do ifconfig in domain 0, there is no MAC and no IP for vif0.1 and vif0.2 (which are my domain Us).

> So I think it might be useful to have a dedicated NIC in the domU
> using PCI passthrough [...]

Because the thread is going so well, I am tempted to ask one more question. Is it possible to do a Windows installation on Xen with Xeon processors but without dual core? I searched the Intel site and XenSource, but there is no mention of plain Xeon processors having Intel VT technology. If it is not possible, is there any other way of doing Windows virtualization I can try?

Thanks a lot,
Naha
Hi,

> I am not SURE, but maybe it is to prevent the boot sector from
> being corrupted. But then that has NO connection to performance.

No, there isn't such a thing as a boot sector related to /boot. /boot is sometimes kept as a distinct partition to make sure it stays intact even if the filesystem of another partition is broken, thereby keeping the system "somewhat" bootable. You usually have your kernels there. However, that will not have any effect on performance. My "I am unsure" was about having multiple partitions. A lot of people seem to say that it has advantages to put /var on its own partition, for example, but I didn't try it myself.

> I will create the swap using LVM and GNBD, and then pass the
> swap partition as /dev/sda2 through the configuration files.
> Correct me if I am wrong.

If you need to make sure you can live-migrate your domU to another host, you need to do that. However, if you use GNBD for a live mirror as a backup, you should not put swap there. Swap needs really fast access, and network-backed devices tend not to be so fast...

> Not SURE, but can giving the MAC and IP address in the Xen
> configuration file make any difference to internal networking
> performance?

No. It doesn't matter how you configure the virtual NIC. But it surely makes a difference whether you use a virtual NIC on a Linux bridge or a real PCI NIC (via PCI passthrough).

> Is it possible to do a Windows installation on Xen with Xeon
> processors but without dual core?

I don't have VT and I don't know about Xeon, but AFAIK there is output while booting Xen that shows processor flags, and you should see it there. Moreover, you need a VT-enabled BIOS with Intel processors, so have a look there.

> If it is not possible, is there any other way of doing Windows
> virtualization I can try?

Bochs, VMware, etc. Not with pure Xen without VT.

Regards,
Steffen
Hey,

> However, if you use GNBD for a live mirror as a backup, you should
> not put swap there. Swap needs really fast access, and network-backed
> devices tend not to be so fast...

So if I understand correctly, if I use GNBD I should not put swap on it. Does iSCSI work better than GNBD, and would I be able to put swap on it and improve performance?

> Bochs, VMware, etc. Not with pure Xen without VT.

Yeah, I checked for VMX in dmesg, but it was not there. I also checked the BIOS for any Intel VT option to enable, but there was none. I want to make Windows 2k3 run through Xen.

Also, I wanted to know if there are any good Xen management tools available. I got Enomalism. Has anyone tried this? Or is there any other good Xen management tool?

Thanks,
Naha
On Sunday 03 September 2006 8:11 am, saptarshi naha wrote:
> Not SURE, but can giving the MAC and IP address in the Xen
> configuration file make any difference to internal networking
> performance?

What does get much better I/O performance for domUs is making sure dom0 is always able to run. For that, dedicate a CPU (or at least a hyperthread) exclusively to dom0. Even if your dom0 CPU usage is close to 0%, if it doesn't have to fight for a CPU each time a domU has to send an IP packet, you'll get much higher I/O.

Another point: you say you're using GNBD to get network storage, but you don't mention whether you run the GNBD client in dom0 or domU. I guess you would avoid a lot of domain switching if you run it in dom0 and export the block devices to the domU, not just the eth devices.

-- 
Javier
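Javier's suggestion of a dedicated CPU for dom0 can be sketched like this. The domain names are examples, and the exact `xm` syntax should be checked against your version (`xm help vcpu-pin`); `echo` keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch: reserve physical CPU 0 for dom0 so it never has to
# fight a guest for CPU while servicing I/O. Drop the "echo" to run.
XM="echo xm"

DOM0_CPU=0
GUEST_CPU=1

# xm vcpu-pin <domain> <vcpu> <cpu-list>
$XM vcpu-pin 0     0 $DOM0_CPU      # dom0 (id 0), vcpu 0 -> cpu 0
$XM vcpu-pin domU1 0 $GUEST_CPU     # guests share the other cpu
$XM vcpu-pin domU2 0 $GUEST_CPU
```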
Hi,

> So if I understand correctly, if I use GNBD I should not put swap
> on it. Does iSCSI work better than GNBD, and would I be able to put
> swap on it and improve performance?

No, I am saying that it would be better to have swap on a real local hard drive. Neither GNBD nor iSCSI.

> Yeah, I checked for VMX in dmesg, but it was not there. I also
> checked the BIOS, but there was no option. I want to make Windows
> 2k3 run through Xen.

Then you probably don't have VT support, and you won't get happy with 2k3r2.

> Also, I wanted to know if there are any good Xen management tools
> available.

Command-line Xen. :D

Regards,
Steffen
Hey all... thanks for the awesome replies.

> No, I am saying that it would be better to have swap on a real local
> hard drive. Neither GNBD nor iSCSI.

Now the question is: with swap being on the LVM (/dev/sda2), will there be any problem with my migration using GNBD? I am running GNBD on domain 0, exporting the drives, and then importing them on the other server.

> Command-line Xen. :D

I also love the xm tool :) But I kind of want some other management tool.

Regards,
Naha
Hello Naha,

Somewhere in the beginning of this thread you wrote that you had one kernel for all your domains. I thought that the dom0 kernel should have a configuration that differs from the domU kernels (i.e. the setting of CONFIG_XEN_PRIVILEGED_GUEST and the backend/frontend stuff). I don't know if a dom0 kernel will run as a domU, but if it does, I expect it to give performance problems.

Regards,
Hans.
Hans,

Yeah, I am using the same kernel to run all the domains. Wow, I never thought that could be the problem. Let me check.

Thanks

> Somewhere in the beginning of this thread you wrote
> that you had one kernel for all your domains. [...]
Hi,

I just got a brand new IBM/Lenovo ThinkVista system with a Pentium D and installed Fedora on it to use Xen. I enabled the VT stuff in the BIOS, and the CPU has VMX capabilities. Output of cpuinfo:

cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 6
model name      : Intel(R) Pentium(R) D CPU 3.00GHz
stepping        : 4
cpu MHz         : 2992.740
cache size      : 2048 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 6
wp              : yes
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl vmx est cid cx16 xtpr lahf_lm
bogomips        : 7486.51

(processor 1 reports identical values.)

If I now try to start a VMX domain, I get an error and found the following entry in xend-debug.log:

Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 77, in op_create
    dominfo = self.xd.domain_create(config)
  File "/usr/lib/python2.4/site-packages/xen/xend/XendDomain.py", line 228, in domain_create
    dominfo = XendDomainInfo.create(config)
  File "/usr/lib/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 194, in create
    vm.initDomain()
  File "/usr/lib/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1268, in initDomain
    self.info['device'])
  File "/usr/lib/python2.4/site-packages/xen/xend/image.py", line 44, in create
    return findImageHandlerClass(imageConfig)(vm, imageConfig, deviceConfig)
  File "/usr/lib/python2.4/site-packages/xen/xend/image.py", line 74, in __init__
    self.configure(imageConfig, deviceConfig)
  File "/usr/lib/python2.4/site-packages/xen/xend/image.py", line 267, in configure
    raise VmError("Not an HVM capable platform, we stop creating!")
VmError: Not an HVM capable platform, we stop creating!

For me it looks like Xen can't find the VT extension on this box... any hints?

Sven
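Sven's cpuinfo does list `vmx`, so the CPU side looks fine. A quick self-contained check (using the flags line from the post above as sample data, so it runs anywhere; on the real box grep /proc/cpuinfo instead):

```shell
#!/bin/sh
# Check for the VT flag. FLAGS is copied from the cpuinfo above so the
# script is self-contained; on the actual machine use:
#   grep -w vmx /proc/cpuinfo
FLAGS="fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl vmx est cid cx16 xtpr lahf_lm"

if echo "$FLAGS" | grep -qw vmx; then
    echo "CPU reports VT (vmx)"
else
    echo "no vmx flag -- HVM guests will not work"
fi

# If the flag is there but Xen still refuses HVM, common culprits are:
# a BIOS that needs a full power-off after enabling VT, an outdated
# BIOS, or a xen/xend build without HVM support.
```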
Hi,

> Somewhere in the beginning of this thread you wrote that you had one
> kernel for all your domains. [...] I don't know if a dom0 kernel will
> run as a domU, but if it does, I expect it to give performance
> problems.

It does work and is even the default for the source distribution. I, for myself, always create different kernels anyway. But please don't start a new discussion on this here; see the thread "Custom Kernel" instead.

Regards,
Steffen
Hi,

> Now the question is: with swap being on the LVM (/dev/sda2), will
> there be any problem with my migration using GNBD? I am running GNBD
> on domain 0, exporting the drives, and then importing them on the
> other server.

Correct. You will not be able to live-migrate such a domain. That's why I wrote:

> If you need to make sure you can live-migrate your domU to another
> host, you need to do that [use GNBD].

But I suggest having swap on LVM/sda anyway, and if you NEED to migrate: temporarily "swapon" another swap partition (on GNBD), "swapoff" the LVM/sda one, and after the migration swap back. Swap on network-backed devices is inherently slow, and you should not use it on systems that need to perform.

Regards,
Steffen
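The swapon/swapoff dance described above, run inside the domU around a migration, might look roughly like this. Device names are examples (sda2 the fast local/LVM swap, sdb1 a GNBD-backed swap both hosts can reach), and `echo` keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch of temporarily moving swap to a network-reachable
# device for a live migration, then back. Run inside the domU.
SWAPON="echo swapon"; SWAPOFF="echo swapoff"

# 1. before migrating: switch to the GNBD-backed swap both hosts can see
$SWAPON  /dev/sdb1
$SWAPOFF /dev/sda2
# 2. in dom0:  xm migrate --live domU1 other-host
# 3. after migrating: switch back to the fast local/LVM swap
$SWAPON  /dev/sda2
$SWAPOFF /dev/sdb1
```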
Thanks Steffen, I am going to give that a try. Also, has anyone done HA on Xen?

> Correct. You will not be able to live-migrate such a domain. [...]
> Swap on network-backed devices is inherently slow, and you should
> not use it on systems that need to perform.

Regards,
Naha
> From: Hans de Hartog
> Sent: 03 September 2006 13:30
> Subject: Re: [Xen-users] Improve performance of domain U
>
> Somewhere earlier in this thread, naha wrote that he had one kernel
> for all domains. I understood that the kernel configurations for
> dom0 and domU must be different (backend stuff for dom0 and frontend
> stuff for domU). If both are included, could that lead to bad
> performance?

Not particularly noticeable in any system that has a reasonable amount of memory per domain. If you're trying (as I've seen examples of) to cram 32 domains into 256 MB of memory, you WILL need to strip anything that isn't absolutely critical out of the kernel. But drivers that are modules will not use any memory anyway (disk space will be used, but no memory as such), so if you have modules in the kernel (most people will build this way), then surplus drivers will never get loaded and thus never consume any resources.

The backend/frontend choice is made automagically based on the domain's "plug-n-play" mechanisms, so it will not load any of the frontend drivers when it needs backend drivers, or the other way around. Obviously, if the frontend/backend drivers are compiled fixed into the kernel, a small amount of extra space is needed for the kernel, but I doubt that it's more than 100K or so, and if we're still talking about "normal" systems where the goal is to have several megabytes per domain, it would make no noticeable difference.

-- 
Mats
Hi,

> Also, has anyone done HA on Xen?

What's HA?

Regards,
Steffen
Steffen Heil wrote:
>> Also, has anyone done HA on Xen?
>
> What's HA?

http://www.linux-ha.org/
Hi,

I am about to try DRBD with Heartbeat for high availability (HA). Has anyone done that? How should it be layered? I have LVM, then GNBD.

How is the performance of DRBD? Is there a better method than DRBD to do HA on Xen?

Thanks, people,
Naha
saptarshi naha wrote:
> I am about to try DRBD with Heartbeat for high availability.
> Has anyone done that? How should it be layered? I have LVM,
> then GNBD.

Please correct me if I'm wrong. I'm just starting with Xen and I only just googled for DRBD, but my experience with HA could not keep me from the following remarks:

If you're into HA, the first rule is: eliminate SPOFs (Single Points Of Failure). Your physical box IS a SPOF. So, running more operating systems on a single box (Xen) does not give you more availability (on the contrary: the more doms you're running, the more doms die if your box dies). Therefore, using DRBD within Xen domains on the same physical box doesn't give you more availability either. In general, (IMHO) Xen buys you nothing for HA.
Hans de Hartog wrote:
> If you're into HA, the first rule is: eliminate SPOFs
> (Single Points Of Failure). Your physical box IS a SPOF. [...]
> Therefore, using DRBD within Xen domains on the same physical
> box doesn't give you more availability either.

That makes sense :)

> In general, (IMHO) Xen buys you nothing for HA.

However, I thought the advantage of Xen for HA was primarily due to easy migration. If things are about to fall over, it's a simple process to move the domains to other hardware.

-- 
Regards,
Julian Davison, ICT Technician
Christchurch Boys' High School
http://www.cbhs.school.nz/
Andrey Khavryuchenko, 2006-Sep-05 20:38 UTC, [Xen-users] Re: Improve performance of domain U

Hans,

"HdH" == Hans de Hartog wrote:
HdH> Therefore, using DRBD within Xen domains on the same physical
HdH> box doesn't give you more availability either.
HdH> In general, (IMHO) Xen buys you nothing for HA.

I haven't followed the initial discussion, but a domU is much easier to move/migrate than a dom0. This is an important element in HA.

-- 
Andrey V Khavryuchenko
Software Development Company
http://www.kds.com.ua/
Hi Hans,

I am thinking of doing HA on two physically different servers running Xen. I haven't completely figured it out yet, but I am guessing that both servers should be running Xen, one being master and the other slave. As soon as the master VM fails on one server, through Heartbeat the slave VM on the other server kicks in and is able to take over.

I wanted to know if anyone has done DRBD, how it performs, and how to layer the block devices.

On 9/6/06, Hans de Hartog <dehartog@rootsr.com> wrote:
> If you're into HA, the first rule is: eliminate SPOFs
> (Single Points Of Failure). Your physical box IS a SPOF. [...]
> In general, (IMHO) Xen buys you nothing for HA.

Thanks,
Naha
>> In general, (IMHO) xen buys you nothing for HA.
>
> However, I thought the advantage of xen for HA was
> primarily due to easy migration. If things are about
> to fall over it's a simple process to move the domains
> to other hardware.

You could also think of it this way: Heartbeat is installed in the dom0 (or a special domU), and heartbeat then manages the domUs, so that they are started on a second physical machine when the first one dies.
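[A minimal sketch of that idea: a heartbeat-v1 style resource handler that dom0 could run to start/stop/check a domU. The domU name "web1", the config path, and the function shape are all invented for illustration; `XM` is overridable only so the sketch can be exercised without a real `xm` binary.]

```shell
# Sketch: dom0-side resource handler that heartbeat could invoke with
# start/stop/status to bring a domU up on whichever machine is alive.
domu_rc() {
    action=$1
    domu=${2:-web1}            # hypothetical domU name
    xm_bin=${XM:-xm}           # overridable for testing
    case "$action" in
        start)  "$xm_bin" create "/etc/xen/${domu}.cfg" ;;
        stop)   "$xm_bin" shutdown "$domu" ;;
        status) if "$xm_bin" list "$domu" >/dev/null 2>&1; then
                    echo running
                else
                    echo stopped
                fi ;;
    esac
}
```

[Heartbeat would call this on failover exactly as it calls any other resource script; the real work is making sure the domU's disks are reachable from both machines.]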
On Wednesday 06 September 2006 05:21, saptarshi naha wrote:
> Hi,
>
> Hans, I am thinking of doing HA across 2 physically different servers
> running xen. I haven't completely figured it out yet, but I am guessing
> that both servers should be running xen, 1 being master and the other
> being slave. As soon as the master VM fails on 1 server, heartbeat
> brings up the slave VM on the other server so it can take over.
>
> Has anyone done DRBD? I'd like to know about its performance and also
> about layering the block devices.

We're using DRBD on separate servers and then exporting "disks" to the dom0s for use in domUs via iSCSI. We use LVM on top of DRBD. We haven't looked at using HA to maintain the domUs yet.

Matthew
-- 
Matthew Wild Tel.: +44 (0)1235 445173
M.Wild@rl.ac.uk URL http://www.ukssdc.ac.uk/
UK Solar System Data Centre and World Data Centre - Solar-Terrestrial Physics, Chilton
Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX
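[For reference, the storage side of such a setup might look like the drbd.conf fragment below. The resource name, hostnames, devices, and addresses are all invented for illustration — this is a sketch of the layering, not Matthew's actual configuration.]

```
# /etc/drbd.conf (fragment) -- hypothetical names and addresses
resource vmdisks {
  protocol C;                    # synchronous replication
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/sda3;         # backing partition, becomes an LVM PV
    address   192.168.10.1:7788;
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.10.2:7788;
    meta-disk internal;
  }
}
```

[On top of /dev/drbd0 one would then create an LVM physical volume and carve out logical volumes to export to the dom0s over iSCSI, as described above.]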
Markus Hochholdinger
2006-Oct-12 13:09 UTC
Re: [Xen-users] Improve performance of domain U
Hi,

On Tuesday, 5 September 2006 22:19, Julian Davison wrote:
> Hans de Hartog wrote:
> > saptarshi naha wrote:
> > If you're into HA, the first rule is: eliminate SPOFs
> > (Single Points Of Failure). Your physical box IS a SPOF.
> > So, running more operating systems on a single box (xen)
> > does not give you more availability (on the contrary, the
> > more doms you're running, the more doms die if your box
> > dies).
> > Therefore, using DRBD within xen-domains on the same physical
> > box doesn't give you more availability either.
>
> That makes sense :)

Well, HA is not _only_ about SPOFs but also about the time it takes to recover from a failure.

> > In general, (IMHO) xen buys you nothing for HA.
>
> However, I thought the advantage of xen for HA was
> primarily due to easy migration. If things are about
> to fall over it's a simple process to move the domains
> to other hardware.

Yes, and this is the way I do it:

- 4 servers overall
- 2 servers as gnbd servers, each on a separate physical GBit network connected to the 2 dom0 servers
- 2 servers as dom0 servers, which are also gnbd clients
- the 2 gnbd servers are configured identically except for the IP address
- the 2 dom0 servers are configured identically except for the IP address
- the domUs were built as follows:
  * sda1 (gnbd server 1) and sdb1 (gnbd server 2) as md0, used as the root fs
  * sda2 (gnbd server 1) and sdb2 (gnbd server 2) as md1, used as swap
  * network eth0 bridged in dom0 to the external network

The result is that I can do live migration of domUs between dom0 server 1 and dom0 server 2. If one gnbd server fails, the RAID inside the domU degrades but keeps working. If the dom0 my domU is running on fails, I have to do a manual failover (which could also be automated), i.e. start the domU on the other dom0 server. In one of my setups I've combined the dom0 with the gnbd server, so I only need two servers. I also have a setup where the external network connection (internet) has no SPOF.
That is, two network cards bonded in dom0 for failover, connected to two switches; these two switches are connected to two HA firewalls, and each firewall has its own cable to the internet, leaving the building in different directions (well, this is _extreme_).

advantages

- The only single point of failure is the CPU a domU is running on.
- RAID1 inside the domU makes it possible to resize the filesystem without interrupting the domU (you need to block-detach and block-attach the resized block device so the domU notices the resize). This means:
  * degrade the raid1: fail and remove one block device from the raid1
  * block-detach
  * resize that block device on the gnbd server (it should be a logical volume)
  * block-attach
  * rebuild the raid1
  * degrade the raid1: fail and remove the other block device from the raid1
  * block-detach
  * resize the other block device on the gnbd server (again a logical volume)
  * block-attach
  * rebuild the raid1
  * grow the raid1
  * resize the filesystem
  (this is tested and works well)
- Disk I/O performance. The gnbd servers are connected over a GBit network. The disks on each gnbd server are RAID0 (striped) for maximum performance; reliability comes from the other gnbd server. In my tests I get ~60MB/s to the filesystem (dd) in a domU. Performance can be boosted with more striped disks (in my case I've done this with two or three cheap SATA disks). In one of my setups the gnbd servers are connected with two GBit network cards bonded for double throughput, because there are 6 slots for disks but only three are currently in use. Remember, this is something like a SAN built out of cheap hardware! (The RAID0 is not really a RAID0, but physical volumes for the logical volume manager, out of which I make striped logical volumes.)
- You can use the power of both dom0s, and only have to start all domUs on one dom0 if the other fails. This means that after a dom0 failure the domUs get less memory and CPU power, but they WORK.
  And when there is no failure, you get all the power you have bought.
- You, or rather the domU admins, only have to care about their single domU as if it were a single hardware server, but with the advantages of HA.
- Clean design. Only commonly used and proven techniques are used here (GBit ethernet, gnbd, raid1, Xen). OK, gnbd and xen are not in the mainline kernel tree, but gnbd has been used by Red Hat for a long time now, and xen will (hopefully) get into the mainline kernel tree.

possibilities

- In my setups I do rsync/hardlink backups on the gnbd servers into an extra backup partition, which I resize as I need space. The backups run at different times on each gnbd server, so I have backups from different points in time on the different gnbd servers. I can keep as many backups as I have space for, and I do tape backups from the gnbd servers. My backup script also automatically backs up new logical volumes, so I no longer have to think about the backups.
- You can have more than two gnbd servers for more performance or more availability.
- You can have more than two dom0 servers for more CPU power.
- You can also make your network fail-safe, as mentioned above.
- You can make snapshots (lvm) of the disks of your domUs and start the domU on a different IP to test things like updates.

drawbacks

- You can get at most the CPU power of one hardware server. No HPC!
- You have to look after the raid1 inside the domUs. In my case, scripts inside the domUs do this and rebuild automatically after a failure is recovered. With block devices over the network this can happen more often than with block devices on SCSI or SATA.
- I had to write my own fence daemon for gnbd because the shipped ones didn't fit my needs.
- You have to watch the memory consumption on the dom0s if one has to take over from another dom0.

Well, there may be more pros and cons to my solution.
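[The resize cycle described above might look roughly like the following. The domU name, device names, volume group, and size are all invented, and the commands must run on three different machines (domU, dom0, gnbd server) as noted in the comments — a sketch of the procedure, not a runnable script.]

```shell
# inside the domU: degrade the mirror by dropping one half
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# in dom0: detach the backing device from the running domU
xm block-detach domu1 sdb1
# on the gnbd server: grow the backing logical volume
lvextend -L +10G /dev/vg0/domu1-b
# in dom0: re-attach it so the domU sees the new size
xm block-attach domu1 phy:gnbd/domu1-b sdb1 w
# inside the domU: re-add and let the mirror rebuild
mdadm /dev/md0 --add /dev/sdb1
# ... repeat the same steps for the sda1 half ...
# finally, inside the domU: grow the array and the filesystem
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```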
The only thing I can say is that I have one setup with four servers running since April 2006, and one setup with two servers (gnbd server and dom0 on the same hardware) since June 2006, without problems. The next setup with four servers is in progress (a lot of old hardware will be migrated to it) and will soon be in production. I also have a lot of other xen hosts in production use, but they run on single hardware only. I use the backup and raid1 approach on the single xen hosts as well.

-- 
greetings
eMHa