Does anyone have any suggestions on Xen performance tuning as far as documentation or books? I just want a general idea of the tuning parameters. Thank you.
I work as a performance geek and spend a lot of time working on Xen. 80% of Xen performance ends up as general Linux/network/driver/hardware/DB/web tuning (in other words, hard). For the remaining 20% the most useful resources are:

1. This list
2. The Running Xen book
3. The Definitive Guide to Xen (hard)
4. Academic papers and blog entries

There is a lot of information around, but it isn't always correct or easily digested, and there are plenty of areas that are not fully understood.

Sent from my iPhone

On Oct 6, 2009, at 6:31 PM, LoD MoD <lodmod.dod@gmail.com> wrote:
> Does anyone have any suggestions on Xen performance tuning as far as documentation or books?
Hello xen-users,

How can I increase the performance of the network layer? And which topology is better (easier) for DomU networking: bridge, route, or NAT?

Thanks.

--
Best regards,
Igor
Hello all xen-users,

How can I increase the performance of the network layer? And which topology is better (easier) for DomU networking: bridge, route, or NAT? I plan to use the DomU as a VPN (NAT) server.

Thanks.

--
Best regards,
Igor
On Sat, Oct 10, 2009 at 4:53 PM, Igor S. Pelykh <kesha@freenet.lg.ua> wrote:
> How can I increase the performance of the network layer?
> And which topology is better (easier) for DomU networking: bridge, route, or NAT?

IMHO for network setup it's easier to use a bridge. That way you manage your dom0 (network-wise) the same way you manage your L2 or L3 switch. When you need NAT, you can use bridge + NAT (which is what libvirt does with virbr0).

Performance-wise, I didn't have to do any tweaking with RHEL5. domU can easily saturate the uplink, with domU <-> domU throughput in the range of 2-3 Gbps. Some people have reported problems (search the list archive) with recent pv_ops dom0 kernels.

--
Fajar
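(For reference, bridged networking on a stock Xen 3.x host is usually switched on in /etc/xen/xend-config.sxp; the two lines below are the common defaults, shown only as a sketch since script and device names can differ per distro. brctl is an easy way to confirm what the bridge ended up being called - xenbr0 on older Xen, or eth0 with the physical NIC renamed to peth0 with newer network-bridge scripts:)

(network-script network-bridge)
(vif-script vif-bridge)

# then, from dom0:
brctl show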
On Mon, Oct 12, 2009 at 7:00 PM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> Performance-wise, I didn't have to do any tweaking with RHEL5. domU
> can easily saturate the uplink, with domU <-> domU throughput in the range
> of 2-3 Gbps.

Fajar,

Are you sure about your DomU to DomU speeds? What methodology did you use to test this? I've done extensive testing in this area and I've never seen any numbers that can come near that, with or without a pv_ops kernel.

Grant McWilliams
I am facing the same issue. Bridging (the default mode) is the better and simpler way to set up networking. But Fajar, the strange thing in my case is that I don't see this virbr0 when running ifconfig in my dom0; instead I have peth0 acting as the vif for dom0.

Is there any issue with building Xen with a pv kernel on an FC11 platform where virtualization is enabled? I compiled and built Xen on a platform where I enabled virtualization at FC11 installation time.

Regards,
Fasiha Ashraf

--- On Tue, 13/10/09, Fajar A. Nugraha <fajar@fajar.net> wrote:
> IMHO for network setup it's easier to use a bridge. When you need NAT, you can
> use bridge + NAT (which is what libvirt does with virbr0).
I am facing the same issue. Bridging (the default mode) is the better and simpler way to set up networking. But the strange thing in my case is that I don't see this virbr0 when running ifconfig in my dom0; instead I have peth0 acting as the vif for dom0. Is there any issue with building Xen with a pv kernel on an FC11 platform where virtualization is enabled? I compiled and built Xen on a platform where I enabled virtualization at FC11 installation time.

McWilliams, I have tested domU <-> domU throughput using netperf-2.4.5 and got a throughput of 0.29 Mbps, which is no doubt very poor. Marco observed the same throughput in this case and identified that it is because netperf uses setitimer() to send packets at a fixed rate; sending packets at a fixed rate is the cause of this poor throughput. I don't know yet how to improve it.

Regards,
Fasiha Ashraf

--- On Tue, 13/10/09, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> Are you sure about your DomU to DomU speeds? What methodology did you use to test this?
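(For anyone trying to reproduce this, the netperf run was roughly along these lines; the IP is a placeholder and the options below are just the common TCP stream test, not necessarily the exact invocation used:)

# on the receiving domU
netserver

# on the sending domU, 60-second TCP stream test
netperf -H <receiver-domU-ip> -l 60 -t TCP_STREAM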
On Mon, Oct 12, 2009 at 9:50 PM, Fasiha Ashraf <feehapk@yahoo.co.in> wrote:
> McWilliams, I have tested domU <-> domU throughput using netperf-2.4.5 and
> got a throughput of 0.29 Mbps, which is no doubt very poor.

It isn't just tests like netperf... If you copy a file via rcp from DomU to Dom0 you will get twice the performance as from DomU to DomU. That's why I was wondering about his testing methodology. If you assign a NIC to each DomU so the traffic leaves the physical machine, goes through a switch and comes back, it's faster than DomU to DomU. There have been long discussions as to why. I've not re-run my tests with 3.4.1 so I don't know if that was fixed.

Grant McWilliams
On Tue, Oct 13, 2009 at 9:17 AM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> Are you sure about your DomU to DomU speeds? What methodology did you use
> to test this?

I remembered seeing that much throughput on one of my tests, but I forgot which one. I'll look into it later. Here's a recent test from a system that I have access to right now: domU <-> domU performance tested with iperf, using Xen 3.4.1, dom0 2.6.18-164.el5xen, domU using a self-compiled 2.6.29.6-xen Suse kernel. IP addresses hidden (changed to hostnames).

# iperf -c domU1 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to domU1, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  5] local domU2 port 14092 connected with domU1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  2.00 GBytes  1.72 Gbits/sec
[  4] local domU2 port 5001 connected with domU1 port 1753
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.57 GBytes  1.35 Gbits/sec

This is on a 2.4GHz Opteron 2378. Since domU <-> domU transfers are mostly CPU-bound, faster CPUs should yield higher performance.

--
Fajar
On Tue, Oct 13, 2009, Fajar A. Nugraha wrote:
> This is on a 2.4GHz Opteron 2378. Since domU <-> domU transfers are
> mostly CPU-bound, faster CPUs should yield higher performance.

Thanks for the numbers. I reran iperf tests and this is what I got on Xen 3.4.1. This is on an 8-core Intel system with 16GB RAM.

DomU to DomU - 2.0 Gbits/sec
DomU to Dom0 - 3.46 Gbits/sec
Dom0 to DomU - 346 Mbits/sec

These are similar ratios to what I got before, but because this system is about 3x faster than the old system the numbers are bigger. Fajar, if you get time could you see if you see something similar on your system?

Grant McWilliams
On Tue, Oct 13, 2009 at 3:05 PM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> DomU to DomU - 2.0 Gbits/sec
> DomU to Dom0 - 3.46 Gbits/sec
> Dom0 to DomU - 346 Mbits/sec
>
> These are similar ratios to what I got before, but because this system is about
> 3x faster than the old system the numbers are bigger. Fajar, if you get time
> could you see if you see something similar on your system?

That's odd. I'll see if I can get a test Intel box tomorrow to compare the numbers. In the meantime, what does your environment look like? 64-bit? What distro and kernel version?

If possible, can you test installing a RHEL/CentOS 5.4 64-bit dom0 and update to Gitco's Xen 3.4.1? That's what my setup mostly looks like, and so far network performance (including domU <-> domU) has been great.

--
Fajar
On Tue, Oct 13, 2009 at 5:06 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> If possible, can you test installing a RHEL/CentOS 5.4 64-bit dom0 and
> update to Gitco's Xen 3.4.1? That's what my setup mostly looks like, and so
> far network performance (including domU <-> domU) has been great.

CentOS 5.4? If my time machine worked :-) . I'm already running CentOS 5.3 with Gitco's Xen 3.4.1.

Here's another system: CentOS 5.3 Dom0, CentOS 5.3 DomUs on a Dual Core Duo Xeon system (2.8GHz)

DomU to DomU - 1.93 Gbits/sec
DomU to Dom0 - 2.76 Gbits/sec
Dom0 to DomU - 193 Mbits/sec

A third system running a CentOS 5.3 Dom0, an Ubuntu 9.04 DomU with the Debian Lenny xenified kernel, and a CentOS 5.3 DomU, on a Core2 Duo (2.2 GHz):

DomU to DomU - 2.89 Gbits/sec
DomU to Dom0 - 4.4 Gbits/sec
Dom0 to DomU - 257 Mbits/sec

None of these summaries are really that accurate, because if I do an iperf -c 192.168.0.100 -r the return speed is always in the toilet.

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.191 port 5001 connected with 192.168.0.196 port 57543
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  3.38 GBytes  2.89 Gbits/sec
------------------------------------------------------------
Client connecting to 192.168.0.196, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.191 port 38701 connected with 192.168.0.196 port 5001
write2 failed: Broken pipe
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 0.0 sec  15.6 KBytes  343 Mbits/sec

This is the behavior I observed almost 2 years ago and it still seems to be consistent. Fajar, if you could run these on your systems to see if you're seeing something different. The one thing that's always the same is that I'm using CentOS 5.3 as a Dom0.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
U guys mind using iptraf during an xfer to see what #s u get?

- Brian
On Wed, Oct 14, 2009 at 12:54 AM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> DomU to DomU - 1.93 Gbits/sec
> DomU to Dom0 - 2.76 Gbits/sec
> Dom0 to DomU - 193 Mbits/sec

Ah ... so domU <-> domU is working FINE, right? That is similar to the results I get :D

As for dom0 -> domU performance, it is indeed lower, and I'm not sure why. In my case it's still usable though (about 600-800 Mbps), since I don't run any service on dom0 that is used by domU. Here's my dom0 <-> domU result.

# iperf -c 192.168.122.1 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.122.1, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  5] local 192.168.122.49 port 52890 connected with 192.168.122.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  2.72 GBytes  2.34 Gbits/sec
[  4] local 192.168.122.49 port 5001 connected with 192.168.122.1 port 16809
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  747 MBytes   627 Mbits/sec

192.168.122.1 -> dom0's virbr0, running RHEL5.4 64bit, kernel-xen-2.6.18-164.2.1.el5, Xen 3.4.1.
192.168.122.49 -> domU, kernel-xen-2.6.18-164.2.1.el5

I'm not sure why your dom0 -> domU is about 3 times slower than mine. Perhaps newer kernel versions matter.

--
Fajar
On Tue, Oct 13, 2009 at 11:18 PM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> Ah ... so domU <-> domU is working FINE, right? That is similar to
> the results I get :D

It isn't working fine. I said at the end of my message that the numbers aren't quite right because I didn't post both halves of the bidirectional test. The second half is 1/4 the speed of the first. Can you post the whole bidirectional test for DomU to DomU, like the dom0 <-> domU one you posted? As soon as I get my virtual server back up I'll post full numbers. I didn't realize until the end of the test that I was only recording one direction. The reverse direction numbers are 1/4 the speed. Anyway, I'll do more comprehensive testing.

> As for dom0 -> domU performance, it is indeed lower, and I'm not sure
> why. In my case it's still usable though (about 600-800 Mbps), since I
> don't run any service on dom0 that is used by domU.
On Wed, Oct 14, 2009 at 2:06 PM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> It isn't working fine. I said at the end of my message that the numbers
> aren't quite right because I didn't post both halves of the bidirectional
> test. The second half is 1/4 the speed of the first.

I posted the result for the self-compiled 2.6.29-xen domU in my second mail on this thread (I got around 1-2 Gbps). Here's one for domUs with RHEL's kernel-xen-2.6.18-164.2.1.el5 (same dom0):

# iperf -c 192.168.122.49 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.122.49, TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  5] local 192.168.122.144 port 64178 connected with 192.168.122.49 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  2.57 GBytes  2.21 Gbits/sec
[  4] local 192.168.122.144 port 5001 connected with 192.168.122.49 port 45773
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  2.66 GBytes  2.29 Gbits/sec

--
Fajar
On Wed, Oct 14, 2009 at 2:19 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> [ ID] Interval       Transfer     Bandwidth
> [  5]  0.0-10.0 sec  2.57 GBytes  2.21 Gbits/sec
> [  4]  0.0-10.0 sec  2.66 GBytes  2.29 Gbits/sec

Looks like the Red Hat kernel is faster for return-trip tests: identical speed both ways.

Grant McWilliams
Ok, more thorough numbers. Dom0 = CentOS 5.3 x86_64, DomU = CentOS 5.3 x86_64.

DomU to DomU

iperf -c 192.168.1.140 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.140, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.139 port 38128 connected with 192.168.1.140 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  2.44 GBytes  2.10 Gbits/sec

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.140 port 5001 connected with 192.168.1.139 port 38128
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  2.44 GBytes  2.10 Gbits/sec
------------------------------------------------------------
Client connecting to 192.168.1.139, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.140 port 47926 connected with 192.168.1.139 port 5001
write2 failed: Broken pipe
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 0.0 sec  15.6 KBytes  387 Mbits/sec

-----------------------------------------------------------------------------

Dom0 to DomU

iperf -c 192.168.1.140
------------------------------------------------------------
Client connecting to 192.168.1.140, TCP port 5001
TCP window size: 26.6 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.254 port 55458 connected with 192.168.1.140 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.04 GBytes  893 Mbits/sec

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.140 port 5001 connected with 192.168.1.254 port 55459
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.02 GBytes  878 Mbits/sec

I'm also noticing that numbers in the 800-900 Mbit range are consistent for Dom0-DomU traffic, and numbers in the 2.5 Gbit range are consistent for traffic one way in DomU-DomU. When I get the low numbers (200-350 Mbit) it's also saying "write2 failed: Broken pipe". Maybe something is not right somewhere.

Grant McWilliams
On Wed, Oct 14, 2009 at 5:15 PM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> [  4] local 192.168.1.140 port 47926 connected with 192.168.1.139 port 5001
> write2 failed: Broken pipe
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0- 0.0 sec  15.6 KBytes  387 Mbits/sec

That one's not right. Your transfer is only 15.6 KB, plus the broken pipe.

> I'm also noticing that numbers in the 800-900 Mbit range are consistent for
> Dom0-DomU traffic.

Yes, that's what I have as well. It's more than enough for my needs though, so I can live with that.

> Numbers in the 2.5 Gbit range are consistent for traffic one way in DomU-DomU.
> When I get the low numbers (200-350 Mbit) it's also saying "write2 failed:
> Broken pipe". Maybe something is not right somewhere.

Try disabling iptables on both domUs, see if it matters. I experienced something like this when testing Windows <-> Linux with different versions of iperf (iperf-2.0 on Linux vs 1.7 on Windows).

--
Fajar
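(On RHEL/CentOS, the quickest way to take the firewall out of the picture for a test like this is roughly the following, run in both domUs and optionally in dom0; re-enable it afterwards:)

service iptables stop     # stop the firewall for this boot
chkconfig iptables off    # optional: keep it off across reboots
iptables -L -n            # verify the chains are empty / policy ACCEPT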
Hi,

I am facing the same problem in DomU <-> DomU throughput via netperf, and following this thread. After doing everything suggested, now using iperf with iptables flushed on both DomUs and Dom0, I got these results in a simple and a bidirectional run.

[root@F11-G6S2 ~]# iperf -c 10.11.21.215 -t 120
------------------------------------------------------------
Client connecting to 10.11.21.215, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.11.21.216 port 58883 connected with 10.11.21.215 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-120.5 sec  4.14 MBytes  288 Kbits/sec

[root@F11-G5S2 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  5] local 10.11.21.215 port 5001 connected with 10.11.21.216 port 58883
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-120.7 sec  4.14 MBytes  288 Kbits/sec

[root@F11-G6S2 ~]# iperf -c 10.11.21.215 -t 120 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.11.21.215, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  5] local 10.11.21.216 port 49735 connected with 10.11.21.215 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-120.5 sec  4.14 MBytes  288 Kbits/sec
[  4] local 10.11.21.215 port 5001 connected with 10.11.21.216 port 49735
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.7 sec  4.14 MBytes  288 Kbits/sec
------------------------------------------------------------
Client connecting to 10.11.21.216, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 10.11.21.215 port 33522 connected with 10.11.21.216 port 5001
write2 failed: Broken pipe
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 0.0 sec  15.6 KBytes  99.0 Mbits/sec

The following two options are configured: CONFIG_NO_HZ=y and CONFIG_HIGH_RES_TIMERS=y. Also, my clock source is xen.
I have noticed a problem here in the output of this command:

[root@HPCNL-SR-2 linux-2.6-xen]# cat /proc/timer_list
Timer List Version: v0.4
HRTIMER_MAX_CLOCK_BASES: 2
now at 5047062694034 nsecs

cpu: 0
 clock 0:
  .base:       e3002554
  .index:      0
  .resolution: 1 nsecs      <-- this should be something like 999848 nsecs
  .get_time:   ktime_get_real
  .offset:     1255671557273968324 nsecs

Fasiha Ashraf
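(For comparison, the usual way to check the active clocksource and the timer resolution inside a domU on these kernels is something like the following; the paths assume the standard sysfs/proc layout:)

cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
grep -A 6 "clock 0" /proc/timer_list | head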
On Fri, Oct 16, 2009 at 2:07 PM, Fasiha Ashraf <feehapk@yahoo.co.in> wrote:
> I am facing the same problem in DomU <-> DomU throughput via netperf, and
> following this thread. After doing everything suggested, now using iperf with
> iptables flushed on both DomUs and Dom0, I got these results in a simple and
> a bidirectional run.

I'm pretty sure your problem is different. My point in this thread was that domU <-> domU network performance works great if you use the 2.6.18-xen dom0 kernel. Grant was having a problem with a similar setup due to the broken pipe, so I suggested disabling iptables first.

Your low throughput was on a pv_ops dom0 kernel, right? AFAIK there's no solution for that (yet).

--
Fajar
On Fri, Oct 16, 2009 at 02:30:07PM +0700, Fajar A. Nugraha wrote:
> Your low throughput was on a pv_ops dom0 kernel, right? AFAIK there's no
> solution for that (yet).

Would be good to hunt that down... if there actually is such a problem with the pv_ops dom0 kernel.

-- Pasi
On Fri, Oct 16, 2009 at 12:30 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> I'm pretty sure your problem is different. My point in this thread was
> that domU <-> domU network performance works great if you use the
> 2.6.18-xen dom0 kernel. Grant was having a problem with a similar setup
> due to the broken pipe, so I suggested disabling iptables first.

That problem is definitely a different one. I haven't gotten back to my "problem" as of yet, but instead have been dealing with why sudo 1.7.2 suddenly won't read 40 sudoers files!

I think Fajar's and my viewpoints differ somewhat. DomU to DomU performance is 1/3 to 2/3 that of DomU to Dom0, even though the DomU to DomU traffic is probably traversing the same path as DomU to Dom0. Speeds of 400-800 Mbit may be adequate, but I still see it as a problem. I'm going to get some sleep before I get back at it again though, otherwise the numbers will make no sense to me.

Also, if the pv_ops kernel has speed problems in networking, wouldn't this also affect other pv_ops virtualization solutions? I've not heard anything but rave reviews of KVM's PV network speeds, although I've not tested them at all. It will be interesting to put KVM through the same tests.

Cheers!
Grant McWilliams
<snip> DomU to DomU performance is 1/3 to 2/3 that of DomU to Dom0, even though the DomU to DomU traffic is probably traversing the same path as DomU to Dom0. <snip>

Cheers!
Grant McWilliams
______________________________________

Seeing as how each domU NIC exists in dom0, as does the bridge, I would argue that traffic between domU and domU takes 50% more steps than traffic between domU and dom0. Looking at it the other way, that would be 33% fewer steps between domU and dom0 than between domU and domU. Here is why:

Between domU A and domU B, the traffic has to traverse domU A's vif in dom0, the bridge in dom0, and domU B's vif in dom0. That is three virtual devices.

Between domU A and dom0, the traffic only has to traverse domU A's vif and the bridge; it has then arrived in dom0. That is only two virtual devices.

I could be wrong, and would love to be given a more detailed and technical answer as to how I am if that is the case. Regarding the performance difference, my opinion is that we could always use more performance in all aspects, period. Sure, eventually costs are prohibitive and we have to settle, but more performance wouldn't hurt no matter how many people think performance is good enough, and only seeing notably better performance can convince some people that any given performance isn't good enough.

Dustin
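(To make the path concrete: all of those virtual devices are visible from dom0, and a typical bridged host looks roughly like the illustrative listing below, where vif1.0 and vif2.0 are the backend halves of two domUs' NICs and peth0 is the renamed physical NIC. The bridge id and interface names here are just examples:)

# brctl show
bridge name     bridge id               STP enabled     interfaces
eth0            8000.001e4f2a1b3c       no              peth0
                                                        vif1.0
                                                        vif2.0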
On Fri, Oct 16, 2009 at 7:43 PM, Dustin Henning <Dustin.Henning@prd-inc.com> wrote:
> Seeing as how each domU NIC exists in dom0, as does the bridge, I would argue
> that traffic between domU and domU takes 50% more steps than traffic between
> domU and dom0.

It would make sense if domU <-> domU were SLOWER than dom0 -> domU ... the thing is, it's not the case :D Grant seems to have other problems (broken pipe) in his domU <-> domU test, so let's ignore that for now. My test case has shown that domU <-> domU and domU -> dom0 get good results. Only dom0 -> domU is considerably worse.

> I could be wrong, and would love to be given a more detailed and technical
> answer as to how I am if that is the case.

That would be good, if one is available :) You might be able to get a better answer on xen-devel.

> Regarding the performance difference, my opinion is that we could always use
> more performance in all aspects, period.

Totally agree. However, my point when replying to Igor's initial mail is that considering all the requirements and limitations, a bridged setup is still at the top of my list when choosing the "best" networking setup for Xen, even when dom0 -> domU is considerably slower.

Other setups (like PCI passthrough for the domU NIC) might be able to provide better throughput, but it's too complicated for me (not to mention that it prevents live migration from working). Then again, what's best for me might not be the best for others, depending on situation and priorities. YMMV.

--
Fajar
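(For completeness, the passthrough setup mentioned above is roughly: hide the NIC from dom0 with pciback, then hand it to the guest in its config file. This is only a sketch; the PCI address is a placeholder and the exact mechanism varies by dom0 kernel:)

# dom0 kernel boot parameter (2.6.18-xen style pciback)
pciback.hide=(0000:03:00.0)

# in the domU config file
pci = [ '0000:03:00.0' ]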
On Fri, Oct 16, 2009 at 5:43 AM, Dustin Henning <Dustin.Henning@prd-inc.com> wrote:
> Between domU A and domU B, the traffic has to traverse domU A's vif in dom0,
> the bridge in dom0, and domU B's vif in dom0. That is three virtual devices.

Dustin,

This is how it works... :-) This entire thread exists somewhere else, since I went through the exact same process the first time I was doing my testing. I understand how Xen works, and it's unfortunate that this limitation may never be removed. Maybe it will, but by then everyone will have migrated to KVM, which doesn't seem to suffer from it.

It's very inefficient to pass all traffic to Dom0 and then pass it back to another DomU. It's like you always have a router to go through, even if the DomUs are on the same physical network. It also seems that it's the return trip that's slow (even without the broken pipe); my testing before (and the resulting message thread) showed that data going to the Dom0 was fast but data coming back wasn't. If we throw enough hardware at it we get wire speed, or we assign a NIC to each DomU, but then we can only get wire speed.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
On Fri, Oct 16, 2009 at 6:07 AM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> However, my point when replying to Igor's initial mail is that considering
> all the requirements and limitations, a bridged setup is still at the top of
> my list when choosing the "best" networking setup for Xen, even when
> dom0 -> domU is considerably slower.

Agreed on the bridge setup. You do get the best speed out of it. Also, as I'm slowly seeing, if you throw more hardware at the solution the inefficient path of Dom0 to DomU does get faster. On my Dual Core Duo Xeon that speed is 400Mb/sec, but on the newer Core2 Duo based Xeon I'm getting about double that. The ratio seems to be the same for DomU - DomU, DomU - Dom0 and Dom0 - DomU, telling me that it's a Xen architecture issue. Getting close to wire speed on my slowest link is nice, but I'd sure like to see those 2-3 Gb numbers everywhere. :-)

I have so many other areas I need to tune that this issue for now is a non-event. I just read yesterday that Redhat is swearing that you can run more KVM VMs on a piece of hardware than you can Xen VMs. I currently have 41 VMs on one 8-core Xeon for a classroom environment (nxclient) and it's dogging, but I have a lot of things I can do to help pick up the pace.

Thanks for the input everyone.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
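(Two of the more common knobs for a loaded dom0 like that are capping dom0's memory/vcpus on the hypervisor command line and pinning dom0's vcpus so they don't contend with guests. A sketch only; the values below are examples, not recommendations:)

# in grub.conf, on the xen.gz line
kernel /xen.gz dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin

# or pin a running domain's vcpus by hand
xm vcpu-pin Domain-0 0 0
xm vcpu-pin Domain-0 1 1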
Does KVM have similar options that allow for communication directly between VMs? I only used KVM once, with one VM, to determine that fully-virtualized performance was too poor (the same as in Xen, but with less driver support). If so, I don't know why it doesn't suffer from the same issue, and I should hope Xen could be fixed. Regardless, I am disappointed by RedHat's decision to jump ship and go to KVM. I think Xen makes more sense, and pvops would be ideal. The primary reason I use Xen, though, is because I use Windows, and KVM doesn't have the drivers yet, so I get better performance in Xen.

Dustin
On Fri, Oct 16, 2009 at 12:08 PM, Dustin Henning <Dustin.Henning@prd-inc.com> wrote:
> Does KVM have similar options that allow for communication directly
> between VMs? I only used KVM once, with one VM, to determine that
> fully-virtualized performance was too poor (the same as in Xen, but with
> less driver support).

Things are different in KVM and there's less isolation between "DomUs" since they're basically processes. Theoretically KVM should have much faster guest-to-guest networking than Xen, but less isolation. However, a PV guest on Xen should cream KVM in just about every other aspect. I've heard, though I haven't finished my testing yet to confirm this, that KVM is faster than a Xen HVM doing the same tasks if they both have PV drivers, which means KVM may be a better solution for virtualizing Windows. I have a contract for virtualizing 75 Windows 2K systems, so I'll be doing a great deal of testing in the coming months to know for sure.

The KVM method is attractive because it does things the "Linux way", as opposed to being very invasive like Xen. For enterprise projects I only use Xen and use KVM for development work, but I keep my mind open. Most all enterprise solutions are either ESX or Xen based, which tells you something. However, it's getting harder all the time to stick with Xen because most distributions are dropping it. I'm now using Debian Lenny kernels in my Ubuntu DomUs. It may end up being only Suse that supports Xen at some point. I think Xen is a great product, but if KVM can manage to do the same thing and be easier to manage, Xen will sadly go away. I've sort of given up hope that it will ever have decent Dom0 support in the kernel. By the time that happens KVM will have taken over the entire Linux VM world (which makes the conspiracy theory part of my brain buzz).

Grant McWilliams
On Sat, Oct 17, 2009 at 2:19 AM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> However, a PV guest on Xen should cream KVM in just about every other aspect.

One of the reasons for pv_ops support in the Linux kernel was so that when the kernel runs under a virtualization solution (KVM, Xen, VMware, etc.) it can automatically switch to virtualization-friendly instructions, thus eliminating the CPU overhead imposed by hardware-assisted virtualization and achieving performance similar to that of a Xen PV guest. At least in theory :)

> I've heard, though I haven't finished my testing yet to confirm it, that KVM is faster than a Xen HVM doing the same tasks if they both have PV drivers, which means KVM may be a better solution for virtualizing Windows.

Do share your test results once you have them.

> I have a contract for virtualizing 75 Windows 2K systems

Are they even supported? Mainstream support ended many years ago.

> so I'll be doing a great deal of testing in the coming months to know for sure.

GPLPV does not support Windows 2000 anymore, so it'd be interesting to see how you work out the I/O performance problems. Are there any other PV drivers for Windows 2000 HVM guests?

> However, it's getting harder all the time to stick with Xen because most distributions are dropping it.

RHEL5 is still supported until 2014, so it should cover existing installations. I'm still having doubts about what to use for new installations in the next year or two, but considering that (in my tests) the 2.6.18-xen kernel still outperforms the forward-ported and pv_ops kernels, I'd probably stick with RHEL5.

> I'm now using Debian Lenny kernels in my Ubuntu DomUs.

Tried that, had some problems, switched to a self-compiled 2.6.29-xen kernel. I'd rather not use a pv_ops kernel (for now) since it doesn't support growing memory beyond the initial allocation (yet). Might take a look at sid's kernel later, though.

> It may end up being only Suse that supports Xen at some point. I think Xen is a great product, but if KVM can manage to do the same thing and be easier to manage, Xen will sadly go away. I've sort of given up hope that it will ever have decent Dom0 support in the kernel. By the time that happens, KVM will have taken over the entire Linux VM world (which makes the conspiracy-theory part of my brain buzz).

I was actually thinking that the best OS for a Xen dom0 in the future might be Solaris 11 :D Assuming Oracle doesn't kill xVM, that is.

-- Fajar

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
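[As a side note on the pv_ops point above: a quick way to check, from inside a guest, what it actually ended up running on is to look at the interfaces the kernel exposes. The sketch below is only illustrative: /sys/hypervisor is provided by Xen-aware (CONFIG_XEN) kernels, and the "hypervisor" CPU flag only shows up on fairly recent kernels, so treat missing files as inconclusive rather than as proof of bare metal.

#!/usr/bin/env python
# Guest-side check of the virtualization environment (illustrative sketch).
# /sys/hypervisor/* is exposed by Xen-aware kernels; which files exist
# depends on kernel version and config.

def read(path):
    try:
        f = open(path)
        try:
            return f.read().strip()
        finally:
            f.close()
    except IOError:
        return None

hyp = read("/sys/hypervisor/type")   # "xen" on Xen PV and PVHVM guests
if hyp:
    major = read("/sys/hypervisor/version/major") or "?"
    minor = read("/sys/hypervisor/version/minor") or "?"
    print("hypervisor: %s %s.%s" % (hyp, major, minor))
else:
    # Fallback: recent kernels set a "hypervisor" CPU flag for HVM/KVM guests.
    cpuinfo = read("/proc/cpuinfo") or ""
    if "hypervisor" in cpuinfo:
        print("running hardware-virtualized (hypervisor flag set), type unknown")
    else:
        print("no obvious hypervisor markers found")

From dom0, "xm dmesg" or the guest's own boot log gives the same information with less ceremony.]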
I've sort of given up hope that it will ever have decent Dom0 support in the kernel.

Isn't dom0 support now being added to pv_ops? Or has that effort stalled for some reason?

From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Grant McWilliams Sent: Friday, October 16, 2009 3:20 PM To: Dustin.Henning@prd-inc.com Cc: Fajar A. Nugraha; Fasiha Ashraf; xen-users@lists.xensource.com Subject: Re: [Xen-users] Xen Performance

<snip>

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
On Sat, Oct 17, 2009 at 3:14 AM, Jeff Sturm <jeff.sturm@eprize.com> wrote:
> I've sort of given up hope that it will ever have decent Dom0 support in the kernel.
>
> Isn't dom0 support now being added to pv_ops? Or has that effort stalled for some reason?

Not stalled. Still WIP. See http://wiki.xensource.com/xenwiki/XenParavirtOps (especially the links to Jeremy's emails) and http://marc.info/?l=linux-kernel&m=124396532803026&w=2 (read the whole thread if you have time).

-- Fajar

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
On Fri, Oct 16, 2009 at 12:51 PM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> On Sat, Oct 17, 2009 at 2:19 AM, Grant McWilliams <grantmasterflash@gmail.com> wrote:
> > However, a PV guest on Xen should cream KVM in just about every other aspect.
>
> One of the reasons for pv_ops support in the Linux kernel was so that when the kernel runs under a virtualization solution (KVM, Xen, VMware, etc.) it can automatically switch to virtualization-friendly instructions, thus eliminating the CPU overhead imposed by hardware-assisted virtualization and achieving performance similar to that of a Xen PV guest. At least in theory :)
>
> > I've heard, though I haven't finished my testing yet to confirm it, that KVM is faster than a Xen HVM doing the same tasks if they both have PV drivers, which means KVM may be a better solution for virtualizing Windows.
>
> Do share your test results once you have them.
>
> > I have a contract for virtualizing 75 Windows 2K systems
>
> Are they even supported? Mainstream support ended many years ago.

Sorry, Win 2k8... Thought one thing, wrote another.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows." Now they have two problems.

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users