yingbin wang
2010-Apr-15 16:28 UTC
[Xen-users] network performance drop heavily in xen 4.0 release
Hi:

I report a bug! We have just upgraded to xen4.0 + kernel 2.6.31.13 recently. However, we found that network performance drops heavily in dom0 (reduced by nearly 2/3 vs xen3.4.2 + kernel 2.6.18.8).

our env:

hardware:
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)

compile env and filesystem:
Redhat AS 5.4

xm info:
-----------------------------------------------------------------
host                   : r02k08015
release                : 2.6.31.13xen
version                : #1 SMP Tue Apr 13 20:38:51 CST 2010
machine                : x86_64
nr_cpus                : 16
nr_nodes               : 2
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2266
hw_caps                : bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 24539
free_memory            : 15596
node_to_cpu            : node0:0,2,4,6,8,10,12,14
                         node1:1,3,5,7,9,11,13,15
node_to_memory         : node0:3589
                         node1:12007
node_to_dma32_mem      : node0:2584
                         node1:0
max_node_id            : 1
xen_major              : 4
xen_minor              : 0
xen_extra              : .0
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : dom0_mem=10240M
cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
cc_compile_by          : root
cc_compile_domain      :
cc_compile_date        : Tue Apr 13 23:04:16 CST 2010
xend_config_format     : 4
---------------------------------------------------------------------------

test tool: iperf-2.0.4
command:
root@10.250.6.25 : iperf -s
root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100

network performance:

xen4.0 + kernel 2.6.31.13:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 9.5 sec   249 MBytes   219 Mbits/sec

xen3.4.2 + kernel 2.6.18.8:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-15.0 sec   1.64 GBytes  941 Mbits/sec

BTW:
1. the disk IO performance also dropped, from 90 MB/s to 60 MB/s.
2. the attachment is the dom0 kernel compile config.

Cheers,
wyb

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
yingbin wang
2010-Apr-15 16:39 UTC
[Xen-devel] network performance drop heavily in xen 4.0 release
Hi:

I report a bug! We have just upgraded to xen4.0 + kernel 2.6.31.13 recently. However, we found that network performance drops heavily in dom0 (reduced by nearly 2/3 vs xen3.4.2 + kernel 2.6.18.8).

our env:

hardware:
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)

compile env and filesystem:
Redhat AS 5.4

xm info:
-----------------------------------------------------------------
host                   : r02k08015
release                : 2.6.31.13xen
version                : #1 SMP Tue Apr 13 20:38:51 CST 2010
machine                : x86_64
nr_cpus                : 16
nr_nodes               : 2
cores_per_socket       : 4
threads_per_core       : 2
cpu_mhz                : 2266
hw_caps                : bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 24539
free_memory            : 15596
node_to_cpu            : node0:0,2,4,6,8,10,12,14
                         node1:1,3,5,7,9,11,13,15
node_to_memory         : node0:3589
                         node1:12007
node_to_dma32_mem      : node0:2584
                         node1:0
max_node_id            : 1
xen_major              : 4
xen_minor              : 0
xen_extra              : .0
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : dom0_mem=10240M
cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
cc_compile_by          : root
cc_compile_domain      :
cc_compile_date        : Tue Apr 13 23:04:16 CST 2010
xend_config_format     : 4
---------------------------------------------------------------------------

test tool: iperf-2.0.4
command:
root@10.250.6.25 : iperf -s
root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100

network performance:

xen4.0 + kernel 2.6.31.13:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 9.5 sec   249 MBytes   219 Mbits/sec

xen3.4.2 + kernel 2.6.18.8:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-15.0 sec   1.64 GBytes  941 Mbits/sec

BTW:
1. the disk IO performance also dropped, from 90 MB/s to 60 MB/s.
2. the attachment is the dom0 kernel compile config.

Cheers,
wyb

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2010-Apr-15 16:51 UTC
Re: [Xen-devel] network performance drop heavily in xen 4.0 release
On 15/04/2010 17:39, "yingbin wang" <yingbin.wangyb@gmail.com> wrote:

> I report a bug! We have just upgraded to xen4.0 + kernel 2.6.31.13
> recently. However, we found that network performance drops heavily in
> dom0 (reduced by nearly 2/3 vs xen3.4.2 + kernel 2.6.18.8).

How does xen4.0 + kernel 2.6.18.8 perform? The regression is more likely in the dom0 kernel than in Xen itself. And you don't *have* to upgrade both.

-- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Fajar A. Nugraha
2010-Apr-16 03:30 UTC
Re: [Xen-users] network performance drop heavily in xen 4.0 release
On Thu, Apr 15, 2010 at 11:28 PM, yingbin wang <yingbin.wangyb@gmail.com> wrote:

> Hi:
> I report a bug! We have just upgraded to xen4.0 + kernel 2.6.31.13
> recently. However, we found that network performance drops heavily in
> dom0 (reduced by nearly 2/3 vs xen3.4.2 + kernel 2.6.18.8).

I highly suspect the new kernel is the culprit here. Try using xen 4.0 with kernel 2.6.18.8.

-- Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
yingbin wang
2010-Apr-16 04:24 UTC
Re: [Xen-devel] network performance drop heavily in xen 4.0 release
Sorry, I forgot to report the performance of xen4.0 + kernel 2.6.18.8:

xen4.0 + kernel 2.6.18.8:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-16.3 sec   1.79 GBytes  941 Mbits/sec

This combination can only meet some of our needs. The reasons why we didn't use 2.6.18.8:
1. Our storage application is based on NBD, which deadlocks on 2.6.18.8 when nbd-client and nbd-server are deployed on the same server.
2. Hard disks frequently go offline.

So we want to upgrade to kernel 2.6.31. We also tried xen4.0 + kernel 2.6.31.12 (gentoo-xen-kernel patch); the performance is acceptable, but blktap2 (VHD support) does not work.

Has anybody solved the problem in xen4.0 + kernel 2.6.31.13?

Cheers,
wyb

2010/4/16 Keir Fraser <keir.fraser@eu.citrix.com>:
> On 15/04/2010 17:39, "yingbin wang" <yingbin.wangyb@gmail.com> wrote:
>
>> I report a bug! [...]
>
> How does xen4.0 + kernel 2.6.18.8 perform? The regression is more likely
> in the dom0 kernel than in Xen itself. And you don't *have* to upgrade both.
>
> -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
yingbin wang
2010-Apr-16 04:26 UTC
Re: [Xen-users] network performance drop heavily in xen 4.0 release
Sorry, I forgot to report the performance of xen4.0 + kernel 2.6.18.8:

xen4.0 + kernel 2.6.18.8:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-16.3 sec   1.79 GBytes  941 Mbits/sec

This combination can only meet some of our needs. The reasons why we didn't use 2.6.18.8:
1. Our storage application is based on NBD, which deadlocks on 2.6.18.8 when nbd-client and nbd-server are deployed on the same server.
2. Hard disks frequently go offline.

So we want to upgrade to kernel 2.6.31. We also tried xen4.0 + kernel 2.6.31.12 (gentoo-xen-kernel patch); the performance is acceptable, but blktap2 (VHD support) does not work.

Has anybody solved the problem in xen4.0 + kernel 2.6.31.13?

Cheers,
wyb

2010/4/16 Fajar A. Nugraha <fajar@fajar.net>:
> On Thu, Apr 15, 2010 at 11:28 PM, yingbin wang <yingbin.wangyb@gmail.com> wrote:
>> Hi:
>> I report a bug! [...]
>
> I highly suspect the new kernel is the culprit here. Try using xen 4.0
> with kernel 2.6.18.8.
>
> -- Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Fajar A. Nugraha
2010-Apr-16 04:30 UTC
Re: [Xen-users] network performance drop heavily in xen 4.0 release
On Fri, Apr 16, 2010 at 11:26 AM, yingbin wang <yingbin.wangyb@gmail.com> wrote:

> We also tried xen4.0 + kernel 2.6.31.12 (gentoo-xen-kernel patch); the
> performance is acceptable, but blktap2 (VHD support) does not work.
> Has anybody solved the problem in xen4.0 + kernel 2.6.31.13?

http://wiki.xensource.com/xenwiki/XenKernelFeatures says the Novell kernel supports blktap2, so you might want to try 2.6.32 + gentoo-xen-kernel patch.

-- Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2010-Apr-16 05:56 UTC
Re: [Xen-devel] network performance drop heavily in xen 4.0 release
On Fri, Apr 16, 2010 at 12:24:12PM +0800, yingbin wang wrote:

> Sorry, I forgot to report the performance of xen4.0 + kernel 2.6.18.8:
>
> xen4.0 + kernel 2.6.18.8:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-16.3 sec   1.79 GBytes  941 Mbits/sec
>
> This combination can only meet some of our needs. The reasons why we
> didn't use 2.6.18.8:
> 1. Our storage application is based on NBD, which deadlocks on 2.6.18.8
>    when nbd-client and nbd-server are deployed on the same server.
> 2. Hard disks frequently go offline.

Do you have Xen credit scheduler weights properly configured?
http://wiki.xensource.com/xenwiki/XenBestPractices

> So we want to upgrade to kernel 2.6.31. We also tried xen4.0 + kernel
> 2.6.31.12 (gentoo-xen-kernel patch); the performance is acceptable,
> but blktap2 (VHD support) does not work.
> Has anybody solved the problem in xen4.0 + kernel 2.6.31.13?

You could try the pvops dom0 kernel from the xen/stable-2.6.32.x branch as well:
http://wiki.xensource.com/xenwiki/XenParavirtOps

-- Pasi

> Cheers,
> wyb
>
> [...]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
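Pasi's credit-scheduler suggestion usually amounts to giving dom0 a larger weight so its network and disk backend processing is not starved by busy guests. A minimal sketch with the xm toolstack of that era; the weight value 512 is an illustrative choice, not one given in the thread:

```shell
# Show current credit-scheduler weight/cap for all domains
xm sched-credit

# Give dom0 twice the default weight (the default is 256) so backend
# I/O processing in dom0 is scheduled ahead of busy guest vCPUs
xm sched-credit -d Domain-0 -w 512
```

This is an administration fragment that only runs on a Xen host with the xm tools installed.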
yingbin wang
2010-Apr-16 08:48 UTC
[Xen-users] Re: network performance drop heavily in xen 4.0 release
The problem is solved.

We disabled most of the debug config options and, to our surprise, the performance returned to its previous level. We compared the .config of 2.6.18.8 with that of 2.6.31.13; the differences are the debug options. I think the default .config in 2.6.31.13 should disable the debug options, or provide a way to turn them off.

Thanks all.

Cheers,
wyb

2010/4/16 yingbin wang <yingbin.wangyb@gmail.com>:
> Hi:
> I report a bug! We have just upgraded to xen4.0 + kernel 2.6.31.13
> recently. However, we found that network performance drops heavily in
> dom0 (reduced by nearly 2/3 vs xen3.4.2 + kernel 2.6.18.8).
>
> [...]
>
> test tool: iperf-2.0.4
> command:
> root@10.250.6.25 : iperf -s
> root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100
>
> network performance:
>
> xen4.0 + kernel 2.6.31.13:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0- 9.5 sec   249 MBytes   219 Mbits/sec
>
> xen3.4.2 + kernel 2.6.18.8:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-15.0 sec   1.64 GBytes  941 Mbits/sec
>
> BTW:
> 1. the disk IO performance also dropped, from 90 MB/s to 60 MB/s.
> 2. the attachment is the dom0 kernel compile config.
>
> Cheers,
> wyb

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
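The comparison described above (finding which debug options differ between the 2.6.18.8 and 2.6.31.13 configs) can be sketched with standard tools. The file names and the option subset below are illustrative stand-ins, not the poster's actual attachments:

```shell
# Create two illustrative .config fragments standing in for the real
# 2.6.18.8 and 2.6.31.13 dom0 configs (which were mail attachments)
cat > config-old <<'EOF'
CONFIG_BLK_DEV_LOOP=m
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
EOF
cat > config-new <<'EOF'
CONFIG_BLK_DEV_LOOP=m
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
EOF

# List the debug options that are enabled in the new config
grep '^CONFIG_DEBUG.*=y' config-new

# Show all line-level differences between the two configs
# (diff exits non-zero when the files differ, so mask that here)
diff config-old config-new || true
```

Running this on the real configs would surface the CONFIG_DEBUG_* options to turn off before rebuilding the dom0 kernel.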
yingbin wang
2010-Apr-16 08:49 UTC
[Xen-devel] Re: network performance drop heavily in xen 4.0 release
The problem is solved.

We disabled most of the debug config options and, to our surprise, the performance returned to its previous level. We compared the .config of 2.6.18.8 with that of 2.6.31.13; the differences are the debug options. I think the default .config in 2.6.31.13 should disable the debug options, or provide a way to turn them off.

Thanks all.

Cheers,
wyb

2010/4/16 yingbin wang <yingbin.wangyb@gmail.com>:
> Hi:
> I report a bug! We have just upgraded to xen4.0 + kernel 2.6.31.13
> recently. However, we found that network performance drops heavily in
> dom0 (reduced by nearly 2/3 vs xen3.4.2 + kernel 2.6.18.8).
>
> [...]
>
> test tool: iperf-2.0.4
> command:
> root@10.250.6.25 : iperf -s
> root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100
>
> network performance:
>
> xen4.0 + kernel 2.6.31.13:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0- 9.5 sec   249 MBytes   219 Mbits/sec
>
> xen3.4.2 + kernel 2.6.18.8:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-15.0 sec   1.64 GBytes  941 Mbits/sec
>
> BTW:
> 1. the disk IO performance also dropped, from 90 MB/s to 60 MB/s.
> 2. the attachment is the dom0 kernel compile config.
>
> Cheers,
> wyb

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Pasi Kärkkäinen
2010-Apr-18 18:55 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
On Fri, Apr 16, 2010 at 04:49:42PM +0800, yingbin wang wrote:

> The problem is solved.
>
> We disabled most of the debug config options and, to our surprise, the
> performance returned to its previous level. We compared the .config of
> 2.6.18.8 with that of 2.6.31.13; the differences are the debug options.
> I think the default .config in 2.6.31.13 should disable the debug
> options, or provide a way to turn them off.

Could you please post the exact .config options you turned off to fix the problem? I can add that info to the wiki page.

Also, can you please post the performance numbers with 2.6.18.8 and the pvops dom0, with and without debug? This would be interesting to know.

Thanks!

-- Pasi

> Thanks all.
>
> Cheers,
> wyb
>
> [...]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
yingbin wang
2010-Apr-24 06:42 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
Of course. The attachment is the dom0 kernel compile config that fixes the problem. I don't know the exact config option which causes the problem, so I didn't test 2.6.18.8 with debug. You can compare it with the previous config to find the differences.

Here are my test results:

test tool: iperf-2.0.4
command:
root@10.250.6.25 : iperf -s
root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100

network performance:

xen4.0 + kernel 2.6.31.13 (with debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-  9.5 sec   249 MBytes   219 Mbits/sec

xen4.0 + kernel 2.6.31.13 (without debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-100.0 sec   10.7 GBytes  920 Mbits/sec

xen3.4.2 + kernel 2.6.18.8 (without debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 15.0 sec   1.64 GBytes  941 Mbits/sec

xen4.0 + kernel 2.6.18.8 (without debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 16.3 sec   1.79 GBytes  941 Mbits/sec

Cheers,
wyb

2010/4/19 Pasi Kärkkäinen <pasik@iki.fi>:
> On Fri, Apr 16, 2010 at 04:49:42PM +0800, yingbin wang wrote:
>> The problem is solved.
>>
>> [...]
>
> Could you please post the exact .config options you turned off to fix
> the problem? I can add that info to the wiki page.
>
> Also, can you please post the performance numbers with 2.6.18.8 and the
> pvops dom0, with and without debug? This would be interesting to know.
>
> Thanks!
>
> -- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Pasi Kärkkäinen
2010-Apr-24 11:18 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release [SOLVED]
Jeremy: this is interesting. The performance of the pvops dom0 seems to be pretty close to 2.6.18-xen with the iperf workload!

Yingbin: thanks for the results.

-- Pasi

On Sat, Apr 24, 2010 at 02:42:24PM +0800, yingbin wang wrote:

> Of course. The attachment is the dom0 kernel compile config that fixes
> the problem. I don't know the exact config option which causes the
> problem, so I didn't test 2.6.18.8 with debug. You can compare it with
> the previous config to find the differences.
>
> Here are my test results:
>
> test tool: iperf-2.0.4
> command:
> root@10.250.6.25 : iperf -s
> root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100
>
> network performance:
>
> xen4.0 + kernel 2.6.31.13 (with debug):
> [  4]  0.0-  9.5 sec   249 MBytes   219 Mbits/sec
>
> xen4.0 + kernel 2.6.31.13 (without debug):
> [  4]  0.0-100.0 sec   10.7 GBytes  920 Mbits/sec
>
> xen3.4.2 + kernel 2.6.18.8 (without debug):
> [  4]  0.0- 15.0 sec   1.64 GBytes  941 Mbits/sec
>
> xen4.0 + kernel 2.6.18.8 (without debug):
> [  4]  0.0- 16.3 sec   1.79 GBytes  941 Mbits/sec
>
> [...]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ronaldo C. A. Chaves
2010-Apr-24 14:20 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
I compared the files and the difference is

CONFIG_BLK_DEV_LOOP=m
CONFIG_ATA_PIIX=m
CONFIG_XEN_DEV_EVTCHN=y

in config-2.6.31.13-high performance.

2010/4/24 yingbin wang <yingbin.wangyb@gmail.com>:
> Of course. The attachment is the dom0 kernel compile config that fixes
> the problem. I don't know which exact config option causes the problem,
> so I didn't test 2.6.18.8 with debug. You can compare it with the
> previous config to find the differences.
>
> Here are my test results:
>
> test tool: iperf-2.0.4
> command:
> root@10.250.6.25 : iperf -s
> root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100
>
> network performance:
>
> xen4.0+kernel2.6.31.13 (with debug):
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0- 9.5 sec   249 MBytes   219 Mbits/sec
>
> xen4.0+kernel2.6.31.13 (without debug):
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-100.0 sec  10.7 GBytes  920 Mbits/sec
>
> xen3.4.2+kernel2.6.18.8 (without debug):
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-15.0 sec  1.64 GBytes   941 Mbits/sec
>
> xen4.0+kernel2.6.18.8 (without debug):
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-16.3 sec  1.79 GBytes   941 Mbits/sec
>
> Cheers,
> wyb
>
> 2010/4/19 Pasi Kärkkäinen <pasik@iki.fi>:
> > On Fri, Apr 16, 2010 at 04:49:42PM +0800, yingbin wang wrote:
> > > The problem is solved. We turned off most of the debug config
> > > options, and the performance returned to its previous level. We
> > > compared the .config of 2.6.18.8 with 2.6.31.13; the differences
> > > are the debug options. I think the default .config for 2.6.31.13
> > > should disable the debug options, or at least provide a way to
> > > turn them off.
> >
> > Could you please post the exact .config options you turned off to
> > fix the problem? I can add that info to the wiki page.
> >
> > Also, can you please post the performance numbers with 2.6.18.8 and
> > pvops dom0, with and without debug? This would be interesting to know.
> >
> > Thanks!
> >
> > -- Pasi
> [snip: original bug report, quoted in full above]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
yingbin wang
2010-Apr-24 14:59 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
Which .config did you compare with?

2010/4/24 Ronaldo C. A. Chaves <xarqui@gmail.com>:
> I compared the files and the difference is
>
> CONFIG_BLK_DEV_LOOP=m
> CONFIG_ATA_PIIX=m
> CONFIG_XEN_DEV_EVTCHN=y
>
> in config-2.6.31.13-high performance.
[snip]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ronaldo C. A. Chaves
2010-Apr-24 15:07 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
I compared the config-2.6.31-13xen with the config-2.6.31.13-high
performance.

2010/4/24 yingbin wang <yingbin.wangyb@gmail.com>:
> Which .config did you compare with?
>
> 2010/4/24 Ronaldo C. A. Chaves <xarqui@gmail.com>:
> > I compared the files and the difference is
> >
> > CONFIG_BLK_DEV_LOOP=m
> > CONFIG_ATA_PIIX=m
> > CONFIG_XEN_DEV_EVTCHN=y
> >
> > in config-2.6.31.13-high performance.
[snip]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
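[Editor's note] The comparison described above can be reproduced with a plain unified diff filtered down to the option lines. A minimal sketch — the two stand-in files below are hypothetical two-line configs, standing in for the real attachments (config-2.6.31-13xen and config-2.6.31.13-high performance):

```shell
# Hypothetical stand-ins for the two attached .config files,
# just to illustrate the command.
printf '%s\n' 'CONFIG_BLK_DEV_LOOP=y' 'CONFIG_XEN_DEV_EVTCHN=m' > config-old
printf '%s\n' 'CONFIG_BLK_DEV_LOOP=m' 'CONFIG_XEN_DEV_EVTCHN=y' > config-new

# Show only the CONFIG_ lines that differ; diff's '---'/'+++' header
# lines don't match '^[+-]CONFIG_' and so are dropped.
diff -u config-old config-new | grep '^[+-]CONFIG_'
```

Run against the real files, this prints exactly the option lines that changed between the two builds.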
listmail
2010-Apr-24 15:57 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
Do you mind attaching the config for 2.6.18.8? I believe we should be
comparing that with "config-2.6.31.13-high performance".

Ronaldo C. A. Chaves wrote:
> I compared the config-2.6.31-13xen with the config-2.6.31.13-high
> performance.
[snip]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
listmail
2010-Apr-24 16:03 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
Or nevermind... :)  The config-2.6.31-13xen
<http://lists.xensource.com/archives/html/xen-devel/2010-04/binpV1aVlN6Wc.bin>
you originally posted was the one *with* the performance issue. I was
surprised to not see any debug differences.

listmail wrote:
> Do you mind attaching the config for 2.6.18.8? I believe we should be
> comparing that with "config-2.6.31.13-high performance".
[snip]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
listmail
2010-Apr-24 16:22 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
A quick diff between the two (the attached file is the full diff -u
output):

$ grep '^+' xenperf.diff
+++ config-2.6.31.13-high performance  2010-04-24 12:07:04.000000000 -0400
+# Fri Apr 16 13:26:59 2010
+# CONFIG_XEN_DEBUG_FS is not set
+# CONFIG_X86_CPU_DEBUG is not set
+# CONFIG_PM_DEBUG is not set
+# CONFIG_CPU_FREQ_DEBUG is not set
+# CONFIG_CAN_DEBUG_DEVICES is not set
+# CONFIG_CFG80211_DEBUGFS is not set
+CONFIG_BLK_DEV_LOOP=m
+# CONFIG_SCSI_DEBUG is not set
+CONFIG_ATA_PIIX=m
+# CONFIG_DM_DEBUG is not set
+# CONFIG_LIBERTAS_DEBUG is not set
+# CONFIG_ATH5K_DEBUG is not set
+# CONFIG_IWLWIFI_DEBUG is not set
+# CONFIG_B43_DEBUG is not set
+# CONFIG_B43LEGACY_DEBUG is not set
+# CONFIG_RT2X00_LIB_DEBUGFS is not set
+# CONFIG_SND_DEBUG is not set
+# CONFIG_HID_DEBUG is not set
+# CONFIG_USB_SERIAL_DEBUG is not set
+# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
+CONFIG_XEN_DEV_EVTCHN=y
+# CONFIG_JBD2_DEBUG is not set
+# CONFIG_DLM_DEBUG is not set
+# CONFIG_DEBUG_KERNEL is not set
+# CONFIG_SLUB_DEBUG_ON is not set
+# CONFIG_DYNAMIC_DEBUG is not set
+# CONFIG_DMA_API_DEBUG is not set
+# CONFIG_KEYS_DEBUG_PROC_KEYS is not set

listmail wrote:
> Or nevermind... :)  The config-2.6.31-13xen you originally posted was
> the one *with* the performance issue. I was surprised to not see any
> debug differences.
[snip]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
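[Editor's note] The `=y` to `# ... is not set` rewrite shown in the diff above can be scripted in bulk. A minimal sketch, with an illustrative three-line stand-in for the real .config — run it against your own tree's .config and follow up with `make oldconfig` so Kconfig re-resolves dependent options:

```shell
# Tiny stand-in .config (the real file is a few thousand lines long).
printf '%s\n' 'CONFIG_DEBUG_KERNEL=y' 'CONFIG_DYNAMIC_DEBUG=y' \
              'CONFIG_NET=y' > .config

# Rewrite "CONFIG_FOO=y" (or "=m") as "# CONFIG_FOO is not set", the
# form Kconfig uses for disabled options. The option names here are a
# subset of those in the diff above.
for opt in CONFIG_DEBUG_KERNEL CONFIG_DYNAMIC_DEBUG; do
    sed -i "s/^${opt}=.*/# ${opt} is not set/" .config
done

cat .config
# prints:
# # CONFIG_DEBUG_KERNEL is not set
# # CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_NET=y
```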
Jeremy Fitzhardinge
2010-Apr-26 20:23 UTC
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release
On 04/23/2010 11:42 PM, yingbin wang wrote:
> Of course. The attachment is the dom0 kernel compile config that fixes
> the problem. I don't know which exact config option causes the problem,
> so I didn't test 2.6.18.8 with debug. You can compare it with the
> previous config to find the differences.
>
> Here are my test results:
>
> test tool: iperf-2.0.4
> command:
> root@10.250.6.25 : iperf -s
> root@10.250.6.28 : iperf -c 10.250.6.25 -i 1 -t 100
>
> network performance:
>
> xen4.0+kernel2.6.31.13 (with debug):
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0- 9.5 sec   249 MBytes   219 Mbits/sec
>
> xen4.0+kernel2.6.31.13 (without debug):
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-100.0 sec  10.7 GBytes  920 Mbits/sec

How do these compare to running the same 2.6.31.13 kernel native?

Thanks,
    J

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Dear all,

I have a problem with /etc/init.d/xendomains: it no longer shuts down my
DomUs properly. Having looked into the code I was able to find some
things out, but currently I have no clue how to solve it.

I have set everything up so that xend starts all DomUs from /etc/xen/auto
and all DomUs are shut down when the system shuts down. I have the DomU
config files aaa, bbb, ccc in /etc/xen, but I create links like 10-bbb,
20-ccc, 30-aaa in /etc/xen/auto to control the start order. So the DomU
config file names I use to start are not the same as the DomU names.

There are some loops around "xm list -l", whose output is analyzed by
parseln in such a way that the string "(domain" starts a block, and
"(name" and "(domid" are used to identify the domains that e.g. should be
shut down. One problem could be that the PCI part of "xm list -l" also
returns "(domain" strings, which I guess could confuse this mechanism a
bit. Another thing is that the test "$1" =~ "\string" doesn't seem to fire
for me. I use Debian Lenny with bash 3.2-4.

I attached the /etc/default/xendomains which is called by
/etc/init.d/xendomains (I have put it in another place).

BR, Carsten.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
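[Editor's note] The `=~` test that "doesn't seem to fire" is consistent with a documented bash change: since bash 3.2, a quoted right-hand side of `[[ ... =~ ... ]]` is matched as a literal string rather than as a regular expression. A minimal sketch of the symptom and the usual workaround — the sample line and pattern below are hypothetical stand-ins for the strings xendomains' parseln handles:

```shell
#!/bin/bash
# In bash >= 3.2, quoting the pattern on the right of =~ makes it a
# literal string match, so regex patterns that matched under bash 3.1
# silently stop matching.
line='(domain (domid 3))'

# Quoted pattern: looks for the literal characters \(domid -- no match,
# because the input contains no backslash.
if [[ "$line" =~ "\(domid" ]]; then
    echo "quoted pattern matched"
fi

# Workaround: keep the regex in a variable and expand it unquoted,
# so it is treated as a regular expression again.
pat='\(domid'
if [[ "$line" =~ $pat ]]; then
    echo "unquoted pattern matched"
fi
# prints: unquoted pattern matched
```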