Hi everybody!

Now that Xen 3.0 unstable (downloaded on 2006-04-06) is up and running
for me, I did some performance tests. I chose a Linux kernel compile as
a benchmark to compare native versus domU performance. The results are:

            native     domU    loss
make -j4      553s     666s    -17%
make -j2      565s     713s    -22%
make        1,026s   1,199s    -14%

System: Athlon64, Dual-Core, 2.0 GHz, 64-bit, glibc 2.3.6 (Debian Etch)
Native settings: kernel booted with 'mem=512M', kernel 2.6.16.1
Xen settings: dom0 128 MByte, domU 512 MByte, kernel 2.6.16.1-xen
Test sequence:

  make -jN clean && make -jN && make -jN clean && time make -jN

Both test series ran on the same partition on the same disk. In the Xen
setup I exported the partition to the domU using

  disk = [ ..., 'phy:sda1,hda11,w' ]

in the config file.

The performance loss is greater than I expected. Can anybody confirm
the magnitude of the performance loss? Are these values normal for a
Xen setup?

I'm also interested in whether there are already best practices for
performance tuning.

Thanks,

Stephan
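For context, a complete domU config built around that disk line might
look roughly like the sketch below. Only the disk entry is taken from
the mail above; the kernel path, domain name, and root device are
illustrative assumptions:

  # /etc/xen/benchvm.cfg -- hypothetical example config
  kernel = "/boot/vmlinuz-2.6.16.1-xen"   # assumed path
  memory = 512                            # MByte, as in the test setup
  name   = "benchvm"                      # assumed name
  vcpus  = 2
  disk   = [ 'phy:sda1,hda11,w' ]         # partition exported from dom0
  root   = "/dev/hda11 ro"                # assumed root device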
Stephan Austermühle wrote:

>             native     domU    loss
> make -j4      553s     666s    -17%
> make -j2      565s     713s    -22%
> make        1,026s   1,199s    -14%

This is very interesting, Stephan. Could you run a test with dom0
compiling the kernel? Even with the disk exported as a physical disk to
the domU, the domU still sends its block read requests through the
hypervisor. So a run with dom0 compiling the kernel should provide
useful information.

By the way, good work on limiting the memory size to 512MB when running
native.

-- Randy
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of
> Stephan Austermühle
> Sent: 11 April 2006 15:59
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] Performance issues
>
> [...]
>
> The performance loss is greater than I expected. Can anybody confirm
> the magnitude of the performance loss? Are these values normal for a
> Xen setup?

I haven't got any benchmarks, but I don't think the results you're
seeing are completely unreasonable. The benchmark you've chosen is VERY
file-intensive, and any delay in delivering the file data to the
compiler (etc.) shows up at the bottom line. File reads, for example,
have to pass from domU to dom0, where the actual read of the hard disk
is performed, and then be passed back to domU. These extra steps,
whilst individually not huge, add to the total time.

I agree with Randy: to see how much overhead is Xen "just being there"
and how much is emulating the hard-disk interface in domU, you could
run the compile in dom0.

Also, whilst it's great that you ran with 512MB for native Linux, I'd
be surprised if the disk caching in dom0 is quite as effective as it
could be. Maybe you'd get better results (for this particular type of
benchmark) if you gave more memory to dom0 and took it away from domU
(even better, give some more to dom0 without taking it away from
domU!). Any free memory in Linux is used for disk caching, and most of
the time the compiler will not use up 512MB (not even four compiles at
the same time, unless you have HUGE C files with large functions).

-- Mats
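For reference, the memory split Mats describes is adjusted in two
places. A sketch, with the amounts purely illustrative (dom0_mem given
in KByte here, matching the grub example later in this thread):

  # grub: reserve more memory for dom0 at boot (256 MByte)
  kernel /boot/xen-3.gz dom0_mem=262144

  # domU config file: memory assigned to the guest, in MByte
  memory = 512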
Hi,

> ... glibc 2.3.6 (Debian Etch) ...

Did you compile this yourself? Or did you remove /lib/tls?

Regards,
Steffen
Hi Steffen!

Steffen Heil wrote:

> Did you compile this yourself? Or did you remove /lib/tls?

As far as I know there is no thread-local storage issue on 64-bit
systems. At least there is no /lib/*tls* on my system (Debian Etch).

Stephan
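(For readers on 32-bit systems, where /lib/tls does exist: the
workaround commonly recommended in the Xen documentation of this era
was to move the directory aside rather than delete it:

  # disable the TLS libraries that hurt paravirtualized performance
  mv /lib/tls /lib/tls.disabled

On 64-bit Debian Etch, as Stephan notes, the directory is absent and no
action is needed.)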
Hi Randy!

Randy Thelen wrote:

>>             native     domU    loss
>> make -j4      553s     666s    -17%
>> make -j2      565s     713s    -22%
>> make        1,026s   1,199s    -14%
>
> This is very interesting, Stephan. Could you run a test with dom0
> compiling the kernel?

I have set up a dom0 with 512 MByte RAM and got the following results:

            native     dom0    loss
make -j4      553s     778s    -29%
make -j2      565s   1,458s    -61% (!)
make        1,026s   1,434s    -28%

These results look quite strange to me -- the dom0 performance is worse
than the domU one. I ran the compile sequence with '-j2' three times;
the remarkably bad result is reproducible. It would be nice if somebody
could confirm my results.

Anybody have an explanation for that?

Stephan
Hi Mats!

Petersson, Mats wrote:

> Also, whilst it's great that you ran with 512MB for native Linux, I'd
> be surprised if the disk caching in dom0 is quite as effective as it
> could be. Maybe you'd get better results (for this particular type of
> benchmark) if you gave more memory to dom0 and took it away from domU
> (even better, give some more to dom0 without taking it away from
> domU!).

Really? The domU has its own cache, and the dom0 cache is an additional
one. But just to be sure, I will start another test series this
evening.

Stephan
Hi Stephan,

Are you using a UP or SMP kernel for the performance testing?

Thanks,
Yunfeng

> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Stephan
> Austermühle
> Sent: 12 April 2006 13:23
> To: xen-users@lists.xensource.com
> Subject: Re: [Xen-users] Performance issues
>
> [...]
Hi Yunfeng!

Zhao, Yunfeng wrote:

> Are you using a UP or SMP kernel for the performance testing?

I am using an SMP kernel.

Stephan
I tried make -j4 on both xenU and native using a UP kernel. The
performance of xenU is about >95% of native. But I never tried this
with an SMP kernel. How many LPs are in your machine?

Thanks,
Yunfeng

> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Stephan
> Austermühle
> Sent: 12 April 2006 13:36
> To: xen-users@lists.xensource.com
> Subject: Re: [Xen-users] Performance issues
>
> [...]
Stephan Austermühle wrote:

> I am using an SMP kernel.

Yunfeng started an interesting inquiry. Would it be difficult for you
to rebuild your kernel UP and run your tests again? These are very
peculiar results.

-- Randy
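For anyone who wants to reproduce this, a UP rebuild boils down to
turning off SMP support and rebuilding. A sketch for a 2.6-era kernel
tree (the usual targets; adjust to your own build workflow):

  # in the kernel source tree:
  make menuconfig    # disable "Symmetric multi-processing support"
                     # (CONFIG_SMP)
  make && make modules_install && make install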
Hi Yunfeng!

> How many LPs are in your machine?

What do you mean by 'LPs'?

Stephan
Hi Randy!

> Yunfeng started an interesting inquiry. Would it be difficult for
> you to rebuild your kernel UP and run your tests again? These are
> very peculiar results.

I will do that, but it will take me one to two days.

Stephan
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Stephan
> Austermühle
> Sent: 12 April 2006 14:44
> To: xen-users@lists.xensource.com
> Subject: RE: [Xen-users] Performance issues
>
> Hi Yunfeng!
>
>> How many LPs are in your machine?
>
> What do you mean by 'LPs'?

Logical CPUs :)

To make sure the test config is fair to both native and xenU, you
should make sure that native and xenU use the same number of LPs. For
example, if native Linux has 4 LPs, you should set "vcpus=4" in the
config file (see the fragment below).

Another factor that may impact the performance testing is service
processes. It would be better to disable most unnecessary services in
xenU, domain 0, and native Linux.

Thanks,
Yunfeng
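A hypothetical config fragment for a two-LP machine like Stephan's (the
memory value mirrors his test setup; everything else is illustrative):

  # domU config: match the number of logical CPUs seen natively
  vcpus  = 2
  memory = 512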
Stephan Austermühle wrote:

> I will do that, but it will take me one to two days.

Hmm. What kind of hardware do you have? What's your processor? What's
its frequency? What's your DRAM? What's its speed? That sort of thing.
With your processor, include information from /proc/cpuinfo.

-- Randy
On Tue, 11 Apr 2006 16:58:34 +0200
Stephan Austermühle <au@hcsd.de> wrote:

> The performance loss is greater than I expected. Can anybody confirm
> the magnitude of the performance loss? Are these values normal for a
> Xen setup?

Hello Stephan, hello everybody,

I also did some performance tests. I put them on a web page so you can
have a look and compare them with your results. The web page is still
under development and the results need a more accurate analysis, but it
can give some hints. Here is the URL:

http://www.bullopensource.org/xen/benchs.html

Any feedback will be appreciated.

Hope this helps,
Guillaume
Guillaume Thouvenin wrote:

> I also did some performance tests. I put them on a web page so you
> can have a look and compare them with your results. [...]
>
> http://www.bullopensource.org/xen/benchs.html

The compile tests are interesting. There is a variable being overlooked
by everyone: the Xen scheduler. The SEDF scheduler is not necessarily
the best for every workload. Would you mind re-running the first part
of the tests (or all of them, if you have the time) using the BVT
scheduler instead? To set CPU weights you can use the cpu_weight
variable in the domain config file (e.g. cpu_weight = "2");
libxc/xc_domain.c seems to still use that method for setting the
weight. Note that it does not use cpu_weight for SEDF. Also, in 3.0.2
you can run "xm sched-sedf domid" to get the current settings; there is
no need for the external sedf program.

I am running my own tests currently, but I don't create pretty
graphs/websites :)

Thank you,
Matt Ayres
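Pulling Matt's pointers together, a sketch of the BVT setup (the boot
option and the xm command are quoted from this thread; the weight value
is just an example):

  # grub: select the BVT scheduler at boot
  kernel /boot/xen-3.gz sched=bvt

  # domU config file: per-domain weight, honoured by BVT (not by SEDF)
  cpu_weight = "2"

  # under SEDF (3.0.2), query a domain's parameters instead:
  xm sched-sedf <domid>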
Hi Randy!

Randy Thelen wrote:

> Hmm. What kind of hardware do you have? What's your processor? What's
> its frequency? What's your DRAM? What's its speed? That sort of
> thing. With your processor, include information from /proc/cpuinfo.

AMD Athlon64 X2 (2.0 GHz, 3800+) on an Asus A8N-SLI Premium with
1 GByte DDR RAM (ECC). Information from /proc/cpuinfo:

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 43
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 3800+
stepping        : 1
cpu MHz         : 2010.332
cache size      : 512 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36
                  clflush mmx fxsr sse sse2 ht syscall nx mmxext
                  fxsr_opt lm 3dnowext 3dnow pni lahf_lm cmp_legacy
bogomips        : 5027.14
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor       : 1
(all remaining fields identical to processor 0)

Stephan
Hi Yunfeng!

Zhao, Yunfeng wrote:

> Logical CPUs :)

Ah, okay. I have an AMD Athlon 64 dual-core processor, meaning that I
have two logical processors -- no hyperthreading or anything like that.
I have passed both cores to dom0/domU, so the CPU count does not differ
between the setups.

Stephan
On Wed, 12 Apr 2006 11:18:18 -0400
Matt Ayres <matta@tektonic.net> wrote:

> The SEDF scheduler is not necessarily the best for every workload.
> Would you mind re-running the first part of the tests (or all of
> them, if you have the time) using the BVT scheduler instead?
> [...]

Thank you for the hint. I will run tests with the BVT scheduler and
compare the results with the sEDF scheduler. I will post the results
next week (with pretty graphs ;).

Cheers,
Guillaume
On Thu, Apr 13, 2006 at 08:57:21AM +0200, Guillaume Thouvenin wrote:

> Thank you for the hint. I will run tests with the BVT scheduler and
> compare the results with the sEDF scheduler. I will post the results
> next week (with pretty graphs ;).

So, did you already have time to do the tests? I'm interested in the
results.

-- Pasi
On Wed, 19 Apr 2006 20:03:35 +0300
Pasi Kärkkäinen <pasik@iki.fi> wrote:

> So, did you already have time to do the tests? I'm interested in the
> results.

I have some results, and it appears that the BVT scheduler is better
for this kind of work. I think Stephan will confirm that. I'm using
'make' without the '-j' option, and I'm using 'time' to measure how
long the compilation takes. The xenlinux kernel is 2.6.16-xen and I'm
testing xen-unstable from the Mercurial repository (updated two days
ago).

With the BVT scheduler:
  Dom0: 1945.61user 392.50system 42:30.23elapsed 91%CPU
  DomU: 1916.35user 382.95system 40:57.35elapsed 93%CPU

With the sEDF scheduler:
  Dom0: 1529.67user 347.68system 32:45.93elapsed 95%CPU
  DomU: 2300.90user 737.12system 1:08:22elapsed 74%CPU

I will try the BVT scheduler with other benchmarks.

Hope this helps,
Guillaume
Hi!

> I have some results, and it appears that the BVT scheduler is better
> for this kind of work. I think Stephan will confirm that.

What settings did you use? I booted Xen with the command line argument
'sched=bvt' but that did not change much.

Thanks,

Stephan
On Fri, 21 Apr 2006 09:17:00 +0200 (CEST)
Stephan Austermühle <au@hcsd.de> wrote:

> What settings did you use? I booted Xen with the command line
> argument 'sched=bvt' but that did not change much.

I use the same command line argument as you. Here is what I have in my
grub configuration file:

  root (hd0,0)
  kernel /boot/xen-3.gz dom0_mem=1048576 com1=9600,8n1 sched=bvt
  module /boot/vmlinuz-2.6.16-xen root=/dev/sda1 console=ttyS0
  module /boot/initrd.img-2.6.16-xen

When I test the sEDF scheduler I just remove the sched=bvt argument.

Guillaume
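One way to double-check which scheduler was actually selected at boot
(the exact message wording may differ between Xen versions) is to grep
the hypervisor's boot log from dom0:

  xm dmesg | grep -i scheduler
  # expect a line along the lines of: Using scheduler: ... (bvt)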
On Fri, Apr 21, 2006 at 09:59:35AM +0200, Guillaume Thouvenin wrote:

> I use the same command line argument as you. Here is what I have in
> my grub configuration file:
>
>   root (hd0,0)
>   kernel /boot/xen-3.gz dom0_mem=1048576 com1=9600,8n1 sched=bvt
>   module /boot/vmlinuz-2.6.16-xen root=/dev/sda1 console=ttyS0
>   module /boot/initrd.img-2.6.16-xen
>
> When I test the sEDF scheduler I just remove the sched=bvt argument.

How about the xm sched parameters? Do you use the defaults, or custom
settings?

-- Pasi
On Fri, 21 Apr 2006 13:19:15 +0300
Pasi Kärkkäinen <pasik@iki.fi> wrote:

> How about the xm sched parameters? Do you use the defaults, or custom
> settings?

I'm using the default parameters to test the difference between the
sEDF and BVT schedulers. I only customize the sEDF parameters for the
test that checks the influence of the sEDF scheduler. For that test I
use three domains, and each one has its own parameters.

<http://www.bullopensource.org/xen/benchs.html>

Cheers,
Guillaume
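For completeness, a sketch of how per-domain sEDF parameters were
handled with xm. The query form is from Matt's earlier mail; the setter
shown here uses the positional period/slice/latency/extratime/weight
form, whose exact syntax varied between 3.0.x releases, so treat the
values as placeholders:

  # query a domain's current sEDF parameters (3.0.2 and later)
  xm sched-sedf <domid>

  # set them: period slice latency extratime weight (values are
  # placeholders, not a recommendation)
  xm sched-sedf <domid> 20 5 0 0 0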