> We are done on SNV87 HVM DomU.
> Service ppd-cache-update now online.
> I'm gonna wait for completion on S10U5 HVM DomU

Can you please try to run the fork_100 test from the libMicro-0.4.0 benchmark on that SNV87 HVM DomU? The source of the benchmark is available for download here:

http://opensolaris.org/os/project/libmicro/
http://opensolaris.org/os/project/libmicro/files/libmicro-0.4.0.tar.gz

% wget http://opensolaris.org/os/project/libmicro/files/libmicro-0.4.0.tar.gz
% gunzip < libmicro-0.4.0.tar.gz | tar xf -
% cd libMicro-0.4.0
% make
% bin/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 > /tmp/fork.output
Running: fork_100 for 4.77470 seconds
% cat /tmp/fork.output
# bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100
prc thr usecs/call samples errors cnt/samp
fork_100 1 1 280.76964 100 0 100
#
# STATISTICS usecs/call (raw) usecs/call (outliers removed)
# min 237.29929 237.29929
# max 949.35702 424.32036
# mean 306.45514 298.44325
# median 281.65932 280.76964
# stddev 79.72480 44.62684
# standard error 7.89393 4.46268
# 99% confidence level 18.36128 10.38020
# skew 5.29799 1.00536
# kurtosis 38.85951 0.12177
# time correlation -1.47891 -1.07615
#
# elasped time 4.76910
# number of samples 100
# number of outliers 2
# getnsecs overhead 399
#
# DISTRIBUTION
# counts usecs/call means
# 1 234.00000 |** 237.29929
# 7 240.00000 |************** 242.43814
# 2 246.00000 |**** 249.26708
# 1 252.00000 |** 256.03853
# 3 258.00000 |****** 262.67013
# 16 264.00000 |******************************** 267.68925
# 10 270.00000 |******************** 273.27753
# 12 276.00000 |************************ 279.59795
# 7 282.00000 |************** 284.16094
# 5 288.00000 |********** 290.65786
# 2 294.00000 |**** 297.36328
# 3 300.00000 |****** 302.75731
# 1 306.00000 |** 309.20215
# 2 312.00000 |**** 315.67402
# 1 318.00000 |** 318.00967
# 1 324.00000 |** 326.74916
# 3 330.00000 |****** 331.74546
# 3 336.00000 |****** 340.82672
# 3 342.00000 |****** 342.74536
# 5 348.00000 |********** 350.05268
# 2 354.00000 |**** 357.77666
# 0 360.00000 | -
# 1 366.00000 |** 369.75543
# 0 372.00000 | -
# 1 378.00000 |** 383.21981
# 3 384.00000 |****** 386.35274
#
# 5 > 95% |********** 408.37425
#
# mean of 95% 292.65741
# 95th %ile 397.26077

On an "AMD Athlon(tm) 64 X2 Dual Core Processor 6400+" / metal / snv_89, the fork benchmark runs for 4.77 seconds (that's the output included above). Under xVM / Xen and on current Intel / AMD processors it should complete in 20-30 seconds. I've seen cases where the fork_100 benchmark took > 700 seconds when I ran it in a 32-bit PV domU.

--------------------------------------------------------------------

I do have a workaround for this xen / opensolaris performance problem[*]; it's a modified /usr/lib/libc/libc_hwcap3.so.1 shared C library (see the attachment). It was compiled from current opensolaris source (post snv_89), but seems to work OK under snv_81.

Under xVM you should find a lofs mount like this in "df -h" output:

# df -h
Filesystem size used avail capacity Mounted on
...
/usr/lib/libc/libc_hwcap3.so.1 6.1G 4.1G 1.9G 69% /lib/libc.so.1
...

Now try this in your domU:

# gunzip < libc_hwcap3.tar.gz | ( cd /tmp; tar xfv - )
x libc_hwcap3.so.1, 1646444 bytes, 3216 tape blocks
# mount -O -F lofs /tmp/libc_hwcap3.so.1 /lib/libc.so.1

Now run the fork_100 benchmark. It might run much faster. And after "umount /lib/libc.so.1", fork_100 should become slow again.

==
[*] http://www.opensolaris.org/jive/thread.jspa?threadID=58717&tstart=0
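As a quick sanity check (not part of the original instructions, and relying only on the paths already mentioned above), something like the following can be run in the domU to confirm which file is currently mounted on /lib/libc.so.1 and to pull the total run time out of the saved benchmark output:

# df -h /lib/libc.so.1                    # which file system / lofs mount backs libc right now
# mount -v | grep /lib/libc.so.1          # shows the lofs mount entry, if the workaround is active
# grep "elasped time" /tmp/fork.output    # libMicro's total run time line (the field really is spelled "elasped")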
Here it is. The workaround has been applied as well; it saves roughly 50 seconds of elapsed time. Both outputs (fork.output before, fork1.output after) follow:
-bash-3.2$ cat fork.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 28262.30682 101 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 19303.53724 24279.60363 # max 35682.69021 35682.69021 # mean 28843.85537 28938.31396 # median 28262.30682 28262.30682 # stddev 2480.86228 2301.52907 # standard error 245.64197 229.01070 # 99% confidence level 571.36322 532.67889 # skew 0.58140 1.31861 # kurtosis 2.43780 1.26418 # time correlation -11.92490 -17.95016 # # elasped time 306.42171 # number of samples 101 # number of outliers 1 # getnsecs overhead 238 # # DISTRIBUTION # counts usecs/call means # 1 24000.00000 |* 24279.60363 # 0 24400.00000 | - # 0 24800.00000 | - # 1 25200.00000 |* 25530.67435 # 1 25600.00000 |* 25864.52269 # 3 26000.00000 |***** 26215.47420 # 1 26400.00000 |* 26787.31801 # 6 26800.00000 |********** 26978.43441 # 10 27200.00000 |***************** 27416.17151 # 18 27600.00000 |******************************** 27814.69021 # 14 28000.00000 |************************ 28211.51786 # 14 28400.00000 |************************ 28605.96807 # 9 28800.00000 |**************** 28956.49312 # 0 29200.00000 | - # 2 29600.00000 |*** 29739.01901 # 2 30000.00000 |*** 30155.75809 # 1 30400.00000 |* 30736.94535 # 1 30800.00000 |* 30986.33089 # 2 31200.00000 |*** 31505.32102 # 4 31600.00000 |******* 31914.32562 # 0 32000.00000 | - # 1 32400.00000 |* 32642.88572 # 1 32800.00000 |* 32818.75299 # 1 33200.00000 |* 33557.53675 # 2 33600.00000 |*** 33778.43354 # # 6 > 95% |********** 35025.72746 # # mean of 95% 28553.84574 # 95th %ile 34268.24321 Workaround applied:- -bash-3.2$ cat fork1.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 23881.74167 86 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 22510.65833 22510.65833 # max 49583.97639 26053.60615 # mean 24982.76488 24009.66137 # median 24098.16940 23881.74167 # stddev 3179.35577 686.60767 # standard error 314.80313 74.03881 # 99% confidence level 732.23208 172.21427 # skew 5.00473 0.60015 # kurtosis 33.19530 0.17513 # time correlation -15.29000 -0.54277 # # elasped time 257.34507 # number of samples 86 # number of outliers 16 # getnsecs overhead 238 # # DISTRIBUTION # counts usecs/call means # 1 22500.00000 |**** 22510.65833 # 0 22590.00000 | - # 0 22680.00000 | - # 0 22770.00000 | - # 3 22860.00000 |************ 22890.55975 # 0 22950.00000 | - # 3 23040.00000 |************ 23074.55564 # 1 23130.00000 |**** 23178.31415 # 2 23220.00000 |******** 23264.49284 # 4 23310.00000 |**************** 23376.40680 # 4 23400.00000 |**************** 23453.45574 # 4 23490.00000 |**************** 23523.64573 # 7 23580.00000 |**************************** 23635.30316 # 8 23670.00000 |******************************** 23710.76531 # 5 23760.00000 |******************** 23788.50820 # 4 23850.00000 |**************** 23893.68383 # 3 23940.00000 |************ 23973.25051 # 4 24030.00000 |**************** 24078.65052 # 3 24120.00000 |************ 24141.85327 # 3 24210.00000 |************ 24246.67291 # 4 24300.00000 |**************** 24364.04465 # 5 24390.00000 |******************** 24444.50472 # 4 24480.00000 |**************** 24539.64735 # 1 24570.00000 |**** 24621.17191 # 2 24660.00000 |******** 24739.20456 # 1 24750.00000 |**** 24760.38002 # 0 24840.00000 | - # 0 24930.00000 | - # 3 25020.00000 |************ 25076.92854 # 1 25110.00000 |**** 25182.04928 # 1 25200.00000 
|**** 25276.34825 # # 5 > 95% |******************** 25573.33034 # # mean of 95% 23913.13860 # 95th %ile 25295.36178 This message posted from opensolaris.org
************************************** Test has been run also in SNV87 Dom0 ************************************** bash-3.2# cat /tmp/fork.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 567.92916 94 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 417.95968 554.68853 # max 847.82625 583.77334 # mean 571.75114 569.41545 # median 568.07266 567.92916 # stddev 36.39352 4.96461 # standard error 3.60349 0.51206 # 99% confidence level 8.38173 1.19105 # skew 3.78529 0.68021 # kurtosis 35.20308 1.22022 # time correlation -0.15396 0.05434 # # elasped time 7.18553 # number of samples 94 # number of outliers 8 # getnsecs overhead 777 # # DISTRIBUTION # counts usecs/call means # 1 554.00000 |** 554.68853 # 0 555.00000 | - # 1 556.00000 |** 556.86535 # 0 557.00000 | - # 0 558.00000 | - # 0 559.00000 | - # 0 560.00000 | - # 0 561.00000 | - # 0 562.00000 | - # 1 563.00000 |** 563.26697 # 3 564.00000 |****** 564.49375 # 10 565.00000 |******************** 565.67814 # 16 566.00000 |******************************** 566.53424 # 16 567.00000 |******************************** 567.54353 # 12 568.00000 |************************ 568.45371 # 7 569.00000 |************** 569.59792 # 2 570.00000 |**** 570.58145 # 5 571.00000 |********** 571.55168 # 0 572.00000 | - # 2 573.00000 |**** 573.75141 # 3 574.00000 |****** 574.62756 # 2 575.00000 |**** 575.40584 # 5 576.00000 |********** 576.48276 # 2 577.00000 |**** 577.43004 # 1 578.00000 |** 578.16298 # # 5 > 95% |********** 581.90778 # # mean of 95% 568.71363 # 95th %ile 579.78255 Workaround applied:- bash-3.2# cat /tmp/fork1.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 420.05761 88 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 308.28618 403.23468 # max 850.40485 438.22896 # mean 426.06209 420.33389 # median 420.18462 420.05761 # stddev 47.65546 5.97926 # standard error 4.71859 0.63739 # 99% confidence level 10.97545 1.48257 # skew 6.93092 0.04053 # kurtosis 59.34170 0.34644 # time correlation -0.15220 0.01308 # # elasped time 5.94076 # number of samples 88 # number of outliers 14 # getnsecs overhead 777 # # DISTRIBUTION # counts usecs/call means # 1 403.00000 |** 403.23468 # 0 404.00000 | - # 0 405.00000 | - # 0 406.00000 | - # 0 407.00000 | - # 1 408.00000 |** 408.95085 # 1 409.00000 |** 409.57337 # 4 410.00000 |*********** 410.37194 # 0 411.00000 | - # 0 412.00000 | - # 6 413.00000 |***************** 413.58412 # 5 414.00000 |************** 414.67721 # 5 415.00000 |************** 415.53331 # 2 416.00000 |***** 416.65549 # 1 417.00000 |** 417.67079 # 7 418.00000 |******************** 418.57772 # 11 419.00000 |******************************** 419.47031 # 7 420.00000 |******************** 420.45776 # 4 421.00000 |*********** 421.54137 # 3 422.00000 |******** 422.84403 # 2 423.00000 |***** 423.61329 # 9 424.00000 |************************** 424.54374 # 8 425.00000 |*********************** 425.51294 # 3 426.00000 |******** 426.16449 # 2 427.00000 |***** 427.42027 # 1 428.00000 |** 428.93438 # # 5 > 95% |************** 433.19711 # # mean of 95% 419.55900 # 95th %ile 430.7402 This message posted from opensolaris.org
Test has been run on SNV87 HVM DomU at Xen 3.2.1 CentOS 5.1 Dom0 (all 64-bit) bash-3.2# cat /tmp/fork.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 3878.74109 97 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 2240.39017 3136.30213 # max 6578.54134 4476.48874 # mean 3834.85126 3838.71486 # median 3878.74109 3878.74109 # stddev 462.30420 270.98553 # standard error 45.77494 27.51441 # 99% confidence level 106.47250 63.99852 # skew 1.23803 -0.39799 # kurtosis 12.47629 -0.12069 # time correlation 7.23705 4.13648 # # elasped time 40.26895 # number of samples 97 # number of outliers 5 # getnsecs overhead 240 # # DISTRIBUTION # counts usecs/call means # 2 3120.00000 |******* 3141.57870 # 0 3160.00000 | - # 0 3200.00000 | - # 1 3240.00000 |*** 3270.63127 # 1 3280.00000 |*** 3315.86074 # 2 3320.00000 |******* 3343.16514 # 2 3360.00000 |******* 3387.15981 # 0 3400.00000 | - # 4 3440.00000 |************** 3454.11189 # 3 3480.00000 |********** 3506.44740 # 1 3520.00000 |*** 3527.70048 # 2 3560.00000 |******* 3580.31895 # 2 3600.00000 |******* 3633.57409 # 5 3640.00000 |***************** 3654.51873 # 5 3680.00000 |***************** 3702.06410 # 2 3720.00000 |******* 3734.95825 # 5 3760.00000 |***************** 3775.00759 # 7 3800.00000 |************************ 3827.41224 # 5 3840.00000 |***************** 3860.53744 # 8 3880.00000 |**************************** 3906.14779 # 6 3920.00000 |********************* 3944.77994 # 7 3960.00000 |************************ 3972.67598 # 3 4000.00000 |********** 4027.35658 # 9 4040.00000 |******************************** 4056.47496 # 2 4080.00000 |******* 4093.34951 # 5 4120.00000 |***************** 4146.97154 # 1 4160.00000 |*** 4160.12349 # 2 4200.00000 |******* 4229.76086 # # 5 > 95% |***************** 4330.63690 # # mean of 95% 3811.97997 # 95th %ile 4240.23979 This message posted from opensolaris.org
********************************************************************************************* Workaround was applied to SNV87 HVM DomU at Xen 3.2.1 CentOS 5.1 Dom0 (all 64-bit) ********************************************************************************************* bash-3.2# cat /tmp/fork1.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 3490.36255 99 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 2285.29463 3075.92338 # max 3751.96312 3751.96312 # mean 3436.22895 3457.58739 # median 3489.48390 3490.36255 # stddev 188.94766 134.45966 # standard error 18.70861 13.51370 # 99% confidence level 43.51622 31.43288 # skew -2.67349 -0.85493 # kurtosis 12.08353 0.43654 # time correlation 2.62851 1.66626 # # elasped time 35.59440 # number of samples 99 # number of outliers 3 # getnsecs overhead 241 # # DISTRIBUTION # counts usecs/call means # 2 3060.00000 |**** 3076.64599 # 0 3080.00000 | - # 1 3100.00000 |** 3117.03130 # 1 3120.00000 |** 3137.85566 # 0 3140.00000 | - # 1 3160.00000 |** 3174.61826 # 2 3180.00000 |**** 3194.53001 # 0 3200.00000 | - # 0 3220.00000 | - # 2 3240.00000 |**** 3243.54728 # 1 3260.00000 |** 3262.96715 # 2 3280.00000 |**** 3285.89372 # 3 3300.00000 |******* 3306.78048 # 5 3320.00000 |************ 3331.94516 # 2 3340.00000 |**** 3349.90113 # 2 3360.00000 |**** 3370.91456 # 3 3380.00000 |******* 3393.05430 # 4 3400.00000 |********* 3408.38296 # 4 3420.00000 |********* 3426.25905 # 6 3440.00000 |************** 3447.37306 # 5 3460.00000 |************ 3469.49040 # 7 3480.00000 |***************** 3489.08614 # 4 3500.00000 |********* 3513.54859 # 13 3520.00000 |******************************** 3529.17266 # 5 3540.00000 |************ 3549.33307 # 9 3560.00000 |********************** 3566.20301 # 8 3580.00000 |******************* 3590.33065 # 2 3600.00000 |**** 3607.35373 # # 5 > 95% |************ 3659.88758 # # mean of 95% 3446.82674 # 95th %ile 3613.97117 This message posted from opensolaris.org
*************************************************************************** Test has been run on SNV87 PV DomU at Xen 3.2.1 CentOS 5.1 Dom0 (all 64-bit) *************************************************************************** bash-3.2# cat /tmp/fork.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 828.53810 102 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 509.11894 509.11894 # max 1050.92246 1050.92246 # mean 754.51042 754.51042 # median 828.53810 828.53810 # stddev 161.75862 161.75862 # standard error 16.01649 16.01649 # 99% confidence level 37.25436 37.25436 # skew -0.04221 -0.04221 # kurtosis -1.44369 -1.44369 # time correlation 0.30412 0.30412 # # elasped time 10.34040 # number of samples 102 # number of outliers 0 # getnsecs overhead 731 # # DISTRIBUTION # counts usecs/call means # 3 500.00000 |********* 509.71862 # 7 520.00000 |********************** 529.73117 # 6 540.00000 |******************* 552.02722 # 8 560.00000 |************************* 572.87289 # 4 580.00000 |************ 589.62045 # 6 600.00000 |******************* 607.76819 # 2 620.00000 |****** 628.65081 # 3 640.00000 |********* 646.20774 # 1 660.00000 |*** 676.74508 # 2 680.00000 |****** 682.87549 # 4 700.00000 |************ 711.65838 # 0 720.00000 | - # 2 740.00000 |****** 754.75348 # 1 760.00000 |*** 762.48319 # 1 780.00000 |*** 793.82743 # 1 800.00000 |*** 814.04187 # 10 820.00000 |******************************** 833.28151 # 8 840.00000 |************************* 850.07822 # 9 860.00000 |**************************** 869.98583 # 5 880.00000 |**************** 888.15260 # 2 900.00000 |****** 917.26301 # 2 920.00000 |****** 929.18110 # 5 940.00000 |**************** 950.43499 # 4 960.00000 |************ 974.56451 # # 6 > 95% |******************* 1018.37332 # # mean of 95% 738.01899 # 95th %ile 997.25050 This message posted from opensolaris.org
************************************************************************** Test has been run on SNV87 PV DomU at SNV87 Dom0 (all 64-bit) ************************************************************************** bash-3.2# bin/fork -E -C 200 -L -S -W -N fork_100 \> -B 100 -C 100 > /tmp/fork.outputRunning: fork_100 for 12.53613 seconds bash-3.2# cat /tmp/fork.output # bin/../bin-i86pc/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 prc thr usecs/call samples errors cnt/samp fork_100 1 1 960.02883 102 0 100 # # STATISTICS usecs/call (raw) usecs/call (outliers removed) # min 606.30366 606.30366 # max 1224.05959 1224.05959 # mean 934.83237 934.83237 # median 960.02883 960.02883 # stddev 213.32730 213.32730 # standard error 21.12255 21.12255 # 99% confidence level 49.13105 49.13105 # skew -0.25003 -0.25003 # kurtosis -1.48333 -1.48333 # time correlation 0.05421 0.05421 # # elasped time 12.51547 # number of samples 102 # number of outliers 0 # getnsecs overhead 784 # # DISTRIBUTION # counts usecs/call means # 6 600.00000 |******************* 611.79889 # 3 620.00000 |********* 633.45811 # 9 640.00000 |**************************** 648.17319 # 8 660.00000 |************************* 668.67504 # 3 680.00000 |********* 691.71916 # 1 700.00000 |*** 718.37896 # 1 720.00000 |*** 731.30506 # 0 740.00000 | - # 2 760.00000 |****** 767.10363 # 0 780.00000 | - # 0 800.00000 | - # 0 820.00000 | - # 0 840.00000 | - # 3 860.00000 |********* 867.28099 # 3 880.00000 |********* 887.20297 # 3 900.00000 |********* 908.40781 # 4 920.00000 |************ 932.05642 # 5 940.00000 |**************** 946.79857 # 5 960.00000 |**************** 966.56230 # 3 980.00000 |********* 990.79967 # 1 1000.00000 |*** 1002.96987 # 1 1020.00000 |*** 1032.89839 # 0 1040.00000 | - # 3 1060.00000 |********* 1076.75933 # 3 1080.00000 |********* 1085.55975 # 6 1100.00000 |******************* 1108.99967 # 1 1120.00000 |*** 1137.50318 # 7 1140.00000 |********************** 1150.04290 # 10 1160.00000 |******************************** 1168.76820 # 5 1180.00000 |**************** 1189.51249 # # 6 > 95% |******************* 1213.95841 # # mean of 95% 917.38699 # 95th %ile 1203.14453 This message posted from opensolaris.org
Based on the testing done so far, I have to conclude that SNV87 Dom0 won't provide good performance for Solaris HVM guests. For an SNV87 HVM (64-bit) guest I got an elapsed time of ~40 sec at Xen 3.2 CentOS 5.1 Dom0 vs ~300 sec at SNV87 Dom0.

This message posted from opensolaris.org
Hmm, so we have:

> Solaris HVM DomU at SNV87 Dom0 (all 64-bit)
> # elasped time 306.42171

Hmm, slow.

> Test has been run on SNV87 HVM DomU at Xen 3.2.1 CentOS 5.1 Dom0 (all 64-bit)
> # elasped time 40.26895

Ok.

> Workaround was applied to SNV87 HVM DomU at Xen 3.2.1 CentOS 5.1 Dom0 (all 64-bit)
> # elasped time 35.59440

The workaround improves run time, but only by ~ 10%.

> Test has been run on SNV87 PV DomU at Xen 3.2.1 CentOS 5.1 Dom0 (all 64-bit)
> # elasped time 10.34040

Yep, for an HVM domain the hypervisor must fully emulate the MMU using shadow page tables; with a PV domain this isn't necessary and is much faster.

> Test has been run on SNV87 PV DomU at SNV87 Dom0 (all 64-bit)
> # elasped time 12.51547

Hmm, this test was run on OpenSolaris' xVM / Xen 3.1.2 hypervisor...

I don't expect that the SNV87 Dom0 kernel vs CentOS 5.1 Dom0 kernel makes a difference, so I think this might be an indication that Xen 3.2.1 has improved PV MMU support over Xen 3.1.2... And for a Solaris HVM domU, the HVM MMU performance increase appears to be even more dramatic between Xen 3.1.2 and Xen 3.2.1...

This message posted from opensolaris.org
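As a side note, if there is any doubt about whether a given guest was really running HVM or PV when comparing these numbers, its type can be read back from the domain configuration on the dom0; the domain name below is only a placeholder:

# xm list -l snv87-guest | grep -i hvm

An HVM guest should show an (image (hvm ...)) section in the SXP output, while a PV guest should not.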
> I don't expect that the SNV87 Dom0 kernel vs CentOS
> 5.1 Dom0 kernel

I would say: SNV87 Dom0 kernel vs the 2.6.18.8-xen kernel compiled from source on CentOS 5.1.

When I load Xen 3.2.1 on a xen-disabled CentOS 5.1 instance, the grub entry is:

title Xen-3.2 x86_64 (2.6.18.8-xen)
root (hd1,4)
kernel /xen-3.2.gz
module /vmlinuz-2.6.18.8-xen ro root=/dev/VolGroup01/LogVol00 rhgb quiet
module /initrd-2.6.18.8-xen.img

and the 3.2 xend service is already activated:

# chkconfig --add xend
# chkconfig xend on

All Xen 3.2.1 binaries are already compiled from source and installed. When I do the same on F8 it will work as well, but the Xen 3.2 build would be done by a different version of gcc and glibc (at least I guess so).

> makes a difference, so I think this might be an
> indication that Xen 3.2.1 has
> improved PV MMU support over Xen 3.1.2...
>
> And for a Solaris HVM domU, the HVM MMU performance
> increase appears
> to be even more dramatic between Xen 3.1.2 and Xen
> 3.2.1...

This message posted from opensolaris.org
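When comparing numbers from the two dom0s, it can also help to confirm which hypervisor version each one is actually running; xm should report this on both the CentOS and the OpenSolaris xVM dom0 (the exact set of fields may vary slightly between releases):

# xm info | grep xen_

This should print xen_major / xen_minor / xen_extra among other fields, e.g. 3 / 2 / .1 for a Xen 3.2.1 build.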
**************************************************************************
Test has also been run on S10U5 HVM DomU at SNV87 Dom0 (all 64-bit)
**************************************************************************

The libMicro binaries were uploaded from Dom0 to the DomU in the same way as for the PV DomUs, to avoid installing the Sun Studio C compiler on the DomU.

bash-3.00# uname -a
SunOS dhcppc2 5.10 Generic_127128-11 i86pc i386 i86pc
bash-3.00# bin/fork -E -C 200 -L -S -W -N fork_100 -B 100 -C 100 > /tmp/fork1.output
Running: fork_100 for 220.89201 seconds

This message posted from opensolaris.org
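For completeness, a rough sketch of the kind of copy step meant by "uploaded from Dom0 to DomU"; the domU host name and target directory are only placeholders, and any transport (scp, an NFS share, etc.) would work just as well:

# cd libMicro-0.4.0
# tar cf - bin bin-i86pc | ssh root@domu 'mkdir -p /var/tmp/libMicro-0.4.0 && cd /var/tmp/libMicro-0.4.0 && tar xf -'

On the DomU the benchmark can then be started from /var/tmp/libMicro-0.4.0 exactly as shown earlier.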
> Due to testing been done i have to go to conclusion > that SNV87 Dom0 won''t provide a good performance for > Solaris HVM guests. > For SNV87 HVM (64-bit) guest i''ve got elasped time:- > 40 sec at Xen 3.2 CentOS 5.1 Dom0 vs 300 sec at SNV87 Dom0.Hmm, on a 32-bit Xen 3.2 Gentoo Dom0, a 32-bit HVM SNV85 DomU needs ~ 60 seconds for the fork_100 test. The HVM SNV85 DomU kernel is using a virtual 3-level MMU with the PAE extension. Forcing the HVM SNV85 DomU kernel not to use the PAE MMU extension results in the fork_100 benchmark running twice as fast; now it runs in ~ 30 seconds. GRUB boot entry to force using 2-level MMU page tables in 32-bit mode: title Solaris Express Community Edition snv_85 X86 (32-bit, noPAE) kernel$ /platform/i86pc/kernel/unix -B disablePAE=true module$ /platform/i86pc/boot_archive The same test on a xVM 3.1.2 64-bit SNV_89 Dom0 crashed quite spectacular: First the 32-bit non-PAE HVM domU paniced, and after the HVM domU had written a crash dump and was destroyed the dom0 kernel paniced, too. domU panic: ========== # mdb -k 0 Loading modules: [ unix genunix specfs cpu.generic cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci ufs ip hook neti sctp arp usba uhci s1394 qlc fctl nca lofs ]> ::statusdebugging crash dump vmcore.0 (32-bit) from xen-hvm operating system: 5.11 snv_85 (i86pc) panic message: BAD TRAP: type=e (#pf Page fault) rp=d5e635dc addr=113 occurred in module "genun ix" due to a NULL pointer dereference dump content: kernel pages only> $ci_ddi_intr_get_supported_types+0xe(ffffffff) ddi_intr_alloc+0x83(ffffffff, d5f0b0f0, 1, 0, 1, d5e63768) ec_init+0xa6(ffffffff) xen_pv_init+0x1de() ::msgbuf ... MESSAGE SunOS Release 5.11 Version snv_85 32-bit Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. features: 10e7fff<cpuid,cx16,sse3,nx,sse2,sse,sep,pat,cx8,pae,mca,mmx,cmov,de,pg e,mtrr,msr,tsc,lgpg> mem = 523876K (0x1ff99000) root nexus = i86pc pseudo0 at root pseudo0 is /pseudo scsi_vhci0 at root scsi_vhci0 is /scsi_vhci isa0 at root pcplusmp: vector 0x9 ioapic 0x1 intin 0x9 is bound to cpu 0 pseudo-device: acpippm0 acpippm0 is /pseudo/acpippm@0 pseudo-device: ppm0 ppm0 is /pseudo/ppm@0 pci0 at root: space 0 offset 0 pci0 is /pci@0,0 pcplusmp: ide (ata) instance 0 vector 0xe ioapic 0x1 intin 0xe is bound to cpu 0 IDE device at targ 0, lun 0 lastlun 0x0 model QEMU HARDDISK ATA/ATAPI-7 supported, majver 0xf0 minver 0x16 ata_set_feature: (0x66,0x0) failed ata_set_feature: (0x66,0x0) failed PCI-device: ide@0, ata0 ata0 is /pci@0,0/pci-ide@1,1/ide@0 UltraDMA mode 5 selected Disk0: <Vendor ''Gen-ATA '' Product ''QEMU HARDDISK ''> cmdk0 at ata0 target 0 lun 0 cmdk0 is /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0 SMBIOS v2.4 loaded (285 bytes) /cpus (cpunex0) online pseudo-device: dld0 dld0 is /pseudo/dld@0 pcplusmp: i8042 (i8042) instance 0 vector 0x1 ioapic 0x1 intin 0x1 is bound to c pu 0 pcplusmp: i8042 (i8042) instance #0 vector 0xc ioapic 0x1 intin 0xc is bound to cpu 0 8042 device: keyboard@0, kb8042 # 0 kb80420 is /isa/i8042@1,60/keyboard@0 8042 device: mouse@1, mouse8042 # 0 mouse80420 is /isa/i8042@1,60/mouse@1 NOTICE: Kernel debugger present: disabling console power management. 
NOTICE: MPO disabled because memory is interleaved cpu0: x86 (AuthenticAMD 40F33 family 15 model 67 step 3 clock 3207 MHz) cpu0: AMD Athlon(tm) 64 X2 Dual Core Processor 6400+ workaround applied for cpu erratum #122 workaround applied for cpu issue #6336786 pcplusmp: pci10ec,8139 (rtls) instance 0 vector 0x20 ioapic 0x1 intin 0x20 is bo und to cpu 0 NOTICE: rtls0 registered NOTICE: rtls0 link up, 100 Mbps, half duplex pseudo-device: devinfo0 devinfo0 is /pseudo/devinfo@0 dump on /dev/dsk/c0d0s1 size 512 MB panic[cpu0]/thread=d6a1f800: BAD TRAP: type=e (#pf Page fault) rp=d5e635dc addr=113 occurred in module "genun ix" due to a NULL pointer dereference devices-local: #pf Page fault Bad kernel fault at addr=0x113 pid=74, pc=0xfe8d80ae, sp=0x1, eflags=0x10286 cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6d8<xmme,fxsr,pge,mce,pse,de> cr2: 113 cr3: 1c487000 gs: 1b0 fs: 0 es: 160 ds: d5e60160 edi: ffffffff esi: 1 ebp: d5e63698 esp: d5e63614 ebx: ffffffff edx: d3c243b0 ecx: 0 eax: 0 trp: e err: 0 eip: fe8d80ae cs: 158 efl: 10286 usp: 1 ss: 1 d5e6352c unix:die+98 (e, d5e635dc, 113, 0) d5e635c8 unix:trap+12a5 (d5e635dc, 113, 0) d5e635dc unix:cmntrap+7c (1b0, 0, 160, d5e601) d5e63698 genunix:i_ddi_intr_get_supported_types+e (ffffffff) d5e63724 genunix:ddi_intr_alloc+83 (ffffffff, d5f0b0f0,) d5e6376c xpv:ec_init+a6 (ffffffff) panic[cpu0]/thread=d6a1f800: BAD TRAP: type=e (#pf Page fault) rp=fec3ad20 addr=0 occurred in module "genunix " due to a NULL pointer dereference syncing file systems... [1] 1 done (not all i/o completed) dumping to /dev/dsk/c0d0s1, offset 107413504, content: kernel>Followed by: dom0 panic: ========== # mdb -k 1 Loading modules: [ unix genunix specfs dtrace xpv_uppc xpv_psm scsi_vhci ufs sd ip hook neti sctp arp usba s1394 nca fctl zfs lofs audiosup md random crypto smbsrv nfs fcp fcip logindmux ptm nsctl sdbc sv ii sppp nsmb rdc ipc ]> ::statusdebugging crash dump vmcore.1 (64-bit) from tiger2 operating system: 5.11 snv_90_jk (i86pc) panic message: BAD TRAP: type=d (#gp General protection) rp=ffffff0010369b70 addr=ffffff02e0a0b 940 dump content: kernel pages only> $canon_decref+0x21(68732f6e696228) anon_free+0x81(ffffff02e0a0b940, 0, 1fe000) segvn_free+0x177(ffffff02df7e6440) seg_free+0x34(ffffff02df7e6440) segvn_unmap+0xa9f(ffffff02df7e6440, 7ffffe003000, 1fe000) as_unmap+0x10a(ffffff02df9620e0, 7ffffe003000, 1fe000) munmap+0x87(7ffffe003000, 1fe000) sys_syscall+0x1c9()> ::msgbuf... 
/xpvd/xdb@1,768 (xdb0) online /xpvd/xdb@1,5632 (xdb1) online /xpvd/xnb@1,0 (xnbe0) online NOTICE: vnic1005 registered NOTICE: vnic1005 unregistered NOTICE: vnic1007 registered /xpvd/xnb@1,0 (xnbe0) offline /xpvd/xdb@1,768 (xdb0) offline /xpvd/xdb@1,5632 (xdb1) offline NOTICE: vnic1007 unregistered /xpvd/xdb@2,768 (xdb0) online /xpvd/xdb@2,5632 (xdb1) online /xpvd/xnb@2,0 (xnbe0) online panic[cpu0]/thread=ffffff02de1cbe00: BAD TRAP: type=d (#gp General protection) rp=ffffff0010369b70 addr=ffffff02e0a0b 940 python: #gp General protection addr=0xffffff02e0a0b940 pid=1042, pc=0xfffffffffbaae2e1, sp=0xffffff0010369c60, eflags=0x10216 cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 660<xmme,fxsr,mce,pae> cr2: 7ffffee43258 rdi: 68732f6e696228 rsi: 0 rdx: 0 rcx: 0 r8: 7ffff r9: ffffff02c7465000 rax: 0 rbx: 68732f6e696228 rbp: ffffff0010369c90 r10: 1fe r11: 200 r12: 68732f6e696228 r13: 0 r14: ffffff0010369cd0 r15: ffffff02e0a0b940 fsb: 7ffffee44200 gsb: fffffffffbc5c070 ds: 4b es: 4b fs: 0 gs: 0 trp: d err: 0 rip: fffffffffbaae2e1 cs: e030 rfl: 10216 rsp: ffffff0010369c60 ss: e02b ffffff0010369a50 unix:die+ea () ffffff0010369b60 unix:trap+3e4 () ffffff0010369b70 unix:_cmntrap+12f () ffffff0010369c90 genunix:anon_decref+21 () ffffff0010369cf0 genunix:anon_free+81 () ffffff0010369d40 genunix:segvn_free+177 () ffffff0010369d70 genunix:seg_free+34 () ffffff0010369e20 genunix:segvn_unmap+a9f () ffffff0010369ec0 genunix:as_unmap+10a () ffffff0010369f00 genunix:munmap+87 () ffffff0010369f10 unix:brand_sys_syscall+261 () syncing file systems... 4 done dumping to /dev/dsk/c9t0d0s1, offset 860356608, content: kernel This message posted from opensolaris.org
> The same test on a xVM 3.1.2 64-bit SNV_89 Dom0 crashed quite
> spectacularly: First the 32-bit non-PAE HVM domU panicked, and after
> the HVM domU had written a crash dump and was destroyed, the dom0
> kernel panicked, too.

A workaround for the panics is to disable the "xpv" driver. With this grub boot entry I'm able to boot a 32-bit SNV85 HVM domU with a 2-level MMU on a 64-bit xVM 3.1.2 SNV89 dom0:

title Solaris Express Community Edition snv_85 X86 (32-bit, noPAE)
kernel$ /platform/i86pc/kernel/unix -B disablePAE=true,disable-xpv=true
module$ /platform/i86pc/boot_archive

On this amd64 box, the fork_100 benchmark needs 120 seconds in 32-bit HVM PAE mode, and 47 seconds in 32-bit HVM no-PAE mode.

This message posted from opensolaris.org
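A simple way to verify from inside the domU that the xpv driver really stayed unloaded after booting with disable-xpv=true (and that it is back after a normal boot) is to check the loaded-module list; this is only a sanity check, not part of the workaround itself:

# modinfo | grep -i xpv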
Your dom0 panic resembles:

6325383 panic: anon_decref dereferenced bad pointer

which was fixed in b68.

-surya
> Your dom0 panic resembles:
> 6325383 panic: anon_decref dereferenced bad pointer
> which was fixed in b68.

Hmm, but I've installed b85, and bfu'ed that using current onnv-gate mercurial bits (post b89).

The domU panic might be:

6670693 xpv driver hangs in 32-bit HVM domU on a 64-bit dom0

There had been sporadic hangs when trying to boot the b85 32-bit HVM domU (PAE enabled) on the 64-bit b89 dom0. With PAE disabled I got the domU panic. By booting the 32-bit b85 HVM domU with -B disable-xpv=true I was able to avoid the hang / panic.

Yesterday I bfu'ed the domU from b85 to the post-b89 bits; after the bfu upgrade it boots 32-bit HVM without hanging / panicking (and the xpv driver is active).

This message posted from opensolaris.org