Hi,
I just made a simple performance test with the iozone tool on Xen 3.2.1:

time iozone -t 20 -r 2m -s 100m -m -R

Results:

Dom0 - lvm(XFS): real 1m35.670s

DomU - real 2m46.963s
lvm(xfs)
cpus = "4-7"
vcpus = 4

DomU - real 1m53.772s
lvm(xfs)
cpus = "8"
vcpus = 1

It seems DomU is much slower when using more than one CPU. Am I right?

--
Thanks
Janul
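For context, the two DomU cases above differ only in the cpus/vcpus lines of the guest configuration. A minimal sketch of what such a config might look like for the 4-vcpu case — the file name, memory, and disk lines are assumptions; only the cpus/vcpus values come from the post:

# /etc/xen/domu-test.cfg -- hypothetical config for the 4-vcpu DomU
name   = "domu-test"
memory = 1024                 # assumed; a later post mentions 1GB in the DomU
vcpus  = 4                    # virtual CPUs presented to the guest
cpus   = "4-7"                # restrict those vcpus to physical CPUs 4-7
disk   = ["phy:/dev/vg0/domu,xvda,w"]   # LVM-backed, as in the post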
Janusz Ulanowski wrote:
> It seems DomU is much slower when using more than one CPU. Am I right?

I have heard that before, but I thought it related more to efficiency. I'm guessing it also depends on the type of workload. What happens when you try CPU-bound calculations with SMP and without SMP? Maybe run the benchmark for dnetc?

--
Nick Anderson <nick@anders0n.net>
http://www.cmdln.org
http://www.anders0n.net
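A quick way to compare CPU-bound behaviour with and without SMP, without installing dnetc, is to time a batch of parallel busy loops — a rough sketch (the loop count is arbitrary):

#!/bin/sh
# Run one CPU-bound worker per vcpu and time the whole batch.
# Compare wall-clock time between the 1-vcpu and 4-vcpu DomU.
NPROC=4   # set to the number of vcpus in the guest
time (
    for i in $(seq 1 $NPROC); do
        # pure integer arithmetic, no I/O
        sh -c 'i=0; while [ $i -lt 5000000 ]; do i=$((i+1)); done' &
    done
    wait
)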
Hi,
Thanks for your answer. I've done as you said. Unfortunately, no difference. It gets worse when I increase the file size or the number of threads.

For:

time iozone -t 40 -r 2m -s 100m -m -R

DomU with four CPUs (each from a different core): real 6m19.300s
DomU with one CPU: real 3m32.651s
Dom0: real 2m37.418s

--
# xm info
host                   : xxxx
release                : 2.6.18.8-xen
version                : #2 SMP Mon May 12 13:19:28 IST 2008
machine                : x86_64
nr_cpus                : 16
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 2393
hw_caps                : bfebfbff:20100800:00000000:00000140:0004e3bd:00000000:00000001
total_memory           : 65530
free_memory            : 12
node_to_cpu            : node0:0-15
xen_major              : 3
xen_minor              : 2
xen_extra              : .1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Fri May 02 14:23:07 2008 +0100 16888:9733f681d5c5
cc_compiler            : gcc version 4.1.3 20080114 (prerelease) (Debian 4.1.2-19)
cc_compile_by          : root
cc_compile_domain      : lan
cc_compile_date        : Fri May  9 14:41:37 IST 2008
xend_config_format     : 4

Thanks in advance.

2008/5/14 Apparao, Padmashree K <padmashree.k.apparao@intel.com>:
> The SMP case may be suffering because of context switching.
> Can you run the SMP case with the vcpus affinitized to specific cores?
>
> Vcpu1 on core 1
> Vcpu2 on core 2
> Vcpu3 on core 3, etc.
>
> Thanks
> -Padma
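Padma's affinitizing suggestion maps onto xm vcpu-pin like this for the 4-vcpu guest from the first post — a sketch, with the domain name as a placeholder:

# Pin each vcpu of the guest to its own physical core.
# "domu-test" stands in for the actual domain name or ID.
xm vcpu-pin domu-test 0 4
xm vcpu-pin domu-test 1 5
xm vcpu-pin domu-test 2 6
xm vcpu-pin domu-test 3 7

# Verify the pinning took effect:
xm vcpu-list domu-test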
In fact, I tested with 1GB RAM in the DomU. When I increased it to 9GB the result looks much better:

real 2m59.121s

Sorry for the confusion... this is a 64-bit architecture, so maybe that's why.
Janusz Ulanowski wrote:
> DomU with four CPUs (each from a different core): real 6m19.300s
> DomU with one CPU: real 3m32.651s
> Dom0: real 2m37.418s

Did you try a process that was less I/O-intensive? It would be worthwhile to see whether the bottleneck is CPU or I/O.

--
Nick Anderson <nick@anders0n.net>
http://www.cmdln.org
http://www.anders0n.net
2008/5/15 Nick Anderson <nick@anders0n.net>:
> Did you try a process that was less I/O-intensive? It would be worthwhile
> to see whether the bottleneck is CPU or I/O.

What's the best way? vmstat, iostat, or something else?
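Either tool works for a first pass; what matters is which columns you watch. A sketch of one way to look at it while the benchmark runs, using standard procps/sysstat invocations:

# Sample system-wide stats every 2 seconds while iozone runs.
# In vmstat, watch the cpu columns: high "wa" (iowait) points at I/O,
# high "us"/"sy" with low "wa" points at CPU.
vmstat 2

# iostat (from the sysstat package) shows per-device utilization;
# a device pegged near 100 %util means the disk is the bottleneck.
iostat -x 2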
Janusz Ulanowski wrote:
> What's the best way? vmstat, iostat, or something else?

Actually, that's a good question. What is a good way to benchmark virtual machines, especially with regard to SMP? There are unixbench and nbench, but to my knowledge neither of those is really well suited to benchmarking SMP systems. I was thinking of running distributed.net with one thread per CPU. I might run some unixbench tests just to see what kind of results I get.

Does anyone else have a good benchmarking procedure that encompasses CPU, memory, and disk I/O?

--
Nick Anderson <nick@anders0n.net>
http://www.cmdln.org
http://www.anders0n.net
Nick Anderson wrote:
> really well suited to benchmarking SMP systems. I was thinking of running
> distributed.net with one thread per CPU. I might run some unixbench tests
> just to see what kind of results I get.

Disclaimer: the best way to actually test a system is to test it under your own workload. Check out "Xen and the Art of Repeated Research"
www.clarkson.edu/class/cs644/xen/files/repeatedxen-usenix04.pdf
and "Quantifying the Performance Isolation Properties of Virtualization Systems"
http://www.usenix.org/events/expcs07/papers/6-matthews.pdf

But it's kind of fun to see how different workloads are handled given different domU resources. So I grabbed unixbench 4.1 and ran a few tests. I only ran the shell tests, because as far as I can tell that was the only one with any sort of concurrency. I'm kind of surprised that dom0 did so poorly compared to increasing the vcpu count inside the domU. After each test I rebooted the domU with the next vcpu configuration.

dom0 (Debian etch, Xen 3.0.3 from the etch repository):

TEST                          BASELINE  RESULT   INDEX
Shell Scripts (8 concurrent)       6.0   249.6   416.0
FINAL SCORE                                      416.0

domU, vcpu = 1, mem = 2048:

TEST                          BASELINE  RESULT   INDEX
Shell Scripts (8 concurrent)       6.0   273.3   455.5
FINAL SCORE                                      455.5

domU, vcpu = 2, mem = 2048:

TEST                          BASELINE  RESULT   INDEX
Shell Scripts (8 concurrent)       6.0   491.6   819.3
FINAL SCORE                                      819.3

domU, vcpu = 4, mem = 2048:

TEST                          BASELINE  RESULT   INDEX
Shell Scripts (8 concurrent)       6.0   657.5  1095.8
FINAL SCORE                                     1095.8

domU, vcpu = 8, mem = 2048:

TEST                          BASELINE  RESULT   INDEX
Shell Scripts (8 concurrent)       6.0   667.6  1112.7
FINAL SCORE                                     1112.7

One small modification to unixbench: in pgms/tst.sh I changed

od sort.$$ | sort -n -1 > od.$$

to

od sort.$$ | sort -n -k 1 > od.$$

since -1 is deprecated. I also had to manually specify the system type as Linux in Run.

--
Nick Anderson <nick@anders0n.net>
http://www.cmdln.org
http://www.anders0n.net
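If you want to repeat this kind of sweep without a full reboot per step, xm vcpu-set can change the vcpu count on a running guest — a sketch; the domain name, the ssh access, and the exact unixbench invocation are assumptions, and hot-plugging assumes the guest kernel supports CPU hotplug (a fresh boot per configuration, as above, avoids that dependency):

#!/bin/sh
# Sweep vcpu counts on a running guest, running the benchmark at each step.
DOMU=domu-test          # placeholder domain name
for n in 1 2 4 8; do
    xm vcpu-set $DOMU $n
    sleep 5             # give the guest a moment to online the new vcpus
    echo "=== $n vcpus ==="
    ssh $DOMU "cd UnixBench && ./Run shell8"   # assumed unixbench invocation
done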
"Janusz Ulanowski" <janul666@gmail.com> writes:> I just made simple peformance test with iozone tool on xen 3.2.1....> It seems DomU is much slower if using more than one cpu.I would guess that maybe when you are running a single cpu on the DomU, you are using different pcpu for dom0 and domu. when you are using dual cpu on the DomU, they likely step on oneanother. It''s easy enough to move such things around with xm vcpu-pin and xm vcpu-set If you only have 2 CPUs, (or if your DomU and Dom0 are using the same physical cpu, check with xm vcpu-list) you can have trouble because every I/O operation in a DomU requires an I/O operation in the Dom0 A simple thing that often helps I/O performance is to give the Dom0 either a dedicated CPU (that no other Domain uses) or failing that, to give the Dom0 a large ''weight'' - try xm sched-credit -d 0 -w 4096 or something (-w can be anything larger than what you set the DomUs to. Personally, I set it on the dom0 to equal the total megabytes of ram in the box, and each DomU to the megabytes of ram it is allocated, so each DomU has proportional fair share, and the Dom0 has greatest share of all so that I/O goes through unmolested.) my experience has been that even if you don''t mess with pining vcpus, performance is ok so long as the weight of Dom0 is sufficiently high. _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Luke S Crawford wrote:
> "Janusz Ulanowski" <janul666@gmail.com> writes:
>> I just made a simple performance test with the iozone tool on Xen 3.2.1.
> ...
>> It seems DomU is much slower when using more than one CPU.

It's a dual-socket quad-core AMD box, so I have 8 cores. cpus for dom0 is set to 0, and according to xend-config.sxp that gives dom0 access to all CPUs. I haven't adjusted any weighting, so I'm not sure what would be stepping on it. The only thing running during the test was that one domU, and the domU was idle when running the test on dom0.

--
Nick Anderson <nick@anders0n.net>
http://www.cmdln.org
http://www.anders0n.net
Nick Anderson <nick@anders0n.net> writes:
> It's a dual-socket quad-core AMD box, so I have 8 cores. cpus for dom0 is
> set to 0, and according to xend-config.sxp that gives dom0 access
> to all CPUs.

Here is what I'm talking about:

[root@coloma lsc]# xm vcpu-list 0
Name        ID  VCPUs  CPU  State  Time(s)  CPU Affinity
Domain-0     0      0    0  -b-    33162.5  any cpu
Domain-0     0      1    1  r--     6688.2  any cpu

If you notice, Dom0 has access to 2 CPUs, but most of the work is done on the first CPU. I/O, generally speaking, is not an operation that threads well.

[root@coloma lsc]# xm vcpu-list oggfrog
Name        ID  VCPUs  CPU  State  Time(s)  CPU Affinity
oggfrog    124      0    0  -b-     2160.2  any cpu

One of the DomUs, oggfrog, is also on CPU 0; however, I've set the weights:

[root@coloma lsc]# xm sched-credit -d 0
{'cap': 0, 'weight': 4096}
[root@coloma lsc]# xm sched-credit -d oggfrog
{'cap': 0, 'weight': 256}

Before I set the weights, I/O was pretty bad. This way is acceptable, but it would probably be significantly faster if I gave a whole CPU to the Dom0 (that box only has 2 cores, so I'm sharing — this gives my DomUs better CPU but worse I/O performance).

The way to see if this is your problem is to execute the xm vcpu-list commands while reproducing the problem in the SMP and non-SMP DomU.
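One convenient way to do that last step is to sample xm vcpu-list continuously while the benchmark runs — a minimal sketch:

# Refresh vcpu placement every second while iozone runs in the guest;
# watch whether the DomU's vcpus and Domain-0's vcpus keep landing
# on the same physical CPU column.
watch -n 1 xm vcpu-list

# Or log it for later inspection:
while true; do date; xm vcpu-list; sleep 1; done >> vcpu-placement.log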
Hi,
I have 4 quad-core CPUs. I've done as you said, and no difference. I lost my file with all the results (I'll run them again). Also, when I was increasing the number of CPUs for the DomU, I got a worse result.

"xm vcpu-list 0" always showed me the same running vcpus, something like this:

# xm vcpu-list 0
Name        ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0     0     0    0  -b-      366.6  0
Domain-0     0     1    1  r--     1068.2  1
Domain-0     0     2    2  -b-     1629.7  2
Domain-0     0     3    3  -b-     1410.6  3
...

I'm a bit confused about the numbering of CPUs. Doesn't it matter what `cat /proc/cpuinfo | grep ^core` in Dom0 shows?

Thanks
Janusz

Apparao, Padmashree K wrote:
>
> Here's one more suggestion:
>
> * Try interrupt binding. Find out the IRQ of the NIC/IO device.
>   Now affinitize the interrupts to a particular CPU (usually 0).
>   Turn off interrupt balancing in Xen:
>   "service irqbalance stop" (in Linux); there is also a config parameter in Xen.
>
> * xm vcpu-set 0 2
> * xm vcpu-pin 0 0 0
> * xm vcpu-pin 0 1 2
>
> * Now start your SMP (4-vcpu) domU (say domid 2):
>
> * xm vcpu-pin 2 0 1
> * xm vcpu-pin 2 1 3
> * xm vcpu-pin 2 2 5
> * xm vcpu-pin 2 3 7
>
> * Typically the CPUs are numbered so that cores 0, 2, 4 and 6 are on one
>   die (one quad core) and cores 1, 3, 5 and 7 on another die. You want to
>   separate the domU from dom0 altogether. Also, by putting the domU on a
>   different die you will eliminate any cache corruption.
>
> * I am not familiar with how the AMD machines enumerate the cores, but
>   this is how the Intel machines do it.
>
> * Can you please run this config and let us know what the throughput is?
>   Similarly, you can run the domU with 1 vcpu and bind it to one of the
>   cores.
>
> * Also observe what the CPU% is for dom0 and domU.
>
> I hope this helps. I am interested in seeing the data on this.
>
> Thanks
> -Padma
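Finding and pinning the device IRQ, as Padma suggests, looks roughly like this on a Linux dom0 — the device name and IRQ number are placeholders, and the Xen-side config parameter she mentions isn't spelled out in the thread:

# Stop the userspace balancer so it doesn't undo the manual affinity.
service irqbalance stop

# Find the IRQ number of the NIC (eth0 is a placeholder):
grep eth0 /proc/interrupts

# Suppose that printed IRQ 18; bind it to CPU 0.
# The value is a hex CPU bitmask: 1 = CPU0, 2 = CPU1, 4 = CPU2, ...
echo 1 > /proc/irq/18/smp_affinity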
Unfortunately, I have had no chance to do the tests yet.

I have Intel CPUs. Under the Xen kernel, /proc/cpuinfo shows me:

cpu cores : 1

I checked on the generic kernel 2.6.24 and it shows:

cpu cores : 4

That's strange, isn't it? What tools are there to check which (Intel) CPUs are in exactly the same core?

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU E7340 @ 2.40GHz
stepping        : 11
cpu MHz         : 2393.888
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu de tsc msr pae cx8 apic sep mtrr cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc pni cx16 lahf_lm
bogomips        : 4789.82
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual

processor       : 1
(identical, except:)
physical id     : 2
core id         : 0

...

processor       : 15
(identical, except:)
physical id     : 6
core id         : 3

--

2008/5/16 Apparao, Padmashree K <padmashree.k.apparao@intel.com>:
> /proc/cpuinfo only shows what the OS sees the CPU as (aka a logical CPU).
> Which physical cores the logical CPUs are mapped to can be obtained by
> using special tools (I don't know if you have any for AMD).
>
> What you show below shows dom0 having 4 vcpus, but what about your guest?
> What is the CPU utilization of the dom0 cores? Can you get that
> information from xentop? This very much seems to be a
> scheduling/context-switching problem, and it would be interesting to
> drill into this.
>
> Thanks
> -Padma
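One standard way to read the topology without special tools, on a kernel that exposes it (the 2.6.18 Xen dom0 kernel may not, which would explain the cpuinfo discrepancy above), is the sysfs topology directory — a sketch:

# For each logical CPU, print which physical package (socket) and
# core it belongs to; CPUs sharing both values share a core.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "$(basename $c): package $(cat $c/topology/physical_package_id), core $(cat $c/topology/core_id)"
done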