Hi list,

Next week I'll receive a couple of IBM M2 servers. The team wants to test a
DomU with RHEL 5 virtualized on XCP against RHEL 5 native, installed directly
on the servers. Has anybody done this test before?

We'll test a Java web application (online banking), and we basically want to
see which environment will process more TPS (transactions per second), which
is directly related to disk I/O. Both servers have identical hardware, with
15k rpm disks.

Do I have to modify something in the XCP config? I think (and want) XCP will
win. What should I expect? Will the difference in TPS be noticeable?

Thanks.

--
@cereal_bars
What makes you think that adding a virtualization layer to the picture would
increase performance? It would be a bit odd for the same operating system to
perform better when it is run in a virtual machine.

-Henrik Andersson

On 12 January 2011 22:13, Boris Quiroz <bquiroz.work@gmail.com> wrote:
> Next week I'll receive a couple of IBM M2 servers. The team wanted to
> test a DomU with RHEL 5 virtualized on XCP vs a RHEL 5 native,
> installed directly to the servers. Does anybody has done this test
> before? [...]
On 01/12/2011 03:17 PM, Henrik Andersson wrote:
> What makes you think that adding a virtualization layer to the picture
> would increase the performance? Would be bit odd to have same operating
> system perform better when it is run on a virtual machine.

I would suspect that the OP is trying to assess whether the performance hit
would be significant enough to outweigh the benefits of virtualization.

--
Digimer
E-Mail: digimer@alteeve.com
AN!Whitepapers: http://alteeve.com
Node Assassin: http://nodeassassin.org
I don't know what he means, but he states, and I quote: "We'll test a java
web application (online banking), and *we basically want to see which
environment will process more TPS* (transactions per second), which is
directly related to disk I/O."

To me he's benchmarking native RHEL 5 against virtualized RHEL 5, the same
way one could benchmark RHEL 5 against Ubuntu or Solaris. Maybe he will tell
us what he meant by that, and to be honest, I might be understanding him
incorrectly because of my English.

-Henrik Andersson

On 12 January 2011 22:21, Digimer <linux@alteeve.com> wrote:
> I would suspect that OP is trying to asses whether the performance hit
> would be significant enough to outweigh the benefits of virtualization.
2011/1/12 Digimer <linux@alteeve.com>:
> I would suspect that OP is trying to asses whether the performance hit
> would be significant enough to outweigh the benefits of virtualization.

Answering Henrik's question: we just want to see the results. If native RHEL
processes, for example, 5000 TPS and RHEL+XCP processes 4900, we will
obviously continue with a virtualized environment, because the difference is
not significant to us.

The literature says that guest performance is near-native, and that's what I
want to test. And if somebody has already done this kind of test, I could
know what to expect.

Thanks.

--
@cereal_bars
I think that is a valid thing to figure out, and I'm also interested in the
results; it happens to be something I have been thinking about as well, I
just didn't get to the testing part for some reason.

By the way, what version of XCP are you planning to use? XCP 0.5 or 1.0
Beta?

Benchmarks made using XenServer should give you at least some idea of what
to expect. I'm thinking someone has made similar tests; it would be odd if
not.

-Henrik Andersson

On 12 January 2011 22:39, Boris Quiroz <bquiroz.work@gmail.com> wrote:
> Literature says that performance for guest is near-native, and that's
> what I wanna test. And if somebody has already done this kind of test,
> I could know what to expect.
On Wed, Jan 12, 2011 at 2:39 PM, Boris Quiroz <bquiroz.work@gmail.com> wrote:
> Literature says that performance for guest is near-native, and that's
> what I wanna test. And if somebody has already done this kind of test,
> I could know what to expect.

No one else has likely tested your proprietary application. You should test
without expectation.

I have, unexpectedly, seen an application in a DomU outperform bare metal in
some corner cases with memory-access-intensive applications. I think this had
to do with the DomU (paravirt) being put entirely into huge pages, where bare
metal was using standard 4K pages, thus increasing TLB efficiency and
reducing memory accesses.

The performance impact can vary widely based on workload, up or down,
believe it or not.

-JR
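Since the application under test is a JVM workload, one related knob worth
knowing about is large-page backing for the Java heap. A rough sketch only,
under assumptions: the page count, heap size and app.jar name are
placeholders, and whether it helps at all depends entirely on the workload:

    # Reserve 2 MB huge pages on the host or guest (count is a placeholder)
    sysctl -w vm.nr_hugepages=2048
    grep -i huge /proc/meminfo

    # Ask the HotSpot JVM to back its heap with large pages
    # (hypothetical heap size and jar name)
    java -XX:+UseLargePages -Xms3g -Xmx3g -jar app.jar

Note this only covers kernel-level huge pages inside the OS; how the
hypervisor maps a paravirtual guest's memory is not visible from the guest
itself.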
2011/1/12 Henrik Andersson <henrik.j.andersson@gmail.com>:
> By the way, what version of XCP are you planning to use? XCP 0.5 or 1.0
> Beta?

I'll use XCP 0.5 and RHEL 5.

The app is written in Java, so the memory is limited by the JVM. Both
systems will use the same heap size.

Thank you all for the answers, and if you have any other ideas please let me
know (on this thread :P). I'll let you know about the results. Cheers!

--
@cereal_bars
> I'll use XCP 0.5 and RHEL 5.
>
> The app is written in Java, so the memory is limited by the JVM. Both
> systems will use the same heap size.

I have not done this test with XCP, but I have with Xen 3.4.2. As long as I
use an LVM volume I get very, very near native performance, i.e. mysqlbench
comes in at about 99% of native. Using a disk file it comes in at about 95%
of native. CPU utilization to get those numbers on the domU is higher than
on native hardware, though.

Grant McWilliams
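For reference, the difference between those two setups is just how the
guest's virtual disk is backed. A minimal sketch for a classic xm/xend-style
Xen host like that 3.4.2 box (volume group, sizes and paths are placeholders;
XCP manages its storage through SRs instead, so this does not apply there
directly):

    # LVM-backed: carve a logical volume out of an existing volume group
    lvcreate -L 20G -n rhel5-disk vg0
    # and in the guest config point the disk at it with the phy: prefix:
    #   disk = [ 'phy:/dev/vg0/rhel5-disk,xvda,w' ]

    # File-backed alternative, for comparison:
    dd if=/dev/zero of=/var/lib/xen/images/rhel5.img bs=1M count=20480
    #   disk = [ 'file:/var/lib/xen/images/rhel5.img,xvda,w' ]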
Javier Guerra Giraldez - 2011-Jan-16 15:22 UTC
Re: [Xen-users] redhat native vs. redhat on XCP
On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
<grantmasterflash@gmail.com> wrote:
> As long as I use an LVM volume I get very very near real performance ie.
> mysqlbench comes in at about 99% of native.

Without any real load on other DomUs, I guess.

In my settings the biggest 'con' of virtualizing some loads is the sharing
of resources, not the hypervisor overhead. Since it's easier (and cheaper)
to get hardware oversized on CPU and RAM than on I/O speed (especially on
IOPS), that means that I have some database servers that I can't virtualize
in the near term.

Of course, most of this would be solved by dedicating spindles instead of
LVs to VMs; maybe when (if?) I get most boxes with lots of 2.5" bays,
instead of the current 3.5" ones. Not using LVM is a real drawback, but it
still seems to be better than dedicating whole boxes.

--
Javier
On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez <javier@guerrag.com> wrote:
> In my settings the biggest 'con' of virtualizing some loads is the sharing
> of resources, not the hypervisor overhead. Since it's easier (and cheaper)
> to get hardware oversized on CPU and RAM than on I/O speed (especially on
> IOPS), that means that I have some database servers that I can't
> virtualize in the near term.

But that is the same as just putting more than one service on one box. I
believe he was wondering what the overhead of virtualizing is as opposed to
bare metal. Any time you have more than one process running on a box you
have to think about the resources they use and how they'll interact with
each other. This has nothing to do with virtualization itself unless the
hypervisor has a bad scheduler.

> Of course, most of this would be solved by dedicating spindles instead of
> LVs to VMs; maybe when (if?) I get most boxes with lots of 2.5" bays,
> instead of the current 3.5" ones. Not using LVM is a real drawback, but it
> still seems to be better than dedicating whole boxes.

I've moved all my VMs to running on LVs on SSDs for this purpose. The
overhead of an LV over a bare drive is very, very little unless you're doing
a lot of snapshots.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use
Windows." Now they have two problems.
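The snapshot cost mentioned above comes from copy-on-write: once a snapshot
exists, the first write to each origin block has to be copied aside before
the new data lands. A minimal sketch of that setup (volume group, LV names
and sizes are placeholders):

    # Create a 5G copy-on-write snapshot of a guest's LV
    lvcreate -s -L 5G -n rhel5-disk-snap /dev/vg0/rhel5-disk

    # Watch how full the snapshot's copy-on-write area is getting;
    # the copying is where the extra write overhead comes from
    lvs -o lv_name,origin,snap_percent vg0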
2011/1/16 Grant McWilliams <grantmasterflash@gmail.com>:
> I've moved all my VMs to running on LVs on SSDs for this purpose. The
> overhead of an LV over a bare drive is very, very little unless you're
> doing a lot of snapshots.

Hi list,

I did a preliminary test using [1], and the result was close to what I
expected. This was a very, very small test, because I have a lot of things
to do before I can set up a good and representative test, but I think it is
a good start.

Using the tool stress I started with the default command: stress --cpu 8
--io 4 --vm 2 --vm-bytes 128M --timeout 10s. Here's the output on both the
Xen and non-Xen servers:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [3682] successful run completed in 10s

[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [5284] successful run completed in 10s

As you can see, the result is the same, but what happens when I include hdd
I/O in the test? Here's the output:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [3700] successful run completed in 59s

[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [5332] successful run completed in 37s

Including some HDD stress, the results differ. Both servers (xen and
non-xen) are using LVM, but to be honest, I was expecting this kind of
result because of the disk access.

Later this week I'll continue with the tests (well designed tests :P) and
I'll share the results.

Cheers.

1. http://freshmeat.net/projects/stress/

--
@cereal_bars
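For the better-designed follow-up, one way to make the disk part of the
comparison more controlled is to isolate I/O with O_DIRECT so page-cache
effects don't mask the virtualization overhead. A sketch only, under
assumptions: /data, sizes and run times are placeholders, and fio may need
to be installed from EPEL or built from source on RHEL 5:

    # Sequential write throughput, bypassing the page cache
    dd if=/dev/zero of=/data/ddtest bs=1M count=2048 oflag=direct

    # Random 4k read/write mix; run the identical job on native and DomU
    fio --name=randrw --filename=/data/fiotest --size=2G \
        --rw=randrw --bs=4k --direct=1 --ioengine=libaio \
        --numjobs=4 --runtime=60 --time_based --group_reporting

Running the same job file on the native box and inside the DomU gives IOPS
and latency numbers that are easier to relate to TPS than a fixed-length
stress run.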
On Mon, Jan 17, 2011 at 11:22 AM, Boris Quiroz <bquiroz.work@gmail.com> wrote:
> Including some HDD stress, the results differ. Both servers (xen and
> non-xen) are using LVM, but to be honest, I was expecting this kind of
> result because of the disk access.
>
> Later this week I'll continue with the tests (well designed tests :P) and
> I'll share the results.

You weren't specific about whether the Xen tests were done on a Dom0 or a
DomU. I could assume DomU, since there should be next to zero overhead for a
Xen Dom0 over a non-Xen host. Can you post your DomU config, please?

Grant McWilliams
2011/1/18 Grant McWilliams <grantmasterflash@gmail.com>:
> You weren't specific about whether the Xen tests were done on a Dom0 or a
> DomU. I could assume DomU, since there should be next to zero overhead for
> a Xen Dom0 over a non-Xen host. Can you post your DomU config, please?

Sorry, I forgot to include that info. Yes, the tests were done in a DomU
running over XCP 0.5. In [1] you can find the output of the xe vm-param-list
command.

As I said, later this week or maybe next week I'll start with a well
designed test (not designed yet, so any comment/advice is welcome) and
prepare a little report about it.

Thanks.

1. https://xen.privatepaste.com/2c123b90c1

--
@cereal_bars
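On XCP the guest parameters most relevant to a benchmark like this can be
pulled straight from the xe CLI. A minimal sketch, assuming a guest whose
name-label is rhel5-test (the name is only an example):

    # Find the guest's UUID by its name-label
    VM_UUID=$(xe vm-list name-label=rhel5-test params=uuid --minimal)

    # Dump every parameter, or just the ones that matter for sizing
    xe vm-param-list uuid=$VM_UUID
    xe vm-param-get uuid=$VM_UUID param-name=VCPUs-at-startup
    xe vm-param-get uuid=$VM_UUID param-name=memory-static-max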
On Tue, Jan 18, 2011 at 4:31 AM, Boris Quiroz <bquiroz.work@gmail.com> wrote:
> Sorry, I forgot to include that info. Yes, the tests were done in a DomU
> running over XCP 0.5. In [1] you can find the output of the xe
> vm-param-list command.
>
> 1. https://xen.privatepaste.com/2c123b90c1

Would still be interested in the DomU config.

Grant McWilliams