Hello,

I've done some I/O benchmarks on an RHEL 5.1-based Xen setup. The main
(dom0) server is an x86_64 host with an FC-connected IBM SAN. The guest
servers are paravirtualized.

I used bonnie++ to stress-test and to try to analyze I/O performance.
Bonnie++ was run with this command, with the current working directory
being the relevant part of the file system:

  bonnie++ -n 4 -s 20g -x 5 -u nobody

The file size (20g) was much larger than the memory available to the
dom0/domU (both having ½ GB of RAM available). Testing ran for several
hours. No stability problems were seen.

Bonnie++ was run on the dom0 and on one of the domUs, making sure that the
involved disks were otherwise idle, or very close to idle. I.e., a quiet
(and somewhat slow) part of the SAN was used.

The storage area was made available to the dom0 as a file system on a "raw
device" (no logical volume management at the operating system level).
After testing on the dom0, the same device was unmounted, the file system
was re-created, and the device was subsequently allocated to the domU as a
"phy" device; the phy device was then mounted at a suitable mountpoint in
the domU.

The tests were run at different times of the day. The dom0 test was partly
run during work hours, when there may have been contention on the SAN
signalling infrastructure (switch/storage HBAs), although we generally
believe that we don't have a major bottleneck on the SAN optical pathways.
The domU test was run during night-time, when I/O pressure on the SAN
infrastructure is probably somewhat lower (although various backup and
batch jobs make sure that the SAN never sleeps).

The results were averaged, after filtering out results which seemed
atypical.

Bonnie++ wasn't able to detect a difference in file-creation performance,
so those values aren't included.
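For reference, exporting a raw device to a paravirtualized guest as a "phy" device is typically done with a `disk` line in the domU's xm configuration file (which uses Python syntax). The device and guest names below are hypothetical stand-ins, not the actual setup described here:

```python
# /etc/xen/domU.cfg -- minimal sketch, hypothetical device names.
# 'phy:' exports the dom0 block device /dev/sdb1 to the guest,
# where it appears as /dev/xvdb1, writable ('w').
disk = ['phy:/dev/sdb1,xvdb1,w']
```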
The results:

      | Sequential Output            | Sequential Input  | Random seeks |
Host  | (K/sec)                      | (K/sec)           | (/sec)       |
      | Per Char | Block  | Rewrite  | Per Char | Block  |              |
======+==========+========+==========+==========+========+==============+
dom0  |    56739 |  96529 |    46524 |    58346 | 119830 |           94 |
------+----------+--------+----------+----------+--------+--------------+
domU  |    56186 | 112796 |    50178 |    65325 | 202569 |          148 |
======+==========+========+==========+==========+========+==============+
domU  |          |        |          |          |        |              |
gain% |       -1 |     17 |        8 |       12 |     69 |           56 |

In other words: I've found that my domU's I/O performance generally
surpasses that of my dom0(!).

On a side note: running mke2fs went much, much faster on the dom0 than on
the domU. So for that kind of I/O, the pattern seems to break.

Am I just being an ignorant benchmark idiot, or could this kind of result
actually be expected and/or explained?

Is bonnie++ a bad storage benchmarking tool? If so: what else is better?

-- 
Regards,
Troels Arvin <troels@arvin.dk>
http://troels.arvin.dk/

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
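The gain% row is the relative change of the domU figure against the dom0 figure, rounded to whole percent. A minimal sketch of that computation, using the dom0/domU values from the table (the random-seeks column comes out as 57 rather than the 56 shown, so that entry was presumably rounded slightly differently):

```python
def gain_pct(dom0, domU):
    """Percentage change of the domU figure relative to dom0, rounded."""
    return round(100.0 * (domU - dom0) / dom0)

# Figures from the table above (K/sec, except seeks which are /sec):
print(gain_pct(56739, 56186))    # sequential output, per char -> -1
print(gain_pct(96529, 112796))   # sequential output, block    -> 17
print(gain_pct(119830, 202569))  # sequential input, block     -> 69
```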
Johann Spies
2007-Nov-28 12:27 UTC
Re: [Xen-users] domU has better I/O performance than dom0?
On Wed, Nov 28, 2007 at 11:38:31AM +0000, Troels Arvin wrote:
> I've done some I/O benchmarks on an RHEL 5.1-based Xen setup. The main
> (dom0) server is an x86_64 host with an FC-connected IBM SAN. The guest
> servers are paravirtualized.
>
> I used bonnie++ to stress-test and to try to analyze I/O performance.
> Bonnie++ was run with this command, current working directory being the
> relevant part of the file system:
> bonnie++ -n 4 -s 20g -x 5 -u nobody

I have recently done some tests on three different domUs on different but
very similar physical servers, and the results just did not make any
sense. The test was the same, but the results were so different that I did
not even try to interpret them.

Here is what I did: on all three we used a dedicated 100G partition to run
the test on. The command line was

  /usr/sbin/bonnie -d . -s 0.130 -n 8096 -r 8096

And the results (confusing):

Version 1.03        ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
mail1a(ext3)   8096  1822   5 21910   9   657   0  1846   5 14092   6   366   0
mail2a(ext3)   8096  4352   7   292   0   271   0  4242   6   178   0   153   0
mail2a(xfs)    8096   547  83  5833   2  1985   6   553  85   136   0    96   0
mail3a(ext3)   8096   501   0   166   0   131   0   512   0    74   0    49   0

Mail1a was on a quad-core-CPU Dell 2950, and mail2a and mail3a were on
2x-dual-core-CPU Dell 2950s. Maybe the fact that the test ran on domUs
with an underlying LVM has something to do with the strange results.

Regards
Johann

-- 
Johann Spies                  Telefoon: 021-808 4036
Informasietegnologie, Universiteit van Stellenbosch

"The earth is the LORD'S, and the fullness thereof; the world, and they
 that dwell therein."  Psalms 24:1
Troels Arvin
2007-Nov-28 12:39 UTC
[Xen-users] Re: domU has better I/O performance than dom0?
On Wed, 28 Nov 2007 14:27:54 +0200, Johann Spies wrote:
> The commandline was
>
> /usr/sbin/bonnie -d . -s 0.130 -n 8096 -r 8096

You are measuring file creation rate, and not "raw I/O throughput",
right? In that sense, your tests are different from what I measured.
Still, your results are certainly strange. I wonder if they would be
equally strange if you ran them on different non-virtualized servers.

-- 
Regards,
Troels Arvin <troels@arvin.dk>
http://troels.arvin.dk/
Stefan de Konink
2007-Nov-28 12:44 UTC
Re: [Xen-users] Re: domU has better I/O performance than dom0?
On Wed, 28 Nov 2007, Troels Arvin wrote:
> On Wed, 28 Nov 2007 14:27:54 +0200, Johann Spies wrote:
> > The commandline was
> >
> > /usr/sbin/bonnie -d . -s 0.130 -n 8096 -r 8096
>
> You are measuring file creation rate, and not "raw I/O throughput",
> right? In that sense, your tests are different from what I measured.
> Still, your results are certainly strange. I wonder if they would be
> equally strange if you ran them on different non-virtualized servers.

I benchmarked iSCSI and NFS. You will notice the strangeness on NFS too:

http://xen.bot.nu/benchmarks/virtueel.html

Since NFS crashes my system if I do a benchmark that is larger than my
memory, I guess the benchmark is (initially) done in memory. Since the FC
results show better performance, could it be the offloading with multiple
processors?

Stefan
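One quick way to see whether a benchmark is effectively running against the page cache rather than the physical storage is to compare a cold read with an immediate warm re-read: if the second pass is dramatically faster, the data was served from memory. A minimal sketch (it builds its own scratch file, so the path is generated, not a real benchmark target):

```python
import os
import tempfile
import time

def timed_read(path, bufsize=1 << 20):
    """Sequentially read the whole file; return (elapsed_seconds, bytes_read)."""
    start = time.time()
    total = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            total += len(chunk)
    return time.time() - start, total

# Scratch file standing in for the benchmark target (8 MiB of random data).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 << 20))
    path = tmp.name

cold, nbytes = timed_read(path)  # first pass: may have to hit the disk
warm, _ = timed_read(path)       # second pass: almost certainly the page cache
print('cold=%.4fs warm=%.4fs bytes=%d' % (cold, warm, nbytes))
os.remove(path)
```

This is only indicative; for a real check one would also drop the page cache (or use O_DIRECT) between runs, and use a file larger than RAM, as in the 20g bonnie++ runs above.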
Johann Spies
2007-Nov-29 07:49 UTC
[Xen-users] Re: domU has better I/O performance than dom0?
On Wed, Nov 28, 2007 at 12:39:20PM +0000, Troels Arvin wrote:
> > /usr/sbin/bonnie -d . -s 0.130 -n 8096 -r 8096
>
> You are measuring file creation rate, and not "raw I/O throughput",
> right?

Correct.

> In that sense, your tests are different from what I measured.
> Still, your results are certainly strange. I wonder if they would be
> equally strange if you ran them on different non-virtualized servers.

I will probably do that some time in the future. At the moment I do not
have servers available.

Regards
Johann

-- 
Johann Spies                  Telefoon: 021-808 4036
Informasietegnologie, Universiteit van Stellenbosch

"Who shall ascend into the hill of the LORD? or who shall stand in his
 holy place? He that hath clean hands, and a pure heart..."
 Psalms 24:3,4
Mark Williamson
2007-Dec-02 04:01 UTC
Re: [Xen-users] domU has better I/O performance than dom0?
I don't have an explanation, but I can tell you that when we first tested
the current IO architecture (which was before Xen 2 was released), we
found that performance on /certain benchmarks/ could be improved in the
domU relative to both dom0 and to native execution.

We never tracked down exactly why this was happening, but we suspected it
was somehow due to the two layers of IO that were going on (e.g. perhaps a
fortuitous interaction of the schedulers in dom0 and domU, or some weird
behaviour of the Linux IO stack).

Nonetheless, I'm not surprised if some operations are still faster in
dom0 - it does, after all, have fewer layers to pass through to perform
the IO.

Cheers,
Mark

On Wednesday 28 November 2007, Troels Arvin wrote:
> I've done some I/O benchmarks on an RHEL 5.1-based Xen setup. The main
> (dom0) server is an x86_64 host with an FC-connected IBM SAN. The guest
> servers are paravirtualized.
> [...]
> Am I just being an ignorant benchmark idiot, or could this kind of
> result actually be expected and/or explained?
>
> Is bonnie++ a bad storage benchmarking tool? If so: what else is better?

-- 
Dave: Just a question. What use is a unicyle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!