Sami Dalouche
2008-Jan-24 22:36 UTC
[Xen-users] Xen Disk I/O performance vs native performance
Hi,

Can you guys tell me how much slower your disk I/O performance is using Xen, compared to native performance?

On my RAID 5 machine, using Ubuntu feisty's linux-ubuntu-modules-2.6.22-14-xen kernel, Xen 3.1, and LVM disks, reads and writes to the disk are somewhere between 2 and 3 times slower.

Is that considered normal, or especially bad? If it's worse than average, what kind of information do you want me to give you to diagnose the problem?

Thanks for your help,
Sami Dalouche
Brock Palen
2008-Jan-24 22:51 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
I see the same drop on my machines; no RAID, but I do use DRBD. 3x slower.

Brock Palen
Center for Advanced Computing
brockp@umich.edu
(734) 936-1985

On Jan 24, 2008, at 5:36 PM, Sami Dalouche wrote:
> Can you guys tell me how much slower your disk I/O performance is using
> Xen, compared to native performance?
[snip]
Mark Williamson
2008-Jan-24 22:54 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
> On my RAID 5 machine, using Ubuntu feisty's
> linux-ubuntu-modules-2.6.22-14-xen kernel, Xen 3.1, and LVM disks, reads
> and writes to the disk are somewhere between 2 and 3 times slower.
>
> Is that considered normal, or especially bad?

That's lower than I expected. Can you fill in a few more details about the precise nature of your setup? The following questions are a starting point:

1) Is this comparing Xen dom0 with native Linux running on the same partition / LVM volume on the same machine?
2) Are your numbers from the domU or from dom0?
3) How are the RAID 5 and LVM implemented? RAID 5 in hardware or software? LVM in dom0 or domU (or both)?
4) Any interesting messages in dmesg that might suggest that the hardware support is not working correctly?
5) Any other ideas?

Another thing you could try is to download VMKNOPPIX, boot into Xen off that, and then do some runs mounting your disk and reading/writing it from there. You might need to rerun this a few times to ensure that everything is paged in off the CD-ROM drive, otherwise that might affect the results badly. I assume VMKNOPPIX uses something closer to the standard XenLinux kernel, so this might help to eliminate anything Ubuntu's packaging or choice of kernel version may have introduced.

Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
Emre Erenoglu
2008-Jan-24 23:58 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
Hi guys,

Which tool are you using for benchmarking the disks? I used hdparm in the past (-t and -T) and it didn't show much difference.

Emre

On Jan 24, 2008 11:51 PM, Brock Palen <brockp@umich.edu> wrote:
> I see the same drop on my machines; no RAID, but I do use DRBD. 3x slower.
[snip]

--
Emre Erenoglu
erenoglu@gmail.com
Jan Marquardt
2008-Jan-25 09:45 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
Sami Dalouche
2008-Jan-25 20:36 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
Hi,

1) This is comparing Xen dom0 with native Linux running on the same partition / LVM volume on the same machine.

I did a simple dd:

dd if=/dev/zero of=./t bs=1k count=1048576

Dom0:
1073741824 bytes (1.1 GB) copied, 9.21037 seconds, 117 MB/s

DomU:
1073741824 bytes (1.1 GB) copied, 17.2537 seconds, 62.2 MB/s

The writes are done on the exact same partition, and the result is consistent (I did it several times). The 117 MB/s is achieved by shutting down the domU, mounting its partition in dom0, and running the dd; the 62 MB/s is achieved directly from the domU.

So it's not exactly 3 times slower in this case, but I'm pretty sure that for real applications (lots of small writes and reads, etc.) the behavior is worse, so it's definitely between 2 and 3 times slower on the domU than on the dom0.

3) RAID 5 is implemented in hardware, using a 3DM card. I have 8 disks configured in RAID 5 using auto-carving (so it presents 2 physical disks to the system, which are LVM'ed together). The whole LVM disk space is then split into logical volumes, and I have one LV per VM.

So, for instance, I have an LV:

/dev/system/blueedge_root

which is referenced as:

kernel = "/boot/vmlinuz-2.6.22-14-xen"
ramdisk = "/boot/initrd.img-2.6.22-14-xen"
builder = 'linux'
memory = 512
name = "blueedge"
vcpus = 0
vif = ['mac=02:00:00:00:00:01, bridge=xenbr0']
disk = ['phy:/dev/mapper/system-blueedge_root,hda1,w']
root = "/dev/hda1 ro"
extra = 'console=xvc0'
#extra = 'xenconsole=tty'
extra = 'xencons=tty'

4) Nothing interesting in dmesg, and these performance problems have been consistent over the past few months.

I'll try your VMKNOPPIX idea next time I physically go to the server. I should be able to post results about this later this week-end.

Thanks a lot for your help,
Regards,
Sami Dalouche

On Thu, 2008-01-24 at 22:54 +0000, Mark Williamson wrote:
> That's lower than I expected. Can you fill in a few more details about
> the precise nature of your setup?
[snip]
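A 1 GB dd on a machine with several GB of RAM can be satisfied largely from the page cache, as later replies point out. A minimal sketch of a cache-bypassing variant of the same test (assuming GNU coreutils dd; file name and size are illustrative):

# Write 2 GB with O_DIRECT, then flush, so the page cache cannot inflate the number
dd if=/dev/zero of=./t bs=1M count=2048 oflag=direct conv=fsync
# Read it back, also bypassing the cache
dd if=./t of=/dev/null bs=1M iflag=direct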
For kicks, try using NFS. I am able to get 100 MB/sec from two domUs at the same time. (I do not care about dom0, as it really does nothing.) My domUs are Sun Grid Engine clients and move really fast.

--tmac

On Jan 25, 2008 3:36 PM, Sami Dalouche <skoobi@free.fr> wrote:
> 1) This is comparing Xen dom0 with native Linux running on the same
> partition / LVM volume on the same machine.
>
> Dom0:
> 1073741824 bytes (1.1 GB) copied, 9.21037 seconds, 117 MB/s
>
> DomU:
> 1073741824 bytes (1.1 GB) copied, 17.2537 seconds, 62.2 MB/s
[snip]

--
--tmac
RedHat Certified Engineer #804006984323821 (RHEL4)
RedHat Certified Engineer #805007643429572 (RHEL5)
Principal Consultant, RABA Technologies
John Madden
2008-Jan-25 21:03 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
> 3) RAID 5 is implemented in hardware, using a 3DM card. I have 8 disks
> configured in RAID 5 using auto-carving (so it presents 2 physical disks
> to the system, which are LVM'ed together). The whole LVM disk space is
> then split into logical volumes, and I have one LV per VM.

Is the domU on the same "auto-carved" LUN? (Maybe one isn't performing as well as the other for some reason?)

> memory = 512

Does dom0 have as much memory as the domU? Are they using the same filesystem type?

Please use bonnie++ at a minimum for I/O benchmarking. dd is not a benchmarking tool.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Javier Guerra
2008-Jan-25 21:19 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
On 1/25/08, John Madden <jmadden@ivytech.edu> wrote:
> Does dom0 have as much memory as the domU? Are they using the same
> filesystem type?

Obviously it's the same filesystem type, since it's the same 'partition'. Of course, different mount flags could in theory affect measurements. But the heaviest suspect for bad benchmarks is memory size: if dom0 has enough RAM, a lot of the 1.1 GB written by dd could still be in the writeback cache.

> Please use bonnie++ at a minimum for I/O benchmarking. dd is not a
> benchmarking tool.

Besides, no matter what tool you use to measure, use datasets at the very least three or four times the largest memory size.

--
Javier
John Madden
2008-Jan-25 21:28 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
> Obviously it's the same filesystem type, since it's the same
> 'partition'. Of course, different mount flags could in theory affect
> measurements.

Sorry, I must've missed something earlier. I didn't realize you were mounting and writing to the same filesystem in both cases. But this is interesting -- if you're mounting a filesystem on an LV in dom0 and then passing it as a physical device to domU, how does domU see it? Does it then put an LV inside this partition?

> Besides, no matter what tool you use to measure, use datasets at the
> very least three or four times the largest memory size.

Exactly. bonnie++ (for example) provides the -r argument, which causes it to deal with I/O at twice your memory size to avoid cache benefits.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
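A minimal sketch of the kind of invocation being suggested (mount point and sizes are illustrative): -r declares the machine's RAM in MB, and -s forces the dataset well past it, per the advice above.

# Benchmark /mnt/test on a 1 GB box with a 4 GB dataset, running as root
bonnie++ -d /mnt/test -u root -r 1024 -s 4096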
Stefan de Konink
2008-Jan-25 21:36 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
tmac schreef:
> For kicks, try using NFS. I am able to get 100 MB/sec from two domUs at
> the same time. (I do not care about dom0, as it really does nothing.)
> My domUs are Sun Grid Engine clients and move really fast.

Did you try files bigger than your host memory over NFS, with bonnie++ for example? That meant a full system crash for me.

http://xen.bot.nu/benchmarks/

Some of our benchmarks.

Stefan
Sami Dalouche
2008-Jan-25 21:42 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
OK, so I'm currently running bonnie++ benchmarks and will report the results as soon as everything is finished.

In any case, I am not trying to create super-accurate benchmarks. I am just saying that the VM's I/O is definitely slower than dom0's, and I don't even need a benchmark to tell that everything is at least twice as slow. It seriously is super slow, so my original post was about asking how much slower than native performance is considered acceptable.

Concerning your question, I don't quite understand it. What I did was:

1] Created a LV on the real disk
2] exported this LV as a Xen disk using
   disk = ['phy:/dev/mapper/mylv,hda1,w']
3] mounted it on the DomU by mount /dev/mapper/mylv

Isn't that what I'm supposed to do?

Regards,
Sami Dalouche

On Fri, 2008-01-25 at 16:28 -0500, John Madden wrote:
> Sorry, I must've missed something earlier. I didn't realize you were
> mounting and writing to the same filesystem in both cases. But this is
> interesting -- if you're mounting a filesystem on an LV in dom0 and then
> passing it as a physical device to domU, how does domU see it?
[snip]
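For reference, a minimal sketch of that dom0-side setup (LV name, size, and the "system" volume group are taken loosely from this thread; treat the exact names as hypothetical):

# In dom0: carve an LV out of the "system" VG and put a filesystem on it
lvcreate -L 20G -n mylv system
mkfs.ext3 /dev/mapper/system-mylv

# In the domU config, hand the LV over as a whole block device:
disk = ['phy:/dev/mapper/system-mylv,hda1,w']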
Sami Dalouche
2008-Jan-25 21:56 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
This is exactly what I have. I don't really get what you mean by using LVM in the domU: I do have LVM in dom0, but of course the domUs do not have a clue about it...

Sami

On Fri, 2008-01-25 at 16:48 -0500, Javier Guerra wrote:
> On 1/25/08, Sami Dalouche <skoobi@free.fr> wrote:
> > What I did was:
> > 1] Created a LV on the real disk
> > 2] exported this LV as a Xen disk using
> > disk = ['phy:/dev/mapper/mylv,hda1,w']
> > 3] mounted it on the DomU by mount /dev/mapper/mylv
> >
> > Isn't that what I'm supposed to do?
>
> No...
>
> At the DomU, you should see only a block device /dev/hda1 that looks like
> a partition but in fact is your LV. No need to use LVM in the DomU if you
> use it in Dom0.
Tim McCarthy
2008-Jan-25 22:01 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
See prior posts from me on this subject. I list some sysctl.conf items that greatly improve performance. Stock RHEL 5.1 (2.6.18-53.1.4) kernel from RHN and the simplest of tests:

dd if=/dev/zero of=./test bs=32768 count=32768

Sent from my Verizon Wireless BlackBerry

-----Original Message-----
From: Stefan de Konink <skinkie@xs4all.nl>
Date: Fri, 25 Jan 2008 22:36:26
Subject: Re: [Xen-users] Xen Disk I/O performance vs native performance

> Did you try files bigger than your host memory over NFS, with bonnie++
> for example? That meant a full system crash for me.
[snip]
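The earlier posts with the actual sysctl.conf items are not reproduced in this thread; the usual candidates for this kind of writeback tuning are the dirty-page knobs, along the lines of this sketch (values illustrative, not the settings from those posts):

# /etc/sysctl.conf -- illustrative writeback tuning, NOT the settings
# from the prior posts referenced above
vm.dirty_background_ratio = 5    # start background writeback earlier
vm.dirty_ratio = 10              # throttle writers sooner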
Javier Guerra
2008-Jan-25 22:07 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
On 1/25/08, Sami Dalouche <skoobi@free.fr> wrote:
> This is exactly what I have.
>
> I don't really get what you mean by using LVM in the domU: I do have LVM
> in dom0, but of course the domUs do not have a clue about it...

OK. What confused us was this line:

> 3] mounted it on the DomU by mount /dev/mapper/mylv

Maybe you meant dom0? (When testing, the domU doesn't need that.)

--
Javier
Sami Dalouche
2008-Jan-25 22:11 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
Yeah, no, sorry, forget about item 3]. It's just mounted by Xen while booting the system ;)

On Fri, 2008-01-25 at 17:07 -0500, Javier Guerra wrote:
> OK. What confused us was this line:
>
> > 3] mounted it on the DomU by mount /dev/mapper/mylv
>
> Maybe you meant dom0? (When testing, the domU doesn't need that.)
Mitch Kelly
2008-Jan-26 02:34 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
I'm curious: what PC do you have? I'm running a Compaq DL380 and get crap performance at the moment.

----- Original Message -----
From: "Sami Dalouche" <skoobi@free.fr>
To: "Javier Guerra" <javier@guerrag.com>
Cc: "Xen Users" <xen-users@lists.xensource.com>
Sent: Saturday, January 26, 2008 7:11 AM
Subject: Re: [Xen-users] Xen Disk I/O performance vs native performance

> Yeah, no, sorry, forget about item 3].
>
> It's just mounted by Xen while booting the system ;)
[snip]
James Harper
2008-Jan-26 05:13 UTC
RE: [Xen-users] Xen Disk I/O performance vs native performance
> I'm curious: what PC do you have? I'm running a Compaq DL380 and get
> crap performance at the moment.

What sort of DL380, and what sort of disks?

I've got a DL385 with 146G 10K 2.5" SAS disks on an HP RAID controller, and hdparm -tT /dev/cciss/c0d0 gives me around 900 MB/sec for cached reads and around 235 MB/sec for buffered reads.

hdparm -tT on an LV used in a domU, with hdparm being run from dom0, gives me about the same for cached reads, but only 60 MB/sec for buffered reads.

hdparm -tT on the same LV under the domU gives me 35-50 MB/sec.

So I'm getting a pretty drastic performance hit from LVM (never noticed that before... guess I should look into it... maybe it's an artefact of hdparm) and a small hit from Xen blockfront/back.

James
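One way to check whether that LVM gap is an hdparm artefact is a cache-bypassing sequential read of each device (a sketch, assuming GNU coreutils dd; the LV path is hypothetical):

# Raw array vs. LV: read 1 GB from each with the page cache bypassed
dd if=/dev/cciss/c0d0 of=/dev/null bs=1M count=1024 iflag=direct
dd if=/dev/mapper/yourvg-yourlv of=/dev/null bs=1M count=1024 iflag=direct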
Mitch Kelly
2008-Jan-26 05:18 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
DL380 Gen3, HP Smart Array 5i with 64 MB cache.

RAID 5 array, 4x 36 GB 10k Ultra320:

/dev/cciss/c0d0:
 Timing cached reads:   784 MB in 2.00 seconds = 392.13 MB/sec
 Timing buffered disk reads:  240 MB in 3.01 seconds = 79.78 MB/sec

RAID 1 array, 2x 146 GB 10k Ultra320:

/dev/cciss/c0d1:
 Timing cached reads:   850 MB in 2.00 seconds = 424.73 MB/sec
 Timing buffered disk reads:  210 MB in 3.00 seconds = 69.97 MB/sec

----- Original Message -----
From: "James Harper" <james.harper@bendigoit.com.au>
Sent: Saturday, January 26, 2008 2:13 PM
Subject: RE: [Xen-users] Xen Disk I/O performance vs native performance

> What sort of DL380, and what sort of disks?
[snip]
Sami Dalouche
2008-Jan-26 20:08 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
> hdparm -tT on the same LV under the domU gives me 35-50 MB/sec.
>
> So I'm getting a pretty drastic performance hit from LVM (never noticed
> that before... guess I should look into it... maybe it's an artefact of
> hdparm) and a small hit from Xen blockfront/back.

I just tried a benchmark without Xen and without RAID, on a simple HD. The performance (hdparm + bonnie) of LVM and of a native partition is exactly the same. Do you have anything special that would make LVM slow?
Sami Dalouche
2008-Jan-27 14:05 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
Hi,

OK, so I did the hdparm + bonnie++ benchmarks on the server, and posted all the results to pastebin: http://pastebin.ca/874800

To put it in a nutshell: Xen I/O is even worse than I previously thought on a RAID array. In more detail:

1] LVM vs. native partition doesn't seem to have much effect, either with or without Xen, so I don't seem to run into the problems Mitch Kelly talked about in a previous post (compare test 1 vs. test 2, as well as test 7 vs. test 8). However, note that I haven't had a chance to compare LVM vs. native on top of RAID directly (this would mean re-installing my whole server, etc.); I only tested the performance on non-RAID disks.

2] Xen dom0 vs. a non-Xen kernel doesn't seem to show a huge performance difference. bonnie++ gave slightly different results, but I guess we can blame the experimental nature of the benchmarks (compare test 3 vs. test 5).

3] Xen dom0 vs. Xen domU performance is like day and night!!! (Compare test 4 vs. test 5.) Additionally, the hdparm results are completely different each time the command is run, so we can only really compare the bonnie++ results, which are not really consistent either. It's as if Xen over LVM over RAID just produces inconsistent results. Would there be a reason for that?

So, if we take, for instance, sequential input by block, we have 37 MB/s (worst case) / 74 MB/s (best case) vs. 173 MB/s. This makes Xen SUPER SUPER SUPER slow, at least twice as slow even when considering the best results I could get.

4] However, the weird thing is that if you look at test 6 vs. test 1, you can see that Xen over LVM without RAID does not seem to degrade performance. I should have used a bigger file size, but test 7 vs. test 8 definitely confirms the trend, even if the numbers are not completely exact in test 6 because of the small disk size...

So, conclusion: I am lost. On the one hand, Xen used on top of a RAID array is way slower; but used on top of a plain old disk, it seems to give pretty much native performance. Is there a potential link between Xen and RAID vs. non-RAID performance? Or maybe the problem is caused by Xen + RAID + LVM? What do you think?

Regards,
Sami Dalouche

On Fri, 2008-01-25 at 22:42 +0100, Sami Dalouche wrote:
> OK, so I'm currently running bonnie++ benchmarks and will report the
> results as soon as everything is finished.
[snip]
Sami Dalouche
2008-Jan-27 14:11 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
> 4] However, the weird thing is that if you look at test 6 vs. test 1,
> you can see that Xen over LVM without RAID does not seem to degrade
> performance. I should have used a bigger file size, but test 7 vs.
> test 8 definitely confirms the trend, even if the numbers are not
> completely exact in test 6 because of the small disk size...

Small FILE size, sorry.
John Madden
2008-Jan-28 14:48 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance
On Sat, 2008-01-26 at 21:08 +0100, Sami Dalouche wrote:
> Do you have anything special that would make LVM slow?

Uh, yeah: hdparm itself. Folks, please PLEASE don't use hdparm as an I/O benchmark! Regardless of its other problems, LVM isn't a disk, so hdparm is useless in measuring metrics for it.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
John Madden
2008-Jan-28 14:56 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
On Sun, 2008-01-27 at 15:05 +0100, Sami Dalouche wrote:
> On the one hand, Xen used on top of a RAID array is way slower; but used
> on top of a plain old disk, it seems to give pretty much native
> performance. Is there a potential link between Xen and RAID vs. non-RAID
> performance? Or maybe the problem is caused by Xen + RAID + LVM?

Unless we're talking software RAID here, I don't see the connection. I'm still concerned about memory sizes: you're running the benchmark with different write sizes based on your differing memory sizes. Can we shore that up? Limit your dom0 to 512 MB of memory and create a domU with 512 MB of memory. Create a RAID device and LVM it if you want, then mount it in dom0 and run `bonnie++ -u root -r 512`, then umount it, export it to the domU, mount it there, and run the same benchmark. (Note that in neither case is the device the root filesystem.)

With bonnie++, dd, etc., I've always seen near-native performance between dom0 and domU.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
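A sketch of that controlled comparison as shell steps (device names, mount points, and the use of `xm mem-set` for this Xen 3.1-era toolstack are assumptions; dom0 can also be capped with dom0_mem=512M on the hypervisor command line):

# --- in dom0 ---
xm mem-set Domain-0 512               # cap dom0 at 512 MB
mkfs.ext3 /dev/system/benchlv         # hypothetical test LV; not the root FS
mount /dev/system/benchlv /mnt/bench
bonnie++ -d /mnt/bench -u root -r 512
umount /mnt/bench

# export the same LV to a 512 MB domU via its config:
#   memory = 512
#   disk = ['phy:/dev/system/benchlv,hda2,w']

# --- in the domU ---
mount /dev/hda2 /mnt/bench
bonnie++ -d /mnt/bench -u root -r 512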