I've been getting disappointing results on IO reads when using Windows
guests. I'm usually getting around 70MB/s reads using HD Tune. When
installing James' PV drivers, the speed drops to about 20MB/s.

The host system is getting around 350MB/s reads. I'm wondering if there
really is such a significant slowdown, or if my tests are somehow
flawed. The 70MB/s persisted throughout my VM testing, whether I was
using Xen, XenSource or VMware Server.

So my questions are:

a. Is there a bottleneck somewhere which basically caps Windows disk
performance?
b. Is my testing methodology flawed? Is there a better Windows tool
that will measure performance? What about a Linux tool? I'm currently
using hdparm -t.

Russ
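For the Linux (dom0) side, a quick way to get a comparable
sequential-read figure is to pair hdparm with a dd run that bypasses
the page cache. This is just a sketch: /dev/sda and the sizes are
placeholders for whatever device actually backs the guest.

  # buffered sequential read, the same number hdparm -t reports
  hdparm -t /dev/sda

  # ~4GB sequential read with O_DIRECT, so the page cache does not
  # inflate the result
  dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct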
> I've been getting disappointing results on IO reads when using
> Windows guests. I'm usually getting around 70MB/s reads using HD
> Tune. When installing James' PV drivers, the speed drops to about
> 20MB/s.
>
> The host system is getting around 350MB/s reads. I'm wondering if
> there really is such a significant slowdown, or if my tests are
> somehow flawed. The 70MB/s persisted throughout my VM testing,
> whether I was using Xen, XenSource or VMware Server.
>
> So my questions are:
>
> a. Is there a bottleneck somewhere which basically caps Windows disk
> performance?
> b. Is my testing methodology flawed? Is there a better Windows tool
> that will measure performance? What about a Linux tool? I'm currently
> using hdparm -t.

If you are using the same tool in Windows then I'm comfortable that a
drop from 70MB/s to 20MB/s indicates a major problem somewhere.

Actually, since I haven't implemented any of the event log stuff yet as
per my last email, could you run DebugView from Sysinternals and see if
any logging comes up while you run your performance tool? Make sure
kernel logging is on - you can tell it is if you go into Disk
Management and a whole lot of debug info gets output.

James
James Harper wrote:
> If you are using the same tool in Windows then I'm comfortable that a
> drop from 70MB/s to 20MB/s indicates a major problem somewhere.
>
> Actually, since I haven't implemented any of the event log stuff yet
> as per my last email, could you run DebugView from Sysinternals and
> see if any logging comes up while you run your performance tool? Make
> sure kernel logging is on - you can tell it is if you go into Disk
> Management and a whole lot of debug info gets output.

I will try this on Monday. Do you know how to turn kernel logging on if
it's not?

This thread was actually more about why I'm getting such poor perceived
performance under the stock drivers. I understand that there may be
some issues to work out with the PV drivers, but I wouldn't have
expected such a huge performance hit across the board with the stock
drivers (and by across the board I mean Xen, XenServer and VMware).

Should I try testing with something like IOmeter? I'm really more
interested in db-type performance, so maybe a real benchmarking package
is necessary - the kind that Tom's Hardware uses to measure
performance. Does anyone know of any good ones?

Russ
Hi Russ,

I have to do disk performance testing all the time and I use IOmeter
heavily. Have a look at this article:

http://blogs.orcsweb.com/jeff/archive/2008/01/09/disk-usage-and-capacity.aspx

The author explains how you can use Microsoft (Sysinternals) Diskmon to
capture your existing workload (the one generated by the database
application you want to simulate) and then helps you model that
specific workload in IOmeter. Once configured, you can try out your
workload against various disk configurations / SANs etc. without having
to build your database application on top of every possible scenario.
It even provides a spreadsheet to convert all the captured data into
the parameters that you need to feed into IOmeter. It lets you get
through a bunch of "what ifs" quickly.

I recently used this to isolate a problem with an application, built on
Microsoft SQL 2000, that started to misbehave after a SAN upgrade. We
quickly determined that moving the SQL database from a SAN with
write-back caching enabled to one with write-through caching had a
massive performance impact based on the way the app writes data, not
the amount of data being written. It just happened to be the way the
app needed to use the disk.

My advice is that if you really want to know how something in
particular is going to perform in a new environment, capture the
workload characteristics and then play them back in that environment.

I recently sent some performance stats to the list for the PVGPL 0.9.5
drivers that include the workload I'm describing. I will run the same
tests again now that James has released 0.9.7, along with the IOmeter
configuration.

Also be careful when publishing performance stats not to violate
anyone's EULA. I know VMware, in particular, are sticky about that :-).

Best Regards
Geoff
Ruslan Sivak schrieb:
> I've been getting disappointing results on IO reads when using
> Windows guests. I'm usually getting around 70MB/s reads using HD
> Tune. When installing James' PV drivers, the speed drops to about
> 20MB/s.
> The host system is getting around 350MB/s reads. I'm wondering if
> there really is such a significant slowdown, or if my tests are
> somehow flawed. The 70MB/s persisted throughout my VM testing,
> whether I was using Xen, XenSource or VMware Server.
>
> a. Is there a bottleneck somewhere which basically caps Windows disk
> performance?
> b. Is my testing methodology flawed? Is there a better Windows tool
> that will measure performance? What about a Linux tool? I'm currently
> using hdparm -t.

Hi Russ,

first of all, let's clarify:

- Is this Mb as in "megabits per second" or MB as in "megabytes per
second"? You stated both in your original post in the 0.9.5 driver
thread, so I want to make sure...
- Did you use the same measurement tools in all 3 scenarios (dom0,
domU/stock, domU/gplpv)? If yes, how exactly? (I'm not aware of HD Tune
for Linux, so you most probably ran something else to get the dom0
numbers.)

I think the answer to #2 will explain the big difference between dom0
and domU performance (taking into account a bit of general overhead).

When it comes to the PV drivers' performance, this is an interesting
topic. I've seen posts pointing in the opposite direction (not exact
numbers, but the trend), so it would be interesting to find out what
causes this.

Best regards,
Christian
> When it comes to the PV drivers' performance, this is an interesting
> topic. I've seen posts pointing in the opposite direction (not exact
> numbers, but the trend), so it would be interesting to find out what
> causes this.

I just did a bit of testing myself... the '32K; 100% Read; 0% random'
test in IOmeter performs inconsistently when using the qemu drivers. I
tried it once and it gave me 35MB/s. I then tried the gplpv drivers and
they gave me around 23MB/s. I'm now trying the qemu drivers again and
they aren't getting past 19MB/s. I'm using an LVM snapshot at the
moment, which probably has something to do with the inconsistent
results...

I also tried fiddling with the '# of Outstanding I/Os', changing it to
16 (the maximum number of concurrent requests scsiport will give me).
For qemu, there was no change. For gplpv, my numbers went up to 66MB/s
(from 23MB/s). I'm a little unsure of how much trust to put in that
though, as hdparm in Dom0 gives me a maximum of 35MB/s on that LVM
device, so I can't quite figure out how an HVM DomU could be getting
better results than the hdparm baseline figure.

I'm just about to upload 0.9.8, which fixes a performance bug that
would cause a huge slowdown (IOmeter dropped from 23MB/s to 0.5MB/s :)
if too many outstanding requests were issued at once. It also prints
some statistics to the debug log (viewable via DebugView from
sysinternals.com) every 60 seconds, which may or may not be useful.

Unless the above bug was affecting things, and I am not sure that it
was, the reduction in performance may be due to the way that xenpci is
now notifying the child drivers (e.g. xenvbd and xennet) that an
interrupt has occurred. This should affect xennet equally, though. It
was changed with the wdf->wdm rewrite.

James
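One quick way to see how much the snapshot itself is costing is to
compare the origin LV with its snapshot from dom0. A sketch only - the
volume group and LV names below are placeholders:

  # a large gap between these two suggests the copy-on-write snapshot,
  # not the qemu/gplpv path, is behind the inconsistent numbers
  hdparm -t /dev/vg0/winguest
  hdparm -t /dev/vg0/winguest-snap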
Christian Tramnitz wrote:
> first of all, let's clarify:
> - Is this Mb as in "megabits per second" or MB as in "megabytes per
> second"? You stated both in your original post in the 0.9.5 driver
> thread, so I want to make sure...

I would think megabytes per second.

> - Did you use the same measurement tools in all 3 scenarios (dom0,
> domU/stock, domU/gplpv)? If yes, how exactly? (I'm not aware of HD
> Tune for Linux, so you most probably ran something else to get the
> dom0 numbers.)

In dom0 I used hdtune -t /dev/sda; in domU/stock and domU/gplpv I used
HD Tune.

> I think the answer to #2 will explain the big difference between dom0
> and domU performance (taking into account a bit of general overhead).
>
> When it comes to the PV drivers' performance, this is an interesting
> topic. I've seen posts pointing in the opposite direction (not exact
> numbers, but the trend), so it would be interesting to find out what
> causes this.

James mentioned something about something not being aligned properly.
That might fix it; on my dom0 I went from about 150MB/s with hdparm to
350MB/s after adjusting some parameters as per the 3ware site.

Russ
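For reference, the dom0-side tuning usually suggested for 3ware arrays
comes down to a couple of block-layer settings. This is only a sketch
with illustrative values - /dev/sda stands in for the array's block
device, and the vendor's own guidance for the specific controller
should take precedence:

  # raise the device read-ahead (the value is in 512-byte sectors)
  blockdev --setra 16384 /dev/sda

  # allow more requests to be queued for the controller
  echo 512 > /sys/block/sda/queue/nr_requests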
James Harper schrieb:
> I'm a little unsure of how much trust to put in that though, as
> hdparm in Dom0 gives me a maximum of 35MB/s on that LVM device, so I
> can't quite figure out how an HVM DomU could be getting better
> results than the hdparm baseline figure.

hdparm is not a good choice for actual hard disk performance testing
when RAID is involved (I assume you have your LVs on top of an array).
Try bonnie for real results, although I'm not sure those can be
directly compared to Windows IOmeter results...

On another topic, would you mind test-signing the drivers before
packaging them and distributing the certificate? This would make
testing on x64 2008/Vista a lot easier. I already added a note in the
wiki, but because the drivers are packaged now it's not so easy to
test-sign them yourself before they are used. If you distributed the
certificate with the drivers (imported before driver installation) and
checked for test-signing mode on x64, it would work on the fly...

If you ever hit something near 1.0 we should start thinking about a
class 3 code signing certificate at some point ;-)

Best regards,
Christian
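A minimal bonnie++ run against a filesystem on the array might look
like the line below. A sketch only: the target directory, size and
user are placeholders, and the file size should be at least twice the
machine's RAM so the page cache doesn't dominate the result.

  # sequential and random I/O against /mnt/test, using an 8GB data set
  bonnie++ -d /mnt/test -s 8g -u nobody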
Christian Tramnitz wrote:
> hdparm is not a good choice for actual hard disk performance testing
> when RAID is involved (I assume you have your LVs on top of an
> array). Try bonnie for real results, although I'm not sure those can
> be directly compared to Windows IOmeter results...

Isn't IOmeter cross-platform? I downloaded it on Linux, but I couldn't
figure out the commands to make it run.

Russ
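On Linux, IOmeter ships only the dynamo worker; the GUI still runs on
a Windows machine and the two are pointed at each other. Roughly, and
with placeholder hostnames:

  # on the Linux box: start the worker, telling it where the IOmeter
  # GUI is running (-i) and which local name/address to register (-m)
  ./dynamo -i iometer-gui-host -m linux-box-hostname

The worker then appears as a manager in the IOmeter GUI on the Windows
side, where the access specifications are configured and the results
are collected.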
Ruslan Sivak schrieb:
> I would think megabytes per second.

350 MB/s sounds really fast, unless we are talking about cached
throughput.

> In dom0 I used hdtune -t /dev/sda

I still haven't found HD Tune for Linux - are you sure you are not
running hdparm instead of hdtune?

> James mentioned something about something not being aligned properly.
> That might fix it; on my dom0 I went from about 150MB/s with hdparm
> to 350MB/s after adjusting some parameters as per the 3ware site.

Which 3ware is this, and what RAID mode with how many disks are you
using? Unless you are running some RAID0 on multiple 10k SAS disks, I
doubt that 350MB/s is anywhere near the actual throughput.

Best regards,
Christian
> Which 3ware is this, and what RAID mode with how many disks are you
> using? Unless you are running some RAID0 on multiple 10k SAS disks, I
> doubt that 350MB/s is anywhere near the actual throughput.
>
> Best regards,
> Christian

Even then it's not. I have a RAID0 with 8 10k Raptors on a 3ware
controller and get nowhere near that, even with all the optimizations
turned on. That figure is probably coming out of the cache.

Grant
Christian Tramnitz wrote:
> 350 MB/s sounds really fast, unless we are talking about cached
> throughput.

I would expect cached throughput to be much, much faster.

> I still haven't found HD Tune for Linux - are you sure you are not
> running hdparm instead of hdtune?

Sorry, I did mean hdparm.

> Which 3ware is this, and what RAID mode with how many disks are you
> using? Unless you are running some RAID0 on multiple 10k SAS disks, I
> doubt that 350MB/s is anywhere near the actual throughput.

I'm using a 3ware 9650 with 4 10k VelociRaptors in RAID5. I doubt that
it's reading out of cache; I'm probably getting sequential reads from
the outside of the disks, with readahead and all that fun stuff.

Russ
Grant McWilliams wrote:
> Even then it's not. I have a RAID0 with 8 10k Raptors on a 3ware
> controller and get nowhere near that, even with all the optimizations
> turned on. That figure is probably coming out of the cache.

Which 3ware controller, which Raptors, and what kind of speeds are you
getting? I noticed that I seemed to be getting better speeds off 500GB
7200rpm WDs than off 300GB 10k VelociRaptors. I think that's mostly
because the 500GB drives pack more data per platter, so they probably
beat the Raptors in sequential read tests. If you are using the older
Raptors, i.e. the 150GB or 74GB models, I wouldn't expect them to do
that well in a sequential read test.

Also, for some reason I was seeing lower speeds in RAID0 than in RAID10
before I did the optimizations, and my RAID5 speeds more than doubled
after the optimizations, so readahead probably plays a big role in
these tests. I would expect at least similar performance in Windows
under HD Tune, as I would think it is doing a sequential test as well.

I will see if I can set up IOmeter tomorrow and run some tests.

Russ
Ruslan Sivak schrieb:
> I'm using a 3ware 9650 with 4 10k VelociRaptors in RAID5. I doubt
> that it's reading out of cache; I'm probably getting sequential reads
> from the outside of the disks, with readahead and all that fun stuff.

Russ, if you are talking about getting maximum speed out of a 3ware,
including optimizing read-ahead and caching/buffering, that's fine, but
you started this topic comparing dom0 vs. domU performance and asking
whether your testing method is flawed... it is! hdparm will give you a
nice number, but that has little in common with the real throughput you
will get when running tools like bonnie++, iozone or IOmeter. The
theoretical maximum I would expect from your setup is somewhere between
200-240MB/s sustained read transfer rate and 150-180MB/s write transfer
rate. And transfer rate is not everything; when talking about
performance you may also want to look at I/O operations per second.

I would strongly suggest that you use the exact same tools in dom0 and
domU to measure the performance difference, and only compare those
numbers if you want to draw any domU/GPLPV vs. domU/stock vs. dom0
conclusions.

That being said, I'm very interested in the results. I will also do
some testing with IOmeter in dom0 and domU, PV and HVM (stock and
GPLPV). Maybe we can ask the community for some feedback regarding
common IOmeter test parameters that we should use, and then gather the
results to compare.

Best regards,
Christian
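As a starting point for that kind of like-for-like comparison, one
option is an iozone run with identical parameters in dom0 and in the
guests (iozone would need a Windows build, or IOmeter on the Windows
side). A sketch only - the 2g file size, 32k record size and file path
are arbitrary placeholders:

  # sequential write (-i 0) and read (-i 1) of a 2GB file in 32KB
  # records, with O_DIRECT (-I) so the page cache doesn't inflate
  # the numbers
  iozone -s 2g -r 32k -i 0 -i 1 -I -f /mnt/test/iozone.tmp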