James Harper
2008-Apr-26 06:49 UTC
[Xen-devel] Release 0.8.9 of GPL PV drivers for Windows
Here's the latest version. Took me ages to get to the bottom of what
turned out to be a pretty simple problem - windows can give us more
pages in a packet than Linux can handle, but Linux (netback) doesn't
complain about it, it just creates corrupt packets :(

Download from http://www.meadowcourt.org/WindowsXenPV-0.8.9.zip

I'll probably do another release very shortly, mainly to reduce memory
consumption on the tx side so that more interfaces can run at once.

From the testing I've done, on a UP windows DomU, with iperf options
'-l 1M -w 1M', with the iperf server running in Dom0, I get TX throughput
of about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When
I tried it under SMP it worked, but the performance was horrible. Probably
best if you don't run it under SMP for the moment :)

I did have a test performance of 2.5Gbits/second, but now that I have to
copy the windows buffers into my own buffers to reduce page usage, I
seem to only be able to get about 1.5Gbits/second out of it... This kind
of makes sense given that DomU to Dom0 network performance is going to
be CPU and Memory bandwidth bound.

James
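The iperf runs being described here look roughly like this (hostnames are placeholders, iperf 1.7.x assumed; the server runs in Dom0 and the client in the Windows DomU):

    # in Dom0: start the iperf server
    iperf -s

    # in the Windows DomU: 1 MByte read length and TCP window,
    # measuring DomU -> Dom0 (TX) throughput
    iperf -c dom0-hostname -l 1M -w 1M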
Emre ERENOGLU
2008-Apr-26 10:31 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
Hi James, great news. As I'm on vacation, I won't be able to test it, but I hope it works well.

Did you consider sending an email to the kernel dev guys to inform them about this issue? Maybe it's by design, maybe there's something that can be improved.

Emre

On Sat, Apr 26, 2008 at 9:49 AM, James Harper <james.harper@bendigoit.com.au> wrote:
> Here's the latest version. Took me ages to get to the bottom of what
> turned out to be a pretty simple problem - windows can give us more
> pages in a packet than Linux can handle but Linux (netback) doesn't
> complain about it, it just creates corrupt packets :(
>
> Download from http://www.meadowcourt.org/WindowsXenPV-0.8.9.zip
>
> I'll probably do another release very shortly, mainly to reduce memory
> consumption on the tx side so that more interfaces can run at once.
>
> From the testing I've done, on a UP windows DomU, with iperf options '-l
> 1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
> about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When I
> tried it under SMP it worked, but the performance was horrible. Probably
> best if you don't run it under SMP for the moment :)
>
> I did have a test performance of 2.5Gbits/second, but now that I have to
> copy the windows buffers into my own buffers to reduce page usage, I
> seem to only be able to get about 1.5Gbits/second out of it... This kind
> of makes sense given that DomU to Dom0 network performance is going to
> be CPU and Memory bandwidth bound.
>
> James

--
Emre Erenoglu
erenoglu@gmail.com
James Harper
2008-Apr-26 13:41 UTC
[Xen-devel] RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
> Did you consider sending an email to the kernel dev guys to inform them
> about this issue? maybe it's by design, maybe there's something that can
> be improved.

Not really. Changing it would have a fairly high impact (eg Dom0 and all Linux DomU's on the system would have to support the higher number, or all sorts of strange things would happen). The only thing you could think about defining as a bug is that netback didn't complain about it anywhere, it just pretended everything was okay and put corrupt packets onto the bridge.

James
jim burns
2008-Apr-26 15:21 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Saturday April 26 2008 02:49:59 am James Harper wrote:
> I did have a test performance of 2.5Gbits/second, but now that I have to
> copy the windows buffers into my own buffers to reduce page usage, I
> seem to only be able to get about 1.5Gbits/second out of it... This kind
> of makes sense given that DomU to Dom0 network performance is going to
> be CPU and Memory bandwidth bound.

What was the downside of the higher page usage? Looking forward to testing it (or at least loading it :-).
Hello Jim,

jim burns wrote:
> Now that 'PCI BUS' still has the Microsoft driver, and not xenhide, what is
> the recommended uninstall procedure, prior to installing a new version?
> Thanx.

I used the uninstall.bat and then the install.bat. Worked quite well on my system. I don't know if this is the recommended procedure.

Greetz
Age_M
jim burns
2008-Apr-26 15:23 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Saturday April 26 2008 02:49:59 am James Harper wrote:
> Here's the latest version.

Now that 'PCI BUS' still has the Microsoft driver, and not xenhide, what is the recommended uninstall procedure, prior to installing a new version? Thanx.
jim burns
2008-Apr-26 20:23 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Saturday April 26 2008 11:22:14 am Age_M wrote:
> I used the uninstall.bat and then the install.bat. Worked quite good on
> my system.
> I don't know if this is the recommended procedure.

Doh! I didn't notice that uninstall.bat is new in 0.8.9. Thanx.
James Harper
2008-Apr-27 00:29 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
The problem isn't so much page usage as grant table usage. There is a finite limit and it was being reached too quickly, eg with more than a few network interfaces loaded.

-----Original Message-----
From: "jim burns" <jim_burn@bellsouth.net>
Sent: 27/04/08 1:22:53 AM
To: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows

On Saturday April 26 2008 02:49:59 am James Harper wrote:
> I did have a test performance of 2.5Gbits/second, but now that I have to
> copy the windows buffers into my own buffers to reduce page usage, I
> seem to only be able to get about 1.5Gbits/second out of it... This kind
> of makes sense given that DomU to Dom0 network performance is going to
> be CPU and Memory bandwidth bound.

What was the downside of the higher page usage? Looking forward to testing it (or at least loading it :-).
James Harper
2008-Apr-27 02:12 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
> jim burns wrote:
> > Now that 'PCI BUS' still has the Microsoft driver, and not xenhide, what is
> > the recommended uninstall procedure, prior to installing a new version?
> > Thanx.
>
> I used the uninstall.bat and then the install.bat. Worked quite good on
> my system.
> I don't know if this is the recommended procedure.

Hmmm... uninstall.bat is a work in progress and probably shouldn't have been included. I suspect that if you rebooted after running uninstall.bat but before running install.bat you would have had an unbootable system.

The problem is the way I attach xenhide.sys to the pci bus, which is via an AddReg in the .inf file. I think some simple .vbs would be able to clear the registry key, and the uninstall.bat file could just call that.

All in all though, I am still uncomfortable with the way I hide the qemu network and ata drivers from windows, and I keep telling myself that there must be a better way... everything I try works even worse than before though :)

James
jim burns
2008-Apr-27 07:16 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Saturday April 26 2008 10:12:33 pm James Harper wrote:
> Hmmm... uninstall.bat is a work in progress and probably shouldn't have
> been included. I suspect that if you rebooted after running
> uninstall.bat but before running install.bat you would have had an
> unbootable system.

Well, for the first time since 0.8.4, I got winxp to boot w/ /gplpv, using 0.8.9. I booted w/o /gplpv and then ran uninstall.bat. Because of the warning above, I ran install.bat right away w/o a reboot. The final screen came up and said all drivers were updated except xennet, which was 'Ready to Use'. Then I rebooted w/ /gplpv, and the Find New Hardware Wizard came up automatically and guided me through installing xennet w/o a hitch.

I had previously disabled all the new features in Device Manager's Xen Net Device Driver Properties' Advanced tab. (This was done by copying over the 0.8.4 files so I could boot w/o a BSOD.) An 'iperf -c dom0-name -t 60' came up with 27.3 Mbits/s. I then proceeded to turn on each feature one at a time in the Advanced tab, rebooting w/ /gplpv. Device Manager invariably hung after enabling each feature, which caused the reboot to hang. All measurements are with vcpus=2, unless noted otherwise.

After enabling Checksum Offload, iperf gave 30.5 Mb/s.

After also setting 61440 for Large Send Offload, iperf gave 25.3 Mb/s.

After also enabling Scatter/Gather, iperf gave 25.1 Mb/s.

So there are minor variations with and w/o the various options, but on the whole, much better than the last version I could test, 0.8.4.

From James' original post:
> From the testing I've done, on a UP windows DomU, with iperf options '-l
> 1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
> about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When I
> tried it under SMP it worked, but the performance was horrible. Probably
> best if you don't run it under SMP for the moment :)

Doing 'iperf -c dom0-name -l 1M -w 1M' gives 28.8 Mb/s, and reversing the direction (winxp as iperf server) gives 30.2 Mb/s.

Going down to vcpu=1, dom0 as server gives 27.1 Mb/s, and domu as server gives 35.7 Mb/s, so there is not a lot of difference between 1 and 2 vcpus for me.

Nice improvements. I will test disk i/o w/ iometer later.
Florian Manschwetus
2008-Apr-27 10:23 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
James Harper wrote:
> Here's the latest version. Took me ages to get to the bottom of what
> turned out to be a pretty simple problem - windows can give us more
> pages in a packet than Linux can handle but Linux (netback) doesn't
> complain about it, it just creates corrupt packets :(
>
> Download from http://www.meadowcourt.org/WindowsXenPV-0.8.9.zip

Sounds nice. When I get the time these days, I'll sign this stuff to make it acceptable for my 64-bit 2008. Then let's have a look.

Florian

> I'll probably do another release very shortly, mainly to reduce memory
> consumption on the tx side so that more interfaces can run at once.
>
> From the testing I've done, on a UP windows DomU, with iperf options '-l
> 1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
> about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When I
> tried it under SMP it worked, but the performance was horrible. Probably
> best if you don't run it under SMP for the moment :)
>
> I did have a test performance of 2.5Gbits/second, but now that I have to
> copy the windows buffers into my own buffers to reduce page usage, I
> seem to only be able to get about 1.5Gbits/second out of it... This kind
> of makes sense given that DomU to Dom0 network performance is going to
> be CPU and Memory bandwidth bound.
>
> James
Pasi Kärkkäinen
2008-Apr-27 10:24 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Sun, Apr 27, 2008 at 03:16:09AM -0400, jim burns wrote:
> On Saturday April 26 2008 10:12:33 pm James Harper wrote:
> > Hmmm... uninstall.bat is a work in progress and probably shouldn't have
> > been included. I suspect that if you rebooted after running
> > uninstall.bat but before running install.bat you would have had an
> > unbootable system.
>
> Well, for the first time since 0.8.4, I got winxp to boot w/ /gplpv, using
> 0.8.9. I booted w/o /gplpv and then ran uninstall.bat. Because of the warning
> above, I ran install.bat right away w/o a reboot. The final screen came up
> and said all drivers were updated except xennet, which was 'Ready to Use'.
> Then I rebooted w/ /gplpv, and the Find New Hardware Wizard came up
> automatically and guided me through installing xennet w/o a hitch.
>
> I had previously disabled all the new features in Device Manager's Xen Net
> Device Driver Properties' Advanced Tab. (This was done by copying over the
> 0.8.4 files so I could boot w/o a BSOD.) An 'iperf -c dom0-name -t 60' came
> up with 27.3 Mbits/s. I then proceeded to turn on each feature one at a time
> in the Advanced tab, and rebooting w/ /gplpv. Device Manager invariably hung
> after enabling each feature, which caused the reboot to hang. All
> measurements are with vcpus=2, unless noted otherwise.
>
> After adding enabling Checksum Offload, iperf gave 30.5 Mb/s.
>
> After adding setting 61440 for Large Send Offload, iperf gave 25.3 Mb/s.
>
> After adding enabling Scatter/Gather, iperf gave 25.1 Mb/s.
>
> So there are minor variations with and w/o the various options, but on the
> whole, much better than the last version I could test, 0.8.4.

Just to make sure, you mean megabytes (MB/sec) instead of megabits per second (Mb/sec)?

25 - 30 Mbit/sec would be really slow..

> From James' original post:
> > From the testing I've done, on a UP windows DomU, with iperf options '-l
> > 1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
> > about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When I
> > tried it under SMP it worked, but the performance was horrible. Probably
> > best if you don't run it under SMP for the moment :)
>
> Doing 'iperf -c dom0-name -l 1M -w 1M' gives 28.8 Mb/s, and reversing the
> direction (winxp as iperf server) gives 30.2 Mb/s.
>
> Going down to vcpu=1, dom0 as server gives 27.1 Mb/s, and domu as server gives
> 35.7 Mb/s, so there is not a lot of difference between 1 and 2 vcpus for me.
>
> Nice improvements. I will test disk i/o w/ iometer later.

IOmeter results would be really nice!

-- Pasi
James Harper
2008-Apr-27 11:36 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
> On Saturday April 26 2008 10:12:33 pm James Harper wrote:
> > Hmmm... uninstall.bat is a work in progress and probably shouldn't have
> > been included. I suspect that if you rebooted after running
> > uninstall.bat but before running install.bat you would have had an
> > unbootable system.
>
> Well, for the first time since 0.8.4, I got winxp to boot w/ /gplpv, using
> 0.8.9. I booted w/o /gplpv and then ran uninstall.bat. Because of the
> warning above, I ran install.bat right away w/o a reboot. The final screen
> came up and said all drivers were updated except xennet, which was 'Ready
> to Use'. Then I rebooted w/ /gplpv, and the Find New Hardware Wizard came
> up automatically and guided me through installing xennet w/o a hitch.
>
> I had previously disabled all the new features in Device Manager's Xen Net
> Device Driver Properties' Advanced Tab. (This was done by copying over the
> 0.8.4 files so I could boot w/o a BSOD.) An 'iperf -c dom0-name -t 60'
> came up with 27.3 Mbits/s. I then proceeded to turn on each feature one at
> a time in the Advanced tab, and rebooting w/ /gplpv. Device Manager
> invariably hung after enabling each feature, which caused the reboot to
> hang.

Ouch. I was confident I'd fixed all of those hanging problems. I can enable and disable those settings all I like and it never misses a beat - the driver shuts itself down and restarts without a problem. Can you tell me more about your test system?

> All measurements are with vcpus=2, unless noted otherwise.
>
> After adding enabling Checksum Offload, iperf gave 30.5 Mb/s.
>
> After adding setting 61440 for Large Send Offload, iperf gave 25.3 Mb/s.
>
> After adding enabling Scatter/Gather, iperf gave 25.1 Mb/s.

Scatter/Gather disabled isn't really a tested configuration, but the others should work, although LSO disabled when Dom0 has it enabled could be problematic... What system are you running? My main test machine is a 1.8GHz dual core AMD 1210 system, and I get 1GBit/sec TX and 200MBit/sec RX with all features enabled. So your figures are lower than I'd expect, given that any system that supports HVM should be at least comparable to my test machine in terms of horsepower... With LSO disabled my testing drops to 283 TX and 6 RX...

> So there are minor variations with and w/o the various options, but on the
> whole, much better than the last version I could test, 0.8.4.

Well... that's something :)

> Doing 'iperf -c dom0-name -l 1M -w 1M' gives 28.8 Mb/s, and reversing the
> direction (winxp as iperf server) gives 30.2 Mb/s.

You can achieve the same results with -r to do a TX test followed by an RX (eg client/server roles reverse) or -d to do the TX and RX tests concurrently.

> Going down to vcpu=1, dom0 as server gives 27.1 Mb/s, and domu as server
> gives 35.7 Mb/s, so there is not a lot of difference between 1 and 2 vcpus
> for me.

With all features enabled?

> Nice improvements. I will test disk i/o w/ iometer later.

I'll be interested in the results, but the disk stuff hasn't changed in a while.

Thanks for the feedback!

James
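Concretely, the -r and -d runs mentioned above would look roughly like this (hostname is a placeholder, iperf 1.7.x assumed):

    # TX test first, then client/server roles reverse for the RX test
    iperf -c dom0-hostname -l 1M -w 1M -r

    # TX and RX tests run concurrently
    iperf -c dom0-hostname -l 1M -w 1M -d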
James Harper
2008-Apr-27 11:38 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
> On Sun, Apr 27, 2008 at 03:16:09AM -0400, jim burns wrote:
> > On Saturday April 26 2008 10:12:33 pm James Harper wrote:
> > > Hmmm... uninstall.bat is a work in progress and probably shouldn't
> > > have been included. I suspect that if you rebooted after running
> > > uninstall.bat but before running install.bat you would have had an
> > > unbootable system.
> >
> > Well, for the first time since 0.8.4, I got winxp to boot w/ /gplpv,
> > using 0.8.9. I booted w/o /gplpv and then ran uninstall.bat. Because of
> > the warning above, I ran install.bat right away w/o a reboot. The final
> > screen came up and said all drivers were updated except xennet, which
> > was 'Ready to Use'. Then I rebooted w/ /gplpv, and the Find New Hardware
> > Wizard came up automatically and guided me through installing xennet
> > w/o a hitch.
> >
> > I had previously disabled all the new features in Device Manager's Xen
> > Net Device Driver Properties' Advanced Tab. (This was done by copying
> > over the 0.8.4 files so I could boot w/o a BSOD.) An 'iperf -c dom0-name
> > -t 60' came up with 27.3 Mbits/s. I then proceeded to turn on each
> > feature one at a time in the Advanced tab, and rebooting w/ /gplpv.
> > Device Manager invariably hung after enabling each feature, which caused
> > the reboot to hang. All measurements are with vcpus=2, unless noted
> > otherwise.
> >
> > After adding enabling Checksum Offload, iperf gave 30.5 Mb/s.
> >
> > After adding setting 61440 for Large Send Offload, iperf gave 25.3 Mb/s.
> >
> > After adding enabling Scatter/Gather, iperf gave 25.1 Mb/s.
> >
> > So there are minor variations with and w/o the various options, but on
> > the whole, much better than the last version I could test, 0.8.4.
>
> Just to make sure, you mean megabytes (MB/sec) instead of megabits per
> second (Mb/sec)?
>
> 25 - 30 Mbit/sec would be really slow..

I hadn't thought of that when I wrote my previous email. If he did mean Mbytes as opposed to the Mbits that I assumed then it makes a lot more sense. Iperf gives the results in Mbits so that is what I assumed...

James
Scott McKenzie
2008-Apr-27 14:32 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Sat, 26 Apr 2008 16:49:59 +1000, "James Harper" <james.harper@bendigoit.com.au> wrote:
> Here's the latest version. Took me ages to get to the bottom of what
> turned out to be a pretty simple problem - windows can give us more
> pages in a packet than Linux can handle but Linux (netback) doesn't
> complain about it, it just creates corrupt packets :(
>
> Download from http://www.meadowcourt.org/WindowsXenPV-0.8.9.zip
>
> I'll probably do another release very shortly, mainly to reduce memory
> consumption on the tx side so that more interfaces can run at once.
>
> From the testing I've done, on a UP windows DomU, with iperf options '-l
> 1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
> about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When I
> tried it under SMP it worked, but the performance was horrible. Probably
> best if you don't run it under SMP for the moment :)
>
> I did have a test performance of 2.5Gbits/second, but now that I have to
> copy the windows buffers into my own buffers to reduce page usage, I
> seem to only be able to get about 1.5Gbits/second out of it... This kind
> of makes sense given that DomU to Dom0 network performance is going to
> be CPU and Memory bandwidth bound.
>
> James

Hi James

I've tested this release on my system (fresh install) and I'm still getting the duplicate disk problem when I boot with the /gplpv option.

There has been some talk lately that this may be a fault of the Red Hat kernel. FWIW I'm running CentOS 5.1 64bit, kernel is 2.6.18-53.1.14.el5xen.

Cheers
Scott
jim burns
2008-Apr-27 16:10 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Sunday April 27 2008 07:36:46 am James Harper wrote:
> Ouch. I was confident I'd fixed all of those hanging problems. I can
> enable and disable those settings all I like and it never misses a beat
> - the driver shuts itself down and restarts without a problem. Can you
> tell me more about your test system?

From a previous benchmark post, 'Release 0.8.0 of GPL PV Drivers for Windows':

Equipment: core duo 2300, 1.66ghz each, 2M, sata drive configured for UDMA/100
System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.3 [updated from 3.1.0-rc7], dom0 2.6.21
Tested hvm: XP Pro SP2, 2002 w/512M

Method: The version tested was 1.7.0, to avoid having to apply the kernel patch that comes with 2.0.2. The binaries downloaded were from the project homepage http://dast.nlanr.net/Projects/Iperf/#download. For linux, I chose the 'Linux libc 2.3' binary and (on fc8 at least) I still had to install the compat-libstdc++-33 package to get it to run.

> Scatter/Gather disabled isn't really a tested configuration, but the
> others should work, although LSO disabled when Dom0 has it enabled could
> be problematic...

It isn't:

[822] > ethtool -k peth0
Offload parameters for peth0:
Cannot get device rx csum settings: Operation not supported
Cannot get device tx csum settings: Operation not supported
Cannot get device scatter-gather settings: Operation not supported
Cannot get device tcp segmentation offload settings: Operation not supported
Cannot get device udp large send offload settings: Operation not supported
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: off
jimb@Insp6400 04/27/08 11:15AM:~
[823] > lspci | grep Broad
03:00.0 Ethernet controller: Broadcom Corporation BCM4401-B0 100Base-TX (rev 02)

and thus the b44 driver.

> > Doing 'iperf -c dom0-name -l 1M -w 1M' gives 28.8 Mb/s, and reversing the
> > direction (winxp as iperf server) gives 30.2 Mb/s.
>
> You can achieve the same results with -r to do a TX test followed by an
> RX (eg client/server roles reverse) or -d to do the TX and RX tests
> concurrently.

Ok: 'iperf -c dom0-name -l 1M -w 1M -r' gave me a BSOD:

DRIVER_IRQL_NOT_LESS_OR_EQUAL
[...]
*** STOP: 0x000000D1 (0x8945365C,0x00000002,0x00000001,0xF87EE6B4)
*** xennet.sys - Address F87EE6B4 base at F87EA000, DateStamp 4812c73b

Rebooting and trying again gave me minor variations in the STOP line:

*** STOP: 0x000000D1 (0x895D973C,0x00000002,0x00000001,0xF87FE6B4)

The dom0 output was:

[501] > iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 5001 connected with 192.168.1.102 port 1038
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.2 sec  30.7 MBytes  25.3 Mbits/sec
------------------------------------------------------------
Client connecting to 192.168.1.102, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  4] local 192.168.1.100 port 33125 connected with 192.168.1.102 port 5001
Waiting for server threads to complete. Interrupt again to force quit.

Running a higher window size on the server also gave:

[502] > iperf -s -l 1M -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 256 KByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------

and so was not pursued further. 'iperf -c dom0-name -l 256k -w 256k -r' gave on dom0:

[505] > iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.100 port 5001 connected with 192.168.1.102 port 1037
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  41.5 MBytes  34.6 Mbits/sec
------------------------------------------------------------
Client connecting to 192.168.1.102, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 256 KByte)
------------------------------------------------------------
[  4] local 192.168.1.100 port 53348 connected with 192.168.1.102 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  38.8 MBytes  32.3 Mbits/sec

Isn't debugging fun :-)

> > Going down to vcpu=1, dom0 as server gives 27.1 Mb/s, and domu as server
> > gives 35.7 Mb/s, so there is not a lot of difference between 1 and 2 vcpus
> > for me.
>
> With all features enabled?

Yes, I left all features enabled after getting them where a new install would put them.

> > Nice improvements. I will test disk i/o w/ iometer later.
>
> I'll be interested in the results, but the disk stuff hasn't changed in
> a while.

I kind of thought so, since you haven't mentioned it, but I'll post the results later anyway.
jim burns
2008-Apr-27 17:44 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Sunday April 27 2008 01:09:57 pm you wrote:
> That's really slow.. how much do you get between pv linux domU and dom0?

Currently 580 Mb/s. I really haven't done anything to my Fedora pv domu that would affect connectivity since the 0.8.0 benchmarks, where I reported 1563 Mb/s. Yeah, I know - a step backwards, but still more than I need. (The only Fedora pv change was updating to Rawhide. Dom0 was updated from xen.gz 3.1.0-rc7 to 3.1.3. Xen is still 3.1.2.)

We already covered this ground on my system's performance in that same thread, 'Release 0.8.0 of GPL PV Drivers for Windows', where you had me test a bunch of settings.
Pasi Kärkkäinen
2008-Apr-28 04:42 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Sun, Apr 27, 2008 at 01:44:04PM -0400, jim burns wrote:
> On Sunday April 27 2008 01:09:57 pm you wrote:
> > That's really slow.. how much do you get between pv linux domU and dom0?
>
> Currently 580 Mb/s. I really haven't done anything to my Fedora pv domu that
> would affect connectivity since the 0.8.0 benchmarks, where I reported 1563
> Mb/s. Yeah, I know - a step backwards, but still more than I need. (The only
> Fedora pv change was updating to Rawhide. Dom0 was updated from xen.gz
> 3.1.0-rc7 to 3.1.3. Xen is still 3.1.2.)
>
> We already covered this ground on my system's performance in that same
> thread 'Release 0.8.0 of GPL PV Drivers for Windows', where you had me test a
> bunch of settings.

Hmm.. so hvm winxp with pv drivers is around 20x slower for you than pv linux domU..

btw did you have other domains running while your tests were running?

Have you tried VMware btw? Would be nice to see results with other virtualization software too..

-- Pasi
jamesshirley
2008-Apr-28 05:02 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
Scott McKenzie-3 wrote:
> I've tested this release on my system (fresh install) and I'm still
> getting the duplicate disk problem when I boot with the /gplpv option.
>
> There has been some talk lately that this may be a fault of the Red Hat
> kernel. FWIW I'm running CentOS 5.1 64bit, kernel is
> 2.6.18-53.1.14.el5xen.

Scott,

I haven't tried this latest version, but it's good (in a bad way!) to hear someone else having the same duplication issues I've been having. I'm running rhel51, which is identical? to centos51..

So it sounds like a versioning issue with xen/dom0 kernel in rhel51.

When I get the time, I might attempt using rhel beta 5.2 as the dom0.

cheers,
James
James Harper
2008-Apr-28 05:08 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
> Scott McKenzie-3 wrote:
> > I've tested this release on my system (fresh install) and I'm still
> > getting the duplicate disk problem when I boot with the /gplpv option.
> >
> > There has been some talk lately that this may be a fault of the Red Hat
> > kernel. FWIW I'm running CentOS 5.1 64bit, kernel is
> > 2.6.18-53.1.14.el5xen.
>
> Scott,
>
> I havent tried this latest version, but its good (in a bad way!) to hear
> some else having the same duplication issues i've been having. I'm running
> rhel51 which is identical? to centos51..
>
> So sounds like a versioning issue with xen/dom0 kernel in rhel51
>
> When I get the time, I might attempt using rhel beta 5.2 as the dom0
>
> cheers,

Scott/James,

Would it be possible for me to get remote desktop (port 3389) access to one of your systems? I would like to check a few things.

Thanks

James
james.shirley@westernpower.com.au
2008-Apr-28 05:16 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
James,

Nope, there are at least 4 firewalls that will stop you.. I can however do some debugging if you like?

cheers
James
James Harper
2008-Apr-28 05:34 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
> James,
>
> Nope, there are at least 4 firewalls that will stop you..

:)

> I can however do some debugging if you like?

I'll send you a list of the stuff I want to know.

Thanks

James
jim burns
2008-Apr-28 06:18 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Monday April 28 2008 12:42:57 am Pasi Kärkkäinen wrote:
> Hmm.. so hvm winxp with pv drivers is around 20x slower for you than pv
> linux domU..

Hence my interest in the performance of the PV drivers. I'm resigned to *some* slowdown in an hvm domain where every paging request is a software trap. Actually, my biggest complaint is the speed of the video 'adapter'. Anything higher than DSL speeds on the net 'adapter' is acceptable, and James is already well past that. A little less sluggishness would be nice on the 'disk' also, though I realize there are limits to file backed vbds.

> btw did you have other domains running while your tests were running?

Just an idle Fedora vm. Dom0 wasn't doing anything in particular also.

> Have you tried VMware btw? Would be nice to see results with other
> virtualization software too..

I played with VMware Player before VMware Workstation was free. Was frustrated by the lack of Windows appliances that only ran on Windows. My whole interest in virtualization is so I don't have to keep 'hitting the reset button', and I have that now with Xen. Kvm also didn't support Windows at the time. I played with Uml at the beginning, but SuSE was starting to phase it out, and I could never get it to work. VMware is probably an acceptable alternative nowadays, as is qemu and variants, but I've chosen Xen. This gives me more than enough topics to learn about - iscsi, HA, PV drivers, PCI passthrough, etc., and I probably don't have the time to devote to another product now.
Pasi Kärkkäinen
2008-Apr-28 06:39 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Mon, Apr 28, 2008 at 02:18:00AM -0400, jim burns wrote:
> On Monday April 28 2008 12:42:57 am Pasi Kärkkäinen wrote:
> > Hmm.. so hvm winxp with pv drivers is around 20x slower for you than pv
> > linux domU..
>
> Hence my interest in the performance of the PV drivers. I'm consigned to
> *some* slow down in an hvm domain where every paging request is a software
> trap. Actually, my biggest complaint is the speed of the video 'adapter'.
> Anything higher than DSL speeds on the net 'adapter' is acceptable, and James
> is already well past that. A little less sluggishness would be nice on
> the 'disk' also, though I realize there are limits to file backed vbds.
>
> > btw did you have other domains running while your tests were running?
>
> Just an idle Fedora vm. Dom0 wasn't doing anything in particular also.

OK.

> > Have you tried VMware btw? Would be nice to see results with other
> > virtualization software too..
>
> I played with Vmware Player before Vmware Workstation was free. Was frustrated
> by the lack of Windows appliances that only ran on Windows. My whole interest
> in virtualization is so I don't have to keep 'hitting the reset button', and
> I have that now with Xen. Kvm also didn't support Windows at the time. I
> played with Uml at the beginning, but SuSE was starting to phase it out, and
> I could never get it to work. Vmware is probably an acceptable alternative
> nowadays, as is qemu and variants, but I've chosen Xen. This gives me more
> than enough topics to learn about - iscsi, HA, PV drivers, PCI passthrough,
> etc., and I probably don't have the time to devote to another product now.

Yeah.. I was just wondering that it would be nice to get some performance numbers on the exact same hardware using vmware accelerated drivers.. to compare with xen windows gplpv drivers.

-- Pasi
Michele Castigliego
2008-Apr-30 08:04 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
Hi guys,

I've done a Windows ITA fresh install under Xen in a non production environment and installed the GPL PV drivers. Now I have duplicated (working) entries in my hardware list, but I think it's not correct..

If you are interested I can tell what I've done:

1. Install a Debian Etch + xen stable + qemu (I don't think it's really relevant)
2. Install Windows XP SP2 ITA under xen without (in)security updates from Bill at the moment
3. Downloaded and extracted 0.8.9 GPL PV drivers
4. Launched install.bat (I didn't build anything, is it correct?)
4a. It showed me a console which tells: Windows XP detected... Install..
    and a ShutdownMon.exe error message:
    Application error 0xc0000135 -> OK to close.
    Then all goes as expected.
5. Finally I've rebooted non gplpv, found new hardware, changed boot.ini, rebooted gplpv, and found duplicated entries..

Now I have two CDs, one QEMU and one XEN PV Scsi, both working, two network cards, same as above, two HD controllers but the HD is on the old QEMU's one..

I've disabled the Realtek NIC by hand and still have networking through XEN PV.

If there is something more I can tell you, ask.

Mic
Stephan Seitz
2008-Apr-30 23:22 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
Michele Castigliego wrote:
> Hi guys,
> I've done a Windos ITA fresh install under Xen in a non production
> envirement and installed the GPL PV drivers. Now I have duplicated
> (working) entries in my hardware list, but I think it's not correct..
>
> If you are interested I can tell what I've done:
>
> 1. Install a Debian Etch + xen stable + quemu (I don't think it's really
> relevant)
> 2. Install Windows XP SP2 ITA under xen without (in)security updates
> from Bill at the moment
> 3. Downloaded and extracted 0.8.9 GPL PV drivers
> 4. Launched install.bat (I didn't build anything, is it correct?)
> 4a. It showed me a console which tells: Windows XP detected... Install..
> and a ShotdownMon.exe error message:
> Application error 0xc0000135 -> OK to close.

This error occurs if the Microsoft .NET Framework 2.0 is not installed. shutdownmon needs this framework; the drivers themselves should work without it.

> Then all goes as expected.
> 5. Finally I've rebooted non gplpv, founded new hardware, changed
> boot.ini, rebooted gplpv, and founded duplicated entries..
>
> Now I have two CDs, one QEMU and one XEN PV Scsi, both working, two
> network cards, same as above, two HD controllers but the HD is on the
> old QEMU's one..

This is an often reported bug; AFAIK there is no real solution. In this situation you're at high risk of corrupting your domU filesystem. You shouldn't boot /gplpv if this occurs. I think it has to do with some xen / qemu-dm device namespace issue, as it depends on which version of xen you're running. It looks like internationalized versions of windows are hit more often by this bug.

> I've disabled the Realtek NIC by hand and still have networking throw
> XEN PV.

To use xen pv nics, I've always changed the vif = [...] from type=ioemu to type=netfront, and changed the mac address, as I found windows getting confused by different nics with the same mac.

> If there is something more I can tell you, ask.
>
> Mic

--
Stephan Seitz
Senior System Administrator

netz-haut e.K.
multimediale kommunikation
zweierweg 22
97074 würzburg

fon: +49 931 2876247
fax: +49 931 2876248
web: www.netz-haut.de

registriergericht: amtsgericht würzburg, hra 5054
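A minimal sketch of the vif change described above, as it might look in the domU's xm config file (MAC addresses and bridge name are placeholders):

    # before: qemu-dm emulates the NIC (shows up as a Realtek in Windows)
    vif = [ 'type=ioemu, mac=00:16:3e:xx:xx:01, bridge=xenbr0' ]

    # after: only the PV netfront device is offered, with a different MAC
    vif = [ 'type=netfront, mac=00:16:3e:xx:xx:02, bridge=xenbr0' ]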
Scott McKenzie
2008-May-04 08:02 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
Scott McKenzie wrote:
> On Sat, 26 Apr 2008 16:49:59 +1000, "James Harper"
> <james.harper@bendigoit.com.au> wrote:
> > Here's the latest version. Took me ages to get to the bottom of what
> > turned out to be a pretty simple problem - windows can give us more
> > pages in a packet than Linux can handle but Linux (netback) doesn't
> > complain about it, it just creates corrupt packets :(
> >
> > Download from http://www.meadowcourt.org/WindowsXenPV-0.8.9.zip
> >
> > I'll probably do another release very shortly, mainly to reduce memory
> > consumption on the tx side so that more interfaces can run at once.
> >
> > From the testing I've done, on a UP windows DomU, with iperf options '-l
> > 1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
> > about 1.5Gbits/second and RX throughput of about 0.5Gbits/second. When I
> > tried it under SMP it worked, but the performance was horrible. Probably
> > best if you don't run it under SMP for the moment :)
> >
> > I did have a test performance of 2.5Gbits/second, but now that I have to
> > copy the windows buffers into my own buffers to reduce page usage, I
> > seem to only be able to get about 1.5Gbits/second out of it... This kind
> > of makes sense given that DomU to Dom0 network performance is going to
> > be CPU and Memory bandwidth bound.
> >
> > James
>
> Hi James
>
> I've tested this release on my system (fresh install) and I'm still getting
> the duplicate disk problem when I boot with the /gplpv option.
>
> There has been some talk lately that this may be a fault of the Red Hat
> kernel. FWIW I'm running CentOS 5.1 64bit, kernel is
> 2.6.18-53.1.14.el5xen.
>
> Cheers
> Scott

I've just installed openSUSE on my system to test the drivers with their kernel and Xen version. I took a copy of my Windows HVM, booted it, installed 0.8.9, rebooted without /gplpv, rebooted with /gplpv, and I had two disk devices appearing in device manager. So it doesn't look like it's the dom0 kernel that's causing this problem.
jim burns
2008-May-05 04:47 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Sunday April 27 2008 12:10:35 pm Jim Burns wrote:
> > > Nice improvements. I will test disk i/o w/ iometer later.
> >
> > I'll be interested in the results, but the disk stuff hasn't changed in
> > a while.
>
> I kind of thought so, since you haven't mentioned it, but I'll post the
> results later anyway.

And here we go:

Equipment: core duo 2300, 1.66ghz each, 2M, sata drive configured for UDMA/100
System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.3, dom0 2.6.21
Tested hvm: XP Pro SP2, 2002 w/512M, file backed vbd on local disk, tested w/
iometer 2006-07-27 (1Gb \iobw.tst, 5min run) & iperf 1.7.0 (1 min run)

Previous iometer results with 0.8.4 (from the 'New binary release of GPL PV drivers for Windows' thread on 02/24, which was using xen.gz 3.1.0-rc7):

pattern 4k, 50% read, 0% random (dynamo is the windows or linux client doing the actual work)
dynamo on?  |  io/s  | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|  273.0 | 1.07 | 431.52            |    0             | 32.27
domu w/qemu |  251.6 | 0.98 |  10.05            |    0             | 28.44
dom0 w/4Gb  | 1040.1 | 4.06 |   0.96            |  395.4           |  0
dom0 w/4Gb  |  808.1 | 3.16 |   1.24            |  977.1           |  0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random
domu w/gplpv|  161.6 | 5.05 |  -3.32            |    0             | 19.80
domu w/qemu |  109.0 | 3.41 |  -8.93            |    0             | 25.35
dom0 w/4Gb  |  140.7 | 4.40 |   7.10            |  467.6           |  0
dom0 w/4Gb  |  159.3 | 4.98 |   6.28            |  270.1           |  0

And now the 0.8.9 results:

pattern 4k, 50% read, 0% random
dynamo on?  |  io/s  | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|  331.5 | 1.29 | 232.29            |    0             | 35.63
domu w/qemu |  166.1 | 0.65 |   9.67            |    0             | 35.09
dom0 w/4Gb  | 1088.3 | 4.25 |   0.92            |  487.4           |  0
dom0 w/4Gb  | 1118.0 | 4.37 |   0.89            |  181.3           |  0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random
domu w/gplpv|  166.0 | 5.19 |   7.98            |    0             | 29.85
domu w/qemu |  100.4 | 3.14 |  21.09            |    0             | 35.93
dom0 w/4Gb  |   61.8 | 1.93 |  16.14            | 1492.3           |  0
dom0 w/4Gb  |  104.9 | 3.28 |   9.54            |  906.6           |  0

Despite some odd anomalies in the 0.8.9 dom0 32k pattern results, the general results are that 0.8.9 is marginally faster than 0.8.4. Domu 32k patterns are closer to dom0 performance than 4k patterns. 0.8.9 is much faster than qemu, compared to the 0.8.4 vs. qemu numbers, mostly because today's qemu numbers were slower than the previous ones.

And now for something totally different: I just upgraded my processor from an Intel Core Duo 2300, 1.66 GHz, to a Core 2 Duo 5600, 1.86 GHz. Here's some new iometer results:

pattern 4k, 50% read, 0% random
dynamo on?  |  io/s  | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|  501.7 | 1.96 |   2.90            |    0             | 31.68
domu w/qemu |  187.5 | 0.73 |   5.87            |    0             | 29.89
dom0 w/4Gb  | 1102.3 | 4.31 |   0.91            |  445.5           |  0
dom0 w/4Gb  | 1125.8 | 4.40 |   0.89            |  332.1           |  0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random
domu w/gplpv|  238.3 | 7.45 |   4.09            |    0             | 22.48
domu w/qemu |  157.4 | 4.92 |   6.35            |    0             | 20.51
dom0 w/4Gb  |   52.5 | 1.64 |  19.05            | 1590.0           |  0
dom0 w/4Gb  |   87.8 | 2.74 |  11.39            | 1286.4           |  0

So, between the two processors, the new one gives qemu and dom0 numbers that are modestly faster, and gplpv numbers that are 50% greater.

As far as iperf goes, 'iperf -c dom0-name -t 60' gives 10 Mbits/s w/o /gplpv, and 32.1 Mbits/s w/ /gplpv. This was with all advanced options turned on for the PV nic. My previous number for 0.8.9 w/ the old processor was 25 Mbits/s. And I haven't even enabled 64 bits yet!
Pasi Kärkkäinen
2008-May-05 09:00 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Mon, May 05, 2008 at 12:47:39AM -0400, jim burns wrote:
> On Sunday April 27 2008 12:10:35 pm Jim Burns wrote:
> > > > Nice improvements. I will test disk i/o w/ iometer later.
> > >
> > > I'll be interested in the results, but the disk stuff hasn't changed in
> > > a while.
> >
> > I kind of thought so, since you haven't mentioned it, but I'll post the
> > results later anyway.
>
> And here we go:
>
> Equipment: core duo 2300, 1.66ghz each, 2M, sata drive configured for UDMA/100
> System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.3, dom0 2.6.21
> Tested hvm: XP Pro SP2, 2002 w/512M, file backed vbd on local disk, tested w/
> iometer 2006-07-27 (1Gb \iobw.tst, 5min run) & iperf 1.7.0 (1 min run)

Hmm.. have you tried LVM backed devices for HVM guest? Or raw devices..

Could you try iometer on dom0 to see what kind of performance you get there.. or on linux pv domU?

And one more thing.. was your XP HVM single vcpu or more? Did you try binding both dom0 and hvm domU to their own dedicated cpu cores?

-- Pasi
jim burns
2008-May-06 00:32 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Monday May 05 2008 05:00:07 am Pasi Kärkkäinen wrote:
> Hmm.. have you tried LVM backed devices for HVM guest? Or raw devices..

From the previous post:
> And now for something totally different: I just upgraded my processor from
> an Intel Core Duo 2300, 1.66 GHz, to a Core 2 Duo 5600, 1.83 GHz. Here's some
> new iometer results:

pattern 4k, 50% read, 0% random
dynamo on?  |  io/s  | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|  501.7 | 1.96 |   2.90            |    0             | 31.68
domu w/qemu |  187.5 | 0.73 |   5.87            |    0             | 29.89
dom0 w/4Gb  | 1102.3 | 4.31 |   0.91            |  445.5           |  0
dom0 w/4Gb  | 1125.8 | 4.40 |   0.89            |  332.1           |  0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random
domu w/gplpv|  238.3 | 7.45 |   4.09            |    0             | 22.48
domu w/qemu |  157.4 | 4.92 |   6.35            |    0             | 20.51
dom0 w/4Gb  |   52.5 | 1.64 |  19.05            | 1590.0           |  0
dom0 w/4Gb  |   87.8 | 2.74 |  11.39            | 1286.4           |  0

> So, between the two processors, the new one gives qemu and dom0 numbers that
> are modestly faster, and gplpv numbers that are 50% greater.

I never claimed to have the slickest hardware setup on the block. When I do benchmarks, it's the relative differences I'm stressing, eg

- qemu vs. gplpv: I obviously expect gplpv to be faster

- one version of gplpv vs. the next: the trend has been that each version of gplpv is faster than the previous, especially for iperf, where Realtek gets 10 Mbits/s, 0.8.4 got 19.8 Mb/s, and 0.8.9 is getting 32.1 Mb/s. (But that last number is w/ the new processor - it was 25, but that's still better.)

- dom0 vs. domu: obviously, the standard to match is dom0 performance. (I suspect, tho', that non-xen kernel performance would be even better.) Looking at the 4k pattern numbers above, hvm severely lags dom0. Interestingly enough, for the 32k pattern, hvm is doing better than dom0.

That having been said, sure, my hardware setup could be better. My (Fedora) xen server's physical volume spans all of the physical disk, and I have no room left on that system for anything but my everyday domus, leaving a few gig left over for kernel compiles. I currently store my backup & test domus on a SuSE system which does have lots of room. Currently, if I want to fire one of them up, I access it over samba (ouch!). I eventually plan to convert the SuSE box to an iscsi server, serving up lvm slices. As with the processor upgrade above, any change in my configuration will be benchmarked as well.

The iscsi conversion is on the back burner for now, tho', in favor of flashing my Fedora bios to support 64 bits, and then loading a 64bit dom0. I suspect that will get even better results than iscsi. As always, any significant changes will be posted.

> Could you try iometer on dom0 to see what kind of performance you get
> there.. or on linux pv domU?

As you can see above, I did do dom0. I could do a linux pv, but your next idea interests me more.

> And one more thing.. was your XP HVM single vcpu or more? Did you try
> binding both dom0 and hvm domU to their own dedicated cpu cores?

It was vcpu=2.

root@Insp6400 05/05/08 6:32PM:~
[977] > xm vcpu-pin Domain-0 0 0
root@Insp6400 05/05/08 6:33PM:~
[978] > xm vcpu-pin Domain-0 1 0
root@Insp6400 05/05/08 6:33PM:~
[979] > xm vcpu-pin fedora 0 0
root@Insp6400 05/05/08 6:33PM:~
[980] > xm vcpu-pin winxp 0 1
root@Insp6400 05/05/08 6:34PM:~
[981] > xm vcpu-pin winxp 1 1
root@Insp6400 05/05/08 6:34PM:~
[982] > xm vcpu-list
Name          ID VCPU CPU State   Time(s) CPU Affinity
Domain-0       0    0   0   r--    5548.2 0
Domain-0       0    1   0   ---    3392.1 0
fedora         3    0   0   -b-    1444.0 0
winxp          2    0   1   r--   14713.3 1
winxp          2    1   1   ---   15013.8 1

The idle fedora domain shares the same pcpu as dom0. This unfortunately results in a very sluggish and useless domu & its desktop. It even took 10 times as long to reboot. Dom0 seems unaffected. Restoring two pcpus to the domain eliminated the sluggishness.

Next I tried booting with vcpus=1, with it pinned to pcpu 1. Now the new iometer results (qemu (booting w/o /gplpv) not tested):

pattern 4k, 50% read, 0% random
dynamo on?  |  io/s  | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|  115.6 | 0.45 |   8.65            | 1836.7           | 67.63
dom0 w/4Gb  |  501.2 | 1.96 |   1.99            |  739.2           |  0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random
domu w/gplpv|  115.3 | 3.65 |   8.67            | 1735.2           | 54.41
dom0 w/4Gb  |   53.0 | 1.66 |  18.86            | 1751.3           |  0

Yeeaaahh - everything tanked! MB/s down, Cpu % up, etc. Console was still a little sluggish. (I suppose pinning cpus might work better with more than one socket on the mobo.) I won't be trying that config again ;-)
Pasi Kärkkäinen
2008-May-06 07:21 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Mon, May 05, 2008 at 08:32:46PM -0400, jim burns wrote:
> - dom0 vs. domu: obviously, the standard to match is dom0 performance. (I
> suspect, tho', that non-xen kernel performance would be even better.) Looking
> at the 4k pattern numbers above, hvm severely lags dom0. Interestingly
> enough, for the 32k pattern, hvm is doing better than dom0.

domU doing better than dom0 usually happens when you use file backed disks on dom0.. then the memory cache of dom0 will affect the domU results.

> > Could you try iometer on dom0 to see what kind of performance you get
> > there.. or on linux pv domU?
>
> As you can see above, I did do dom0. I could do a linux pv, but your next idea
> interests me more.

OK. I think measuring pv domU is worth trying too :)

> > And one more thing.. was your XP HVM single vcpu or more? Did you try
> > binding both dom0 and hvm domU to their own dedicated cpu cores?
>
> It was vcpu=2.

I think you should re-test with vcpu=1.

Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a dedicated core. This way you're not sharing any pcpu's between the domains. I think this is the "recommended" setup from xen developers for getting maximum performance.

I think the performance will be worse when you have more vcpus in use than your actual pcpu count..

> Yeeaaahh - everything tanked! MB/s down, Cpu % up, etc. Console was still a
> little sluggish. (I suppose pinning cpus might work better with more than one
> socket on the mobo.) I won't be trying that config again ;-)

Hmm.. interesting. Maybe it was because of the shared pcpu's..

-- Pasi
jim burns
2008-May-06 09:36 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
> > - dom0 vs. domu: obviously, the standard to match is dom0 performance. (I
> > suspect, tho', that non-xen kernel performance would be even better.)
> > Looking at the 4k pattern numbers above, hvm severely lags dom0.
> > Interestingly enough, for the 32k pattern, hvm is doing better than dom0.
>
> domU doing better than dom0 usually happens when you use file backed disks
> on dom0.. then the memory cache of dom0 will affect the domU results.

Interesting that that didn't happen with the 4k pattern numbers, tho'.

> I think you should re-test with vcpu=1.
>
> Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a
> dedicated core. This way you're not sharing any pcpu's between the domains.
> I think this is the "recommended" setup from xen developers for getting
> maximum performance.
>
> I think the performance will be worse when you have more vcpus in use than
> your actual pcpu count..

Will try that later, after I've tested out a new (non-xen) kernel update. Having more vcpus than pcpus would be very easy tho', if you have many domains. I can try this with just the hvm domain running.
jim burns
2008-May-07 03:05 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
> OK. I think measuring pv domU is worth trying too :)

Ok, let''s try a few things. Repeating my original 0.8.9 numbers, with the new
processor:

pattern 4k, 50% read, 0% random

dynamo on?  | io/s   | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv| 501.7  | 1.96 | 2.90              | 0                | 31.68
domu w/qemu | 187.5  | 0.73 | 5.87              | 0                | 29.89
dom0 w/4Gb  | 1102.3 | 4.31 | 0.91              | 445.5            | 0
dom0 w/4Gb  | 1125.8 | 4.40 | 0.89              | 332.1            | 0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random

domu w/gplpv| 238.3  | 7.45 | 4.09              | 0                | 22.48
domu w/qemu | 157.4  | 4.92 | 6.35              | 0                | 20.51
dom0 w/4Gb  | 52.5   | 1.64 | 19.05             | 1590.0           | 0
dom0 w/4Gb  | 87.8   | 2.74 | 11.39             | 1286.4           | 0

Now, that was with all workers running on domu and dom0 simultaneously. Let''s
try one at a time. On hvm w/gplpv, first the 4k pattern, then later the 32k
pattern, with dom0 using the ''idle'' task:

4k pattern  | 1026.6 | 4.01 | 39.37             | 0                | 49.70
32k pattern | 311.1  | 9.72 | 45.33             | 0                | 26.21

Now test dom0, with the hvm running the ''idle'' task:

4k pattern  | 1376.7 | 5.38 | 0.73              | 365.7            | 0
32k pattern | 165.9  | 5.19 | 6.02              | 226.6            | 0

As expected, all numbers are significantly faster. Compare this to ''dd''
creating the 4GB /iobw.tst file on dom0 at a 22MB/s rate.

Now, to test a fedora pv, since space is tight on my fedora xen server, I
just ''xm block-attach''-ed dom0''s /iobw.tst as a new partition on the domu,
and in the domu, did mkfs, mount, and created a new /iobw.tst on that
partition. Results:

4k pattern  | 1160.5 | 4.53 | 0.86              | 247.1            | 0
32k pattern | 284.1  | 8.88 | 3.52              | 326.4            | 0

The numbers are very similar to the hvm, including the 32k pattern being
faster than dom0, which you pointed out is due to caching. This compares to
''dd'' creating the 3.7GB iobw.tst on the mounted new partition at an 18MB/s
rate.

> Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a
> dedicated core. This way you''re not sharing any pcpu''s between the domains.
> I think this is the "recommended" setup from xen developers for getting
> maximum performance.
>
> I think the performance will be worse when you have more vcpus in use than
> your actual pcpu count..

Now I rebooted dom0, after editing xend-config.sxp to include ''(dom0-cpus 1)'',
and then did the following pins:

[576] > xm create winxp
Using config file "/etc/xen/winxp".
Started domain winxp
root@Insp6400 05/06/08 10:32PM:~
[577] > xm vcpu-pin 0 all 0
root@Insp6400 05/06/08 10:32PM:~
[578] > xm vcpu-pin winxp all 1
root@Insp6400 05/06/08 10:32PM:~
[579] > xm vcpu-list
Name         ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0      0     0    0  r--      228.7  0
Domain-0      0     1    -  --p       16.0  0
winxp         5     0    1  r--       36.4  1

Note I also had to set vcpus=1, because with two, I was again getting that
extremely sluggish response in my hvm.

Going back to simultaneous execution of all workers, to compare against the
numbers at the top of this post, I got:

pattern 4k, 50% read, 0% random

dynamo on?  | io/s   | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv| 286.4  | 1.12 | 3.49              | 564.9            | 36.97
dom0 w/4Gb  | 1173.9 | 4.59 | 0.85              | 507.3            | 0

pattern 32k, 50% read, 0% random

domu w/gplpv| 217.9  | 6.81 | 4.57              | 1633.5           | 22.93
dom0 w/4Gb  | 63.3   | 1.97 | 15.85             | 1266.5           | 0

which is somewhat slower. Recommendations of the xen developers aside, my
experience is that allowing xen to schedule any vcpu on any pcpu is most
efficient.
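
(The ''xm block-attach'' step above can be sketched roughly as follows,
assuming the PV guest is the fedora domain and that xvdb is a free device
name in it; the device name, the file: prefix and the ext3 filesystem are
illustrative - only /iobw.tst and the ~3.7GB size come from the post:

In dom0, hand the backing file to the running domU as a new block device:
xm block-attach fedora file:/iobw.tst xvdb w

Inside the domU, make a filesystem, mount it, and create the test file:
mkfs.ext3 /dev/xvdb
mount /dev/xvdb /mnt
dd if=/dev/zero of=/mnt/iobw.tst bs=1M count=3700
)
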
_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Todd Deshane
2008-May-07 03:13 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Tue, May 6, 2008 at 11:05 PM, jim burns <jim_burn@bellsouth.net> wrote:
> On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
>
> > OK. I think measuring pv domU is worth trying too :)
>
> [...]
>
> which is somewhat slower. Recommendations of the xen developers aside, my
> experience is that allowing xen to schedule any vcpu on any pcpu is most
> efficient.
>

I think that your experience (allowing Xen to do the scheduling itself is
most efficient and only try to tweak the scheduling in very special cases
and/or you really know what you are doing) should be considered conventional
wisdom.

Can you refresh me on the recommendations of the Xen developers that you are
referring to?

Thanks,
Todd

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
jim burns
2008-May-07 03:53 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Tuesday May 06 2008 11:13:36 pm Todd Deshane wrote:
> I think that your experience (allowing Xen to do the scheduling itself is
> most efficient and only try to tweak the scheduling in very special cases
> and/or you really know what you are doing) should be considered
> conventional wisdom.
>
> Can you refresh me on the recommendations of the Xen developers that you
> are referring to?

I was responding to Pasi''s comment:

On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
> I think you should re-test with vcpu=1.
>
> Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a
> dedicated core. This way you''re not sharing any pcpu''s between the domains.
> I think this is the "recommended" setup from xen developers for getting
> maximum performance.
>
> I think the performance will be worse when you have more vcpus in use than
> your actual pcpu count..

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Todd Deshane
2008-May-07 04:06 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Tue, May 6, 2008 at 11:53 PM, jim burns <jim_burn@bellsouth.net> wrote:
> On Tuesday May 06 2008 11:13:36 pm Todd Deshane wrote:
> > I think that your experience (allowing Xen to do the scheduling itself is
> > most efficient and only try to tweak the scheduling in very special cases
> > and/or you really know what you are doing) should be considered
> > conventional wisdom.
> >
> > Can you refresh me on the recommendations of the Xen developers that you
> > are referring to?
>
> I was responding to Pasi''s comment:
>
> On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
>
> > I think you should re-test with vcpu=1.
>
> > Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a
> > dedicated core. This way you''re not sharing any pcpu''s between the domains.
> > I think this is the "recommended" setup from xen developers for getting
> > maximum performance.
> >
> > I think the performance will be worse when you have more vcpus in use than
> > your actual pcpu count..
>

In our Running Xen book [1] chapter 12 "Managing Guest Resources" it says:

"If you are planning on having a substantial number of guests running, we
recommend sticking with the default VCPUs. The only place that VCPU
pinning may be advantageous is to restrict a CPU to run only for Domain0.
Each guest relies on the services Domain0 offers ..."

It goes on to talk about especially heavy I/O etc. etc.

I bring this up since a lot of thought went into the details of the book. Not
that we will always be right, but continuing to sharpen our knowledge and
working through the details can only help future versions etc.

Cheers,
Todd

[1] http://runningxen.com

> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-May-07 06:34 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Wed, May 07, 2008 at 12:06:41AM -0400, Todd Deshane wrote:
> On Tue, May 6, 2008 at 11:53 PM, jim burns <jim_burn@bellsouth.net> wrote:
> > On Tuesday May 06 2008 11:13:36 pm Todd Deshane wrote:
> > > I think that your experience (allowing Xen to do the scheduling itself is
> > > most efficient and only try to tweak the scheduling in very special cases
> > > and/or you really know what you are doing) should be considered
> > > conventional wisdom.
> > >
> > > Can you refresh me on the recommendations of the Xen developers that you
> > > are referring to?
> >
> > I was responding to Pasi''s comment:
> >
> > On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
> >
> > > I think you should re-test with vcpu=1.
> >

And one more thing for the number of vcpu''s for the HVM guest.. I think
Windows installs UNI or SMP HAL during the install time.. I don''t know what
kind of effect there is if you run SMP HAL with only a single (v)CPU ?

> > > Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a
> > > dedicated core. This way you''re not sharing any pcpu''s between the domains.
> > > I think this is the "recommended" setup from xen developers for getting
> > > maximum performance.
> > >
> > > I think the performance will be worse when you have more vcpus in use than
> > > your actual pcpu count..
> >
>
> In our Running Xen book [1] chapter 12 "Managing Guest Resources" it says:
>
> "If you are planning on having a substantial number of guests running, we
> recommend sticking with the default VCPUs. The only place that VCPU
> pinning may be advantageous is to restrict a CPU to run only for Domain0.
> Each guest relies on the services Domain0 offers ..."
>
> It goes on to talk about especially heavy I/O etc. etc.
>
> I bring this up since a lot of thought went into the details of the book. Not
> that we will always be right, but continuing to sharpen our knowledge and
> working through the details can only help future versions etc.
>

Yep. What I meant was if dom0 can''t get enough CPU time it will impact all
vm''s.. so at least in some cases it will help to dedicate a pcpu for dom0.

-- Pasi

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
James Harper
2008-May-07 06:58 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
>
> And one more thing for the number of vcpu''s for the HVM guest.. I think
> Windows installs UNI or SMP HAL during the install time.. I don''t know
> what kind of effect there is if you run SMP HAL with only a single (v)CPU ?
>

The first time you start a previously UP windows DomU with more than a single
vcpu, you get a message like ''your system must be rebooted for your new
hardware to complete installation'', and sure enough you don''t see the
additional CPU(s) in process monitor until you reboot.

Subsequent changes to and from SMP don''t do the same thing, so it is possible
that the first time you boot with >1 cpu an SMP hal gets installed, and that
it doesn''t get uninstalled afterwards... In a physical as opposed to virtual
environment this kind of makes sense - how often are you going to take an SMP
machine back to a single processor unless you have a hardware failure or
something, which will probably only be temporary anyway until the replacement
cpu arrives?

James

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-May-07 08:40 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Wed, May 07, 2008 at 04:58:43PM +1000, James Harper wrote:
> >
> > And one more thing for the number of vcpu''s for the HVM guest.. I think
> > Windows installs UNI or SMP HAL during the install time.. I don''t know
> > what kind of effect there is if you run SMP HAL with only a single (v)CPU ?
> >
>
> The first time you start a previously UP windows DomU with more than a single
> vcpu, you get a message like ''your system must be rebooted for your new
> hardware to complete installation'', and sure enough you don''t see the
> additional CPU(s) in process monitor until you reboot.
>
> Subsequent changes to and from SMP don''t do the same thing, so it is possible
> that the first time you boot with >1 cpu an SMP hal gets installed, and that it
> doesn''t get uninstalled afterwards... In a physical as opposed to virtual
> environment this kind of makes sense - how often are you going to take an SMP
> machine back to a single processor unless you have a hardware failure or
> something, which will probably only be temporary anyway until the replacement
> cpu arrives?
>

Yep.

Do you know if running the SMP HAL with just a single vcpu will cause any
performance degradations?

-- Pasi

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
James Harper
2008-May-07 10:13 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
>
> Yep.
>
> Do you know if running the SMP HAL with just a single vcpu will cause any
> performance degradations?
>

Presumably it would... spinlocks effectively become no-ops on UP.

James

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Stephan Seitz
2008-May-10 11:43 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
James,

I''ve recently got the time to use the 0.8.9 drivers on a fresh and completely
upgraded (SP3) version of XP Pro 32bit (german) and on a W2K3 SBS SP2 32bit
(german) with former 0.7.something PV drivers. Host system is Xen 3.2.0,
64bit. I didn''t find any problems even on heavy disk and net IO. I''ll report
ioperf results in the next few days.

One thing that isn''t clear to me is the domU vif configuration. For former
driver versions, I used type=netfront for the PV drivers; the recent drivers
seem to work only with type=ioemu. I''m not sure how to configure the vifs.
For blockdevices I continue using xvdN.

Thanks for this release!

Cheers,

Stephan

James Harper schrieb:
>> James,
>>
>> Nope, there are at least 4 firewalls that will stop you..
>
> :)
>
>> I can however do some debugging if you like?
>>
>
> I''ll send you a list of the stuff I want to know.
>
> Thanks
>
> James
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users

--
Stephan Seitz
Senior System Administrator
*netz-haut* e.K.
multimediale kommunikation
zweierweg 22
97074 würzburg
fon: +49 931 2876247
fax: +49 931 2876248
web: www.netz-haut.de <http://www.netz-haut.de/>
registriergericht: amtsgericht würzburg, hra 5054

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
jim burns
2008-May-10 14:56 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
On Saturday May 10 2008 07:43:08 am Stephan Seitz wrote:
> One thing that isn''t clear to me is the domU vif configuration. For former
> driver versions, I used type=netfront for the PV drivers; the recent
> drivers seem to work only with type=ioemu. I''m not sure how to configure
> the vifs.

A previous suggestion on this list said to change the mac address when you
change between netfront and ioemu. Does that help?

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
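
(If it helps, a minimal sketch of what such a vif line can look like in the
domU config, assuming xm-style syntax; the MAC value and bridge name are
placeholders only - the point is that the interface carries an explicit mac=
entry that you change when switching between netfront and ioemu:

vif = [ ''type=ioemu, mac=00:16:3e:00:11:22, bridge=xenbr0'' ]
)
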
jamesshirley
2008-May-14 07:05 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
Scott/James,

Scott McKenzie-3 wrote:
>
> Scott McKenzie wrote:
>
> I''ve just installed openSUSE on my system to test the drivers with their
> kernel and Xen version. I took a copy of my Windows HVM, booted it,
> installed 0.8.9, rebooted without /gplpv, rebooted with /gplpv and I had
> two disk devices appearing in device manager. So it doesn''t look like
> it''s the dom0 kernel that''s causing this problem.
>

I''ve just done an upgrade to rhel5.2 beta, and supposedly a lot of kernel
2.6.24/xen 3.1.2 stuff has been backported.. However I''m still having the
same issue.

James, can I ask what platform (versions) you are using to test these gplpv
drivers for windows?

Cheers,
James

--
View this message in context: http://www.nabble.com/Release-0.8.9-of-GPL-PV-drivers-for-Windows-tp16910023p17224706.html
Sent from the Xen - User mailing list archive at Nabble.com.

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
James Harper
2008-May-14 07:13 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
>
> James, can I ask what platform (versions) you are using to test these
> gplpv drivers for windows?

Debian Etch, with the versions of xen from backports (3.1 and just recently
3.2).

James

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
james.shirley@westernpower.com.au
2008-May-15 02:55 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
James,

Do you have any other recommendations on what to try to get this issue fixed?

I suppose I could try the ubuntu server installation, to see if it''s
something related to the xen dom0 software versioning? This would isolate a
possible hardware issue.

What do you think?

Cheers,

James Shirley

=======================================================================
Electricity Networks Corporation, trading as Western Power
ABN: 18 540 492 861

TO THE ADDRESSEE - this email is for the intended addressee only and may
contain information that is confidential. If you have received this email in
error, please notify us immediately by return email or by telephone. Please
also destroy this message and any electronic or hard copies of this message.
Any claim to confidentiality is not waived or lost by reason of mistaken
transmission of this email. Unencrypted email is not secure and may not be
authentic. Western Power cannot guarantee the accuracy, reliability,
completeness or confidentiality of this email and any attachments.

VIRUSES - Western Power scans all outgoing emails and attachments for
viruses, however it is the recipient's responsibility to ensure this email is
free of viruses.
=======================================================================

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
James Harper
2008-May-15 03:00 UTC
RE: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
>
> Do you have any other recommendations on what to try to get this issue
> fixed?
>
> I suppose I could try the ubuntu server installation, to see if it''s
> something related to the xen dom0 software versioning?
>
> This would isolate a possible hardware issue.
>
> What do you think?
>

If you can wait a bit, the next release should resolve these problems.

Thanks

James

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
jim burns
2008-May-19 04:49 UTC
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
All right, I just got finished upgrading my i386 Fedora 8 system to x86_64,
thanx to the new Core 2 duo processor. (Actually, it was a ''new install'',
since apparently there is no clean upgrade path from one arch to another.)
After restoring my old configuration, I was pleased to see that my system
behaved exactly the same as before, no extra quirks. If it wasn''t for ''yum
update''s offering both arches, I wouldn''t be able to tell the difference,
tho'' I haven''t explored multimedia much, yet. Even my kernel compiles are
simpler, not having to explicitly specify ''rpmbuild --target=i686'', since
there are no subarches to worry about.

So let''s see whether it''s any faster. Note - I''m only testing the same 0.8.9
gplpv version, just before and after the processor & software upgrade.

Current configuration:

Equipment: core 2 duo 5600, 1.83ghz each, 2M, sata drive configured for UDMA/100
System: fc8 64bit, xen 3.1.2, xen.gz 3.1.3, dom0 2.6.21
Tested hvm: XP Pro SP2, 2002 32bit w/512M, file backed vbd on local disk,
tested w/ iometer 2006-07-27 (1Gb \iobw.tst, 5min run) & iperf 2.0.2 (1 min run)

Note, I''m no longer using iperf 1.7.0, since I discovered that iperf 2.0.2
comes with Fedora 8.

First the old iometer numbers, from the old 32bit processor, both domu & dom0
threads running at the same time:

pattern 4k, 50% read, 0% random

dynamo on?  | io/s   | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv| 331.5  | 1.29 | 232.29            | 0                | 35.63
domu w/qemu | 166.1  | 0.65 | 9.67              | 0                | 35.09
dom0 w/4Gb  | 1088.3 | 4.25 | 0.92              | 487.4            | 0
dom0 w/4Gb  | 1118.0 | 4.37 | 0.89              | 181.3            | 0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random

domu w/gplpv| 166.0  | 5.19 | 7.98              | 0                | 29.85
domu w/qemu | 100.4  | 3.14 | 21.09             | 0                | 35.93
dom0 w/4Gb  | 61.8   | 1.93 | 16.14             | 1492.3           | 0
dom0 w/4Gb  | 104.9  | 3.28 | 9.54              | 906.6            | 0

And now the new numbers:

pattern 4k, 50% read, 0% random

dynamo on?  | io/s   | MB/s | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv| 417.5  | 1.63 | 7.39              | 0                | 27.29
domu w/qemu | 155.4  | 0.60 | -4.60             | 0                | 29.23
dom0 w/2Gb  | 891.6  | 3.48 | 1.12              | 574.4            | 0
dom0 w/2Gb  | 1033.1 | 4.04 | 0.97              | 242.4            | 0
(2nd dom0 numbers from when booted w/o /gplpv)

pattern 32k, 50% read, 0% random

domu w/gplpv| 228.6  | 7.15 | -4.65             | 0                | 21.64
domu w/qemu | 120.4  | 3.76 | 83.63             | 0                | 28.50
dom0 w/2Gb  | 42.0   | 1.31 | 23.80             | 2084.7           | 0
dom0 w/2Gb  | 88.3   | 2.76 | 11.32             | 1267.3           | 0

There are significant improvements in gplpv io/s, MB/s, avg. i/o time, and
%cpu. There are modest decreases in dom0 performance, and modest improvements
in qemu.

Now running one domain thread at a time, with any other domains running the
''idle'' task. First the old numbers (with the new processor, but 32bit dom0):

gplpv 0.8.9:
4k pattern  | 1026.6 | 4.01 | 39.37             | 0                | 49.70
32k pattern | 311.1  | 9.72 | 45.33             | 0                | 26.21

dom0:
4k pattern  | 1376.7 | 5.38 | 0.73              | 365.7            | 0
32k pattern | 165.9  | 5.19 | 6.02              | 226.6            | 0

and now the new:

gplpv 0.8.9:
4k pattern  | 1170.0 | 4.57 | 7.16              | 0                | 41.34
32k pattern | 287.0  | 8.97 | -30.85            | 0                | 23.39

dom0:
4k pattern  | 1376.7 | 5.38 | 0.73              | 365.7            | 0
32k pattern | 1484.3 | 5.80 | 0.67              | 314.4            | 0

The differences are insignificant for single thread execution. Since the
underlying disk has not changed, just the processor and software, this is not
unexpected. However, it was nice to see multi-thread performance improve
(which is more software dependent), even if it was just on gplpv.
As far as ''iperf -c dom0-name -t 60'' goes, the old numbers (for 1.7.0) are:

realtek: 10 Mb/s
gplpv (old processor): 25 Mb/s
gplpv (new processor): 32 Mb/s

and the new numbers (for 2.0.2) are:

realtek: .5 Mb/s
gplpv (new processor): 4 Mb/s

Huh?!? Ok, let''s try iperf 1.7.0 again:

realtek: 9.1 Mb/s
gplpv (new processor): 33.6 Mb/s

That''s interesting - guess I''ll be sticking with 1.7.0 after all! (Btw, by
adding the -r option, I get nearly identical write speeds for dom0 to gplpv
domu, but 2-6x faster for qemu.)

I''ll look at 0.9.0 later, and if there are significant differences from
0.8.9, I''ll report to the list.

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
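
(For anyone wanting to repeat the network test, a minimal sketch of the iperf
runs used above, assuming iperf is installed in both dom0 and the domU and
that dom0-name resolves to dom0 - the host name is a placeholder from the
post:

In dom0, start the server:
iperf -s

In the domU, run a 60 second test, adding -r to also measure the reverse
direction:
iperf -c dom0-name -t 60 -r
)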