Reviewing the new Halsign release. The package name has changed from GateKeeper 1.0 to TurboGate 1.1. 1.1 does *not* uninstall 1.0 - do that first. And make sure the pv nic is not running - boot w/o /pv, disable the Halsign adapter, and enable the Realtek (netfront) one, as uninstalling 1.0 will remove the Halsign adapter. While the linux tarball still has the GPL COPYING file, the Windows .exe license screen has removed all references to GPL components.

Ok, here are my iometer and iperf results.

Equipment: Core Duo 2300, 1.66 GHz each, SATA drive configured for UDMA/100
System: FC8 32-bit PAE, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
Tested hvm: XP Pro SP2, 2002, tested w/ iometer 2006-07-27 (1 GB \iobw.tst, 5 min run) & iperf 1.7.0 (1 min run)

Previous iometer results with 1.0 (from the 'New binary release of GPL PV drivers for Windows' thread on 02/24):

pattern 4k, 50% read, 0% random

(dynamo is the windows or linux client doing the actual work)
dynamo on?  |  io/s  | MB/s | avg i/o time (ms) | max i/o time (ms) | %CPU
domu w/hals.|  331.3 | 1.29 |            -31.78 |                 0 | 39.02
domu w/qemu |  460.2 | 1.80 |             -5.85 |                 0 | 42.35
dom0 w/4Gb  |  958.1 | 3.75 |              1.04 |             187.8 |     0
dom0 w/4Gb  | 1080.5 | 4.22 |              0.92 |             192.2 |     0
(2nd dom0 numbers from when booted w/o /pv)

pattern 32k, 50% read, 0% random

domu w/hals.|   81.1 | 2.53 |             49.94 |                 0 | 36.36
domu w/qemu |   74.6 | 2.33 |             10.11 |                 0 | 32.00
dom0 w/4Gb  |  138.9 | 4.34 |              7.20 |             340.8 |     0
dom0 w/4Gb  |  148.0 | 4.62 |              6.76 |             228.6 |     0

And now the 1.1 results:

pattern 4k, 50% read, 0% random

dynamo on?  |  io/s  | MB/s | avg i/o time (ms) | max i/o time (ms) | %CPU
domu w/hals.|  226.7 | 0.89 |              5.25 |                 0 | 48.57
domu w/qemu |  233.4 | 0.91 |             -8.42 |                 0 | 42.23
dom0 w/4Gb  |  873.7 | 3.41 |              1.14 |             221.5 |     0
dom0 w/4Gb  | 1118.0 | 4.37 |              0.89 |             181.3 |     0
(2nd dom0 numbers from when booted w/o /pv)

pattern 32k, 50% read, 0% random

domu w/hals.|   73.4 | 2.29 |             54.35 |                 0 | 45.21
domu w/qemu |   72.2 | 2.26 |            201.02 |                 0 | 48.06
dom0 w/4Gb  |  140.5 | 4.39 |              7.11 |             254.2 |     0
dom0 w/4Gb  |  139.2 | 4.35 |              7.18 |             263.0 |     0

The numbers are negligibly different between the two domu drivers, and 1.1 is somewhat worse than 1.0. While the 32k pattern numbers are better, they are not as good as dom0's, probably because my Halsign domu is on a samba mount, unlike my gplpv domu, which is on the local disk.

And now the iperf results. Since I didn't have iperf at the time I ran my original gplpv tests (from the 'Release 0.8.0 of GPL PV Drivers for Windows' thread on 03/01), there are no old results to compare against:

Server/listener on dom0, client on domu. In the table below, 'udp Mbps' is the observed rate, and '-b Mbps' is the requested rate. (The server has to be invoked with 'iperf -s -u' for udp tests.)

machine | tcp Mbps | udp Mbps | -b Mbps | udp packet loss
halsign |     16.2 |      0.1 |       1 |            0.0%
realtek |      9.8 |      3.8 |      10 |            0.0%

The Halsign driver is 65% faster on tcp. (Gplpv 0.8.4 was about 2x faster.)

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
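For reference, a sketch of the iperf 1.7.0 command lines implied by the test description above (server with -u on dom0, 1-minute client runs, -b 1 for the halsign udp row). The address is a placeholder, not the one actually used:

```shell
# Placeholder for the dom0 listener's address; substitute your own.
DOM0_IP=${DOM0_IP:-10.0.0.1}

# dom0 (server/listener): must be started with -u for the udp tests
SERVER_CMD="iperf -s -u"

# domu (client): -b sets the requested udp rate (the '-b Mbps' column),
# -t 60 matches the 1-minute runs reported here
CLIENT_CMD="iperf -c $DOM0_IP -u -b 1M -t 60"

echo "$SERVER_CMD"
echo "$CLIENT_CMD"
```

For the tcp rows, the same client line without -u and -b applies.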
It seems that you didn't install the host packages. You can try again with the host packages installed.

As for disk, you can run the benchmark on a local disk again to avoid the impact of the samba mount.

As for nic, about 700 Mbps (~20x faster than qemu) can be achieved with TurboGate 1.1. The nic benchmark environment is pasted below:

Target PC: Dell Optiplex 745 (Core 2 6300, 1.86 GHz, 2 GB memory)
  Host OS: RHEL 5.x
  HVM guest OS: Windows XP SP2 (smp, 1 vcpu), TurboGate 1.1 installed
Peer PC: Dell Optiplex 745 (Core 2 6300, 1.86 GHz, 2 GB memory)
  OS: Windows XP SP2 (smp)
Benchmark settings: iperf 1.7.0, window size = 64k, 1 Gbps cable

jim burns wrote:
> [snip]

--
View this message in context: http://www.nabble.com/Halsign-1.1-tp15945490p15950184.html
Sent from the Xen - User mailing list archive at Nabble.com.
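The vendor benchmark above only states the tool, window size, and link speed, so the exact invocation is an assumption; a minimal sketch of an iperf 1.7.0 run with a 64k tcp window would look like this (TARGET_IP is a placeholder for the peer PC's address):

```shell
# Placeholder address for the listening peer PC; not from the post.
TARGET_IP=${TARGET_IP:-192.168.0.2}

# peer PC: tcp listener with the stated 64k window size
PEER_CMD="iperf -s -w 64k"

# guest on the target PC: client with the matching window size
GUEST_CMD="iperf -c $TARGET_IP -w 64k"

echo "$PEER_CMD"
echo "$GUEST_CMD"
```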
On Sunday 09 March 2008 11:16:30 pm halsign wrote:
> It seems that you didn't install the host packages.
> You can try again with the host packages installed.

I did, and I had the gkhost service running. I did not install the kernel & xen packages. I'm not going to downgrade the kernel & xen on the chance that I will get faster performance.

> As for disk, you can run the benchmark on a local disk again
> to avoid the impact of the samba mount.

That's my limitation. My Fedora xen server has limited space, so I only put my everyday vms there. My test systems are on another computer. Would nfs be any faster, in your experience? I'm eventually going to convert this other computer (SuSE 10.3) to iscsi when I get a chance. What I did in the benchmark was to compare qemu to halsign under the same conditions. I know I'm not going to get fast performance - it was the relative difference I was interested in.

> As for nic, about 700 Mbps (~20x faster than qemu)
> can be achieved with TurboGate 1.1.

With a 1 Gbps nic, right?

> The nic benchmark environment is pasted below:
>
> Target PC: Dell Optiplex 745 (Core 2 6300, 1.86 GHz, 2 GB memory)

And you have a Core 2 Duo - I have only a Core Duo. My iperf test was domu client to dom0 listener/server. Iperf is generating the data, so nothing is being read off the samba-mounted vbd, so we are testing the speed of a purely software nic, and faster processors will perform better. Everybody's mileage will vary. You get great results - I don't, although it's still better than Realtek. Thanks for taking an interest, though.
You have to install the kernel & xen packages for faster performance.

If you are using a Fedora xen server, you can try the packages for fc8 at the Halsign website; the kernel & xen versions are kept the same as the original fc8 release.

For the nic benchmark, a 1 Gbps nic was used.

--
View this message in context: http://www.nabble.com/Halsign-1.1-tp15945490p15973620.html
Sent from the Xen - User mailing list archive at Nabble.com.
On Monday 10 March 2008 10:45:03 pm halsign wrote:
> You have to install the kernel & xen packages for faster
> performance.
>
> If you are using a Fedora xen server, you can try the packages for fc8
> at the Halsign website; the kernel & xen versions are kept the same as
> the original fc8 release.

Correct - the *original* release of fc8:

[1209] > tar xvf Documents/TurboGate-HTools-1.1-fc8-i386.tar
TurboGate-HTools-1.1-fc8-i386/
TurboGate-HTools-1.1-fc8-i386/xen-3.1.0-13.fc8.gk1.1.i386.rpm
TurboGate-HTools-1.1-fc8-i386/gkhost-1.1-1.gk1.1.i386.rpm
TurboGate-HTools-1.1-fc8-i386/kernel-xen-2.6.21-2950.fc8.gk1.1.i686.rpm
TurboGate-HTools-1.1-fc8-i386/TurboGate-Tools-Install-Guide.txt
TurboGate-HTools-1.1-fc8-i386/COPYING
TurboGate-HTools-1.1-fc8-i386/xen-libs-3.1.0-13.fc8.gk1.1.i386.rpm

jimb@Insp6400 03/11/08 4:47AM:~
[1210] > grep xen /var/log/rpmpkgs
kernel-xen-2.6.21-2957.fc8.i686.rpm
kernel-xen-2.6.21.7-2.fc8.i686.rpm
kernel-xen-devel-2.6.21-2957.fc8.i686.rpm
kernel-xen-devel-2.6.21.7-2.fc8.i686.rpm
xen-3.1.2-2.fc8.i386.rpm
xen-libs-3.1.2-2.fc8.i386.rpm

As you can see, the versions have gone up since then. It's important to keep up with the latest patches & features. I'm sure you don't have the resources to issue an update every time each distro comes out with a new release.

> For the nic benchmark, a 1 Gbps nic was used.

Maybe, since this is the GPL-ed portion of your package, you could make available the patches you used, so we can update our latest releases ourselves. I know - that leaves out newbies and those who aren't sophisticated or interested enough to do that. Those people will have to settle for your releases.
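The version check above can be sketched as a small script. This is a hedged example, not from the thread: it queries rpm for the kernel-xen/xen packages and reports whether the installed builds carry the vendor's gk1.1 release tag (as in the tarball listing) or are stock fc8 ones, degrading cleanly on systems without rpm:

```shell
# Look for the vendor's 'gk1.1' release tag in the installed
# kernel-xen/xen packages. Package names follow the listings above.
want="gk1.1"
if command -v rpm >/dev/null 2>&1; then
    installed=$(rpm -q kernel-xen xen 2>/dev/null)
else
    installed=""   # rpm not available on this system
fi
case "$installed" in
    *"$want"*) status="vendor gk1.1 kernel/xen installed" ;;
    *)         status="stock (or no) kernel/xen packages installed" ;;
esac
echo "$status"
```

Installing the vendor builds over newer stock packages would additionally require rpm's --oldpackage flag, since the gk1.1 versions are older.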