James Harper
2008-Feb-21 10:05 UTC
[Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
I've just uploaded the latest release of the GPL PV drivers for Windows. The changes in this version are:

. Bug fixes to xennet.
. Bug fixes elsewhere.
. Better prevention of the qemu block device loading when /gplpv is specified.
. pvSCSI support (eg can pass through tape drives and cd burners, although I haven't tested the latter). This hasn't seen nearly as much testing as everything else, but I've run the tests in the HP Library & Tape Tools successfully, including a firmware update.
. Very basic installer - just run install.bat and then click lots of times.

I highly recommend uninstalling all traces of any previous version before installing this one. In particular, put the Windows PCI Bus driver back the way it was (eg update drivers and just click next a few times).

Also, if you boot up with /GPLPV and notice any qemu block devices, destroy the domain immediately, as there is a high chance of data corruption! Please also let me know if this happens.

Still outstanding bugs:
. If Windows bug checks, it still won't write out a bug check file - it just hangs instead.
. pvSCSI appears to have a race in it that sometimes leaves Windows stuck at the loading screen.
. SMP hasn't been thoroughly tested.
. XP doesn't support the synchronisation calls I am using, so they have been omitted - very slight chance of races during device enumeration.

If you want to make use of the pvSCSI drivers, see the patches posted to xen-devel by Jun Kamada. I have managed to build it under Debian by applying the patches to the Debian Xen source, making the hypervisor and tools packages, and then building the backend out of tree. I needed to greatly increase the SCSI timeout for my tape drive to work reliably (the default is 5 seconds; it needs to be more like 10 minutes or more - I made it an hour).

Download at http://www.meadowcourt.org/WindowsXenPV-0.8.0.zip . Any and all feedback appreciated.

Thanks

James
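[For reference: one way to raise the command timeout of a SCSI device from the dom0 side at runtime is via sysfs. This is only a sketch - /sys/class/scsi_generic/sg1 is a placeholder for whatever generic device the tape drive actually appears as, and James's own change may well have been made in the driver code instead.]

    # check the current command timeout (in seconds) for the tape drive
    cat /sys/class/scsi_generic/sg1/device/timeout
    # raise it to one hour, as suggested above
    echo 3600 > /sys/class/scsi_generic/sg1/device/timeout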
jim burns
2008-Feb-21 10:54 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thursday 21 February 2008 05:05:56 am James Harper wrote:
> . pvSCSI support (eg can pass through tape drives and cd burners,
> although I haven't tested the latter). This hasn't seen nearly as much
> testing as everything else, but I've run the tests in the HP Library &
> Tape Tools successfully, including a firmware update.

Nice. Any special syntax needed in the domain config, or dom0 grub, beyond the standard pciback hide stuff? Do you need xen 3.2/VT-d?

> I highly recommend uninstalling all traces of any previous version before
> installing this one. In particular, put the Windows PCI Bus driver back
> the way it was (eg update drivers and just click next a few times).

What's the best order to uninstall in - the reverse order of installing? That would make it xenhide, xenstub, xennet, xenvbd, xenenum, xenpci? Every time I rip out xenhide, I have a massive hardware reconfiguration with the Add Hardware Wizard on the next reboot.
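[For anyone following along: the "standard pciback hide stuff" for PCI passthrough looked roughly like this at the time. A sketch only - 0000:01:02.0 is a placeholder PCI address, and as the replies below point out, pvSCSI does not actually need it.]

    # dom0: detach the device from its current driver and hand it to pciback
    modprobe pciback
    echo -n 0000:01:02.0 > /sys/bus/pci/devices/0000:01:02.0/driver/unbind
    echo -n 0000:01:02.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo -n 0000:01:02.0 > /sys/bus/pci/drivers/pciback/bind
    # the domU config then gets a line like: pci = [ '01:02.0' ]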
Emre ERENOGLU
2008-Feb-21 11:25 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thu, Feb 21, 2008 at 11:54 AM, jim burns <jim_burn@bellsouth.net> wrote:
> Nice. Any special syntax needed in the domain config, or dom0 grub, beyond
> the standard pciback hide stuff? Do you need xen 3.2/VT-d?

I think this should be using paravirtual drivers, so no VT-d stuff or pciback kind of trick. James, could you confirm?

Some additional questions:
- Can we say this is "faster" than the other SCSI paravirtual driver?
- Does the driver still work if I didn't apply those patches for pvSCSI?
- Does anybody know if there's any plan to merge those patches into xen-unstable? If not, why?

> What's the best order to uninstall in - the reverse order of installing?
> That would make it xenhide, xenstub, xennet, xenvbd, xenenum, xenpci?
> Every time I rip out xenhide, I have a massive hardware reconfiguration
> with the Add Hardware Wizard on the next reboot.

I think the best approach is to boot with the standard kernel, then remove xennet, xenenum, xenblk and xenpci, then replace xenhide with the standard PCI controller, reboot again with the standard controller, and let the system re-enumerate the devices.

--
Emre Erenoglu
erenoglu@gmail.com
James Harper
2008-Feb-21 13:02 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> On Thursday 21 February 2008 05:05:56 am James Harper wrote:
> Nice. Any special syntax needed in the domain config, or dom0 grub, beyond
> the standard pciback hide stuff? Do you need xen 3.2/VT-d?

Look at Jun Kamada's posts with a subject of '[Patch x/7] pvSCSI driver' on or around the 18th.

> What's the best order to uninstall in - the reverse order of installing?
> That would make it xenhide, xenstub, xennet, xenvbd, xenenum, xenpci?
> Every time I rip out xenhide, I have a massive hardware reconfiguration
> with the Add Hardware Wizard on the next reboot.

I'd just re-install the Windows PCI driver (eg right click on xenhide, update driver, next, next, etc). After that I'd just delete C:\Windows\System32\drivers\xen*.sys and reboot.

James
James Harper
2008-Feb-21 13:07 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> I think this shall be using paravirtual drivers, so no vt-d stuff or
> pciback kind of trick. James, could you confirm?

Correct. Read Jun's posts.

> Some additional questions:

I assume these questions are all about pvSCSI.

> - Can we say this is "faster" than the other SCSI paravirtual driver?

In theory, if you have a backend device which is SCSI (eg a physical disk), and you want to map the whole disk through to a domU, then yes, this could be faster, as it is just a pipe to pass SCSI commands through (mostly) unmodified. I haven't done any benchmarks, and the current cut of the xenscsi driver represents only a few days of 'spare time' work on my behalf (it's a heavily modified version of xenvbd), so there may be a few optimisations that could be made.

> - Does the driver still work if I didn't apply those patches for pvSCSI?

The xenscsi driver will only work if you have applied Jun's patches. Xenvbd continues to work as it always did.

> - Does anybody know if there's any plan to merge those patches into
> xen-unstable? If not, why?

I followed the discussion when the original set of patches was posted a while back. Keir and others made a few suggestions as to things that would need to be done before a merge would be considered. I believe all of those concerns have been addressed with this set of patches. Since testing, I've made a few more suggestions which I believe will need to be addressed before the patches could be considered working.

James
Stefan de Konink
2008-Feb-21 13:30 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
James Harper schreef:
> In theory, if you have a backend device which is SCSI (eg a physical
> disk), and you want to map the whole disk through to a domU, then yes,
> this could be faster, as it is just a pipe to pass SCSI commands through
> (mostly) unmodified. I haven't done any benchmarks, and the current cut
> of the xenscsi driver represents only a few days of 'spare time' work on
> my behalf (it's a heavily modified version of xenvbd), so there may be a
> few optimisations that could be made.

So iSCSI backends could have a significant gain with respect to the other PV driver?

Stefan
James Harper
2008-Feb-21 13:44 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> So iSCSI backends could have a significant gain with respect to the
> other PV driver?

Hmm... hadn't thought of that. If you had an accelerated iSCSI HBA, then with the pvSCSI backend the path becomes quite direct. Without an accelerated HBA it probably doesn't matter so much whether you pass IP packets to the DomU and get the SCSI packet out there, or get the SCSI packet out in Dom0 and pass it to the DomU.

As usual, some benchmarks would help a lot!

James
Stefan de Konink
2008-Feb-21 13:49 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi James,

James Harper schreef:
> Hmm... hadn't thought of that. If you had an accelerated iSCSI HBA, then
> with the pvSCSI backend the path becomes quite direct. Without an
> accelerated HBA it probably doesn't matter so much whether you pass IP
> packets to the DomU and get the SCSI packet out there, or get the SCSI
> packet out in Dom0 and pass it to the DomU.

Of course it would still matter from a security point of view. I don't want to pass my storage domain directly to all users.

> As usual, some benchmarks would help a lot!

Will do :)

Stefan
Nick Couchman
2008-Feb-21 16:21 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
A couple of questions:
1) You say to remove all traces of previous versions - any recommendations/procedures on doing that?
2) When I try to install this version of the driver, my Xen Enum Device Driver goes to error code 39. It works fine in 0.6.x, but 0.8.0 generates that error code and my other Xen devices disappear.

Thanks!
-Nick

>>> On 2008/02/21 at 03:05, "James Harper" <james.harper@bendigoit.com.au> wrote:
> I've just uploaded the latest release of the GPL PV drivers for Windows.
> [snip]
Alexander Piavka
2008-Feb-21 18:36 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi James,

A dumb question: do the GPL PV Drivers for Windows require any special Xen version on the dom0 side? Will a vanilla xen-3.1 kernel work, or do I need a xen-3.2 kernel?

Thanks
Alex

On Thu, 21 Feb 2008, James Harper wrote:
> I've just uploaded the latest release of the GPL PV drivers for Windows.
> [snip]
jim burns
2008-Feb-22 00:06 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thursday 21 February 2008 05:05:56 am James Harper wrote:
> I highly recommend uninstalling all traces of any previous version before
> installing this one. In particular, put the Windows PCI Bus driver back
> the way it was (eg update drivers and just click next a few times).

I notice your install.bat checks for the Windows version and jumps to another section of the .bat file accordingly, but you 'cd' to the winnet subdir in both cases, instead of winnet or winxp. Does this matter?
jim burns
2008-Feb-22 01:30 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thursday 21 February 2008 05:05:56 am James Harper wrote:
> I've just uploaded the latest release of the GPL PV drivers for Windows.

Well, the results are in. No BSODs. Still minor differences in file copy times between booting w/ /gplpv and w/o. (File backed vbd on the dom0 disk, 100Mb/s physical ethernet.) Still can't access a physical cd. Not trying pvSCSI.
James Harper
2008-Feb-22 04:00 UTC
[Xen-devel] RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> A couple of questions:
> 1) You say to remove all traces of previous versions - any
> recommendations/procedures on doing that?

I just reinstalled the Windows PCI driver (see previous emails), deleted c:\Windows\System32\drivers\xen*.sys, then ran install.bat.

> 2) When I try to install this version of the driver, my Xen Enum Device
> Driver goes to error code 39. It works fine in 0.6.x, but 0.8.0 generates
> that error code and my other Xen devices disappear.

What version of Windows?

James
James Harper
2008-Feb-22 04:01 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> Hi James,
>
> A dumb question: do the GPL PV Drivers for Windows require any special
> Xen version on the dom0 side? Will a vanilla xen-3.1 kernel work, or do
> I need a xen-3.2 kernel?

I'm using hypervisor 3.1.2 (Debian package), and I've heard that 3.2 works too. The 3.0.x hypervisor didn't work last time I tried it; in fact it caused the whole physical machine to reboot.

James
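[A quick way to confirm which hypervisor a dom0 is actually running, for reference - a sketch using the standard xm tool; the output lines shown are illustrative:]

    xm info | egrep 'xen_(major|minor|extra)'
    # xen_major : 3
    # xen_minor : 1
    # xen_extra : .2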
Marcel Ritter
2008-Feb-22 10:33 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi,

I just tried to install the new drivers on Windows 2003 x86 and amd64. On the AMD64 machine, install.bat complains about the 32-bit version of DPinst and stops the installation. So I got the AMD64 binary from Microsoft (DriverInstallationTools.msi) and replaced DPinst.exe. Everything worked fine after that.

To make things easier I modified the directory structure of your ZIP file a bit and changed install.bat to work on both 32- and 64-bit machines:

CHANGED -> ./install.bat
NEW -> ./dpinst/x86/DPInst.exe
NEW -> ./dpinst/amd64/dpinst.exe

dpinst.exe gets copied before execution; just calling it from the above directory did not work as expected.

Just in case someone had the same problem ...

Bye,
Marcel

--
Dipl.-Inf. Marcel Ritter, Linux/Novell
Regionales Rechenzentrum Erlangen
Tel: 09131 / 85-27808
E-Mail: Marcel.Ritter@rrze.uni-erlangen.de

Unix _IS_ user friendly... It's just selective about who its friends are.
Emre ERENOGLU
2008-Feb-22 15:45 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
This is what I did, in order:

- Boot in non-PV mode
- Remove xen network interface (1)
- Remove xen network interface root (enum)
- Remove XenBlk SCSI driver
- Remove XEN PCI Driver
- Replace Xen Hide Driver with the standard PCI Bus driver
- Delete c:\windows\inf\oem* (your system may have other OEM drivers, so if you care about them, I recommend finding the xen-related oemX files and only deleting those)
- Delete c:\windows\system32\drivers\xen*.sys (the xen device drivers; I suggest deleting them one by one, in case you have some other driver starting with "xen")
- Reboot
- Start in non-PV mode
- (Delete the .sys files now if it was unsuccessful last time)
- Install the new drivers as mentioned in the installing.txt file

Emre

On Thu, Feb 21, 2008 at 5:21 PM, Nick Couchman <Nick.Couchman@seakr.com> wrote:
> A couple of questions:
> 1) You say to remove all traces of previous versions - any
> recommendations/procedures on doing that?
> 2) When I try to install this version of the driver, my Xen Enum Device
> Driver goes to error code 39. It works fine in 0.6.x, but 0.8.0 generates
> that error code and my other Xen devices disappear.
> [snip]

--
Emre Erenoglu
erenoglu@gmail.com
Nick Couchman
2008-Feb-22 15:57 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
2) This is on Windows XP SP2. I finally got it to work, but it involved forcing the drivers back to the old versions and reinstalling.

One other question - when I try to boot with the /GPLPV option, the Windows XP boot screen just sits and spins (bar rolling across the bottom) and never boots into Windows. Any hints on this? I'm running Xen 3.0.4 (SLES10SP1), so maybe that's part of it, but I didn't know if you have any suggestions on why it might be hanging on boot.

Nick Couchman
Manager, Information Technology
SEAKR Engineering, Inc.
6221 South Racine Circle
Centennial, CO 80111
Main: (303) 790-8499
Fax: (303) 790-8720
Web: http://www.seakr.com

>>> On 2008/02/21 at 21:00, "James Harper" <james.harper@bendigoit.com.au> wrote:
> I just reinstalled the Windows PCI driver (see previous emails), deleted
> c:\Windows\System32\drivers\xen*.sys, then ran install.bat.
>
> > 2) When I try to install this version of the driver, my Xen Enum Device
> > Driver goes to error code 39.
>
> What version of Windows?
Stephan Seitz
2008-Feb-22 16:14 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
jim burns schrieb:
> Well, the results are in. No BSODs. Still minor differences in file copy
> times between booting w/ /gplpv and w/o. (File backed vbd on the dom0
> disk, 100Mb/s physical ethernet.) Still can't access a physical cd. Not
> trying pvSCSI.

I can confirm the problem w/ the inaccessible CD. Besides this, I "sometimes" get TWO CD-ROM devices; I never noticed this for the HDD on 0.8.0.

Another problem I found during benchmarking (I posted the results in an earlier thread): after running the system with /gplpv, a reboot into non-/gplpv causes heavy chkdsk action, and a (never before seen) warning pops up after Windows finishes starting, reminding me to run chkdsk again manually during the next reboot. But, as far as I can see, no data was really lost - still, this is scary.

Tests were done on XP Pro 64bit running vcpus=1 on Xen 3.2.0 / 2.6.18.8 dom0.
Nick Couchman
2008-Feb-22 16:21 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Using the PV tools with Xen 3.0.4 (heavily patched by Novell) seems to work okay for me - definitely no bad effects on the physical host. The XP guests don't always boot correctly with the /GPLPV option for me, but there's no crashing the host.

>>> On 2008/02/21 at 21:01, "James Harper" <james.harper@bendigoit.com.au> wrote:
> I'm using hypervisor 3.1.2 (Debian package), and I've heard that 3.2
> works too. The 3.0.x hypervisor didn't work last time I tried it; in
> fact it caused the whole physical machine to reboot.
Fajar A. Nugraha
2008-Feb-25 03:08 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi James,

James Harper wrote:
> I've just uploaded the latest release of the GPL PV drivers for Windows.
>
> Download at http://www.meadowcourt.org/WindowsXenPV-0.8.0.zip . Any and
> all feedback appreciated.

I've just tested 0.8.0 with WinXP/RHEL5/xen-3.1. Network performance is the same as with 0.7.0, 100-something kbps, tested with one and two CPUs.

This is with apic and acpi enabled. Changing either of those two requires reinstalling Windows, so I'd rather not do it just yet :)

Is there any recipe to get decent network performance? A host OS / xen version / config combo that is tested to work?

Regards,

Fajar
jim burns
2008-Feb-25 03:28 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sunday 24 February 2008 10:08:45 pm Fajar A. Nugraha wrote:
> I've just tested 0.8.0 with WinXP/RHEL5/xen-3.1.
> Network performance is the same as with 0.7.0, 100-something kbps,
> tested with one and two CPUs.
>
> This is with apic and acpi enabled. Changing either of those two requires
> reinstalling Windows, so I'd rather not do it just yet :)
> Is there any recipe to get decent network performance? A host OS / xen
> version / config combo that is tested to work?

Don't know what's limiting you. See the iometer benchmarks I posted early today in the thread 'New binary release of GPL PV drivers for Windows'. I'm getting 1Mbps on 4k read/writes, and 4Mbps on 32k, file backed vbd. (Actually, it doesn't seem to matter whether I'm using PV drivers or Qemu.)
Fajar A. Nugraha
2008-Feb-25 03:34 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi Jim,

jim burns wrote:
> Don't know what's limiting you. See the iometer benchmarks I posted early
> today in the thread 'New binary release of GPL PV drivers for Windows'.
> I'm getting 1Mbps on 4k read/writes, and 4Mbps on 32k, file backed vbd.
> (Actually, it doesn't seem to matter whether I'm using PV drivers or Qemu.)

You've posted disk I/O benchmark results. How is the network performance?

Regards,

Fajar
jim burns
2008-Feb-25 03:53 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sunday 24 February 2008 10:34:29 pm Fajar A. Nugraha wrote:
> You've posted disk I/O benchmark results. How is the network performance?

Sorry - brain freeze! I don't know of a good network benchmark, so I've been doing informal file copies from domU to dom0. I'm getting ~1.5Mbps.
James Harper
2008-Feb-25 04:05 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> On Sunday 24 February 2008 10:08:45 pm Fajar A. Nugraha wrote:
> > I've just tested 0.8.0 with WinXP/RHEL5/xen-3.1.
> > Network performance is the same as with 0.7.0, 100-something kbps,
> > tested with one and two CPUs.
>
> Don't know what's limiting you. See the iometer benchmarks I posted early
> today in the thread 'New binary release of GPL PV drivers for Windows'.
> I'm getting 1Mbps on 4k read/writes, and 4Mbps on 32k, file backed vbd.
> (Actually, it doesn't seem to matter whether I'm using PV drivers or Qemu.)

I'm pretty sure that the qemu backed disk uses the caching in Dom0, so there is no real way of doing performance tests, at least for small amounts of data. For the same reason, running any production stuff on it is probably a bad idea too.

James
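[If you want to take the dom0 page cache out of the picture between benchmark runs, a minimal sketch - assumes a 2.6.16 or later dom0 kernel, where /proc/sys/vm/drop_caches exists:]

    # dom0: flush dirty pages to disk, then drop clean page/dentry/inode caches
    sync
    echo 3 > /proc/sys/vm/drop_caches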
Klaus Steinberger
2008-Feb-25 09:19 UTC
[Xen-users] Re: Release 0.8.0 of GPL PV Drivers for Windows
Hello,

> Sorry - brain freeze! I don't know of a good network benchmark, so I've
> been doing informal file copies from domU to dom0. I'm getting ~1.5Mbps.

ttcp is not too bad, and as I remember it's also available for Windows.

On a recent FSC RX300S3 host with 3 GHz dual quad core CPUs, I get around 6 Gigabit/s from a Scientific Linux 5.1 DomU (paravirtualised) to a 5.1 Dom0 (both 64 bit) with the flipping receive path, and around half that with the copying receive path.

As soon as I get time to test Windows HVM domains, I will try to get network performance numbers.

Sincerely,
Klaus Steinberger

--
Klaus Steinberger, Beschleunigerlaboratorium
Am Coulombwall 6, D-85748 Garching, Germany
Phone: (+49 89)289 14287    FAX: (+49 89)289 14280
EMail: Klaus.Steinberger@Physik.Uni-Muenchen.DE
URL: http://www.physik.uni-muenchen.de/~Klaus.Steinberger/
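[For anyone wanting to repeat this kind of measurement, classic ttcp usage looks roughly like this - a sketch; 192.168.1.10 is a placeholder for the receiver's address:]

    # on the receiver (eg dom0):
    ttcp -r -s
    # on the sender (eg the domU):
    ttcp -t -s 192.168.1.10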
jim burns
2008-Feb-25 10:01 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sunday 24 February 2008 10:53:03 pm Jim Burns wrote:
> Sorry - brain freeze! I don't know of a good network benchmark, so I've
> been doing informal file copies from domU to dom0. I'm getting ~1.5Mbps.

Sorry again. In all cases, that's megabytes per second (MBps), bridged networking.
Fajar A. Nugraha
2008-Feb-26 01:47 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
jim burns wrote:
> Sorry again. In all cases, that's megabytes per second (MBps), bridged
> networking.

That seems decent. Although, to be honest, I was expecting several hundred Mbps (which is what I get with Xen/Linux PV) or at least close to 100 Mbps (which is what I get with VMware). But at least your results look a lot better than mine.

What system does it run on (distro/xen versions)?

Did you add anything special to the xen config file (something like acpi=1)? Last time I checked, without that switch Windows will be unable to use SMP. Changing that switch also requires reinstalling Windows.

Regards,

Fajar
jim burns
2008-Feb-26 02:03 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Monday 25 February 2008 08:47:17 pm Fajar A. Nugraha wrote:
> That seems decent. Although to be honest, I was expecting several
> hundred Mbps (which is what I get with Xen/Linux PV) or at least close
> to 100 Mbps (which is what I get with VMWare). But at least your results
> look a lot better than mine.

I wish. Maybe when I upgrade to a 64 bit system, domu to dom0
communication will be faster. I suppose at least that is independent of
the physical nic (which is just 100Mbps), being a software only
(apparently ipv6) nic.

> What system does it run on (distro/xen versions)?

From the previously mentioned benchmark results post:

Equipment: core duo 2300, 1.66ghz each, sata drive configured for UDMA/100
System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
Tested hvm: XP Pro SP2, 2002, tested w/ iometer [...]

> Did you add anything special to the xen config file (something like
> acpi=1)? Last time I checked, without that switch Windows was unable to
> use SMP. Changing that switch also requires reinstalling Windows.

I checked everything it would let me, including acpi & smp:

name = "winxp"
builder='hvm'
memory = "512"
uuid = "6c7de04e-df10-caa8-bb2a-8368246225c1"
ostype = "hvm"
on_reboot = "restart"
on_crash = "restart"
on_poweroff = "destroy"
vcpus = "2"
#
kernel = "/usr/lib/xen/boot/hvmloader"
acpi=1
apic=1
boot= "cda"
device_model = "/usr/lib/xen/bin/qemu-dm"
keymap='en-us'
localtime=0
#rtc_timeoffset=-14400
rtc_timeoffset=-18000
pae=1
serial='pty'
#serial = "/dev/ttyS0"
# enable sound card support, [sb16|es1370|all|..,..], default none
soundhw='es1370'
# enable stdvga, default = 0 (use cirrus logic device model)
stdvga=0
#usbdevice="mouse"
usbdevice="tablet"
#
#disk=[ 'tap:/var/lib/xen/images/winxp,ioemu:hda,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]
disk=[ 'file:/var/lib/xen/images/winxp,ioemu:hda,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]
#
vif = [ 'mac=00:16:3e:23:1d:36, type=ioemu, script=vif-bridge, bridge=eth0' ]
#vif = [ 'mac=00:16:3e:23:1d:36, type=netfront, script=vif-bridge, bridge=eth0' ]
#
sdl=0
vnc=1
vnclisten="0.0.0.0"
#vnclisten="192.168.1.0/24"
# set VNC display number, default = domid
#vncdisplay=1
# try to find an unused port for the VNC server, default = 1
#vncunused=1
vncconsole=1
monitor=1
vncpasswd=""

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Emre ERENOGLU
2008-Feb-26 10:54 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi Jim,

I see ioemu in the vif= line in your config file. Did you try using
netfront instead (possibly with a different MAC address)? This
performance is extremely bad if it's around 1.4 MBytes/sec. I'll try to
test tonight on my own system.

Thanks,

Emre

On Tue, Feb 26, 2008 at 3:03 AM, jim burns <jim_burn@bellsouth.net> wrote:

> On Monday 25 February 2008 08:47:17 pm Fajar A. Nugraha wrote:
> > That seems decent. Although to be honest, I was expecting several
> > hundred Mbps (which is what I get with Xen/Linux PV) or at least close
> > to 100 Mbps (which is what I get with VMWare). But at least your results
> > look a lot better than mine.
>
> I wish. Maybe when I upgrade to a 64 bit system, domu to dom0
> communication will be faster. I suppose at least that is independent of
> the physical nic (which is just 100Mbps), being a software only
> (apparently ipv6) nic.
>
> > What system does it run on (distro/xen versions)?
>
> From the previously mentioned benchmark results post:
>
> Equipment: core duo 2300, 1.66ghz each, sata drive configured for UDMA/100
> System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
> Tested hvm: XP Pro SP2, 2002, tested w/ iometer [...]
>
> > Did you add anything special to the xen config file (something like
> > acpi=1)? Last time I checked, without that switch Windows was unable to
> > use SMP. Changing that switch also requires reinstalling Windows.
>
> I checked everything it would let me, including acpi & smp:
>
> name = "winxp"
> builder='hvm'
> memory = "512"
> uuid = "6c7de04e-df10-caa8-bb2a-8368246225c1"
> ostype = "hvm"
> on_reboot = "restart"
> on_crash = "restart"
> on_poweroff = "destroy"
> vcpus = "2"
> #
> kernel = "/usr/lib/xen/boot/hvmloader"
> acpi=1
> apic=1
> boot= "cda"
> device_model = "/usr/lib/xen/bin/qemu-dm"
> keymap='en-us'
> localtime=0
> #rtc_timeoffset=-14400
> rtc_timeoffset=-18000
> pae=1
> serial='pty'
> #serial = "/dev/ttyS0"
> # enable sound card support, [sb16|es1370|all|..,..], default none
> soundhw='es1370'
> # enable stdvga, default = 0 (use cirrus logic device model)
> stdvga=0
> #usbdevice="mouse"
> usbdevice="tablet"
> #
> #disk=[ 'tap:/var/lib/xen/images/winxp,ioemu:hda,w',
> 'phy:/dev/cdrom,hdc:cdrom,r' ]
> disk=[ 'file:/var/lib/xen/images/winxp,ioemu:hda,w',
> 'phy:/dev/cdrom,hdc:cdrom,r' ]
> #
> vif = [ 'mac=00:16:3e:23:1d:36, type=ioemu, script=vif-bridge, bridge=eth0' ]
> #vif = [ 'mac=00:16:3e:23:1d:36, type=netfront, script=vif-bridge, bridge=eth0' ]
> #
> sdl=0
> vnc=1
> vnclisten="0.0.0.0"
> #vnclisten="192.168.1.0/24"
> # set VNC display number, default = domid
> #vncdisplay=1
> # try to find an unused port for the VNC server, default = 1
> #vncunused=1
> vncconsole=1
> monitor=1
> vncpasswd=""
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users
>

--
Emre Erenoglu
erenoglu@gmail.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
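[For reference, the netfront variant is already present, commented out,
in Jim's posted config; enabling it would mean a vif line like the
following (MAC and bridge values as in the original file):

    vif = [ 'mac=00:16:3e:23:1d:36, type=netfront, script=vif-bridge, bridge=eth0' ]
]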
Emre ERENOGLU
2008-Feb-27 10:06 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi Jim, (James),

Yesterday I tried to copy a file of around 700 MB from Dom0 to DomU, and
got similar results: around 2 MByte/sec. Seemed pretty slow to me.

Emre

On Wed, Feb 27, 2008 at 1:25 AM, jim burns <jim_burn@bellsouth.net> wrote:

> On Tuesday 26 February 2008 05:54:16 am you wrote:
> > I see ioemu in the vif= line in your config file. Did you try using
> > netfront instead (possibly with a different MAC address)?
>
> I didn't change the MAC - don't want Windows to think I have new hardware -
> but I tried all the different drivers. I copied a 40 MB file (42,757,000
> bytes) from domu to dom0, and rebooted the domu in between each copy,
> waiting till the cpu load dropped after the reboot for the next copy:
>
> xennet took 26 secs (1.64 MB/s)
> Realtek took 37 secs (1.16 MB/s)
>
> And then I ran into a wall. I have 0.8.3 loaded now, and the xennet nic
> doesn't appear unless you boot with /gplpv, unlike previous versions. It
> seems I can't bring up a netfront nic anymore either - don't know if it is
> related. So, repeating the tests with my backup hvm, which has Halsign
> installed, and lives on a samba mount:
>
> Halsign took 41 secs (1.04 MB/s)
> Realtek took 60 secs (0.71 MB/s)
>
> and again, I couldn't load netfront. I've gotten a couple of kernel
> xen/xen.gz updates in quick succession in the last week. It's still
> xen.gz 3.1.0-rc7, but I won't swear it's the same changeset, or set of
> patches, since the last time I loaded netfront, and that kernel is long
> gone (since fedora only keeps the last two). Sorry I couldn't verify
> your hunch, but I think you are right.
>

--
Emre Erenoglu
erenoglu@gmail.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Feb-27 10:43 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Wednesday 27 February 2008 05:06:00 am Emre ERENOGLU wrote:
> Yesterday I tried to copy a file of around 700 MB from Dom0 to DomU, and
> got similar results: around 2 MByte/sec.
>
> Seemed pretty slow to me.

But at least it's not a couple of 100 kBps. There is still a lot of
overhead in hvm, intercepting privileged instructions.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-Feb-27 11:09 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Wed, Feb 27, 2008 at 05:43:39AM -0500, jim burns wrote:
> On Wednesday 27 February 2008 05:06:00 am Emre ERENOGLU wrote:
> > Yesterday I tried to copy a file of around 700 MB from Dom0 to DomU,
> > and got similar results: around 2 MByte/sec.
> >
> > Seemed pretty slow to me.
>
> But at least it's not a couple of 100 kBps. There is still a lot of
> overhead in hvm, intercepting privileged instructions.

This 2 MByte/sec result sounds really poor..

Xensource and VirtualIron claim "near bare metal performance" with PV
drivers (for Windows)..

What's wrong with these drivers?

Or is the test somehow wrong? How *exactly* do you measure the performance?
What's the *exact* setup and configuration?

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Emre ERENOGLU
2008-Feb-27 22:54 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi,

Just did two other tests. Simple ones again :)

1) I installed a high performance copy handler utility, selected a 20 MB
buffer size, and tried to copy a 700 MB file from the dom0. Windows task
manager reports 100% CPU usage (all kernel) and 2% utilization on the
network interface (which reports as gigabit eth).

2) To avoid disk impact on the measurement, I opened a cmd prompt and
did the same copy, but this time copying to the NUL device:

z:\>copy image.iso NUL

Same result: 100% CPU utilization, 2% network bandwidth utilization,
i.e. 2% of ~1000 Mbps = ~20 Mbps.

My PV drivers version is 0.8.0

Emre

On Wed, Feb 27, 2008 at 12:09 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Wed, Feb 27, 2008 at 05:43:39AM -0500, jim burns wrote:
> > On Wednesday 27 February 2008 05:06:00 am Emre ERENOGLU wrote:
> > > Yesterday I tried to copy a file of around 700 MB from Dom0 to DomU,
> > > and got similar results: around 2 MByte/sec.
> > >
> > > Seemed pretty slow to me.
> >
> > But at least it's not a couple of 100 kBps. There is still a lot of
> > overhead in hvm, intercepting privileged instructions.
>
> This 2 MByte/sec result sounds really poor..
>
> Xensource and VirtualIron claim "near bare metal performance" with PV
> drivers (for Windows)..
>
> What's wrong with these drivers?
>
> Or is the test somehow wrong? How *exactly* do you measure the
> performance? What's the *exact* setup and configuration?
>
> -- Pasi
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users
>

--
Emre Erenoglu
erenoglu@gmail.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Feb-27 23:31 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Wednesday 27 February 2008 05:54:06 pm Emre ERENOGLU wrote:
> 1) I installed a high performance copy handler utility, selected a 20 MB
> buffer size, and tried to copy a 700 MB file from the dom0. Windows task
> manager reports 100% CPU usage (all kernel) and 2% utilization on the
> network interface (which reports as gigabit eth).

Yeah, Windows task manager consistently reports higher bit rates than
what I actually measure by file size/clock time, and how long the copy
takes is the only really meaningful number.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Feb-27 23:48 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Wednesday 27 February 2008 06:09:24 am Pasi Kärkkäinen wrote:
> This 2 MByte/sec result sounds really poor..
>
> Xensource and VirtualIron claim "near bare metal performance" with PV
> drivers (for Windows)..
>
> What's wrong with these drivers?

Yeah, I'm disappointed at the open source, or at least free and
non-proprietary, solutions offered so far. I'd love to know how the
proprietary drivers get better performance.

> Or is the test somehow wrong? How *exactly* do you measure the performance?
> What's the *exact* setup and configuration?

My network benchmark post seems to have disappeared from the archives
(unless I accidentally sent that as a private mail to Emre). Basically,
it's a very simplistic timed file copy from domu to dom0. I rebooted the
domu with the driver I was testing, and after the cpu load died down
from initialization, I copied a 40MB file and divided the file size by
the wall clock time in secs. Then I rebooted and tried the next driver.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2008-Feb-28 01:09 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> > Xensource and VirtualIron claim "near bare metal performance" with PV
> > drivers (for Windows)..
> >
> > What's wrong with these drivers?
>
> Yeah, I'm disappointed at the open source, or at least free and
> non-proprietary, solutions offered so far. I'd love to know how the
> proprietary drivers get better performance.

Well... feel free to submit any performance patches you like :)

My initial testing showed much lower latency and better throughput, but
I've not done any testing in a while so performance may have dropped.

Some brief testing I just did showed I was able to copy a 200MB file
from a DomU (W2K3 running my PV drivers) to my xp laptop (1gb Ethernet
all the way) in 42 seconds, which is about 5MB/second. CPU load was 100%
on 1 CPU.

The same operation using non-PV drivers was around 65 seconds. CPU load
was all over the place, but around 60% on one CPU and 50% on the other.

I think performance could be a little bit better than that!

Andy: Any suggestions? Do you think your last cleanup commit could have
reduced performance at all?

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Feb-28 01:29 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Wednesday 27 February 2008 08:09:56 pm James Harper wrote:
> Some brief testing I just did showed I was able to copy a 200MB file
> from a DomU (W2K3 running my PV drivers) to my xp laptop (1gb Ethernet
> all the way) in 42 seconds, which is about 5MB/second. CPU load was 100%
> on 1 CPU.

Just out of curiosity, is this a 64-bit system? dom0 or domu?

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Emre ERENOGLU
2008-Feb-28 01:38 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Jim,

you don't need to use these drivers at all. So let's be more supportive
of James; at least he's doing whatever is possible. I'm sure we'll find
the reason for the low performance. I would also love to see some
independent confirmation that the "proprietary solutions" work faster.

My system is fully 32-bit, 1 cpu domu, 2 cpu dom0.

James, your system being a multiprocessor domU may explain the 2 MB/sec
I see and 5 MB/sec you see. Why is this network driver so dependent on
the CPU? Is that normal? I saw such behaviour in wireless drivers in the
past.

Emre

On Thu, Feb 28, 2008 at 2:29 AM, jim burns <jim_burn@bellsouth.net> wrote:

> On Wednesday 27 February 2008 08:09:56 pm James Harper wrote:
> > Some brief testing I just did showed I was able to copy a 200MB file
> > from a DomU (W2K3 running my PV drivers) to my xp laptop (1gb Ethernet
> > all the way) in 42 seconds, which is about 5MB/second. CPU load was 100%
> > on 1 CPU.
>
> Just out of curiosity, is this a 64-bit system? dom0 or domu?
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users
>

--
Emre Erenoglu
erenoglu@gmail.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Andy Grover
2008-Feb-28 01:46 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
James Harper wrote:
>>> Xensource and VirtualIron claim "near bare metal performance" with PV
>>> drivers (for Windows)..
>
> I think performance could be a little bit better than that!
>
> Andy: Any suggestions? Do you think your last cleanup commit could have
> reduced performance at all?

It'd be great to get real numbers instead of vague performance claims. I
think eval versions of at least some other PV drivers are available --
Novell? XenSource? -- if someone was willing to do some benchmarking to
see how things fall out for both send and receive. I would not be at all
surprised if our drivers performed worse, since we haven't tuned them
much at all yet. But we need data.

For TX, we are performing a copy to linearize the packet because for
each packet Windows gives us 3 little + 1 big buffer, and I thought we
wouldn't want to spend 4 grant refs per packet. I think that's the right
thing to do, but who knows. We also don't implement large sends.

Other culprits might be lock contention, or maybe we need to add
interrupt moderation to reduce the number of interrupts per packet.

Regards -- Andy

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2008-Feb-28 01:50 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> On Wednesday 27 February 2008 08:09:56 pm James Harper wrote:
> > Some brief testing I just did showed I was able to copy a 200MB file
> > from a DomU (W2K3 running my PV drivers) to my xp laptop (1gb Ethernet
> > all the way) in 42 seconds, which is about 5MB/second. CPU load was 100%
> > on 1 CPU.
>
> Just out of curiosity, is this a 64-bit system? dom0 or domu?

64 bit xen
64 bit dom0
32 bit domu

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Feb-28 01:56 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Wednesday 27 February 2008 08:38:59 pm Emre ERENOGLU wrote:
> you don't need to use these drivers at all. So let's be more supportive
> of James; at least he's doing whatever is possible. I'm sure we'll find
> the reason for the low performance. I would also love to see some
> independent confirmation that the "proprietary solutions" work faster.

I'm more than supportive of the development of hvm pv drivers, and
praise James for the effort he's putting in. I hope that my critique is
taken as constructive comment on which areas need more work.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2008-Feb-28 04:16 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> My system is fully 32-bit, 1 cpu domu, 2 cpu dom0.
>
> James, your system being a multiprocessor domU may explain the 2 MB/sec
> I see and 5 MB/sec you see. Why is this network driver so dependent on
> the CPU? Is that normal? I saw such behaviour in wireless drivers in
> the past.

I'm getting around the same results with vcpus=1 too. If the copy speeds
are CPU bound then the results will be highly dependent on CPU speed...

Can anyone suggest any network benchmarking software that works under
Windows and Linux?

Thanks

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2008-Feb-28 06:23 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
>
> Can anyone suggest any network benchmarking software that works under
> Windows and Linux?
>

Following up on myself... I am going to use this:

http://www.ars.de/ars/ars.nsf/docs/netio

unless someone can suggest anything better. It comes with windows and
linux binaries and appears to be dead easy to use.

It also just hangs on the Rx test when I run it using the PV drivers,
which might be a clue as to a problem...

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Fajar A. Nugraha
2008-Feb-28 07:19 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
James Harper wrote:
>> Can anyone suggest any network benchmarking software that works under
>> Windows and Linux?
>
> Following up on myself... I am going to use this:
>
> http://www.ars.de/ars/ars.nsf/docs/netio
>
> unless someone can suggest anything better.

Try iperf

The home page seems to be dead, but windows binary is here
http://www.noc.ucf.edu/Tools/Iperf/default.htm

Linux (RHEL/CentOS, at least) binary should be available from dag/rpmforge

--
Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-Feb-28 11:07 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thu, Feb 28, 2008 at 02:19:40PM +0700, Fajar A. Nugraha wrote:
> James Harper wrote:
> >> Can anyone suggest any network benchmarking software that works under
> >> Windows and Linux?
> >
> > Following up on myself... I am going to use this:
> >
> > http://www.ars.de/ars/ars.nsf/docs/netio
> >
> > unless someone can suggest anything better.
>
> Try iperf
>
> The home page seems to be dead, but windows binary is here
> http://www.noc.ucf.edu/Tools/Iperf/default.htm
>
> Linux (RHEL/CentOS, at least) binary should be available from dag/rpmforge
>

I can recommend iperf too.

Make sure you use the same iperf version everywhere.

With iperf you can measure TCP throughput with one or more threads, and
also UDP throughput.. which also gives you packet loss statistics, which
might be good to know to figure out the performance problems..

(TCP automatically corrects/retransmits errored packets, so with TCP you
just see poor performance in case of network/driver problems. With UDP
you can get the actual statistics about packets transferred and dropped
and figure out the reasons.)

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
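[A typical pairing of the commands described above, for reference (the
host name is a placeholder; these flags match the invocations used later
in this thread):

    iperf -s                                     # dom0: TCP listener
    iperf -s -u                                  # dom0: UDP listener
    iperf -c dom0-hostname -t 60                 # client: 60-second TCP run
    iperf -c dom0-hostname -u -b 1000000 -t 60   # client: 1 Mbit/s UDP run

The UDP server side reports throughput, jitter, and packet loss at the
end of each run.]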
Pasi Kärkkäinen
2008-Feb-28 11:09 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thu, Feb 28, 2008 at 02:38:59AM +0100, Emre ERENOGLU wrote:
> Jim,
>
> you don't need to use these drivers at all. So let's be more supportive
> of James; at least he's doing whatever is possible. I'm sure we'll find
> the reason for the low performance. I would also love to see some
> independent confirmation that the "proprietary solutions" work faster.
>
> My system is fully 32-bit, 1 cpu domu, 2 cpu dom0.
>
> James, your system being a multiprocessor domU may explain the 2 MB/sec
> I see and 5 MB/sec you see. Why is this network driver so dependent on
> the CPU? Is that normal? I saw such behaviour in wireless drivers in
> the past.
>

Is there a way to profile these drivers on Windows? To see what actually
uses the CPU so much..

-- Pasi

> Emre
>
> On Thu, Feb 28, 2008 at 2:29 AM, jim burns <jim_burn@bellsouth.net> wrote:
>
> > On Wednesday 27 February 2008 08:09:56 pm James Harper wrote:
> > > Some brief testing I just did showed I was able to copy a 200MB file
> > > from a DomU (W2K3 running my PV drivers) to my xp laptop (1gb Ethernet
> > > all the way) in 42 seconds, which is about 5MB/second. CPU load was 100%
> > > on 1 CPU.
> >
> > Just out of curiosity, is this a 64-bit system? dom0 or domu?
> >

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
James Harper
2008-Feb-28 11:40 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
>
> Is there a way to profile these drivers on Windows? To see what actually
> uses the CPU so much..
>

I've actually just started doing this. The only thing I could think of
is to use the TSC via the KeQueryPerformanceCounter function. I'm not
sure how well that works under virtualisation, and if a context switch
occurs under Windows it will throw things off, but as an average it
might still be useful.

I can definitely see some indication of the routines we should be
looking at...

James

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Feb-29 01:48 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> On Thu, Feb 28, 2008 at 02:19:40PM +0700, Fajar A. Nugraha wrote:
> > Try iperf
> >
> > The home page seems to be dead, but windows binary is here
> > http://www.noc.ucf.edu/Tools/Iperf/default.htm
> >
> > Linux (RHEL/CentOS, at least) binary should be available from
> > dag/rpmforge
>
> I can recommend iperf too.
>
> Make sure you use the same iperf version everywhere.
>
> With iperf you can measure TCP throughput with one or more threads, and
> also UDP throughput.. which also gives you packet loss statistics, which
> might be good to know to figure out the performance problems..
>
> (TCP automatically corrects/retransmits errored packets, so with TCP you
> just see poor performance in case of network/driver problems. With UDP
> you can get the actual statistics about packets transferred and dropped
> and figure out the reasons.)

The home page seems to be up now ( http://dast.nlanr.net/Projects/Iperf/ ).
It makes reference to 'patch for linux-2.6.21 kernel and above'. Did you
guys do that, or just install the binary? Thanx.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-Feb-29 08:53 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thu, Feb 28, 2008 at 08:48:32PM -0500, jim burns wrote:
> On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> > I can recommend iperf too.
> >
> > Make sure you use the same iperf version everywhere.
>
> The home page seems to be up now ( http://dast.nlanr.net/Projects/Iperf/ ).
> It makes reference to 'patch for linux-2.6.21 kernel and above'. Did you
> guys do that, or just install the binary? Thanx.
>

I've been using the binary (v2.0.2) without any kernel patching, mostly
with 2.6.18 kernels.

It is available in Debian etch 4.0 with apt-get.

iperf rpm packages for rhel/centos:
http://dag.wieers.com/rpm/packages/iperf/

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
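[Getting iperf onto the distributions mentioned above is a one-liner on
each side (assuming the dag/rpmforge repository is already configured on
the RHEL/CentOS box):

    apt-get install iperf    # Debian etch
    yum install iperf        # RHEL/CentOS with rpmforge enabled
]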
Joris Dobbelsteen
2008-Feb-29 22:36 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
I used netperf. It seems to be available for linux too, but so far I
have only tried the Windows part of it.

<http://www.netperf.org/netperf/>

It contains a lot of tests, including TCP and UDP. Performance is highly
variable: on the same network and with the same hardware I achieved 250,
50 and 800 Mbps doing TCP over Gigabit with varying driver versions (and
yes, that's really 50 Mbps over gigabit).

- Joris

> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of James Harper
> Sent: Thursday, 28 February 2008 5:17
> To: Emre ERENOGLU; jim burns
> Cc: xen-users@lists.xensource.com
> Subject: RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
>
> > My system is fully 32-bit, 1 cpu domu, 2 cpu dom0.
> >
> > James, your system being a multiprocessor domU may explain the 2 MB/sec
> > I see and 5 MB/sec you see. Why is this network driver so dependent on
> > the CPU? Is that normal? I saw such behaviour in wireless drivers in
> > the past.
>
> I'm getting around the same results with vcpus=1 too. If the copy speeds
> are CPU bound then the results will be highly dependent on CPU speed...
>
> Can anyone suggest any network benchmarking software that works under
> Windows and Linux?
>
> Thanks
>
> James
>

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
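[A minimal netperf session, for comparison with the iperf invocations
above (the host name is a placeholder):

    netserver                                      # receiving side
    netperf -H receiver-host -l 60 -t TCP_STREAM   # 60-second TCP stream
    netperf -H receiver-host -l 60 -t UDP_STREAM   # 60-second UDP stream

netperf prints the achieved throughput on the sending side when each
test finishes.]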
jim burns
2008-Mar-01 14:21 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> I can recommend iperf too.
>
> Make sure you use the same iperf version everywhere.

Ok, here's my results.

Equipment: core duo 2300, 1.66ghz each, sata drive configured for UDMA/100
System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
Tested hvm: XP Pro SP2, 2002

Method:

The version tested was 1.7.0, to avoid having to apply the kernel patch
that comes with 2.0.2. The binaries downloaded were from the project
homepage http://dast.nlanr.net/Projects/Iperf/#download. For linux, I
chose the 'Linux libc 2.3' binary, and (on fc8 at least) I still had to
install the compat-libstdc++-33 package to get it to run.

The server/listening side was always the dom0, invoked with 'iperf -s'.
The first machine is a linux fc8 pv domu, the second is another machine
on my subnet with a 100Mbps nic pipeline in between, and the rest are
the various drivers on a winxp hvm. The invoked command was 'iperf -c
dom0-hostname -t 60'. '-t 60' sets the runtime to 60 secs. I used the
default buffer size (8k), mss/mtu, and window size (which actually
varies between the client and the server). I averaged 3 tcp runs.

For the udp tests, the default bandwidth is 1 Mbps (add the '-b 1000000'
flag to the command above). I added or subtracted a 0 till I got a
packet loss percentage of more than 0% and less than 5%, or an observed
throughput significantly less than the request (in other words, a stress
test). In the table below, 'udp Mbps' is the observed rate, and '-b Mbps'
is the requested rate. (The server has to be invoked with 'iperf -s -u'.)

machine   | tcp Mbps | udp Mbps | -b Mbps | udp packet loss
fc8 domu  |   1563   |   48.6   |   100   |   .08%
on subnet |   79.8   |    5.4   |    10   |   3.5%
gplpv     |   19.8   |    2.0   |    10   |   0.0%
realtek   |    9.6   |    1.8   |    10   |   0.0%

Conclusions: The pv domu tcp rate is a blistering 1.5 Gbps, showing that
a software nic *can* be even faster than a 100 Mbps hardware nic, at
least for pv. The machine on the same subnet ('on subnet') achieved 80%
of the max rate supported by the hardware. Presumably, since the udp
rates are consistently less than the tcp ones, there were a lot of tcp
retransmits. gplpv is twice as fast as realtek for tcp, and about the
same for udp. 19.8/8 = ~2.5 MBps, which is about the rate I was getting
with my domu to dom0 file copies. I don't expect pv data rates from an
hvm, but it should be interesting to see how much faster James & Andy
can get this to go. Btw, this was gplpv 0.8.4.

Actually, pretty good work so far guys!

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Emre ERENOGLU
2008-Mar-01 14:50 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Hi Jim,

Thanks for the test, great to have this information.

I'm really wondering about the performance of the "unmodified_drivers"
in the xen package, which can be compiled in an HVM Linux DomU to get
paravirtual drivers for disks and the ethernet card. When I tested these
on Xen 3.1 on a Pardus Linux DomU, I was getting very similar
performance on -disks- with hdparm. No other "reliable" tests were
performed. I also didn't test the network card.

Emre

On Sat, Mar 1, 2008 at 3:21 PM, jim burns <jim_burn@bellsouth.net> wrote:

> On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> > I can recommend iperf too.
> >
> > Make sure you use the same iperf version everywhere.
>
> Ok, here's my results.
>
> Equipment: core duo 2300, 1.66ghz each, sata drive configured for UDMA/100
> System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
> Tested hvm: XP Pro SP2, 2002
>
> Method:
>
> The version tested was 1.7.0, to avoid having to apply the kernel patch
> that comes with 2.0.2. The binaries downloaded were from the project
> homepage http://dast.nlanr.net/Projects/Iperf/#download. For linux, I
> chose the 'Linux libc 2.3' binary, and (on fc8 at least) I still had to
> install the compat-libstdc++-33 package to get it to run.
>
> The server/listening side was always the dom0, invoked with 'iperf -s'.
> The first machine is a linux fc8 pv domu, the second is another machine
> on my subnet with a 100Mbps nic pipeline in between, and the rest are
> the various drivers on a winxp hvm. The invoked command was 'iperf -c
> dom0-hostname -t 60'. '-t 60' sets the runtime to 60 secs. I used the
> default buffer size (8k), mss/mtu, and window size (which actually
> varies between the client and the server). I averaged 3 tcp runs.
>
> For the udp tests, the default bandwidth is 1 Mbps (add the '-b 1000000'
> flag to the command above). I added or subtracted a 0 till I got a
> packet loss percentage of more than 0% and less than 5%, or an observed
> throughput significantly less than the request (in other words, a stress
> test). In the table below, 'udp Mbps' is the observed rate, and '-b Mbps'
> is the requested rate. (The server has to be invoked with 'iperf -s -u'.)
>
> machine   | tcp Mbps | udp Mbps | -b Mbps | udp packet loss
> fc8 domu  |   1563   |   48.6   |   100   |   .08%
> on subnet |   79.8   |    5.4   |    10   |   3.5%
> gplpv     |   19.8   |    2.0   |    10   |   0.0%
> realtek   |    9.6   |    1.8   |    10   |   0.0%
>
> Conclusions: The pv domu tcp rate is a blistering 1.5 Gbps, showing that
> a software nic *can* be even faster than a 100 Mbps hardware nic, at
> least for pv. The machine on the same subnet ('on subnet') achieved 80%
> of the max rate supported by the hardware. Presumably, since the udp
> rates are consistently less than the tcp ones, there were a lot of tcp
> retransmits. gplpv is twice as fast as realtek for tcp, and about the
> same for udp. 19.8/8 = ~2.5 MBps, which is about the rate I was getting
> with my domu to dom0 file copies. I don't expect pv data rates from an
> hvm, but it should be interesting to see how much faster James & Andy
> can get this to go. Btw, this was gplpv 0.8.4.
>
> Actually, pretty good work so far guys!
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users
>

--
Emre Erenoglu
erenoglu@gmail.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
jim burns
2008-Mar-01 15:51 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Saturday 01 March 2008 09:50:13 am Emre ERENOGLU wrote:
> When I tested these on Xen 3.1 on a Pardus Linux DomU, I was getting very
> similar performance on -disks- with hdparm. No other "reliable" tests
> were performed. I also didn't test the network card.

Did you compare the Pardus standard domu performance with Pardus +
'unmodified drivers'?

Btw, thanx for mentioning those drivers. The README was confusing to me
about whether those drivers go into the domu or dom0, but in either
case, it looked like they wouldn't help with a Windows domu.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Emre ERENOGLU
2008-Mar-01 19:32 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
Well, I didn't compare a real paravirtual Pardus DomU (I'm sure it
matches your values previously posted) with the unmodified_drivers. I
can do that if I can get the system back up :)

Emre

On Sat, Mar 1, 2008 at 4:51 PM, jim burns <jim_burn@bellsouth.net> wrote:

> On Saturday 01 March 2008 09:50:13 am Emre ERENOGLU wrote:
> > When I tested these on Xen 3.1 on a Pardus Linux DomU, I was getting very
> > similar performance on -disks- with hdparm. No other "reliable" tests
> > were performed. I also didn't test the network card.
>
> Did you compare the Pardus standard domu performance with Pardus +
> 'unmodified drivers'?
>
> Btw, thanx for mentioning those drivers. The README was confusing to me
> about whether those drivers go into the domu or dom0, but in either
> case, it looked like they wouldn't help with a Windows domu.
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users
>

--
Emre Erenoglu
erenoglu@gmail.com

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-Mar-01 22:38 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sat, Mar 01, 2008 at 09:21:24AM -0500, jim burns wrote:
> On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> > I can recommend iperf too.
> >
> > Make sure you use the same iperf version everywhere.
>
> Ok, here's my results.
>
> Equipment: core duo 2300, 1.66ghz each, sata drive configured for UDMA/100
> System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
> Tested hvm: XP Pro SP2, 2002
>

What NIC do you have?

What driver and version of the driver? "ethtool -i device"

Did you try disabling checksum offloading? "ethtool -K ethX tx off"
Try that on dom0 and/or on domU. Maybe also "ethtool -K ethX tso off"

Does your ethX interface have errors? Check with "ifconfig ethX".

Do you have tcp retransmits? Check with "netstat -s".

> Method:
>
> The version tested was 1.7.0, to avoid having to apply the kernel patch
> that comes with 2.0.2. The binaries downloaded were from the project
> homepage http://dast.nlanr.net/Projects/Iperf/#download. For linux, I
> chose the 'Linux libc 2.3' binary, and (on fc8 at least) I still had to
> install the compat-libstdc++-33 package to get it to run.
>
> The server/listening side was always the dom0, invoked with 'iperf -s'.
> The first machine is a linux fc8 pv domu, the second is another machine
> on my subnet with a 100Mbps nic pipeline in between, and the rest are
> the various drivers on a winxp hvm. The invoked command was 'iperf -c
> dom0-hostname -t 60'. '-t 60' sets the runtime to 60 secs. I used the
> default buffer size (8k), mss/mtu, and window size (which actually
> varies between the client and the server). I averaged 3 tcp runs.
>

I think it might be a good idea to "force" a good/big tcp window size to
get comparable results..

> For the udp tests, the default bandwidth is 1 Mbps (add the '-b 1000000'
> flag to the command above). I added or subtracted a 0 till I got a
> packet loss percentage of more than 0% and less than 5%, or an observed
> throughput significantly less than the request (in other words, a stress
> test). In the table below, 'udp Mbps' is the observed rate, and '-b Mbps'
> is the requested rate. (The server has to be invoked with 'iperf -s -u'.)
>
> machine   | tcp Mbps | udp Mbps | -b Mbps | udp packet loss
> fc8 domu  |   1563   |   48.6   |   100   |   .08%
> on subnet |   79.8   |    5.4   |    10   |   3.5%
> gplpv     |   19.8   |    2.0   |    10   |   0.0%
> realtek   |    9.6   |    1.8   |    10   |   0.0%
>
> Conclusions: The pv domu tcp rate is a blistering 1.5 Gbps, showing that
> a software nic *can* be even faster than a 100 Mbps hardware nic, at
> least for pv. The machine on the same subnet ('on subnet') achieved 80%
> of the max rate supported by the hardware. Presumably, since the udp
> rates are consistently less than the tcp ones, there were a lot of tcp
> retransmits. gplpv is twice as fast as realtek for tcp, and about the
> same for udp. 19.8/8 = ~2.5 MBps, which is about the rate I was getting
> with my domu to dom0 file copies. I don't expect pv data rates from an
> hvm, but it should be interesting to see how much faster James & Andy
> can get this to go. Btw, this was gplpv 0.8.4.
>
> Actually, pretty good work so far guys!
>

Thanks for the benchmarks!

I find it weird that you get "only" 80 Mbit/sec from the physical
network to dom0.. You should be able to easily reach near 100 Mbit/sec
from/to the LAN.

And the UDP results are really weird.. something is causing a lot of
errors..

Some things to check (see the consolidated example after this message):

- txqueuelen of the ethX device. I guess 1000 is the default nowadays..
try with bigger values too. This applies to dom0 and to a linux domU.

- txqueuelen of the vifX.Y devices on dom0. The default has been really
small, so make sure to configure that bigger too.. This applies to both
linux and windows vm's.

- Check the sysctl net.core.netdev_max_backlog setting.. it should be at
least 1000, possibly even more.. this applies to dom0 and a linux domU.

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
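[Consolidated into commands as they would be run on dom0 (a sketch only;
the interface and vif names are examples taken from this thread, and the
values are the minimums suggested above):

    ifconfig peth0 txqueuelen 1000               # physical NIC queue
    ifconfig vif4.0 txqueuelen 1000              # guest's backend vif (defaults to 32)
    sysctl -w net.core.netdev_max_backlog=1000   # receive backlog
    ethtool -K peth0 tx off                      # checksum offload experiment
    ethtool -K peth0 tso off                     # TSO experiment

The vifX.Y numbering follows the domain id, so it changes when the guest
is restarted.]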
jim burns
2008-Mar-02 03:25 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Saturday 01 March 2008 05:38:55 pm you wrote:
> What NIC do you have?

The numbers for my physical nic were just included for comparison
purposes, to show what the range of possible rates could be. I don't
really care about those numbers because all my active vms are on file
backed vbds on the xen server's (fc8) disk. All the other, more
significant #s were for software nics, with no intermediate hardware
nics. However:

> What driver and version of the driver? "ethtool -i device"

On the client side (SuSE 10.3):

[815] > ethtool -i eth0
driver: e100
version: 3.5.17-k4-NAPI
firmware-version: N/A
bus-info: 0000:02:08.0
[816] > lspci|grep 02:08.0
02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (LOM)
Ethernet Controller (rev 81)

and on the server side (fc8):

[717] > ethtool -i peth0
driver: b44
version: 1.01
firmware-version:
bus-info: 0000:03:00.0
jimb@Insp6400 03/01/08 9:58PM:~
[718] > lspci|grep 03:00.0
03:00.0 Ethernet controller: Broadcom Corporation BCM4401-B0 100Base-TX (rev
02)

> Did you try disabling checksum offloading? "ethtool -K ethX tx off"
> Try that on dom0 and/or on domU. Maybe also "ethtool -K ethX tso off"

Why would I do that? That's not how I operate normally. Doesn't that
take checksumming out of the hardware and put it in software, slowing
things down? What are the advantages here? However, my current settings
are:

SuSE:
[817] > ethtool -k eth0
Offload parameters for eth0:
Cannot get device rx csum settings: Operation not supported
Cannot get device tx csum settings: Operation not supported
Cannot get device scatter-gather settings: Operation not supported
Cannot get device tcp segmentation offload settings: Operation not supported
Cannot get device udp large send offload settings: Operation not supported
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: off

fc8:
[720] > ethtool -k peth0
Offload parameters for peth0:
Cannot get device rx csum settings: Operation not supported
Cannot get device tx csum settings: Operation not supported
Cannot get device scatter-gather settings: Operation not supported
Cannot get device tcp segmentation offload settings: Operation not supported
Cannot get device udp large send offload settings: Operation not supported
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: off

> Does your ethX interface have errors? Check with "ifconfig ethX".
>
> Do you have tcp retransmits? Check with "netstat -s".

Before a test, on the fc8 side:
[721] > ifconfig peth0; netstat -s
peth0 Link encap:Ethernet HWaddr 00:15:C5:04:7D:4F
inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1
RX packets:10566824 errors:0 dropped:219 overruns:0 frame:0
TX packets:12540392 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4070940466 (3.7 GiB) TX bytes:3443253043 (3.2 GiB)
Interrupt:22
Ip:
16735946 total packets received
5 with invalid headers
1 with invalid addresses
0 forwarded
0 incoming packets discarded
16516069 incoming packets delivered
16444246 requests sent out
8 dropped because of missing route
217 fragments dropped after timeout
363681 reassemblies required
163328 packets reassembled ok
37025 packet reassembles failed
11 fragments received ok
22 fragments created
Icmp:
125737 ICMP messages received
232 input ICMP message failed.
ICMP input histogram: destination unreachable: 7181 echo requests: 59164 echo replies: 59160 59195 ICMP messages sent 0 ICMP messages failed ICMP output histogram: destination unreachable: 31 echo replies: 59164 Tcp: 6370 active connections openings 278 passive connection openings 5616 failed connection attempts 19 connection resets received 9 connections established 15612021 segments received 16291429 segments send out 296 segments retransmited 0 bad segments received. 5834 resets sent Udp: 758744 packets received 29 packets to unknown port received. 10611 packet receive errors 29592 packets sent RcvbufErrors: 10611 UdpLite: TcpExt: 5 packets pruned from receive queue because of socket buffer overrun 422 TCP sockets finished time wait in fast timer 241 packets rejects in established connections because of timestamp 234266 delayed acks sent 1245 delayed acks further delayed because of locked socket Quick ack mode was activated 720 times 2278561 packets directly queued to recvmsg prequeue. 69369816 packets directly received from backlog 3065751728 packets directly received from prequeue 5402811 packets header predicted 2206227 packets header predicted and directly queued to user 418685 acknowledgments not containing data received 7621128 predicted acknowledgments 5 times recovered from packet loss due to SACK data Detected reordering 3 times using FACK 1 congestion windows fully recovered 3 congestion windows partially recovered using Hoe heuristic TCPDSACKUndo: 3 37 congestion windows recovered after partial ack 0 TCP data loss events 3 fast retransmits 6 forward retransmits 1 retransmits in slow start 113 other TCP timeouts 1 sack retransmits failed 307 times receiver scheduled too late for direct processing 113 packets collapsed in receive queue due to low socket buffer 730 DSACKs sent for old packets 37 DSACKs received 10 connections reset due to unexpected data 160 connections reset due to early user close 19 connections aborted due to timeout and after: [722] > iperf -s ------------------------------------------------------------ Server listening on TCP port 5001 TCP window size: 85.3 KByte (default) ------------------------------------------------------------ [ 4] local 192.168.1.100 port 5001 connected with 192.168.1.101 port 26433 [ ID] Interval Transfer Bandwidth [ 4] 0.0-60.1 sec 539 MBytes 75.3 Mbits/sec jimb@Insp6400 03/01/08 10:08PM:~ [723] > ifconfig peth0; netstat -s peth0 Link encap:Ethernet HWaddr 00:15:C5:04:7D:4F inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1 RX packets:10962754 errors:0 dropped:237 overruns:0 frame:0 TX packets:12715673 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:369386256 (352.2 MiB) TX bytes:3459549868 (3.2 GiB) Interrupt:22 Ip: 17132793 total packets received 5 with invalid headers 1 with invalid addresses 0 forwarded 0 incoming packets discarded 16912908 incoming packets delivered 16620447 requests sent out 8 dropped because of missing route 217 fragments dropped after timeout 363681 reassemblies required 163328 packets reassembled ok 37025 packet reassembles failed 11 fragments received ok 22 fragments created Icmp: 125741 ICMP messages received 232 input ICMP message failed. 
ICMP input histogram:
destination unreachable: 7185
echo requests: 59164
echo replies: 59160
59195 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 31
echo replies: 59164
Tcp:
6382 active connections openings
279 passive connection openings
5628 failed connection attempts
19 connection resets received
9 connections established
16008836 segments received
16467614 segments send out
296 segments retransmited
0 bad segments received.
5846 resets sent
Udp:
758760 packets received
29 packets to unknown port received.
10611 packet receive errors
29606 packets sent
RcvbufErrors: 10611
UdpLite:
TcpExt:
5 packets pruned from receive queue because of socket buffer overrun
422 TCP sockets finished time wait in fast timer
286 packets rejects in established connections because of timestamp
234432 delayed acks sent
1270 delayed acks further delayed because of locked socket
Quick ack mode was activated 765 times
2539167 packets directly queued to recvmsg prequeue.
79852248 packets directly received from backlog
3460281584 packets directly received from prequeue
5505468 packets header predicted
2487426 packets header predicted and directly queued to user
418832 acknowledgments not containing data received
7624029 predicted acknowledgments
5 times recovered from packet loss due to SACK data
Detected reordering 3 times using FACK
1 congestion windows fully recovered
3 congestion windows partially recovered using Hoe heuristic
TCPDSACKUndo: 3
37 congestion windows recovered after partial ack
0 TCP data loss events
3 fast retransmits
6 forward retransmits
1 retransmits in slow start
113 other TCP timeouts
1 sack retransmits failed
354 times receiver scheduled too late for direct processing
113 packets collapsed in receive queue due to low socket buffer
775 DSACKs sent for old packets
37 DSACKs received
10 connections reset due to unexpected data
160 connections reset due to early user close
19 connections aborted due to timeout

Which shows a modest increase in drops in ifconfig, and no real
significant differences in netstat, except for the TcpExt: section.

> I think it might be a good idea to "force" a good/big tcp window size to
> get comparable results..

I did a couple of 32k window size tests, with not much significant
difference.

> Some things to check:
>
> - txqueuelen of the ethX device. I guess 1000 is the default nowadays..
> try with bigger values too. This applies to dom0 and to a linux domU.
>
> - txqueuelen of the vifX.Y devices on dom0. The default has been really
> small, so make sure to configure that bigger too.. This applies to both
> linux and windows vm's.

I guess this is done with ifconfig, since it appears in ifconfig output.
Is it done with ipconfig for windows?

> - Check the sysctl net.core.netdev_max_backlog setting.. it should be at
> least 1000, possibly even more.. this applies to dom0 and a linux domU.

Where is this set, and what do I have to restart to make it
effective? /etc/sysctl.conf?

In general, are there any downsides in changing these values?

Thanx for your interest.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Pasi Kärkkäinen
2008-Mar-02 09:51 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sat, Mar 01, 2008 at 10:25:18PM -0500, jim burns wrote:
> On Saturday 01 March 2008 05:38:55 pm you wrote:
> > What NIC do you have?
>
> The numbers for my physical nic were just included for comparison
> purposes, to show what the range of possible rates could be. I don't
> really care about those numbers because all my active vms are on file
> backed vbds on the xen server's (fc8) disk. All the other, more
> significant #s were for software nics, with no intermediate hardware
> nics. However:
>

Yep. I asked because of the "bad" 80 Mbit/sec result on a 100 Mbit
network..

> > What driver and version of the driver? "ethtool -i device"
>
> On the client side (SuSE 10.3):
>
> [815] > ethtool -i eth0
> driver: e100
> version: 3.5.17-k4-NAPI
> firmware-version: N/A
> bus-info: 0000:02:08.0
> [816] > lspci|grep 02:08.0
> 02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (LOM)
> Ethernet Controller (rev 81)
>
> and on the server side (fc8):
>
> [717] > ethtool -i peth0
> driver: b44
> version: 1.01
> firmware-version:
> bus-info: 0000:03:00.0
> jimb@Insp6400 03/01/08 9:58PM:~
> [718] > lspci|grep 03:00.0
> 03:00.0 Ethernet controller: Broadcom Corporation BCM4401-B0 100Base-TX (rev
> 02)
>

OK. The e100 NIC is a good one; the b44 is not one of the best NICs out
there..

> > Did you try disabling checksum offloading? "ethtool -K ethX tx off"
> > Try that on dom0 and/or on domU. Maybe also "ethtool -K ethX tso off"
>
> Why would I do that? That's not how I operate normally. Doesn't that
> take checksumming out of the hardware and put it in software, slowing
> things down? What are the advantages here? However, my current settings
> are:
>

That's because with some version of xen and/or the drivers (I'm not sure
which, actually) it was a known fact that performance got bad when you
had hw checksum calculations turned on.. So just to see if that's the
case here.. I guess this was mostly for domU..

> SuSE:
> [817] > ethtool -k eth0
> Offload parameters for eth0:
> Cannot get device rx csum settings: Operation not supported
> Cannot get device tx csum settings: Operation not supported
> Cannot get device scatter-gather settings: Operation not supported
> Cannot get device tcp segmentation offload settings: Operation not supported
> Cannot get device udp large send offload settings: Operation not supported
> rx-checksumming: off
> tx-checksumming: off
> scatter-gather: off
> tcp segmentation offload: off
> udp fragmentation offload: off
> generic segmentation offload: off
>

Maybe try turning on the offloading/checksumming settings here?

> fc8:
> [720] > ethtool -k peth0
> Offload parameters for peth0:
> Cannot get device rx csum settings: Operation not supported
> Cannot get device tx csum settings: Operation not supported
> Cannot get device scatter-gather settings: Operation not supported
> Cannot get device tcp segmentation offload settings: Operation not supported
> Cannot get device udp large send offload settings: Operation not supported
> rx-checksumming: off
> tx-checksumming: off
> scatter-gather: off
> tcp segmentation offload: off
> udp fragmentation offload: off
> generic segmentation offload: off
>

OK. Maybe try turning them on here too, at least for testing with the
physical suse box.. just to see if it has any effect?

> > Does your ethX interface have errors? Check with "ifconfig ethX".
> >
> > Do you have tcp retransmits? Check with "netstat -s".
> > Before a test, on the fc8 side:
> [721] > ifconfig peth0; netstat -s
> peth0 Link encap:Ethernet HWaddr 00:15:C5:04:7D:4F
> inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1
> RX packets:10566824 errors:0 dropped:219 overruns:0 frame:0
> TX packets:12540392 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:4070940466 (3.7 GiB) TX bytes:3443253043 (3.2 GiB)
> Interrupt:22
>
> Tcp:
> 6370 active connections openings
> 278 passive connection openings
> 5616 failed connection attempts
> 19 connection resets received
> 9 connections established
> 15612021 segments received
> 16291429 segments send out
> 296 segments retransmited
> 0 bad segments received.
> 5834 resets sent
>
> and after:
> [722] > iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [ 4] local 192.168.1.100 port 5001 connected with 192.168.1.101 port 26433
> [ ID] Interval       Transfer     Bandwidth
> [ 4]  0.0-60.1 sec   539 MBytes   75.3 Mbits/sec
> jimb@Insp6400 03/01/08 10:08PM:~
> [723] > ifconfig peth0; netstat -s
> peth0 Link encap:Ethernet HWaddr 00:15:C5:04:7D:4F
> inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1
> RX packets:10962754 errors:0 dropped:237 overruns:0 frame:0
> TX packets:12715673 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:369386256 (352.2 MiB) TX bytes:3459549868 (3.2 GiB)
> Interrupt:22
>

Ok, so a bit more drops.. no errors at least.

> Tcp:
> 6382 active connections openings
> 279 passive connection openings
> 5628 failed connection attempts
> 19 connection resets received
> 9 connections established
> 16008836 segments received
> 16467614 segments send out
> 296 segments retransmited
> 0 bad segments received.
> 5846 resets sent

But no more retransmits..

> Which shows a modest increase in drops in ifconfig, and no real
> significant differences in netstat, except for the TcpExt: section.
>

Yep..

> > I think it might be a good idea to "force" a good/big tcp window size
> > to get comparable results..
>
> I did a couple of 32k window size tests, with not much significant
> difference.
>

I usually use at least 256k window sizes :)

Did you try with multiple threads at the same time? Did it have any
effect?

> > Some things to check:
> >
> > - txqueuelen of the ethX device. I guess 1000 is the default nowadays..
> > try with bigger values too. This applies to dom0 and to a linux domU.
> >
> > - txqueuelen of the vifX.Y devices on dom0. The default has been really
> > small, so make sure to configure that bigger too.. This applies to
> > both linux and windows vm's.
>
> I guess this is done with ifconfig, since it appears in ifconfig output.
> Is it done with ipconfig for windows?
>

"ifconfig eth0 txqueuelen <value>" on linux.. I don't know how to do
that in windows.

But it's important to do that on dom0 for the vifX.Y devices.. those are
the dom0 sides of the virtual machine virtual NICs.

> > - Check the sysctl net.core.netdev_max_backlog setting.. it should be
> > at least 1000, possibly even more.. this applies to dom0 and a linux
> > domU.
>
> Where is this set, and what do I have to restart to make it
> effective? /etc/sysctl.conf?
>

Yep, modify /etc/sysctl.conf and run "sysctl -p /etc/sysctl.conf".

> In general, are there any downsides in changing these values?
>http://kb.pert.geant2.net/PERTKB/InterfaceQueueLength There''s something about these settings btw. what was the CPU usage for dom0 and for domU when you did these iperf tests? -- Pasi _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
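For reference, the knobs suggested above can be collected into one dom0 tuning pass. This is only a minimal sketch: the device names (peth0, vif1.0) and the values are illustrative assumptions rather than tested recommendations, and the ethtool calls will simply fail on NICs whose drivers lack those settings.

# run as root on dom0; device names and values are examples only
ethtool -K peth0 tx off                      # toggle tx checksum offload to compare results
ethtool -K peth0 tso off                     # toggle TCP segmentation offload likewise
ifconfig peth0 txqueuelen 1000               # physical NIC queue length (1000 is the usual default)
ifconfig vif1.0 txqueuelen 1000              # dom0 end of one guest NIC; default can be as low as 32
sysctl -w net.core.netdev_max_backlog=1000   # input queue; should be at least 1000
echo "net.core.netdev_max_backlog = 1000" >> /etc/sysctl.conf   # persist across reboots
sysctl -p /etc/sysctl.conf                   # reload the file now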
jim burns
2008-Mar-02 16:52 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sunday 02 March 2008 04:51:47 am Pasi Kärkkäinen wrote:> Yep. I asked because of the "bad" 80 Mbit/sec result on a 100 Mbit network..My guess is my cpu(s) don''t have enough raw power to saturate the nic, but let''s see what happens after your suggestions are implemented.> That''s because with some version of xen and/or drivers (I''m not sure > actually) it was a known fact that performance got bad when you had hw > checksum calculations turned on.. > > So just to see if that''s the case here.. I guess this was mostly for domU..Ahh, because there''s no hardware - makes sense. Alright - let''s try one change/set of related changes at a time to isolate their effect.> > rx-checksumming: off > > tx-checksumming: off > > scatter-gather: off > > tcp segmentation offload: off > > udp fragmentation offload: off > > generic segmentation offload: off > > Maybe try turning on offloading/checksumming settings here?Ok - before any changes, ''iperf -c insp6400 -t 60'' gives 78.7 Mbps. On SuSE: [830] > sudo ethtool -K eth0 tx on Cannot set device tx csum settings: Operation not supported [2] 23551 exit 85 sudo ethtool -K eth0 tx on jimb@Dell4550 03/02/08 10:36AM:~ [831] > sudo ethtool -K eth0 tso on Cannot set device tcp segmentation offload settings: Operation not supported [2] 23552 exit 88 sudo ethtool -K eth0 tso on on fc8: [742] > sudo ethtool -K peth0 tx on Password: Cannot set device tx csum settings: Operation not supported zsh: exit 85 sudo ethtool -K peth0 tx on jimb@Insp6400 03/02/08 10:38AM:~ [743] > sudo ethtool -K peth0 tso on Cannot set device tcp segmentation offload settings: Operation not supported zsh: exit 88 sudo ethtool -K peth0 tso on Ahem - moving on!> I usually use at least 256k window sizes :)Adding ''-w 262144'' on both server and client side, iperf gets 72.1 Mbps. Worse.> Did you try with multiple threads at the same time? Did it have any effect?Adding ''-P 4'' to client, iperf gets an aggregate rate of 74.5 Mbps. Worse.> "ifconfig eth0 txqueuelen <value>" on linux.. I don''t know how to do that > in windows. > > But it''s important to do that on dom0 for vifX.Y devices.. those are the > dom0 sides of the virtual machine virtual NICs.[740] > ifconfig [...] 
peth0 Link encap:Ethernet HWaddr 00:15:C5:04:7D:4F inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1 RX packets:11545320 errors:0 dropped:237 overruns:0 frame:0 TX packets:13476839 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:421343991 (401.8 MiB) TX bytes:4224204231 (3.9 GiB) Interrupt:22 tap0 Link encap:Ethernet HWaddr 0E:92:BB:CA:D8:DA inet6 addr: fe80::c92:bbff:feca:d8da/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:14090 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:0 (0.0 b) TX bytes:7860525 (7.4 MiB) vif4.0 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2595441 errors:0 dropped:0 overruns:0 frame:0 TX packets:923150 errors:0 dropped:795 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:3366016758 (3.1 GiB) TX bytes:166299319 (158.5 MiB) vif16.0 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:9848 errors:0 dropped:0 overruns:0 frame:0 TX packets:20828 errors:0 dropped:4698 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:834219 (814.6 KiB) TX bytes:16017488 (15.2 MiB) Yowza! 32? Those *are* small! 500 isn''t much better for tap0, either.> > > - Check sysctl net.core.netdev_max_backlog setting.. it should be at > > > least 1000, possibly even more.. this applies to dom0 and linux domU. > > > > Where is this set, and what do I have to restart to make it > > effective? /etc/sysctl.conf? > > Yep, modify /etc/sysctl.conf and run "sysctl -p /etc/sysctl.conf". > > > In general, are there any downsides in changing these values? > > http://kb.pert.geant2.net/PERTKB/InterfaceQueueLengthInteresting link - thanx. Ok - setting ''ifconfig eth0 txqueuelen 2500'' (peth0 on fc8) and net.core.netdev_max_backlog = 2500 on both machines, iperf gets 65.9 Mbps. Worse. Probably only useful for Gbps links. Removing changes, as in all cases above.> There''s something about these settings > > btw. what was the CPU usage for dom0 and for domU when you did these iperf > tests?About 75%. I''ve noticed, at least on my SuSE box, that multimedia playback suffers over 50%. (wine) On my Windows guest, setting tap0''s txqueuelen to 1000 had no effect (it probably wouldn''t since it''s receiving); setting window size to 256k hung my guest; after rebooting the guest, it had no effect on 2nd try; and changing sysctl had no effect. Cpu % was about 80-85% w/o any changes (negligible on fc8) averaged over 2 vcpus. With any of the changes above, cpu % went down to 65-75% - the only change noticed. Have fun digesting this :-) _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
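Since the vifX.Y queues above default to 32, raising them all in one go could be scripted roughly as follows. A sketch only: it assumes the ifconfig output format shown above, and 1000 is an arbitrary example value.

# bump txqueuelen on every vif device in dom0 (run as root)
for dev in $(ifconfig -a | awk '/^vif/ {print $1}'); do
    ifconfig "$dev" txqueuelen 1000    # match the physical NIC default instead of 32
done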
jim burns
2008-Mar-02 17:04 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sunday 02 March 2008 11:52:07 am you wrote:> My guess is my cpu(s) don''t have enough raw power to saturate the nic, but > let''s see what happens after your suggestions are implemented.To test this out, I reversed the flow of data - server on SuSE, client on fc8, and I got 93.1 Mbps. The SuSE processor is just a Pentium 4. My Core Duo on fc8 is definitely faster. Compiles are about 4x faster. (None of the changes we discussed were used in this test.) _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
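For anyone reproducing the two directions, the runs map onto plain iperf invocations like these, with per-domain CPU visible from dom0 in a second terminal. A sketch: the host names follow the shell prompts quoted in this thread (Insp6400 is the fc8 box, Dell4550 the SuSE box) and may not match actual DNS names.

# direction 1: SuSE box sends, fc8 dom0 receives
#   on fc8:   iperf -s
#   on SuSE:  iperf -c insp6400 -t 60
# direction 2: fc8 dom0 sends, SuSE box receives
#   on SuSE:  iperf -s
#   on fc8:   iperf -c dell4550 -t 60
# meanwhile, on dom0, watch per-domain CPU usage with a 1-second refresh:
xentop -d 1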
Pasi Kärkkäinen
2008-Mar-02 19:11 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sun, Mar 02, 2008 at 11:52:07AM -0500, jim burns wrote:> On Sunday 02 March 2008 04:51:47 am Pasi Kärkkäinen wrote: > > Yep. I asked because of the "bad" 80 Mbit/sec result on a 100 Mbit network.. > > My guess is my cpu(s) don''t have enough raw power to saturate the nic, but > let''s see what happens after your suggestions are implemented. >I don''t think that''s it.. saturating a 100 Mbit NIC/link with a single thread was/is possible even with 10 year old CPUs..> > That''s because with some version of xen and/or drivers (I''m not sure > > actually) it was a known fact that performance got bad when you had hw > > checksum calculations turned on.. > > > > So just to see if that''s the case here.. I guess this was mostly for domU.. > > Ahh, because there''s no hardware - makes sense. > > Alright - let''s try one change/set of related changes at a time to isolate > their effect. >Yep.> > > rx-checksumming: off > > > tx-checksumming: off > > > scatter-gather: off > > > tcp segmentation offload: off > > > udp fragmentation offload: off > > > generic segmentation offload: off > > > > Maybe try turning on offloading/checksumming settings here? > > Ok - before any changes, ''iperf -c insp6400 -t 60'' gives 78.7 Mbps. > > On SuSE: > [830] > sudo ethtool -K eth0 tx on > Cannot set device tx csum settings: Operation not supported > [2] 23551 exit 85 sudo ethtool -K eth0 tx on > jimb@Dell4550 03/02/08 10:36AM:~ > [831] > sudo ethtool -K eth0 tso on > Cannot set device tcp segmentation offload settings: Operation not supported > [2] 23552 exit 88 sudo ethtool -K eth0 tso on > > on fc8: > [742] > sudo ethtool -K peth0 tx on > Password: > Cannot set device tx csum settings: Operation not supported > zsh: exit 85 sudo ethtool -K peth0 tx on > jimb@Insp6400 03/02/08 10:38AM:~ > [743] > sudo ethtool -K peth0 tso on > Cannot set device tcp segmentation offload settings: Operation not supported > zsh: exit 88 sudo ethtool -K peth0 tso on > > Ahem - moving on! > > > I usually use at least 256k window sizes :) > > Adding ''-w 262144'' on both server and client side, iperf gets 72.1 > Mbps. Worse. >OK.> > Did you try with multiple threads at the same time? Did it have any effect? > > Adding ''-P 4'' to client, iperf gets an aggregate rate of 74.5 Mbps. Worse. >OK.> > "ifconfig eth0 txqueuelen <value>" on linux.. I don''t know how to do that > > in windows. > > > > But it''s important to do that on dom0 for vifX.Y devices.. those are the > > dom0 sides of the virtual machine virtual NICs. > > [740] > ifconfig > [...] 
> peth0 Link encap:Ethernet HWaddr 00:15:C5:04:7D:4F > inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1 > RX packets:11545320 errors:0 dropped:237 overruns:0 frame:0 > TX packets:13476839 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:1000 > RX bytes:421343991 (401.8 MiB) TX bytes:4224204231 (3.9 GiB) > Interrupt:22 > > tap0 Link encap:Ethernet HWaddr 0E:92:BB:CA:D8:DA > inet6 addr: fe80::c92:bbff:feca:d8da/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:0 errors:0 dropped:0 overruns:0 frame:0 > TX packets:14090 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:500 > RX bytes:0 (0.0 b) TX bytes:7860525 (7.4 MiB) > > vif4.0 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF > inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:2595441 errors:0 dropped:0 overruns:0 frame:0 > TX packets:923150 errors:0 dropped:795 overruns:0 carrier:0 > collisions:0 txqueuelen:32 > RX bytes:3366016758 (3.1 GiB) TX bytes:166299319 (158.5 MiB) > > vif16.0 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF > inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:9848 errors:0 dropped:0 overruns:0 frame:0 > TX packets:20828 errors:0 dropped:4698 overruns:0 carrier:0 > collisions:0 txqueuelen:32 > RX bytes:834219 (814.6 KiB) TX bytes:16017488 (15.2 MiB) > > Yowza! 32? Those *are* small! > 500 isn''t much better for tap0, either. >Yep..> > > > - Check sysctl net.core.netdev_max_backlog setting.. it should be at > > > > least 1000, possibly even more.. this applies to dom0 and linux domU. > > > > > > Where is this set, and what do I have to restart to make it > > > effective? /etc/sysctl.conf? > > > > Yep, modify /etc/sysctl.conf and run "sysctl -p /etc/sysctl.conf". > > > > > In general, are there any downsides in changing these values? > > > > http://kb.pert.geant2.net/PERTKB/InterfaceQueueLength > > Interesting link - thanx. > > Ok - setting ''ifconfig eth0 txqueuelen 2500'' (peth0 on fc8) and > net.core.netdev_max_backlog = 2500 on both machines, iperf gets 65.9 Mbps. > Worse. Probably only useful for Gbps links. Removing changes, as in all cases > above. >Ok. Well, it was worth checking and testing :)> > There''s something about these settings > > > > btw. what was the CPU usage for dom0 and for domU when you did these iperf > > tests? > > About 75%. I''ve noticed, at least on my SuSE box, that multimedia playback > suffers over 50%. (wine) > > On my Windows guest, setting tap0''s txqueuelen to 1000 had no effect (it > probably wouldn''t since it''s receiving); setting window size to 256k hung my > guest; after rebooting the guest, it had no effect on 2nd try; and changing > sysctl had no effect. Cpu % was about 80-85% w/o any changes (negligible on > fc8) averaged over 2 vcpus. With any of the changes above, cpu % went down to > 65-75% - the only change noticed. > > Have fun digesting this :-) >Yeah it''s interesting.. I''ve measured 600+ Mbit/sec throughput from linux domU to external network.. on a 3 GHz P4. So I don''t think CPU is the problem here.. -- Pasi _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
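For the record, the window-size and multi-thread variants discussed above map onto standard iperf options; the exact invocations would be something like the following, with the values mirroring the ones tried in this thread.

# server side, 256 KB TCP window:
iperf -s -w 262144
# client side: 60 second run, same window, four parallel streams:
iperf -c insp6400 -t 60 -w 262144 -P 4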
Pasi Kärkkäinen
2008-Mar-02 19:33 UTC
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
On Sun, Mar 02, 2008 at 12:04:42PM -0500, jim burns wrote:> On Sunday 02 March 2008 11:52:07 am you wrote: > > My guess is my cpu(s) don''t have enough raw power to saturate the nic, but > > let''s see what happens after your suggestions are implemented. > > To test this out, I reversed the flow of data - server on SuSE, client on fc8, > and I got 93.1 Mbps. The SuSE processor is just a Pentium 4. My Core Duo on > fc8 is definitely faster. Compiles are about 4x faster. (None of the changes > we discussed were used in this test.) >Maybe the problem is the b44 NIC.. I wouldn''t be surprised. Then again your NIC shouldn''t have anything to do with dom0 <-> domU tests.. -- Pasi _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
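One way to rule the b44 in or out is to keep traffic off the physical NIC entirely and run iperf between dom0 and a domU across the bridge. A sketch: the domU address here is a made-up placeholder.

# inside the (Linux) domU:
iperf -s
# on dom0, pointing at the domU (placeholder address):
iperf -c 192.168.1.102 -t 60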
James Harper
2008-Mar-03 02:29 UTC
RE: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
> > > That''s because with some version of xen and/or drivers (I''m not sure > > actually) it was a known fact that performance got bad when you had hw > > checksum calculations turned on.. > > > > So just to see if that''s the case here.. I guess this was mostly for > domU.. > > Ahh, because there''s no hardware - makes sense.Actually, csum offload in theory should work just fine when there is no hardware. If all ''virtual'' parties attached to the bridge don''t care about checksums (because netback has told them that the data is validated) then nobody has to check and things can go faster. Otherwise, the sender has to calculate a correct checksum for outgoing packets, and the receiver also has to calculate the correct checksum for incoming packets and confirm that it is correct. James _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
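The state James describes shows up in the offload flags on each side of the bridge, so it can be inspected or forced with ethtool. A sketch: the device names are examples, and support varies by driver.

# on dom0, the backend half of one guest NIC:
ethtool -k vif1.0
# inside a Linux domU, the frontend half:
ethtool -k eth0
# force real software checksums at the sender if a receiver distrusts the data:
ethtool -K eth0 tx off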