Scott McKenzie
2008-Mar-18 08:44 UTC
[Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
The drivers have installed OK on my system (SBS 2003) but do not seem to be activated after booting with the /gplpv option. I still have a "QEMU HARDDISK" listed under disk drives in Device Manager.

With version 0.8.4 I was getting the two disk drives/file system corruption problem.

-Scott
James Harper
2008-Mar-18 10:18 UTC
RE: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
> The drivers have installed OK on my system (SBS 2003) but do not seem to
> be activated after booting with the /gplpv option. I still have a "QEMU
> HARDDISK" listed under disk drives in Device Manager.
>
> With version 0.8.4 I was getting the two disk drives/file system
> corruption problem.

Well... at least you aren't getting corruption anymore :) Something must be going wrong with xenhide attaching to the PCI bus.

I'm trying to sort out some major network performance issues for 0.8.6, and I'll get back to this sort of issue after that.

James
Pasi Kärkkäinen
2008-Mar-18 11:27 UTC
Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
On Tue, Mar 18, 2008 at 09:18:08PM +1100, James Harper wrote:
> > The drivers have installed OK on my system (SBS 2003) but do not seem to
> > be activated after booting with the /gplpv option. I still have a "QEMU
> > HARDDISK" listed under disk drives in Device Manager.
> >
> > With version 0.8.4 I was getting the two disk drives/file system
> > corruption problem.
>
> Well... at least you aren't getting corruption anymore :) Something must
> be going wrong with xenhide attaching to the PCI bus.
>
> I'm trying to sort out some major network performance issues for 0.8.6,
> and I'll get back to this sort of issue after that.

Any idea yet about the culprit for the network performance issues?

-- Pasi
James Harper
2008-Mar-18 11:35 UTC
RE: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
> On Tue, Mar 18, 2008 at 09:18:08PM +1100, James Harper wrote:
> > > The drivers have installed OK on my system (SBS 2003) but do not seem to
> > > be activated after booting with the /gplpv option. I still have a "QEMU
> > > HARDDISK" listed under disk drives in Device Manager.
> > >
> > > With version 0.8.4 I was getting the two disk drives/file system
> > > corruption problem.
> >
> > Well... at least you aren't getting corruption anymore :) Something must
> > be going wrong with xenhide attaching to the PCI bus.
> >
> > I'm trying to sort out some major network performance issues for 0.8.6,
> > and I'll get back to this sort of issue after that.
>
> Any idea yet about the culprit for the network performance issues?

Yes. Windows isn't flexible enough to like what Linux is giving it :)

I think high CPU when testing the network between DomU and Dom0 is a bit of a red herring. There is no external NIC involved, so performance is all about how fast the CPU can move the bits; all things being equal, CPU utilization should be high.

When testing between a DomU and an external host the CPU utilization should be more respectable, though...

James
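For anyone who wants to put numbers on this, a minimal way to measure DomU-to-Dom0 throughput and watch where the CPU goes could look like the following sketch. The 192.168.0.1 address is a placeholder, and it assumes iperf is installed in both domains (netperf works equally well):

    # in dom0: start a throughput listener
    iperf -s

    # in the domU: run a 30-second TCP test against dom0
    iperf -c 192.168.0.1 -t 30

    # meanwhile, from dom0, watch per-domain CPU usage
    xm top

With no external NIC in the path, both domains should sit near CPU saturation, which is what James describes; the interesting comparison is the same test run against an external host.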
Emre ERENOGLU
2008-Mar-18 12:43 UTC
Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
Hi James,

> > Any idea yet about the culprit for the network performance issues?
>
> Yes. Windows isn't flexible enough to like what Linux is giving it :)
>
> I think high CPU when testing the network between DomU and Dom0 is a bit of a
> red herring. There is no external NIC involved, so performance is all about
> how fast the CPU can move the bits; all things being equal, CPU utilization
> should be high.
>
> When testing between a DomU and an external host the CPU utilization should
> be more respectable, though...
>
> James

Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU power? I mean, somewhere in the design there's something wrong that forces us to make possibly too many context switches between DomU, the hypervisor and Dom0. ???

Emre
Tom Brown
2008-Mar-18 19:02 UTC
Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
> Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> power? I mean, somewhere in the design there's something wrong that forces
> us to make possibly too many context switches between DomU, the hypervisor
> and Dom0. ???
>
> Emre

What, something like the 1500-byte maximum transmission unit (MTU) from back in the days when 10 MILLION bits per second was so insanely fast we connected everything to the same cable!? (Remember 1200 baud modems?) Yes, there might be some "design" decisions that don't work all that well today.

AFAIK, Xen can't do oversize (jumbo) frames; that would be a big help for a lot of things (iSCSI, ATAoE, local network)... but even so, AFAIK it would only be a relatively small improvement (jumbo frames only going up to about 8k, AFAIK).

-Tom

----------------------------------------------------------------------
tbrown@BareMetal.com  | Courage is doing what you're afraid to do.
http://BareMetal.com/ | There can be no courage unless you're scared.
                      |                        - Eddie Rickenbacker
Pasi Kärkkäinen
2008-Mar-18 21:02 UTC
Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
On Tue, Mar 18, 2008 at 12:02:52PM -0700, Tom Brown wrote:
> > Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> > power? I mean, somewhere in the design there's something wrong that forces
> > us to make possibly too many context switches between DomU, the hypervisor
> > and Dom0. ???
> >
> > Emre
>
> What, something like the 1500-byte maximum transmission unit (MTU) from
> back in the days when 10 MILLION bits per second was so insanely fast we
> connected everything to the same cable!? (Remember 1200 baud modems?) Yes,
> there might be some "design" decisions that don't work all that well today.
>
> AFAIK, Xen can't do oversize (jumbo) frames; that would be a big help for
> a lot of things (iSCSI, ATAoE, local network)... but even so, AFAIK it
> would only be a relatively small improvement (jumbo frames only going up
> to about 8k, AFAIK).

AFAIK Xen itself supports jumbo frames as long as everything in both dom0 and domU is configured correctly. Do you have more information about the opposite?

"Standard" jumbo frames are 9000 bytes..

Something that might be interesting: http://www.vmware.com/pdf/hypervisor_performance.pdf

Especially the "Netperf" section..

"VMware ESX Server delivers near native performance for both one- and two-client tests. The Xen hypervisor, on the other hand, is extremely slow, performing at only 3.6 percent of the native performance."

"VMware ESX Server does very well, too: the throughput for two-client tests goes up 1.9-2.2 times compared to the one-client tests. Xen is almost CPU saturated for the one-client case, hence it does not get much scaling and even slows down for the send case."

"The Netperf results prove that by using its direct I/O architecture together with the paravirtualized vmxnet network driver approach, VMware ESX Server can successfully virtualize network I/O intensive datacenter applications such as Web servers, file servers, and mail servers. The very poor network performance makes the Xen hypervisor less suitable for any such applications."

It seems VMware used Xen 3.0.3 _without_ paravirtualized drivers (using a QEMU emulated NIC), so that explains the poor result for Xen..

Another test, this time with Xen Enterprise 3.2: http://www.vmware.com/pdf/Multi-NIC_Performance.pdf

"With one NIC configured, the two hypervisors were each within a fraction of one percent of native throughput for both cases. Virtualization overhead had no effect for this lightly-loaded configuration."

"With two NICs, ESX301 had essentially the same throughput as native, but XE320 was slower by 10% (send) and 12% (receive), showing the effect of CPU overhead."

"With three NICs, ESX301 is close to its limit for a uniprocessor virtual machine, with a degradation compared to native of 4% for send and 3% for receive. XE320 is able to achieve some additional throughput using three NICs instead of two, but the performance degradation compared to native is substantial: 30% for send, 34% for receive."

So using paravirtualized network drivers with Xen should make a huge difference, but there still seems to be something to optimize.. to catch up with VMware ESX.

And some more benchmark results by XenSource: http://www.citrixxenserver.com/Documents/hypervisor_performance_comparison_1_0_5_with_esx-data.pdf

Something I noticed about the benchmark configuration:

"XenEnterprise 3.2 - Windows: Virtual Network adapters: XenSource Xen Tools Ethernet Adapter RTL8139 Family PCI Fast Ethernet NIC, Receive Buffer Size=64KB"

Receive buffer size=64KB.. is that something that needs to be tweaked in the drivers for better performance? Or is that just some benchmarking-tool-related setting..

-- Pasi
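For what it's worth, here is a rough sketch of what "configured correctly" could mean for jumbo frames with the default bridged networking. The interface and bridge names (peth0, xenbr0, vif1.0, eth0) are assumptions that depend on how the network-bridge script set things up, and the physical switch has to carry the larger MTU as well:

    # dom0: raise the MTU on the physical NIC, the bridge, and the guest's vif
    ip link set dev peth0  mtu 9000
    ip link set dev xenbr0 mtu 9000
    ip link set dev vif1.0 mtu 9000

    # domU: raise the MTU on the guest-side interface
    ip link set dev eth0 mtu 9000

    # verify end to end with a non-fragmentable large ping
    # (8972 = 9000 bytes minus 20 IP + 8 ICMP header bytes)
    ping -M do -s 8972 <other-host>

If any hop in the path is left at 1500, large frames simply get dropped, which can look like a driver problem rather than a configuration one.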
Pasi Kärkkäinen
2008-Mar-18 21:17 UTC
Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
On Tue, Mar 18, 2008 at 11:02:56PM +0200, Pasi Kärkkäinen wrote:
> On Tue, Mar 18, 2008 at 12:02:52PM -0700, Tom Brown wrote:
> > > Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> > > power? I mean, somewhere in the design there's something wrong that forces
> > > us to make possibly too many context switches between DomU, the hypervisor
> > > and Dom0. ???
> > >
> > > Emre
> >
> > What, something like the 1500-byte maximum transmission unit (MTU) from
> > back in the days when 10 MILLION bits per second was so insanely fast we
> > connected everything to the same cable!? (Remember 1200 baud modems?) Yes,
> > there might be some "design" decisions that don't work all that well today.
> >
> > AFAIK, Xen can't do oversize (jumbo) frames; that would be a big help for
> > a lot of things (iSCSI, ATAoE, local network)... but even so, AFAIK it
> > would only be a relatively small improvement (jumbo frames only going up
> > to about 8k, AFAIK).
>
> AFAIK Xen itself supports jumbo frames as long as everything in both dom0
> and domU is configured correctly. Do you have more information about the
> opposite?
>
> "Standard" jumbo frames are 9000 bytes..
>
> Something that might be interesting: http://www.vmware.com/pdf/hypervisor_performance.pdf
>
> Especially the "Netperf" section..
>
> "VMware ESX Server delivers near native performance for both one- and
> two-client tests. The Xen hypervisor, on the other hand, is extremely slow,
> performing at only 3.6 percent of the native performance."
>
> "VMware ESX Server does very well, too: the throughput for two-client tests
> goes up 1.9-2.2 times compared to the one-client tests. Xen is almost CPU
> saturated for the one-client case, hence it does not get much scaling and
> even slows down for the send case."
>
> "The Netperf results prove that by using its direct I/O architecture
> together with the paravirtualized vmxnet network driver approach, VMware ESX
> Server can successfully virtualize network I/O intensive datacenter
> applications such as Web servers, file servers, and mail servers. The very
> poor network performance makes the Xen hypervisor less suitable for any such
> applications."
>
> It seems VMware used Xen 3.0.3 _without_ paravirtualized drivers (using a
> QEMU emulated NIC), so that explains the poor result for Xen..
>
> Another test, this time with Xen Enterprise 3.2: http://www.vmware.com/pdf/Multi-NIC_Performance.pdf
>
> "With one NIC configured, the two hypervisors were each within a fraction of
> one percent of native throughput for both cases. Virtualization overhead had
> no effect for this lightly-loaded configuration."
>
> "With two NICs, ESX301 had essentially the same throughput as native, but
> XE320 was slower by 10% (send) and 12% (receive), showing the effect of CPU
> overhead."
>
> "With three NICs, ESX301 is close to its limit for a uniprocessor virtual
> machine, with a degradation compared to native of 4% for send and 3% for
> receive. XE320 is able to achieve some additional throughput using three
> NICs instead of two, but the performance degradation compared to native is
> substantial: 30% for send, 34% for receive."
>
> So using paravirtualized network drivers with Xen should make a huge
> difference, but there still seems to be something to optimize.. to catch up
> with VMware ESX.

Replying to myself..

http://xen.org/files/xensummit_4/NetworkIO_Santos.pdf
http://xen.org/files/xensummit_fall07/16_JoseRenatoSantos.pdf

Papers from last fall about Xen network performance (with analysis and benchmarks) and optimization suggestions.. Worth reading.

So I guess the summary would be that with PV network drivers you should be able to get near-native performance, at least for single-CPU/single-NIC guests.. this is already the case with the XenSource Windows PV network drivers. In the future, with netchannel2, performance should scale much higher (10 gigabit).

So now it's only about figuring out how to make the GPL PV Windows drivers perform as well as the XenSource drivers :)

-- Pasi

> And some more benchmark results by XenSource: http://www.citrixxenserver.com/Documents/hypervisor_performance_comparison_1_0_5_with_esx-data.pdf
>
> Something I noticed about the benchmark configuration:
>
> "XenEnterprise 3.2 - Windows: Virtual Network adapters: XenSource Xen Tools
> Ethernet Adapter RTL8139 Family PCI Fast Ethernet NIC, Receive Buffer Size=64KB"
>
> Receive buffer size=64KB.. is that something that needs to be tweaked in the
> drivers for better performance? Or is that just some benchmarking-tool-related
> setting..
>
> -- Pasi
Scott McKenzie
2008-Mar-19 05:44 UTC
Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows
On 18/03/08 21:18, James Harper wrote:
> > The drivers have installed OK on my system (SBS 2003) but do not seem to
> > be activated after booting with the /gplpv option. I still have a "QEMU
> > HARDDISK" listed under disk drives in Device Manager.
> >
> > With version 0.8.4 I was getting the two disk drives/file system
> > corruption problem.
>
> Well... at least you aren't getting corruption anymore :) Something must
> be going wrong with xenhide attaching to the PCI bus.

Yeah, that's a good thing.

> I'm trying to sort out some major network performance issues for 0.8.6,
> and I'll get back to this sort of issue after that.

Please let me know if you need any more info about my environment that might assist in tracking down the problem.

-Scott
Tom Brown
2008-Mar-20 17:00 UTC
[Xen-users] vanilla linux, jumbo frames
On Tue, 18 Mar 2008, Tom Brown wrote:
> > Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> > power? I mean, somewhere in the design there's something wrong that forces
> > us to make possibly too many context switches between DomU, the hypervisor
> > and Dom0. ???
> >
> > Emre
>
> What, something like the 1500-byte maximum transmission unit (MTU) from back
> in the days when 10 MILLION bits per second was so insanely fast we connected
> everything to the same cable!? (Remember 1200 baud modems?) Yes, there might
> be some "design" decisions that don't work all that well today.
>
> AFAIK, Xen can't do oversize (jumbo) frames; that would be a big help for a
> lot of things (iSCSI, ATAoE, local network)... but even so, AFAIK it would
> only be a relatively small improvement (jumbo frames only going up to about
> 8k, AFAIK).
>
> -Tom

My bad. As Pasi pointed out, it turns out that Xen has supported jumbo frames since at least 3.0.4... of course, the AoE initiator support that actually uses it seems not to be available until kernel 2.6.19... which is too current for CentOS 5.1.

So now I'm trying to boot 2.6.24.3 as a 32-bit PV guest on a 64-bit hypervisor, and it's dying at:

    Checking if this processor honours the WP bit even in supervisor mode... Ok.
    installing Xen timer for CPU 0
    ------------[ cut here ]------------
    kernel BUG at arch/x86/xen/time.c:122!
    invalid opcode: 0000 [#1] SMP

time.c:122 is the BUG() line in the snippet below:

    static void setup_runstate_info(int cpu)
    {
            struct vcpu_register_runstate_memory_area area;

            area.addr.v = &per_cpu(runstate, cpu);

            if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                                   cpu, &area))
                    BUG();
    }

Is 32-bit on a 64-bit hypervisor supposed to work for vanilla Linux?

-Tom
FYI, I'm using the latest AoE on 2.6.21 RH kernels.. Jumbos don't work for me (!) (and I spent many days trying...)

----- Original Message -----
From: "Tom Brown" <xensource.com@vmail.baremetal.com>
To: "xen-users" <xen-users@lists.xensource.com>
Sent: Thursday, March 20, 2008 5:00:07 PM GMT +00:00 GMT Britain, Ireland, Portugal
Subject: [Xen-users] vanilla linux, jumbo frames

On Tue, 18 Mar 2008, Tom Brown wrote:
> > Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> > power? I mean, somewhere in the design there's something wrong that forces
> > us to make possibly too many context switches between DomU, the hypervisor
> > and Dom0. ???
> >
> > Emre
>
> What, something like the 1500-byte maximum transmission unit (MTU) from back
> in the days when 10 MILLION bits per second was so insanely fast we connected
> everything to the same cable!? (Remember 1200 baud modems?) Yes, there might
> be some "design" decisions that don't work all that well today.
>
> AFAIK, Xen can't do oversize (jumbo) frames; that would be a big help for a
> lot of things (iSCSI, ATAoE, local network)... but even so, AFAIK it would
> only be a relatively small improvement (jumbo frames only going up to about
> 8k, AFAIK).
>
> -Tom

My bad. As Pasi pointed out, it turns out that Xen has supported jumbo frames since at least 3.0.4... of course, the AoE initiator support that actually uses it seems not to be available until kernel 2.6.19... which is too current for CentOS 5.1.

So now I'm trying to boot 2.6.24.3 as a 32-bit PV guest on a 64-bit hypervisor, and it's dying at:

    Checking if this processor honours the WP bit even in supervisor mode... Ok.
    installing Xen timer for CPU 0
    ------------[ cut here ]------------
    kernel BUG at arch/x86/xen/time.c:122!
    invalid opcode: 0000 [#1] SMP

time.c:122 is the BUG() line in the snippet below:

    static void setup_runstate_info(int cpu)
    {
            struct vcpu_register_runstate_memory_area area;

            area.addr.v = &per_cpu(runstate, cpu);

            if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                                   cpu, &area))
                    BUG();
    }

Is 32-bit on a 64-bit hypervisor supposed to work for vanilla Linux?

-Tom
On Thu, Mar 20, 2008 at 10:00:07AM -0700, Tom Brown wrote:
> My bad. As Pasi pointed out, it turns out that Xen has supported jumbo
> frames since at least 3.0.4... of course, the AoE initiator support that
> actually uses it seems not to be available until kernel 2.6.19... which is
> too current for CentOS 5.1.
>
> So now I'm trying to boot 2.6.24.3 as a 32-bit PV guest on a 64-bit
> hypervisor, and it's dying at:
>
>     Checking if this processor honours the WP bit even in supervisor mode... Ok.
>     installing Xen timer for CPU 0
>     ------------[ cut here ]------------
>     kernel BUG at arch/x86/xen/time.c:122!
>     invalid opcode: 0000 [#1] SMP
>
> time.c:122 is the BUG() line in the snippet below:
>
>     static void setup_runstate_info(int cpu)
>     {
>             struct vcpu_register_runstate_memory_area area;
>
>             area.addr.v = &per_cpu(runstate, cpu);
>
>             if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
>                                    cpu, &area))
>                     BUG();
>     }
>
> Is 32-bit on a 64-bit hypervisor supposed to work for vanilla Linux?

Xen (3.1) in CentOS 5.1 doesn't support 32-on-64. RHEL 5.2 / CentOS 5.2 will have 32-on-64 as a technology preview and it should work..

-- Pasi
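If it helps anyone checking their own box, the hypervisor advertises which guest ABIs it supports; one way to look (assuming the xm toolstack in dom0) is roughly:

    # a 64-bit hypervisor that can run 32-bit PV guests should list
    # xen-3.0-x86_32p alongside xen-3.0-x86_64 in its capabilities
    xm info | grep xen_caps

Without the x86_32p entry, a 32-bit PV kernel like the 2.6.24.3 above will not get far.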