VT by itself seems fine, but once a VT domain is running a workload that
is network intensive combined with a disk/cpu intensive workload, things
get incredibly slow.

Operations that take less than a second with either workload running
alone can now take many seconds, sometimes the better part of a minute!

Is this some limitation of the qemu device model?

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
Rik van Riel wrote:
> VT by itself seems fine, but once a VT domain is running a workload that
> is network intensive combined with a disk/cpu intensive workload, things
> get incredibly slow.
>
> Operations that take less than a second with either workload running
> alone can now take many seconds, sometimes the better part of a minute!
>
> Is this some limitation of the qemu device model?

Looking at it a bit more closely, it appears that postgresql
doing disk IO from inside a fully virtualized domain totally
kills the CPU.

It gets so bad that a simple "dmesg" takes 10-20 seconds to start,
and after that it spews data maybe 7 or 8 lines every other second.
Actually slower than serial console...

This is totally unusable :(
On 3 Jul 2006, at 09:48, Rik van Riel wrote:
> Looking at it a bit more closely, it appears that postgresql
> doing disk IO from inside a fully virtualized domain totally
> kills the CPU.
>
> It gets so bad that a simple "dmesg" takes 10-20 seconds to start,
> and after that it spews data maybe 7 or 8 lines every other second.
> Actually slower than serial console...
>
> This is totally unusable :(

Might you be emulating PIO? That would certainly suck. The device model
is supposed to support (virtual) DMA though.

 -- Keir
On Mon, Jul 03, 2006 at 09:58:01AM +0100, Keir Fraser wrote:
> On 3 Jul 2006, at 09:48, Rik van Riel wrote:
>
> > Looking at it a bit more closely, it appears that postgresql
> > doing disk IO from inside a fully virtualized domain totally
> > kills the CPU.
> >
> > It gets so bad that a simple "dmesg" takes 10-20 seconds to start,
> > and after that it spews data maybe 7 or 8 lines every other second.
> > Actually slower than serial console...
> >
> > This is totally unusable :(
>
> Might you be emulating PIO? That would certainly suck. The device model
> is supposed to support (virtual) DMA though.

That's the first thing I thought about, but apparently the kernel runs
the IDE device in DMA mode, if one believes the hdparm output in that
guest.

Daniel

-- 
Daniel Veillard      | Red Hat  http://redhat.com/
veillard@redhat.com  | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine  http://rpmfind.net/
Daniel Veillard wrote:
> On Mon, Jul 03, 2006 at 09:58:01AM +0100, Keir Fraser wrote:
>> On 3 Jul 2006, at 09:48, Rik van Riel wrote:
>>
>>> Looking at it a bit more closely, it appears that postgresql
>>> doing disk IO from inside a fully virtualized domain totally
>>> kills the CPU.
>>>
>>> It gets so bad that a simple "dmesg" takes 10-20 seconds to start,
>>> and after that it spews data maybe 7 or 8 lines every other second.
>>> Actually slower than serial console...
>>>
>>> This is totally unusable :(
>>
>> Might you be emulating PIO? That would certainly suck. The device model
>> is supposed to support (virtual) DMA though.
>
> That's the first thing I thought about, but apparently the kernel runs
> the IDE device in DMA mode, if one believes the hdparm output in that
> guest.

The information is conflicting...

While hdparm suggests DMA, the kernel profile has a suspicious
amount of IDE in it. Not sure what's going on...

# hdparm /dev/hda

/dev/hda:
 multcount    = 16 (on)
 IO_support   =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 65535/16/63, sectors = 85899345920, start = 0

# readprofile | sort -n | tail -20
    148632 copy_user_generic        498.7651
    163787 copy_page                731.1920
    178136 __might_sleep           1017.9200
    178409 ide_intr                 141.7069
    277185 ide_outsl              39597.8571
    324592 ide_execute_command      772.8381
    349538 sys_select               300.0326
    440036 unmap_vmas               264.6037
    478669 do_page_fault            303.7240
    585427 handle_IRQ_event        6887.3765
   1087766 ide_inw               120862.8889
   1115456 ide_do_request           950.1329
   2169643 do_wp_page              1662.5617
   2945913 ide_outbsync          736478.2500
   3888598 i8042_interrupt         6480.9967
   5892214 do_no_page              2986.4237
  11803041 thread_return          20890.3381
  13074317 __do_softirq           80705.6605
  81661803 poll_idle             983877.1446
 130083897 total                     52.4560
> -----Original Message-----
> From: xen-devel-bounces@lists.xensource.com
> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Rik van Riel
> Sent: 03 July 2006 15:15
> To: veillard@redhat.com
> Cc: xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] VT is comically slow
>
> Daniel Veillard wrote:
> > On Mon, Jul 03, 2006 at 09:58:01AM +0100, Keir Fraser wrote:
> >> On 3 Jul 2006, at 09:48, Rik van Riel wrote:
> >>
> >>> Looking at it a bit more closely, it appears that postgresql
> >>> doing disk IO from inside a fully virtualized domain totally
> >>> kills the CPU.
> >>>
> >>> It gets so bad that a simple "dmesg" takes 10-20 seconds to start,
> >>> and after that it spews data maybe 7 or 8 lines every other second.
> >>> Actually slower than serial console...
> >>>
> >>> This is totally unusable :(
> >> Might you be emulating PIO? That would certainly suck. The device model
> >> is supposed to support (virtual) DMA though.
> >
> > That's the first thing I thought about, but apparently the kernel runs
> > the IDE device in DMA mode, if one believes the hdparm output in that
> > guest.
>
> The information is conflicting...
>
> While hdparm suggests DMA, the kernel profile has a suspicious
> amount of IDE in it. Not sure what's going on...
>
> # hdparm /dev/hda
>
> /dev/hda:
>  multcount    = 16 (on)
>  IO_support   =  0 (default 16-bit)
>  unmaskirq    =  0 (off)
>  using_dma    =  1 (on)
>  keepsettings =  0 (off)
>  readonly     =  0 (off)
>  readahead    = 256 (on)
>  geometry     = 65535/16/63, sectors = 85899345920, start = 0
>
> # readprofile | sort -n | tail -20
>     148632 copy_user_generic        498.7651
>     163787 copy_page                731.1920
>     178136 __might_sleep           1017.9200
>     178409 ide_intr                 141.7069
>     277185 ide_outsl              39597.8571

This ...

>     324592 ide_execute_command      772.8381
>     349538 sys_select               300.0326
>     440036 unmap_vmas               264.6037
>     478669 do_page_fault            303.7240
>     585427 handle_IRQ_event        6887.3765
>    1087766 ide_inw               120862.8889

And this...

>    1115456 ide_do_request           950.1329
>    2169643 do_wp_page              1662.5617
>    2945913 ide_outbsync          736478.2500
>    3888598 i8042_interrupt         6480.9967
>    5892214 do_no_page              2986.4237
>   11803041 thread_return          20890.3381
>   13074317 __do_softirq           80705.6605
>   81661803 poll_idle             983877.1446
>  130083897 total                     52.4560

Are indications of IDE being used in PIO mode. Why this is, don't ask
me...

Also, the read is 16-bit, making it take twice as long as a 32-bit read
would. That's a "bug" in qemu, because it doesn't fall back to two 16-bit
operations on reads, which it does for writes. [See default_ioport_readl
vs default_ioport_writel - the latter calls the writew twice, but the
former isn't doing the corresponding translation - don't know why this
is.] I hacked it a while back to do two 16-bit reads, and it seemed to
work just fine for my miniature IDE-test application, but I don't know if
there's something else somewhere that breaks if you try to do that...

-- 
Mats
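[Editorial illustration, not the qemu code Mats refers to: a minimal C
sketch of the fallback he describes, composing one 32-bit port read from
two 16-bit reads, mirroring what the 32-bit write path already does by
issuing two 16-bit writes. The handler type and the port-address stepping
are hypothetical assumptions, not qemu's actual ioport dispatch API.]

#include <stdint.h>

/* Hypothetical 16-bit port-read handler, standing in for whatever the
 * device model has registered for the port in question. */
typedef uint32_t (*ioport_read16_fn)(void *opaque, uint32_t addr);

/* Fall back to two 16-bit reads when no 32-bit read handler exists.
 * Low word first, then the word at addr + 2, matching the way a split
 * write would step the address. */
static uint32_t ioport_readl_fallback(ioport_read16_fn read16,
                                      void *opaque, uint32_t addr)
{
    uint32_t lo = read16(opaque, addr)     & 0xffff;
    uint32_t hi = read16(opaque, addr + 2) & 0xffff;
    return lo | (hi << 16);
}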
On 3 Jul 2006, at 09:28, Rik van Riel wrote:
> VT by itself seems fine, but once a VT domain is running a workload that
> is network intensive combined with a disk/cpu intensive workload, things
> get incredibly slow.
>
> Operations that take less than a second with either workload running
> alone can now take many seconds, sometimes the better part of a minute!

You might want to try removing the call to pmtimer_init() in
ioemu/hw/piix4acpi.c -- the pmtimer emulation is rather broken (burns 25%
of a 3GHz cpu). I've just done this in xen-unstable.

 -- Keir
Petersson, Mats wrote:
> Are indications of IDE being used in PIO mode. Why this is, don't ask
> me...

A friend is trying out OpenBSD inside Xen, which shows more clearly
that HVM guests are indeed using PIO :(

pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA,
channel 0 wired to compatibility, channel 1 wired to compatibility
wd0 at pciide0 channel 0 drive 0: <QEMU HARDDISK>
wd0: 16-sector PIO, LBA, 20480MB, 41943040 sectors
wd0(pciide0:0:0): using PIO mode 2
pciide0: channel 1 disabled (no drives)

That explains a lot...
On 3 Jul 2006, at 20:16, Rik van Riel wrote:
>> Are indications of IDE being used in PIO mode. Why this is, don't ask
>> me...
>
> A friend is trying out OpenBSD inside Xen, which shows more clearly
> that HVM guests are indeed using PIO :(
>
> pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA,
> channel 0 wired to compatibility, channel 1 wired to compatibility
> wd0 at pciide0 channel 0 drive 0: <QEMU HARDDISK>
> wd0: 16-sector PIO, LBA, 20480MB, 41943040 sectors
> wd0(pciide0:0:0): using PIO mode 2
> pciide0: channel 1 disabled (no drives)

You might need to investigate ioemu/hw/ide.c to find out why Linux/*BSD
are deciding to use PIO for data transfers. Intel put in considerable
effort a while back to ensure that DMA was used. Maybe that's got broken.

 -- Keir
Rik van Riel wrote:
> VT by itself seems fine, but once a VT domain is running a workload that
> is network intensive combined with a disk/cpu intensive workload, things
> get incredibly slow.
>
> Operations that take less than a second with either workload running
> alone can now take many seconds, sometimes the better part of a minute!
>
> Is this some limitation of the qemu device model?

We (Virtual Iron) are in the process of developing accelerated drivers
for HVM guests. Our goal for this effort is to get as close to native
performance as possible and to make paravirtualization of guests
unnecessary. The drivers currently support most flavors of RHEL, SLES and
Windows. The early performance numbers are encouraging. Some numbers are
many times faster than QEMU emulation and are close to native performance
numbers (and we are just beginning to tune the performance).

Just to give people a flavor of the performance that we are getting, here
are some preliminary results on Intel Woodcrest (51xx series), with a
Gigabit network, SAN storage, and all VMs configured with 1 CPU. These
numbers are very early; the disk numbers are very good and we are still
tuning the network numbers.

Bonnie-SAN - bigger is better    RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
Write, KB/sec                    52,106              49,500
Read, KB/sec                     59,392              57,186

netperf - bigger is better       RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
tcp req/resp (t/sec)             6,831               5,648

SPECjbb2000 - bigger is better   RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
JRockit JVM thruput              43,061              40,364

This code is modeled on Xen backend/frontend architecture concepts and
will be GPLed.

-Alex V.

Alex Vasilevsky
Chief Technology Officer, Founder
Virtual Iron Software, Inc
On Thu, 06 Jul 2006 11:16:18 -0800, alex wrote:
> We (Virtual Iron) are in the process of developing accelerated drivers
> for HVM guests. Our goal for this effort is to get as close to native
> performance as possible and to make paravirtualization of guests
> unnecessary. The drivers currently support most flavors of RHEL, SLES
> and Windows. The early performance numbers are encouraging. Some numbers
> are many times faster than QEMU emulation and are close to native
> performance numbers (and we are just beginning to tune the performance).

I don't think paravirtual drivers are necessary for good performance.
There are a number of things about QEMU's device emulation that are less
than ideal.

Namely, QEMU's disk emulation is IDE w/ DMA. Apparently, DMA is busted
right now, but even if it worked, IDE only allows one outstanding request
at a time (which doesn't let the host scheduler do its thing properly).
Emulating either SCSI (which is in QEMU CVS) or SATA would allow multiple
outstanding requests, which would be a big benefit.

Also, and I suspect this has more to do with your performance numbers,
QEMU currently does disk IO via read()/write() syscalls on an fd that's
open()'d without O_DIRECT. This means everything's going through the page
cache.

I suspect that SCSI + linux-aio would result in close to native
performance. Since SCSI is already in QEMU CVS, it's not that far off.

I think that the same applies to network IO, but I'm not qualified to
comment on what things are important there.

Regards,

Anthony Liguori

> Just to give people a flavor of the performance that we are getting,
> here are some preliminary results on Intel Woodcrest (51xx series),
> with a Gigabit network, SAN storage, and all VMs configured with 1 CPU.
> These numbers are very early; the disk numbers are very good and we are
> still tuning the network numbers.
>
> Bonnie-SAN - bigger is better    RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
> Write, KB/sec                    52,106              49,500
> Read, KB/sec                     59,392              57,186
>
> netperf - bigger is better       RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
> tcp req/resp (t/sec)             6,831               5,648
>
> SPECjbb2000 - bigger is better   RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
> JRockit JVM thruput              43,061              40,364
>
> This code is modeled on Xen backend/frontend architecture concepts and
> will be GPLed.
>
> -Alex V.
>
> Alex Vasilevsky
> Chief Technology Officer, Founder
> Virtual Iron Software, Inc
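[Editorial illustration, not qemu code: a standalone sketch of the
combination Anthony suggests, opening the disk image with O_DIRECT so
reads bypass the page cache and using Linux native AIO (libaio) so the
emulator could keep multiple requests in flight. The image path is a
hypothetical placeholder; build with -laio. O_DIRECT requires
sector-aligned buffers, lengths and offsets.]

#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical disk image path, just for the example. */
    int fd = open("/var/images/hvm-disk.img", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT needs a sector-aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, 512, 4096)) { perror("posix_memalign"); return 1; }

    io_context_t ctx = 0;
    int ret = io_setup(8, &ctx);            /* allow up to 8 in-flight requests */
    if (ret < 0) { fprintf(stderr, "io_setup: %s\n", strerror(-ret)); return 1; }

    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, 4096, 0);   /* queue a 4 KiB read at offset 0 */

    if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

    /* A device model would go back to servicing the guest here and poll
     * for completions later; this example simply waits. */
    struct io_event ev;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { fprintf(stderr, "io_getevents failed\n"); return 1; }
    printf("read completed, result = %ld\n", (long)ev.res);

    io_destroy(ctx);
    close(fd);
    free(buf);
    return 0;
}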
Anthony Liguori wrote:
> ...
> > We (Virtual Iron) are in the process of developing accelerated drivers
> > for HVM guests. Our goal for this effort is to get as close to native
> > performance as possible and to make paravirtualization of guests
> > unnecessary.
> ...
> I don't think paravirtual drivers are necessary for good performance.
> There are a number of things about QEMU's device emulation that are less
> than ideal.

Before deciding to implement accelerated drivers for many different guest
OSes, no trivial undertaking, we did quite a lot of analysis of QEMU and
its capabilities. Our conclusion was that QEMU in the near future was not
going to be able to reach the performance goals that we had set for our
product. Instead of hacking on QEMU in the hope of getting better numbers
out of it, we decided to design and implement accelerated drivers, and
the performance numbers we are getting prove that was the right decision
to make. As I mentioned in my earlier post, these drivers will be freely
available under the GPL and everyone is welcome to use them.

> ...
> Also, and I suspect this has more to do with your performance numbers,
> QEMU currently does disk IO via read()/write() syscalls on an fd that's
> open()'d without O_DIRECT. This means everything's going through the
> page cache.

The QEMU code that we use doesn't go through the dom0 buffer cache; we
modified the code to use O_DIRECT. You can't use the buffer cache and
accelerated drivers (which go right to the disk) together; it can cause
disk corruption. The performance numbers we get from this version of QEMU
are still 4 to 6 times slower than native disk I/O.

> I suspect that SCSI + linux-aio would result in close to native
> performance. Since SCSI is already in QEMU CVS, it's not that far off.

You might be right; however, even with pipelining and async I/O, I don't
think it is going to get close to native I/O numbers. I guess we'll just
have to wait and see.

Best,

-Alex V.
Anthony Liguori wrote:
> I don't think paravirtual drivers are necessary for good performance.

> I think that the same applies to network IO but I'm not qualified to
> comment on what things are important there.

Especially if we emulate a network card that does checksumming "in
hardware" and knows how to do zero-copy sending...
> The QEMU code that we use doesn't go through the dom0 buffer cache; we
> modified the code to use O_DIRECT. You can't use the buffer cache and
> accelerated drivers (which go right to the disk) together; it can cause
> disk corruption. The performance numbers we get from this version of
> QEMU are still 4 to 6 times slower than native disk I/O.

I doubt O_DIRECT buys you much in the way of performance -- as you say,
it's just a correctness thing. Still, the qemu block code is all
completely synchronous -- the fact that you simply can't have more than
a single outstanding block request at a time is going to seriously harm
performance. As Anthony explained, some form of asynchronous IO in the
qemu drivers would clearly improve performance.

> You might be right; however, even with pipelining and async I/O, I
> don't think it is going to get close to native I/O numbers. I guess
> we'll just have to wait and see.

I'd expect that disk can be made to perform reasonably well with qemu,
using DMA emulation and async IO. The old vmware workstation paper on
device virtualization [1] does a pretty good job of demonstrating that
trap-and-emulate device access sucks, and would seem to imply that it's
pretty unlikely to be practical for high-rate networking.

a.

[1] http://www.usenix.org/event/usenix01/sugerman/sugerman.pdf
Andrew Warfield wrote:
> > The QEMU code that we use doesn't go through the dom0 buffer cache;
> > we modified the code to use O_DIRECT. You can't use the buffer cache
> > and accelerated drivers (which go right to the disk) together; it can
> > cause disk corruption. The performance numbers we get from this
> > version of QEMU are still 4 to 6 times slower than native disk I/O.
>
> I doubt O_DIRECT buys you much in the way of performance -- as you say,
> it's just a correctness thing. Still, the qemu block code is all
> completely synchronous -- the fact that you simply can't have more
> than a single outstanding block request at a time is going to
> seriously harm performance. As Anthony explained, some form of
> asynchronous IO in the qemu drivers would clearly improve performance.

That was exactly my point, that O_DIRECT doesn't improve performance.
Anthony had a point in his e-mail that buffered I/O could be one of the
reasons that QEMU's performance is slow.

> > You might be right; however, even with pipelining and async I/O, I
> > don't think it is going to get close to native I/O numbers. I guess
> > we'll just have to wait and see.
>
> I'd expect that disk can be made to perform reasonably well with qemu,
> using DMA emulation and async IO. The old vmware workstation paper on
> device virtualization does a pretty good job of demonstrating that
> trap-and-emulate device access sucks, and would seem to imply that
> it's pretty unlikely to be practical for high-rate networking.

I understand what you guys are proposing, and I look forward to seeing
your implementation and your performance numbers. In particular it would
be very interesting to see what kind of CPU overhead you'll get. With
regard to networking I agree with the VMware guys: it is not practical
to do trap & emulate to achieve high-rate networking throughput. For
example, with our accel drivers on certain network benchmarks we can
drive the network at almost wire speed from an HVM domain and consume
very few CPU cycles in doing so.

Cheers,

-Alex V.
On Thu, 06 Jul 2006 17:43:50 -0800, alex wrote:
> Anthony Liguori wrote:
>> ...
>> Also, and I suspect this has more to do with your performance numbers,
>> QEMU currently does disk IO via read()/write() syscalls on an fd
>> that's open()'d without O_DIRECT. This means everything's going
>> through the page cache.
>
> The QEMU code that we use doesn't go through the dom0 buffer cache; we
> modified the code to use O_DIRECT. You can't use the buffer cache and
> accelerated drivers (which go right to the disk) together; it can cause
> disk corruption. The performance numbers we get from this version of
> QEMU are still 4 to 6 times slower than native disk I/O.

Sorry, I should have been more clear. I presume that your drivers are a
lot like the normal paravirt drivers. This means that you're injecting
bio's into the host that point directly to the memory in the guest.

Just using O_DIRECT wouldn't be enough in QEMU. You would also have to
have functioning DMA (which appears broken in Xen). Proper async support
would help too.

Regards,

Anthony Liguori

>> I suspect that SCSI + linux-aio would result in close to native
>> performance. Since SCSI is already in QEMU CVS, it's not that far off.
>
> You might be right; however, even with pipelining and async I/O, I
> don't think it is going to get close to native I/O numbers. I guess
> we'll just have to wait and see.
>
> Best,
>
> -Alex V.
Hello,

> This code is modeled on Xen backend/frontend architecture concepts and
> will be GPLed.

A little question for clarification, if I may: these accelerated drivers,
as I understand, run in Dom0 (since the domU runs unmodified OS kernels).
I assume that you are talking about generic drivers (like a driver for
IDE, a driver for Net, etc.) which will work in conjunction with the real
drivers; am I right? Or are these hardware-specific drivers (like one
driver for an e1000 nic, a driver for a tg3 nic, a driver for a realtek
nic, etc.)?

Regards,
Rami Rosen
On Thu, 06 Jul 2006 18:35:42 -0800, alex@vasilevsky.name wrote:
> I understand what you guys are proposing, and I look forward to seeing
> your implementation and your performance numbers. In particular it
> would be very interesting to see what kind of CPU overhead you'll get.
> With regard to networking I agree with the VMware guys: it is not
> practical to do trap & emulate to achieve high-rate networking
> throughput. For example, with our accel drivers on certain network
> benchmarks we can drive the network at almost wire speed from an HVM
> domain and consume very few CPU cycles in doing so.

Just a minor point, I take it that you are talking about gigabit here.

-- 
Horms
H: http://www.vergenet.net/~horms/
W: http://www.valinux.co.jp/en/