Disk I/O in a VM is slow compared to native: I/O port accesses consume a
significant amount of time, which can be partly mitigated with buffered
I/O. The attached patch partly improves disk I/O performance by
buffering disk I/O requests.

Signed-off-by: Zhiteng Huang <zhiteng.huang@intel.com>
Signed-off-by: Weidong Han <weidong.han@intel.com>
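For context on how such buffering works: as with the buffered VGA path,
the vCPU's port write is recorded in a ring on a page shared with
qemu-dm and the vCPU resumes immediately, instead of blocking until the
device model has serviced the access. A minimal sketch of such a ring,
with illustrative names and layout rather than the patch's actual code:

    #include <stdint.h>

    #define BUF_IOREQ_SLOTS 80   /* roughly what fits in one shared page */

    /* One posted (buffered) request.  Only writes can be posted: a read
     * must stall the vCPU until the device model supplies the value. */
    struct buf_ioreq {
        uint8_t  type;      /* port I/O vs. MMIO */
        uint8_t  dir;       /* must be a write to be bufferable */
        uint16_t size;      /* 1, 2 or 4 bytes */
        uint32_t addr;      /* port number or MMIO address */
        uint32_t data;      /* value being written */
    };

    /* Page shared between Xen (producer) and qemu-dm (consumer). */
    struct buffered_iopage {
        volatile uint32_t read_pointer;    /* advanced by qemu-dm */
        volatile uint32_t write_pointer;   /* advanced by Xen */
        struct buf_ioreq  slots[BUF_IOREQ_SLOTS];
    };

    /* Producer side: returns 0 when the ring is full, in which case the
     * request must take the ordinary synchronous ioreq path instead. */
    static int buf_ioreq_post(struct buffered_iopage *pg,
                              const struct buf_ioreq *req)
    {
        if (pg->write_pointer - pg->read_pointer == BUF_IOREQ_SLOTS)
            return 0;
        pg->slots[pg->write_pointer % BUF_IOREQ_SLOTS] = *req;
        __sync_synchronize();   /* publish the slot before the pointer */
        pg->write_pointer++;
        return 1;
    }

qemu-dm drains the ring from its event loop; the win comes from not
having to wake and wait for the device model on every single write.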
Sorry, attached the patch :)

--Weidong

Han, Weidong wrote:
> Disk I/O in a VM is slow compared to native: I/O port accesses consume
> a significant amount of time, which can be partly mitigated with
> buffered I/O. The attached patch partly improves disk I/O performance
> by buffering disk I/O requests.
>
> Signed-off-by: Zhiteng Huang <zhiteng.huang@intel.com>
> Signed-off-by: Weidong Han <weidong.han@intel.com>
What sort of improvement does it achieve? It's a pretty hacky thing to
have to add... :-(

 -- Keir

On 14/5/07 10:36, "Han, Weidong" <weidong.han@intel.com> wrote:

> Disk I/O in a VM is slow compared to native: I/O port accesses consume
> a significant amount of time, which can be partly mitigated with
> buffered I/O. The attached patch partly improves disk I/O performance
> by buffering disk I/O requests.
>
> Signed-off-by: Zhiteng Huang <zhiteng.huang@intel.com>
> Signed-off-by: Weidong Han <weidong.han@intel.com>
Buffered disk I/O improves disk I/O port access performance. It's
similar to the buffered VGA mechanism, which is MMIO-based. We ran the
SysBench random file I/O test and measured about a 10% improvement.

--Weidong

Keir Fraser wrote:
> What sort of improvement does it achieve? It's a pretty hacky thing
> to have to add... :-(
>
> -- Keir
> Buffered disk I/O improves disk I/O port access performance. It's
> similar to the buffered VGA mechanism, which is MMIO-based. We ran the
> SysBench random file I/O test and measured about a 10% improvement.

How does it compare to just using the SCSI HBA support that got checked
in a few days ago (in the qemu-dm 0.9.0 upgrade)?

If we're going to add support for enabling buffering of ioport accesses
beyond what we currently special-case for the VGA, it should be via a
generic interface used by qemu to register sets of ports with Xen and
configure how they will be handled.

Ian
Ian Pratt wrote:
>> Buffered disk I/O improves disk I/O port access performance. It's
>> similar to the buffered VGA mechanism, which is MMIO-based. We ran
>> the SysBench random file I/O test and measured about a 10%
>> improvement.
>
> How does it compare to just using the SCSI HBA support that got
> checked in a few days ago (in the qemu-dm 0.9.0 upgrade)?

In our tests, SCSI HBA performance in qemu 0.9.0 is better than that of
our patch. But we found that overall I/O performance degraded
considerably after the upgrade to qemu 0.9.0; we suspect there may be
some issues in qemu 0.9.0.

> If we're going to add support for enabling buffering of ioport
> accesses beyond what we currently special-case for the VGA, it should
> be via a generic interface used by qemu to register sets of ports
> with Xen and configure how they will be handled.
>
> Ian

Yes, if there are many such buffering cases, a generic interface is the
right long-term solution.

-- Weidong
>> How does it compare to just using the SCSI HBA support that got
>> checked in a few days ago (in the qemu-dm 0.9.0 upgrade)?
>
> In our tests, SCSI HBA performance in qemu 0.9.0 is better than that
> of our patch.

Thanks for running the tests.

> But we found that overall I/O performance degraded considerably after
> the upgrade to qemu 0.9.0; we suspect there may be some issues in
> qemu 0.9.0.

Please can you explain in more detail?

>> If we're going to add support for enabling buffering of ioport
>> accesses beyond what we currently special-case for the VGA, it
>> should be via a generic interface used by qemu to register sets of
>> ports with Xen and configure how they will be handled.
>
> Yes, if there are many such buffering cases, a generic interface is
> the right long-term solution.

I'd like to see this generic mechanism introduced for more than just
whether writes are buffered or not -- it would be very useful to
register ranges of port or mmio space for handling in different
fashions, e.g.:

* read: forward to handler domain X channel Y
* read: read as zeros
* write: forward to handler domain X channel Y (and flush any buffered
  writes)
* write: buffer and forward to domain X channel Y
* write: ignore writes

These hooks would also be very useful for adding debugging/tracing. I
severely dislike our current approach of forwarding anything that
doesn't get picked up in Xen to a single qemu-dm rather than
registering explicit ranges.

Best,
Ian
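Ian's list maps naturally onto a small per-range policy table. A purely
hypothetical sketch of such an interface (none of these names exist in
Xen or qemu-dm; they are invented only to illustrate the shape of the
proposal):

    #include <stdint.h>

    typedef uint16_t domid_t;   /* as in Xen's public headers */

    enum io_read_mode {
        IO_READ_FORWARD,    /* forward to handler domain X channel Y */
        IO_READ_ZEROS,      /* complete immediately, returning zeros */
    };

    enum io_write_mode {
        IO_WRITE_FORWARD,   /* forward (flushing any buffered writes) */
        IO_WRITE_BUFFER,    /* buffer and forward asynchronously */
        IO_WRITE_IGNORE,    /* discard writes */
    };

    struct io_range_policy {
        uint64_t           start, end;  /* inclusive port/MMIO range */
        int                is_mmio;     /* 0 = ioport, 1 = MMIO */
        enum io_read_mode  read_mode;
        enum io_write_mode write_mode;
        domid_t            handler;     /* e.g. the domain running qemu-dm */
        uint32_t           evtchn;      /* channel requests are sent on */
    };

    /* qemu-dm would register each emulated device's ranges explicitly,
     * rather than Xen forwarding everything it cannot handle itself. */
    int xc_hvm_register_io_range(int xc_handle, domid_t guest,
                                 const struct io_range_policy *policy);

The same table would give the hypervisor a natural hook point for the
debugging/tracing Ian mentions.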
Ian Pratt wrote:
>>> How does it compare to just using the SCSI HBA support that got
>>> checked in a few days ago (in the qemu-dm 0.9.0 upgrade)?
>>
>> In our tests, SCSI HBA performance in qemu 0.9.0 is better than that
>> of our patch.
>
> Thanks for running the tests.
>
>> But we found that overall I/O performance degraded considerably
>> after the upgrade to qemu 0.9.0; we suspect there may be some issues
>> in qemu 0.9.0.
>
> Please can you explain in more detail?

We just found that performance dropped when we ran the same test cases;
at the moment we don't know why.

> I'd like to see this generic mechanism introduced for more than just
> whether writes are buffered or not -- it would be very useful to
> register ranges of port or mmio space for handling in different
> fashions [...]
>
> These hooks would also be very useful for adding debugging/tracing. I
> severely dislike our current approach of forwarding anything that
> doesn't get picked up in Xen to a single qemu-dm rather than
> registering explicit ranges.

Agreed. A generic mechanism should be introduced in the future, because
we have found that buffering I/O port or MMIO accesses is valuable.
However, I think our patch is still useful now; after all, it clearly
improves the performance of IDE-emulated disk I/O.

Best regards,
--Weidong
On 20/5/07 14:57, "Han, Weidong" <weidong.han@intel.com> wrote:

>> These hooks would also be very useful for adding debugging/tracing.
>> I severely dislike our current approach of forwarding anything that
>> doesn't get picked up in Xen to a single qemu-dm rather than
>> registering explicit ranges.
>
> Agreed. A generic mechanism should be introduced in the future,
> because we have found that buffering I/O port or MMIO accesses is
> valuable. However, I think our patch is still useful now; after all,
> it clearly improves the performance of IDE-emulated disk I/O.

It's too ugly for a 10% win. It's not like the VGA acceleration, which
gave a much bigger win and was also a less ugly change to the
hypervisor interfaces.

It would be far more interesting to find out where the (presumably
larger than 10%) performance loss in the move to qemu 0.9.0 comes from.

 -- Keir
Ian Pratt wrote:
>>> How does it compare to just using the SCSI HBA support that got
>>> checked in a few days ago (in the qemu-dm 0.9.0 upgrade)?
>>
>> In our tests, SCSI HBA performance in qemu 0.9.0 is better than that
>> of our patch.
>
> Thanks for running the tests.
>
>> But we found that overall I/O performance degraded considerably
>> after the upgrade to qemu 0.9.0; we suspect there may be some issues
>> in qemu 0.9.0.
>
> Please can you explain in more detail?

IDE emulation is largely bound by how many requests you can get out per
second. In 0.8.2, DMA completion happened immediately after the I/O
request finished. In 0.9.0, DMA completion is triggered by an AIO
completion event, which implies a trip through the main event loop.

By default, QEMU only allows glibc AIO to use a single thread, which
basically turns all AIO requests into synchronous requests. You'll get
some performance back by changing that to allow multiple threads.

I haven't looked at the QEMU 0.9.0 port just yet, but I suspect that's
the problem, since I've seen this behavior in normal QEMU.

Regards,

Anthony Liguori
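For reference, the knob Anthony is describing is glibc's GNU-specific
aio_init(), which must run before the first AIO request is submitted. A
minimal sketch, with arbitrary example values rather than anything QEMU
actually ships:

    #define _GNU_SOURCE
    #include <aio.h>
    #include <string.h>

    /* Raise glibc's POSIX AIO worker-thread limit so that outstanding
     * requests are serviced in parallel instead of being serialized on
     * one thread.  Must be called before any aio_read()/aio_write(). */
    static void init_aio_threads(void)
    {
        struct aioinit ai;

        memset(&ai, 0, sizeof(ai));
        ai.aio_threads = 16;   /* example: max worker threads */
        ai.aio_num     = 64;   /* example: expected concurrent requests */
        aio_init(&ai);
    }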