Hi,

We have an Intel VT-enabled system loaded with Xen 3.0.2 and a QEMU-based FC5 (Linux 2.6.16) guest operating system. We ran some performance tests on this guest OS; the results are:

Sequential Write - IOPs

   IO size        512B     4K    16K    32K
   FC5 Native      162    163    156    147
   FC5_DOM1       1305    742    285    155
   (FC5_DOM1: Linux 2.6.16, full virtualization, QEMU-based)

We disabled the write cache using the sdparm and hdparm utilities. But as the results show, sequential write IOPs under Linux full virtualization are better than native performance. I am wondering how this is possible. Is caching happening somewhere in the full-virtualization case?

If you have an idea about this behavior, please let me know.

Thanks,
Priya

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
P M, Priya (STSD) wrote:
> We have disabled the Write Cache using sdparm and hdparm utilities. But
> if you see the results in Linux Full Virtualization, the Sequential
> Write IOPs are better than Native performance. I am wondering how is
> this possible? Is there anywhere the caching is happening in the full
> virtualization case? If you have an idea about this behavior, please let
> me know.

QEMU does indeed use the buffer cache in Domain 0. In doing so, it also takes advantage of Domain 0 read-ahead/write-behind. The downside is that the disk write ordering guarantees expected by DomU filesystems are violated as well. If Xen or Domain 0 crashes, your DomU filesystems may be toast.

To fix this you would need to patch QEMU to use O_DIRECT when accessing the virtual disk backing store object (block device, file, etc.). We are currently testing just such a patch for the old QEMU device model. I haven't looked at the new device model to see if it already handles this.

Steve

--
Steve Ofsthun - Virtual Iron Software, Inc.
Hi,

On Fri, Jul 14, 2006 at 03:41:23PM -0400, Steve Ofsthun wrote:
> To fix this you would need to patch QEMU to use O_DIRECT when accessing
> the virtual disk backing store object (block device, file, etc). We are
> currently testing just such a patch for the old QEMU device model.

Thanks!

> I haven't looked at the new device model to see if it already handles this.

I looked last week and it still had the same problem.

Note that O_DIRECT doesn't solve all of the problems either. You still end up with the _entire_ device model being blocked while the disk IO is in progress, which leads to poor VT guest performance when trying to mix disk and CPU activity. But I'd rather have a performance problem than a data corrupter. :)

--Stephen