I was wondering if anyone's compiled a list of places to look to reduce
Disk IO Latency for Xen PV DomUs. I've gotten reasonably acceptable
performance from my setup (Dom0 as an iSCSI initiator, providing phy
volumes to DomUs), at about 45MB/sec writes and 80MB/sec reads (this is
to an IET target running in blockio mode).

As always, reducing latency for small disk operations would be nice, but
I'm not sweating it. I'm just wondering if anyone's experienced similar
behavior, and whether they've found ways to improve performance.

Cheers

cc

--
Chris Chen <muffaleta@gmail.com>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good." -- Seen on a wall
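For context, the setup described above usually amounts to dom0 logging into
the iSCSI target with open-iscsi and handing the resulting block device to
the guest as a phy: disk. A minimal sketch; the portal address, IQN, and
device path below are illustrative, not taken from the post:

    # on dom0: discover and log in to the IET target (open-iscsi)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2009-07.com.example:guest1-disk -p 192.168.1.10 --login

    # in the domU config: pass the device through as a paravirtual (xenblk) disk
    disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2009-07.com.example:guest1-disk-lun-0,xvda,w' ]

Using the /dev/disk/by-path name keeps the mapping stable across reboots; a
plain /dev/sdX name also works but can change as targets come and go.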
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-
> bounces@lists.xensource.com] On Behalf Of Christopher Chen
> Sent: Monday, July 20, 2009 8:26 PM
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] Best Practices for PV Disk IO?
>
> I was wondering if anyone's compiled a list of places to look to
> reduce Disk IO Latency for Xen PV DomUs. I've gotten reasonably
> acceptable performance from my setup (Dom0 as an iSCSI initiator,
> providing phy volumes to DomUs), at about 45MB/sec writes and
> 80MB/sec reads (this is to an IET target running in blockio mode).

For domU hosts, xenblk over phy: is the best I've found. I can get
166MB/s read performance from domU with O_DIRECT and 1024k blocks.

Smaller block sizes yield progressively lower throughput, presumably due
to read latency:

256k: 131MB/s
 64k:  71MB/s
 16k:  33MB/s
  4k:  10MB/s

Running the same tests on dom0 against the same block device yields only
slightly faster throughput.

If there's any additional magic to boost disk I/O under Xen, I'd like to
hear it too. I also pin my dom0 to an unused CPU so it is always
available. My shared block storage runs the AoE protocol over a pair of
1GbE links.

The good news is that there doesn't seem to be much I/O penalty imposed
by the hypervisor, so the domU hosts typically enjoy better disk I/O
than an inexpensive server with a pair of SATA disks, at far less cost
than the interconnects needed to couple a high-performance SAN to many
individual hosts. Overall, the performance seems like a win for Xen
virtualization.

Jeff
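Read numbers like the ones above can be approximated with dd and O_DIRECT;
a rough sketch, with the device name and sizes purely illustrative (the
reads are non-destructive, but run them against a quiet device to get
meaningful figures):

    # sequential reads of ~1GB with O_DIRECT at a few block sizes
    # (dd prints the throughput on completion)
    dd if=/dev/xvda of=/dev/null bs=1024k count=1024   iflag=direct
    dd if=/dev/xvda of=/dev/null bs=64k   count=16384  iflag=direct
    dd if=/dev/xvda of=/dev/null bs=4k    count=262144 iflag=direct

For the dom0 pinning, the xm-era tools let you keep a physical CPU reserved
for dom0 and steer guests elsewhere; the CPU numbers below are examples and
depend on the host topology:

    # pin dom0's vcpu 0 to physical CPU 0...
    xm vcpu-pin Domain-0 0 0
    # ...and keep guests off CPU 0 via their config files
    cpus = "1-3"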
On Mon, Jul 20, 2009 at 7:25 PM, Jeff Sturm <jeff.sturm@eprize.com> wrote:
>> -----Original Message-----
>> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-
>> bounces@lists.xensource.com] On Behalf Of Christopher Chen
>> Sent: Monday, July 20, 2009 8:26 PM
>> To: xen-users@lists.xensource.com
>> Subject: [Xen-users] Best Practices for PV Disk IO?
>>
>> I was wondering if anyone's compiled a list of places to look to
>> reduce Disk IO Latency for Xen PV DomUs. I've gotten reasonably
>> acceptable performance from my setup (Dom0 as an iSCSI initiator,
>> providing phy volumes to DomUs), at about 45MB/sec writes and
>> 80MB/sec reads (this is to an IET target running in blockio mode).
>
> For domU hosts, xenblk over phy: is the best I've found. I can get
> 166MB/s read performance from domU with O_DIRECT and 1024k blocks.
>
> Smaller block sizes yield progressively lower throughput, presumably due
> to read latency:
>
> 256k: 131MB/s
>  64k:  71MB/s
>  16k:  33MB/s
>   4k:  10MB/s
>
> Running the same tests on dom0 against the same block device yields only
> slightly faster throughput.
>
> If there's any additional magic to boost disk I/O under Xen, I'd like to
> hear it too. I also pin my dom0 to an unused CPU so it is always
> available. My shared block storage runs the AoE protocol over a pair of
> 1GbE links.
>
> The good news is that there doesn't seem to be much I/O penalty imposed
> by the hypervisor, so the domU hosts typically enjoy better disk I/O
> than an inexpensive server with a pair of SATA disks, at far less cost
> than the interconnects needed to couple a high-performance SAN to many
> individual hosts. Overall, the performance seems like a win for Xen
> virtualization.
>
> Jeff

Jeff:

That sounds about right. Those numbers I quoted were from an iozone
latency test with 64k block sizes--80 is very close to your 71!

I found that increasing readahead (to a point) really helps get me to
80MB/sec reads, and that using a low nr_requests (in the Linux DomU) seems
to influence the scheduler (cfq on the domU) to dispatch writes faster,
raising write speed to about 50MB/sec.

Of course, on the Dom0 I see 110MB/sec writes and reads on the same block
device at 64k.

But yeah, I'd love to hear what other people are doing...

Cheers!

cc

--
Chris Chen <muffaleta@gmail.com>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good." -- Seen on a wall
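The readahead and queue knobs mentioned above live in the usual Linux
block-layer places inside the domU. The values below are only illustrative
starting points, and the PV device may show up as xvda or sda depending on
the guest kernel:

    # increase readahead on the PV disk (--setra takes 512-byte sectors; 4096 = 2MB)
    blockdev --setra 4096 /dev/xvda

    # shrink the request queue so cfq dispatches writes sooner
    echo 32 > /sys/block/xvda/queue/nr_requests

    # confirm which elevator the domU is using
    cat /sys/block/xvda/queue/scheduler

As with any of these knobs, it is worth re-running the same iozone or dd
test after each change, since the best values depend on the workload and
the backing storage.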