Dear all,

I'm performing some tests to evaluate Xen DomU disk performance compared to vanilla Linux disk performance, and I'm not sure about the causes of the results I obtained.

I'm running the tests on this hardware configuration:
HP ProLiant DL380, dual Intel Xeon 2.8 GHz (HyperThreading enabled), 5 GB PC2100 RAM
OS: Debian 5.0, 2.6.26-2-686 kernel

I'm using iozone to perform the tests, configured to use an 8 KB block size on files ranging from 64 KB to 2 GB. The test comprises 6 cases: seq. read, seq. re-read, random read, seq. write, seq. re-write, random write.

I first ran my tests on a vanilla Linux kernel (2.6.26-2-686) configured to use 1 GB RAM, then I ran the same tests with Xen 3.4 on a domU with this configuration:

Dom0: 2.6.26-2-xen-686, dom0_mem=1024MB

DomU:
name = "vm"
memory = 1024
vcpus = 1
kernel = "/root/vm/xen-kernel/vmlinuz-2.6.24-19-xen"
ramdisk = "/root/vm/xen-kernel/initrd.img-2.6.24-19-xen"
disk = [ 'file:/root/vm/vm.img,sda1,w', 'phy:/dev/VolGroup00/Test,sda2,w' ]

In both cases, the tests were performed on an LVM partition running on top of a SCSI disk. I ran the tests on different LV configurations (pure LV, snapshotted LV, etc.), using the ext3 filesystem.

Attached to this mail is a file with 3 graphs summarizing the results for the seq. write case. The first and second graphs have the write speed in KB/s on the Y axis; the X axis is the file size in KB, and each color represents a different LV configuration. The third graph is the ratio between the previous two graphs, using the vanilla Linux performance as 1. So the Y axis is the fraction of the DomU performance with respect to the vanilla Linux performance (0.5 means 50% of the vanilla Linux performance, 2.1 means 210%, ...).

I'm trying to explain these results. Can you help me?

Looking at the first graph (excluding the 64 KB file case, which for some reason is a biased test), we can easily see three performance levels:
~250 MB/s: the effect of the processor cache (for the 128, 256 and 512 KB files),
~220 MB/s: then the effect of the RAM buffer, up to the 64 MB file test,
~60 MB/s: finally "degraded" performance, when accesses to the physical disk are performed (i.e. we have to wait for the RAM buffer to be written to disk).

As you can see, when the file size grows, the performance of snapshotted LVs goes down because of the need for multiple I/O accesses.

Now, looking at what happens with the domU (just for the "pure" LV case for now), I'm not sure how to interpret the results, and maybe I need more knowledge of how Xen and the OS handle I/O requests (e.g. how the processors are used). From the second graph (again, excluding the 64 KB file case), you can see just two performance levels:
~300 MB/s: up to the 32 MB file test,
~58 MB/s: when accesses to the physical disk are performed.

The first strange thing is that the domU seems to have better performance than vanilla Linux and, more interestingly, the domU performance is not affected by the processor cache limit (!). The second thing is the result for the 64 MB and 128 MB tests, which are the only cases where the domU performs worse. Even though the domU RAM is the same as in the vanilla Linux configuration, it seems the domU is not able to use all of its RAM to buffer writes to disk.

At the moment I'm not able to figure out what is going on. Can you help me?

Thanks,
Roberto

--
Roberto Bifulco, Ph.D. Student
robertobifulco.it
COMICS Lab - www.comics.unina.it

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
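For reference, an iozone invocation matching the parameters Roberto describes would look roughly like the following. His exact command line is not shown in the thread; the flags below are iozone's standard auto-mode options, and the output file name is purely illustrative.

# Auto mode over file sizes from 64 KB to 2 GB with an 8 KB record size;
# -i 0 = write/rewrite, -i 1 = read/re-read, -i 2 = random read/write
iozone -a -n 64k -g 2g -r 8k -i 0 -i 1 -i 2 -b results.xls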
It appears Roberto has gone far beyond any testing I've ever done, but I've always been concerned about disk access. I have a Xen server with internal RAID 1 SATA drives, using img files. The host is CentOS 5.5.

I notice that whenever I build a new VM and it gets to the point where it is allocating the img file, all my other virtual machines effectively stop doing anything for the entire time the space is being allocated.

One DomU is a Windows terminal server. While logged into it, I started the image allocation and then tried to open MS Outlook on the terminal server. Everything hung until the disk allocation was completed; Outlook finally opened 30 minutes later.

Is there a way to do 'fair-sharing' of disk IO time so that a single DomU cannot hog all the physical IO capacity?

________________________________
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Roberto Bifulco
Sent: Tuesday, December 14, 2010 9:22 AM
To: xen-users@lists.xensource.com
Subject: [Xen-users] Xen disk performance over lvm
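Two generic workarounds for this kind of allocation stall, sketched here for reference: the image path and size below are made up, and ionice only takes effect when the backing device uses the CFQ I/O scheduler.

# Run the allocation at "idle" I/O priority so the running guests keep precedence
# (class 3 = idle; honored only by the CFQ scheduler on the backing device)
ionice -c3 dd if=/dev/zero of=/var/lib/xen/images/newvm.img bs=1M count=20480

# Or skip the long up-front allocation by creating a sparse file instead;
# blocks are then allocated lazily as the guest writes to them
dd if=/dev/zero of=/var/lib/xen/images/newvm.img bs=1M count=0 seek=20480

The sparse-file variant trades the up-front stall for slower first writes inside the guest, which is essentially the filesystem allocation cost Hans describes further down the thread.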
On Wed, Dec 15, 2010 at 2:16 AM, Russ Purinton <rpurinton@voipnettechnologies.com> wrote:

> Is there a way to do ‘fair-sharing’ of disk IO time so that a single DomU
> cannot hog all the physical IO capacity?

Just curious... ... are you using AHCI mode?

Command to check:

# cat /etc/modprobe.conf | grep scsi_hostadapter

Thanks.

Kindest regards,
Giam Teck Choon
I'm guessing yes, based on this:

[root@xen1 ~]# cat /etc/modprobe.conf | grep scsi_hostadapter
alias scsi_hostadapter ahci
alias scsi_hostadapter1 usb-storage

Is this good for me or bad?

Thank you

________________________________
From: Teck Choon Giam [mailto:giamteckchoon@gmail.com]
Sent: Tuesday, December 14, 2010 2:10 PM
To: Russ Purinton
Cc: Roberto Bifulco; xen-users@lists.xensource.com
Subject: Re: [Xen-users] Sharing Disk Access Time
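If modprobe.conf alone isn't conclusive, two other generic checks (not specific to this box) are whether the ahci module is loaded and whether the kernel bound the controller with it at boot:

# is the ahci module loaded?
lsmod | grep ahci
# did the kernel claim the controller with the AHCI driver at boot?
dmesg | grep -i ahci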
On Thu, Dec 16, 2010 at 12:26 AM, Russ Purinton <rpurinton@voipnettechnologies.com> wrote:

> [root@xen1 ~]# cat /etc/modprobe.conf | grep scsi_hostadapter
> alias scsi_hostadapter ahci
> alias scsi_hostadapter1 usb-storage
>
> Is this good for me or bad?

Some people will debate... ... personally I prefer to have that instead of using SATA in IDE mode, if the BIOS supports it.

And... I just read your original post:

> In both cases, the tests were performed on an LVM partition running on top of
> a SCSI disk.

Of course a SCSI disk won't be using the normal ata_piix or sata_... ... so what am I talking about here... ... :p

Sorry, doesn't help much for your current problem... ... :(

Thanks.

Kindest regards,
Giam Teck Choon
On 12/15/2010 05:26 PM, Russ Purinton wrote:
> I notice that whenever I build a new VM and it gets to where it is
> allocating the img file, the entire time it is allocating the space for
> it, all my other virtual machines effectively stop doing anything.

This is not only an issue on Xen but also on KVM. I have seen big differences between installing Windows VMs on ext4 and installing them on bare LVM volumes. File-space allocation on ext4 is very expensive. Because of this I only use VMs on bare LVs. Stacking two transactional filesystems (NTFS on ext3/4, or ext3/4 on ext3/4) seems a bad idea to me anyway.

--
Hans
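For anyone wanting to follow that advice, switching a guest from a file-backed image to a bare LV is roughly the following. The LV name and size are illustrative; the VolGroup00 volume group and the phy: disk syntax are taken from Roberto's config earlier in the thread.

# Carve a dedicated logical volume for the guest instead of an img file on ext3/ext4
lvcreate -L 20G -n vm01-disk VolGroup00

# Then reference it from the domU config as a physical device, e.g.
#   disk = [ 'phy:/dev/VolGroup00/vm01-disk,sda1,w' ]

Creating the LV is effectively instant, since only volume-group metadata is written, so the half-hour allocation stall described above disappears.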