jim burns
2008-Jan-28 03:23 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O
Sami Dalouche writes:
> So, conclusion, I am lost:
> On the one side, it seems that Xen, when used on top of a RAID array, is
> wayyy slower, but when used on top of a plain old disk, seems to give pretty
> much native performance. Is there a potential link between Xen and RAID
> vs. non-RAID performance? Or maybe the problem is caused by Xen + RAID +
> LVM?

Hmm, there is an interesting couple of paragraphs on the
http://kernelnewbies.org/LinuxChanges page for the 2.6.24 kernel. Apparently,
LVM is prone to dirty-write-page deadlocks. Maybe this is being aggravated by
RAID, at least in your case? I quote:

2.7. Per-device dirty memory thresholds

You can read this recommended article about the "per-device dirty thresholds"
feature.

When a process writes data to the disk, the data is stored temporarily in
'dirty' memory until the kernel decides to write it to the disk ('cleaning'
the memory used to store the data). A process can 'dirty' memory faster than
the data is written to the disk, so the kernel throttles processes when there
is too much dirty memory around. The problem with this mechanism is that the
dirty memory thresholds are global: the mechanism doesn't care whether there
are several storage devices in the system, much less whether some of them are
faster than others. There are a lot of scenarios where this design harms
performance. For example, if there is a very slow storage device in the system
(e.g. a USB 1.0 disk, or an NFS mount over dialup), the thresholds are hit
very quickly, not allowing other processes that may be working on a much
faster local disk to make progress. Stacked block devices (e.g. LVM/DM) are
much worse and even deadlock-prone (check the LWN article).

In 2.6.24, the dirty thresholds are per-device, not global. The limits are
variable, depending on the writeout speed of each device. This improves
performance greatly in many situations.
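For what it's worth, the global limits that text refers to are the ordinary
vm.dirty_* sysctls, and on 2.6.24 the per-device limits are exposed under
/sys/class/bdi/. Here is a quick Python sketch (nothing Xen-specific; it only
assumes the standard /proc and /sys paths) that dumps both, so you can compare
dom0 before and after a kernel upgrade:

#!/usr/bin/env python
# Sketch: print the global dirty-memory thresholds and, where present,
# the per-device (per-BDI) knobs that 2.6.24 adds under /sys/class/bdi.
# Standard /proc and /sys paths are assumed.
import os

def read_value(path):
    try:
        f = open(path)
        try:
            return f.read().strip()
        finally:
            f.close()
    except IOError:
        return "n/a"

# Global thresholds: how much memory may be dirty before background
# writeback kicks in / before writers get throttled.
for knob in ("dirty_background_ratio", "dirty_ratio",
             "dirty_expire_centisecs", "dirty_writeback_centisecs"):
    print("vm.%s = %s" % (knob, read_value("/proc/sys/vm/" + knob)))

# Per-device knobs (2.6.24+): each backing device gets its own share of
# the dirty limit, tunable through min_ratio/max_ratio.
bdi_root = "/sys/class/bdi"
if os.path.isdir(bdi_root):
    for dev in sorted(os.listdir(bdi_root)):
        print("%s: min_ratio=%s max_ratio=%s" % (
            dev,
            read_value(os.path.join(bdi_root, dev, "min_ratio")),
            read_value(os.path.join(bdi_root, dev, "max_ratio"))))
else:
    print("%s not found - kernel older than 2.6.24?" % bdi_root)

If /sys/class/bdi is missing, the kernel predates the per-device code and only
the global thresholds apply.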
Sami Dalouche
2008-Jan-28 19:31 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O
Hmm... thanks a lot for the tip! OK, so I guess it is theoretically possible
that the RAID setup creates problems with Xen in my case. To check that, I'll
install more disks in the RAID array, create a new device without LVM, and see
if the performance is the same. Since I have already done tests with
Xen + no RAID + no LVM, Xen + no RAID + LVM, and Xen + RAID + LVM, the only
missing benchmark is Xen + RAID + no LVM. I'll post back when I have the
results of that benchmark (I first need to go to the data center, etc.; a
rough sketch of the kind of write test I have in mind follows at the end of
this message), and I'll also check the performance improvement from upgrading
the kernel to 2.6.24.

What do you think of the container support in 2.6.24? Isn't it a better Xen
for servers? (I believe most people use virtualization on the server side just
to isolate processes, so containers seem like a better Xen in that case, don't
they?)

Regards,
Sami

On Sun, 2008-01-27 at 22:23 -0500, jim burns wrote:
> Sami Dalouche writes:
> > So, conclusion, I am lost:
> > On the one side, it seems that Xen, when used on top of a RAID array, is
> > wayyy slower, but when used on top of a plain old disk, seems to give
> > pretty much native performance. Is there a potential link between Xen and
> > RAID vs. non-RAID performance? Or maybe the problem is caused by Xen +
> > RAID + LVM?
>
> Hmm, there is an interesting couple of paragraphs on the
> http://kernelnewbies.org/LinuxChanges page for the 2.6.24 kernel.
> Apparently, LVM is prone to dirty-write-page deadlocks. Maybe this is being
> aggravated by RAID, at least in your case? I quote:
>
> 2.7. Per-device dirty memory thresholds
>
> You can read this recommended article about the "per-device dirty
> thresholds" feature.
>
> When a process writes data to the disk, the data is stored temporarily in
> 'dirty' memory until the kernel decides to write it to the disk ('cleaning'
> the memory used to store the data). A process can 'dirty' memory faster
> than the data is written to the disk, so the kernel throttles processes
> when there is too much dirty memory around. The problem with this mechanism
> is that the dirty memory thresholds are global: the mechanism doesn't care
> whether there are several storage devices in the system, much less whether
> some of them are faster than others. There are a lot of scenarios where
> this design harms performance. For example, if there is a very slow storage
> device in the system (e.g. a USB 1.0 disk, or an NFS mount over dialup),
> the thresholds are hit very quickly, not allowing other processes that may
> be working on a much faster local disk to make progress. Stacked block
> devices (e.g. LVM/DM) are much worse and even deadlock-prone (check the LWN
> article).
>
> In 2.6.24, the dirty thresholds are per-device, not global. The limits are
> variable, depending on the writeout speed of each device. This improves
> performance greatly in many situations.
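For reference, this is roughly the kind of sequential-write test I mean. It is
only a sketch (Python): the mount point /mnt/test is hypothetical, the sizes
are arbitrary, and a real comparison would also cover reads and random I/O
with something like bonnie++.

#!/usr/bin/env python
# Sketch: time a large sequential write, to be repeated with identical
# parameters on each setup (RAID/no RAID, LVM/no LVM, dom0/domU).
# MOUNT_POINT and the sizes below are placeholders, not values from the thread.
import os
import time

MOUNT_POINT = "/mnt/test"          # hypothetical mount of the device under test
FILE_SIZE = 1024 * 1024 * 1024     # 1 GiB total
BLOCK_SIZE = 1024 * 1024           # 1 MiB per write() call

def sequential_write(path, total, block):
    buf = b"x" * block
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    written = 0
    while written < total:
        written += os.write(fd, buf)
    os.fsync(fd)                   # include the time for data to actually reach the disk
    os.close(fd)
    return written / (time.time() - start)

if __name__ == "__main__":
    target = os.path.join(MOUNT_POINT, "benchfile")
    rate = sequential_write(target, FILE_SIZE, BLOCK_SIZE)
    print("%s: %.1f MB/s sequential write" % (target, rate / (1024.0 * 1024.0)))
    os.unlink(target)

Running the same script in dom0 and in a domU against each device should show
whether the slowdown follows the RAID array, the LVM layer, or Xen itself.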
jim burns
2008-Jan-29 01:29 UTC
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O
On Monday 28 January 2008 02:31:32 pm Sami Dalouche wrote:
> And I'll also check the performance improvement from upgrading the kernel
> to 2.6.24. What do you think of the container support in 2.6.24? Isn't it
> a better Xen for servers? (I believe most people use virtualization on the
> server side just to isolate processes, so containers seem like a better
> Xen in that case, don't they?)

You are talking about the section with the heading "2.8. PID and network
namespaces"? I tend to wait until my distro makes new features available,
writes the userspace config tools, etc. Since my Xen server runs Fedora, and
they are still stuck on a 2.6.21 dom0 kernel, it will probably be about a year
(fc10?) before I see the benefits of this. My more immediate problem is
waiting for a 2.6.22 or .23 dom0, so I can start using my Intel wireless card
with iwlwifi without rebooting into the non-Xen 2.6.23 kernel ;-)
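As a footnote on what those namespaces actually buy you, here is a rough
illustration (Python via ctypes; it assumes root, a kernel built with
network-namespace support, and the standard glibc path, so treat it as a
sketch rather than something to run on a production dom0). It detaches the
calling process into its own network stack, which is one of the isolation
pieces the container work builds on.

#!/usr/bin/env python
# Sketch: move this process into a private network namespace.
# Requires root and network-namespace support in the running kernel.
import ctypes
import os

CLONE_NEWNET = 0x40000000          # flag value from <linux/sched.h>

libc = ctypes.CDLL("libc.so.6")
if libc.unshare(CLONE_NEWNET) != 0:
    raise RuntimeError("unshare(CLONE_NEWNET) failed - old kernel or not root?")

# From here on, the process sees a private network stack: its own (down)
# loopback device, no eth0, and separate routing tables.
os.system("ip link show")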