Hi list,

I have a fully updated CentOS 5.2 (Dom0 + DomU). I wanted to do some
performance tests with databases within a DomU (comparing RAID levels).
From a theoretical point of view RAID10 should be best, but I can't
even start the DomU when the LV sits on a RAID10.

The error I get within Dom0 is (several of those):

raid10_make_request bug: can't convert block across chunks or bigger
than 64k 1024000507 3

The DomU just sees the disk as a defective one.

Changing from phy to tap:aio gets the DomU running, but performance-wise
this is not the solution I am looking for.

My Google voodoo brought up:

http://www.issociate.de/board/post/485110/Bug(?)_in_raid10_(raid10,f2_-_lvm2_-_xen).html
http://www.issociate.de/board/post/423708/raid10_make_request_bug:_can''t_convert_block_across_chunks_or_bigger_%5B...%5D.html
https://bugzilla.redhat.com/show_bug.cgi?id=224077
https://bugzilla.redhat.com/show_bug.cgi?id=223947

So to me it looks like this is still a no-go - right?

cheers
Henry
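For illustration, the two DomU disk configurations being compared would look
roughly like this (a sketch only; the VG/LV names and the guest device name
are placeholders, not taken from the original report):

    # LV handed to the guest as a physical device - this is the case that
    # fails when the LV sits on md raid10
    disk = [ 'phy:/dev/vg_raid10/domu01,xvda,w' ]

    # same LV via blktap - the DomU boots, but with lower throughput
    disk = [ 'tap:aio:/dev/vg_raid10/domu01,xvda,w' ]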
henry ritzlmayr wrote:
> Hi list,
>
> I have a fully updated CentOS 5.2 (Dom0 + DomU). I wanted to do some
> performance tests with databases within a DomU (comparing RAID levels).
> From a theoretical point of view RAID10 should be best, but I can't
> even start the DomU when the LV sits on a RAID10.
>
> The error I get within Dom0 is (several of those):
>
> raid10_make_request bug: can't convert block across chunks or bigger
> than 64k 1024000507 3
>
> The DomU just sees the disk as a defective one.
>
> Changing from phy to tap:aio gets the DomU running, but performance-wise
> this is not the solution I am looking for.
>
> My Google voodoo brought up:
>
> http://www.issociate.de/board/post/485110/Bug(?)_in_raid10_(raid10,f2_-_lvm2_-_xen).html
> http://www.issociate.de/board/post/423708/raid10_make_request_bug:_can''t_convert_block_across_chunks_or_bigger_%5B...%5D.html
> https://bugzilla.redhat.com/show_bug.cgi?id=224077
> https://bugzilla.redhat.com/show_bug.cgi?id=223947
>
> So to me it looks like this is still a no-go - right?

The kernel md raid10 driver is a little off.

You could try striping LVs across md RAID1 PVs instead.

Say you have 6 drives: create 3 MD RAID1s, turn them into PVs (the whole
disk is fine, no need to partition), add them to a volume group, then
create the LVs with lvcreate -i 3 ..., which will stripe them across all
3 PVs.

This will give you the same performance as MD RAID10 and should work
fine with Xen.

-Ross
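For example, a minimal sketch of the layout described above with six
hypothetical drives sdb through sdg (untested; the device names, VG/LV names,
stripe size and LV size are all placeholders):

    # three RAID1 pairs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdf /dev/sdg

    # whole-device PVs, one volume group across all three
    pvcreate /dev/md0 /dev/md1 /dev/md2
    vgcreate vg_xen /dev/md0 /dev/md1 /dev/md2

    # -i 3 stripes the LV across the three PVs, -I sets the stripe size in KB
    lvcreate -i 3 -I 64 -L 100G -n domu01 vg_xen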
On Monday, 08.09.2008, at 10:09 -0400, Ross S. W. Walker wrote:
> henry ritzlmayr wrote:
> >
> > Hi list,
> >
> > I have a fully updated CentOS 5.2 (Dom0 + DomU). I wanted to do some
> > performance tests with databases within a DomU (comparing RAID levels).
> > From a theoretical point of view RAID10 should be best, but I can't
> > even start the DomU when the LV sits on a RAID10.
> >
> > The error I get within Dom0 is (several of those):
> >
> > raid10_make_request bug: can't convert block across chunks or bigger
> > than 64k 1024000507 3
> >
> > The DomU just sees the disk as a defective one.
> >
> > Changing from phy to tap:aio gets the DomU running, but performance-wise
> > this is not the solution I am looking for.
> >
> > My Google voodoo brought up:
> >
> > http://www.issociate.de/board/post/485110/Bug(?)_in_raid10_(raid10,f2_-_lvm2_-_xen).html
> > http://www.issociate.de/board/post/423708/raid10_make_request_bug:_can''t_convert_block_across_chunks_or_bigger_%5B...%5D.html
> > https://bugzilla.redhat.com/show_bug.cgi?id=224077
> > https://bugzilla.redhat.com/show_bug.cgi?id=223947
> >
> > So to me it looks like this is still a no-go - right?
>
> The kernel md raid10 driver is a little off.
>
> You could try striping LVs across md RAID1 PVs instead.
>
> Say you have 6 drives: create 3 MD RAID1s, turn them into PVs (the whole
> disk is fine, no need to partition), add them to a volume group, then
> create the LVs with lvcreate -i 3 ..., which will stripe them across all
> 3 PVs.
>
> This will give you the same performance as MD RAID10 and should work
> fine with Xen.
>
> -Ross

Hi Ross,

thanks for the reply, I will give this a shot. Do you have any
information (a link) on why the md raid10 driver is "a little off", or
whether there are any plans to change this? I guess this is probably an
upstream issue, but some of my test systems only have three disks, so
the above solution works only for a subset of my machines.

Henry
henry ritzlmayr wrote:
> On Monday, 08.09.2008, at 10:09 -0400, Ross S. W. Walker wrote:
> >
> > The kernel md raid10 driver is a little off.
> >
> > You could try striping LVs across md RAID1 PVs instead.
> >
> > Say you have 6 drives: create 3 MD RAID1s, turn them into PVs (the whole
> > disk is fine, no need to partition), add them to a volume group, then
> > create the LVs with lvcreate -i 3 ..., which will stripe them across all
> > 3 PVs.
> >
> > This will give you the same performance as MD RAID10 and should work
> > fine with Xen.
> >
> > -Ross
>
> Hi Ross,
>
> thanks for the reply, I will give this a shot. Do you have any
> information (a link) on why the md raid10 driver is "a little off", or
> whether there are any plans to change this? I guess this is probably an
> upstream issue, but some of my test systems only have three disks, so
> the above solution works only for a subset of my machines.

Nothing besides what has been reported in the bug databases, this one
in particular:

https://bugzilla.redhat.com/show_bug.cgi?id=223947

Comment #6, about an outstanding dm patch for max_hw_sectors not being
preserved, sounds like it MIGHT fit the bill here.

If the upper stackable storage drivers don't get the correct
max_hw_sectors, they can in fact create bios bigger than the underlying
storage system can handle, which can cause the kind of errors being
reported here.

The unconfirmed workaround of using tap:aio also seems to fit, because
it forces the I/O through the VFS layer, which means it is memory mapped
and limited to PAGE_SIZE requests that will always fit within
max_hw_sectors - but it also means throughput is choked by the PAGE_SIZE
request size.

You can test this with iSCSI Enterprise Target: export a blockio LUN on
a raid10 partition (with or without LVM) and have an initiator issue I/O
requests larger than the max_hw_sectors of your SATA/SAS controller
(dd bs=X, with X > max_hw_sectors). I know for a fact that the blockio
storage module in IET uses max_hw_sectors to determine the largest bio
size that can be sent to the hardware, because I wrote that part of the
module.

Having said all this, if all you want is to get your storage up and
running without debugging the kernel, then I advise you to avoid the md
raid10 driver for now.

-Ross

PS: You can also build a raid10 by creating a set of md raid1s and then
creating an md raid0 on top of them. I have always felt the md raid10
module isn't really a raid10 module but a completely new raid level that
should have its own identifier (Wikipedia actually sees it the same way).
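For reference, a rough way to inspect the limits mentioned above and to push
oversized direct I/O at a device (a sketch only; the device names are
placeholders, and the dd write is destructive, so aim it only at a scratch or
test device):

    # request-size limits of an underlying disk, reported in KB
    cat /sys/block/sda/queue/max_hw_sectors_kb
    cat /sys/block/sda/queue/max_sectors_kb

    # issue direct writes with a block size deliberately larger than
    # max_hw_sectors_kb, bypassing the page cache
    dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct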