Hi,

I think I found part of the problem. Our RAID array vendor explained to me
that the array does not get enough data from ext3 to do full-stripe writes
when ext3 issues a cache flush command; basically something to do with the
journaling and cache flushing. To test this I replaced ext3 with ext2 and
now I get the expected results:

mkfs.ext2 -Tlargefile4 -b4096 /dev/dm-1

dd if=/dev/zero of=/mnt/test bs=1024K count=1024 conv=fsync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.85545 seconds, 157 MB/s

So now I think I will just try having ext3 journal to a different device
and see if that helps any.

Thanks
Rene

On 8/8/07, GARDAIS Ionel <Ionel.Gardais at tech-advantage.com> wrote:
>
> stride is the chunk size of your RAID, whatever the number of physical
> disks composing the RAID array.
> So for 256k chunks with a block size of 4k, stride should be 256/4 = 64
> instead of 640.
>
> Maybe it will help.
>
> Ionel
>
>
> -------- Original message --------
> From: Rene Salmon [mailto:rsalmon74 at gmail.com]
> Date: Wed 08/08/2007 22:53
> To: GARDAIS Ionel
> Cc: ext3-users at redhat.com
> Subject: Re: RE: Poor ext3 performance on RAID array
>
> Hi,
>
> Thanks for the reply. I tried using the -E stride=X option as follows:
>
> mkfs.ext3 -b4096 -Tlargefile4 -E stride=640 /dev/dm-1
>
> and got the same results, around 50 MBytes/sec. Maybe I have the wrong
> number for stride, so here is the math for that:
>
> stride=stripe-size
>        Configure the filesystem for a RAID array with stripe-size
>        filesystem blocks per stripe.
>
> Here is some more detail on the RAID array:
>
> RAID level  : 5 (10 drives + 1 parity)
> Chunk Size  : 256 KB
> Stripe Size : 2560 KB (10 drives * 256 KB)
>
> stride = 640 * 4096-byte blocks = 2560 KB
>
> I will try other stride options but they don't seem to change much.
>
> Thanks
> Rene
>
>
> On 8/8/07, GARDAIS Ionel <Ionel.Gardais at tech-advantage.com> wrote:
> >
> > Hi Rene,
> >
> > You should try adding the "-E stride=X" option to the mkfs command
> > line, where X is explained in the man page.
> >
> > This will basically map ext3 "blocks" onto the RAID stripe size.
> >
> > Ionel
> >
> >
> > -------- Original message --------
> > From: ext3-users-bounces at redhat.com on behalf of Rene Salmon
> > Date: Wed 08/08/2007 22:21
> > To: ext3-users at redhat.com
> > Subject: Poor ext3 performance on RAID array
> >
> > Hi list,
> >
> > I am having some strange performance issues with ext3 and I am hoping
> > to get some advice/hints on how to make this better. First some
> > background on the setup.
> >
> > We have a RAID 5 array, 10+1+1, with one LUN. That is 10 SATA drives,
> > one parity drive, and one dedicated spare. The LUN is about 6.5 TB.
> >
> > Using a 2 Gbit/sec Fibre Channel card I can do some dd writes to the
> > raw device and get speeds close to 200 MBytes/sec, which is more or
> > less the max the card can do.
> >
> > Next I create an xfs file system on the LUN, do a dd to it, and get
> > speeds close to 150 MBytes/sec.
> >
> > I want to use ext3, not xfs, so next I put ext3 on the LUN. Now when I
> > do the dd to the ext3 LUN I get 25-50 MBytes/sec, depending on whether
> > I have the RAID controller cache turned on or off (50 MBytes/sec with
> > the cache turned off).
> >
> > I know that ext3 should perform better, so I must be doing something
> > wrong. Here is my mkfs.ext3:
> >
> > mkfs.ext3 -b4096 -Tlargefile4 /dev/dm-0
> >
> > Thanks in advance for any help on this.
> >
> > Rene
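
Putting Ionel's correction together with the array numbers quoted above, a
minimal sketch of the adjusted mkfs invocation (assuming the same 4 KB block
size and 256 KB chunk size, and reusing the /dev/dm-1 device from the ext2
test; this command is not taken from the thread itself):

# stride = chunk size / block size = 256 KB / 4 KB = 64 filesystem blocks
mkfs.ext3 -b4096 -Tlargefile4 -E stride=64 /dev/dm-1

Later e2fsprogs releases also accept a stripe-width (stripe_width) extended
option for the full data stripe (10 data disks * 64 = 640 blocks), if the
installed version supports it.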
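
For the external-journal experiment mentioned at the top of the thread, a
rough sketch, assuming a spare block device is available (the /dev/sdX name
here is hypothetical) and that the journal device is created with the same
4 KB block size as the filesystem, which mke2fs requires:

# create a dedicated external journal on the hypothetical spare device
mke2fs -O journal_dev -b 4096 /dev/sdX

# build ext3 on the LUN and point its journal at that device
mkfs.ext3 -b4096 -Tlargefile4 -E stride=64 -J device=/dev/sdX /dev/dm-1

It should also be possible to convert the existing filesystem instead of
rebuilding it, by removing the internal journal with tune2fs -O ^has_journal
and then attaching the external one with tune2fs -J device=/dev/sdX.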