I recently overhauled my RAID array - I now have 4 drives arranged as
RAID 0+1, all 15K 147GB Fujitsus, split across two buses which are
actively terminated to give U160 speeds (and I have verified this). The
card is a 5304 (128M cache) in a PCI-X slot.

This replaces a set of six 7200 rpm drives in RAID 5 which were running
at 40MB/sec due to non-LVD termination. I would expect to see a large
speed increase, wouldn't I? But it remains about the same - around
45MB/sec for reading a large file (3GB or so) and half that for copying
said file. These are 'real world' tests in the sense that I use the
drive for building large ISO images and copying them around - I really
don't care what benchmarks say, it's the speed of these two operations
that I want to make fast.

I've tried all the possible stripe sizes (128k gives the best
performance) but still I only get the above speeds. Just one of the 15K
drives on its own performs better than this! I would expect the RAID 0
to give me at least some speedup, or in the worst case be the same,
surely?

Booting into Windows and running some tests gives me far better
performance, however, so I am wondering if there is a driver issue
here. Has anyone else seen the same kind of results? I am running the
latest stable for amd64 and the machine has twin Opteron 242s with a
gig of RAM each. Surely it can do better than this?

-pcf.
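P.S. For concreteness, the two operations I'm timing boil down to
something like this (the file name is just an example):

    # sequential read of a large image
    time dd if=big.iso of=/dev/null bs=1m

    # copy within the same file system
    time cp big.iso big-copy.iso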
Pete French wrote:
> [...] it remains about the same - around 45MB/sec for reading a large
> file (3GB or so) and half that for copying said file. [...]

You might be able to speed up the read by playing with the vfs.read_max
sysctl (try 16 or 32).

cheers Mark
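P.S. In case the mechanics aren't obvious, this is roughly what I mean -
32 is just a starting value to experiment with, and the sysctl.conf line
is only needed if you want the setting to survive a reboot:

    # check the current read-ahead value
    sysctl vfs.read_max

    # try a larger value, then re-run your read test
    sysctl vfs.read_max=32

    # to make it permanent, add this line to /etc/sysctl.conf:
    # vfs.read_max=32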
Pete French wrote:
> I've tried all the possible stripe sizes (128k gives the best performance)
> but still I only get the above speeds. Just one of the 15K drives on its
> own performs better than this! I would expect the RAID 0 to give me at
> least some speedup, or in the worst case be the same, surely?

The checklist for tuning usually goes like this:

- Is the controller cache enabled?
- Do you have the battery for it, and is write cache enabled? (You won't
  make full use of the cache without the battery.)
- How does your performance compare when using dd on the raw devices
  (in order: da0, da0s1, da0s1a...) vs. when using it on the file system?
  (Poor performance might indicate FS vs. stripe alignment issues - see
  the example at the end of this message.)
- What does vmstat -i say while running your benchmarks? Any interrupt
  storms?
- If all of this fails, post your dmesg; maybe someone will notice
  something unusual.
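For the dd comparison above, something along these lines works (the file
name and sizes are just examples; read a few gigabytes so the controller
cache doesn't skew the numbers):

    # raw sequential read straight off the array device (run as root)
    dd if=/dev/da0 of=/dev/null bs=1m count=4096

    # the same amount of data through the file system
    dd if=/some/large/file of=/dev/null bs=1m count=4096

Run vmstat -i in another terminal while these are going, to catch any
interrupt storms.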