Hi,

I have been wondering if my disk setup should perform better than it is
doing now.

I have a dual PIII 500 MHz with an Intel server motherboard (a couple of
years old). On that, I have a DPT (now Adaptec) RAID controller,
"DPT PM2654U2", which supports a 40 MHz SCSI bus, giving a theoretical
data transfer speed of 80 MB/s. There are two physical disks, which have
been mirrored (i.e. RAID-1). The disks are Maxtor Atlas 10K4; I think
Maxtor says they should give a sustained transfer rate of up to 72 MB/s.
I have confirmed with dptutil that the SCSI bus is running at 80 MB/s.

The system is running FreeBSD 4.8.

However, when reading the raw device with dd like this:

dd if=/dev/rda1s1a of=/dev/null bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 4.193832 secs (25002814 bytes/sec)

So, I get only about 25 MB/s. Shouldn't I be getting something like
70 MB/s, or even more, since there are two disks that can serve read
requests?

Maybe there is something I could tune? The BIOS doesn't have much; there
is only a setting to enable bus mastering (enabled) and another for the
PCI latency timer value (it was 40, I think).

	Ari S.

--
	Ari Suutari
	Lemi, Finland
Ari Suutari writes:

> dd if=/dev/rda1s1a of=/dev/null bs=1m count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes transferred in 4.193832 secs (25002814 bytes/sec)
>
> So, I get only about 25 MB/s. Shouldn't I be getting something like
> 70 MB/s, or even more, since there are two disks that can serve read
> requests?

Have you tried other block sizes? I think you may be able to get better
results by going to a lower block size (e.g., 64k instead of 1m). Some
experimentation will show which block size(s) work best.
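For what it's worth, a quick way to run that experiment is to read the
same 100 MB at each block size and let dd report the rate. This is only a
sketch, reusing the raw device path from Ari's post; substitute your own:

#!/bin/sh
# Sketch: read the same 100 MB from the raw device at several block sizes
# and compare the rates dd reports.  The device path is taken from the
# original post - adjust it for your own layout.
DEV=/dev/rda1s1a

dd if=$DEV of=/dev/null bs=64k  count=1600   # 100 MB in 64 kB reads
dd if=$DEV of=/dev/null bs=128k count=800
dd if=$DEV of=/dev/null bs=256k count=400
dd if=$DEV of=/dev/null bs=512k count=200
dd if=$DEV of=/dev/null bs=1m   count=100

Each line reads the same total amount of data, so the transfer rates dd
prints are directly comparable.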
--On 14 January 2004 14:53 +0200 Ari Suutari <ari@suutari.iki.fi> wrote:

> I have a dual PIII 500 MHz with an Intel server motherboard (a couple
> of years old). On that, I have a DPT (now Adaptec) RAID controller,
> "DPT PM2654U2", which supports a 40 MHz SCSI bus, giving a theoretical
> data transfer speed of 80 MB/s. There are two physical disks, which
> have been mirrored (i.e. RAID-1). The disks are Maxtor Atlas 10K4; I
> think Maxtor says they should give a sustained transfer rate of up to
> 72 MB/s.

72 MB per second? That seems a little high. If that's the "up to" rate,
you can guarantee it won't do that across the whole surface - it could be
half that speed in places.

> dd if=/dev/rda1s1a of=/dev/null bs=1m count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes transferred in 4.193832 secs (25002814 bytes/sec)
>
> So, I get only about 25 MB/s. Shouldn't I be getting something like
> 70 MB/s, or even more, since there are two disks that can serve read
> requests?

Hmmm, what happens if you run two of those at once?

> Maybe there is something I could tune? The BIOS doesn't have much;
> there is only a setting to enable bus mastering (enabled) and another
> for the PCI latency timer value (it was 40, I think).

In theory (and assuming nothing else fiddles with it or overrides it),
the higher you set the latency timer, the faster but more "jerky" the PCI
bus gets: devices spend longer talking on it, so the per-transfer setup
overhead goes down, at the expense of devices "hogging" the bus for
longer... I think :) But I can't remember adjusting it ever making any
real-world difference - what happens if you max it out (e.g. 128, or 256)?

It'll be interesting to see what speed you get with two of the benchmarks
running at the same time... Have you also tried different block sizes,
e.g. 64k, 128k, etc.?

	-Karl
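A minimal sketch of the two-readers-at-once experiment, again assuming
the raw device path from Ari's post; the 200 MB offset is arbitrary and
only keeps the two streams from reading the same blocks, so it assumes
the partition is at least ~300 MB:

#!/bin/sh
# Sketch of the "run two of those at once" test.  DEV is the raw device
# from the original post; skip is measured in bs-sized (64 kB) blocks and
# the offset is arbitrary - adjust both to suit your layout.
DEV=/dev/rda1s1a

echo "--- one reader (baseline) ---"
dd if=$DEV of=/dev/null bs=64k count=1600               # 100 MB

echo "--- two readers at once ---"
dd if=$DEV of=/dev/null bs=64k count=1600 &
dd if=$DEV of=/dev/null bs=64k count=1600 skip=3200 &   # 3200 * 64 kB = 200 MB in
wait

If the controller really does dispatch reads to both halves of the
mirror, the two concurrent streams together should beat the single-stream
figure; if each simply drops to about half of it, the controller is
effectively serialising them.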
I've got to say this seems pretty bad, since my laptop gets ...

104857600 bytes transferred in 4.491311 secs (23346770 bytes/sec)

On Wed, 2004-01-14 at 22:53, Ari Suutari wrote:

> Hi,
>
> I have been wondering if my disk setup should perform better than it
> is doing now.
>
> I have a dual PIII 500 MHz with an Intel server motherboard (a couple
> of years old). On that, I have a DPT (now Adaptec) RAID controller,
> "DPT PM2654U2", which supports a 40 MHz SCSI bus, giving a theoretical
> data transfer speed of 80 MB/s. There are two physical disks, which
> have been mirrored (i.e. RAID-1). The disks are Maxtor Atlas 10K4; I
> think Maxtor says they should give a sustained transfer rate of up to
> 72 MB/s. I have confirmed with dptutil that the SCSI bus is running at
> 80 MB/s.
>
> The system is running FreeBSD 4.8.
>
> However, when reading the raw device with dd like this:
>
> dd if=/dev/rda1s1a of=/dev/null bs=1m count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes transferred in 4.193832 secs (25002814 bytes/sec)
>
> So, I get only about 25 MB/s. Shouldn't I be getting something like
> 70 MB/s, or even more, since there are two disks that can serve read
> requests?
>
> Maybe there is something I could tune? The BIOS doesn't have much;
> there is only a setting to enable bus mastering (enabled) and another
> for the PCI latency timer value (it was 40, I think).
>
>	Ari S.

--
Mark Sergeant <msergeant@snsonline.net>
SNSOnline Technical Services
> -----Original Message-----
> From: owner-freebsd-stable@freebsd.org
> [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Paul Mather
> Sent: Thursday, January 15, 2004 9:39 AM
> To: Mike Jakubik
> Cc: freebsd-stable@freebsd.org
> Subject: Re: Adaptect raid performance with FreeBSD
>
> On Wed, Jan 14, 2004 at 05:52:50PM -0500, Mike Jakubik wrote:
>
> > This sounds pretty poor for SCSI RAID. Here are my results on a
> > single Maxtor ATA drive.
> >
> > CPU: AMD Athlon(tm) Processor (1410.21-MHz 686-class CPU)
> > ad0: 76345MB <MAXTOR 6L080L4> [155114/16/63] at ata0-master UDMA100
> >
> > # dd if=/dev/rad0s1a of=/dev/null bs=1m count=100
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes transferred in 2.484640 secs (42202333 bytes/sec)
> >
> > 5 dd's running simultaneously show the following in iostat.
>
> What about 5 dd's running simultaneously but with slightly staggered
> start times, so that four of them aren't hitting the drive's cache and
> hence only really testing its interface speed? :-)

Here are the results with a .3 second delay between each dd start:

104857600 bytes transferred in 9.572284 secs (10954293 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.261223 secs (11322220 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.262631 secs (11320499 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.263857 secs (11319000 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.265230 secs (11317323 bytes/sec)

I'm not sure if this was done properly; here is the command I used:

# dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100

iostat -w 1:

      tty             ad0              ad2              ad4             cpu
 tin tout   KB/t  tps  MB/s  KB/t tps  MB/s  KB/t tps  MB/s  us ni sy in id
   0    2    0.00    0   0.00  0.00   0  0.00  0.00   0  0.00   0  0  0  0 100
   1  119  128.00   37   4.62  0.00   0  0.00  0.00   0  0.00   0  0  1  0  99
   0   77  128.00  398  49.75  0.00   0  0.00  0.00   0  0.00   0  0  2  2  97
   0   77  128.00  422  52.72  0.00   0  0.00  0.00   0  0.00   0  0  2  1  98
   0   77  128.00  421  52.60  0.00   0  0.00  0.00   0  0.00   0  0  2  2  96
   0   77  128.00  422  52.72  0.00   0  0.00  0.00   0  0.00   0  0  2  1  97
   0   77  128.00  421  52.60  0.00   0  0.00  0.00   0  0.00   0  0  1  1  98
   0   77  128.00  421  52.60  0.00   0  0.00  0.00   0  0.00   0  0  2  2  97
   0   77  128.00  422  52.72  0.00   0  0.00  0.00   0  0.00   0  0  2  1  98
   0   77  128.00  421  52.60  0.00   0  0.00  0.00   0  0.00   0  0  1  2  98
   0   77  128.00  422  52.72  0.00   0  0.00  0.00   0  0.00   0  0  1  1  98
   0  689  128.00  155  19.43  0.00   0  0.00  0.00   0  0.00   1  0  1  1  98
   0   77   16.00    8   0.12  0.00   0  0.00  0.00   0  0.00   0  0  0  0 100
   0   77    0.00    0   0.00  0.00   0  0.00  0.00   0  0.00   0  0  0  0 100

> Long seeks are the major time consumer in disk I/O (and multiple
> spindle parallelism is one of the things in RAID that helps minimise
> this penalty). The above dd test is not a good test of performance in
> that regard. What it will give you is a best-case performance, not an
> expected real-world performance (which is more valuable to know,
> right?).
>
> Cheers,
>
> Paul.
>
> PS: Maybe you'll get faster transfers if you do the dd from
> single-user mode, with no background system processes interfering with
> the disk. :-)

Yes, I agree. Thanks.
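For what it's worth, the staggered test is a little easier to repeat (and
to watch) wrapped in a small script with iostat sampling in the
background. This is only a sketch, using the same single-ATA-drive device
path as above:

#!/bin/sh
# Sketch: the same five staggered readers as above, with iostat sampling
# once a second in the background so the per-second throughput is easy to
# watch.  DEV is the single ATA drive from the post above - adjust it.
DEV=/dev/rad0s1a

iostat -w 1 -c 20 &          # 20 one-second samples, roughly the test length

for n in 1 2 3 4 5; do
    dd if=$DEV of=/dev/null bs=1m count=100 &
    sleep .3
done

wait                         # wait for the readers and for iostat to finish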