Almost sent this to -current; I'm not used to 5.3 being stable yet...

I recently installed an old DPT RAID controller in a test machine (5.3-REL, SMP) and saw some odd I/O behavior. The controller is:

dpt0: <DPT Caching SCSI RAID Controller> port 0xef80-0xef9f irq 17 at device 16.0 on pci0
dpt0: DPT PM3334UW FW Rev. 07H1, 1 channel, 64 CCBs
dpt0: [GIANT-LOCKED]
da0 at dpt0 bus 0 target 0 lun 0
da0: <DPT RAID-5 07H1> Fixed Direct Access SCSI-2 device
da0: 17365MB (35564544 512 byte sectors: 255H 63S/T 2213C)

It's running RAID-5 and has an on-board cache (64MB worth of 72-pin SIMMs, heh). I think the drives are Seagate Barracuda Ultra Wide, but I'm not physically there to verify at the moment -- they're in a locked enclosure.

The odd behavior surfaced when I went to zero out the array using dd with a 1MB block size. According to both gstat and iostat, the array is busy for 5 seconds or so, then everything drops to 0 for about 2 seconds. "iostat -d -w 1" looks like this:

            da0
  KB/t tps  MB/s
128.00  14  1.73
128.00  20  2.48
128.00  20  2.48
128.00  19  2.35
128.00  19  2.35
  0.00   0  0.00
128.00   6  0.74
128.00  20  2.48
128.00  20  2.48
128.00  19  2.35
128.00  19  2.35
128.00   5  0.62
128.00   1  0.12

I know sequential writes aren't a very good way to measure performance, especially with RAID-5, but it just seemed a little... odd. Is this to be expected?

Craig
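
P.S. For reference, the zeroing command was something like the following (exact invocation not confirmed; the device node is assumed from the dmesg above, and bs=1m matches the 1MB block size I mentioned):

  # write zeros over the whole array device (da0 assumed)
  dd if=/dev/zero of=/dev/da0 bs=1m

Note the constant 128.00 KB/t: each 1MB dd block is apparently being split into eight 128KB transfers somewhere down the stack (presumably the MAXPHYS limit), which is why ~19-20 tps works out to the ~2.4 MB/s shown.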