I have one storage server with 24 drives, spread across three controllers
and split into three RAIDz2 pools. Unfortunately, I have no idea which bay
holds which drive. Fortunately, this server is used for secondary storage,
so I can take it offline for a bit. My plan is to use zpool export to take
each pool offline and then dd to do a sustained read off each drive in turn
and watch the blinking lights to see which drive is which. In a nutshell:

zpool export uberdisk1
zpool export uberdisk2
zpool export uberdisk3
dd if=/dev/rdsk/c9t0d0 of=/dev/null
dd if=/dev/rdsk/c9t1d0 of=/dev/null
  [etc. 22 more times]
zpool import uberdisk1
zpool import uberdisk2
zpool import uberdisk3

Are there any glaring errors in my reasoning here? My thinking is I should
probably identify these disks before any problems develop, in case of
erratic read errors that are enough to make me replace the drive without
being enough to make the hardware ID it as bad.
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com
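A sketch of the whole pass as a loop, rather than 24 hand-typed dd lines. The controller names (c9/c10/c11, eight targets each) are an assumption for illustration; substitute the names from your own "format" listing. It defaults to a dry run that only prints the commands, so nothing is exported or read until you flip DRY_RUN.

```shell
#!/bin/sh
# Dry-run sketch: export the pools, do a sustained read off each drive
# in turn so its activity LED stays lit, then re-import. The controller
# and target numbers below are assumptions -- adjust to your hardware.
DRY_RUN=${DRY_RUN:-1}    # leave at 1 to only print the commands

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$*"        # show what would be run
    else
        eval "$*"        # actually run it
    fi
}

for pool in uberdisk1 uberdisk2 uberdisk3; do
    run "zpool export $pool"
done

DD_COUNT=0
for ctrl in c9 c10 c11; do
    for tgt in 0 1 2 3 4 5 6 7; do
        # bs/count bound each read to ~4 GB: long enough to watch the LED
        run "dd if=/dev/rdsk/${ctrl}t${tgt}d0 of=/dev/null bs=1024k count=4096"
        DD_COUNT=$((DD_COUNT + 1))
    done
done

for pool in uberdisk1 uberdisk2 uberdisk3; do
    run "zpool import $pool"
done
```

Run once with DRY_RUN=1 to sanity-check the 24 device names before letting it loose on the raw devices.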
On Mon, Apr 26, 2010 at 6:21 AM, Dave Pooser <dave.zfs at alfordmedia.com> wrote:
> I have one storage server with 24 drives, spread across three controllers
> and split into three RAIDz2 pools. Unfortunately, I have no idea which bay
> holds which drive. Fortunately, this server is used for secondary storage so
> I can take it offline for a bit. My plan is to use zpool export to take each
> pool offline and then dd to do a sustained read off each drive in turn and
> watch the blinking lights to see which drive is which. In a nutshell:
> zpool export uberdisk1
> zpool export uberdisk2
> zpool export uberdisk3
> dd if=/dev/rdsk/c9t0d0 of=/dev/null
> dd if=/dev/rdsk/c9t1d0 of=/dev/null
>  [etc. 22 more times]
> zpool import uberdisk1
> zpool import uberdisk2
> zpool import uberdisk3
>
> Are there any glaring errors in my reasoning here? My thinking is I should
> probably identify these disks before any problems develop, in case of
> erratic read errors that are enough to make me replace the drive without
> being enough to make the hardware ID it as bad.

There should be no need to take pools offline or anything like that. If
it's just secondary storage then normal usage should be low enough to
easily spot which drive you're hammering. (Personally, I'd use
format->analyze->read rather than dd.) And there ought to be a consistent
pattern rather than locations being random.

If you can see the serial numbers on the drives then cross-referencing
those with the serial numbers from the OS (eg from iostat -En) would be a
good idea.

(You are, I presume, using regular scrubs to catch latent errors.)

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
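A sketch of the serial-number cross-reference Peter suggests, pairing each device name with its serial from "iostat -En" output. The sample lines and serials below are fabricated for illustration; on the real box you would pipe live output through the same awk.

```shell
#!/bin/sh
# Extract device-name/serial pairs from "iostat -En"-style output.
# $iostat_sample stands in for live output (fabricated serials); on a
# real system:  iostat -En | awk '...'
iostat_sample='c9t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: SEAGATE  Product: ST31000340NS     Revision: SN06 Serial No: 9QJ3A1B2
Size: 1000.20GB <1000204886016 bytes>
c9t1d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: SEAGATE  Product: ST31000340NS     Revision: SN06 Serial No: 9QJ3C3D4
Size: 1000.20GB <1000204886016 bytes>'

serials=$(printf '%s\n' "$iostat_sample" | awk '
    /^c[0-9]/    { dev = $1 }          # remember the device header line
    /Serial No:/ { print dev, $NF }    # serial is the last field
')
printf '%s\n' "$serials"
```

Taped-on serial labels plus this listing give you the bay mapping without taking anything offline at all.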
luxadm(1M) has a led_blink subcommand you might find useful.
 -- richard

On Apr 25, 2010, at 10:21 PM, Dave Pooser wrote:
> I have one storage server with 24 drives, spread across three controllers
> and split into three RAIDz2 pools. Unfortunately, I have no idea which bay
> holds which drive. Fortunately, this server is used for secondary storage so
> I can take it offline for a bit. My plan is to use zpool export to take each
> pool offline and then dd to do a sustained read off each drive in turn and
> watch the blinking lights to see which drive is which. In a nutshell:
> zpool export uberdisk1
> zpool export uberdisk2
> zpool export uberdisk3
> dd if=/dev/rdsk/c9t0d0 of=/dev/null
> dd if=/dev/rdsk/c9t1d0 of=/dev/null
> [etc. 22 more times]
> zpool import uberdisk1
> zpool import uberdisk2
> zpool import uberdisk3
>
> Are there any glaring errors in my reasoning here? My thinking is I should
> probably identify these disks before any problems develop, in case of
> erratic read errors that are enough to make me replace the drive without
> being enough to make the hardware ID it as bad.
> --
> Dave Pooser, ACSA
> Manager of Information Services
> Alford Media  http://www.alfordmedia.com
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
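A dry-run sketch of walking the bays with luxadm instead of sustained reads. The device paths and the exact led_blink argument form are assumptions here; check them against luxadm(1M) on your system before use. This version only prints the commands it would run.

```shell
#!/bin/sh
# Print one "luxadm led_blink" command per drive so each bay LED can be
# blinked in turn. Controller/target names and the s2 device-path
# argument form are assumptions -- verify against luxadm(1M).
CMDS=$(
    for ctrl in c9 c10 c11; do
        for tgt in 0 1 2 3 4 5 6 7; do
            echo "luxadm led_blink /dev/rdsk/${ctrl}t${tgt}d0s2"
        done
    done
)
printf '%s\n' "$CMDS"
```

Once the printed paths look right, the lines can be executed one at a time while you note which bay lights up.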