Evaldas Auryla
2011-May-19 07:55 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in "zpool status" output:

        NAME                       STATE     READ WRITE CKSUM
        cuve                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c9t5000C50025D5AF66d0  ONLINE       0     0     0
            c9t5000C50025E5A85Ad0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c9t5000C50025D591BEd0  ONLINE       0     0     0
            c9t5000C50025E1BD56d0  ONLINE       0     0     0
        ...

Is there an easy way to map these sas-addresses to the physical disks in the enclosure?

Thanks,
Hung-ShengTsao (Lao Tsao) Ph.D.
2011-May-19 13:04 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
What is the output of:

    echo | format

On 5/19/2011 3:55 AM, Evaldas Auryla wrote:
> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
> single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible
> with sas-addresses such as this in "zpool status" output:
> ...
> Is there an easy way to map these sas-addresses to the physical disks
> in enclosure ?
Chris Ridd
2011-May-19 13:32 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
On 19 May 2011, at 08:55, Evaldas Auryla wrote:
> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path,
> MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses
> such as this in "zpool status" output:
> ...
> Is there an easy way to map these sas-addresses to the physical disks
> in enclosure ?

Does /usr/lib/scsi/sestopo or /usr/lib/fm/fmd/fmtopo help? I can't recall how you work out the arg to pass to sestopo :-(

Chris
Evaldas Auryla
2011-May-19 13:49 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
The same format as in zpool status:

       3. c9t5000C50025D5A266d0 <SEAGATE-ST91000640SS-AS02-931.51GB>
          /pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5a266,0
       4. c9t5000C50025D5AF66d0 <SEAGATE-ST91000640SS-AS02-931.51GB>
          /pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5af66,0

On 05/19/11 03:04 PM, Hung-ShengTsao (Lao Tsao) Ph.D. wrote:
> what is output
> echo |format
>
> On 5/19/2011 3:55 AM, Evaldas Auryla wrote:
>> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
>> single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible
>> with sas-addresses such as this in "zpool status" output:
>> ...
>> Is there an easy way to map these sas-addresses to the physical disks
>> in enclosure ?
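As the device paths show, the target component of each ctd name (e.g. 5000C50025D5AF66) reappears, lowercased, as the w<address> part of that disk's device path. A quick sketch for pulling those addresses out of format-style output for cross-referencing (the here-document stands in for `echo | format`; in practice you would pipe the real output into the same sed):

```shell
# Extract the w<address> target from each device-path line of `format` output.
# The sample input is the two entries quoted above.
cat <<'EOF' | sed -n 's/.*disk@w\([0-9a-f]*\),0.*/\1/p'
   3. c9t5000C50025D5A266d0 <SEAGATE-ST91000640SS-AS02-931.51GB>
      /pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5a266,0
   4. c9t5000C50025D5AF66d0 <SEAGATE-ST91000640SS-AS02-931.51GB>
      /pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5af66,0
EOF
```

This prints one lowercase address per disk (5000c50025d5a266, 5000c50025d5af66), which can then be compared against the ctd names in zpool status.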
Edward Ned Harvey
2011-May-19 13:52 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org]
> On Behalf Of Evaldas Auryla
>
> Is there an easy way to map these sas-addresses to the physical disks in
> enclosure ?

Of course, in the ideal world, when a disk needs to be pulled, the hardware would know about it, and the hardware would blink the light red. But that doesn't always happen (that's half the point of ZFS: detecting problems that the hardware didn't detect).

Whenever I've had to do this, I would do something like this: First, offline the failed disk, so its lights will stay off. Then, if you'd really like to be sure, do something like this:

    while true ; do
        dd if=/dev/rdsk/c0t0d0 of=/dev/null bs=1024k count=8192
        sleep 1
    done

I chose count=8192 because I figure that will keep the light on for about a second. And then it sleeps for a second. So you get a nice steady one-second blink, and that should help identify the right drive.

If you have drives that don't have individual lights on them... well, you're basically boned. You would want to export the pool, disconnect one disk at random, try to import, and see which disk is missing. Etc.
Chris Ridd
2011-May-19 13:52 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
On 19 May 2011, at 14:44, Evaldas Auryla wrote:
> Hi Chris, there is no sestopo on this box (Solaris Express 11 151a), fmtopo -dV
> works nice, although it's a bit "overkill" with manually parsing the output :)

You need to install pkg:/system/io/tests.

Chris
Eric D. Mudama
2011-May-19 15:20 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
On Thu, May 19 at 9:55, Evaldas Auryla wrote:
> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
> single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are
> visible with sas-addresses such as this in "zpool status" output:
> ...
> Is there an easy way to map these sas-addresses to the physical disks
> in enclosure ?

You should be able to match the '5000C50025D5AF66' with the WWID printed on the label of the disk. It's likely not visible while the drive is installed, however; if you had a maintenance window, you could pull the disks, write the WWIDs down, and just keep the paper handy.

That, or use the trusty 'dd' to read from it and find the solid light.

--eric

--
Eric D. Mudama
edmudama at bounceswoosh.org
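To get the bare WWNs to compare against the drive labels, the c<N>t<WWN>d<N> names from zpool status can be stripped down in plain shell. A minimal sketch (wwn_of is a hypothetical helper name; the two example names are from the pool output quoted above):

```shell
# Given a ctd device name from zpool status, print the WWN that should
# match the WWID printed on the drive's label.  Pure shell, no external tools.
wwn_of() {
    d=${1#c*t}                     # drop the leading "c<N>t"
    printf '%s\n' "${d%d[0-9]*}"   # drop the trailing "d<N>"
}

wwn_of c9t5000C50025D5AF66d0
wwn_of c9t5000C50025E5A85Ad0
```

The two calls print 5000C50025D5AF66 and 5000C50025E5A85A, the identifiers to look for on the labels.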
Hung-ShengTsao (Lao Tsao) Ph.D.
2011-May-19 15:27 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
IIRC there are tools from LSI, like the MegaRaid SW, that may display some of this info. Not sure whether you can use Common Array Manager.

On 5/19/2011 11:20 AM, Eric D. Mudama wrote:
> On Thu, May 19 at 9:55, Evaldas Auryla wrote:
>> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
>> single path, MPxIO disabled, via LSI SAS9200-8e HBA. ...
>> Is there an easy way to map these sas-addresses to the physical disks
>> in enclosure ?
>
> You should be able to match the '5000C50025D5AF66' with the WWID
> printed on the label of the disk. ...
>
> That, or use the trusty 'dd' to read from it and find the solid light.
>
> --eric
Richard Elling
2011-May-19 17:57 UTC
[zfs-discuss] Mapping sas address to physical disk in enclosure
On May 19, 2011, at 12:55 AM, Evaldas Auryla wrote:
> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path,
> MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses
> such as this in "zpool status" output:
>
> NAME                       STATE     READ WRITE CKSUM
> cuve                       ONLINE       0     0     0
>   mirror-0                 ONLINE       0     0     0
>     c9t5000C50025D5AF66d0  ONLINE       0     0     0

This really isn't the SAS address, but it is similar.

>     c9t5000C50025E5A85Ad0  ONLINE       0     0     0
>   mirror-1                 ONLINE       0     0     0
>     c9t5000C50025D591BEd0  ONLINE       0     0     0
>     c9t5000C50025E1BD56d0  ONLINE       0     0     0
> ...
>
> Is there an easy way to map these sas-addresses to the physical disks in enclosure ?

No. But there are several ways to map them. The easiest is to get the disk serial number and cross-reference it to the fmtopo output. I've got a script that does this for NexentaStor. Perhaps someone can port it to Solaris (NexentaStor has a utility, hddisco, that provides disk info, such as the serial number, that is easily parseable in a script :-)

fmtopo can talk to many enclosures (run with root privileges) and can get the mappings. A typical line looks like:

    hc://:product-id=QUANTA-Storage-JB7:server-id=:chassis-id=50016360001d0008:serial=WD-WMAY00547508:part=ATA-WDC-WD2003FYYS-0:revision=1D01/ses-enclosure=2/bay=21/disk=0

If you have more than one enclosure, each enclosure is enumerated, but the cross-reference needs to be done elsewhere (hint: use chassis-id).

In these cases, there is an assumption that the "bay=21" logical representation correlates to the silk-screened labels on the enclosure. This assumption can trip you up.
 -- richard
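A sketch of the serial-to-bay cross-reference described above, using the sample fmtopo line as input (an illustration only: in practice you would feed it real fmtopo disk lines, run with root privileges, and the substr offsets are tied to the exact key names shown in that sample):

```shell
# Split an fmtopo hc:// disk line on ':' and '/' and pick out the
# serial=, ses-enclosure=, and bay= fields, so the serial number can be
# cross-referenced against the drive's label.
echo 'hc://:product-id=QUANTA-Storage-JB7:server-id=:chassis-id=50016360001d0008:serial=WD-WMAY00547508:part=ATA-WDC-WD2003FYYS-0:revision=1D01/ses-enclosure=2/bay=21/disk=0' |
awk -F'[:/]' '{
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^serial=/)        serial = substr($i, 8)   # skip "serial="
        if ($i ~ /^ses-enclosure=/) encl   = substr($i, 15)  # skip "ses-enclosure="
        if ($i ~ /^bay=/)           bay    = substr($i, 5)   # skip "bay="
    }
    print serial, "-> enclosure", encl, "bay", bay
}'
```

For the sample line this prints "WD-WMAY00547508 -> enclosure 2 bay 21"; running the same awk over every disk line of the fmtopo output would yield the full serial-to-bay table.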