David W. Smith
2011-Jun-22 19:49 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
I was recently running Solaris 10 U9 and I decided that I would like to go to
Solaris 11 Express, so I exported my zpool, hoping that I would just do an
import once I had the new system installed with Solaris 11. Now when I try to
do an import I'm getting the following:

# /home/dws# zpool import
  pool: tank
    id: 13155614069147461689
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

        tank          FAULTED  corrupted data
        logs
          mirror-6    ONLINE
            c9t57d0   ONLINE
            c9t58d0   ONLINE
          mirror-7    ONLINE
            c9t59d0   ONLINE
            c9t60d0   ONLINE

Is there something else I can do to see what is wrong? The original attempt,
specifying the pool by name, resulted in:

# /home/dws# zpool import tank
cannot import 'tank': I/O error
        Destroy and re-create the pool from
        a backup source.

I verified that I have all 60 of my LUNs. The controller numbers have changed,
but I don't believe that should matter.

Any suggestions about getting additional information about what is happening
would be greatly appreciated.

Thanks,

David
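A faulted import like this can usually be probed a little further with stock
tools before doing anything drastic. A minimal sketch, assuming the LUNs are
visible under /dev/dsk (the device name below is just the first log device
from the output above, used as an example):

# Scan an explicit device directory, in case stale controller numbers
# are being picked up from somewhere else:
zpool import -d /dev/dsk

# Check the FMA error log for I/O errors recorded during the import attempt:
fmdump -eV | head -50

# Dump the ZFS labels from one of the devices to confirm they are readable:
zdb -l /dev/dsk/c9t57d0s0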
David Smith
2011-Jun-23 01:21 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
An update: I had mirrored my boot drive when I installed Solaris 10 U9
originally, so I went ahead and rebooted the system to this disk instead of
my Solaris 11 install. After getting the system up, I imported the zpool, and
everything worked normally.

So I guess there is some sort of incompatibility between Solaris 10 and
Solaris 11. I would have thought that Solaris 11 could import an older pool
version. Any other insight on importing pools between these two versions of
Solaris would be helpful.

Thanks,

David
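Pool version on its own should not be the obstacle: the labels quoted later
in this thread show version=22, the Solaris 10 U9 default, and newer ZFS
releases can import pools created under older versions. A quick sketch to
confirm on each host ('tank' must be imported for the second command):

# On Solaris 11 Express: list every pool version this release supports
zpool upgrade -v

# On the Solaris 10 U9 boot environment, with the pool imported:
zpool get version tank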
Daniel Carosone
2011-Jun-23 01:32 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
> # /home/dws# zpool import
>   pool: tank
>     id: 13155614069147461689
>  state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
>    see: http://www.sun.com/msg/ZFS-8000-72
> config:
>
>         tank          FAULTED  corrupted data
>         logs
>           mirror-6    ONLINE
>             c9t57d0   ONLINE
>             c9t58d0   ONLINE
>           mirror-7    ONLINE
>             c9t59d0   ONLINE
>             c9t60d0   ONLINE
>
> Is there something else I can do to see what is wrong?

Can you tell us more about the setup, in particular the drivers and
hardware on the path? There may be labelling, block size, offset, or
even bad drivers or other issues getting in the way, preventing ZFS
from doing what should otherwise be expected to work. Was there
something else in the storage stack on the old OS, like a different
volume manager or some multipathing?

Can you show us the zfs labels with zdb -l /dev/foo ?

Does import -F get any further?

> Original attempt when specifying the name resulted in:
>
> # /home/dws# zpool import tank
> cannot import 'tank': I/O error

Some kind of underlying driver problem odour here.

--
Dan.
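With 60 LUNs, the label comparison Dan is asking for is easier with a loop.
A sketch, assuming the multipathed LUNs all show up under /dev/dsk with the
c8t60001FF... names that appear later in the thread:

# Summarize the key label fields for every DDN LUN; devices whose txg,
# name, or pool_guid disagree with the rest would point at stale or
# unreadable labels.
for d in /dev/dsk/c8t60001FF*d0s0; do
  printf '== %s ==\n' "$d"
  zdb -l "$d" | egrep 'name=|txg=|pool_guid=|top_guid=|state='
done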
David W. Smith
2011-Jun-23 02:28 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote:
> Can you tell us more about the setup, in particular the drivers and
> hardware on the path? There may be labelling, block size, offset, or
> even bad drivers or other issues getting in the way, preventing ZFS
> from doing what should otherwise be expected to work. Was there
> something else in the storage stack on the old OS, like a different
> volume manager or some multipathing?
>
> Can you show us the zfs labels with zdb -l /dev/foo ?
>
> Does import -F get any further?

The system is an x4440 with two dual-port Qlogic 8 Gbit FC cards connected to
a DDN 9900 storage unit. There are 60 LUNs configured from the storage unit;
we are using raidz1 across these LUNs in a 9+1 configuration. Under Solaris
10 U9 multipathing is enabled.

For example, here is one of the devices:

# luxadm display /dev/rdsk/c8t60001FF010DC50AA2E000800001D1BF1d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t60001FF010DC50AA2E000800001D1BF1d0s2
  Vendor:               DDN
  Product ID:           S2A 9900
  Revision:             6.11
  Serial Num:           10DC50AA002E
  Unformatted capacity: 15261576.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
  Minimum prefetch:     0x0
  Maximum prefetch:     0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c8t60001FF010DC50AA2E000800001D1BF1d0s2
  /devices/scsi_vhci/disk@g60001ff010dc50aa2e000800001d1bf1:c,raw
   Controller                /dev/cfg/c5
    Device Address           24000001ff051232,2e
    Host controller port WWN 2101001b32bfe1d3
    Class                    secondary
    State                    ONLINE
   Controller                /dev/cfg/c7
    Device Address           28000001ff0510dc,2e
    Host controller port WWN 2101001b32bd4f8f
    Class                    primary
    State                    ONLINE

Here is the output of the zdb command:

# zdb -l /dev/dsk/c8t60001FF010DC50AA2E000800001D1BF1d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=22
    name='tank'
    state=0
    txg=402415
    pool_guid=13155614069147461689
    hostid=799263814
    hostname='Chaiten'
    top_guid=7879214599529115091
    guid=9439709931602673823
    vdev_children=8
    vdev_tree
        type='raidz'
        id=5
        guid=7879214599529115091
        nparity=1
        metaslab_array=35
        metaslab_shift=40
        ashift=12
        asize=160028491776000
        is_log=0
        create_txg=22
        children[0]
                type='disk'
                id=0
                guid=15738823520260019536
                path='/dev/dsk/c8t60001FF01232528037000800001D1BF1d0s0'
                devid='id1,sd@n60001ff01232528037000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff01232528037000800001d1bf1:a'
                whole_disk=1
                DTL=166
                create_txg=22
        children[1]
                type='disk'
                id=1
                guid=7241121769141495862
                path='/dev/dsk/c8t60001FF010DC50C536000800001D1BF1d0s0'
                devid='id1,sd@n60001ff010dc50c536000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff010dc50c536000800001d1bf1:a'
                whole_disk=1
                DTL=165
                create_txg=22
        children[2]
                type='disk'
                id=2
                guid=2777230007222012140
                path='/dev/dsk/c8t60001FF01232527935000800001D1BF1d0s0'
                devid='id1,sd@n60001ff01232527935000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff01232527935000800001d1bf1:a'
                whole_disk=1
                DTL=164
                create_txg=22
        children[3]
                type='disk'
                id=3
                guid=5525323314985659974
                path='/dev/dsk/c8t60001FF010DC50BE34000800001D1BF1d0s0'
                devid='id1,sd@n60001ff010dc50be34000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff010dc50be34000800001d1bf1:a'
                whole_disk=1
                DTL=163
                create_txg=22
        children[4]
                type='disk'
                id=4
                guid=9152185867089638590
                path='/dev/dsk/c8t60001FF01232527233000800001D1BF1d0s0'
                devid='id1,sd@n60001ff01232527233000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff01232527233000800001d1bf1:a'
                whole_disk=1
                DTL=162
                create_txg=22
        children[5]
                type='disk'
                id=5
                guid=15506884896416107740
                path='/dev/dsk/c8t60001FF010DC50B832000800001D1BF1d0s0'
                devid='id1,sd@n60001ff010dc50b832000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff010dc50b832000800001d1bf1:a'
                whole_disk=1
                DTL=161
                create_txg=22
        children[6]
                type='disk'
                id=6
                guid=13655161443342419149
                path='/dev/dsk/c8t60001FF01232526B31000800001D1BF1d0s0'
                devid='id1,sd@n60001ff01232526b31000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff01232526b31000800001d1bf1:a'
                whole_disk=1
                DTL=160
                create_txg=22
        children[7]
                type='disk'
                id=7
                guid=8658338305352581764
                path='/dev/dsk/c8t60001FF010DC50B030000800001D1BF1d0s0'
                devid='id1,sd@n60001ff010dc50b030000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff010dc50b030000800001d1bf1:a'
                whole_disk=1
                DTL=159
                create_txg=22
        children[8]
                type='disk'
                id=8
                guid=16757152450233754713
                path='/dev/dsk/c8t60001FF0123252632F000800001D1BF1d0s0'
                devid='id1,sd@n60001ff0123252632f000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff0123252632f000800001d1bf1:a'
                whole_disk=1
                DTL=158
                create_txg=22
        children[9]
                type='disk'
                id=9
                guid=9439709931602673823
                path='/dev/dsk/c8t60001FF010DC50AA2E000800001D1BF1d0s0'
                devid='id1,sd@n60001ff010dc50aa2e000800001d1bf1/a'
                phys_path='/scsi_vhci/disk@g60001ff010dc50aa2e000800001d1bf1:a'
                whole_disk=1
                DTL=157
                create_txg=22
    rewind_txg_ts=1308690257
    bad config type 7 for seconds_of_rewind
    verify_data_errors=0
--------------------------------------------
LABEL 1
--------------------------------------------
    [identical to LABEL 0]
--------------------------------------------
LABEL 2
--------------------------------------------
    [identical to LABEL 0]
--------------------------------------------
LABEL 3
--------------------------------------------
    [identical to LABEL 0]

When I tried out Solaris 11, I just exported the pool prior to the install of
Solaris 11. I was lucky in that I had mirrored the boot drive, so after I had
installed Solaris 11 I still had the other disk in the mirror with Solaris 10
still installed.

I didn't install any additional software in either environment with regard
to volume management, etc.

From the format command, I do remember seeing 60 LUNs coming from the DDN,
and as I recall I did see multiple paths as well under Solaris 11. I think
you are correct, however, in that for some reason Solaris 11 could not read
the devices.

Let me know if you need any further output.

David
--
-------------------------------------------------
David W. Smith
Lawrence Livermore National Laboratory
P.O. Box 808, L-556
Livermore, CA  94551-9900
EMail: smith107@llnl.gov
Phone: 925-422-9256
Fax:   925-423-8719
-------------------------------------------------
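Since multipathing is in play, it is worth confirming on the Solaris 11
Express side that mpxio assembled the same 60 logical units it did under
Solaris 10. A sketch using stock commands:

# Probe the FC fabric for visible devices
luxadm probe

# List the multipathed logical units and their operational path states
mpathadm list lu

# Count the DDN LUNs format can see (expecting 60)
echo | format | grep -c 'DDN-S2A 9900'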
Fajar A. Nugraha
2011-Jun-23 05:28 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith <smith107@llnl.gov> wrote:
> When I tried out Solaris 11, I just exported the pool prior to the install
> of Solaris 11. I was lucky in that I had mirrored the boot drive, so after
> I had installed Solaris 11 I still had the other disk in the mirror with
> Solaris 10 still installed.
>
> I didn't install any additional software in either environment with regard
> to volume management, etc.
>
> From the format command, I do remember seeing 60 LUNs coming from the DDN,
> and as I recall I did see multiple paths as well under Solaris 11. I think
> you are correct, however, in that for some reason Solaris 11 could not read
> the devices.

So you mean the root cause of the problem is that Solaris Express failed to
see the disks? Or are the disks available on Solaris Express as well?

When you boot with the Solaris Express Live CD, what does "zpool import" show?

--
Fajar
Smith, David W.
2011-Jun-23 15:49 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
On 6/22/11 10:28 PM, "Fajar A. Nugraha" <work@fajar.net> wrote:
> So you mean the root cause of the problem is that Solaris Express failed
> to see the disks? Or are the disks available on Solaris Express as well?
>
> When you boot with the Solaris Express Live CD, what does "zpool import"
> show?

Under Solaris 11 Express, the disks were seen with the format command, with
luxadm probe, etc. So I'm not sure why zpool import failed, or why, as I
assumed, it could not read the devices.

I have not tried the Solaris Express Live CD; I was booted off an installed
version.

David
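If the devices are all visible but the import still fails with an I/O error,
tracing the import can show which device access actually fails. A sketch with
stock Solaris tools (the output file name is arbitrary):

# Record file opens and failing syscalls made during the import attempt
truss -f -t open,read,ioctl -o /tmp/import.truss zpool import tank

# Failed syscalls are flagged with Err# in truss output
grep 'Err#' /tmp/import.truss | head -20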
Cindy Swearingen
2011-Jun-23 20:26 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
Hi David,

I see some inconsistencies between the mirrored pool tank info below
and the device info that you included.

1. The zpool status for tank shows some remnants of log devices (?), here:

   tank        FAULTED  corrupted data
   logs

Generally, the log devices are listed after the pool devices. Did this
pool have log devices at one time? Are they missing?

# zpool status datap
  pool: datap
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datap       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

I would like to see this output:

# zpool history tank

2. Can you include the zdb -l output for c9t57d0? The zdb -l device
output below is from a RAIDZ config, not a mirrored config, although the
pool GUIDs match, so I'm confused.

I don't think this has anything to do with moving from s10u9 to S11
express.

My sense is that if you have remnants of the same pool name on some of
your devices, but as different pools, then you will see device problems
like these.

Thanks,

Cindy

On 06/22/11 20:28, David W. Smith wrote:
> The system is an x4440 with two dual-port Qlogic 8 Gbit FC cards
> connected to a DDN 9900 storage unit. There are 60 LUNs configured from
> the storage unit; we are using raidz1 across these LUNs in a 9+1
> configuration. Under Solaris 10 U9 multipathing is enabled.
> [luxadm and zdb -l output snipped; see the previous message]
>
> When I tried out Solaris 11, I just exported the pool prior to the
> install of Solaris 11. I was lucky in that I had mirrored the boot
> drive, so after I had installed Solaris 11 I still had the other disk
> in the mirror with Solaris 10 still installed.
>
> I didn't install any additional software in either environment with
> regard to volume management, etc.
>
> From the format command, I do remember seeing 60 LUNs coming from the
> DDN, and as I recall I did see multiple paths as well under Solaris 11.
> I think you are correct, however, in that for some reason Solaris 11
> could not read the devices.
>
> Let me know if you need any further output.
>
> David
David W. Smith
2011-Jun-24 00:44 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
On Thu, Jun 23, 2011 at 01:26:38PM -0700, Cindy Swearingen wrote:> Hi David, > > I see some inconsistencies between the mirrored pool tank info below > and the device info that you included. > > 1. The zpool status for tank shows some remnants of log devices (?), > here: > > tank FAULTED corrupted data > logs > > Generally, the log devices are listed after the pool devices. > Did this pool have log devices at one time? Are they missing?Yes the pool does have logs. I''ll include a zpool status -v below from when I''m booted in solaris 10 U9.> > # zpool status datap > pool: datap > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > datap ONLINE 0 0 0 > mirror-0 ONLINE 0 0 0 > c1t1d0 ONLINE 0 0 0 > c1t2d0 ONLINE 0 0 0 > mirror-1 ONLINE 0 0 0 > c1t3d0 ONLINE 0 0 0 > c1t4d0 ONLINE 0 0 0 > logs > mirror-2 ONLINE 0 0 0 > c1t5d0 ONLINE 0 0 0 > c1t8d0 ONLINE 0 0 0 > > I would like to see this output: > > # zpool history tank > > 2. Can you include the zdb -l output for c9t57d0 because > the zdb -l device output below is from a RAIDZ config, not > a mirrored config, although the pool GUIDs match so I''m > confused.The zpool is a Raidz. The logs were mirrored however.> > I don''t think this has anything to do with moving from s10u9 to S11 > express. > > My sense is that if you have remnants of the same pool name on some of > your devices but as different pools, then you will see device problems > like these. > > Thanks, > > Cindy ># zpool history tank History for ''tank'': 2011-02-03.10:39:16 zpool create tank raidz c8t60001FF0123251B20F000800001D1BF1d0 c8t60001FF010DC50410E000800001D1BF1d0 c8t60001FF0123251A70D000800001D1BF1d0 c8t60001FF010DC503B0C000800001D1BF1d0 c8t60001FF01232519B0B000800001D1BF1d0 c8t60001FF010DC50350A000800001D1BF1d0 c8t60001FF01232518F09000800001D1BF1d0 c8t60001FF010DC502F08000800001D1BF1d0 c8t60001FF01232518307000800001D1BF1d0 c8t60001FF010DC502A06000800001D1BF1d0 2011-02-03.10:39:23 zpool add tank raidz c8t60001FF01232517805000800001D1BF1d0 c8t60001FF010DC502404000800001D1BF1d0 c8t60001FF01232516C03000800001D1BF1d0 c8t60001FF010DC501F02000800001D1BF1d0 c8t60001FF01232516301000800001D1BF1d0 c8t60001FF010DC50731E000800001D1BF1d0 c8t60001FF0123252051D000800001D1BF1d0 c8t60001FF010DC506D1C000800001D1BF1d0 c8t60001FF0123251F91B000800001D1BF1d0 c8t60001FF010DC50661A000800001D1BF1d0 2011-02-03.10:39:29 zpool add tank raidz c8t60001FF0123251EC19000800001D1BF1d0 c8t60001FF010DC506018000800001D1BF1d0 c8t60001FF0123251E017000800001D1BF1d0 c8t60001FF010DC505916000800001D1BF1d0 c8t60001FF0123251D415000800001D1BF1d0 c8t60001FF010DC505314000800001D1BF1d0 c8t60001FF0123251C913000800001D1BF1d0 c8t60001FF010DC504D12000800001D1BF1d0 c8t60001FF0123251BD11000800001D1BF1d0 c8t60001FF010DC504710000800001D1BF1d0 2011-02-03.10:39:36 zpool add tank raidz c8t60001FF01232525C2D000800001D1BF1d0 c8t60001FF010DC50A32C000800001D1BF1d0 c8t60001FF0123252552B000800001D1BF1d0 c8t60001FF010DC509C2A000800001D1BF1d0 c8t60001FF01232524D29000800001D1BF1d0 c8t60001FF010DC509628000800001D1BF1d0 c8t60001FF01232524627000800001D1BF1d0 c8t60001FF010DC508E26000800001D1BF1d0 c8t60001FF01232523E25000800001D1BF1d0 c8t60001FF010DC508724000800001D1BF1d0 2011-02-03.10:39:43 zpool add tank raidz c8t60001FF01232523623000800001D1BF1d0 c8t60001FF010DC508122000800001D1BF1d0 c8t60001FF01232522C21000800001D1BF1d0 c8t60001FF010DC507A20000800001D1BF1d0 c8t60001FF01232521F1F000800001D1BF1d0 c8t60001FF010DC50D93C000800001D1BF1d0 c8t60001FF01232528E3B000800001D1BF1d0 c8t60001FF010DC50D33A000800001D1BF1d0 
c8t60001FF01232528839000800001D1BF1d0 c8t60001FF010DC50CC38000800001D1BF1d0 2011-02-03.10:39:50 zpool add tank raidz c8t60001FF01232528037000800001D1BF1d0 c8t60001FF010DC50C536000800001D1BF1d0 c8t60001FF01232527935000800001D1BF1d0 c8t60001FF010DC50BE34000800001D1BF1d0 c8t60001FF01232527233000800001D1BF1d0 c8t60001FF010DC50B832000800001D1BF1d0 c8t60001FF01232526B31000800001D1BF1d0 c8t60001FF010DC50B030000800001D1BF1d0 c8t60001FF0123252632F000800001D1BF1d0 c8t60001FF010DC50AA2E000800001D1BF1d0 2011-02-03.10:49:06 zfs create tank/test1 2011-02-03.13:12:40 zfs create tank/other 2011-02-03.13:12:52 zfs create tank/other/testing-compression 2011-02-03.13:13:05 zfs create tank/other/testing-no-compression 2011-02-03.13:13:16 zfs create tank/other/iotesting 2011-02-04.11:17:07 zpool add tank cache c1t4d0 c1t5d0 c1t6d0 c1t7d0 2011-02-04.11:18:44 zpool add tank log mirror c3t57d0 c3t58d0 mirror c3t59d0 c3t60d0 2011-02-04.15:37:11 zfs set sharenfs=<stuff removed> 2011-02-04.15:41:57 zfs set sharenfs=<stuff removed> 2011-02-09.15:52:39 zfs set sharenfs=<stuff removed> 2011-02-11.09:29:24 zpool remove tank c1t4d0 c1t5d0 c1t6d0 c1t7d0 2011-02-11.09:31:54 zpool remove tank mirror-6 mirror-7 2011-02-14.10:15:49 zpool add tank cache c1t4d0 c1t5d0 c1t6d0 c1t7d0 2011-02-14.10:15:58 zpool add tank log mirror c3t57d0 c3t58d0 mirror c3t59d0 c3t60d0 2011-03-03.08:14:38 zfs create tank/g 2011-03-03.08:14:45 zfs create tank/g/g0 2011-03-03.08:19:13 zfs set mountpoint=/g/g0 tank/g/g0 2011-04-15.16:31:12 zpool scrub tank 2011-04-15.16:35:13 zpool clear tank 2011-04-15.16:35:33 zpool clear tank 2011-04-15.16:35:44 zpool scrub tank 2011-05-04.14:24:23 zpool remove tank mirror-6 mirror-7 2011-05-04.14:24:53 zpool add tank log mirror c3t57d0 c3t58d0 mirror c3t59d0 c3t60d0 2011-06-21.14:04:17 zpool export tank 2011-06-22.15:33:49 zpool import tank zpool status -v: ---------------- pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 c8t60001FF0123251B20F000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50410E000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123251A70D000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC503B0C000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232519B0B000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50350A000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232518F09000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC502F08000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232518307000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC502A06000800001D1BF1d0 ONLINE 0 0 0 raidz1-1 ONLINE 0 0 0 c8t60001FF01232517805000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC502404000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232516C03000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC501F02000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232516301000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50731E000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123252051D000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC506D1C000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123251F91B000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50661A000800001D1BF1d0 ONLINE 0 0 0 raidz1-2 ONLINE 0 0 0 c8t60001FF0123251EC19000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC506018000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123251E017000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC505916000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123251D415000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC505314000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123251C913000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC504D12000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123251BD11000800001D1BF1d0 ONLINE 0 0 0 
c8t60001FF010DC504710000800001D1BF1d0 ONLINE 0 0 0 raidz1-3 ONLINE 0 0 0 c8t60001FF01232525C2D000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50A32C000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123252552B000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC509C2A000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232524D29000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC509628000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232524627000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC508E26000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232523E25000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC508724000800001D1BF1d0 ONLINE 0 0 0 raidz1-4 ONLINE 0 0 0 c8t60001FF01232523623000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC508122000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232522C21000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC507A20000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232521F1F000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50D93C000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232528E3B000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50D33A000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232528839000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50CC38000800001D1BF1d0 ONLINE 0 0 0 raidz1-5 ONLINE 0 0 0 c8t60001FF01232528037000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50C536000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232527935000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50BE34000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232527233000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50B832000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF01232526B31000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50B030000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF0123252632F000800001D1BF1d0 ONLINE 0 0 0 c8t60001FF010DC50AA2E000800001D1BF1d0 ONLINE 0 0 0 logs mirror-6 ONLINE 0 0 0 c3t57d0 ONLINE 0 0 0 c3t58d0 ONLINE 0 0 0 mirror-7 ONLINE 0 0 0 c3t59d0 ONLINE 0 0 0 c3t60d0 ONLINE 0 0 0 cache c1t4d0 ONLINE 0 0 0 c1t5d0 ONLINE 0 0 0 c1t6d0 ONLINE 0 0 0 c1t7d0 ONLINE 0 0 0 Format output: -------------- # format < /dev/null Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c1t0d0 <DEFAULT cyl 17830 alt 2 hd 255 sec 63> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 0,0 1. c1t1d0 <DEFAULT cyl 17830 alt 2 hd 255 sec 63> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 1,0 2. c1t2d0 <DEFAULT cyl 17831 alt 2 hd 255 sec 63> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 2,0 3. c1t3d0 <DEFAULT cyl 17831 alt 2 hd 255 sec 63> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 3,0 4. c1t4d0 <Sun-STK RAID INT-V1.0-93.06GB> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 4,0 5. c1t5d0 <Sun-STK RAID INT-V1.0-93.06GB> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 5,0 6. c1t6d0 <Sun-STK RAID INT-V1.0-93.06GB> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 6,0 7. c1t7d0 <Sun-STK RAID INT-V1.0-93.06GB> /pci at 0,0/pci10de,375 at f/pci108e,286 at 0/disk at 7,0 8. c3t57d0 <ATA-STEC ZeusIOPS-0430-17.00GB> /pci at 7c,0/pci10de,376 at e/pci1000,3150 at 0/sd at 39,0 9. c3t58d0 <ATA-STEC ZeusIOPS-0430-17.00GB> /pci at 7c,0/pci10de,376 at e/pci1000,3150 at 0/sd at 3a,0 10. c3t59d0 <ATA-STEC ZeusIOPS-0430-17.00GB> /pci at 7c,0/pci10de,376 at e/pci1000,3150 at 0/sd at 3b,0 11. c3t60d0 <ATA-STEC ZeusIOPS-0430-17.00GB> /pci at 7c,0/pci10de,376 at e/pci1000,3150 at 0/sd at 3c,0 12. c8t60001FF010DC50A32C000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB> /scsi_vhci/disk at g60001ff010dc50a32c000800001d1bf1 13. c8t60001FF010DC50AA2E000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB> /scsi_vhci/disk at g60001ff010dc50aa2e000800001d1bf1 14. 
      14. c8t60001FF010DC50B030000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50b030000800001d1bf1
      15. c8t60001FF010DC50B832000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50b832000800001d1bf1
      16. c8t60001FF010DC50BE34000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50be34000800001d1bf1
      17. c8t60001FF010DC50C536000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50c536000800001d1bf1
      18. c8t60001FF010DC50CC38000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50cc38000800001d1bf1
      19. c8t60001FF010DC50D33A000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50d33a000800001d1bf1
      20. c8t60001FF010DC50D93C000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50d93c000800001d1bf1
      21. c8t60001FF010DC501F02000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc501f02000800001d1bf1
      22. c8t60001FF010DC502A06000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc502a06000800001d1bf1
      23. c8t60001FF010DC502F08000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc502f08000800001d1bf1
      24. c8t60001FF010DC503B0C000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc503b0c000800001d1bf1
      25. c8t60001FF010DC504D12000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc504d12000800001d1bf1
      26. c8t60001FF010DC506D1C000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc506d1c000800001d1bf1
      27. c8t60001FF010DC507A20000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc507a20000800001d1bf1
      28. c8t60001FF010DC508E26000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc508e26000800001d1bf1
      29. c8t60001FF010DC509C2A000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc509c2a000800001d1bf1
      30. c8t60001FF010DC50350A000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50350a000800001d1bf1
      31. c8t60001FF010DC50410E000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50410e000800001d1bf1
      32. c8t60001FF010DC50661A000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50661a000800001d1bf1
      33. c8t60001FF010DC50731E000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc50731e000800001d1bf1
      34. c8t60001FF010DC504710000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc504710000800001d1bf1
      35. c8t60001FF010DC508122000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc508122000800001d1bf1
      36. c8t60001FF010DC508724000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc508724000800001d1bf1
      37. c8t60001FF010DC509628000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc509628000800001d1bf1
      38. c8t60001FF010DC502404000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc502404000800001d1bf1
      39. c8t60001FF010DC506018000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc506018000800001d1bf1
      40. c8t60001FF010DC505314000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc505314000800001d1bf1
      41. c8t60001FF010DC505916000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff010dc505916000800001d1bf1
      42. c8t60001FF0123251A70D000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251a70d000800001d1bf1
      43. c8t60001FF0123251B20F000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251b20f000800001d1bf1
      44. c8t60001FF0123251BD11000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251bd11000800001d1bf1
      45. c8t60001FF0123251C913000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251c913000800001d1bf1
      46. c8t60001FF0123251D415000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251d415000800001d1bf1
      47. c8t60001FF0123251E017000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251e017000800001d1bf1
      48. c8t60001FF0123251EC19000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251ec19000800001d1bf1
      49. c8t60001FF0123251F91B000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123251f91b000800001d1bf1
      50. c8t60001FF01232516C03000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232516c03000800001d1bf1
      51. c8t60001FF01232518F09000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232518f09000800001d1bf1
      52. c8t60001FF01232519B0B000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232519b0b000800001d1bf1
      53. c8t60001FF01232521F1F000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232521f1f000800001d1bf1
      54. c8t60001FF01232522C21000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232522c21000800001d1bf1
      55. c8t60001FF01232523E25000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232523e25000800001d1bf1
      56. c8t60001FF01232524D29000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232524d29000800001d1bf1
      57. c8t60001FF01232525C2D000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232525c2d000800001d1bf1
      58. c8t60001FF01232526B31000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232526b31000800001d1bf1
      59. c8t60001FF01232528E3B000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232528e3b000800001d1bf1
      60. c8t60001FF0123252051D000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123252051d000800001d1bf1
      61. c8t60001FF0123252552B000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123252552b000800001d1bf1
      62. c8t60001FF0123252632F000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff0123252632f000800001d1bf1
      63. c8t60001FF01232527233000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232527233000800001d1bf1
      64. c8t60001FF01232524627000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232524627000800001d1bf1
      65. c8t60001FF01232527935000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232527935000800001d1bf1
      66. c8t60001FF01232523623000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232523623000800001d1bf1
      67. c8t60001FF01232528839000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232528839000800001d1bf1
      68. c8t60001FF01232517805000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232517805000800001d1bf1
      69. c8t60001FF01232518307000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232518307000800001d1bf1
      70. c8t60001FF01232516301000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232516301000800001d1bf1
      71. c8t60001FF01232528037000800001D1BF1d0 <DDN-S2A 9900-6.11-14.55TB>
          /scsi_vhci/disk@g60001ff01232528037000800001d1bf1
Specify disk (enter its number):

ZDB from one of the log devices:
--------------------------------

# zdb -l /dev/dsk/c3t59d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=22
    name='tank'
    state=0
    txg=402415
    pool_guid=13155614069147461689
    hostid=799263814
    hostname='Chaiten'
    top_guid=7929625263716612584
    guid=12265708552998034011
    vdev_children=8
    vdev_tree
        type='mirror'
        id=7
        guid=7929625263716612584
        metaslab_array=171
        metaslab_shift=27
        ashift=9
        asize=18240241664
        is_log=1
        create_txg=269718
        children[0]
                type='disk'
                id=0
                guid=12265708552998034011
                path='/dev/dsk/c3t59d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000D039A________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3b,0:a'
                whole_disk=1
                create_txg=269718
        children[1]
                type='disk'
                id=1
                guid=2456972971894251597
                path='/dev/dsk/c3t60d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000CFFC0________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3c,0:a'
                whole_disk=1
                create_txg=269718
    rewind_txg_ts=1308690257
    bad config type 7 for seconds_of_rewind
    verify_data_errors=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=22
    name='tank'
    state=0
    txg=402415
    pool_guid=13155614069147461689
    hostid=799263814
    hostname='Chaiten'
    top_guid=7929625263716612584
    guid=12265708552998034011
    vdev_children=8
    vdev_tree
        type='mirror'
        id=7
        guid=7929625263716612584
        metaslab_array=171
        metaslab_shift=27
        ashift=9
        asize=18240241664
        is_log=1
        create_txg=269718
        children[0]
                type='disk'
                id=0
                guid=12265708552998034011
                path='/dev/dsk/c3t59d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000D039A________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3b,0:a'
                whole_disk=1
                create_txg=269718
        children[1]
                type='disk'
                id=1
                guid=2456972971894251597
                path='/dev/dsk/c3t60d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000CFFC0________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3c,0:a'
                whole_disk=1
                create_txg=269718
    rewind_txg_ts=1308690257
    bad config type 7 for seconds_of_rewind
    verify_data_errors=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=22
    name='tank'
    state=0
    txg=402415
    pool_guid=13155614069147461689
    hostid=799263814
    hostname='Chaiten'
    top_guid=7929625263716612584
    guid=12265708552998034011
    vdev_children=8
    vdev_tree
        type='mirror'
        id=7
        guid=7929625263716612584
        metaslab_array=171
        metaslab_shift=27
        ashift=9
        asize=18240241664
        is_log=1
        create_txg=269718
        children[0]
                type='disk'
                id=0
                guid=12265708552998034011
                path='/dev/dsk/c3t59d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000D039A________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3b,0:a'
                whole_disk=1
                create_txg=269718
        children[1]
                type='disk'
                id=1
                guid=2456972971894251597
                path='/dev/dsk/c3t60d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000CFFC0________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3c,0:a'
                whole_disk=1
                create_txg=269718
    rewind_txg_ts=1308690257
    bad config type 7 for seconds_of_rewind
    verify_data_errors=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=22
    name='tank'
    state=0
    txg=402415
    pool_guid=13155614069147461689
    hostid=799263814
    hostname='Chaiten'
    top_guid=7929625263716612584
    guid=12265708552998034011
    vdev_children=8
    vdev_tree
        type='mirror'
        id=7
        guid=7929625263716612584
        metaslab_array=171
        metaslab_shift=27
        ashift=9
        asize=18240241664
        is_log=1
        create_txg=269718
        children[0]
                type='disk'
                id=0
                guid=12265708552998034011
                path='/dev/dsk/c3t59d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000D039A________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3b,0:a'
                whole_disk=1
                create_txg=269718
        children[1]
                type='disk'
                id=1
                guid=2456972971894251597
                path='/dev/dsk/c3t60d0s0'
                devid='id1,sd@TATA_____STEC_ZeusIOPS___018_GBytes______________STM0000CFFC0________/a'
                phys_path='/pci@7c,0/pci10de,376@e/pci1000,3150@0/sd@3c,0:a'
                whole_disk=1
                create_txg=269718
    rewind_txg_ts=1308690257
    bad config type 7 for seconds_of_rewind
    verify_data_errors=0

Please let me know if you need more info...

Thanks,

David W. Smith
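
A note for readers working through the same diagnosis: the labels above cover
only one log mirror, and the open question is whether the data LUNs' labels
are equally intact. A minimal, untested sketch of a label sweep follows; the
c8t60001FF*d0s0 glob is an assumption based on the format listing above and
should be adjusted to match your own device names.

  #!/bin/sh
  # Dump the identifying fields from each candidate LUN's label so that
  # pool-name, guid, or txg mismatches stand out at a glance.
  for d in /dev/dsk/c8t60001FF*d0s0
  do
          echo "== $d =="
          zdb -l "$d" | egrep "name=|state=|txg=|guid=" | sort -u
  done

If every LUN reports the same pool_guid and a consistent txg, the labels
themselves are readable, and the import failure points at device visibility
on the new OS rather than on-disk corruption.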
Fajar A. Nugraha
2011-Jun-24 03:00 UTC
[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express
On Fri, Jun 24, 2011 at 7:44 AM, David W. Smith <smith107 at llnl.gov> wrote:
>> Generally, the log devices are listed after the pool devices.
>> Did this pool have log devices at one time? Are they missing?
>
> Yes the pool does have logs. I'll include a zpool status -v below
> from when I'm booted in Solaris 10 U9.

I think what Cindy means is: does "zpool status" on Solaris Express
(when you were having the problem) list the pool devices as well? If
not, that would explain the FAULTED state: ZFS can't find the pool
devices, so we need to track down why Solaris Express can't see them
(probably a driver issue). If it can see the pool devices, then the
status of each device as reported by ZFS on Solaris Express would
give us something to work with.

>> My sense is that if you have remnants of the same pool name on some of
>> your devices but as different pools, then you will see device problems
>> like these.

I had a similar case (though my problem was on Linux). In my case the
"solution" was to rename /etc/zfs/zpool.cache, reboot the server, and
then re-import the pool.

> Please let me know if you need more info...

If you're still interested in using this pool under Solaris Express,
we'll need the output of format and zpool import while running
Solaris Express.

-- 
Fajar
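
For anyone who hits the same symptoms, the zpool.cache workaround described
above amounts to something like the following. Treat it as a hedged sketch
rather than a tested recipe: /etc/zfs/zpool.cache is the stock Solaris
location of the cache file, and the -F rewind import is a last resort that
discards the most recent transactions.

  # Move the cached pool config aside so ZFS rediscovers the devices by
  # scanning, instead of trusting stale device paths from the old OS.
  mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old
  reboot

  # After the reboot:
  zpool import              # scan only; 'tank' should appear if the LUNs are visible
  zpool import tank         # normal import
  zpool import -F tank      # last resort: rewind the pool to an earlier txg

Renaming the cache file is harmless in itself; ZFS rebuilds it on the next
successful import.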