P-O Yliniemi
2012-Mar-13 10:10 UTC
[zfs-discuss] Unable to import exported zpool on a new server
Hello,

I'm currently replacing a temporary storage server (server1) with the one that should be the final one (server2). To keep the data storage from the old one, I'm attempting to import it on the new server. Both servers are running OpenIndiana server build 151a.

Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:

# zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c4d1    ONLINE       0     0     0
            c5d0    ONLINE       0     0     0

errors: No known data errors

Output of the format command gives:

# format
AVAILABLE DISK SELECTIONS:
       0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
       1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
       2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
       3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
       4. c5d1 <ST3000DM- W1F07HZ-0001-2.73TB>
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0

(c5d1 was previously used as a hot spare, but I removed it in an attempt to export and import the zpool without the spare.)

# zpool export storage

# zpool list
(shows only rpool)

# zpool import
  pool: storage
    id: 17210091810759984780
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        storage     ONLINE
          raidz1-0  ONLINE
            c4d0    ONLINE
            c4d1    ONLINE
            c5d0    ONLINE

(This was a check to see that it is still importable on the old server; it has also been verified, since I moved the disks back to the old server yesterday to have it available during the night.)

zdb -l output in attached files.

-------------------------------------------------------

Server 2 (new)
I have attached the disks on the new server in the same order (which shouldn't matter, as ZFS should locate the disks anyway).
zpool import gives:

root@backup:~# zpool import
  pool: storage
    id: 17210091810759984780
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        storage                    UNAVAIL  insufficient replicas
          raidz1-0                 UNAVAIL  corrupted data
            c7t5000C50044E0F316d0  ONLINE
            c7t5000C50044A30193d0  ONLINE
            c7t5000C50044760F6Ed0  ONLINE

The problem is that all the disks are there and online, but the pool is showing up as unavailable.

Any ideas on what more I can do to solve this problem?
Regards, PeO -------------- next part -------------- # zdb -l c4d0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 2 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at 
AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------- next part -------------- # zdb -l c4d1s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: 
type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 2 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------- next part -------------- # zdb -l c5d0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: 
''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 2 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 name: ''storage'' state: 0 txg: 2450439 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at 
AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------- next part -------------- (spare) # zdb -l c5d1s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 state: 3 guid: 4572611453727307581 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 state: 3 guid: 4572611453727307581 -------------------------------------------- LABEL 2 -------------------------------------------- version: 28 state: 3 guid: 4572611453727307581 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 state: 3 guid: 4572611453727307581 -------------- next part -------------- root at backup:~# zdb -l /dev/dsk/c7t5000C50044A30193d0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 
-------------------------------------------- LABEL 2 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 9273576080530492359 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------- next part -------------- root at backup:~# zdb -l /dev/dsk/c7t5000C50044E0F316d0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at 
AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 2 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 14478395923793210190 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: 
''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------- next part -------------- root at backup:~# zdb -l /dev/dsk/c7t5000C50044760F6Ed0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 1 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 2 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 
0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 -------------------------------------------- LABEL 3 -------------------------------------------- version: 28 name: ''storage'' state: 1 txg: 2462286 pool_guid: 17210091810759984780 hostid: 13183520 hostname: ''backup'' top_guid: 11913540592052933027 guid: 6205751126661365015 vdev_children: 1 vdev_tree: type: ''raidz'' id: 0 guid: 11913540592052933027 nparity: 1 metaslab_array: 31 metaslab_shift: 36 ashift: 9 asize: 9001731096576 is_log: 0 create_txg: 4 children[0]: type: ''disk'' id: 0 guid: 14478395923793210190 path: ''/dev/dsk/c4d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F07HW4/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4 children[1]: type: ''disk'' id: 1 guid: 9273576080530492359 path: ''/dev/dsk/c4d1s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F05H2Y/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 0/cmdk at 1,0:a'' whole_disk: 1 create_txg: 4 children[2]: type: ''disk'' id: 2 guid: 6205751126661365015 path: ''/dev/dsk/c5d0s0'' devid: ''id1,cmdk at AST3000DM001-9YN166=____________W1F032RJ/a'' phys_path: ''/pci at 0,0/pci-ide at 1f,2/ide at 1/cmdk at 0,0:a'' whole_disk: 1 create_txg: 4
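For anyone hitting the same UNAVAIL / "insufficient replicas" symptom, a minimal set of checks that is commonly suggested, sketched here with the device names from the Server 2 listing above. This is not from the original post, and the read-only import assumes a build new enough to support that option (pool version 28 on OpenIndiana 151a should be):

  # scan an explicit device directory rather than relying on the default search
  zpool import -d /dev/dsk

  # confirm all four labels are readable on each disk as the new host names it
  zdb -l /dev/dsk/c7t5000C50044E0F316d0s0
  zdb -l /dev/dsk/c7t5000C50044A30193d0s0
  zdb -l /dev/dsk/c7t5000C50044760F6Ed0s0

  # if the pool shows up, try a read-only import first, by name or numeric id
  zpool import -o readonly=on storage
  zpool import -o readonly=on 17210091810759984780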
Hung-Sheng Tsao (LaoTsao) Ph.D
2012-Mar-13 12:52 UTC
[zfs-discuss] Unable to import exported zpool on a new server
hi
are the disk/sas controllers the same on both servers?
-LT

Sent from my iPad

On Mar 13, 2012, at 6:10, P-O Yliniemi <peo@bsd-guide.net> wrote:

> Hello,
>
> I'm currently replacing a temporary storage server (server1) with the one that should be the final one (server2). To keep the data storage from the old one, I'm attempting to import it on the new server. Both servers are running OpenIndiana server build 151a.
>
> Server 1 (old)
> The zpool consists of three disks in a raidz1 configuration:
>
> # zpool status
>   pool: storage
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         storage     ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             c4d0    ONLINE       0     0     0
>             c4d1    ONLINE       0     0     0
>             c5d0    ONLINE       0     0     0
>
> errors: No known data errors
>
> Output of the format command gives:
>
> # format
> AVAILABLE DISK SELECTIONS:
>        0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
>           /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
>        1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>        2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>        3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
>           /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>        4. c5d1 <ST3000DM- W1F07HZ-0001-2.73TB>
>           /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
>
> (c5d1 was previously used as a hot spare, but I removed it in an attempt to export and import the zpool without the spare.)
>
> # zpool export storage
>
> # zpool list
> (shows only rpool)
>
> # zpool import
>   pool: storage
>     id: 17210091810759984780
>  state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
>         storage     ONLINE
>           raidz1-0  ONLINE
>             c4d0    ONLINE
>             c4d1    ONLINE
>             c5d0    ONLINE
>
> (This was a check to see that it is still importable on the old server; it has also been verified, since I moved the disks back to the old server yesterday to have it available during the night.)
>
> zdb -l output in attached files.
>
> -------------------------------------------------------
>
> Server 2 (new)
> I have attached the disks on the new server in the same order (which shouldn't matter, as ZFS should locate the disks anyway).
> zpool import gives:
>
> root@backup:~# zpool import
>   pool: storage
>     id: 17210091810759984780
>  state: UNAVAIL
> action: The pool cannot be imported due to damaged devices or data.
> config:
>
>         storage                    UNAVAIL  insufficient replicas
>           raidz1-0                 UNAVAIL  corrupted data
>             c7t5000C50044E0F316d0  ONLINE
>             c7t5000C50044A30193d0  ONLINE
>             c7t5000C50044760F6Ed0  ONLINE
>
> The problem is that all the disks are there and online, but the pool is showing up as unavailable.
>
> Any ideas on what more I can do to solve this problem?
>
> Regards,
> PeO
>
> <zdb_l_c4d0s0.txt>
> <zdb_l_c4d1s0.txt>
> <zdb_l_c5d0s0.txt>
> <zdb_l_c5d1s0.txt>
> <zdb_l_c7t5000C50044A30193d0s0.txt>
> <zdb_l_c7t5000C50044E0F316d0s0.txt>
> <zdb_l_c7t5000C50044760F6Ed0s0.txt>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
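One quick way to answer that question on an OpenIndiana host, as a rough sketch (exact output varies by build and hardware):

  # show which driver each disk node is bound to
  # (cmdk/ata indicates the legacy IDE path; sd behind an HBA driver such as mpt_sas indicates a SAS/SATA HBA)
  prtconf -D

  # per-disk vendor, product and serial number, handy for matching the same physical drive on both servers
  iostat -En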
Jim Klimov
2012-Mar-13 14:24 UTC
[zfs-discuss] Unable to import exported zpool on a new server
2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
> hi
> are the disk/sas controllers the same on both servers?

Seemingly no. I don't see the output of "format" on Server2, but for Server1 I see that the 3TB disks are used as IDE devices (probably with motherboard SATA-IDE emulation?), while on Server2 the addressing looks like SAS with WWN names.

It may be possible that on one controller the disks are used "natively" while on the other they are attached as a JBOD or a set of RAID0 disks (so the controller's logic, or the layout it expects, intervenes), as recently discussed on-list?

> On Mar 13, 2012, at 6:10, P-O Yliniemi <peo@bsd-guide.net> wrote:
>
>> Hello,
>>
>> I'm currently replacing a temporary storage server (server1) with the one that should be the final one (server2). To keep the data storage from the old one, I'm attempting to import it on the new server. Both servers are running OpenIndiana server build 151a.
>>
>> Server 1 (old)
>> The zpool consists of three disks in a raidz1 configuration:
>> # zpool status
>>             c4d0    ONLINE       0     0     0
>>             c4d1    ONLINE       0     0     0
>>             c5d0    ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> Output of the format command gives:
>> # format
>> AVAILABLE DISK SELECTIONS:
>>        0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
>>           /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
>>        1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
>>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>>        2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
>>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>>        3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
>>           /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0

>> Server 2 (new)
>> I have attached the disks on the new server in the same order (which shouldn't matter, as ZFS should locate the disks anyway).
>> zpool import gives:
>>
>> root@backup:~# zpool import
>>   pool: storage
>>     id: 17210091810759984780
>>  state: UNAVAIL
>> action: The pool cannot be imported due to damaged devices or data.
>> config:
>>
>>         storage                    UNAVAIL  insufficient replicas
>>           raidz1-0                 UNAVAIL  corrupted data
>>             c7t5000C50044E0F316d0  ONLINE
>>             c7t5000C50044A30193d0  ONLINE
>>             c7t5000C50044760F6Ed0  ONLINE
>>
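One way to test that kind of hypothesis is to compare what each host believes the disk's partitioning and label contents to be; a controller that hides or remaps part of the disk usually shows up as a shifted or shrunken slice 0. A sketch using the device names quoted above (per the label guids in the attachments, c4d0 on Server1 appears to be the same physical disk that Server2 names c7t5000C50044E0F316d0):

  # partition/EFI label as Server1 sees one of the pool disks
  prtvtoc /dev/rdsk/c4d0s0

  # the same disk as Server2 sees it behind the SAS HBA
  prtvtoc /dev/rdsk/c7t5000C50044E0F316d0s0

  # and the ZFS label itself, to compare the txg and asize recorded on-disk
  zdb -l /dev/dsk/c7t5000C50044E0F316d0s0 | grep -E 'txg|asize'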
P-O Yliniemi
2012-Mar-13 19:32 UTC
[zfs-discuss] Unable to import exported zpool on a new server
Jim Klimov skrev 2012-03-13 15:24:
> 2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
>> hi
>> are the disk/sas controllers the same on both servers?
>
> Seemingly no. I don't see the output of "format" on Server2,
> but for Server1 I see that the 3TB disks are used as IDE
> devices (probably with motherboard SATA-IDE emulation?),
> while on Server2 the addressing looks like SAS with WWN names.
>

Correct, the servers are entirely different.
Server1 is an HP xw8400, and the disks are connected to the first four SATA ports (the xw8400 has both SAS and SATA ports, of which I use the SAS ports for the system disks).
On Server2, the disk controller used for the data disks is an LSI SAS 9211-8i, updated with the latest IT-mode firmware (also tested with the original IR-mode firmware).

The output of the 'format' command on Server2 is:

AVAILABLE DISK SELECTIONS:
       0. c2t0d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
          /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
       1. c2t1d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
          /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
       2. c3d1 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
       3. c4d0 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
       4. c7t5000C5003F45CCF4d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
          /scsi_vhci/disk@g5000c5003f45ccf4
       5. c7t5000C50044E0F0C6d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
          /scsi_vhci/disk@g5000c50044e0f0c6
       6. c7t5000C50044E0F611d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
          /scsi_vhci/disk@g5000c50044e0f611

Note that this is what it looks like now, not at the time I sent the question. The difference is that I have set up three other disks (items 4-6) on the new server and am currently transferring the contents from Server1 to this one using zfs send/receive.

I will probably be able to reconnect the original disks to Server2 tomorrow, when the data has been transferred to the new disks (problem 'solved' at that moment), if there is anything else I can do to try to solve it the 'right' way.

> It may be possible that on one controller the disks are used
> "natively" while on the other they are attached as a JBOD
> or a set of RAID0 disks (so the controller's logic, or the
> layout it expects, intervenes), as recently discussed on-list?
>

On the HP, on a reboot, I was reminded that the 3TB disks were displayed as 800GB-something by the BIOS (although correctly identified by OpenIndiana and ZFS). This could be a part of the problem with the ability to export/import the pool.

>> On Mar 13, 2012, at 6:10, P-O Yliniemi <peo@bsd-guide.net> wrote:
>>
>>> Hello,
>>>
>>> I'm currently replacing a temporary storage server (server1) with the one that should be the final one (server2). To keep the data storage from the old one, I'm attempting to import it on the new server. Both servers are running OpenIndiana server build 151a.
>>>
>>> Server 1 (old)
>>> The zpool consists of three disks in a raidz1 configuration:
>>> # zpool status
>>>             c4d0    ONLINE       0     0     0
>>>             c4d1    ONLINE       0     0     0
>>>             c5d0    ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>>
>>> Output of the format command gives:
>>> # format
>>> AVAILABLE DISK SELECTIONS:
>>>        0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
>>>           /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
>>>        1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
>>>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>>>        2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
>>>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>>>        3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
>>>           /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0

>>> Server 2 (new)
>>> I have attached the disks on the new server in the same order (which shouldn't matter, as ZFS should locate the disks anyway).
>>> zpool import gives:
>>>
>>> root@backup:~# zpool import
>>>   pool: storage
>>>     id: 17210091810759984780
>>>  state: UNAVAIL
>>> action: The pool cannot be imported due to damaged devices or data.
>>> config:
>>>
>>>         storage                    UNAVAIL  insufficient replicas
>>>           raidz1-0                 UNAVAIL  corrupted data
>>>             c7t5000C50044E0F316d0  ONLINE
>>>             c7t5000C50044A30193d0  ONLINE
>>>             c7t5000C50044760F6Ed0  ONLINE
>>>
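For reference, a migration of the kind described above can be sketched roughly as follows. The target pool name 'tank' and the ssh transport are assumptions here; the thread does not give the new pool's name or the exact commands used (the hostname 'backup' comes from the prompts shown above):

  # on Server1: take a recursive snapshot of the whole pool
  zfs snapshot -r storage@migrate-1

  # send the replication stream to the new pool on Server2
  zfs send -R storage@migrate-1 | ssh backup zfs receive -dF tank

  # a later incremental pass picks up anything written since the first snapshot
  zfs snapshot -r storage@migrate-2
  zfs send -R -i storage@migrate-1 storage@migrate-2 | ssh backup zfs receive -dF tank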
Hung-sheng Tsao
2012-Mar-13 20:37 UTC
[zfs-discuss] Unable to import exported zpool on a new server
IMHO
ZFS is smart, but not that smart when you deal with two different controllers.

Sent from my iPhone

On Mar 13, 2012, at 3:32 PM, P-O Yliniemi <peo@bsd-guide.net> wrote:

> Jim Klimov skrev 2012-03-13 15:24:
>> 2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
>>> hi
>>> are the disk/sas controllers the same on both servers?
>>
>> Seemingly no. I don't see the output of "format" on Server2,
>> but for Server1 I see that the 3TB disks are used as IDE
>> devices (probably with motherboard SATA-IDE emulation?),
>> while on Server2 the addressing looks like SAS with WWN names.
>>
> Correct, the servers are entirely different.
> Server1 is an HP xw8400, and the disks are connected to the first four SATA ports (the xw8400 has both SAS and SATA ports, of which I use the SAS ports for the system disks).
> On Server2, the disk controller used for the data disks is an LSI SAS 9211-8i, updated with the latest IT-mode firmware (also tested with the original IR-mode firmware).
>
> The output of the 'format' command on Server2 is:
>
> AVAILABLE DISK SELECTIONS:
>        0. c2t0d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
>           /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
>        1. c2t1d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
>           /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
>        2. c3d1 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>        3. c4d0 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
>           /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
>        4. c7t5000C5003F45CCF4d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
>           /scsi_vhci/disk@g5000c5003f45ccf4
>        5. c7t5000C50044E0F0C6d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
>           /scsi_vhci/disk@g5000c50044e0f0c6
>        6. c7t5000C50044E0F611d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
>           /scsi_vhci/disk@g5000c50044e0f611
>
> Note that this is what it looks like now, not at the time I sent the question. The difference is that I have set up three other disks (items 4-6) on the new server and am currently transferring the contents from Server1 to this one using zfs send/receive.
>
> I will probably be able to reconnect the original disks to Server2 tomorrow, when the data has been transferred to the new disks (problem 'solved' at that moment), if there is anything else I can do to try to solve it the 'right' way.
>
>> It may be possible that on one controller the disks are used
>> "natively" while on the other they are attached as a JBOD
>> or a set of RAID0 disks (so the controller's logic, or the
>> layout it expects, intervenes), as recently discussed on-list?
>>
> On the HP, on a reboot, I was reminded that the 3TB disks were displayed as 800GB-something by the BIOS (although correctly identified by OpenIndiana and ZFS). This could be a part of the problem with the ability to export/import the pool.
>
>>> On Mar 13, 2012, at 6:10, P-O Yliniemi <peo@bsd-guide.net> wrote:
>>>
>>>> Hello,
>>>>
>>>> I'm currently replacing a temporary storage server (server1) with the one that should be the final one (server2). To keep the data storage from the old one, I'm attempting to import it on the new server. Both servers are running OpenIndiana server build 151a.
>>>>
>>>> Server 1 (old)
>>>> The zpool consists of three disks in a raidz1 configuration:
>>>> # zpool status
>>>>             c4d0    ONLINE       0     0     0
>>>>             c4d1    ONLINE       0     0     0
>>>>             c5d0    ONLINE       0     0     0
>>>>
>>>> errors: No known data errors
>>>>
>>>> Output of the format command gives:
>>>> # format
>>>> AVAILABLE DISK SELECTIONS:
>>>>        0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
>>>>           /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
>>>>        1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
>>>>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>>>>        2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
>>>>           /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>>>>        3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
>>>>           /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>>
>>>> Server 2 (new)
>>>> I have attached the disks on the new server in the same order (which shouldn't matter, as ZFS should locate the disks anyway).
>>>> zpool import gives:
>>>>
>>>> root@backup:~# zpool import
>>>>   pool: storage
>>>>     id: 17210091810759984780
>>>>  state: UNAVAIL
>>>> action: The pool cannot be imported due to damaged devices or data.
>>>> config:
>>>>
>>>>         storage                    UNAVAIL  insufficient replicas
>>>>           raidz1-0                 UNAVAIL  corrupted data
>>>>             c7t5000C50044E0F316d0  ONLINE
>>>>             c7t5000C50044A30193d0  ONLINE
>>>>             c7t5000C50044760F6Ed0  ONLINE
>>>>