Tommy McNeely
2009-Oct-22 21:30 UTC
[zfs-discuss] cannot import 'rpool': one or more devices is currently unavailable
I have a system whose rpool has gone defunct. The rpool is made of a
single "disk", which is a RAID 5EE volume built from all eight 146 GB
disks in the box. The RAID card is an Adaptec-brand card. It was running
nv_107, but it's currently net booted to nv_121. I have already checked
in the RAID card BIOS, and it says the volume is "optimal". We had a
power outage in BRM07 on Tuesday, and the system appeared to boot back
up, but then went wonky. I power cycled it, and it came back to a grub>
prompt because it couldn't read the filesystem.
# uname -a
SunOS 5.11 snv_121 i86pc i386 i86pc
# zpool import
pool: rpool
id: 7197437773913332097
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-EY
config:
        rpool        ONLINE
          c0t0d0s0   ONLINE
# zpool import -f 7197437773913332097
cannot import 'rpool': one or more devices is currently unavailable
#
# zpool import -a -f -R /a
cannot import 'rpool': one or more devices is currently unavailable
# zdb -l /dev/dsk/c0t0d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,sd@TSun_____STK_RAID_INT____EFD1DFE0/a'
        phys_path='/pci@0,0/pci8086,3607@4/pci108e,286@0/disk@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,sd@TSun_____STK_RAID_INT____EFD1DFE0/a'
        phys_path='/pci@0,0/pci8086,3607@4/pci108e,286@0/disk@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,sd@TSun_____STK_RAID_INT____EFD1DFE0/a'
        phys_path='/pci@0,0/pci8086,3607@4/pci108e,286@0/disk@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=14
    name='rpool'
    state=0
    txg=742622
    pool_guid=7197437773913332097
    hostid=4930069
    hostname=''
    top_guid=5620634672424557591
    guid=5620634672424557591
    vdev_tree
        type='disk'
        id=0
        guid=5620634672424557591
        path='/dev/dsk/c0t0d0s0'
        devid='id1,sd@TSun_____STK_RAID_INT____EFD1DFE0/a'
        phys_path='/pci@0,0/pci8086,3607@4/pci108e,286@0/disk@0,0:a'
        whole_disk=0
        metaslab_array=24
        metaslab_shift=33
        ashift=9
        asize=880083730432
        is_log=0
# zdb -cu -e -d /dev/dsk/c0t0d0s0
zdb: can't open /dev/dsk/c0t0d0s0: No such file or directory
# zdb -e rpool -cu
zdb: can't open rpool: No such device or address
# zdb -e 7197437773913332097
zdb: can't open 7197437773913332097: No such device or address
#
I obviously have no clue how to wield zdb.
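(I gather zdb wants its options before the pool name, and that -e makes
it look for a pool that is not currently imported, searching /dev/dsk by
default; -p points it at a different device directory. Something along
these lines is presumably closer to the intended usage, though it may
well fail here with the same underlying error the import gives:)

# zdb -e -cu rpool                 (options first, then the pool name)
# zdb -e -p /dev/dsk -cu rpool     (same, with an explicit device search path)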
Any help you can offer would be appreciated.
Thanks,
Tommy
Victor Latushkin
2009-Oct-23 21:16 UTC
[zfs-discuss] cannot import 'rpool': one or more devices is currently unavailable
Tommy McNeely wrote:
> I have a system whose rpool has gone defunct. The rpool is made of a
> single "disk", which is a RAID 5EE volume built from all eight 146 GB
> disks in the box. The RAID card is an Adaptec-brand card. It was running
> nv_107, but it's currently net booted to nv_121. I have already checked
> in the RAID card BIOS, and it says the volume is "optimal". We had a
> power outage in BRM07 on Tuesday, and the system appeared to boot back
> up, but then went wonky. I power cycled it, and it came back to a grub>
> prompt because it couldn't read the filesystem.

We've been able to recover the pool by rolling back a few uberblocks;
only a few log files had errors. The uberblock rollback project would
make recovery in a case like this quite easy.

Victor
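A note for anyone who finds this thread later: "rolling back uberblocks"
means discarding the newest uberblock(s) in each label so the pool opens
from a slightly older transaction group, at the cost of the last few
writes before the outage. If I recall correctly, builds from around
snv_128 onward exposed this as a supported recovery path via
zpool import -F; on such a build the attempt against this pool would look
roughly like the following (a sketch only, not something that was run here):

# zpool import -nfF rpool          (dry run: report the txg a rewind would return to)
# zpool import -fF -R /a rpool     (import for real, rewinding to a usable older txg)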