Hi all, I have a RAID-Z2 setup with 6x 500 GB SATA disks. I exported the
array to use under a different system, but during or after the export one
of the disks failed:

kja at localhost:~$ pfexec zpool import
  pool: chronicle
    id: 11592382930413748377
 state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-2Q
config:

        chronicle   DEGRADED
          raidz2    DEGRADED
            c9t2d0  UNAVAIL  cannot open
            c9t1d0  ONLINE
            c9t0d0  ONLINE
            c9t4d0  ONLINE
            c9t5d0  ONLINE
            c9t3d0  ONLINE

I have no success trying to reimport the pool:

kja at localhost:~$ pfexec zpool import -f chronicle
cannot import 'chronicle': one or more devices is currently unavailable

The disk has since been replaced, so now:

kja at localhost:~$ pfexec zpool import
  pool: chronicle
    id: 11592382930413748377
 state: DEGRADED
status: One or more devices contains corrupted data.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-4J
config:

        chronicle   DEGRADED
          raidz2    DEGRADED
            c9t2d0  FAULTED  corrupted data
            c9t1d0  ONLINE
            c9t0d0  ONLINE
            c9t4d0  ONLINE
            c9t5d0  ONLINE
            c9t3d0  ONLINE

but "pfexec zpool import -f chronicle" still fails with the same message.

I've Googled this several times for a fix but to no avail. Any assistance
is appreciated.
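A first diagnostic step in this situation is to check which of the member
devices still carry a readable ZFS label. The sketch below uses zdb -l for
that; the device names are the ones from the zpool output above, and the
/dev/dsk paths are illustrative for this particular Solaris layout:

```shell
# Sketch: check which pool members still have a readable ZFS label.
# Device names are taken from the zpool output above; zdb may need
# elevated privileges (pfexec) on some systems.
readable=0
missing=0
for d in c9t0d0 c9t1d0 c9t2d0 c9t3d0 c9t4d0 c9t5d0; do
    if zdb -l "/dev/dsk/${d}s0" >/dev/null 2>&1; then
        echo "${d}: label OK"
        readable=$((readable + 1))
    else
        echo "${d}: no readable label"
        missing=$((missing + 1))
    fi
done
echo "${readable} readable, ${missing} missing"
```

With four or more readable labels out of six, a raidz2 vdev should in
principle have enough redundancy to import.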
Casper.Dik at Sun.COM
2009-Sep-21 11:37 UTC
[zfs-discuss] Re-import RAID-Z2 with faulted disk
>The disk has since been replaced, so now:
>kja at localhost:~$ pfexec zpool import
>  pool: chronicle
> state: DEGRADED
>status: One or more devices contains corrupted data.
> [...]
>            c9t2d0  FAULTED  corrupted data
> [...]
>but "pfexec zpool import -f chronicle" still fails with the same message.

That sounds like a bug; if ZFS can recover then zpool import should
also work.

So what was wrong with the broken disk?  Not just badly plugged in?

Casper
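One workaround sometimes tried in this situation (a sketch only; the
directory path is illustrative, and whether it helps depends on the build)
is to point zpool import -d at a directory containing links to just the
five healthy disks, so the faulted c9t2d0 is never probed during the scan:

```shell
# Sketch: build a device directory holding only the five healthy members,
# then ask zpool to search just that directory. Paths are illustrative.
devdir=/tmp/chronicle-devs
mkdir -p "$devdir"
for d in c9t0d0 c9t1d0 c9t3d0 c9t4d0 c9t5d0; do
    ln -sf "/dev/dsk/${d}s0" "${devdir}/${d}s0"
done
# The import may still fail on affected builds; the echo keeps the
# sketch from aborting where zpool/pfexec are unavailable.
pfexec zpool import -d "$devdir" -f chronicle || echo "import still failing"
```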
On Mon, Sep 21, 2009 at 3:37 AM, <Casper.Dik at sun.com> wrote:

> That sounds like a bug; if ZFS can recover then zpool import should
> also work.
>
> So what was wrong with the broken disk?  Not just badly plugged in?
>
> Casper

The disk physically failed. When I powered the system on, it made
frighteningly loud clicking sounds and would not be acknowledged by any
system I was running, like it wasn't even there.

- Kyle
I'm running vanilla 2009.06 since its release. I'll definitely give it a
shot with the Live CD. Also, I tried importing with only the five good
disks physically attached and got the same message.

- Kyle

On Mon, Sep 21, 2009 at 3:50 AM, Chris Murray <chrismurray84 at googlemail.com> wrote:

> That really sounds like a scenario that ZFS would be able to cope with.
>
> What operating system are you using now? I've had good results 'fixing'
> pools with problems which were preventing import in the past by using
> the OpenSolaris 2009.06 Live CD. It would depend what build you're on
> now as to whether that will yield any results, of course ...
>
> Also, is the import any more successful if the new, empty disk just
> isn't present at all, and c9t2d0 is connected to nothing?
>
> Chris
>
> 2009/9/21 Kyle J. Aleshire <kjaleshire at gmail.com>
>
>> Hi all, I have a RAID-Z2 setup with 6x 500 GB SATA disks. [...]
>>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
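As a postscript for anyone finding this thread later: builds newer than
2009.06 added a recovery mode to zpool import (-F, with -n for a dry run)
that rewinds the pool to an earlier transaction group. This is hedged, as
it requires booting a Live CD of a build newer than the one discussed
above:

```shell
# Sketch: recovery-mode import on a build newer than 2009.06.
# -n does a dry run; -F discards the last few transactions if that
# makes the pool importable. Guarded so the sketch degrades gracefully
# where zpool is not present.
if command -v zpool >/dev/null 2>&1; then
    pfexec zpool import -nF chronicle && pfexec zpool import -F chronicle
    status="attempted recovery import"
else
    status="zpool not available on this system"
fi
echo "$status"
```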