We have a pair of 3511s that are host to a couple of ZFS filesystems.
Over the weekend we had a power hit, and when we brought the server that
the 3511s are attached to back up, the ZFS filesystem was hosed. Are we
totally out of luck here? There's nothing here that we can't recover,
given enough time, but I'd really rather not have to do this.
The machine is a v40z, the 3511s are attached via FC, and uname -a says:
SunOS search 5.10 Generic_118855-33 i86pc i386 i86pc
zpool list says:
NAME      SIZE    USED   AVAIL    CAP  HEALTH   ALTROOT
files        -       -       -      -  FAULTED  -
zpool status -v says:
pool: files
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-CS
scrub: none requested
config:
        NAME                                       STATE     READ WRITE CKSUM
        files                                      FAULTED      0     0     6  corrupted data
          raidz1                                   ONLINE       0     0     6
            c0t600C0FF0000000000923490E9DA84700d0  ONLINE       0     0     0
            c0t600C0FF0000000000923494F39349400d0  ONLINE       0     0     0
            c0t600C0FF000000000092349138D7A3C00d0  ONLINE       0     0     0
            c0t600C0FF0000000000923495AF4B94F00d0  ONLINE       0     0     0
            c0t600C0FF00000000009234972FF459200d0  ONLINE       0     0     0
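(For anyone who wants to run the same checks against their own pool,
this is all the above is; 'files' is just our pool name:)

# uname -a
# zpool list
# zpool status -v files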
Steve, desperate to get his filesystem back
What does 'fmdump -eV' show? You might also want to try the following
and run 'zpool status' in the background:

# dtrace -n 'zfs_ereport_post:entry{stack()}'

This will provide additional information if the source of the ereports
isn't obvious.
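Something like this is what I have in mind, just as a sketch (run the
dtrace in one window, then the zpool status from another window while
the dtrace is still running; 'files' is your pool name):

# dtrace -n 'zfs_ereport_post:entry{stack()}'

# zpool status -v files

# fmdump -eV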
- Eric
On Mon, Oct 29, 2007 at 03:44:14PM -0400, Stephen Green wrote:
> We have a pair of 3511s that are host to a couple of ZFS filesystems.
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
Eric Schrock wrote:
> What does 'fmdump -eV' show?

See fmdump.out attachment.

> You might also want to try the following and run 'zpool status' in the
> background:
>
> # dtrace -n 'zfs_ereport_post:entry{stack()}'

See dtrace.out attachment. Thanks, Eric.

-------------- next part --------------
Name: fmdump.out
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20071029/e0a593f0/attachment.ksh>
-------------- next part --------------
Name: dtrace.out
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20071029/e0a593f0/attachment-0001.ksh>
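In case it matters, those two files were captured more or less like this
(just a sketch of the capture; the redirect and the -o flag are only
there to get the output into files I could attach):

# fmdump -eV > fmdump.out
# dtrace -n 'zfs_ereport_post:entry{stack()}' -o dtrace.out

(with 'zpool status -v' run from another window while the dtrace was
still going, and the dtrace then stopped with Ctrl-C)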