Sam Fourman Jr.
2010-Jul-06 17:02 UTC
[zfs-discuss] Help with Faulted Zpool Call for Help (Cross post)
Hello list,

I posted this a few days ago on the opensolaris-discuss@ list. I am posting here because there may be too much noise on the other lists.

I have been without this zfs set for a week now. My main concern at this point is whether it is even possible to recover this zpool.

How does the metadata work? What tool could I use to rebuild the corrupted parts, or even find out which parts are corrupted?

Most, but not all, of these disks are Hitachi retail 1TB disks.

I have a fileserver that runs FreeBSD 8.1 (zfs v14). After a power outage, I am unable to import my zpool named Network. The pool is made up of six 1TB disks configured in raidz, and there is ~1.9TB of actual data on it.

I have loaded OpenSolaris snv_134 on a separate boot disk, in hopes of recovering the zpool.

On OpenSolaris 134 I am not able to import the zpool; almost everything I try gives me: cannot import 'Network': I/O error

I have done quite a bit of searching, and I found that "zpool import -fFX Network" should work; however, after ~20 hours this hard locks OpenSolaris (although it does still answer a ping).

Here is a list of commands that I have run on OpenSolaris:

http://www.puffybsd.com/zfsv14.txt

If anyone could help me use zdb or mdb to recover my pool, I would very much appreciate it. I believe the metadata is corrupt on my zpool.

--
Sam Fourman Jr.
Fourman Networks
http://www.fourmannetworks.com
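As a rough, untested sketch of less aggressive steps than "import -fFX" (assuming the six disks show up under OpenSolaris as c7t0d0p0 through c7t5d0p0, as they do later in the thread; adjust the device names to match what format(1M) reports):

    # check all four labels on each whole-disk device
    for d in 0 1 2 3 4 5; do
            zdb -l /dev/dsk/c7t${d}d0p0
    done

    # dry run of the recovery import: with -F, -n only reports how far
    # the pool would be rolled back and does not write anything
    zpool import -f -F -n Network

    # if the dry run looks sane, attempt the actual rewind import
    zpool import -f -F Network

The dry run is the useful part here: it either reports the transaction group it would rewind to or fails with the same error, and in either case it does not modify the pool.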
Cindy Swearingen
2010-Jul-06 20:58 UTC
[zfs-discuss] Help with Faulted Zpool Call for Help (Cross post)
Hi Sam,

In general, FreeBSD uses different device naming conventions, and power failures seem to clobber disk labeling. The "I/O error" message also points to problems accessing these disks.

I'm not sure if this helps, but I see that the 6 disks from the zdb -e output are indicated as c7t0d0p0 --> c7t5d0p0, and the paths resemble /dev/dsk/c7t1d0p0, which makes sense.

When you look at the individual labels using zdb -l /dev/dsk/c7t1d0p0, the physical path for this disk looks identical, but the path=/dev/da4 and so on. I'm not familiar with the /dev/da* device naming. When I use zdb -l on my pool's disks, I see the same phys_path and the same path= for each disk.

I'm hoping someone else who has patched their disks back together after a power failure can comment.

Thanks,

Cindy

On 07/06/10 11:02, Sam Fourman Jr. wrote:
> Hello list,
>
> I posted this a few days ago on the opensolaris-discuss@ list.
> I am posting here because there may be too much noise on the other lists.
>
> I have been without this zfs set for a week now.
> My main concern at this point is whether it is even possible to recover this zpool.
>
> How does the metadata work? What tool could I use to rebuild the
> corrupted parts, or even find out which parts are corrupted?
>
> Most, but not all, of these disks are Hitachi retail 1TB disks.
>
> I have a fileserver that runs FreeBSD 8.1 (zfs v14).
> After a power outage, I am unable to import my zpool named Network.
> The pool is made up of six 1TB disks configured in raidz,
> and there is ~1.9TB of actual data on it.
>
> I have loaded OpenSolaris snv_134 on a separate boot disk,
> in hopes of recovering the zpool.
>
> On OpenSolaris 134 I am not able to import the zpool;
> almost everything I try gives me: cannot import 'Network': I/O error
>
> I have done quite a bit of searching, and I found that "zpool import -fFX
> Network" should work; however, after ~20 hours this hard locks OpenSolaris
> (although it does still answer a ping).
>
> Here is a list of commands that I have run on OpenSolaris:
>
> http://www.puffybsd.com/zfsv14.txt
>
> If anyone could help me use zdb or mdb to recover my pool,
> I would very much appreciate it.
>
> I believe the metadata is corrupt on my zpool.
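One way to do the label comparison described above, as a rough sketch (the c7t0d0p0 through c7t5d0p0 device names are assumed; adjust as needed), is to pull the guid and path lines out of each label side by side:

    for d in 0 1 2 3 4 5; do
            echo "== c7t${d}d0p0 =="
            zdb -l /dev/dsk/c7t${d}d0p0 | egrep 'guid|path'
    done

If the path= values still point at the FreeBSD-style /dev/da* names while the phys_path values match the local controller, that by itself should not block the import, as far as I understand it: the import code matches devices by their label guids, and path= is only a hint.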
Richard Elling
2010-Jul-06 23:27 UTC
[zfs-discuss] Help with Faulted Zpool Call for Help (Cross post)
On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:

> Hello list,
>
> I posted this a few days ago on the opensolaris-discuss@ list.
> I am posting here because there may be too much noise on the other lists.
>
> I have been without this zfs set for a week now.
> My main concern at this point is whether it is even possible to recover this zpool.
>
> How does the metadata work? What tool could I use to rebuild the
> corrupted parts, or even find out which parts are corrupted?
>
> Most, but not all, of these disks are Hitachi retail 1TB disks.
>
> I have a fileserver that runs FreeBSD 8.1 (zfs v14).
> After a power outage, I am unable to import my zpool named Network.
> The pool is made up of six 1TB disks configured in raidz,
> and there is ~1.9TB of actual data on it.
>
> I have loaded OpenSolaris snv_134 on a separate boot disk,
> in hopes of recovering the zpool.
>
> On OpenSolaris 134 I am not able to import the zpool;
> almost everything I try gives me: cannot import 'Network': I/O error
>
> I have done quite a bit of searching, and I found that "zpool import -fFX
> Network" should work; however, after ~20 hours this hard locks OpenSolaris
> (although it does still answer a ping).
>
> Here is a list of commands that I have run on OpenSolaris:
>
> http://www.puffybsd.com/zfsv14.txt

You ran "zdb -l /dev/dsk/c7t5d0s2", which is not the same as "zdb -l /dev/dsk/c7t5d0p0", because of the default partitioning. In Solaris, c*t*d*p* are fdisk partitions and c*t*d*s* are SMI or EFI slices. This is why labels 2 and 3 could not be found, and it can be part of the problem to start with.

Everything in /dev/dsk and /dev/rdsk is a symlink or directory, so you can fake them out with a temporary directory and clever use of the "zpool import -d" command. Examples are in the archives.
 -- richard

> If anyone could help me use zdb or mdb to recover my pool,
> I would very much appreciate it.
>
> I believe the metadata is corrupt on my zpool.
>
> --
>
> Sam Fourman Jr.
> Fourman Networks
> http://www.fourmannetworks.com
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Richard Elling
richard at nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
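A rough sketch of the temporary-directory trick mentioned above (the /tmp/netdev directory name is made up, and the c7t0d0p0 through c7t5d0p0 device names are assumed); the idea is to show the import code only the whole-disk p0 nodes, not the s* slices:

    # collect symlinks to just the six p0 devices
    mkdir /tmp/netdev
    for d in 0 1 2 3 4 5; do
            ln -s /dev/dsk/c7t${d}d0p0 /tmp/netdev/c7t${d}d0p0
    done

    # scan only that directory; this should list the Network pool
    # if the labels on the p0 devices are readable
    zpool import -d /tmp/netdev

    # then attempt the forced (and, if needed, rewinding) import
    # from the same directory
    zpool import -d /tmp/netdev -f -F Network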
Victor Latushkin
2010-Jul-08 17:23 UTC
[zfs-discuss] Help with Faulted Zpool Call for Help (Cross post)
On Jul 7, 2010, at 3:27 AM, Richard Elling wrote:

> On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:
>> [...]
>> Here is a list of commands that I have run on OpenSolaris:
>>
>> http://www.puffybsd.com/zfsv14.txt
>
> You ran "zdb -l /dev/dsk/c7t5d0s2", which is not the same as
> "zdb -l /dev/dsk/c7t5d0p0", because of the default partitioning.
> In Solaris, c*t*d*p* are fdisk partitions and c*t*d*s* are SMI or
> EFI slices. This is why labels 2 and 3 could not be found, and it
> can be part of the problem to start with.

This is unlikely, as the raidz vdev is reported as ONLINE, though you can use the attached script to verify this.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: raidz_open2.d
Type: application/octet-stream
Size: 1010 bytes
Desc: not available
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100708/93a4042d/attachment.obj>
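The attached raidz_open2.d is not reproduced in the archive, only linked above. For reference, a DTrace script of this kind is normally run in one terminal while the import is retried in another, so that its probes fire while the raidz vdev is being opened; something along these lines (a guess at the invocation, since the script's contents are not shown here):

    # in one shell: start the tracing script and leave it running
    dtrace -s raidz_open2.d

    # in another shell: retry the import so the vdev open path is exercised
    zpool import -f Network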