After a crash, some datasets in my zpool tree report this when I do ls -la:

  brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts

The same thing happens if I set

  zfs set mountpoint=legacy dataset

and then mount the dataset at another location.

Before the crash the directory tree was simply:

  dataset
   - vdisk.raw

The file was the backing device of a Xen VM, but I cannot access the directory structure of this dataset. I can send a snapshot of the dataset to another system, but the same behavior occurs there.

If I do

  zdb -dddd dataset

at the end of the output I can see the references to my file:

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         7    5    16K   128K   149G   256G   58.26  ZFS plain file
                                        264   bonus  ZFS znode
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 2097152
        path    /vdisk.raw
        uid     777
        gid     60001
        atime   Sun Oct 18 00:49:05 2009
        mtime   Thu Sep  9 16:22:14 2010
        ctime   Thu Sep  9 16:22:14 2010
        crtime  Sun Oct 18 00:49:05 2009
        gen     444453
        mode    100777
        size    274877906945
        parent  3
        links   1
        pflags  40800000104
        xattr   0
        rdev    0x0000000000000000

If I investigate further:

  zdb -ddddd dataset 7

    Dataset store/nfs/ICLOS/prod/mail-cts [ZPL], ID 4525, cr_txg 91826, 149G,
    5 objects, rootbp DVA[0]=<0:6654f24000:200> DVA[1]=<1:1a1e3c3600:200>
    [L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P
    birth=182119L/182119P fill=5
    cksum=177e7dd4cd:81ae6d143ee:1782c972431a0:2f927ca7a1de2c

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         7    5    16K   128K   149G   256G   58.26  ZFS plain file
                                        264   bonus  ZFS znode
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 2097152
        path    /vdisk.raw
        uid     777
        gid     60001
        atime   Sun Oct 18 00:49:05 2009
        mtime   Thu Sep  9 16:22:14 2010
        ctime   Thu Sep  9 16:22:14 2010
        crtime  Sun Oct 18 00:49:05 2009
        gen     444453
        mode    100777
        size    274877906945
        parent  3
        links   1
        pflags  40800000104
        xattr   0
        rdev    0x0000000000000000
    Indirect blocks:
                 0 L4  1:6543e22800:400 4000L/400P F=1221767 B=177453/177453
                 0 L3  1:65022f8a00:2000 4000L/2000P F=1221766 B=177453/177453
                 0 L2  1:65325a0400:1c00 4000L/1c00P F=16229 B=177453/177453
                 0 L1  1:6530718400:1600 4000L/1600P F=128 B=177453/177453
                 0 L0  0:433c473a00:20000 20000L/20000P F=1 B=177453/177453
             20000 L0  1:205c471600:20000 20000L/20000P F=1 B=91830/91830
             40000 L0  0:3c418ac600:20000 20000L/20000P F=1 B=91830/91830
             60000 L0  0:3c418cc600:20000 20000L/20000P F=1 B=91830/91830
             80000 L0  0:3c418ec600:20000 20000L/20000P F=1 B=91830/91830
             a0000 L0  0:3c4190c600:20000 20000L/20000P F=1 B=91830/91830
             c0000 L0  0:3c4192c600:20000 20000L/20000P F=1 B=91830/91830
             e0000 L0  0:3c4194c600:20000 20000L/20000P F=1 B=91830/91830
            100000 L0  0:3c4198c600:20000 20000L/20000P F=1 B=91830/91830
            120000 L0  0:3c4196c600:20000 20000L/20000P F=1 B=91830/91830
            140000 L0  1:205c491600:20000 20000L/20000P F=1 B=91830/91830
            160000 L0  1:205c4b1600:20000 20000L/20000P F=1 B=91830/91830
            180000 L0  1:205c4d1600:20000 20000L/20000P F=1 B=91830/91830
            1a0000 L0  1:205c4f1600:20000 20000L/20000P F=1 B=91830/91830
            1c0000 L0  1:205c511600:20000 20000L/20000P F=1 B=91830/91830
            1e0000 L0  1:205c531600:20000 20000L/20000P F=1 B=91830/91830
            200000 L0  1:205c551600:20000 20000L/20000P F=1 B=91830/91830
            220000 L0  1:205c571600:20000 20000L/20000P F=1 B=91830/91830
            240000 L0  0:3c419ac600:20000 20000L/20000P F=1 B=91830/91830
            260000 L0  0:3c419cc600:20000 20000L/20000P F=1 B=91830/91830
            280000 L0  0:3c419ec600:20000 20000L/20000P F=1 B=91830/91830
            2a0000 L0  0:3c41a0c600:20000 20000L/20000P F=1 B=91830/91830

            .................. many more lines till 149G

It seems all data blocks are there.

Any ideas on how to recover from this situation?

Valerio Piancastelli
piancastelli at iclos.com
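[Editor's aside, not part of the original post.] Each entry in the "Indirect blocks" listing above names a DVA in `vdev:offset:asize` hex notation, which, as I understand zdb, is the same notation its `-R` option accepts for dumping a raw block from the pool. A minimal sketch of parsing those lines programmatically, assuming only the output format shown above (the helper name and field choices are mine):

```python
import re

# Parse one line of zdb's "Indirect blocks" output, format as shown above:
# <file offset> L<level> <vdev>:<offset>:<asize> <lsize>L/<psize>P F=<fill> B=<birth>
LINE = re.compile(
    r"^\s*([0-9a-f]+)\s+L(\d)\s+(\d+):([0-9a-f]+):([0-9a-f]+)\s+"
    r"([0-9a-f]+)L/([0-9a-f]+)P"
)

def parse_indirect(line):
    m = LINE.match(line)
    if not m:
        return None
    off, lvl, vdev, dva_off, asize, lsize, psize = m.groups()
    return {
        "file_offset": int(off, 16),
        "level": int(lvl),
        "dva": f"{vdev}:{dva_off}:{asize}",  # candidate argument for: zdb -R <pool> <dva>
        "lsize": int(lsize, 16),
        "psize": int(psize, 16),
    }

blk = parse_indirect("20000 L0 1:205c471600:20000 20000L/20000P F=1 B=91830/91830")
assert blk["file_offset"] == 0x20000 and blk["level"] == 0
assert blk["dva"] == "1:205c471600:20000"
assert blk["lsize"] == blk["psize"] == 0x20000  # an uncompressed 128K block
```

Walking the listing this way would let one enumerate every L0 block of object 7 and, in principle, reassemble vdisk.raw block by block if no filesystem-level fix works.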
What OpenSolaris build are you running?

victor

On 17.09.10 13:53, Valerio Piancastelli wrote:
> After a crash, some datasets in my zpool tree report this when I do ls -la:
>
>   brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts
>
> [...]
>
> It seems all data blocks are there.
>
> Any ideas on how to recover from this situation?
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Victor Latushkin          phone:  x11467 / +74959370467
TSC-Kernel EMEA           mobile: +78957693012
Sun Services, Moscow      blog:   http://blogs.sun.com/vlatushkin
Sun Microsystems
With uname -a:

  SunOS disk-01 5.11 snv_111b i86pc i386 i86pc Solaris

It is OpenSolaris 2009.06.

Other useful info:

  # zfs list sas/mail-cts
  NAME           USED  AVAIL  REFER  MOUNTPOINT
  sas/mail-cts   149G   250G   149G  /sas/mail-cts

and with df:

  Filesystem     1K-blocks       Used  Available  Use%  Mounted on
  sas/mail-cts   418174037  156501827  261672210   38%  /sas/mail-cts

Do you need any other info?

Valerio Piancastelli
piancastelli at iclos.com

----- Original Message -----
From: "Victor Latushkin" <Victor.Latushkin at Sun.COM>
To: "Valerio Piancastelli" <piancastelli at iclos.com>
Cc: zfs-discuss at opensolaris.org
Sent: Friday, 17 September 2010 16:46:31
Subject: Re: [zfs-discuss] ZFS Dataset lost structure

What OpenSolaris build are you running?

victor

On 17.09.10 13:53, Valerio Piancastelli wrote:
> [...]
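[Editor's aside, not part of the original post.] The df numbers above do agree with `zfs list` once the 1K-block counters are converted to GiB; a quick check on the figures quoted in this message:

```python
# df reports 1K (KiB) blocks; zfs list reported 149G used / 250G available.
used_kib = 156501827
avail_kib = 261672210

used_gib = used_kib / 2**20    # 1 GiB = 2**20 KiB
avail_gib = avail_kib / 2**20

assert round(used_gib) == 149
assert round(avail_gib) == 250
```

So the space accounting on the pool side is self-consistent; the problem is confined to the dataset's visible namespace.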
I have another dataset similar to the one I cannot access. If I do:

  zdb -dddd dataset_good 7

-------------------------------------------------------------------------
Dataset store/nfs/ICLOS/prod/mail-ginjus [ZPL], ID 2119, cr_txg 5736, 9.14G,
5 objects, rootbp DVA[0]=<0:276a891800:200> DVA[1]=<1:5414087200:200>
[L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P
birth=244803L/244803P fill=5
cksum=168e2eca3c:78cf0b7dd4c:1600292b5b33d:2d964a4b60c0f6

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         7    5    16K   128K  9.14G   256G    3.57  ZFS plain file
                                        264   bonus  ZFS znode
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 2097152
        path    /vdisk.raw
        uid     777
        gid     60001
        atime   Sun Oct 18 00:49:05 2009
        mtime   Sat Sep 18 16:43:31 2010
        ctime   Sat Sep 18 16:43:31 2010
        crtime  Sun Oct 18 00:49:05 2009
        gen     444453
        mode    100777
        size    274877906945
        parent  3
        links   1
        pflags  40800000104
        xattr   0
        rdev    0x0000000000000000
-------------------------------------------------------------------------

and the other one:

  zdb -dddd dataset_bad 7

-------------------------------------------------------------------------
Dataset store/nfs/ICLOS/prod/mail-cts [ZPL], ID 4525, cr_txg 91826, 149G,
5 objects, rootbp DVA[0]=<1:26043b3600:200> DVA[1]=<0:e119e2800:200>
[L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P
birth=235450L/235450P fill=5
cksum=11a2fa25cc:5ec70954c62:110f36e1324c4:22ca901812d046

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         7    5    16K   128K   149G   256G   58.26  ZFS plain file
                                        264   bonus  ZFS znode
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 2097152
        path    /vdisk.raw
        uid     777
        gid     60001
        atime   Sun Oct 18 00:49:05 2009
        mtime   Thu Sep  9 16:22:14 2010
        ctime   Thu Sep  9 16:22:14 2010
        crtime  Sun Oct 18 00:49:05 2009
        gen     444453
        mode    100777
        size    274877906945
        parent  3
        links   1
        pflags  40800000104
        xattr   0
        rdev    0x0000000000000000
-------------------------------------------------------------------------

Both are clones of the same dataset. Everything seems OK, but when I do zfs mount dataset_bad the system seems to recognize a block device:

  root at disk-01:/mail# /usr/bin/ls -v cts
  brwxrwxrwx+  2 root  root  0, 0 Oct 18  2009 cts
       0:owner@:read_data/write_data/append_data/read_xattr/write_xattr/execute
           /read_attributes/write_attributes/read_acl/write_acl/write_owner
           /synchronize:allow
       1:group@:read_data/write_data/append_data/read_xattr/execute
           /read_attributes/read_acl/synchronize:allow
       2:everyone@:read_data/write_data/append_data/read_xattr/execute
           /read_attributes/read_acl/synchronize:allow

Valerio Piancastelli
piancastelli at iclos.com

----- Original Message -----
From: "Valerio Piancastelli" <piancastelli at iclos.com>
To: "Victor Latushkin" <Victor.Latushkin at Sun.COM>
Sent: Friday, 17 September 2010 17:55:02
Subject: RE: [zfs-discuss] ZFS Dataset lost structure

> [...]

----- Original Message -----
From: "Victor Latushkin" <Victor.Latushkin at Sun.COM>
To: "Valerio Piancastelli" <piancastelli at iclos.com>
Cc: zfs-discuss at opensolaris.org
Sent: Friday, 17 September 2010 16:46:31
Subject: Re: [zfs-discuss] ZFS Dataset lost structure

What OpenSolaris build are you running?
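[Editor's aside, not part of the original thread.] One reading of the evidence above: the znode for object 7 (vdisk.raw) carries mode 100777, i.e. its type bits say plain file, while `ls` reports the dataset root as a block special device (the leading `b`). Python's stat module decodes the same POSIX type bits, so a small sketch of why `ls` stops showing a directory tree, assuming only the modes quoted above:

```python
import stat

# mode of object 7 (vdisk.raw) from the zdb dumps above, octal 100777:
file_mode = 0o100777
assert stat.S_ISREG(file_mode)       # S_IFREG (0100000): a plain file, intact

# what `ls -v cts` shows for the dataset root: 'b' means S_IFBLK (0060000);
# a healthy directory would carry S_IFDIR (0040000) instead
assert stat.S_ISBLK(0o060777)
assert stat.S_ISDIR(0o040777)
assert not stat.S_ISDIR(0o060777)    # so ls no longer treats the root as a directory
```

If that reading is right, the file data and its dnode are untouched and only the root directory's type bits were clobbered by the crash, which would fit the observation that even a snapshot sent to another system shows the same behavior.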