Here is the output from: zdb -vvv smbpool/glusterfs 0x621b67
Dataset smbpool/glusterfs [ZPL], ID 270, cr_txg 1034346, 20.1T, 4139680 objects,
rootbp DVA[0]=<5:5e000021000:600> DVA[1]=<0:56000021000:600> [L0 DMU objset]
fletcher4 lzjb LE contiguous unique double size=400L/200P
birth=1887643L/1887643P fill=4139680
cksum=c3a5ac075:4be35f40b07:f3425110eaaa:217fb2e74152e6

    Object  lvl  iblk  dblk  dsize  lsize  %full  type
   6429543    1   16K   512     2K    512 100.00  ZFS directory
                                     264   bonus  ZFS znode
        dnode flags: USED_BYTES
        dnode maxblkid: 0
        path    ???<object#6429543>
        uid     1009
        gid     300
        atime   Fri Jul 22 11:02:33 2011
        mtime   Fri Jul 22 11:02:33 2011
        ctime   Fri Jul 22 11:02:33 2011
        crtime  Fri Jul 22 11:02:33 2011
        gen     1659401
        mode    41777
        size    5
        parent  6429542
        links   0
        xattr   0
        rdev    0x0000000000000000
Still hoping someone could point me in the right direction. Right now I am
running a recursive find to locate files created on July 22nd by that user, but
I suspect the files no longer exist and that is why ZFS is confused (the object
above shows links 0 and zdb cannot resolve a path for it, which seems to point
the same way).
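If anyone else ends up doing the same kind of search: the ZFS object number is
also what shows up as the file's inode number, so it may be quicker to look the
object up by inode than by date. A rough sketch, assuming the dataset is
mounted at /smbpool/glusterfs and that the local find supports -inum (both of
those are assumptions on my part, adjust to taste):

    # 0x621b67 is 6429543 in decimal; look for a file or directory
    # with that inode number under the dataset's mountpoint
    find /smbpool/glusterfs -inum 6429543 -print

    # or bracket the July 22nd window with reference files and match the owner
    touch -t 201107220000 /tmp/ref.start
    touch -t 201107230000 /tmp/ref.end
    find /smbpool/glusterfs -user 1009 -newer /tmp/ref.start ! -newer /tmp/ref.end -print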
Any ideas, please?
Thanks,
Shain
----------------------------------------------------------------------------------------------------
From: Shain Miley
Sent: Wednesday, October 12, 2011 3:06 PM
To: zfs-discuss at opensolaris.org
Subject: Scrub error and object numbers
Hello all,
I am using OpenSolaris snv_101b, and after some recent issues with a faulty
RAID card I am unable to run a 'zpool scrub' to completion.
While running the scrub I receive the following:
errors: Permanent errors have been detected in the following files:
smbpool/glusterfs:<0x621b67>
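For context, that summary is the sort of thing the pool status report prints
once a scrub hits unrecoverable errors; roughly, with the pool name used here:

    # show pool health plus the list of files/objects with permanent errors
    zpool status -v smbpool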
I have found out that the number after the dataset name is the object number of
the file/directory in question; however, I have not been able to figure out
what I need to do next to get this cleared up.
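A minimal sketch of inspecting that object by number with zdb, assuming the
dataset name above (the shell printf just converts the hex error token to
decimal):

    # 0x621b67 in decimal is 6429543
    printf '%d\n' 0x621b67

    # dump the dnode/znode metadata for that object
    # (zdb also accepted the hex form directly in this case)
    zdb -vvv smbpool/glusterfs 0x621b67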
We currently have 25 TB of large files stored on this file server, so I am
REALLY looking to avoid having to do some sort of massive backup/restore in
order to clear this up.
Can anyone help shed some light on what I can/should do next?
Thanks in advance,
Shain