Inode #11 is in the system directory, and fsck cannot fix it automatically.
If the corruption is limited, there is a chance the inodes could be
recreated manually. But do look into restoring from backups first.
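To see what is left of the system directory before attempting anything, a
read-only inspection along these lines may help. Treat it as a sketch: the
device path is taken from your dmesg output (sde1), and the image file name
is only an example.

  # capture the filesystem metadata first, so nothing is lost while experimenting
  o2image /dev/sde1 /tmp/sde1.metadata

  # list the system directory ("//" in debugfs.ocfs2) and its inodes, read-only
  debugfs.ocfs2 -R "ls -l //" /dev/sde1

If debugfs.ocfs2 cannot even read the system directory, manual recreation is
unlikely to be practical and the backups become the only realistic option.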
On 02/01/2012 10:20 AM, Werner Flamme wrote:
>
> Hi,
>
> when I try to mount an OCFS2 volume, I get
>
> ---snip---
> [12212.195823] OCFS2: ERROR (device sde1): ocfs2_validate_inode_block:
> Invalid dinode #11: signature
> [12212.195825]
> [12212.195827] File system is now read-only due to the potential of
> on-disk corruption. Please run fsck.ocfs2 once the file system is
> unmounted.
> [12212.195832] (mount.ocfs2,9772,0):ocfs2_read_locked_inode:499 ERROR:
> status = -22
> [12212.195842] (mount.ocfs2,9772,0):_ocfs2_get_system_file_inode:158
> ERROR: status = -116
> [12212.195853]
> (mount.ocfs2,9772,0):ocfs2_init_global_system_inodes:475 ERROR: status
> = -22
> [12212.195860]
> (mount.ocfs2,9772,0):ocfs2_init_global_system_inodes:478 ERROR: Unable
> to load system inode 4, possibly corrupt fs?
> [12212.195862] (mount.ocfs2,9772,0):ocfs2_initialize_super:2379 ERROR:
> status = -22
> [12212.195864] (mount.ocfs2,9772,0):ocfs2_fill_super:1064 ERROR:
> status = -22
> [12212.195869] ocfs2: Unmounting device (8,65) on (node 0)
> ---pins---
>
> And doing an fsck, it looks like this:
> ---snip---
> # fsck.ocfs2 -f /dev/disk/by-label/ERSATZ
> fsck.ocfs2 1.8.0
> Checking OCFS2 filesystem in /dev/disk/by-label/ERSATZ:
> Label: ERSATZ
> UUID: AEB995484F2D4D19835AA380CAE0683A
> Number of blocks: 268434093
> Block size: 4096
> Number of clusters: 268434093
> Cluster size: 4096
> Number of slots: 40
>
> /dev/disk/by-label/ERSATZ was run with -f, check forced.
> Pass 0a: Checking cluster allocation chains
> pass0: Bad magic number in inode reading inode alloc inode 11 for
> verification
> fsck.ocfs2: Bad magic number in inode while performing pass 0
> ---pins---
>
> Any chance to access the filesystem other than reformatting it?
>
> The node is the only node that can access this volume. I plan to
> share it via iSCSI, but first it must be mountable... There are 3
> other volumes in this cluster, mounted by about a dozen nodes.
>
> Regards,
> Werner