Hello,
I haven't encountered this so far. You can install the relevant
ocfs2-tools debug packages and gdb to find out what's happening, and
report your findings back to us ;-)
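For example, on a RHEL-style system a session might look like this
(the package names are distro-specific and assumed here, and I'm
reusing the device path from your mail):

# debuginfo-install ocfs2-tools   (Debian/Ubuntu: the matching -dbg package)
# gdb --args fsck.ocfs2 /dev/mapper/mpath3p1
(gdb) break exit          <- stop before the tool exits on the error
(gdb) run
(gdb) bt                  <- backtrace shows where the error was hit

This is only a sketch; the interesting part is which on-disk structure
the tools are reading when they bail out.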
To me, it looks like a DLM issue, not the superblock.
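Since both errors happen while initializing the DLM, it is also worth
confirming the cluster stack on the rebooted node before going further;
a couple of quick checks (again reusing your device path):

# service o2cb status
# mounted.ocfs2 -d /dev/mapper/mpath3p1
# mounted.ocfs2 -f /dev/mapper/mpath3p1

If mounted.ocfs2 no longer recognizes the partition as OCFS2, the
problem is on disk; if it does, I would look at the heartbeat/cluster
side first.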
Eric
On 05/22/2016 05:44 AM, Mailer Regs wrote:
> Hi LQ friends,
>
> I have a problem with our OCFS2 cluster, which I couldn't solve by
> myself.
> In short, I have an OCFS2 cluster with 3 nodes and a shared storage LUN.
> I mapped the LUN to all 3 nodes, split it into 2 partitions, formatted
> them as OCFS2 filesystems, and mounted them successfully. The system had
> been running fine for nearly 2 years, but today partition 1 suddenly
> became inaccessible and I had to reboot one node. After the reboot,
> partition 2 mounts fine, but partition 1 cannot be mounted.
> The error is below:
>
> # mount -t ocfs2 /dev/mapper/mpath3p1 /test
> mount.ocfs2: Bad magic number in inode while trying to determine
> heartbeat information
>
> # fsck.ocfs2 /dev/mapper/mpath3p1
> fsck.ocfs2 1.6.3
> fsck.ocfs2: Bad magic number in inode while initializing the DLM
>
> # fsck.ocfs2 -r 2 /dev/mapper/mpath3p1
> fsck.ocfs2 1.6.3
> [RECOVER_BACKUP_SUPERBLOCK] Recover superblock information from backup
> block#1048576? <n> y
> fsck.ocfs2: Bad magic number in inode while initializing the DLM
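Before going further down the recovery path here, it may help to see
what the superblock itself currently reports; a sketch with the same
device path:

# debugfs.ocfs2 -R "stats" /dev/mapper/mpath3p1

If "stats" prints sane volume information, the main superblock is
intact and the bad magic number is being hit elsewhere (e.g. in the
heartbeat system file), which would match the DLM theory above.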
>
> # parted /dev/mapper/mpath3
> GNU Parted 1.8.1
> Using /dev/mapper/mpath3
> Welcome to GNU Parted! Type 'help' to view a list of commands.
> (parted) print
>
> Model: Linux device-mapper (dm)
> Disk /dev/mapper/mpath3: 20.0TB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
>
> Number Start End Size File system Name Flags
> 1 17.4kB 10.2TB 10.2TB primary
> 2 10.2TB 20.0TB 9749GB primary
>
>
>
> Usually, a bad magic number means the superblock is corrupted. I have
> handled several similar cases before and solved them quickly by using
> the backup superblocks. But this case is different: I cannot fix the
> problem simply by restoring the superblock, so I'm out of ideas.
>
> Please take a look and suggest how I can solve this problem. I need to
> recover the data; that is the most important goal right now.
>
> Thanks in advance.
>