search for: ocfs2rec

Displaying 4 results from an estimated 4 matches for "ocfs2rec".

2013 Nov 01
1
How to break out of the unstoppable loop in the recovery thread? Thanks a lot.
...ng the iSCSI SAN of an HP 4330 as the storage. When the storage restarted, two nodes were fenced and rebooted because they could no longer write heartbeats to the storage. But the last node did not restart, and it still writes error messages to syslog as below:
Oct 30 02:01:01 server177 kernel: [25786.227598] (ocfs2rec,14787,13):ocfs2_read_journal_inode:1463 ERROR: status = -5
Oct 30 02:01:01 server177 kernel: [25786.227615] (ocfs2rec,14787,13):ocfs2_replay_journal:1496 ERROR: status = -5
Oct 30 02:01:01 server177 kernel: [25786.227631] (ocfs2rec,14787,13):ocfs2_recover_node:1652 ERROR: status = -5
Oct 30 02:01:0...
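For reference (an editorial note, not from the thread): the kernel reports negative errno values, and errno 5 is EIO, so ocfs2rec is still getting I/O errors back when it tries to read the journal after the SAN restart. A minimal sketch for watching the looping recovery thread, assuming a syslog-based system; the log path is illustrative:

    # Watch the recovery thread repeat the same failure (log path varies by distro)
    grep 'ocfs2rec' /var/log/syslog | tail -n 20
    # Or pull the same messages straight from the kernel ring buffer
    dmesg | grep -E 'ocfs2_(read_journal_inode|replay_journal|recover_node)'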
2011 Dec 20
8
ocfs2 - Kernel panic under many writes/reads from both nodes
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000" | debugfs.ocfs2 /dev/dm-0 | wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001" | debugfs.ocfs2 /dev/dm-0 | wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000" | debugfs.ocfs2 /dev/dm-0 | wc
debugfs.ocfs2 1.6.4
5239722 26198604
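For context (an editorial sketch, not part of the thread): the commands above count entries in the per-slot orphan directories, where OCFS2 parks inodes that have been unlinked but not yet reclaimed. A hedged way to repeat the check for every slot in one go; the device /dev/dm-0 comes from the post, while the slot numbers 0000/0001 are an assumption for a two-slot volume:

    # Roughly count orphaned inodes per slot (one directory entry per output line)
    for slot in 0000 0001; do
        echo "ls //orphan_dir:$slot" | debugfs.ocfs2 /dev/dm-0 | wc -l
    done

Counts in the millions, as in the output above, usually mean deleted files are still held open somewhere in the cluster, so their space has not been freed yet.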
2011 Dec 20
1
OCFS2 problems when connectivity lost
...p") tries to shut the resources down on that node, but fails to stop the OCFS2 filesystem resource stating that it is "in use". *Both* OCFS2 nodes (ie. the one with the network down and the one which is still up in the partition with quorum) hang with dmesg reporting that events, ocfs2rec and ocfs2_wq on *both* nodes are "blocked for more than 120 seconds". When the network is operational, umount by hand works without any problems, because for the testing scenario there are no services running which are keeping the mountpoint busy. Configuration we used is pretty much...
2011 Apr 01
1
Node Recovery locks I/O in two-node OCFS2 cluster (DRBD 8.3.8 / Ubuntu 10.10)
...ot; /> The only way I've been able to successfully regain I/O within the cluster is to bring the other node back up. While monitoring the logs, it seems that it is OCFS2 that's establishing the lock/unlock and not DRBD at all.
> Apr 1 12:07:19 ubu10a kernel: [ 1352.739777] (ocfs2rec,3643,0):ocfs2_replay_journal:1605 Recovering node 1124116672 from slot 1 on device (147,0)
> Apr 1 12:07:19 ubu10a kernel: [ 1352.900874] (ocfs2rec,3643,0):ocfs2_begin_quota_recovery:407 Beginning quota recovery in slot 1
> Apr 1 12:07:19 ubu10a kernel: [ 1352.902509] (o...
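A hedged aside (not part of the original report): the "Recovering node ... from slot 1" message can be cross-checked against the volume's slot map to see whose journal is being replayed while I/O is blocked. A sketch using ocfs2-tools; /dev/drbd0 is an assumed device name for this DRBD-backed setup:

    # Show which cluster node currently occupies each journal slot (device name assumed)
    debugfs.ocfs2 -R "slotmap" /dev/drbd0
    # List the nodes that currently have the volume mounted
    mounted.ocfs2 -f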