search for: dlm_do_recovery

Displaying 6 results from an estimated 6 matches for "dlm_do_recovery".

2009 Jul 29
3
Error message while booting system
...A40F8801B56257D805C88:$RECOVERY: at least one node (3) to recover before lock mastery can begin Jul 29 10:17:32 alf1 kernel: (2629,2):dlm_get_lock_resource:878 7BE7E9E2026A40F8801B56257D805C88: recovery map is not empty, but must master $RECOVERY lock now Jul 29 10:17:32 alf1 kernel: (2629,1):dlm_do_recovery:524 (2629) Node 1 is the Recovery Master for the Dead Node 3 for Domain 7BE7E9E2026A40F8801B56257D805C88 Jul 29 10:17:34 alf1 kernel: o2net: accepted connection from node alf3 (num 3) at 172.25.29.13:7777 Jul 29 10:17:38 alf1 kernel: ocfs2_dlm: Node 3 joins domain 7BE7E9E2026A40F8801B56257D805C...
2011 Mar 04
1
node eviction
...EAB9829B18CA65FC88:$RECOVERY: at least one node (2) to recover before lock mastery can begin Mar 3 16:18:04 xirisoas3 kernel: (23344,2):dlm_get_lock_resource:955 129859624F7042EAB9829B18CA65FC88: recovery map is not empty, but must master $RECOVERY lock now Mar 3 16:18:04 xirisoas3 kernel: (23344,2):dlm_do_recovery:519 (23344) Node 3 is the Recovery Master for the Dead Node 2 for Domain 129859624F7042EAB9829B18CA65FC88 Mar 3 16:20:48 xirisoas3 kernel: (22790,2):o2net_connect_expired:1585 ERROR: no connection established with node 2 after 10.0 seconds, giving up and returning errors. Mar 3 16:20:59 xirisoas3 k...
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3 I saw the following new errors in the kernel log: May 12 14:46:41 falcon-cl5 May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR: status = -22 Only one node has the volumes mounted in the cluster: /dev/sde on /home/apache/users/D1 type ocfs2 (rw,_netdev,noatime,heartbeat=local) /dev/sdd on /home/apache/users/D2 type ocfs2
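For context, the mount output quoted above corresponds to an invocation like the following; this is only a sketch, with the device, mount point, and options copied from the quoted output, and it assumes the o2cb cluster stack is already online on that node:

    # Mount an OCFS2 volume with local heartbeat, matching the options shown in the thread.
    mount -t ocfs2 -o _netdev,noatime,heartbeat=local /dev/sde /home/apache/users/D1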
2009 Feb 04
1
Strange dmesg messages
...tery can begin (6968,7):dlm_get_lock_resource:913 F59B45831EEA41F384BADE6C4B7A932B:$RECOVERY: at least one node (0) to recover before lock mastery can begin (6968,7):dlm_get_lock_resource:947 F59B45831EEA41F384BADE6C4B7A932B: recovery map is not empty, but must master $RECOVERY lock now (6968,7):dlm_do_recovery:524 (6968) Node 1 is the Recovery Master for the Dead Node 0 for Domain F59B45831EEA41F384BADE6C4B7A932B (12281,2):ocfs2_replay_journal:1004 Recovering node 0 from slot 0 on device (8,33) (fs/jbd/recovery.c, 255): journal_recover: JBD: recovery, exit status 0, recovered transactions 66251376 to...
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything: TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 5239722 26198604 246266859 TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 6074335 30371669 285493670 TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 5239722 26198604
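The commands quoted above are the usual way to gauge orphan-directory buildup with debugfs.ocfs2. A minimal sketch, assuming /dev/dm-0 is the OCFS2 volume and slots 0000 and 0001 are the two node slots, as in the thread:

    # Count orphan-directory entries per node slot; a steadily growing count
    # suggests deleted-but-still-open inodes accumulating on that slot.
    for slot in 0000 0001; do
        echo "ls //orphan_dir:$slot" | debugfs.ocfs2 /dev/dm-0 | wc -l
    done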
2010 Jan 14
1
another fencing question
Hi, periodically one of my two cluster nodes is fenced; here are the logs: Jan 14 07:01:44 nvr1-rc kernel: o2net: no longer connected to node nvr2-rc.minint.it (num 0) at 1.1.1.6:7777 Jan 14 07:01:44 nvr1-rc kernel: (21534,1):dlm_do_master_request:1334 ERROR: link to 0 went down! Jan 14 07:01:44 nvr1-rc kernel: (4007,4):dlm_send_proxy_ast_msg:458 ERROR: status = -112 Jan 14 07:01:44