search for: torecov

Displaying 7 results from an estimated 7 matches for "torecov".

2007 Nov 29
1
Troubles with two node
...:dlm_restart_lock_mastery:1215 ERROR: node down! 0
Nov 22 18:14:51 web-ha2 kernel: (17798,1):dlm_wait_for_lock_mastery:1036 ERROR: status = -11
Nov 22 18:14:51 web-ha2 kernel: (17804,1):dlm_get_lock_resource:896 86472C5C33A54FF88030591B1210C560:M0000000000000009e7e54516dd16ec: at least one node (0) torecover before lock mastery can begin
Nov 22 18:14:51 web-ha2 kernel: (17730,1):dlm_get_lock_resource:896 86472C5C33A54FF88030591B1210C560:M0000000000000009e76bf516dd144d: at least one node (0) torecover before lock mastery can begin
Nov 22 18:14:51 web-ha2 kernel: (17634,0):dlm_get_lock_resource:896 864...
2007 Mar 08
4
ocfs2 cluster becomes unresponsive
...8 07:23:36 groupwise-1-mht kernel: (4377,0):dlm_wait_for_node_death:371 2062CE05ABA246988E9CCCDAE253F458: waiting 5000ms for notification of death of node 2
Mar 8 07:23:40 groupwise-1-mht kernel: (28613,2):dlm_get_lock_resource:847 B6ECAF5A668A4573AF763908F26958DB:$RECOVERY: at least one node (2) torecover before lock mastery can begin
Mar 8 07:23:40 groupwise-1-mht kernel: (28613,2):dlm_get_lock_resource:874 B6ECAF5A668A4573AF763908F26958DB: recovery map is not empty, but must master $RECOVERY lock now
Mar 8 07:23:41 groupwise-1-mht kernel: (4432,0):ocfs2_replay_journal:1176 Recovering node 2 fr...
2008 Jul 14
1
Node fence on RHEL4 machine running 1.2.8-2
...be fencing this system by restarting *** The 'dlm_send_remote_convert_request' and 'dlm_wait_for_node_death' on nodes 2 and 3 (and 4) then continued until:
Jul 14 05:58:02 node3 (3542,2):dlm_get_lock_resource:921 98F84EF9EC254C499F79F8C13C57CF2E:$RECOVERY: at least one node (0) torecover before lock mastery can begin
Jul 14 05:58:02 node3 (3542,2):dlm_get_lock_resource:955 98F84EF9EC254C499F79F8C13C57CF2E: recovery map is not empty, but must master $RECOVERY lock now
Jul 14 05:58:02 node2 (3479,2):ocfs2_dlm_eviction_cb:119 device (8,49): dlm has evicted node 0
Jul 14 05:58:04 n...
2011 Mar 04
1
node eviction
...omain ("129859624F7042EAB9829B18CA65FC88"): 1 2 3 4
Mar 3 16:18:02 xirisoas3 kernel: o2net: no longer connected to node XIRISOAS2 (num 2) at 10.0.0.5:9999
Mar 3 16:18:04 xirisoas3 kernel: (23344,2):dlm_get_lock_resource:921 129859624F7042EAB9829B18CA65FC88:$RECOVERY: at least one node (2) torecover before lock mastery can begin
Mar 3 16:18:04 xirisoas3 kernel: (23344,2):dlm_get_lock_resource:955 129859624F7042EAB9829B18CA65FC88: recovery map is not empty, but must master $RECOVERY lock now
Mar 3 16:18:04 xirisoas3 kernel: (23344,2):dlm_do_recovery:519 (23344) Node 3 is the Recovery Master f...
2007 Oct 08
2
OCF2 and LVM
Does anybody know if there is a certified procedure for backing up a RAC 10.2.0.3 database stored on OCFS2 via split-mirror or snapshot technology? Also, using Linux LVM and OCFS2, does anybody know if it is possible to dynamically extend an OCFS2 filesystem once the underlying LVM volume has been extended? Thanks in advance. Riccardo Paganini
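On the second question, a minimal sketch of what growing OCFS2 on LVM typically looks like, assuming the volume group already has free extents; the device name /dev/vg_data/lv_ocfs2 is hypothetical, and with ocfs2-tools 1.2 the grow step is an offline operation (later releases added online resize):

# Grow the logical volume by 10 GiB (hypothetical VG/LV names).
lvextend -L +10G /dev/vg_data/lv_ocfs2

# Grow the OCFS2 filesystem to fill the enlarged device. With
# ocfs2-tools 1.2 the volume must be unmounted on every node first;
# newer tools can resize a mounted volume.
tunefs.ocfs2 -S /dev/vg_data/lv_ocfs2

# Verify the new size after remounting.
df -h /mnt/ocfs2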
2007 Feb 06
2
Network 10 sec timeout setting?
Hello! Didn't a setting for the 10-second network timeout get into the 2.6.20 kernel? If so, how do we set it? I am running OCFS2 1.3.3 and getting:
(2201,0):o2net_connect_expired:1547 ERROR: no connection established with node 1 after 10.0 seconds, giving up and returning errors.
(2458,0):dlm_request_join:802 ERROR: status = -107
(2458,0):dlm_try_to_join_domain:950 ERROR: status = -107
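For reference, a sketch of how the o2net timeouts are tuned on releases where they became configurable (ocfs2-tools 1.2.5+/1.4 with a matching kernel); on a stock 2.6.20 kernel the 10-second idle timeout may still be hard-coded, and the values and cluster name below are examples only:

# /etc/sysconfig/o2cb (or /etc/default/o2cb on Debian), read by the
# o2cb init script; values are in milliseconds and must match on all
# nodes or they will refuse to connect to each other.
O2CB_IDLE_TIMEOUT_MS=30000
O2CB_KEEPALIVE_DELAY_MS=2000
O2CB_RECONNECT_DELAY_MS=2000

# The running values can be inspected through configfs (cluster name
# "ocfs2" is just an example); changing them generally means taking
# the cluster offline and restarting o2cb on every node.
cat /sys/kernel/config/cluster/ocfs2/idle_timeout_ms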
2006 Sep 21
0
ocfs2 reboot
...8.878265: 1158757938.878271)
Sep 20 15:20:02 src-rac-duplicati1 kernel: (10047,0):ocfs2_replay_journal:1174 Recovering node 1 from slot 0 on device (104,1)
Sep 20 15:20:05 src-rac-duplicati1 kernel: (2062,1):dlm_get_lock_resource:847 6AEF3479C4784E9895BDE697EFCAC035:$RECOVERY: at least one node (1) torecover before lock mastery can begin
Sep 20 15:20:05 src-rac-duplicati1 kernel: (2062,1):dlm_get_lock_resource:874 6AEF3479C4784E9895BDE697EFCAC035: recovery map is not empty, but must master $RECOVERY lock now
Can you help me? What can I do or look for? Thanks