Displaying 13 results from an estimated 13 matches for "dlm_restart_lock_mastery".
2007 Nov 29
1
Troubles with two node
...r_request:1331 ERROR: link to 0 went down!
Nov 28 15:29:29 web-ha2 kernel: (23450,0):dlm_get_lock_resource:915 ERROR: status = -107
Nov 28 15:29:46 web-ha2 kernel: (23443,0):dlm_do_master_request:1331 ERROR: link to 0 went down!
ERROR: status = -107
[...]
Nov 22 18:14:50 web-ha2 kernel: (17634,0):dlm_restart_lock_mastery:1215 ERROR: node down! 0
Nov 22 18:14:50 web-ha2 kernel: (17634,0):dlm_wait_for_lock_mastery:1036 ERROR: status = -11
Nov 22 18:14:51 web-ha2 kernel: (17619,1):dlm_restart_lock_mastery:1215 ERROR: node down! 0
Nov 22 18:14:51 web-ha2 kernel: (17619,1):dlm_wait_for_lock_mastery:1036 ERROR: status =...
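The negative status values that recur throughout these excerpts are Linux errno codes propagated through the DLM. A small sketch decoding the ones seen here (the number-to-name mapping is hardcoded from the standard Linux `<errno.h>` values; the interpretations in the comments are my reading of the logs, not from the threads):

```shell
# Decode the recurring DLM status codes from these logs.
# Mapping hardcoded from the standard Linux <errno.h> values.
decode_status() {
  case "${1#-}" in
    11)  echo EAGAIN ;;     # transient: lock mastery is restarted and retried
    107) echo ENOTCONN ;;   # the o2net socket to the peer is not connected
    112) echo EHOSTDOWN ;;  # the peer node is considered down
    *)   echo unknown ;;
  esac
}
decode_status -107   # prints ENOTCONN
```

So a `dlm_wait_for_lock_mastery ... status = -11` is a retry after a membership change, while `-107` means the cluster interconnect to that node has dropped.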
2007 Mar 08
4
ocfs2 cluster becomes unresponsive
...13,2):dlm_get_lock_resource:874 B6ECAF5A668A4573AF763908F26958DB: recovery map is not empty, but must master $RECOVERY lock now
Mar 8 07:23:41 groupwise-1-mht kernel: (4432,0):ocfs2_replay_journal:1176 Recovering node 2 from slot 1 on device (253,1)
Mar 8 07:23:41 groupwise-1-mht kernel: (4192,0):dlm_restart_lock_mastery:1214 ERROR: node down! 2
Mar 8 07:23:41 groupwise-1-mht kernel: (4192,0):dlm_wait_for_lock_mastery:1035 ERROR: status = -11
Mar 8 07:23:41 groupwise-1-mht kernel: (929,1):dlm_restart_lock_mastery:1214 ERROR: node down! 2
Mar 8 07:23:41 groupwise-1-mht kernel: (929,1):dlm_wait_for_lock_mastery:10...
2010 Jan 14
1
another fencing question
Hi,
periodically, one of my two cluster nodes is fenced; here are the logs:
Jan 14 07:01:44 nvr1-rc kernel: o2net: no longer connected to node nvr2-rc.minint.it (num 0) at 1.1.1.6:7777
Jan 14 07:01:44 nvr1-rc kernel: (21534,1):dlm_do_master_request:1334 ERROR: link to 0 went down!
Jan 14 07:01:44 nvr1-rc kernel: (4007,4):dlm_send_proxy_ast_msg:458 ERROR: status = -112
Jan 14 07:01:44
2010 Apr 05
1
Kernel Panic, Server not coming back up
...3c5: at least one node (2) to recover before lock mastery can begin
o2net: accepted connection from node qa-web2 (num 2) at 147.178.220.32:7777
ocfs2_dlm: Node 2 joins domain 6A03E81A818641A68FD8DC23854E12D3
ocfs2_dlm: Nodes in domain ("6A03E81A818641A68FD8DC23854E12D3"): 0 1 2
(12701,1):dlm_restart_lock_mastery:1216 node 2 up while restarting
(12701,1):dlm_wait_for_lock_mastery:1040 ERROR: status = -11
Any suggestions? Is there anymore data I can provide?
Thanks for any help.
Kevin
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604
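For context on the numbers above: `wc` reports lines, words, and characters, and each line of the `ls //orphan_dir:NNNN` output corresponds to one entry in that slot's orphan directory. A quick sketch totalling the two slots' line counts (the counts are taken from the paste above; reading one line as one orphan entry is an assumption):

```shell
# First wc field = line count of the orphan directory listing per slot.
slot0=5239722   # from TEST-MAIL1, //orphan_dir:0000
slot1=6074335   # from TEST-MAIL1, //orphan_dir:0001
echo $((slot0 + slot1))   # prints 11314057
```

An orphan backlog in the millions like this is typically what keeps the nodes busy with cleanup long after the writers have stopped.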
2013 Apr 28
2
Is it one issue? Do you have some good ideas? Thanks a lot.
...kernel: [ 4231.992489] (dlm_reco_thread,14227,3):dlm_do_master_request:1332 ERROR: link to 3 went down!
Apr 27 17:44:18 ZHJD-VM6 kernel: [ 4231.992497] (dlm_reco_thread,14227,3):dlm_get_lock_resource:917 ERROR: status = -107
Apr 27 17:44:18 ZHJD-VM6 kernel: [ 4231.993204] (dlm_reco_thread,13736,2):dlm_restart_lock_mastery:1221 ERROR: node down! 2
Apr 27 17:44:18 ZHJD-VM6 kernel: [ 4231.993214] (dlm_reco_thread,13736,2):dlm_wait_for_lock_mastery:1038 ERROR: status = -11
Apr 27 17:44:18 ZHJD-VM6 kernel: [ 4231.993223] (dlm_reco_thread,13736,2):dlm_do_master_requery:1656 ERROR: Error -107 when sending message 514 (key...
2007 Oct 08
2
OCFS2 and LVM
Does anybody know if there is a certified procedure to back up a
RAC DB 10.2.0.3 based on OCFS2,
via split-mirror or snapshot technology?
Using Linux LVM and OCFS2, does anybody know if it is
possible to dynamically extend an OCFS2 filesystem,
once the underlying LVM volume has been extended?
Thanks in advance
Riccardo Paganini
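On the second question: with ocfs2-tools 1.4 and later, an OCFS2 filesystem can be grown online after the underlying logical volume is extended. A sketch, assuming hypothetical volume group and LV names (`tunefs.ocfs2 -S` grows the filesystem to the current size of the device):

```shell
# Hypothetical names: volume group vg_data, logical volume lv_ocfs2.
lvextend -L +20G /dev/vg_data/lv_ocfs2   # grow the LVM volume first
tunefs.ocfs2 -S /dev/vg_data/lv_ocfs2    # then grow OCFS2 to fill it
```

Shrinking is not supported; whether a split-mirror/snapshot procedure is certified for RAC is a separate question for Oracle support.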
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3, I've seen the following new errors
in the kernel log:
May 12 14:46:41 falcon-cl5
May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR: status = -22
Only one node has the volumes mounted in the cluster:
/dev/sde on /home/apache/users/D1 type ocfs2
(rw,_netdev,noatime,heartbeat=local)
/dev/sdd on /home/apache/users/D2 type ocfs2
2009 Feb 26
13
o2dlm mle hash patches - round 2
The changes from the last drop are:
1. Patch 11 removes struct dlm_lock_name.
2. Patch 12 is an unrelated bugfix. Actually, it is related to a bugfix
that we are currently retracting in mainline. The patch may need more testing:
while I did hit the condition in my testing, Marcos hasn't. I am sending it
because it can be queued for 2.6.30; that gives us more time to test.
3. Patch 13 will be useful
2009 Apr 17
26
OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree.
All patches list the mainline commit hash.
Thanks
Sunil
2009 Feb 03
10
Convert mle list to a hash
These patches convert the mle list to a hash. The same patches apply on
ocfs2 1.4 too.
Currently, we use the same number of hash pages for mles and lockres'.
This will be addressed in a future patch that will make both of them
configurable.
Sunil
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi,
The following patches comprise the bulk of Ocfs2 updates for the
2.6.30 merge window. Aside from larger, more involved fixes, we're adding
the following features, which I will describe in the order their patches are
mailed.
Sunil's exported some more state to our debugfs files, and
consolidated some other aspects of our debugfs infrastructure. This will
further aid us in debugging