Displaying 13 results from an estimated 13 matches for "dlm_restart_lock_mastery".
2007 Nov 29
1
Troubles with two nodes
Hi all,
I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over Fibre
Channel to shared storage (HP MSA1500 + HP ProLiant MSA20).
The cluster has two nodes (web-ha1 and web-ha2); sometimes (once or twice
a month) OCFS2 stops working on both systems. On the first node I get
no errors in the log files, and after a forced shutdown of the first
node, on the second I can see
2007 Mar 08
4
ocfs2 cluster becomes unresponsive
We are running OCFS2 on SLES9 machines using a FC SAN. Without warning, both nodes become unresponsive. We cannot access either machine via ssh or a terminal (it hangs after the username is typed in); however, the machines still respond to pings. This continues until one node is rebooted, at which point the second node resumes normal operation.
I am not entirely sure that this is an OCFS2 problem at all
2010 Jan 14
1
another fencing question
Hi,
periodically, one of the nodes in my two-node cluster is fenced; here are the logs:
Jan 14 07:01:44 nvr1-rc kernel: o2net: no longer connected to node nvr2-
rc.minint.it (num 0) at 1.1.1.6:7777
Jan 14 07:01:44 nvr1-rc kernel: (21534,1):dlm_do_master_request:1334 ERROR:
link to 0 went down!
Jan 14 07:01:44 nvr1-rc kernel: (4007,4):dlm_send_proxy_ast_msg:458 ERROR:
status = -112
Jan 14 07:01:44
2010 Apr 05
1
Kernel Panic, Server not coming back up
I have a relatively new test environment that is set up a little
differently from your typical scenario. This is my first time using OCFS2,
but I believe it should work the way I have it set up.
All of this is setup on VMWare virtual hosts. I have two front-end web
servers and one backend administrative server. They all share 2 virtual
hard drives within VMware (independent, persistent, &
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604
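A generalized form of the check above, as a sketch only: the device path
and the two slots are taken from the post, while the loop itself is an
assumption, not something from the original thread. Each line of "ls"
output corresponds roughly to one orphaned inode.

  # Count orphan-dir entries per slot (sketch; /dev/dm-0 and two slots assumed)
  for slot in 0000 0001; do
      echo "ls //orphan_dir:$slot" | debugfs.ocfs2 /dev/dm-0 | wc -l
  done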
2013 Apr 28
2
Is it one issue? Do you have some good ideas? Thanks a lot.
Hi, everyone
I have some questions about OCFS2 when using it as a VM store.
We are on Ubuntu 12.04, with kernel version 3.2.40 and ocfs2-tools version 1.6.4.
After a network configuration change, there are some issues, as shown in the log below.
Why does the message "Node 255 (he) is the Recovery Master for the dead node 255" appear in the syslog?
Why is the host ZHJD-VM6 blocked until it reboots
2007 Oct 08
2
OCFS2 and LVM
Does anybody know if there is a certified procedure to
back up a RAC DB 10.2.0.3 based on OCFS2
via split-mirror or snapshot technology?
Using Linux LVM and OCFS2, does anybody know if it is
possible to dynamically extend an OCFS2 filesystem
once the underlying LVM volume has been extended?
Thanks in advance
Riccardo Paganini
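On the resize question, a minimal sketch of the usual grow-the-LV-then-
grow-the-filesystem sequence. The vg0/ocfs2lv names are hypothetical, and
the use of tunefs.ocfs2 -S (--volume-size) to expand the filesystem to
fill the device is my assumption, not something confirmed in this thread.

  # Grow the logical volume first (hypothetical VG/LV names)
  lvextend -L +20G /dev/vg0/ocfs2lv
  # Then grow the OCFS2 filesystem to fill the enlarged device;
  # with no explicit size, -S is assumed to grow to the device size
  tunefs.ocfs2 -S /dev/vg0/ocfs2lv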
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3, I've seen the following new
errors in the kernel log:
May 12 14:46:41 falcon-cl5
May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR:
status = -22
Only one node in the cluster has the volumes mounted:
/dev/sde on /home/apache/users/D1 type ocfs2
(rw,_netdev,noatime,heartbeat=local)
/dev/sdd on /home/apache/users/D2 type ocfs2
2009 Feb 26
13
o2dlm mle hash patches - round 2
The changes from the last drop are:
1. Patch 11 removes struct dlm_lock_name.
2. Patch 12 is an unrelated bugfix. Actually, it is related to a bugfix
that we are currently retracting in mainline. The patch may need more testing.
While I did hit the condition in my testing, Marcos hasn't. I am sending it
because it can be queued for 2.6.30. That gives us more time to test.
3. Patch 13 will be useful
2009 Apr 17
26
OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree.
All patches list the mainline commit hash.
Thanks
Sunil
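A hedged sketch of how such backports are commonly recorded so that each
patch carries its mainline commit hash; the branch name is hypothetical
and <mainline-commit> is a placeholder, not a reference to any patch in
this series.

  # Cherry-pick a mainline fix into the 1.4 tree; -x appends the
  # origin commit id to the backported patch's changelog
  git checkout ocfs2-1.4
  git cherry-pick -x <mainline-commit>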
2009 Feb 03
10
Convert mle list to a hash
These patches convert the mle list to a hash. The same patches apply on
ocfs2 1.4 too.
Currently, we use the same number of hash pages for mles and lockres'.
This will be addressed in a future patch that will make both of them
configurable.
Sunil
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi,
The following patches comprise the bulk of Ocfs2 updates for the
2.6.30 merge window. Aside from larger, more involved fixes, we're adding
the following features, which I will describe in the order their patches are
mailed.
Sunil's exported some more state to our debugfs files, and
consolidated some other aspects of our debugfs infrastructure. This will
further aid us in debugging
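To make the debugfs remark concrete, a sketch of inspecting the exported
state. It assumes the kernel debug filesystem and the o2dlm layout of that
era; the exact paths and the <domain-uuid> placeholder are assumptions.

  # Mount the kernel debug filesystem if it is not already mounted
  mount -t debugfs debugfs /sys/kernel/debug
  # Read the exported DLM state for one domain (path assumed)
  cat /sys/kernel/debug/o2dlm/<domain-uuid>/dlm_state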