similar to: Node fence on RHEL4 machine running 1.2.8-2

Displaying 18 results from an estimated 400 matches similar to: "Node fence on RHEL4 machine running 1.2.8-2"

2007 Mar 08
4
ocfs2 cluster becomes unresponsive
We are running OCFS2 on SLES9 machines using a FC SAN. Without warning, both nodes will become unresponsive. We cannot access either machine via ssh or a terminal (it hangs after typing in the username). However, the machines still respond to pings. This continues until one node is rebooted, at which time the second node resumes normal operations. I am not entirely sure that this is an OCFS2 problem at all
2010 Dec 09
2
servers blocked on ocfs2
Hi, we have recently started to use ocfs2 on some RHEL 5.5 servers (ocfs2-1.4.7). Some days ago, two servers sharing an ocfs2 filesystem, and running quite a few virtual services, stalled in what seems to be an ocfs2 issue. These are the lines in their messages files: =====node heraclito (0)======================================== Dec 4 09:15:06 heraclito kernel: o2net: connection to node parmenides
2006 May 18
0
Node crashed after removing a path
Hi, I have a 2-node cluster on 2 Dell PowerEdge 2650 servers. When we removed a device path, both nodes crashed. Any help would be appreciated. Thanks! Roger --- Configuration: Oracle: 10.2.0.1.0 x86 Oracle home: on OCFS2 shared with multipath Oracle datafiles: OCFS2 shared with multipath cat redhat-release Red Hat Enterprise Linux ES release 4 (Nahant Update 2) uname -a Linux sqa-pe2650-40
2009 Nov 06
0
iscsi connection drop, comes back in seconds, then deadlock in cluster
Greetings ocfs2 folks, A client is experiencing some random deadlock issues within a cluster, and we are wondering if anyone can point us in the right direction. The iSCSI connection seems to have dropped briefly on one node, ultimately landing us several hours later in a complete deadlock scenario where multiple nodes (Node 7 and Node 8) had to be panic'd (by hand - they didn't ever panic on
2008 Jan 23
1
OCFS2 DLM problems
Hello everyone, once again. We are running into a problem which has now shown up 2 times, possibly 3 (once the systems looked different). The environment is 6 HP DL360/380 G5 servers with eth0 being the public interface, eth1 and bond0 (eth2 and eth3) used for clusterware, and bond0 also used for OCFS2. The bond0 interface is in active/passive mode. There are no network error counters showing and
2009 Mar 18
2
shutdown by o2net_idle_timer causes Xen to hang
Hello, we've had some serious trouble with a two-node Xen-based OCFS2 cluster. In brief: we had two incidents where one node detected an idle timeout and shut the other node down, which caused the other node and the Dom0 to hang. Both times this could only be resolved by rebooting the whole machine using the built-in IPMI card. All machines (including the other DomUs) run CentOS 5.2
2009 Feb 04
1
Strange dmesg messages
Hi list, Something went wrong this morning and we had a node ( #0 ) reboot. Something blocked NFS access from both nodes; one rebooted, and on the other we restarted nfsd and that brought it back. Looking at the logs of node #0 - the one that rebooted - everything seems normal, but looking at the other node's dmesg we saw these messages: First, o2net detected that node #0 was dead: (It
2006 Sep 21
0
ocfs2 reboot
Hi, I'm new to this mailing list, but I have several errors using ocfs2. We had ocfs2 1.2.1 and both nodes of the cluster rebooted, so we upgraded to ocfs2 1.2.3. Again we had a reboot of one node of the cluster. /var/log/messages shows: o2net_idle_timer:1309 here are some times that might help debug the situation: (tmr 1158758358.807993 now 1158758368.805980 dr 1158758358.807964adv
2007 Nov 29
1
Troubles with two nodes
Hi all, I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over fibre channel to shared storage (HP MSA1500 + HP ProLiant MSA20). The cluster has two nodes (web-ha1 and web-ha2); sometimes (1 or 2 times a month) OCFS2 stops working on both systems. On the first node I'm getting no error in the log files, and after a forced shutdown of the first node, on the second I can see
2009 Jul 29
3
Error message while booting system
Hi, When the system is booting we get the error message "modprobe: FATAL: Module ocfs2_stackglue not found" in the messages log. Some nodes reboot without any error message. ------------------------------------------------- Jul 27 10:02:19 alf3 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team Jul 27 10:02:19 alf3 kernel: Netfilter messages via NETLINK v0.30. Jul 27 10:02:19 alf3 kernel:
2011 Mar 04
1
node eviction
Hello... I wonder if someone has had a problem similar to this... a node gets evicted almost on a weekly basis and I have not found the root cause yet.... Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Node 1 joins domain 129859624F7042EAB9829B18CA65FC88 Mar 2 10:20:57 xirisoas3 kernel: ocfs2_dlm: Nodes in domain ("129859624F7042EAB9829B18CA65FC88"): 1 2 3 4 Mar 3 16:18:02 xirisoas3 kernel:
2014 Sep 11
1
Possible deadlock due to wrong locking order, patch review requested, thanks
While testing the ocfs2 cluster, the cluster sometimes hangs. I got some information about the deadlock that causes the hang: the sys dir / lock is held and the node did not release it, which causes the cluster to hang. root at cvknode-21:~# ps -e -o pid,stat,comm,wchan=WIDE-WCHAN-COLUMN | grep D PID STAT COMMAND WIDE-WCHAN-COLUMN 7489 D jbd2/sdh-621
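As a side note for readers new to this failure class, below is a minimal user-space sketch of the "wrong locking order" (AB-BA) deadlock the subject refers to. The two pthread mutexes are generic stand-ins invented for this illustration; they are not the actual ocfs2 sys-dir or inode cluster locks.

/* Two threads take the same pair of locks in opposite orders; with
 * unlucky timing each ends up waiting for the lock the other holds.
 * Compile with: cc -pthread abba_sketch.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *take_a_then_b(void *arg)
{
        pthread_mutex_lock(&lock_a);   /* holds A ...          */
        pthread_mutex_lock(&lock_b);   /* ... then waits for B */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return arg;
}

static void *take_b_then_a(void *arg)
{
        pthread_mutex_lock(&lock_b);   /* holds B ...                    */
        pthread_mutex_lock(&lock_a);   /* ... then waits for A: deadlock */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return arg;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, take_a_then_b, NULL);
        pthread_create(&t2, NULL, take_b_then_a, NULL);
        /* With unlucky timing both threads block forever right here. */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        puts("no deadlock on this run");
        return 0;
}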
2023 Jun 13
0
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hello, Our static analysis tool finds some possible data races in the OCFS2 file system in Linux 6.4.0-rc6. In most calling contexts, the variables such as res->lockname.name and res->owner are accessed while holding the lock res->spinlock. Here is an example: lockres_seq_start() --> Line 539 in dlmdebug.c spin_lock(&res->spinlock); --> Line 574 in dlmdebug.c (Lock
2023 Jun 13
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hello, Our static analysis tool finds some possible data races in the OCFS2 file system in Linux 6.4.0-rc6. In most calling contexts, the variables such as res->lockname.name and res->owner are accessed while holding the lock res->spinlock. Here is an example: lockres_seq_start() --> Line 539 in dlmdebug.c spin_lock(&res->spinlock); --> Line 574 in dlmdebug.c (Lock
2023 Jun 16
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hi, On 6/13/23 4:23 PM, Tuo Li wrote: > Hello, > > Our static analysis tool finds some possible data races in the OCFS2 file > system in Linux 6.4.0-rc6. > > In most calling contexts, the variables such as res->lockname.name and > res->owner are accessed while holding the lock res->spinlock. Here is an > example: > > lockres_seq_start() --> Line 539
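For readers following along, here is a minimal user-space sketch of the access pattern the reports above describe. The struct layout and type names are simplified stand-ins invented for this illustration (only the field names lockname.name, owner and spinlock follow the report), and a pthread mutex stands in for the kernel spin_lock().

/* Safe pattern per the report: fields read under res->spinlock.
 * Racy pattern: the same fields read with no lock held, against a
 * concurrent writer. Compile with: cc -pthread race_sketch.c */
#include <pthread.h>
#include <stdio.h>

struct qstr_ish {
        const char *name;          /* stand-in for res->lockname.name */
        unsigned int len;
};

struct lock_resource_ish {
        struct qstr_ish lockname;
        unsigned int owner;        /* stand-in for res->owner */
        pthread_mutex_t spinlock;  /* stand-in for the kernel spinlock */
};

/* Pattern the tool considers safe: access under res->spinlock. */
static void read_under_lock(struct lock_resource_ish *res)
{
        pthread_mutex_lock(&res->spinlock);
        printf("lockres %s owned by node %u\n", res->lockname.name, res->owner);
        pthread_mutex_unlock(&res->spinlock);
}

/* Pattern the tool flags: the same fields read with no lock held,
 * which races with any writer that updates them under the lock. */
static void read_without_lock(struct lock_resource_ish *res)
{
        printf("lockres %s owned by node %u\n", res->lockname.name, res->owner);
}

int main(void)
{
        struct lock_resource_ish res = {
                .lockname = { .name = "example_lockres", .len = 15 },
                .owner = 1,
        };

        pthread_mutex_init(&res.spinlock, NULL);
        read_under_lock(&res);
        read_without_lock(&res);   /* single-threaded here, so no actual race */
        pthread_mutex_destroy(&res.spinlock);
        return 0;
}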
2013 Apr 28
2
Is it one issue? Do you have some good ideas? Thanks a lot.
Hi everyone, I have some questions about OCFS2 when using it as a VM store. We are on Ubuntu 12.04, the kernel version is 3.2.40, and the ocfs2-tools version is 1.6.4. After a network configuration change, there are some issues, as in the log below. Why is there the message "Node 255 (he) is the Recovery Master for the dead node 255" in the syslog? Why is the host ZHJD-VM6 blocked until it reboot
2010 Apr 05
1
Kernel Panic, Server not coming back up
I have a relatively new test environment that is a little different from your typical scenario. This is my first time using OCFS2, but I believe it should work the way I have it set up. All of this is set up on VMware virtual hosts. I have two front-end web servers and one backend administrative server. They all share 2 virtual hard drives within VMware (independent, persistent, &
2010 Aug 26
1
[PATCH 2/5] ocfs2/dlm: add lockres as parameter to dlm_new_lock()
Whether the dlm_lock needs to access the lvb or not depends on the dlm_lock_resource it belongs to. So a new parameter "struct dlm_lock_resource *res" is added to dlm_new_lock() so that we know whether we need to allocate an lvb for the dlm_lock. And we have to make the lockres available when calling dlm_new_lock(). Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com> ---
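As a rough illustration of the idea in this commit message (not the actual kernel patch), the sketch below passes the lock resource into the lock constructor so it can decide up front whether to allocate an LVB. The type names, the needs_lvb() test and LVB_LEN are simplified stand-ins invented for the example.

/* The constructor can see the lockres, so the LVB buffer is allocated
 * only for locks whose resource actually needs one. */
#include <stdlib.h>

#define LVB_LEN 64              /* illustrative buffer size */

struct dlm_lock_resource_ish {
        int flags;              /* stand-in for the real lockres state */
};

struct dlm_lock_ish {
        char *lvb;              /* allocated only when needed */
};

static int lockres_needs_lvb(const struct dlm_lock_resource_ish *res)
{
        return res->flags & 0x1;   /* placeholder policy */
}

static struct dlm_lock_ish *new_lock(struct dlm_lock_resource_ish *res)
{
        struct dlm_lock_ish *lock = calloc(1, sizeof(*lock));

        if (!lock)
                return NULL;
        if (lockres_needs_lvb(res)) {
                lock->lvb = calloc(1, LVB_LEN);
                if (!lock->lvb) {
                        free(lock);
                        return NULL;
                }
        }
        return lock;
}

int main(void)
{
        struct dlm_lock_resource_ish res = { .flags = 0x1 };
        struct dlm_lock_ish *lock = new_lock(&res);

        if (lock) {
                free(lock->lvb);
                free(lock);
        }
        return 0;
}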