search for: dlm_wait_for_lock_masteri

Displaying 13 results from an estimated 15 matches for "dlm_wait_for_lock_masteri".

2007 Nov 29
1
Troubles with two nodes
Hi all, I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over fibre channel to shared storage (HP MSA1500 + HP PROLIANT MSA20). The cluster has two nodes (web-ha1 and web-ha2); sometimes (once or twice a month) OCFS2 stops working on both systems. On the first node I get no errors in the log files, and after a forced shutdown of the first node, on the second I can see
2023 Jun 16
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hi, On 6/13/23 4:23 PM, Tuo Li wrote: > Hello, > > Our static analysis tool finds some possible data races in the OCFS2 file > system in Linux 6.4.0-rc6. > > In most calling contexts, variables such as res->lockname.name and > res->owner are accessed while holding the lock res->spinlock. Here is an > example: > > lockres_seq_start() --> Line 539
2007 Mar 08
4
ocfs2 cluster becomes unresponsive
We are running OCFS2 on SLES9 machines using a FC SAN. Without warning, both nodes become unresponsive: we cannot access either machine via ssh or a terminal (it hangs after typing in the username). However, the machines still respond to pings. This continues until one node is rebooted, at which time the second node resumes normal operations. I am not entirely sure that this is an OCFS2 problem at all
2007 Jul 25
4
Problem installing on RH3 U8
Hi, I don't seem to be able to get OCFS running on RH3 U8 32-bit. [root@libra-devb-db1 root]# uname -a Linux devb-db1.mydomain 2.4.21-47.ELsmp #1 SMP Wed Jul 5 20:38:41 EDT 2006 i686 athlon i386 GNU/Linux [root@devb-db1 root]# cat /etc/redhat-release Red Hat Enterprise Linux AS release 3 (Taroon Update 8) [root@devb-db1 root]# rpm -ivh ocfs-2.4.21-EL-smp-1.0.14-1.i686.rpm Preparing...
2014 Sep 11
1
Possible deadlock due to wrong locking order; patch review requested, thanks
While testing the ocfs2 cluster, the cluster sometimes hangs. I got some information about the deadlock that causes the hang: the sys dir / lock is held and the node does not release it, which causes the cluster to hang. root@cvknode-21:~# ps -e -o pid,stat,comm,wchan=WIDE-WCHAN-COLUMN | grep D PID STAT COMMAND WIDE-WCHAN-COLUMN 7489 D jbd2/sdh-621
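The "wrong locking order" in the subject is the classic AB-BA pattern. Here is a minimal userspace C sketch (hypothetical locks lock_a and lock_b, not the actual ocfs2 code): if one thread takes A then B while another takes B then A, each can end up waiting forever on the lock the other holds; the fix is a single global lock order, which is what the sketch's worker follows.

    /* AB-BA deadlock sketch (hypothetical locks, not the ocfs2 code).
     * If thread 1 took A then B while thread 2 took B then A, each could
     * grab its first lock and wait forever for the other's. The fix is a
     * global lock order: every path takes A before B, as worker() does. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        /* Correct: both threads honor the A-before-B order. */
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        printf("thread %ld: holding A and B\n", (long)arg);
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        /* Deadlock-prone variant: lock B then A in one thread only. */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Compiled with cc -pthread, the ordered version always completes; reversing the lock order in just one thread is what can reproduce a hang like the one reported.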
2010 Jan 14
1
another fencing question
Hi, periodically one of my two cluster nodes gets fenced; here are the logs: Jan 14 07:01:44 nvr1-rc kernel: o2net: no longer connected to node nvr2-rc.minint.it (num 0) at 1.1.1.6:7777 Jan 14 07:01:44 nvr1-rc kernel: (21534,1):dlm_do_master_request:1334 ERROR: link to 0 went down! Jan 14 07:01:44 nvr1-rc kernel: (4007,4):dlm_send_proxy_ast_msg:458 ERROR: status = -112 Jan 14 07:01:44
2010 Apr 05
1
Kernel Panic, Server not coming back up
I have a relatively new test environment that is set up a little differently from your typical scenario. This is my first time using OCFS2, but I believe it should work the way I have it set up. All of this runs on VMware virtual hosts. I have two front-end web servers and one backend administrative server. They all share 2 virtual hard drives within VMware (independent, persistent, &
2023 Jun 13
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hello, Our static analysis tool finds some possible data races in the OCFS2 file system in Linux 6.4.0-rc6. In most calling contexts, variables such as res->lockname.name and res->owner are accessed while holding the lock res->spinlock. Here is an example: lockres_seq_start() --> Line 539 in dlmdebug.c spin_lock(&res->spinlock); --> Line 574 in dlmdebug.c (Lock
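As a rough sketch of the access pattern the report describes (simplified userspace types; the real code lives in fs/ocfs2/dlm and uses struct dlm_lock_resource with spin_lock()): fields like the resource owner are only stable while the resource's spinlock is held, so a reader that skips the lock races with any writer that updates the field under it.

    /* Sketch of the reported pattern with simplified userspace types;
     * the real ocfs2 code protects fields of struct dlm_lock_resource,
     * such as owner and lockname.name, with res->spinlock. */
    #include <pthread.h>
    #include <stdint.h>

    struct lock_resource {
        pthread_spinlock_t spinlock;
        uint8_t owner;          /* protected by spinlock */
        const char *name;       /* protected by spinlock */
    };

    /* Correct reader: takes the lock, so it cannot observe a
     * half-updated owner/name pair. */
    static uint8_t read_owner_locked(struct lock_resource *res)
    {
        pthread_spin_lock(&res->spinlock);
        uint8_t owner = res->owner;
        pthread_spin_unlock(&res->spinlock);
        return owner;
    }

    /* Racy reader, as in the report: accesses res->owner with no lock,
     * racing with any writer that updates it under the lock. */
    static uint8_t read_owner_racy(struct lock_resource *res)
    {
        return res->owner;      /* data race */
    }

    int main(void)
    {
        struct lock_resource res = { .owner = 7, .name = "dummy" };
        pthread_spin_init(&res.spinlock, PTHREAD_PROCESS_PRIVATE);
        return read_owner_locked(&res) == read_owner_racy(&res) ? 0 : 1;
    }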
2011 Dec 20
8
ocfs2 - Kernel panic on many writes/reads from both
Sorry, I didn't copy everything: TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 5239722 26198604 246266859 TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 6074335 30371669 285493670 TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 5239722 26198604
2010 Dec 09
2
servers blocked on ocfs2
Hi, we have recently started to use ocfs2 on some RHEL 5.5 servers (ocfs2-1.4.7). Some days ago, two servers sharing an ocfs2 filesystem, and running quite a few virtual services, stalled, in what seems to be an ocfs2 issue. These are the lines in their messages files: =====node heraclito (0)======================================== Dec 4 09:15:06 heraclito kernel: o2net: connection to node parmenides
2013 Apr 28
2
Is it one issue? Do you have any good ideas? Thanks a lot.
Hi everyone, I have some questions about OCFS2 when using it as a VM store, on Ubuntu 12.04 with kernel 3.2.40 and ocfs2-tools 1.6.4. After a network configuration change, there are some issues, as shown in the log below. Why does the message "Node 255 (he) is the Recovery Master for the dead node 255" appear in the syslog? Why is the host ZHJD-VM6 blocked until it reboots
2007 Oct 08
2
OCFS2 and LVM
Does anybody know if there is a certified procedure to back up a RAC DB 10.2.0.3 based on OCFS2, via split mirror or snapshot technology? Using Linux LVM and OCFS2, does anybody know if it is possible to dynamically extend an OCFS2 filesystem once the underlying LVM volume has been extended? Thanks in advance, Riccardo Paganini
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3, I've seen the following new errors in the kernel log: May 12 14:46:41 falcon-cl5 May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR: status = -22 Only one node has the volumes mounted in the cluster: /dev/sde on /home/apache/users/D1 type ocfs2 (rw,_netdev,noatime,heartbeat=local) /dev/sdd on /home/apache/users/D2 type ocfs2
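The patch title suggests the fix is simply to check the int status returned by ocfs2_read_locked_inode() instead of ignoring it. A hedged userspace sketch of that shape follows (illustrative stand-in names, not the real fs/ocfs2/inode.c code); note that -EINVAL is the -22 seen in the log above.

    /* Sketch of the fix's shape: a caller previously ignored the int
     * status from a read_locked_inode-style helper; the fix checks it
     * and fails the lookup instead of continuing with a half-read
     * inode. Names are illustrative stand-ins, not the kernel code. */
    #include <errno.h>
    #include <stdio.h>

    static int read_locked_inode(long blkno)
    {
        if (blkno < 0)
            return -EINVAL;   /* -22, as in the log above */
        return 0;
    }

    static int iget(long blkno)
    {
        int status = read_locked_inode(blkno);
        if (status < 0) {     /* the added error check */
            fprintf(stderr, "ERROR: status = %d\n", status);
            return status;    /* propagate instead of continuing */
        }
        return 0;
    }

    int main(void)
    {
        return iget(-1) == -EINVAL ? 0 : 1;
    }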