search for: o2dlm

Displaying 17 results from an estimated 31 matches for "o2dlm".

2013 Apr 28
2
Is it one issue? Do you have any good ideas? Thanks a lot.
...9.124027] Call Trace:
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124035] [<ffffffff8165a55f>] schedule+0x3f/0x60
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124039] [<ffffffff8165aba5>] schedule_timeout+0x2a5/0x320
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124046] [<ffffffffa036e020>] ? o2dlm_lock_ast_wrapper+0x20/0x20 [ocfs2_stack_o2cb]
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124051] [<ffffffff8158346a>] ? do_tcp_sendpages+0x5ba/0x6e0
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124055] [<ffffffff8165a39f>] wait_for_common+0xdf/0x180
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124...
2010 Apr 14
2
[PATCH 1/2] ocfs2/dlm: Make o2dlm domain join/leave messages KERN_NOTICE
o2dlm join and leave messages are more than informational as they are required when debugging locking issues. This patch changes them from KERN_INFO to KERN_NOTICE. Signed-off-by: Sunil Mushran <sunil.mushran at oracle.com> --- fs/ocfs2/dlm/dlmdomain.c | 6 +++--- 1 files changed, 3 insertions(+...
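
For context, a minimal userspace sketch of why the level bump matters: NOTICE outranks INFO, so a syslog configuration that drops *.info traffic still records the join/leave events. The syslog tag and domain name below are made up; the actual patch only changes the printk level in fs/ocfs2/dlm/dlmdomain.c.

#include <syslog.h>

int main(void)
{
    /* "o2dlm-demo" is a made-up tag for this sketch. */
    openlog("o2dlm-demo", LOG_PID, LOG_DAEMON);

    /* LOG_NOTICE outranks LOG_INFO, so a config that filters
     * out *.info still records the join/leave event. */
    syslog(LOG_NOTICE, "Joining domain %s", "EXAMPLEDOMAIN");
    syslog(LOG_INFO, "this informational line may be dropped");

    closelog();
    return 0;
}
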
2009 Jun 19
1
[PATCH] ocfs2: Provide the ocfs2_dlm_lvb_valid() stack API.
The Lock Value Block (LVB) of a DLM lock can be lost when nodes die and the DLM cannot reconstruct its state. Clients of the DLM need to know this. ocfs2's internal DLM, o2dlm, explicitly zeroes out the LVB when it loses track of the state. This is not standard behavior, but ocfs2 has always relied on it. Thus, an o2dlm LVB is always "valid". ocfs2 now supports both o2dlm and fs/dlm via the stack glue. When fs/dlm loses track of an LVB's state, it sets a f...
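
A hedged C sketch of the distinction described above; demo_lksb, lvb_state_lost, and demo_lvb_valid are illustrative stand-ins, not the real stack-glue API:

/* Illustrative stand-ins only -- not the real ocfs2 stack-glue types. */
struct demo_lksb {
    int  lvb_state_lost;   /* an fs/dlm-style stack sets this on lost state */
    char lvb[64];          /* the Lock Value Block itself */
};

/* o2dlm zeroes a lost LVB instead of flagging it, so its LVB is always
 * "valid"; an fs/dlm-style stack has to consult the flag it set. */
static int demo_lvb_valid(const struct demo_lksb *lksb, int using_o2dlm)
{
    if (using_o2dlm)
        return 1;                  /* o2dlm: always valid by convention */
    return !lksb->lvb_state_lost;  /* fs/dlm: valid only if state survived */
}

int main(void)
{
    struct demo_lksb lksb = { .lvb_state_lost = 1 };
    return demo_lvb_valid(&lksb, 0);   /* 0: fs/dlm lost the LVB state */
}
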
2011 Oct 18
12
Unable to stop cluster as heartbeat region still active
Hi, I have a 2-node ocfs2 cluster running UEK 2.6.32-100.0.19.el5, ocfs2console-1.6.3-2.el5, ocfs2-tools-1.6.3-2.el5. My problem is that every time I try to run /etc/init.d/o2cb stop, it fails with this error:
Stopping O2CB cluster CLUSTER: Failed
Unable to stop cluster as heartbeat region still active
There is no active mount point. I tried to manually stop the heartbeat with
2011 Jul 06
2
Slow umounts on SLES10 patchlevel 3 ocfs2
Hi, we are using SLES10 Patchlevel 3 with 12 nodes hosting tomcat application servers. The cluster had been running for some time (about 200 days) without problems. Recently we needed to shut down the cluster for maintenance and experienced very long times for the umount of the filesystem. It took something like 45 minutes for each node and filesystem (12 x 45 minutes shutdown time). As a result the planned
2013 Feb 27
2
ocfs2 bug reports, any advices? thanks
...Linux Server21 rebooted after losing its connection to the iSCSI SAN, so Server20 recovered resource locks for Server21. Server20:
Feb 27 09:29:31 Server20 kernel: [424826.197532] o2net: No longer connected to node Server21 (num 2) at 192.168.20.21:7100
Feb 27 09:29:31 Server20 kernel: [424826.197633] o2cb: o2dlm has evicted node 2 from domain C5FDF4DB054B49B587DF8D4848443259
Feb 27 09:29:35 Server20 kernel: [424830.079130] o2dlm: Begin recovery on domain C5FDF4DB054B49B587DF8D4848443259 for node 2
Feb 27 09:29:35 Server20 kernel: [424830.079156] o2dlm: Node 1 (me) is the Recovery Master for the dead node 2...
2012 Jun 14
0
[ocfs2-announce] OCFS2 1.4.10-1 released
...e compat code to handle changes in EL5.6
ocfs2: Up version to 1.4.8
ocfs2: cluster Add per region debugfs file to show the elapsed time
ocfs2: cluster Create debugfs dir for heartbeat regions
Ocfs2: Handle empty list in lockres_seq_start for dlmdebug.c
ocfs2: Don't walk off the end of fast symlinks
o2dlm: force free mles during dlm exit
ocfs2: tighten up strlen checking
ocfs2: Remove the redundant cpu_to_le64
ocfs2: Move orphan scan work to ocfs2_wq
ocfs2: Make nointr a default mount option
ocfs2: print node when tcp fails
ocfs2_dlmfs: Fix math error when reading LVB
ocfs2: Check the owner of a loc...
2010 Jun 19
3
[PATCH 1/1] ocfs2 fix o2dlm dlm run purgelist
There are two problems in dlm_run_purgelist: 1. If a lockres is found to be in use, dlm_run_purgelist keeps trying to purge the same lockres instead of trying the next lockres. 2. When a lockres is found unused, dlm_run_purgelist releases the lockres spinlock before setting DLM_LOCK_RES_DROPPING_REF and calls dlm_purge_lockres. The spinlock is reacquired, but in this window the lockres can get reused. This
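
A hedged userspace sketch of the corrected loop shape the patch description implies; pthread mutexes stand in for the lockres spinlock, and all names (res, in_use, dropping_ref) are illustrative, not the real fs/ocfs2/dlm structures:

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct res {
    pthread_mutex_t lock;
    bool in_use;
    bool dropping_ref;   /* stands in for DLM_LOCK_RES_DROPPING_REF */
    struct res *next;
};

static void run_purgelist(struct res *head)
{
    for (struct res *r = head; r != NULL; r = r->next) {  /* (1) always advance */
        pthread_mutex_lock(&r->lock);
        if (r->in_use) {
            pthread_mutex_unlock(&r->lock);
            continue;             /* busy: move on, don't retry this one */
        }
        r->dropping_ref = true;   /* (2) mark it BEFORE dropping the lock,
                                     closing the reuse window */
        pthread_mutex_unlock(&r->lock);
        /* ... deref/purge the resource outside the lock ... */
    }
}

int main(void)
{
    struct res r = { PTHREAD_MUTEX_INITIALIZER, false, false, NULL };
    run_purgelist(&r);
    return r.dropping_ref ? 0 : 1;   /* the unused entry gets marked */
}
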
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604
2014 Aug 22
2
ocfs2 problem on ctdb cluster
...-t ocfs2 /dev/drbd1 /cluster we get:
mount.ocfs2: Unable to access cluster service while trying to join the group
We then call: sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [ 1181.117051] o2dlm: Joining domain B044256AC5F14DB089B4C87F28EE9583 ( 1 ) 1 nodes
Aug 22 13:48:23 uc1 kernel: [ 1181.258192] ocfs2: Mounting device (147,1) on (node 1, slot 0) with ordered data mode.
mount | grep cluster
/dev/drbd1 on /cluster type ocfs2 (rw,_netdev,heartbeat=local)
Why doesn't o2cb 'stic...
2010 May 20
0
[GIT PULL] ocfs2 updates for 2.6.35
...up localalloc mount option size parsing
ocfs2: increase the default size of local alloc windows
ocfs2: change default reservation window sizes
ocfs2: Add dir_resv_level mount option
Srinivas Eeda (1):
  o2net: log socket state changes
Sunil Mushran (3):
  ocfs2/dlm: Make o2dlm domain join/leave messages KERN_NOTICE
  ocfs2: Make nointr a default mount option
  ocfs2/dlm: Increase o2dlm lockres hash size
Tao Ma (11):
  ocfs2: Some tiny bug fixes for discontiguous block allocation.
  ocfs2: ocfs2_group_bitmap_size has to handle old volume.
  ocfs2: Add...
2007 Feb 06
1
ocfs2-tools-1.2.2 compile.
Hi, The ocfs2 package compiled perfectly, but the tools did not. The test setup is using opensuse10.1 - updates applied. For "ocfs2-tools-1.2.2":
In file included from include/ocfs2.h:60, from alloc.c:32:
include/ocfs2_fs.h: In function 'ocfs2_fast_symlink_chars':
include/ocfs2_fs.h:566: warning: implicit declaration of function 'offsetof'
include/ocfs2_fs.h:566: error: expected
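
An "implicit declaration of offsetof" usually means the translation unit never pulled in <stddef.h>, where offsetof is defined; that this is the cause of this particular build failure is an assumption, but the minimal reproduction and fix look like this (the toy struct is illustrative):

#include <stddef.h>   /* offsetof lives here; omitting it triggers the error above */
#include <stdio.h>

struct fast_symlink { int len; char data[1]; };   /* toy struct for the demo */

int main(void)
{
    printf("%zu\n", offsetof(struct fast_symlink, data));   /* 4 on common ABIs */
    return 0;
}
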
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3 I've seen the following new errors in the kernel log:
May 12 14:46:41 falcon-cl5
May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR: status = -22
Only one node in the cluster has volumes mounted:
/dev/sde on /home/apache/users/D1 type ocfs2 (rw,_netdev,noatime,heartbeat=local)
/dev/sdd on /home/apache/users/D2 type ocfs2
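
A hedged userspace sketch of the pattern the subject line asks for, checking the returned status instead of ignoring it; read_locked_inode_demo is a hypothetical stand-in for the kernel function:

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for ocfs2_read_locked_inode(); it just
 * returns the error seen in the log. */
static int read_locked_inode_demo(void)
{
    return -EINVAL;   /* -22, matching "ERROR: status = -22" above */
}

int main(void)
{
    int status = read_locked_inode_demo();
    if (status < 0) {
        fprintf(stderr, "read_locked_inode: status = %d\n", status);
        return 1;     /* bail out instead of using a half-read inode */
    }
    return 0;
}
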
2011 Apr 15
0
ocfs2 1.6 2.6.38-2-amd64 kernel panic when unmount
...Fatal exception. Unmount on the other node finishes.
[18066.352020] o2net: no longer connected to node node1 (num 1) at 192.168.0.5:7777
[18066.367353] ocfs2: Unmounting device (147,0) on (node 0)
[18142.527745] o2net: accepted connection from node node1 (num 1) at 192.168.0.5:7777
[18146.144648] o2dlm: Nodes in domain 8FE8B448109C4DBEA78241004023AB1E: 0 1
[18146.158441] ocfs2: Mounting device (147,0) on (node 0, slot 0) with ordered data mode.
Thanks KuiZ
2011 May 18
0
[GIT PULL] ocfs2 and configfs fixes for 2.6.39-rc
...it fixes
Joel Becker (2):
  configfs: Don't try to d_delete() negative dentries.
  configfs: Fix race between configfs_readdir() and configfs_d_iput()
Marcus Meissner (1):
  ocfs2: Initialize data_ac (might be used uninitialized)
Sunil Mushran (5):
  ocfs2/dlm: Use negotiated o2dlm protocol version
  ocfs2/cluster: Increase the live threshold for global heartbeat
  ocfs2/cluster: Heartbeat mismatch message improved
  ocfs2: Skip mount recovery for hard-ro mounts
  ocfs2/dlm: Target node death during resource migration leads to thread spin
Tristan Ye (1):...
2008 Apr 04
1
OCFS2 and iSCSI
Is it possible to implement OCFS2 on an iSCSI volume that is shared between 5 web servers? Thanks, LDB
2008 Jun 09
0
OCFS2 1.2.9-1 for RHEL4 and RHEL5 released
...00% just before a node fences due to network idle timeout. This issue could previously have been misdiagnosed as occurring due to a low cluster timeout setting, but it is instead caused by a bug as described in this bugzilla. http://oss.oracle.com/bugzilla/show_bug.cgi?id=919
* Plugs memory leaks in o2dlm
These leaks were detected during mainline testing. They went undetected for so long because not only was the size of the leak small, it reproduced under very specific conditions described in the patch fixes below. http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2c5...