search for: dlm_locks

Displaying 15 results from an estimated 15 matches for "dlm_locks".

2007 May 17
1
[PATCH] ocfs: use list_for_each_entry where beneficial
...);
-	list_for_each(iter, queue) {
-		lock = list_entry (iter, struct dlm_lock, list);
-
+	list_for_each_entry(lock, queue, list) {
 		/* add another lock. */
 		total_locks++;
 		if (!dlm_add_lock_to_array(lock, mres, i))
@@ -1717,7 +1701,6 @@ static int dlm_process_recovery_data(str
 	struct dlm_lockstatus *lksb = NULL;
 	int ret = 0;
 	int i, j, bad;
-	struct list_head *iter;
 	struct dlm_lock *lock = NULL;
 	u8 from = O2NM_MAX_NODES;
 	unsigned int added = 0;
@@ -1755,8 +1738,7 @@ static int dlm_process_recovery_data(str
 	spin_lock(&res->spinlock);
 	for (j = DLM_GRANTED_LIST; j <...
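For context, the conversion this patch performs swaps the two-step list_for_each()/list_entry() idiom for the combined list_for_each_entry() macro, which lets the temporary iterator variable be dropped. A minimal sketch of the pattern, assuming a hypothetical struct item (not code from the patch):

#include <linux/list.h>
#include <linux/printk.h>

struct item {
	int value;
	struct list_head list;	/* linkage into the containing list */
};

/* Before: an explicit struct list_head iterator plus list_entry()
 * to recover the containing structure on every iteration. */
static void show_all_old(struct list_head *queue)
{
	struct list_head *iter;
	struct item *it;

	list_for_each(iter, queue) {
		it = list_entry(iter, struct item, list);
		pr_info("value=%d\n", it->value);
	}
}

/* After: list_for_each_entry() folds both steps into one macro,
 * so the iterator variable disappears, as in the diff above. */
static void show_all_new(struct list_head *queue)
{
	struct item *it;

	list_for_each_entry(it, queue, list)
		pr_info("value=%d\n", it->value);
}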
2010 Aug 26
1
[PATCH 2/5] ocfs2/dlm: add lockres as parameter to dlm_new_lock()
...mon.h b/fs/ocfs2/dlm/dlmcommon.h
index 49e6492..4e10aa6 100644
--- a/fs/ocfs2/dlm/dlmcommon.h
+++ b/fs/ocfs2/dlm/dlmcommon.h
@@ -785,7 +785,8 @@ static inline unsigned long long dlm_get_lock_cookie_seq(u64 cookie)
 }
 
 struct dlm_lock * dlm_new_lock(int type, u8 node, u64 cookie,
-			       struct dlm_lockstatus *lksb);
+			       struct dlm_lockstatus *lksb,
+			       struct dlm_lock_resource *res);
 void dlm_lock_get(struct dlm_lock *lock);
 void dlm_lock_put(struct dlm_lock *lock);
diff --git a/fs/ocfs2/dlm/dlmlock.c b/fs/ocfs2/dlm/dlmlock.c
index 5c7ece7..7d0bef2 100644
--- a/fs/ocfs2/dlm/dlmlo...
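For context, the change threads the lock resource into dlm_new_lock() so a lock can be bound to its resource at allocation time. A hedged caller-side sketch (the surrounding function and error path are illustrative, not quoted from the patch; the dlmlock.c hunk above is truncated):

/* Illustrative caller after the API change: res is available when the
 * lock is created, so no separate attach step is needed afterwards. */
static enum dlm_status attach_new_lock(int type, u8 node, u64 cookie,
				       struct dlm_lockstatus *lksb,
				       struct dlm_lock_resource *res)
{
	struct dlm_lock *lock = dlm_new_lock(type, node, cookie, lksb, res);

	if (!lock)
		return DLM_SYSERR;	/* allocation failed */
	/* ... queue the lock on res ... */
	return DLM_NORMAL;
}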
2009 Jul 31
3
[PATCH] Remove redundant BUG_ON in __dlm_queue_ast
Remove redundant BUG_ON()

Signed-off-by: Goldwyn Rodrigues <rgoldwyn at suse.de>
---
diff --git a/fs/ocfs2/dlm/dlmast.c b/fs/ocfs2/dlm/dlmast.c
index d07ddbe..81eff8e 100644
--- a/fs/ocfs2/dlm/dlmast.c
+++ b/fs/ocfs2/dlm/dlmast.c
@@ -103,7 +103,6 @@ static void __dlm_queue_ast(struct dlm_ctxt *dlm, struct dlm_lock *lock)
 		     lock->ast_pending, lock->ml.type);
 		BUG();
 	}
-
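The cleanup targets an assertion that can never fire: the branch above it already calls BUG() on the same condition, so execution only reaches the assertion when the condition is false. A minimal illustrative sketch (hypothetical condition, not the exact dlmast.c lines, which are truncated above):

/* If ast_pending is set we BUG() inside the branch, so the code below
 * runs only when it is clear; the trailing BUG_ON() is therefore
 * provably redundant and is what such a cleanup removes. */
if (lock->ast_pending) {
	pr_err("AST already pending for this lock\n");
	BUG();
}
BUG_ON(lock->ast_pending);	/* redundant: condition is always false here */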
2009 Jun 19
1
[PATCH] ocfs2: Provide the ocfs2_dlm_lvb_valid() stack API.
The Lock Value Block (LVB) of a DLM lock can be lost when nodes die and the DLM cannot reconstruct its state. Clients of the DLM need to know this. ocfs2's internal DLM, o2dlm, explicitly zeroes out the LVB when it loses track of the state. This is not a standard behavior, but ocfs2 has always relied on it. Thus, an o2dlm LVB is always "valid". ocfs2 now supports both o2dlm and...
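The practical upshot is that a stack client must ask whether the LVB survived before trusting cached state. A hedged sketch of the caller side, where only ocfs2_dlm_lvb_valid() comes from the patch and both helpers are hypothetical:

/* Trust the lock value block only if the DLM reports it intact;
 * otherwise fall back to re-reading authoritative state from disk. */
if (ocfs2_dlm_lvb_valid(lksb))
	use_cached_metadata(lksb);		/* hypothetical helper */
else
	refresh_metadata_from_disk(inode);	/* hypothetical helper */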
2014 Sep 10
1
How to unlock a blocked resource? Thanks
...the cluster hangs, possibly because of a deadlock. Using the debugfs.ocfs2 tool we found that one node has been holding a resource for a long time while another node is still waiting for it, so the cluster hangs.

debugfs.ocfs2 -R "fs_locks -B" /dev/dm-0
debugfs.ocfs2 -R "dlm_locks LOCKID_XXX" /dev/dm-0

How can we unlock the lock held by that node? Is there a command to release the resource? Thanks.
2010 Aug 26
1
Announce: Lustre 1.8.4 is available!
...ferent files.
  https://bugzilla.lustre.org/show_bug.cgi?id=17485
- Fix multiple issues related to the grant space. The problem manifests
  itself as an OST failing write requests with ENOSPC while some free
  space remains.
  https://bugzilla.lustre.org/show_bug.cgi?id=22755
- The dlm_locks slab can grow significantly and consume a lot of memory
  on the server. This is fixed by imposing a hard limit on grant_plan.
  https://bugzilla.lustre.org/show_bug.cgi?id=22476
* Lustre sysadmins may also be interested in the details of:
  - https://bugzilla.lustre.org/show_bug.cgi...
2008 Sep 01
1
(no subject)
Hello, We just experienced a hang that looks superficially very similar to http://www.mail-archive.com/ocfs2-users at oss.oracle.com/msg02359.html There are 3 nodes in the cluster, running ocfs2-1.4.1 on RHEL 5.2. Versions and uname output are in the attached text file, which also includes fs_locks dumps and various other diagnostics. The lockup happened when we were restarting a Java application that was...
2014 Sep 26
2
One node hangs up, issue requiring a good idea, thanks
...y
RO Holders: 0  EX Holders: 0
Pending Action: Convert  Pending Unlock Action: None
Requested Mode: Exclusive  Blocking Mode: No Lock
PR > Gets: 318317  Fails: 0  Waits (usec) Total: 128622  Max: 3
EX > Gets: 706878  Fails: 0  Waits (usec) Total: 284967  Max: 2
Disk Refreshes: 0
debugfs: dlm_locks M00000000000000046e011700000000
Lockres: M00000000000000046e011700000000   Owner: 2   State: 0x0
Last Used: 0   ASTs Reserved: 0   Inflight: 0   Migration Pending: No
Refs: 4   Locks: 2   On Lists: None
Reference Map: 1
 Lock-Queue  Node  Level  Conv  Cookie  Refs  AST  BAST  Pendi...
2009 Jan 14
15
Backport patches to ocfs2 1.4 tree from mainline
Found 15 patches (out of 162) that appeared relevant to ocfs2 1.4. Please review. Sunil
2006 Apr 14
1
[RFC: 2.6 patch] fs/ocfs2/: remove unused exports
This patch removes the following unused EXPORT_SYMBOL_GPL's:
- cluster/heartbeat.c: o2hb_check_node_heartbeating_from_callback
- cluster/heartbeat.c: o2hb_stop_all_regions
- cluster/nodemanager.c: o2nm_get_node_by_num
- cluster/nodemanager.c: o2nm_configured_node_map
- cluster/nodemanager.c: o2nm_get_node_by_ip
- cluster/nodemanager.c: o2nm_node_put
- cluster/nodemanager.c: o2nm_node_get
- ...
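For readers unfamiliar with the mechanism: EXPORT_SYMBOL_GPL() makes a symbol linkable from GPL-compatible modules, so an export with no external users is dead weight, and removing it means deleting only the macro invocation. A minimal sketch with a hypothetical symbol (not one of those listed above):

#include <linux/module.h>

/* The function itself stays; the export below is the only line such a
 * cleanup patch deletes once no other module calls the symbol. */
int o2example_node_count(void)	/* hypothetical symbol */
{
	return 0;
}
EXPORT_SYMBOL_GPL(o2example_node_count);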
2008 Apr 02
10
[PATCH 0/62] Ocfs2 updates for 2.6.26-rc1
The following series of patches comprises the bulk of our outstanding changes for Ocfs2. Aside from the usual set of cleanups and fixes that were inappropriate for 2.6.25, there are a few highlights: The '/sys/o2cb' directory has been moved to '/sys/fs/o2cb'. The new location meshes better with modern sysfs layout. A symbolic link has been placed in the old location so as to...
2012 Mar 07
1
[HELP!] GFS2 in Xen 4.1.2 does not work!
2014 Aug 21
1
Cluster blocked, so we have to reboot all nodes to avoid it. Are there any patches for it? Thanks.
Hi, everyone. Our cluster has blocked several times, and the log is always the same; we have to reboot every node of the cluster to recover. Is there any patch that fixes this bug?

[<ffffffff817539a5>] schedule_timeout+0x1e5/0x250
[<ffffffff81755a77>] wait_for_completion+0xa7/0x160
[<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0
[<ffffffffa0564063>]
2006 Aug 15
0
[git patches] ocfs2 updates
This set of patches includes a few dlm related fixes from Kurt, and a small, trivial cleanup by Adrian. Also included are three disk allocation patches by me - two fixes and one incremental improvement in our allocation strategy. These have been around since early June, so I think they've had enough testing that they can go upstream. Please pull from 'upstream-linus' branch of...
2010 Jan 21
4
dlmglue fixes
David, So here are the two patches. Remove all patches that you have and apply these. The first one is straightforward. The second one will hopefully fix the livelock issue you have been encountering. People reviewing the patches should note that the second one is slightly different from the one I posted earlier. It removes the BUG_ON in the if condition where we jump to update_holders. The...