Displaying 20 results from an estimated 800 matches similar to: "Convert mle list to a hash"
2009 Feb 26
13
o2dlm mle hash patches - round 2
The changes from the last drop are:
1. Patch 11 removes struct dlm_lock_name.
2. Patch 12 is an unrelated bugfix. Actually, it is related to a bugfix
that we are currently retracting in mainline. The patch may need more testing.
While I did hit the condition in my testing, Marcos hasn't. I am sending it
because it can be queued for 2.6.30, which gives us more time to test.
3. Patch 13 will be useful
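For illustration only, here is a minimal sketch of the list-to-hash conversion this series is about, written against the generic <linux/hashtable.h> helpers; the names and types are placeholders, not the o2dlm code.

/* Illustrative sketch: a linear list walk keyed on a name becomes a
 * bucketed hash lookup, so large mle counts stay cheap to search. */
#include <linux/hashtable.h>
#include <linux/jhash.h>
#include <linux/kernel.h>
#include <linux/string.h>

#define DEMO_MLE_NAME_MAX 64

struct demo_mle {                      /* hypothetical, simplified entry */
	char name[DEMO_MLE_NAME_MAX];
	unsigned int namelen;
	struct hlist_node hnode;       /* replaces the old list_head linkage */
};

static DEFINE_HASHTABLE(demo_mle_hash, 7);        /* 2^7 = 128 buckets */

static void demo_mle_insert(struct demo_mle *mle)
{
	u32 key = jhash(mle->name, mle->namelen, 0);

	hash_add(demo_mle_hash, &mle->hnode, key);
}

static struct demo_mle *demo_mle_lookup(const char *name, unsigned int len)
{
	u32 key = jhash(name, len, 0);
	struct demo_mle *mle;

	/* Only entries that hashed into this bucket are compared. */
	hash_for_each_possible(demo_mle_hash, mle, hnode, key) {
		if (mle->namelen == len && !memcmp(mle->name, name, len))
			return mle;
	}
	return NULL;
}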
2009 Apr 17
26
OCFS2 1.4: Patches backported from mainline
Please review the list of patches being applied to the ocfs2 1.4 tree.
All patches list the mainline commit hash.
Thanks
Sunil
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi,
The following patches comprise the bulk of Ocfs2 updates for the
2.6.30 merge window. Aside from larger, more involved fixes, we're adding
the following features, which I will describe in the order their patches are
mailed.
Sunil's exported some more state to our debugfs files, and
consolidated some other aspects of our debugfs infrastructure. This will
further aid us in debugging
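As an aside, exporting state through debugfs usually follows the seq_file pattern sketched below; the file and function names are made up for illustration and are not the ocfs2 debugfs entries.

#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/module.h>

static int demo_state_show(struct seq_file *m, void *v)
{
	/* Dump whatever state the file is meant to expose. */
	seq_printf(m, "lockres count: %d\n", 42);
	return 0;
}

static int demo_state_open(struct inode *inode, struct file *file)
{
	return single_open(file, demo_state_show, inode->i_private);
}

static const struct file_operations demo_state_fops = {
	.owner   = THIS_MODULE,
	.open    = demo_state_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static struct dentry *demo_dir;

static void demo_debugfs_init(void)
{
	demo_dir = debugfs_create_dir("demo_o2dlm", NULL);
	debugfs_create_file("state", 0444, demo_dir, NULL, &demo_state_fops);
}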
2012 Nov 02
1
[PATCH] ocfs2:fix memory leak in dlm_add_migration_mle
After some parallel mount/umount testing on ocfs2, we got this: slab error
in kmem_cache_destroy(): cache `o2dlm_mle': Can't free all objects.
Then we found a memleak situation in dlm_add_migration_mle().
When an mle is found, it will be removed from dlm->hlist. If there is no
pointer to it at that moment, the mle will become an "orphan mle"
that no process can find and release.
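For illustration, a minimal sketch (not the o2dlm code) of the generic leak pattern described above: the object is unhashed, but the reference the hash table held is never dropped, so nothing can find it and nothing ever frees it.

#include <linux/hashtable.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct demo_mle {                     /* hypothetical, simplified */
	struct kref ref;
	struct hlist_node hnode;
};

static void demo_mle_release(struct kref *ref)
{
	kfree(container_of(ref, struct demo_mle, ref));
}

static void demo_mle_unhash(struct demo_mle *mle)
{
	hash_del(&mle->hnode);
	/*
	 * A buggy version stops here: the reference the hash table held is
	 * leaked and the object becomes an "orphan" nobody can look up or
	 * free. Dropping that reference once the object is unhashed avoids
	 * the leak:
	 */
	kref_put(&mle->ref, demo_mle_release);
}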
2008 Apr 02
10
[PATCH 0/62] Ocfs2 updates for 2.6.26-rc1
The following series of patches comprises the bulk of our outstanding
changes for Ocfs2.
Aside from the usual set of cleanups and fixes that were inappropriate for
2.6.25, there are a few highlights:
The '/sys/o2cb' directory has been moved to '/sys/fs/o2cb'. The new location
meshes better with modern sysfs layout. A symbolic link has been placed in
the old location so as to
2009 Jan 14
15
Backport patches to ocfs2 1.4 tree from mainline
Found 15 patches (out of 162) that appeared relevant to ocfs2 1.4.
Please review.
Sunil
2009 Apr 22
1
[PATCH 1/1] OCFS2: speed up dlm_lock_resource hash_table lookups
#backporting the 3 patches at http://kernel.us.oracle.com/~smushran/srini/ to 1.2.
Enlarge the hash_table capacity to speed up hash_table lookups.
Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
--
diff -up ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c.orig ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c
--- ./svnocfs2-1.2/fs/ocfs2/dlm/dlmdebug.c.orig 2009-04-22 11:00:37.000000000 +0800
+++
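The speed-up here is simple arithmetic: with N entries spread over B buckets, an average lookup walks about N/B chained entries, so growing B shrinks the chains. A hypothetical sizing, using the generic hashtable helpers rather than the 1.2 code:

#include <linux/hashtable.h>

/*
 * 2^10 = 1024 buckets instead of, say, 2^5 = 32: with 16k lockres
 * entries the average chain drops from ~512 to ~16 comparisons.
 */
static DEFINE_HASHTABLE(demo_lockres_hash, 10);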
2007 May 17
1
[PATCH] ocfs: use list_for_each_entry where beneficial
Signed-off-by: Christoph Hellwig <hch@lst.de>
Index: linux-2.6/fs/ocfs2/cluster/tcp.c
===================================================================
--- linux-2.6.orig/fs/ocfs2/cluster/tcp.c 2007-05-06 13:51:17.000000000 +0200
+++ linux-2.6/fs/ocfs2/cluster/tcp.c 2007-05-17 15:00:14.000000000 +0200
@@ -261,14 +261,12 @@ out:
static void o2net_complete_nodes_nsw(struct o2net_node
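For reference, a generic before/after of the conversion the patch title describes; the struct and list names are placeholders, not the o2net code.

#include <linux/list.h>

struct demo_item {
	int value;
	struct list_head list;
};

static int demo_sum(struct list_head *head)
{
	struct list_head *pos;
	struct demo_item *item;
	int sum = 0;

	/* Before: open-coded iterator plus list_entry() in the body. */
	list_for_each(pos, head) {
		item = list_entry(pos, struct demo_item, list);
		sum += item->value;
	}

	/* After: list_for_each_entry() hands back the typed entry directly. */
	list_for_each_entry(item, head, list)
		sum += item->value;

	return sum;
}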
2009 Feb 03
5
[PATCH 1/4] ocfs2/dlm: Retract fix for race between purge and migrate
Mainline commit d4f7e650e55af6b235871126f747da88600e8040 attempts to delay
the dlm_thread's sending of the drop ref message while the lockres is being
migrated. The problem is that we make the dlm_thread wait for the migration
to complete. This causes a deadlock as dlm_thread also participates in the
lockres migration process.
A better fix for the original oss bugzilla#1012 is in testing.
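For context, a minimal sketch (not the o2dlm code) of the self-deadlock shape described above: a worker thread blocks on an event whose completion depends on work that same thread would have to keep processing.

#include <linux/completion.h>

static DECLARE_COMPLETION(demo_migration_done);

static void demo_worker_thread(void)
{
	/*
	 * BAD: the worker blocks here, but finishing the migration requires
	 * work items this very thread would have processed, so complete()
	 * is never called and the thread hangs forever.
	 */
	wait_for_completion(&demo_migration_done);
}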
2009 May 03
1
Deadlock in dlmmaster.c
Hi,
I've found a possible deadlock in fs/ocfs2/dlm/dlmmaster.c in version
2.6.28 (this code is probably present in newer versions too).
Could someone confirm this? Thank you.
fs/ocfs2/dlm/dlmmaster.c
==================
function dlm_master_request_handler: (res->spinlock <- dlm->master_lock)
-----------------------------------
spin_lock(&res->spinlock); at line 1427
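For illustration, the classic AB-BA shape the report points at, with placeholder locks standing in for res->spinlock and dlm->master_lock; this is a generic sketch, not the dlmmaster.c code.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_res_lock);      /* stands in for res->spinlock */
static DEFINE_SPINLOCK(demo_master_lock);   /* stands in for dlm->master_lock */

static void demo_path_a(void)
{
	spin_lock(&demo_res_lock);
	spin_lock(&demo_master_lock);       /* order: res -> master */
	/* ... */
	spin_unlock(&demo_master_lock);
	spin_unlock(&demo_res_lock);
}

static void demo_path_b(void)
{
	spin_lock(&demo_master_lock);
	spin_lock(&demo_res_lock);          /* order: master -> res (AB-BA) */
	/* ... */
	spin_unlock(&demo_res_lock);
	spin_unlock(&demo_master_lock);
}

/*
 * If path A holds demo_res_lock while path B holds demo_master_lock, each
 * now waits forever for the lock the other holds.
 */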
2014 Sep 11
1
Possible deadlock due to wrong locking order; patch review requested, thanks
While testing the ocfs2 cluster, the cluster sometimes hangs.
I got some information about the deadlock that causes the hang: the sys dir / lock is held and the node holding it did not release it, which hangs the whole cluster.
root at cvknode-21:~# ps -e -o pid,stat,comm,wchan=WIDE-WCHAN-COLUMN | grep D
PID STAT COMMAND WIDE-WCHAN-COLUMN
7489 D jbd2/sdh-621
2010 Aug 26
1
[PATCH 2/5] ocfs2/dlm: add lockres as parameter to dlm_new_lock()
Whether the dlm_lock needs to access the lvb depends on the dlm_lock_resource it belongs to. So a new parameter, "struct dlm_lock_resource *res", is added to dlm_new_lock() so that we know whether we need to allocate an lvb for the dlm_lock. We also have to make the lockres available before calling dlm_new_lock().
Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com>
---
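A rough sketch of the interface change being described: pass the resource into the constructor so it can decide whether the lock needs an LVB buffer. The types and flag below are simplified placeholders, not the actual dlm structures.

#include <linux/slab.h>

#define DEMO_LVB_LEN 64

struct demo_lockres {
	unsigned int flags;
#define DEMO_RES_HAS_LVB 0x1
};

struct demo_lock {
	struct demo_lockres *res;
	char *lvb;                 /* allocated only when the resource uses one */
};

static struct demo_lock *demo_new_lock(struct demo_lockres *res, gfp_t gfp)
{
	struct demo_lock *lock = kzalloc(sizeof(*lock), gfp);

	if (!lock)
		return NULL;

	lock->res = res;
	if (res->flags & DEMO_RES_HAS_LVB) {
		lock->lvb = kzalloc(DEMO_LVB_LEN, gfp);
		if (!lock->lvb) {
			kfree(lock);
			return NULL;
		}
	}
	return lock;
}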
2010 Jun 19
3
[PATCH 1/1] ocfs2 fix o2dlm dlm run purgelist
There are two problems in dlm_run_purgelist
1. If a lockres is found to be in use, dlm_run_purgelist keeps trying to purge
the same lockres instead of moving on to the next one.
2. When a lockres is found unused, dlm_run_purgelist releases the lockres spinlock
before setting DLM_LOCK_RES_DROPPING_REF and calling dlm_purge_lockres.
The spinlock is reacquired, but in this window the lockres can get reused. This
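For illustration of point 2, a minimal sketch (with placeholder types, not the o2dlm code) of the fix shape: mark the resource while its spinlock is still held, so nothing can start using it between the check and the purge.

#include <linux/spinlock.h>

#define DEMO_RES_DROPPING_REF 0x1

struct demo_lockres {
	spinlock_t lock;
	unsigned int state;
	int in_use;
};

static void demo_purge(struct demo_lockres *res)
{
	spin_lock(&res->lock);
	if (res->in_use) {
		spin_unlock(&res->lock);
		return;               /* move on to the next lockres */
	}
	/*
	 * Set the flag before the lock is dropped; dropping it first and
	 * re-taking it leaves a window in which the lockres can be reused
	 * while we still go on to purge it.
	 */
	res->state |= DEMO_RES_DROPPING_REF;
	spin_unlock(&res->lock);

	/* ... send the drop-ref message, then free the resource ... */
}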
2010 Oct 08
23
O2CB global heartbeat - hopefully final drop!
All,
This is hopefully the final drop of the patches for adding global heartbeat
to the o2cb stack.
The diff from the previous set is here:
http://oss.oracle.com/~smushran/global-hb-diff-2010-10-07
Implemented most of the suggestions provided by Joel and Wengang.
The most important one was to activate the feature only at the end.
Also, got a mostly clean run with checkpatch.pl.
Sunil
2023 Jun 13
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hello,
Our static analysis tool finds some possible data races in the OCFS2 file
system in Linux 6.4.0-rc6.
In most calling contexts, variables such as res->lockname.name and
res->owner are accessed while holding the lock res->spinlock. Here is an
example:
lockres_seq_start() --> Line 539 in dlmdebug.c
spin_lock(&res->spinlock); --> Line 574 in dlmdebug.c (Lock
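A minimal sketch of the contrast the report draws, using simplified placeholder types rather than the ocfs2 structures: most readers take res->spinlock around the field, so an unlocked reader elsewhere is the data-race candidate.

#include <linux/spinlock.h>

struct demo_lockres {
	spinlock_t spinlock;
	unsigned char owner;
};

static unsigned char demo_read_owner_locked(struct demo_lockres *res)
{
	unsigned char owner;

	spin_lock(&res->spinlock);
	owner = res->owner;          /* consistent with concurrent writers */
	spin_unlock(&res->spinlock);
	return owner;
}

static unsigned char demo_read_owner_racy(struct demo_lockres *res)
{
	return res->owner;           /* unlocked read: flagged as a data race */
}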
2023 Jun 16
1
[BUG] ocfs2/dlm: possible data races in dlm_drop_lockres_ref_done() and dlm_get_lock_resource()
Hi,
On 6/13/23 4:23 PM, Tuo Li wrote:
> Hello,
>
> Our static analysis tool finds some possible data races in the OCFS2 file
> system in Linux 6.4.0-rc6.
>
> In most calling contexts, variables such as res->lockname.name and
> res->owner are accessed while holding the lock res->spinlock. Here is an
> example:
>
> lockres_seq_start() --> Line 539
2009 Jul 07
2
[PATCH 1/1] ocfs2-devel: trivial fix for s/migrate/migration/ in dlmrecovery.c, line 1121
In dlmrecovery.c:1121, replace 'migrate' with 'migration' to keep the log message
consistent with other lines carrying similar info in the same file.
Signed-off-by: Jeff Liu <jeff.liu at oracle.com>
---
fs/ocfs2/dlm/dlmrecovery.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index
2009 Jul 02
1
[PATCH 1/1] NET_MAX_PAYLOAD_BYTES typo?
I've been reading fs/ocfs2/dlm/dlmcommon.h to study the structure of dlm_migratable_lockres.
However, I may have found a typo in DLM_MIG_LOCKRES_MAX_LEN:
NET_MAX_PAYLOAD_BYTES should be O2NET_MAX_PAYLOAD_BYTES, I think.
In the comments, sizeof(net_msg) should be sizeof(o2net_msg), going by fs/ocfs2/cluster/tcp.h.
Signed-off-by: Jeff Liu <jeff.liu at oracle.com>
---
2008 Jan 09
2
[PATCH 1/1] Clear joining_node no matter whether it is in the domain map or not.
Currently the dlm join process has 2 steps: query join and assert join.
After query join, the joined node sets its joining_node. So if the joining
node happens to panic before the 2nd step, the joined node fails to clear
its joining_node flag because that node isn't in the domain map. This causes
at least 2 problems.
1. All new join requests will fail, so no new node can mount
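For illustration, a minimal sketch of the fix shape the subject line suggests, with simplified placeholder names and types rather than the o2dlm code: on node-down, clear the recorded joining node whether or not it ever made it into the domain map.

#include <linux/bitmap.h>
#include <linux/spinlock.h>

#define DEMO_MAX_NODES 255
#define DEMO_NODE_NONE 255

struct demo_domain {
	spinlock_t lock;
	DECLARE_BITMAP(domain_map, DEMO_MAX_NODES);
	unsigned int joining_node;   /* set after query join, cleared after assert join */
};

static void demo_node_down(struct demo_domain *dlm, unsigned int node)
{
	spin_lock(&dlm->lock);
	clear_bit(node, dlm->domain_map);
	/*
	 * Clear joining_node unconditionally: the node may have died after
	 * query join but before assert join, i.e. before it ever entered
	 * the domain map, and leaving the flag set blocks all later joins.
	 */
	if (dlm->joining_node == node)
		dlm->joining_node = DEMO_NODE_NONE;
	spin_unlock(&dlm->lock);
}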