Displaying 15 results from an estimated 15 matches for "__ocfs2_cluster_lock".
2013 May 06
2
[PATCH] ocfs2: unlock rw lock if inode lock failed
In ocfs2_file_aio_write, ocfs2_rw_lock is taken first and then
ocfs2_inode_lock. But if ocfs2_inode_lock fails, the code jumps to out_sems
without unlocking the rw lock. This triggers a BUG in ocfs2_lock_res_free
when it checks res->l_ex_holders, which is incremented in
__ocfs2_cluster_lock and decremented in __ocfs2_cluster_unlock.
Signed-off-by: Joseph Qi <joseph.qi at huawei.com>
---
fs/ocfs2/file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 6474cb4..e2cd7a8 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c...
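The hunk itself is cut off by the archive. Given the one-line diffstat, the fix presumably just redirects that error branch to a label that also drops the rw lock; a simplified sketch of the pattern described in the changelog (not the verbatim patch):

	ret = ocfs2_rw_lock(inode, rw_level);
	if (ret < 0) {
		mlog_errno(ret);
		goto out_sems;	/* rw lock not taken yet, so this is fine */
	}

	/* ... */

	ret = ocfs2_inode_lock(inode, NULL, 1);
	if (ret < 0) {
		mlog_errno(ret);
		goto out;	/* jumping to out_sems here would skip ocfs2_rw_unlock() */
	}

	/* ... write path ... */

out:
	if (rw_level != -1)
		ocfs2_rw_unlock(inode, rw_level);
out_sems:
	/* ... remaining cleanup ... */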
2009 Jun 02
10
[PATCH 0/7] [RESEND] Fix some deadlocks in quota code and implement lockdep for cluster locks
Hi,
I'm resending this patch series. It's rediffed against the linux-next branch of
Joel's git tree. The first four patches are obvious fixes for deadlocks in the
quota code and should go in as soon as possible. The other three patches
implement lockdep support for OCFS2 cluster locks, so please have a look at
whether the code makes sense to you and possibly merge them. They should be a NOP when
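For readers less familiar with lockdep: such annotations amount to registering a lockdep_map per cluster-lock class and notifying lockdep on each acquire/release, using hooks that compile to nothing without CONFIG_DEBUG_LOCK_ALLOC, which is what makes the patches a NOP in that case. The sketch below is illustrative only, not the posted code; the struct and wrapper names are made up, and the rwsem_release() signature is the 2009-era three-argument one:

#include <linux/lockdep.h>
#include <linux/kernel.h>	/* _RET_IP_ */

struct demo_cluster_lockres {
	const char		*l_name;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	 l_lockdep_map;
#endif
};

static void demo_lockres_init(struct demo_cluster_lockres *res,
			      struct lock_class_key *key)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	/* one lockdep class per cluster-lock type */
	lockdep_init_map(&res->l_lockdep_map, res->l_name, key, 0);
#endif
}

static void demo_cluster_lock(struct demo_cluster_lockres *res)
{
	/* ... take the real DLM lock here ... */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	/* tell lockdep we now hold it exclusively */
	rwsem_acquire(&res->l_lockdep_map, 0, 0, _RET_IP_);
#endif
}

static void demo_cluster_unlock(struct demo_cluster_lockres *res)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	rwsem_release(&res->l_lockdep_map, 1, _RET_IP_);
#endif
	/* ... drop the real DLM lock here ... */
}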
2014 Aug 21
1
Cluster blocked; we have to reboot all nodes to recover. Is there any patch for it? Thanks.
..., we have to reboot all the nodes of the cluster to recover from it.
Is there any patch that fixes this bug?
[<ffffffff817539a5>] schedule_timeout+0x1e5/0x250
[<ffffffff81755a77>] wait_for_completion+0xa7/0x160
[<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0
[<ffffffffa0564063>] __ocfs2_cluster_lock.isra.30+0x1f3/0x820 [ocfs2]
As we test with many nodes in one cluster, maybe ten or twenty, the cluster always becomes blocked; the log is below.
The kernel version is 3.13.6.
Aug 20 10:05:43 server211 kernel: [82025.281828] Tainted: GF W O 3.13.6 #5
Aug 20 10:05:43 server...
2010 Feb 03
1
[PATCH] ocfs2: Plugs race between the dc thread and an unlock ast message
This patch plugs a race between the downconvert thread and an unlock ast message.
Specifically, after the downconvert worker has done its task, the dc thread needs
to check whether an unlock ast made the downconvert moot.
Reported-by: David Teigland <teigland at redhat.com>
Signed-off-by: Sunil Mushran <sunil.mushran at oracle.com>
Acked-by: Mark Fasheh <mfasheh at suse.com>
---
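The patch body is not shown in the excerpt. As a rough sketch of the recheck being described, with hypothetical names (this is not the actual dlmglue change): the downconvert thread drops the lockres spinlock while the worker runs, so it must retake it afterwards and confirm the lock is still blocked before issuing the downconvert.

#include <linux/spinlock.h>

/* minimal stand-ins; the real fields live in ocfs2's lock resource */
struct demo_lockres {
	spinlock_t	l_lock;
	unsigned long	l_flags;
};
#define DEMO_LOCK_BLOCKED	0x01

static void demo_downconvert_worker(struct demo_lockres *lockres)
{
	/* flush pages, checkpoint the journal, etc. -- runs unlocked */
}

static void demo_dc_thread_process(struct demo_lockres *lockres)
{
	unsigned long flags;

	spin_lock_irqsave(&lockres->l_lock, flags);
	/* ... decide a downconvert is needed ... */
	spin_unlock_irqrestore(&lockres->l_lock, flags);

	demo_downconvert_worker(lockres);	/* an unlock AST can race in here */

	spin_lock_irqsave(&lockres->l_lock, flags);
	/*
	 * The unlock AST may have cleared the blocked state while the
	 * worker ran, making the downconvert moot -- recheck before
	 * issuing it.
	 */
	if (!(lockres->l_flags & DEMO_LOCK_BLOCKED))
		goto leave;

	/* ... queue/issue the downconvert to the DLM ... */
leave:
	spin_unlock_irqrestore(&lockres->l_lock, flags);
}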
2013 Feb 27
2
ocfs2 bug reports, any advice? Thanks
...n+0xdf/0x180
Feb 27 09:50:59 Server21 kernel: [ 1199.751327] [<ffffffff8105f990>] ? try_to_wake_up+0x200/0x200
Feb 27 09:50:59 Server21 kernel: [ 1199.751331] [<ffffffff8165a51d>] wait_for_completion+0x1d/0x20
Feb 27 09:50:59 Server21 kernel: [ 1199.751357] [<ffffffffa05d7eb3>] __ocfs2_cluster_lock.isra.34+0x1f3/0x810 [ocfs2]
Feb 27 09:50:59 Server21 kernel: [ 1199.751364] [<ffffffff813162a1>] ? vsnprintf+0x461/0x600
Feb 27 09:50:59 Server21 kernel: [ 1199.751369] [<ffffffffa017c3bf>] ? o2cb_cluster_connect+0x1af/0x2e0 [ocfs2_stack_o2cb]
Feb 27 09:50:59 Server21 kernel: [ 1199.7...
2010 Apr 29
2
Hardware error or ocfs2 error?
...0a/0x449
Apr 29 11:01:18 node06 kernel: [2569440.616378] [<ffffffff812ee118>] ? wait_for_common+0xde/0x14f
Apr 29 11:01:18 node06 kernel: [2569440.616396] [<ffffffff8104a188>] ? default_wake_function+0x0/0x9
Apr 29 11:01:18 node06 kernel: [2569440.616421] [<ffffffffa0fbac46>] ? __ocfs2_cluster_lock+0x8a4/0x8c5 [ocfs2]
Apr 29 11:01:18 node06 kernel: [2569440.616445] [<ffffffff812ee517>] ? out_of_line_wait_on_bit+0x6b/0x77
Apr 29 11:01:18 node06 kernel: [2569440.616468] [<ffffffffa0fbe8ff>] ? ocfs2_inode_lock_full_nested+0x1a3/0xb2c [ocfs2]
Apr 29 11:01:18 node06 kernel: [2569440....
2009 Jun 04
2
[PATCH 0/2] OCFS2 lockdep support
Hi,
here comes the next version of OCFS2 lockdep support. I've dropped patches
with fixes from the series since they were already merged.
As Joel suggested, I've simplified the main patch a bit so that we don't
have ifdefs around lock declarations and there are also a few other minor
improvements.
Honza
2009 Feb 26
1
[PATCH 0/7] OCFS2 locking fixes and lockdep annotations
Hi,
the first four patches in this series fix locking problems in OCFS2 quota code (three of
them can lead to potential deadlocks). The fifth patch reorders ip_alloc_sem for directories
to be acquired before localalloc locks. Mark, would you please merge these?
The last two patches implement lockdep annotations for OCFS2 cluster locks. We annotate all
the cluster locks except for special ones
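For context, the reordering in the fifth patch is an ABBA-avoidance rule: every path must take ip_alloc_sem before anything that ends up taking the localalloc cluster lock. A sketch of the resulting order, assuming the ocfs2 internal headers; the reservation helper below is a made-up name, not the actual patch:

static int demo_reserve_clusters(struct inode *dir);	/* made-up helper */

static int demo_extend_dir(struct inode *dir)
{
	struct ocfs2_inode_info *oi = OCFS2_I(dir);
	int ret;

	/* 1) ip_alloc_sem first ... */
	down_write(&oi->ip_alloc_sem);

	/*
	 * 2) ... then whatever takes the localalloc cluster lock (a
	 * cluster reservation, here a made-up helper).  Taking these
	 * two in the opposite order on another path is the potential
	 * ABBA deadlock the reordering removes.
	 */
	ret = demo_reserve_clusters(dir);

	up_write(&oi->ip_alloc_sem);
	return ret;
}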
2013 Apr 28
2
Is this a single issue? Do you have any good ideas? Thanks a lot.
...n+0xdf/0x180
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124061] [<ffffffff8105f990>] ? try_to_wake_up+0x200/0x200
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124065] [<ffffffff8165a51d>] wait_for_completion+0x1d/0x20
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124092] [<ffffffffa053beb3>] __ocfs2_cluster_lock.isra.34+0x1f3/0x810 [ocfs2]
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124116] [<ffffffffa05543e0>] ? ocfs2_queue_orphan_scan+0x270/0x270 [ocfs2]
Apr 27 17:39:45 ZHJD-VM6 kernel: [ 3959.124136] [<ffffffffa053d0a9>] ocfs2_orphan_scan_lock+0x99/0xf0 [ocfs2]
Apr 27 17:39:45 ZHJD-VM6 kernel:...
2010 Jan 21
4
dlmglue fixes
David,
So here are the two patches. Remove all patches that you have and apply
these.
The first one is straight forward.
The second one will hopefully fix the livelock issue you have been
encountering.
People reviewing the patches should note that the second one is slightly
different than the one I posted earlier. It removes the BUG_ON in the if
condition where we jump to update_holders. The
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I am not copying everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604
2011 Dec 27
0
[Kernel 3.1.5] [OCFS2] After many writes/deletes on ocfs2, both servers in the cluster hit a kernel oops
...> Call Trace:
> [<ffffffff8148d45d>] ? schedule_timeout+0x1ed/0x2d0
> [<ffffffffa0b7d1ea>] ? dlmlock+0x8a/0xda0 [ocfs2_dlm]
> [<ffffffff8148ce5c>] ? wait_for_common+0x12c/0x1a0
> [<ffffffff81052230>] ? try_to_wake_up+0x280/0x280
> [<ffffffffa0a3b9c0>] ? __ocfs2_cluster_lock+0x1f0/0x780 [ocfs2]
> [<ffffffff8148ce80>] ? wait_for_common+0x150/0x1a0
> [<ffffffffa0a9c6bc>] ? ocfs2_buffer_cached+0x8c/0x180 [ocfs2]
> [<ffffffffa0a40bc6>] ? ocfs2_inode_lock_full_nested+0x126/0x540 [ocfs2]
> [<ffffffffa0a5922e>] ? ocfs2_lookup_lock_orphan_di...