Displaying 7 results from an estimated 7 matches for "jbd2_journal_stop".
2013 Jun 19
1
[PATCH] fs/jbd2: t_updates should increase when start_this_handle() failed in jbd2__journal_restart()
...his_handle(). Before calling start_this_handle()?subtract
1 from transaction->t_updates.
If start_this_handle() succeeds, transaction->t_updates increases by 1
in it. But if start_this_handle() fails, transaction->t_updates does
not increase.
So, when the handle's transaction is committed in jbd2_journal_stop(), the
assertion fails and triggers a bug.
The assertion is as follows:
J_ASSERT(atomic_read(&transaction->t_updates) > 0)
Signed-off-by: Younger Liu <younger.liu at huawei.com>
---
fs/jbd2/transaction.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/jbd2/tra...
2013 Jun 19
1
ocfs2: Should move ocfs2_start_trans out of lock_page
Currently ocfs2_start_trans/ocfs2_commit_trans are called between
lock_page and unlock_page. This may cause a deadlock.
Here is the situation:
write -> lock_page -> ocfs2_start_trans -> ocfs2_commit_trans -> unlock_page
ocfs2_start_trans/ocfs2_commit_trans call
jbd2_journal_start/jbd2_journal_stop, which may also call lock_page. So
if the page being operated on happens to be the same as the page to be
committed, a deadlock occurs.
In ext4, lock_page/unlock_page are inside
ext4_journal_start/ext4_journal_stop, which avoids this kind of
deadlock. So I think we should move ocfs2_start_trans/ocfs2_co...
2010 Aug 04
6
[PATCH -v2 0/3] jbd2 scalability patches
...standing_credits correctly, and there were race conditions
caused by my having overlooked that
__jbd2_log_wait_for_space() and jbd2_get_transaction() require
j_state_lock to be write-locked.
Theodore Ts'o (3):
jbd2: Use atomic variables to avoid taking t_handle_lock in
jbd2_journal_stop
jbd2: Change j_state_lock to be a rwlock_t
jbd2: Remove t_handle_lock from start_this_handle()
fs/ext4/inode.c | 4 +-
fs/ext4/super.c | 4 +-
fs/jbd2/checkpoint.c | 18 +++---
fs/jbd2/commit.c | 42 ++++++------
fs/jbd2/journal.c | 94 +++++++++++++----------...
2013 Sep 08
0
3.12rc1-pre Nouveau? oops
...d/0x570
> [<ffffffff81071483>] ? __wake_up+0x43/0x70
> [<ffffffff8137c8f8>] ? __pm_runtime_resume+0x48/0x70
> [<ffffffffa020e5a2>] ? nouveau_drm_open+0x42/0x1d0 [nouveau]
> [<ffffffff811bb858>] ? ext4_da_write_end+0xa8/0x2b0
> [<ffffffff811ef559>] ? jbd2_journal_stop+0x1d9/0x2c0
> [<ffffffff8124a56f>] ? apparmor_capable+0x1f/0x90
> [<ffffffffa00c5cab>] ? drm_open+0x28b/0x6e0 [drm]
> [<ffffffffa00c6206>] ? drm_stub_open+0x106/0x1a0 [drm]
> [<ffffffff81143bc0>] ? cdev_put+0x30/0x30
> [<ffffffff81143c56>] ? chrdev_...
2013 Sep 08
2
3.12rc1-pre Nouveau? oops
...;] ? rpm_resume+0x39d/0x570
[<ffffffff81071483>] ? __wake_up+0x43/0x70
[<ffffffff8137c8f8>] ? __pm_runtime_resume+0x48/0x70
[<ffffffffa020e5a2>] ? nouveau_drm_open+0x42/0x1d0 [nouveau]
[<ffffffff811bb858>] ? ext4_da_write_end+0xa8/0x2b0
[<ffffffff811ef559>] ? jbd2_journal_stop+0x1d9/0x2c0
[<ffffffff8124a56f>] ? apparmor_capable+0x1f/0x90
[<ffffffffa00c5cab>] ? drm_open+0x28b/0x6e0 [drm]
[<ffffffffa00c6206>] ? drm_stub_open+0x106/0x1a0 [drm]
[<ffffffff81143bc0>] ? cdev_put+0x30/0x30
[<ffffffff81143c56>] ? chrdev_open+0x96/0x1d0
[&...
2008 Sep 04
4
[PATCH 0/3] ocfs2: Switch over to JBD2.
ocfs2 currently uses the Journaled Block Device (JBD) for its
journaling. This is a very stable and tested codebase. However, JBD
is architecturally limited to 32-bit block numbers. This means an ocfs2
filesystem is limited to 2^32 blocks. With a 4K blocksize, that's 16TB.
People want larger volumes.
Fortunately, there is now JBD2. JBD2 adds 64bit block number support
and some other
2008 Dec 22
56
[git patches] Ocfs2 patches for merge window, batch 2/3
Hi,
This is the second batch of Ocfs2 patches intended for the merge window. The
1st batch was sent out previously:
http://lkml.org/lkml/2008/12/19/280
The bulk of this set is comprised of Jan Kara's patches to add quota support
to Ocfs2. Many of the quota patches are to generic code, which I carried to
make merging of the Ocfs2 support easier. All of the non-ocfs2 patches
should have