Displaying 18 results from an estimated 18 matches for "journal_head".
2005 Sep 09
7
[PATCH 0/6] jbd cleanup
The following 6 patches clean up the jbd code and kill about 200 lines.
The first 4 patches can apply to 2.6.13-git8 and 2.6.13-mm2.
The rest can apply to 2.6.13-mm2.
fs/jbd/checkpoint.c | 179 +++++++++++--------------------------------
fs/jbd/commit.c | 101 ++++++++++--------------
fs/jbd/journal.c | 11 +-
fs/jbd/revoke.c | 158
2001 Aug 09
2
Debugging help: BUG: Assertion failure with ext3-0.95 for 2.4.7
...9 17:57:31 boeaet34 kernel: EXT3 FS 2.4-0.9.5, 30 Jul 2001 on md(9,0),
internal journal
Aug 9 17:57:31 boeaet34 kernel: EXT3-fs: recovery complete.
Aug 9 17:57:31 boeaet34 kernel: EXT3-fs: mounted filesystem with ordered
data mode.
Aug 9 17:57:39 boeaet34 kernel: (transaction.c, 1069): journal_dirty_metadata: journal_head 1de27ec0
Aug 9 17:57:39 boeaet34 kernel: (transaction.c, 1069): journal_dirty_metadata: journal_head 1de27f20
Aug 9 17:57:39 boeaet34 kernel: (transaction.c, 1069): journal_dirty_metadata: journal_head 1de27ec0
Aug 9 17:57:39 boeaet34 kernel: ei.c, 663): e...
2011 Sep 01
1
No buffer space available - loses network connectivity
...93 99% 2.00K 749 2 2996K size-2048
1050 1032 98% 0.55K 150 7 600K inode_cache
792 767 96% 1.00K 198 4 792K size-1024
649 298 45% 0.06K 11 59 44K pid
600 227 37% 0.09K 15 40 60K journal_head
590 298 50% 0.06K 10 59 40K delayacct_cache
496 424 85% 0.50K 62 8 248K size-512
413 156 37% 0.06K 7 59 28K fs_cache
404 44 10% 0.02K 2 202 8K biovec-1
390 293 75% 0.12K 13...
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
...dm_tio 4157 7308 16 203 1
dm_io 4155 6760 20 169 1
uhci_urb_priv 0 0 40 92 1
ext3_inode_cache 1062 2856 512 8 1
ext3_xattr 0 0 48 78 1
journal_handle 74 169 20 169 1
journal_head 583 1224 52 72 1
revoke_table 6 254 12 254 1
revoke_record 0 0 16 203 1
qla2xxx_srbs 244 360 128 30 1
scsi_cmd_cache 106 130 384 10 1
sgpool-256 32 32 4096 1 1
sgpool-128...
2000 Nov 04
1
ext3 for 2.4
Hi Stephen,
could you give us an idea, when we could expect the patch for the
2.4-series kernels? Do you have a 'roadmap' here or are you waiting here
for the 'complete' kernel-API for the journaling filesystems?
thanks,
Joachim
--
Joachim Kunze
Alte Marktstrasse 16 Tel.: +49-7042-830006
D-71665 Horrheim Fax: +49-7042-830006
Germany eMail:
2008 Mar 18
1
Problems patching fs/jbd/checkpoint.c in RHEL4 2.6.9-67.0.4 kernel
...lose++;
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
log_start_commit(journal, tid);
@@ -226,7 +227,7 @@ __flush_batch(journal_t *journal, struct
*/
static int __flush_buffer(journal_t *journal, struct journal_head *jh,
struct buffer_head **bhs, int *batch_count,
- int *drop_count)
+ int *drop_count, transaction_t *transaction)
{
struct buffer_head *bh = jh2bh(jh);
int ret = 0;
@@ -247,6 +248,7 @@ static int __flush_buffer(jo...
2001 Mar 30
1
Re: Bug in __invalidate_buffers?
I previously wrote:
> OK, my previous patch cleans up the ASSERT for invalidate_buffers()
> (modulo the fact that it was missing a ')' at the end of the line)
> but it hasn't really fixed the whole problem. If a file write is in
> progress when invalidate_buffers() is called, I get an oops:
> The oops is caused from __invalidate_buffers() calling put_last_free(bh)
>
2009 Sep 23
0
jbd/kjournald oops on 2.6.30.1
...ens (no heavy file/disk io).
Any insights or patches that I can try? (i searched lkml and ext3
lists but could not find any similar oops/reports).
== Oops ===================
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
IP: [<ffffffff80373520>] __journal_remove_journal_head+0x10/0x120
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/class/scsi_host/host0/proc_name
CPU 0
Pid: 3834, comm: kjournald Not tainted 2.6.30.1_test #1
RIP: 0010:[<ffffffff80373520>] [<ffffffff80373520>]
__journal_remove_journal_head+0x10/0x120
RSP: 0018:ffff880c7ee11d80 EFLAGS: 0001...
2011 Sep 01
0
No buffer space available - loses network connectivity
...1493 99% 2.00K 749 2 2996K size-2048
1050 1032 98% 0.55K 150 7 600K inode_cache
792 767 96% 1.00K 198 4 792K size-1024
649 298 45% 0.06K 11 59 44K pid
600 227 37% 0.09K 15 40 60K journal_head
590 298 50% 0.06K 10 59 40K delayacct_cache
496 424 85% 0.50K 62 8 248K size-512
413 156 37% 0.06K 7 59 28K fs_cache
404 44 10% 0.02K 2 202 8K biovec-1
390 293 75% 0.12K 13...
2005 Jan 04
0
[PATCH] BUG on error handlings in Ext3 under I/O failure condition
...9-pre3-bk2/fs/jbd/commit.c 2004-02-18 22:36:31.000000000 +0900
+++ linux-2.4.29-pre3-bk2_fix/fs/jbd/commit.c 2005-01-04 19:58:32.000000000 +0900
@@ -92,7 +92,7 @@
struct buffer_head *wbuf[64];
int bufs;
int flags;
- int err;
+ int err = 0;
unsigned long blocknr;
char *tagp = NULL;
journal_header_t *header;
@@ -299,6 +299,8 @@
spin_unlock(&journal_datalist_lock);
unlock_journal(journal);
wait_on_buffer(bh);
+ if (unlikely(!buffer_uptodate(bh)))
+ err = -EIO;
/* the journal_head may have been removed now */
lock_journal(journal);
goto write_out_data;...
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
...0 348 11 1 : tunables 54 27
8 : slabdata 0 0 0
ext2_inode_cache 0 0 432 9 1 : tunables 54 27
8 : slabdata 0 0 0
journal_handle 4 185 20 185 1 : tunables 120 60
8 : slabdata 1 1 0
journal_head 41 75 52 75 1 : tunables 120 60
8 : slabdata 1 1 0
revoke_table 2 290 12 290 1 : tunables 120 60
8 : slabdata 1 1 0
revoke_record 0 0 16 226 1 : tunables 120 60
8 : slabdata 0...
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
...0 0 1168 28
fat_cache 0 0 40 102
hugetlbfs_inode_cache 16 16 976 16
jbd2_transaction_s 150 150 320 25
jbd2_journal_handle 306 306 80 51
journal_handle 0 0 56 73
journal_head 1518 1800 112 36
revoke_table 1536 1536 16 256
revoke_record 768 768 32 128
ext4_inode_cache 30702 30952 1704 19
ext4_free_data 768 768 64 64
ext4_allocation_context 180 180 136...
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
...35 59 64 59
ip_fib_alias 15 113 32 113
ip_fib_hash 15 113 32 113
ext3_inode_cache 309 576 460 8
ext3_xattr 0 0 44 84
journal_handle 64 169 20 169
journal_head 196 504 52 72
revoke_table 6 254 12 254
revoke_record 0 0 16 203
dm_tio 11142 11165 16 203
dm_io 11105 11154 20 169
scsi_cmd_cache 10 10 384...
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our lustre 1.8.1 clients.
It looks like when they submit a single job at a time the run time was about
4.5 minutes. However, when they ran multiple jobs (10 or less) on a client
with 192GB of memory on a single node the run time for each job was
exceeding 3-4X the run time for the single process. They also noticed that
the swap space
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi 2 all !
I have problems with concurrent filesystem actions on a ocfs2
filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6
F.e.: If I have a LV called testlv which is mounted on /mnt on both
servers and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024
count=1000000" on server 1 and do at the same time a du -hs
/mnt/test.a it takes about 5 seconds for du -hs to execute:
270M
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and
2010 Aug 04
6
[PATCH -v2 0/3] jbd2 scalability patches
This version fixes three bugs in the 2nd patch of this series that
caused a kernel BUG when the system was under load. We weren't
accounting t_outstanding_credits correctly, and there were race
conditions because I had overlooked the fact that
__jbd2_log_wait_for_space() and jbd2_get_transaction() require
j_state_lock to be write-locked.
Theodore Ts'o (3):
jbd2: Use
2008 Dec 22
56
[git patches] Ocfs2 patches for merge window, batch 2/3
Hi,
This is the second batch of Ocfs2 patches intended for the merge window. The
1st batch were sent out previously:
http://lkml.org/lkml/2008/12/19/280
The bulk of this set is comprised of Jan Kara's patches to add quota support
to Ocfs2. Many of the quota patches are to generic code, which I carried to
make merging of the Ocfs2 support easier. All of the non-ocfs2 patches
should have