similar to: jbd count incremented *even* if volume is mounted RO?

Displaying 20 results from an estimated 500 matches similar to: "jbd count incremented *even* if volume is mounted RO?"

2001 Aug 23
2
EXT3 Trouble on 2.4.4
All, I know that there is no official port to kernel 2.4.4, so I may not get any help; however, I am hoping someone could point me in the right direction for my problem. I am currently forced to use kernel 2.4.4 for reasons out of my control (embedded board). Here are the exact versions of everything I'm running: ext3 version: ext3-2.4-0.9.6-248, util version: util-linux-2.11f.tar.bz2, e2fs
2006 Jun 20
1
viewing ext3 journal
Hi! Is there a way to view an ext3 filesystem's journal in a human-readable format? I ask because I have had a server crash, and now I'm wondering whether I can take a look at the last things my server did right before the crash. I realize the most recent log entries might have been lost before the buffers were flushed to disk. Thanks.
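For reference, the journal can be dumped with debugfs from e2fsprogs; its logdump command walks the journal and prints the descriptor, commit and revoke blocks. A minimal sketch, with /dev/sda1 as a placeholder device name:

    # Dump the journal of an ext3 filesystem (best done unmounted or read-only).
    debugfs -R 'logdump' /dev/sda1

    # The -a flag additionally prints the contents of every descriptor block.
    debugfs -R 'logdump -a' /dev/sda1

Note that this shows journal metadata (which blocks were logged in which transaction), not a human-readable history of file operations, so it may only partially answer the question above.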
2001 Jul 13
0
0.0.7a + rh2.2.19: help solve rejects
I get 2 rejects applying the 2.2.19-ext3 patch to the latest errata RH 2.2.19 kernel. 1) fs/buffer.c Should I put "J_ASSERT(buf->b_count > 0);" before or after " *(int *)0 = 0;"? ===== ext3 0.0.7a patch --- 934,946 ---- if (buf->b_count) { buf->b_count--; + if (!buf->b_count && + (buf->b_jlist != BJ_None && buf->b_jlist
2001 Jul 29
1
2.2.19/0.0.7a: bonnie -> VM problems
SYSTEM: RH 6.x-based system, 2.2.19-6.2.7 RH errata kernel + 0.0.7a patch; I rebuilt the RPM for i686; Celeron 466, 64 MB, PIIX4. The root fs is on software RAID1 ext2, with 6 additional filesystems on software RAID1 ext2. There's a 3rd HD, not mirrored, which is mounted ext3. EXT3-fs: mounted filesystem with ordered data mode. I enabled the journal with tune2fs -j on the unmounted fs. The 3 HDs are tuned with
2003 Nov 16
1
Bug in 2.6.0-9
Assertion failure in journal_add_journal_head() at fs/jbd/journal.c:1679 : "(((&bh->b_count)->counter) > 0) || (bh->b_page && bh->b_page->mapping)" ------------[ cut here ]------------ kernel BUG at fs/jbd/journal.c:1679! invalid operand: 0000 [#2] CPU: 0 EIP: 0060:[<c017637f>] Not tainted EFLAGS: 00010282 EIP is at
2001 Mar 30
1
Re: Bug in __invalidate_buffers?
I previously wrote: > OK, my previous patch cleans up the ASSERT for invalidate_buffers() > (modulo the fact that it was missing a ')' at the end of the line) > but it hasn't really fixed the whole problem. If a file write is in > progress when invalidate_buffers() is called, I get an oops: > The oops is caused from __invalidate_buffers() calling put_last_free(bh) >
2001 Mar 29
1
Re: Bug in __invalidate_buffers?
I previously wrote: > I have come across what appears to be a bug in __invalidate_buffers() > w.r.t. the change in ext3-0.0.6 using BH_JDirty instead of BH_Dirty > for buffers held in the journal. If invalidate_buffers() is called > on a device (LVM likes to do this a lot, for whatever reason), it yanks > JDirty buffers out from underneath the journal layer, and causes an > oops
2001 May 16
1
Re: [linux-lvm] lvm deadlock with 2.4.x kernel?
I think I have this one solved, I hope. I think what Andreas and I are running into are a few different assertions. One is the assertion caused by LVM's lvm_do_pv_flush, which is related directly to invalidate_buffers() being called, which then triggers refile_buffer() on a journaled buffer that appears clean in all other ways according to the checks in refile_buffer(). The following is what
2002 May 28
2
Journal size
Is there any way to query the size of an existing journal? I have heard a number of sizes thrown around as defaults, but I need to be able to reliably get the exact journal size. Thanks Jason -- ---------------------------------------------------- Storix Software info@storix.com (sales) 1-619-702-6500 support@storix.com (support) 1-877-STORIX-1 (US) http://www.storix.com
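One common answer, offered here as a hedged sketch rather than a quote from the thread: the journal normally occupies reserved inode 8, so its exact size can be read with the e2fsprogs tools (the device name below is a placeholder):

    # Superblock summary; the journal inode number and features are listed here.
    dumpe2fs -h /dev/sda1 | grep -i journal

    # Stat the journal inode (<8> on a typical ext3 filesystem) to get its
    # size in bytes and the number of blocks it occupies.
    debugfs -R 'stat <8>' /dev/sda1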
2002 May 12
3
ext3 .journal location?
Forgive my novice question, but I am a new student of Linux working on presenting the ext3 journaling filesystem to my class. I seek any advice on how to visibly demonstrate (including a purposeful crash of a Linux box) the benefits of ext3 over ext2. I am not worthy to lick the bootstraps of this group, but I beg for any help! The problem I am having extends to even locating the .journal file
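A hedged aside that may explain the difficulty: when the journal is created on an unmounted filesystem (or at mke2fs time) it lives in hidden reserved inode 8 and never appears in a directory listing; a visible /.journal is normally only left behind when tune2fs -j is run on a mounted filesystem. With /dev/hda2 as a stand-in device:

    # A visible journal file, if one exists at all, sits in the filesystem root:
    ls -l /.journal

    # Otherwise inspect the hidden journal inode directly:
    debugfs -R 'stat <8>' /dev/hda2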
2005 Mar 10
3
a few questions about ext3 journal
A few wild ideas/questions: 1) Is there a way to check the size of the journal of an ext3 filesystem? I mean the actually used size, not the total size of the journal. 2) Would it be difficult to implement a "freeze" of an ext3 filesystem, that is, blocking all I/O to the filesystem until it's "unfrozen" (XFS can do that), for two purposes: A/ allowing
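On the second question, a freeze facility did later appear in mainline as the FIFREEZE/FITHAW ioctls, exposed by util-linux's fsfreeze; this post-dates the thread, so treat the following as a sketch rather than an answer that was available in 2005 (/mnt/data is a placeholder mount point):

    # Flush the journal and block all new writes, e.g. before taking a snapshot:
    fsfreeze -f /mnt/data

    # ... take an LVM or hardware snapshot here ...

    # Let I/O resume:
    fsfreeze -u /mnt/data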
2007 Jan 29
1
seeking developer documentation for jbd and ext3
Hi, I am a computer science student and I am developing a program that tries to explain the mechanisms of the ext3 filesystem to other students: we show the content of each structure and explain what it means. However, I was unable to find developer documentation for the journaling functionality (jbd). Could you please tell me where I can find one? (In English or in French.) Also a documentation
2005 Apr 21
0
Problems with ext3/jbd on 2.4.27-vrs1 with power management
Hello all, I have a question about ext3/jbd for 2.4 as it pertains to the ARM architecture. The presence of the ext3 and/or jbd drivers seems to cause suspend/resume to stop working on our platform (SA1110/StrongARM-based). My question is: are there any appropriate patches since 2.4.27 to either ext3 or jbd that may be apropos to this issue? I have also posted to the ARM kernel mailing list
2008 Mar 18
1
Problems patching fs/jbd/checkpoint.c in RHEL4 2.6.9-67.0.4 kernel
My manual patching of the rejects in checkpoint.c didn't work out; a delete of 10,000 files caused a panic (in any ext fs, not just Lustre). In the new checkpoint.c, two routines expected by the patch no longer exist: __cleanup_transaction and __flush_buffer. I can avoid the panic if I omit (don't try to manually patch) the following: Index: linux-2.6.9/fs/jbd/checkpoint.c
2006 Aug 27
1
how can I get the 64bit JBD patch?
I see that ocfs2 has used 64-bit JBD. I want to know where the 64-bit JBD patch is. Was the patch developed by ocfs2, or by the JBD developers themselves? Can anyone help me?
2010 Aug 02
1
JBD: failed to read block at offset
I'm getting errors like "JBD: failed to read block at offset 4360" on a RAID 5 partition, and I have run several fscks on it. Are these journal errors recoverable? I've done a physical scan of the hard disks with HP Insight Manager as well.
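One recovery path that is often suggested for an unreadable journal, given here as a hedged sketch (with /dev/md0 standing in for the RAID device, and assuming the filesystem can be unmounted): drop the journal, run a full check, and recreate it.

    # Remove the damaged journal from the unmounted filesystem:
    tune2fs -O ^has_journal /dev/md0

    # Force a full check without the journal:
    e2fsck -f /dev/md0

    # Recreate the journal:
    tune2fs -j /dev/md0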
2002 Sep 06
1
kjournald & jbd
Hello everybody, could someone please explain to me the difference between kjournald and jbd (precisely, what does each of them do)? Thank you, Alina
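In short (not from the thread itself): jbd is the in-kernel journaling layer that ext3 calls into, and kjournald is the kernel thread that jbd starts for each journal to commit transactions to disk. The threads are easy to spot:

    # One kjournald instance shows up per mounted ext3 filesystem:
    ps ax | grep '\[kjournald\]'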
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff, On 07/03/2010 03:58 AM, Jeff Moyer wrote: > Hi, > > Running iozone or fs_mark with fsync enabled, the performance of CFQ is > far worse than that of deadline for enterprise class storage when dealing > with file sizes of 8MB or less. I used the following command line as a > representative test case: > > fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
2009 Sep 23
0
jbd/kjournald oops on 2.6.30.1
Hi, I am getting the following Oops on a 2.6.30.1 kernel. The bad part is that it happens rarely (twice in the last 1.5 months) and the system is pretty lightly loaded when it happens (no heavy file/disk I/O). Any insights or patches that I can try? (I searched lkml and the ext3 lists but could not find any similar oops reports.) == Oops =================== BUG: unable to handle kernel NULL pointer
2005 Nov 16
0
(large, external) data journal BUG (Assertion failure in __journal_drop_transaction() at fs/jbd/checkpoint.c:626: "transaction->t_forget == NULL")
Hi, A couple of our important servers, both running FC4 but one i386 and one x86_64, have been crashing recently. They both are running ext3 data=journal with large external journals and high commit intervals. Both machines use the gdth driver for their hardware RAID sets, if that's of any use. I think the hardware is good in both cases. I hope someone finds this data useful enough to be
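For context, a setup like the one described (external journal plus data=journal and a long commit interval) is typically built along these lines; the device names, mount point and commit value below are placeholders, not taken from the report:

    # Format a dedicated device as an external journal (block sizes must match
    # the data filesystem; use -b on both if necessary):
    mke2fs -O journal_dev /dev/sdb1

    # Attach the external journal to the data filesystem:
    tune2fs -J device=/dev/sdb1 /dev/sda1

    # Mount with full data journaling and a long commit interval:
    mount -t ext3 -o data=journal,commit=60 /dev/sda1 /srv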