similar to: jbd/kjournald oops on 2.6.30.1

Displaying 20 results from an estimated 100 matches similar to: "jbd/kjournald oops on 2.6.30.1"

2001 Aug 23
2
EXT3 Trouble on 2.4.4
All, I know that there is no official port to kernel 2.4.4, so I may not get any help, but I am hoping someone could point me in the right direction for my problem. I am currently forced to use kernel 2.4.4 for reasons out of my control (embedded board). Here are the exact versions of everything I'm running: Ext3 version: ext3-2.4-0.9.6-248 Util version: util-linux-2.11f.tar.bz2 e2fs
2002 Sep 06
1
kjournald & jbd
Hello everybody, could someone please explain to me what the difference is between kjournald and jbd (precisely, what does each of them do)? Thank you, Alina
2009 Jul 19
0
Disabling checksum offloading at install of OSOL 2009.06 PV DomU on Xen 3.4.1, Ubuntu 9.04 Dom0 (with 2.6.30.1 xenified, aka SuSE, kernel)
The procedure below involves the Solaris kernel module debugger to patch the OSOL (SNV_111b) kernel at boot, so that it succeeds in getting a DHCP lease and can proceed with the initial install. The file /etc/system is then updated via a root terminal session before the standard reboot, to make the kernel patch permanent.
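As a rough illustration only: the post does not show the actual /etc/system entry, but on Solaris this kind of permanent tweak is usually a single "set" line appended to that file. The module and parameter name below (the xnf PV network driver's checksum-offload tunable) are an assumption, not taken from the post.

    # Append the tunable to /etc/system and reboot (sketch only;
    # the xnf:xnf_cksum_offload parameter name is assumed, not from the post)
    echo 'set xnf:xnf_cksum_offload = 0' >> /etc/system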
2001 Nov 05
2
Oops on 2.4.13-pre6 (sparc64)
Ah, Mondays. The following oops happened after approximately eleven days of uptime. The machine was not under any particular load at the time. Following a forced reboot, all filesystems replayed the journal successfully. Relevant log entries leading up to the oops:
Nov 4 04:54:08 localhost kernel: attempt to access beyond end of device
Nov 4 04:54:08 localhost kernel: 03:02: rw=1,
2002 Apr 04
1
Ext3 related oops and a crash
We have here a knfs fileserver running ext3 on a 2.4.18 kernel with three filesystems:
<CLIP>
Filesystem           1k-blocks       Used  Available Use% Mounted on
/dev/hda3             10080520    3609632    6368476  37% /
/dev/hda4             16437332   12295408    3974932  76% /home
/dev/md1            1024872060  409906136  614441636  41% /fs
</CLIP>
The /fs filesystem lives on a
2002 Sep 25
0
PROBLEM:
Ext3 journal oops & RAID-1 set losing sync. (Sent to both the EXT3 and Linux-RAID lists since both are in use and seem possibly relevant.) I have had a number of problems maintaining a software RAID-1 set on an IDE box I look after; it seems that doing raidhotadd on the drive marked as invalid works each time, though. However, I've had both errors about trying to read past the end of the
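For context, a minimal sketch of the kind of re-add the post mentions, using the old raidtools userland of 2.4-era kernels; the device names are placeholders, not taken from the post.

    # see which member the kernel has marked as failed/invalid
    cat /proc/mdstat
    # hot-add that member back into the RAID-1 set (placeholder names)
    raidhotadd /dev/md0 /dev/hdc1
    # watch the resync progress
    watch cat /proc/mdstat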
2007 Jan 29
1
seeking developer documentation for jbd and ext3
Hi, I am a student in computer science and I am developing a program that tries to explain the mechanisms of the ext3 filesystem to other students: we show the content of each structure and explain what it means. But I was unable to find developer documentation for the journaling functionality (jbd). Could you please tell me where I can find some? (In English or in French.) Also a documentation
2005 Apr 21
0
Problems with ext3/jbd on 2.4.27-vrs1 with power management
Hello all, I have a question about ext3/jbd for 2.4 as it pertains to the ARM architecture. The presence of the ext3 and/or jbd drivers seems to cause suspend/resume to stop working on our platform (SA1110 (StrongARM) based). My question is: are there any patches since 2.4.27 to either ext3 or jbd that may be apropos to this issue? I have also posted to the ARM kernel mailing list
2008 Mar 18
1
Problems patching fs/jbd/checkpoint.c in RHEL4 2.6.9-67.0.4 kernel
My manual patching of the rejects in checkpoint.c didn't work out; a delete of 10,000 files caused a panic (in any ext fs, not just Lustre). In the new checkpoint.c, two routines expected by the patch no longer exist: __cleanup_transaction and __flush_buffer. I can avoid the panic if I omit (don't try to manually patch) the following: Index: linux-2.6.9/fs/jbd/checkpoint.c
2006 Aug 27
1
how can I get the 64bit JBD patch?
I see that ocfs2 has used a 64-bit JBD. I want to know where the 64-bit JBD patch is. Was the patch developed by ocfs2, or was it developed as part of JBD itself? Can anyone help me?
2010 Aug 02
1
JBD: failed to read block at offset
I'm getting errors like "JBD: failed to read block at offset 4360" on a RAID 5 partition and have run several fsck passes on it. Are these journal errors recoverable? I've done a physical scan of the hard disks with HP Insight Manager as well.
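If e2fsck cannot repair the damage, one commonly used recovery path is to drop and re-create the ext3 journal. This is a sketch only, with /dev/md0 as a placeholder for the array, to be run with the filesystem unmounted and after taking a backup; it is not presented here as the definitive answer to the poster's errors.

    # force a full check of the (unmounted) filesystem
    e2fsck -f /dev/md0
    # if the journal itself is corrupt, drop it and create a fresh one
    tune2fs -O ^has_journal /dev/md0
    tune2fs -j /dev/md0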
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff,
On 07/03/2010 03:58 AM, Jeff Moyer wrote:
> Hi,
>
> Running iozone or fs_mark with fsync enabled, the performance of CFQ is
> far worse than that of deadline for enterprise class storage when dealing
> with file sizes of 8MB or less. I used the following command line as a
> representative test case:
>
> fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
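The CFQ-versus-deadline comparison in the quoted test can be reproduced by switching the block device's I/O scheduler at runtime; a minimal sketch, with sda as a placeholder device name.

    # the scheduler shown in brackets is the active one
    cat /sys/block/sda/queue/scheduler
    # switch to deadline for one run, then back to cfq for the other
    echo deadline > /sys/block/sda/queue/scheduler
    echo cfq > /sys/block/sda/queue/scheduler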
2005 May 03
0
several ext3 and mysql kernel crashes
Hi Ext3! I'm running about 30 dedicated MySQL machines under quite decent loads, and they are occasionally crashing. I've been logging console messages recently in an effort to find the cause, and some appear to be related. I perused your lists and found the message I'm replying to. If you don't mind, I've included messages and ksymoops from two crashes that I had
2003 Jun 13
1
jbd count incremented *even* if volume is mounted RO?
Continuing on with my earlier post... after looking through the JBD code, is the following perhaps the reason the md5 values differ: when a journalled filesystem that uses jbd is mounted, the journal b_count is incremented by one? *EVEN* if the volume was mounted read only, this b_count is still increased by one? Curious as ever! lt
2007 Apr 17
0
[About kjournald]
Hi guys, on my server the kjournald process is using far more CPU time than expected. How can I bring my system load down? Thanks. Forrest Wang
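One knob often suggested for this symptom (though not necessarily the fix for this poster's workload) is ext3's journal commit interval: by default the journal is committed every 5 seconds, and a longer interval makes kjournald wake up less often, at the cost of more unflushed data if the machine crashes. A sketch, with the root filesystem used only as a placeholder mount point.

    # lengthen the ext3 commit interval from the default 5s to 30s
    mount -o remount,commit=30 /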
2005 Jun 09
1
kjournald pegging cpu
Kernel version: 2.6.10-1.771_FC2smp. We have had quite a few instances of kjournald pegging the CPU and thereby effectively knocking out the system's I/O. What can we do to provide more information so that the cause can be identified and fixed? Thanks, Christopher
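One way to gather the kind of information being asked for, assuming the kernel was built with magic SysRq support, is to dump the state of all tasks while kjournald is pegged so that its stack trace lands in the kernel log.

    # enable the magic SysRq interface, then dump all task states
    echo 1 > /proc/sys/kernel/sysrq
    echo t > /proc/sysrq-trigger
    # the traces (including kjournald's) end up in the kernel log
    dmesg | less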
2005 Feb 15
0
Oops in 2.6.10-ac12 in kjournald (journal_commit_transaction)
Today our mailserver froze after just one day of uptime. I was able to capture the Oops on the screen using my digital camera: http://www.stahl.bau.tu-bs.de/~hildeb/bugreport/ Keywords: EIP is at journal_commit_transaction, process kjournald
# mount
/dev/cciss/c0d0p6 on / type ext3 (rw,errors=remount-ro)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts
2003 Jan 17
1
Write to ext3 fs -> kjournald goes bananas
Hello, we have just observed a rather annoying behavior on one of our servers. Specifically, when writing data to a file on an ext3 filesystem at around 1.5 MB/sec, network connectivity suddenly started to show signs of lag. The ping would spike up to ~1 second every 5 seconds or so. This affects the server in question and all hosts routing traffic through it (i.e., it's not a userland-only
2005 Oct 02
0
kjournald and zttest results
Hello! While performing some zttest runs for some time today, I was also keeping an eye on top on the machine. While the zttest was running, I also had an ssh-keygen and a dd creating a 5GB file on an EXT3 partition running. I noticed that for the most part, I got a decent number of 100%'s, and a bunch of 99.6%'s or higher. However, it seems that whenever the zttest dropped
2002 Feb 13
2
Oops in kjournald
I'm getting an oops whenever I pull a big file off of an ext3 filesystem on my large LV. The kernel this comes from happens to have LVM 1.0.2 and POSIX ACLs for ext2/3 patched in, but I get the crash even on vanilla 2.4.17. ksymoops 2.4.3 on i686 2.4.17-acl-lvm.
Options used:
  -V (default)
  -k /proc/ksyms (default)
  -l /proc/modules (default)
  -o /lib/modules/2.4.17-acl-lvm/
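For reference, a sketch of how a captured oops is normally fed through ksymoops with options like the ones listed above; the vmlinux path and the oops.txt file name are placeholders, not taken from the post.

    # decode the captured oops against the kernel's symbols and modules
    ksymoops -v /usr/src/linux/vmlinux \
             -k /proc/ksyms -l /proc/modules \
             -o /lib/modules/2.4.17-acl-lvm/ < oops.txt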