Displaying 20 results from an estimated 9000 matches similar to: "[PATCH] open files in kjournald"
2002 Apr 22
1
Re: [PATCH] open files in kjournald (2)
Sorry, got it the wrong way around :-(
Hello everybody!
As I wrote in my mail last week ("BUG: 2.4.19pre1 & journal_thread
& open filehandles"), I followed the problem a little further.
Here's my patch; I beg the ext3 maintainers (Stephen, Andreas, Andrew) to
have a look at it and submit it to Marcelo and Linus for inclusion.
(2.4 is for now for me more
2002 Aug 22
1
kjournald takes too much CPU
Hi all,
The problem I have now is similar to previous posts on this list, but I
couldn't find the solution. If the solution is out there, please point me to
it, and I apologize for not being able to find it.
I recently installed RH7.3, with ext3 fs, on several identical computers.
The processor is AMD 1900+, the motherboard is ASUS A7V266-C,
and the hard drive is a Maxtor 40GB, 7200
2001 Aug 29
1
kupdated, bdflush and kjournald stuck in D state on RAID1 device (deadlock?)
(Sent to linux-raid, linux-kernel and ext3-users since I'm not sure what type of issue
this is)
I've got a test system here running Redhat 7.1 + stock 2.4.9 with these
patches:
http://www.fys.uio.no/~trondmy/src/2.4.9/linux-2.4.9-NFS_ALL.dif
http://www.zip.com.au/~akpm/ext3-2.4-0.9.6-249.gz
http://domsch.com/linux/aacraid/linux-2.4.9-aacraid-20010816.patch
All three patches applied
2005 Jan 07
2
Asterisk 1.0.2 - Unable to allocate channel structure
Hi,
This morning I had some failed calls. On the console (and in the log)
I saw the error "Unable to allocate channel structure". Before I restarted
the process, I checked its memory usage in ps and glanced at my free
memory in top. Asterisk was using a normal amount of memory, about
40M. I don't think this was a system limit. This was running Asterisk
v1.0.2. Below is
2002 Sep 06
1
kjournald & jbd
Hello everybody,
Could someone please explain what the difference is between
kjournald and jbd (precisely, what does each of them do)?
Thank you,
Alina
2005 Jun 14
2
[2.6 patch] fs/jbd/: possible cleanups
This patch contains the following possible cleanups:
- make needlessly global functions static
- journal.c: remove the unused global function __journal_internal_check
and move the check to journal_init
- remove the following write-only global variable:
- journal.c: current_journal
- remove the following unneeded EXPORT_SYMBOL's:
- journal.c: journal_check_used_features
-
2005 Apr 12
6
CentOS-4 kernel panic
Hi all,
We are running a new CentOS-4 server, and it has kernel-panicked on us 4
times in the last month. After the first kernel panic we hooked up a
serial console to the server and captured the output in order to have a
record of what happens. I've included the error messages from the last
time it locked up... but it doesn't really mean much to me. Anybody have
any ideas what might be
2007 Jun 16
1
kjournald hang on ext3 to ext3 copy
All,
I am running into a situation in which one of my ext3 filesystems is
getting hung during normal usage. There are three ext3 filesystems on a
CompactFLASH. One is mounted as / and one as /tmp. In my test, I am
copying a 100 MB file from /root to /tmp repeatedly. While doing this
test, I eventually see the copying stop, and any attempts to access /tmp
fail - if I even do ls /tmp the
2005 Jul 19
1
[2.6 patch] fs/jbd/: cleanups
This patch contains the following cleanups:
- make needlessly global functions static
- journal.c: remove the unused global function __journal_internal_check
and move the check to journal_init
- remove the following write-only global variable:
- journal.c: current_journal
- remove the following unneeded EXPORT_SYMBOL:
- journal.c: journal_recover
Signed-off-by: Adrian Bunk
2006 Feb 18
1
kernel panic: Assertion failure in __journal_unfile_buffer()
I was just extracting a 96MB tar file (tar -xWf backup.tar); the CPU load was 99%
for a long time. I then stopped it and tried again, but this time this popped up in
my ssh session:
--
Message from syslogd at rock at Sat Feb 18 00:47:05 2006 ...
rock kernel: Assertion failure in __journal_unfile_buffer() at
fs/jbd/transaction.c:1520: "jh->b_jlist < 9"
--
A kernel panic dump is
2005 Apr 22
2
[2.6 patch] fs/jbd/: possible cleanups
This patch contains the following possible cleanups:
- make needlessly global functions static
- #if 0 the following unused global functions:
- journal.c: __journal_internal_check
- journal.c: journal_ack_err
- remove the following write-only global variable:
- journal.c: current_journal
- remove the following unneeded EXPORT_SYMBOL's:
- journal.c: journal_check_used_features
-
2014 Mar 26
1
host crashes "unable to handle paging request"
Hi,
we have regular crashed of a kvm host with the error "unable to handle
paging request".
Can this be due to memory over-commitment even if some memory is still used
by the kernel for caches and buffers? (collectd graph shows no free
memory, with 15G used, very little buffers, and 1G cache). There are 32GB
of swap, of which only 150MB are used.
I suspect this might be the direction to
2003 Feb 03
4
[Bug 40] system hangs, Availability problems, maybe conntrack bug, possible reason here.
https://bugzilla.netfilter.org/cgi-bin/bugzilla/show_bug.cgi?id=40
laforge@netfilter.org changed:
What    |Removed    |Added
--------|-----------|----------
Status  |NEW        |ASSIGNED
------- Additional Comments From laforge@netfilter.org 2003-02-03 16:49 -------
We haven't seen this
2002 Jul 30
1
Disk Hangs with 2.4.18 and ext3
Background:
Large NFS/mail server. Dual PIII/1GHZ. 4GB memory.
Mylex AcceleRAID 352 RAID controller (uses DAC960 driver).
Intel eepro100 network cards.
RedHat 7.3 with all errata. Kernel-2.4.18-5smp.
2GB of memory is used by a RAM disk for mail queue.
ext3 filesystems (switched to ext2 to see if that helps).
one large (100GB data partition).
2001 Oct 09
2
Assert in jbd-kernel.c
Hello. I have installed the ext3 file system on a test system, and
sometimes I have a problem: I get an assert from within jbd-kernel.c,
and whatever program was writing to the disk when this happens is unable
to continue.
The system is a server I built, which I named "dax". It is running
Debian unstable, and I updated it to all the latest packages in Debian
unstable as of today.
2009 Sep 24
1
strange fencing behavior
I have 10 servers in a cluster running Debian Etch with 2.6.26-bpo.2
with a backport of ocfs2-tools-1.4.1-1
I'm using AoE to export the drives from a Debian Lenny server in the
cluster.
My problem is that if I mount the ocfs2 partition on the server that is
exporting it via AoE, it fences the entire cluster. Looking at the logs,
exporting the ocfs2 partition doesn't give much information...
2004 Jan 26
2
Crashed kernel
http://www.sample.banga.lt/crash.gif
System - fully (except kernel) updated RedHat 7.3.
Filesystems - ext3 in default ordered mode.
What could be the cause of the crash? Would a kernel update
solve the problem?
Thanks,
Mindaugas
2009 Feb 04
1
Strange dmesg messages
Hi list,
Something went wrong this morning and we had a node (#0) reboot.
Something blocked NFS access from both nodes; one rebooted, and on the
other we restarted nfsd, which brought it back.
Looking at node #0's logs - the one that rebooted - everything seems
normal, but looking at the other node's dmesg we saw these messages:
First the o2net detected that node #0 was dead: (It
2006 Nov 28
4
how to prevent filesystem check
Hi all,
I want to setup a RAID storage system, where i have two systems connected to
it. the filesystems are mapped out to both connectors. I want the master host
mount them read write, and the slave read only.
in my fstab on the slave I have a line like the following:
/dev/sdb1 /mount ext3 acl,noauto,user_xattr,nosuid,ro 0 0
so in man 5 fstab, it is written, that when the 6. field
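For reference, the fields of that fstab line break down as follows; the sixth field (fs_passno in fstab(5)) controls the fsck order at boot, and a value of 0 tells fsck to skip the filesystem entirely. This is a sketch annotating the poster's own line, with the column headers added:

```
# <device>  <mountpoint> <type> <options>                        <dump> <pass>
/dev/sdb1   /mount       ext3   acl,noauto,user_xattr,nosuid,ro  0      0
```

With pass set to 0 the slave never runs fsck on the shared device, which is what you want when only the read-write master is allowed to repair it.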
2002 Oct 03
1
kjournald tuning
While investigating erratic performance on one of our servers,
I'm getting some very odd performance stats coming from vmstat.
What initially appeared to be happening is that the machine goes into a hard loop
in some mod_perl webserver code.
There may still be an issue with the code, but my examination of the code shows
no possible way this could be happening, but what I'm writing to you