similar to: ext3 and data=journal bug

Displaying 20 results from an estimated 300 matches similar to: "ext3 and data=journal bug"

2002 Dec 06
2
[patch] fix the ext3 data=journal unmount bug
This patch fixes the data loss which can occur when unmounting a data=journal ext3 filesystem. The core problem is that the VFS doesn't tell the filesystem enough about what is happening. ext3 _needs_ to know the difference between regular memory-cleansing writeback and writeback for data-integrity purposes. (These two operations are really quite distinct, and the kernel has got it wrong for
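
The distinction described here is what became the sync_fs super operation in later kernels; its wait argument tells the filesystem whether the caller needs data-integrity guarantees or is merely cleansing memory. A minimal sketch of the idea, using hypothetical myfs_* helpers (this is not the actual ext3 code):

    #include <linux/fs.h>

    /* Hypothetical helpers, invented for illustration. */
    static void myfs_start_commit(struct super_block *sb);
    static int myfs_wait_for_commit(struct super_block *sb);

    /*
     * Sketch: a journalling filesystem using the sync_fs() super
     * operation to tell data-integrity syncs apart from ordinary
     * memory-cleansing writeback.
     */
    static int myfs_sync_fs(struct super_block *sb, int wait)
    {
        /* Always start committing the running transaction. */
        myfs_start_commit(sb);

        if (wait)
            /*
             * sync(2) or unmount: do not return until the commit is
             * on stable storage, otherwise data=journal data can be
             * lost at unmount.
             */
            return myfs_wait_for_commit(sb);

        /* Memory-cleansing writeback: starting a commit is enough. */
        return 0;
    }

    static const struct super_operations myfs_super_ops = {
        .sync_fs = myfs_sync_fs,
    };

The key point is that only the wait case needs to block on the journal commit; plain writeback can return as soon as the commit has been started.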
2002 Dec 15
1
ext3 updates for 2.4.20
There are three patches at http://www.zip.com.au/~akpm/linux/patches/2.4/2.4.20/
sync_fs.patch: Fix the ext3 data=journal data-loss-on-unmount bug
sync_fs-fix.patch: Fix sync_fs.patch to not deadlock the fs when running `mount -o remount' against a heavily loaded filesystem.
ext3-use-after-free.patch: Fix a use-after-free bug which can cause memory corruption if the filesystem runs
2003 Apr 08
2
nasty ext3 problem
Using kernel 2.4.20 with the following patches: ext3-scheduling-storm.patch, ext3-use-after-free.patch, sync_fs-fix-2.patch, sync_fs-fix.patch, sync_fs.patch. (Note: this problem started happening before applying the patches.) I have a small partition for / since I don't have much there.
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda6 303344 98934
2004 Feb 05
3
increasing ext3 or io responsiveness
Our Invoice posting routine (intensive hard-drive I/O) freezes every few seconds to flush the cache. Reading this: https://listman.redhat.com/archives/ext3-users/2002-November/msg00070.html I decided to try:
# elvtune -r 2048 -w 131072 /dev/sda
# echo "90 500 0 0 600000 600000 95 20 0" >/proc/sys/vm/bdflush
# run_post_routine
# elvtune -r 128 -w 512 /dev/sda
# echo "30 500 0 0
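
For reference, the same before/after tuning can be driven from the posting program itself rather than a shell. A sketch, assuming the 2.4-era nine-field /proc/sys/vm/bdflush interface; the field meanings below are paraphrased from 2.4's Documentation/sysctl/vm.txt (positions 3, 4 and 9 were unused there), so verify them against your own kernel:

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Write a nine-field string to /proc/sys/vm/bdflush (2.4-era
     * interface). The fields are roughly: nfract, ndirty, (unused),
     * (unused), interval, age_buffer, nfract_sync,
     * nfract_stop_bdflush, (unused).
     */
    static int set_bdflush(const char *values)
    {
        FILE *f = fopen("/proc/sys/vm/bdflush", "w");
        if (!f)
            return -1;
        fprintf(f, "%s\n", values);
        return fclose(f);
    }

    int main(void)
    {
        /* Let dirty buffers accumulate during the batch run... */
        if (set_bdflush("90 500 0 0 600000 600000 95 20 0") != 0) {
            perror("bdflush");
            return EXIT_FAILURE;
        }
        /* ...run the posting routine here, then restore the defaults. */
        return EXIT_SUCCESS;
    }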
2002 Jun 21
0
ext3 and bdflush tweaking
I apologize if this is off-topic, because I'm not sure if /proc/sys/vm/bdflush has anything to do with ext3 performance or not. However, I've been searching around for a while and can't find the answer I need. If somebody could shed some light, I'd really appreciate it. In "Securing and Optimizing Linux: RedHat Edition - A Hands on Guide", the author gives these values to
2003 Mar 20
2
[Patch] ext3_journal_stop inode access
Hi Andrew, The patch below addresses the problem we were talking about earlier where ext3_writepage ends up accessing the inode after the page lock has been dropped (and hence at a point where it is possible for the inode to have been reclaimed). Tested minimally (it builds and boots). It makes ext3_journal_stop take an sb, not an inode, as its final parameter. It also sets
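
In other words (illustrative signatures only, not the literal 2.4 patch):

    #include <linux/fs.h>
    #include <linux/jbd.h>  /* handle_t */

    /*
     * Before (unsafe): the final parameter was the inode, which may
     * already have been reclaimed by the time the handle is stopped,
     * because the page lock has been dropped:
     *
     *     int ext3_journal_stop(handle_t *handle, struct inode *inode);
     *
     * After (safe): the final parameter is the super_block, which is
     * pinned for as long as the filesystem stays mounted:
     */
    int ext3_journal_stop(handle_t *handle, struct super_block *sb);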
2009 Jan 24
2
[PATCH] btrfs: flushoncommit mount option
Hi Chris- Here's a simpler version of the patch that drops the unrelated sync_fs stuff. thanks- sage The 'flushoncommit' mount option forces any data dirtied by a write in a prior transaction to commit as part of the current commit. This makes the committed state a fully consistent view of the file system from the application's perspective (i.e., it
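
For anyone wanting to experiment once this lands: the option is passed like any other btrfs mount option, e.g. mount -o flushoncommit. A small sketch using mount(2), with placeholder device and mountpoint paths:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Placeholder paths -- adjust for your system. */
        if (mount("/dev/sdb1", "/mnt/btrfs", "btrfs", 0,
                  "flushoncommit") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }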
2002 Jun 06
2
More ext3 fileserver woes ...
Well... you might remember that I have had problems with my NFS fileserver that runs ext3 with data=journal. The filesystem corruption now seems to be solved with the patch (plus amendment) that I posted, so I am happy about that... but there is more. I have known for a while that ext3 doesn't behave very well when the journal fills up. If it finds that the journal is full, and the
2001 Aug 29
1
kupdated, bdflush and kjournald stuck in D state on RAID1 device (deadlock?)
(Sent to linux-raid, linux-kernel and ext3-users since I'm not sure what type of issue this is.) I've got a test system here running Red Hat 7.1 + stock 2.4.9 with these patches:
http://www.fys.uio.no/~trondmy/src/2.4.9/linux-2.4.9-NFS_ALL.dif
http://www.zip.com.au/~akpm/ext3-2.4-0.9.6-249.gz
http://domsch.com/linux/aacraid/linux-2.4.9-aacraid-20010816.patch
All three patches applied
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controller Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to
2003 May 01
3
Performance problem with mysql on a 3ware 1+0 raid array
Hi all, We are observing a consistent interval of about 4 minutes at which there are large sustained writes to disk that cause mysqld to block and not respond for the entire period. We are using data=journal with a 128M journal and the filesystem is 150GB in size. We get about 300 KB/sec in writes, and that jumps to about 2000 KB/sec during the periods of large sustained writes. Those
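
A rough back-of-envelope check on those numbers (ours, not the poster's): 300 KB/sec sustained over a 4-minute interval is about 300 x 240 = 72,000 KB, or roughly 70 MB, and with data=journal all of that data passes through the 128 MB journal first. Each interval therefore dirties more than half the journal, which is consistent with the journal itself forcing the periodic flush storms.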
2002 Dec 15
2
problem with Andrew's patch ext3
Hello Andrew, I patched 2.4.20 with your patch found at http://lwn.net/Articles/17447/ and I have a big problem: once the server is booted on 2.4.20 with your patch, when I want to reboot with /sbin/reboot, the server gets a segmentation fault and crashes. I tested it on 50-60 servers and it is the same problem. I tested kernel 2.4.20 without your patch: no problem.
# uname -a
Linux XXXXXX
2009 Jul 20
1
[PATCH] ocfs2: flush dentry lock drop when sync ocfs2 volume.
In commit ea455f8ab68338ba69f5d3362b342c115bea8e13, we moved the dentry lock put process into ocfs2_wq. This is OK for most cases, but for umount it leads to at least 2 bugs. See http://oss.oracle.com/bugzilla/show_bug.cgi?id=1133 and http://oss.oracle.com/bugzilla/show_bug.cgi?id=1135. And it happens easily if we have opened a lot of inodes. For 1135, the reason is that umount will call
2002 Nov 21
2
/proc/sys/vm/bdflush
I'm lacking some understanding of how to tune /proc/sys/vm/bdflush, and when to tune it. Where can I read up on this? Our current problem: load is low, but every so often the system decides to do some serious disk I/O, which causes all processes to wait for disk I/O -- load explodes (rises linearly up into the 20s-30s) only to fall linearly right after that. We think there might be some
2003 Apr 02
1
Kernel lockup (kjournald?)
I am seeing an odd situation when backing up a number of ext3 filesystems and was wondering if it could be caused by journalling. Over the space of a minute the load average will jump from 2 to over 40 and the system will be unresponsive for anywhere from 8 to 25 minutes. I am going to be trying a number of things, but was wondering if anyone could see the reason for the high load given the
2008 Feb 22
1
[PATCH] IGET: Remove initialisation of read_inode() super op from BTRFS
Remove the initialisation of the read_inode() super op from BTRFS as it has been dropped. Signed-off-by: David Howells <dhowells@redhat.com>
---
 fs/btrfs/super.c | 1 -
 1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index a46300c..612a34f 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -462,7 +462,6 @@ static struct
2004 Sep 04
0
[PATCH] remove ocfs_put_inode
This doesn't do anything but noisy debug printks anymore.
Index: src/super.c
===================================================================
--- src/super.c (revision 1426)
+++ src/super.c (working copy)
@@ -196,9 +196,7 @@
 static struct super_operations ocfs_sops = {
     .statfs = ocfs_statfs,
-    .put_inode = ocfs_put_inode,
     .clear_inode = ocfs_clear_inode,
-    //put_inode =
2002 May 20
1
ext3 buffer leak/memory leak?
Hi, I am a new ext3 user and I am having some problems. I seem to have introduced a memory leak after adding ext3 support to the kernel. I noticed when running top or viewing /proc/meminfo that my free memory pool seems to be decreasing while my buffers are increasing (at around the same rate). I am currently using a root partition and a /var partition. I have listed the ext3 boot messages below.
2002 Nov 25
3
Ordered vs. journal real-world performance
Maybe I should've started a new thread with this question (it was in the /proc/sys/vm/bdflush thread), so I am doing so now :) According to tests performed for this article: http://www-106.ibm.com/developerworks/linux/library/l-fs8/ "ext3's data=journal mode is incredibly well-suited to situations where data needs to be read from and written to disk at the same time." This is the
2003 Jul 27
0
data=journal with large external journals
I have a heavily loaded Apache 2 server which is experiencing what appear to be "write storms". The server is a dual Xeon box with hyperthreading, and appears to Linux as a four-CPU box. It has 4GB of physical RAM and is never hitting swap. It's serving a large number of static image files off four 135GB SCSI drives, with external journals on a fifth volume. The journal volume