Displaying 20 results from an estimated 700 matches similar to: "2.2.19/0.0.7a: bonnie -> VM problems"
2001 Jul 13
0
0.0.7a + rh2.2.19: help solve rejects
I get two rejects when applying the 2.2.19-ext3 patch to the latest Red Hat errata 2.2.19 kernel.
1)
fs/buffer.c
Should I put "J_ASSERT(buf->b_count > 0);" before or after " *(int *)0 = 0;"?
===== ext3 0.0.7a patch
--- 934,946 ----
if (buf->b_count) {
buf->b_count--;
+ if (!buf->b_count &&
+ (buf->b_jlist != BJ_None && buf->b_jlist
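A note on the ordering question above: in 2.2-era kernels *(int *)0 = 0; was a deliberate forced oops, so a check placed after it can never run. A minimal userspace sketch of that point only, with J_ASSERT modeled by assert() and the function and field names illustrative rather than taken from fs/buffer.c:

#include <assert.h>
#include <stdio.h>

#define J_ASSERT(expr) assert(expr)        /* stand-in for the jbd macro */

struct buffer_head { int b_count; };       /* reduced to the one field used */

static void release_buffer(struct buffer_head *buf)
{
        J_ASSERT(buf->b_count > 0);        /* the check must come first... */
        if (!buf->b_count)
                *(int *)0 = 0;             /* ...the forced oops never returns */
        buf->b_count--;
}

int main(void)
{
        struct buffer_head bh = { .b_count = 1 };
        release_buffer(&bh);
        printf("b_count = %d\n", bh.b_count);   /* prints: b_count = 0 */
        return 0;
}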
2001 Aug 23
2
EXT3 Trouble on 2.4.4
All,
I know that there is no official port to kernel 2.4.4, so I may not get any
help, but I am hoping someone could point me in the right direction for
my problem. I am currently forced to use kernel 2.4.4 for reasons outside of
my control (an embedded board).
Here are the exact versions of everything I'm running:
Ext3 Version: ext3-2.4-0.9.6-248
Util Version: util-linux-2.11f.tar.bz2
e2fs
2001 Apr 19
1
0.0.6b conflict with raid patch
Hello all,
I am trying to integrate 0.0.6b with our kernel RPM here and have come
across an interesting conflict. I want to include the RAID patch that
Red Hat includes in their kernel, but that patch includes the following
hunk:
--- linux/include/linux/fs.h.orig Tue Jan 16 13:30:09 2001
+++ linux/include/linux/fs.h Tue Jan 16 13:47:18 2001
@@ -191,6 +191,7 @@
#define BH_Req 3 /* 0 if the
2001 May 16
1
Re: [linux-lvm] lvm deadlock with 2.4.x kernel?
I think I have this one solved, I hope.
I think what Andreas and I are running into is a few different
assertions. One is the assertion caused by LVM's lvm_do_pv_flush, which is
related directly to invalidate_buffers() being called; that in turn triggers
refile_buffer() on a journaled buffer which appears clean in all other
ways according to the checks in refile_buffer().
The following is what
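A rough userspace model of the interaction described above; the names mirror the kernel's, but the logic is deliberately simplified and assumed:

#include <stdio.h>

enum jlist { BJ_None, BJ_Data, BJ_Metadata };   /* journal list tags */

struct buffer_head {
        int dirty;              /* clean by every normal check */
        enum jlist b_jlist;     /* but still held on a journal list */
};

/* Simplified refile_buffer(): a buffer still owned by the journal
 * (b_jlist != BJ_None) trips the journaling layer's consistency check
 * even though it looks clean everywhere else. */
static void refile_buffer(struct buffer_head *bh)
{
        if (bh->b_jlist != BJ_None) {
                printf("assertion: refiling a journaled buffer\n");
                return;
        }
        printf("refiled clean buffer\n");
}

/* lvm_do_pv_flush -> invalidate_buffers() ends up here regardless of
 * whether the journal still owns the buffer. */
static void invalidate_buffers(struct buffer_head *bh)
{
        refile_buffer(bh);
}

int main(void)
{
        struct buffer_head bh = { .dirty = 0, .b_jlist = BJ_Metadata };
        invalidate_buffers(&bh);   /* clean in every other way, still asserts */
        return 0;
}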
2005 Mar 17
1
ocfs seek-performance
Hi list,
I have a little problem with a 2-node RAC using OCFS. The application running on this cluster does
heavily index-based accesses. The data volumes are SAN volumes connected by Fibre Channel.
The throughput does not exceed 10 MB/s; the average is 7-8 MB/s. I've used 'iostat -x' and got rkB/s=8000
while %util=100% (the device was saturated) from the kernel's point of view.
I did some
2001 Mar 12
2
Software RAID & Ext3 v0.0.6b
I've just set up a brand new system with software RAID1 (in degraded mode)
with one 20GB IDE drive, using kernel 2.2.19pre16 with ext3 0.0.6b.
It's split like this:
32MB /dev/hda1 /boot
2GB /dev/hda2 /
~18GB /dev/hda3 /home
All partitions are marked as 0xfd (RAID autostart) with the patches from
http://people.redhat.com/mingo/raid-patches for 2.2.17. And I've made all the
ext3
2009 Oct 01
1
3-layer structure and the bonnie rewrite problem
Hello list
First of all: Good work and thanks for GlusterFS!
I'm totally new to GlusterFS, but I like it a lot and am thinking about
migrating my NFS setup completely to GlusterFS. However, I ran into some
problems with my chosen structure. Hopefully someone can help out.
The first question: I ran into some performance issues with a certain
structure/setup and would like to know (before I continue testing)
2005 Jan 09
0
[PATCH] ext3: s/0/NULL/ in pointer context
Signed-off-by: Alexey Dobriyan <adobriyan at mail.ru>
Index: linux-2.6.10-bk11-warnings/fs/ext3/inode.c
===================================================================
--- linux-2.6.10-bk11-warnings/fs/ext3/inode.c (revision 11)
+++ linux-2.6.10-bk11-warnings/fs/ext3/inode.c (revision 12)
@@ -803,7 +803,7 @@
if (create) {
handle = ext3_journal_current_handle();
- J_ASSERT(handle
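The class of warning this patch silences is easy to reproduce standalone; the snippet below is illustrative and is not the ext3 code itself:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
        char *a = 0;        /* legal C, but sparse warns: "Using plain
                               integer as NULL pointer" */
        char *b = NULL;     /* the idiomatic spelling in pointer context */

        printf("%d\n", a == b);   /* prints 1: both are null pointers */
        return 0;
}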
2005 Sep 09
7
[PATCH 0/6] jbd cleanup
The following six patches clean up the jbd code and kill about 200 lines.
The first four patches apply to 2.6.13-git8 and 2.6.13-mm2.
The rest of them apply to 2.6.13-mm2.
fs/jbd/checkpoint.c | 179 +++++++++++--------------------------------
fs/jbd/commit.c | 101 ++++++++++--------------
fs/jbd/journal.c | 11 +-
fs/jbd/revoke.c | 158
2001 Feb 01
1
one question
Hi Stephen,
I'm one of the developers of SnapFS, which is based on ext3. I got an assertion failure
from SnapFS, at ext3_new_block() in fs/ext3/balloc.c:
J_ASSERT (!test_and_set_bit(BH_Alloced, &bh->b_state))
If J_ASSERT is only used for debugging, why does it modify data?
I found that the 'BH_Alloced' flag occurs in only two places: one is in balloc.c as
above; the other is in journal_forget() in
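The concern above is the classic side-effects-in-assertions hazard: if an assertion macro can be compiled out, any state change inside it disappears with it. A standalone illustration using the standard assert() (whether J_ASSERT is actually ever compiled out in an ext3 build is a separate question, and the helper below is a toy, not the kernel's):

#include <assert.h>
#include <stdio.h>

static unsigned long flags;

/* Toy test_and_set_bit: sets the bit and returns its previous value. */
static int test_and_set_bit(int nr, unsigned long *addr)
{
        int old = (*addr >> nr) & 1;
        *addr |= 1UL << nr;
        return old;
}

int main(void)
{
        /* Built normally, the bit gets set and the assert passes.
         * Built with -DNDEBUG, assert() expands to nothing, so the
         * bit is never set at all: the side effect is lost. */
        assert(!test_and_set_bit(0, &flags));
        printf("flags = %lu\n", flags);   /* 1 normally, 0 with -DNDEBUG */
        return 0;
}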
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
Hi,
I've got a new server with a 3ware 9550SXU with the
Battery. I am using FreeBSD 7.0-Release (tried both
4BSD and ULE) using AMD64 and the 3ware performance
for writes is just plain horrible. Something is
obviously wrong but I'm not sure what.
I've got a 4 disk RAID 10 array.
According to 3dm2 the cache is on. I even tried
setting the StorSave preference to
2015 Aug 21
2
[PATCH 2/2] core/graphics: fix lss16 parsing
getnybble() needs to return four bits at a time from every byte.
During rle decode, rows are rounded to an integer number of bytes.
The rle length needs to be able to hold values > 255.
Signed-off-by: Chas Williams <3chas3 at gmail.com>
---
core/graphics.c | 35 +++++++++++++++++++++--------------
1 file changed, 21 insertions(+), 14 deletions(-)
diff --git a/core/graphics.c
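For readers without the full diff, a sketch of the behavior the first line of the commit message describes; this assumes the low nibble is returned first and is not the actual core/graphics.c code:

#include <stdio.h>

static const unsigned char data[] = { 0xAB, 0xCD };  /* stand-in input */
static unsigned int pos;
static int have_high;           /* high nibble of 'cur' still pending? */
static unsigned char cur;

/* Returns four bits per call, consuming a new byte every other call. */
static int getnybble(void)
{
        if (!have_high) {
                cur = data[pos++];
                have_high = 1;
                return cur & 0x0f;        /* low nibble first (assumed) */
        }
        have_high = 0;
        return (cur >> 4) & 0x0f;         /* then the high nibble */
}

int main(void)
{
        for (int i = 0; i < 4; i++)
                printf("%x ", getnybble());   /* prints: b a d c */
        printf("\n");
        return 0;
}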
2003 Nov 16
1
Bug in 2.6.0-9
Assertion failure in journal_add_journal_head() at fs/jbd/journal.c:1679
: "(((&bh->b_count)->counter) > 0) || (bh->b_page && bh->b_page->mapping)"
------------[ cut here ]------------
kernel BUG at fs/jbd/journal.c:1679!
invalid operand: 0000 [#2]
CPU: 0
EIP: 0060:[<c017637f>] Not tainted
EFLAGS: 00010282
EIP is at
2001 Mar 30
1
Re: Bug in __invalidate_buffers?
I previously wrote:
> OK, my previous patch cleans up the ASSERT for invalidate_buffers()
> (modulo the fact that it was missing a ')' at the end of the line)
> but it hasn't really fixed the whole problem. If a file write is in
> progress when invalidate_buffers() is called, I get an oops:
> The oops is caused by __invalidate_buffers() calling put_last_free(bh)
>
2001 Mar 29
1
Re: Bug in __invalidate_buffers?
I previously wrote:
> I have come across what appears to be a bug in __invalidate_buffers()
> w.r.t. the change in ext3-0.0.6 using BH_JDirty instead of BH_Dirty
> for buffers held in the journal. If invalidate_buffers() is called
> on a device (LVM likes to do this a lot, for whatever reason), it yanks
> JDirty buffers out from underneath the journal layer, and causes an
> oops
2003 Jan 18
2
[patch 2.4] Fix ext3 scheduling storm and lockup
This patch fixes an inefficiency and potential system lockup in the 2.4
kernel's ext3 filesystem. The problem has been present since 2.4.20-pre5.
This patch is applicable to 2.4.20. A copy is at
http://www.zip.com.au/~akpm/linux/patches/2.4/2.4.20/ext3-scheduling-storm.patch
Anyone running tasks with realtime scheduling policy on ext3
systems should apply this change.
2003 Jun 13
1
jbd count incremented *even* if volume is mounted RO?
Continuing on from my earlier post . . . after looking
through the JBD code, is the following perhaps the
reason why the md5 values differ:
when a journalled filesystem that uses JBD is mounted,
the journal's b_count is incremented by one?
*EVEN* if the volume was mounted read-only, this
b_count is still increased by one?
curious as ever!
lt
2001 Jul 12
1
ext3 0.9.1 doubt
Hi,
I found the following suspect-looking gem in
ext3-2.4-0.9.1-246.gz. Is this supposed to compile,
or is it just a typo?
+enum jbd_state_bits {
+ BH_JWrite
+ = BH_PrivateStart, /* 1 if being written to log (@@@ DEBUGGING) */
+ BH_Freed, /* 1 if buffer has been freed (truncated) */
Rik
--
Virtual memory is like a game you can't win;
However, without
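On the 0.9.1 question above: the line break before '=' is legal C, so the fragment should compile as posted. A standalone reconstruction for checking (BH_PrivateStart's value here is invented for the demo; the real one lives in the kernel's buffer-head flag enum):

#include <stdio.h>

enum { BH_PrivateStart = 16 };  /* placeholder value for this demo only */

enum jbd_state_bits {
        BH_JWrite
                = BH_PrivateStart,  /* 1 if being written to log */
        BH_Freed,                   /* 1 if buffer has been freed */
};

int main(void)
{
        printf("BH_JWrite=%d BH_Freed=%d\n", BH_JWrite, BH_Freed);
        /* prints: BH_JWrite=16 BH_Freed=17 */
        return 0;
}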
2001 Jan 19
1
Assertion failure in journal.c
Hi,
While doing some stress testing with the presto module from the InterMezzo project layered over ext3, I got the
following assertion failure:
Jan 17 23:09:55 planck kernel: Assertion failure in jfs_prelock_buffer_check() at journal.c line 410:
"bh->b_jlist == 0 || bh->b_jlist == BJ_LogCtl || bh->b_jlist == BJ_IO || bh->b_jlist == BJ_Data"
Jan 17 23:09:55 planck kernel:
2014 Mar 18
3
Tar Compression issue
I have a file server running CentOS 5.10. It's on the internet, so I compress all the CSV
files into one file using (tar -czvf compressed_files.tar.gz *.csv) on this
server so that I can download them as one compressed file to save bandwidth.
The available disk space on this server is 50 GB. When I copy the files onto
Red Hat EL 5.9 and decompress them using (tar -zxvf *.gz), it decompresses
maybe 80% and then I get an error: