Displaying 9 results from an estimated 10 matches for "buffercache".

2002 Jan 30
2
buffered memory grows
Hi. I have shifted from Red Hat 7.1 to 7.2, and have several machines running both versions now. I have noticed that the memory usage pattern is very different on machines using ext2 and ext3 - the ones using ext2 usually use 5-10 MB of "buff" memory, but the ones with ext3 grow to 50 MB on machines with 128 MB, and to 250 MB on machines with 512 MB. I have conducted a test, and changed
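For context, the "buff" figure being tracked here is the Buffers: line in /proc/meminfo, which is also where free(1) and top(1) get their numbers. A minimal C sketch of one way to sample it (assumes nothing beyond a Linux /proc):

/* Print the "Buffers:" line from /proc/meminfo -- the same value
 * free(1) reports as "buff" memory. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];

    if (!f) {
        perror("fopen /proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "Buffers:", 8) == 0) {
            fputs(line, stdout);   /* e.g. "Buffers:   51200 kB" */
            break;
        }
    }
    fclose(f);
    return 0;
}

Running this before and after a heavy ext3 write load makes the growth the poster describes directly visible.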
2002 May 21
4
Bad directories appearing in ext3 after upgrade 2.4.16 -> 2.4.18+cvs
...been browser caches, so no real data has been lost (I think), but it is worrisome. I will probably revert to 2.4.16 plus the relevant bits of the CVS patch hand-applied. But I wonder if anyone else has seen this or has any idea what might be happening? The directories haven't been moved from buffercache to pagecache between 2.4.16 and 2.4.18 or anything like that, have they? Possibly related... My ext3 filesystem is on a raid5 array, with the journal on a separate raid1 array (data=journal mode). I get quite a few messages in the logs which say: May 21 14:20:06 glass kernel: raid5: mul...
2003 Nov 13
2
Disappointing Performance Using 9i RAC with OCFS on Linux
...03 17:40 To: Kevin Miller Cc: ocfs-devel@oss.oracle.com; ocfs-users@oss.oracle.com Subject: Re: [Ocfs-users] Disappointing Performance Using 9i RAC with OCFS on Linux yes there is a very easy explanation for this: you do selects, so read-only. Every access to a local filesystem goes through the Linux buffercache (same with Solaris), so if you have enough memory, or you have a small database and plenty of free RAM, all that free RAM gets used as a free cache; reads are basically going to be buffered reads. For OCFS and raw, the reads have to also come from disk. What the "right" test would be is to...
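The reply's point is that buffered reads can be served entirely from RAM, so a fair read benchmark has to take the buffercache out of the picture. A minimal sketch of one way to do that on Linux, using O_DIRECT (the test-file path is hypothetical, and O_DIRECT requires buffer/offset/length alignment, typically to 512 bytes or the filesystem block size):

/* Read one block with O_DIRECT so it must come from disk,
 * bypassing the Linux buffercache. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK 4096

int main(void)
{
    void *buf;
    ssize_t n;
    int fd = open("/mnt/test/datafile", O_RDONLY | O_DIRECT);  /* hypothetical path */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (posix_memalign(&buf, BLOCK, BLOCK) != 0) {  /* O_DIRECT needs an aligned buffer */
        fprintf(stderr, "posix_memalign failed\n");
        close(fd);
        return 1;
    }
    n = read(fd, buf, BLOCK);   /* served from disk, not from cache */
    printf("read %zd bytes uncached\n", n);
    free(buf);
    close(fd);
    return 0;
}

With reads done this way, or with a data set larger than RAM, OCFS/raw and a local filesystem end up on equal footing.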
2001 Nov 03
1
Patch for kernel 2.4.14pre7
...*** 115,120 ****
  		set_page_dirty(page);
  		goto drop_pte;
  	}
  	/*
  	 * Check PageDirty as well as pte_dirty: page may
  	 * have been brought back from swap by swapoff.
--- 114,129 ----
  		set_page_dirty(page);
  		goto drop_pte;
  	}
+
+ 	if (page->buffers) {
+ 		/*
+ 		 * Anonymous buffercache page left behind by
+ 		 * truncate.
+ 		 */
+ 		printk(__FUNCTION__ ": page has buffers!\n");
+ 		goto preserve;
+ 	}
+
  	/*
  	 * Check PageDirty as well as pte_dirty: page may
  	 * have been brought back from swap by swapoff.

Dirk
2001 Oct 11
4
ext3 0.9.12 for 2.4.10-ac11
...s to a lot of kernel warnings when mounting a bad filesystem or a fs with errors
- Make sure we set the error flag both in the journal and fs superblocks on error (unless we're doing panic-on-error)

0.9.9
-----
- Fix the buffer-already-revoked assertion failure by looking up an aliased buffercache buffer and clearing the revoke bits in there as well as in the journalled data buffer.
- Reorganise page truncation code so we don't take the address of block_flushpage(). This is to simplify merging with Andrea's O_DIRECT patch, which turns block_flushpage() into a macro.

0.9.10...
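The block_flushpage() item in 0.9.9 turns on a plain C rule: a function has an address, a function-like macro does not, so any code holding a pointer to it must be reorganised to call it directly. A toy sketch of the problem (names invented, not the ext3 code):

#include <stdio.h>

/* Before: a real function -- taking its address works. */
int flush_fn(int page) { return page + 1; }

/* After: a function-like macro, as Andrea's O_DIRECT patch makes
 * block_flushpage(). A macro has no address, so
 *     int (*fp)(int) = flush_macro;   // would not compile
 * and callers must invoke it directly instead. */
#define flush_macro(page) ((page) + 1)

int main(void)
{
    int (*fp)(int) = flush_fn;   /* fine: function pointer */
    printf("%d %d\n", fp(1), flush_macro(1));
    return 0;
}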
2001 Sep 07
4
ext3-2.4-0.9.9
...s to a lot of kernel warnings when mounting a bad filesystem or a fs with errors
- Make sure we set the error flag both in the journal and fs superblocks on error (unless we're doing panic-on-error)

0.9.9
-----
- Fix the buffer-already-revoked assertion failure by looking up an aliased buffercache buffer and clearing the revoke bits in there as well as in the journalled data buffer.
- Reorganise page truncation code so we don't take the address of block_flushpage(). This is to simplify merging with Andrea's O_DIRECT patch, which turns block_flushpage() into a macro.
-
2003 Nov 13
1
E-Business 11i.9 and RAC 9.2.0.3
Hi, I am currently involved in a project with E-Business 11i.9. For the production environment, the customer wants to implement load balancing and failover - both middle tier and database tier, the latter with RAC. As E11i creates lots of tablespaces, the best way seems to be OCFS. We installed & configured OCFS partitions. The next step was to install E11i multi tier, single instance - already
2012 Oct 17
0
cgroup blkio.weight working, but not for KVM guests
...5,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 For fun I tried a few different cache options to try to force a bypass of the host buffercache, including writethrough and directsync, but the number of virtio kernel threads appeared to explode (especially for directsync) and the throughput dropped quite low: ~50% of "none" for writethrough and ~5% for directsync. With cache=none, when I generate write loads inside the VMs, I...
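For reference, blkio.weight in the cgroup v1 blkio controller is just a file. A minimal C sketch of setting it, assuming the controller is mounted at /sys/fs/cgroup/blkio and a hypothetical group named "kvm-guest" (valid weights are 100-1000, root is required, and the weight only shapes I/O that actually reaches the host's disk scheduler - which is why cache=none matters here):

#include <stdio.h>

int main(void)
{
    /* Hypothetical cgroup, created e.g. by mkdir under the blkio mount. */
    const char *path = "/sys/fs/cgroup/blkio/kvm-guest/blkio.weight";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("fopen blkio.weight");
        return 1;
    }
    fprintf(f, "1000\n");   /* maximum proportional weight */
    fclose(f);
    return 0;
}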
2005 Jan 13
3
rsyncd.conf: "timeout=<minimal>" crazyness
....3 GB), and debian-30r4 (lots of GBs too), the latter with a huge amount of pure "ethical" acceptance, regardless of functionality; the former with additional aspects of affinity... I guess there is no real machine in this world capable of holding all this extremely hot stuff in buffercache at once, so all the servers serving SUSE plus Debian (at least those) are stressed with disk I/O like never before, like me. You can NOT understand what I say if you only have a 100 MBit/sec connection to the internet, and I beg you to imagine what would happen to you if you had "unlimite...