similar to: page cache keeps growing until system runs out of memory on a MIPS platform

Displaying 20 results from an estimated 700 matches similar to: "page cache keeps growing until system runs out of memory on a MIPS platform"

2012 Apr 17
1
Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS 5.7, x86_64. The number of NFS clients is about 170. A few days ago, one of the four, with no apparent changes, stopped responding to NFS requests for two minutes every half an hour (approx). Let's call this "the hang". It has been doing this for four days now. There are no log messages of any kind pertaining
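A generic first-pass diagnostic for a hang like this (none of these commands come from the thread; they are standard Linux/NFS tooling) is to catch the server mid-hang and look for nfsd threads stuck in uninterruptible sleep:

  # run on the server during a two-minute hang window
  ps -eo state,pid,wchan:30,cmd | awk '$1 ~ /^D/'   # tasks in uninterruptible (D) sleep
  nfsstat -s                                        # server-side RPC/NFS counters
  echo t > /proc/sysrq-trigger                      # dump all task stacks to the kernel log (needs sysrq enabled)
  dmesg | tail -n 200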
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems come up with filesystem errors that took a manual 'fsck -y' after what should have been a clean reboot. This is particularly annoying on remote systems where I have to talk someone else through the recovery. Is there some time limit on the cache write with a 'reboot' (no options) command or is ext4 that
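One hedged way to bound how much unwritten data a reboot has to flush — assuming the stock Linux vm tunables, with values that are illustrative only — is to cap dirty memory and sync explicitly first:

  sysctl -w vm.dirty_background_bytes=67108864   # start background writeback at 64 MiB
  sysctl -w vm.dirty_bytes=268435456             # block writers beyond 256 MiB of dirty data
  sync                                           # flush what is already dirty
  reboot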
2013 Nov 07
0
GlusterFS with NFS client hangs up sometimes
I have the following setup with GlusterFS. Server: 4 - CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz - RAM: 32G - HDD: 1T, 7200 RPM (x 10) - Network card: 1G x 4 (bonding) OS: CentOS 6.4 - File system: XFS > Disk /dev/sda: 1997.1 GB, 1997149306880 bytes > 255 heads, 63 sectors/track, 242806 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes >
2013 Dec 30
2
oom situation
I have a continuous, unresolved OOM & panic situation. I am not sure the system fills up all the RAM (36GB). Why did this system trigger this OOM situation? Is it about some other memory? highmem? lowmem? stack size? Best Regards, Kernel 3.10.24 Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0 Dec 27 09:19:05 2013 kernel: :
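For what it's worth, order=3 means the failed allocation needed 8 physically contiguous pages (32 KiB), so fragmentation rather than total RAM is the usual suspect. A hedged way to inspect and mitigate, with an illustrative reserve value:

  cat /proc/buddyinfo                    # free blocks per order, per zone
  echo 1 > /proc/sys/vm/compact_memory   # trigger memory compaction (kernel >= 2.6.35)
  sysctl -w vm.min_free_kbytes=131072    # illustrative: enlarge the contiguous free reserve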
2011 Oct 05
1
Performance tuning questions for mail server
Hi, I have a fedora15 x86_64 host with one fedora15 guest running amavis+spamassassin+postfix and performance is horrible. The host is a quad-core E13240 with 16GB and 3 1TB Seagate ST31000524NS and all partitions are ext4. I've allocated 4 processors and 8GB of RAM to this guest. I really hoped someone could help me identify areas in which performance can be improved at both the guest and
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote: > I've had a few systems with a lot of RAM and very busy filesystems > come up with filesystem errors that took a manual 'fsck -y' after what > should have been a clean reboot. This is particularly annoying on > remote systems where I have to talk someone else through the recovery. > > Is there some time
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Hi all, I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 boxes, distributed-replicate) My testing shows the same thing -- running a find on a directory dramatically increases lstat performance. To add another clue, the performance degrades again after issuing a call to reset the system's cache of dentries and inodes: # sync; echo 2 > /proc/sys/vm/drop_caches I
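For reference, the drop_caches values select what gets discarded; the interface is standard and non-destructive, but a sync first keeps dirty pages from masking the effect:

  sync
  echo 1 > /proc/sys/vm/drop_caches   # page cache only
  echo 2 > /proc/sys/vm/drop_caches   # dentries and inodes (as used in the test above)
  echo 3 > /proc/sys/vm/drop_caches   # both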
2008 Jan 20
9
Ferret Gem Installation on Windows
Trying to install ferret on my Windows XP environment. Using InstantRails 2.0 with RoR 2.0.2, and NetBeans 6.0. I had successfully installed and built RailsSpace in InstantRails 1.7, but am trying to upgrade RailsSpace to RoR 2.0.2 using the code that Michael has kindly provided for us on the website. When I run the gem install ferret command, I get the following error: C:\Documents and
2013 Mar 02
0
Gluster-users Digest, Vol 59, Issue 15 - GlusterFS performance
----- Original Message ----- > From: gluster-users-request at gluster.org > To: gluster-users at gluster.org > Sent: Friday, March 1, 2013 4:03:13 PM > Subject: Gluster-users Digest, Vol 59, Issue 15 > > ------------------------------ > > Message: 2 > Date: Fri, 01 Mar 2013 10:22:21 -0800 > From: Joe Julian <joe at julianfamily.org> > To: gluster-users at
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
On Jan 6, 2015, at 4:28 PM, Fran Garcia <franchu.garcia at gmail.com> wrote: > > On Tue, Jan 6, 2015 at 6:12 PM, Les Mikesell <> wrote: >> I've had a few systems with a lot of RAM and very busy filesystems >> come up with filesystem errors that took a manual 'fsck -y' after what >> should have been a clean reboot. This is particularly annoying on
2010 Feb 09
3
disk I/O problems with LSI Logic RAID controller
we're having a weird disk I/O problem on a 5.4 server connected to an external SAS storage with an LSI Logic MegaRAID SAS 1078. The server is used as a Samba file server. Every time we try to copy some large file to the storage-based file system, the disk utilization see-saws up to 100%, to several seconds of inactivity, to climb up again to 100% and so forth. Here is a snip from the iostat
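A see-saw like this (bursts to 100% utilization followed by idle gaps) often tracks the kernel's dirty-writeback thresholds rather than the controller itself. A hedged check, with an illustrative device name:

  iostat -xk /dev/sdb 2                            # watch %util and request sizes over time
  grep -E '^(Dirty|Writeback):' /proc/meminfo      # dirty data waiting for writeback
  sysctl vm.dirty_ratio vm.dirty_background_ratio  # current writeback thresholds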
2012 Oct 01
3
Tuning - cache write (database)
Hi, First, sorry if this isn't the place to get this kind of help... If not, I'd appreciate some link or forum where I can try to get some answers... My problem: * Using btrfs + compression, a flush of 60 MB/s takes 4 minutes... (during those 4 minutes there is constant I/O of +-4 MB/s on the disks) (flush from an Informix database) The environment: * Virtualized environment * OpenSuse 12.1 64-bit,
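A hedged sketch of btrfs adjustments often suggested for database files — the mount point and directory are illustrative, and chattr +C (nodatacow) only affects files created after it is set on an empty directory:

  mount -o remount,compress=lzo /informix   # lighter compression than the zlib default
  chattr +C /informix/dbspace               # new files here skip copy-on-write
  lsattr -d /informix/dbspace               # verify the attribute stuck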
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report Artem, Looks like the issue is about the cache warming up. Specifically, I suspect rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a find or ls is issued, we get a 'readdirp()' request, which contains the stat information along with the entries and also makes sure the cache is up-to-date (at the md-cache layer). Note that this is just an off-the-memory
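If the md-cache theory is right, cache retention can be tuned on the volume itself. A hedged sketch with an illustrative volume name (these option names exist in Gluster releases of this era; defaults and limits may differ):

  gluster volume set gv0 performance.stat-prefetch on
  gluster volume set gv0 performance.md-cache-timeout 600
  gluster volume set gv0 performance.cache-invalidation on
  gluster volume set gv0 features.cache-invalidation on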
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated from traversing a Lustre file system will cause significant system overhead for applications with high memory demands. We have seen a 50% slowdown or worse for applications. Even High Performance Linpack, which has no file I/O whatsoever, is affected. The only remedy seems to be to empty the buffer cache from memory by running
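The remedy described — periodically emptying the buffer cache — can be automated; this is only a sketch of that workaround (the schedule and file name are illustrative), with a gentler sysctl alternative:

  # hourly cache drop via a cron.d entry (illustrative schedule)
  echo '0 * * * * root sync; echo 1 > /proc/sys/vm/drop_caches' > /etc/cron.d/drop-lustre-cache
  # or: bias reclaim against cached dentries/inodes instead of dropping everything
  sysctl -w vm.vfs_cache_pressure=200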
2013 Jul 31
11
Is the checkpoint interval adjustable?
I believe 30 sec is the default for the checkpoint interval. Is this adjustable?
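Assuming the "checkpoint" here is the btrfs transaction commit: it is adjustable via the commit= mount option (default 30 s; the option landed around kernel 3.12, so it may postdate this question). Mount point is illustrative:

  mount -o remount,commit=60 /mnt/btrfs   # commit every 60 s instead of 30
  grep btrfs /proc/mounts                 # confirm the active options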
2006 Jun 13
0
Asterisk keeps running after hangup until I press #
Hi, I'm running SER with Asterisk, and I've configured VoicemailMain like this: exten => 201,1,VoicemailMain(@default) exten => 201,2,Hangup() Although, after any user enters his VoicemailMain mailbox, when the phone is hung up, the call still continues running in Asterisk, because I can see it in the debug output of the Asterisk CLI. The call only stops if, before hanging up, I
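One hedged way to confirm the call really is lingering, using the standard Asterisk CLI invocation on reasonably recent versions (1.2-era syntax drops the "core" prefix):

  asterisk -rx "core show channels"   # lists channels still up after the hangup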
2015 Aug 19
2
Optimum Block Size to use
On 19.08.2015 at 10:24, John Hodrien <J.H.Hodrien at leeds.ac.uk> wrote: > On Wed, 19 Aug 2015, Jatin Davey wrote: > >> Hi All >> >> We use CentOS 6.6 for our application. I have profiled the application and find that we have a heavy requirement in terms of disk writes. On average, when our application operates at a certain load I can observe that the disk writes
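Before settling on a block size, it may help to measure what the application actually issues; a hedged sketch using standard tools, with an illustrative device name:

  iostat -x /dev/sda 5         # avgrq-sz = average request size, in 512-byte sectors
  blockdev --getbsz /dev/sda   # block size the kernel currently uses for the device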
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Any updates on this one? On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite <tomfite at gmail.com> wrote: > Hi all, > > I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2 > boxes, distributed-replicate) My testing shows the same thing -- running a > find on a directory dramatically increases lstat performance. To add > another clue, the performance degrades
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu or (probably) a combination of them. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
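For context, a hedged reconstruction of the kind of lvmcache writeback setup being described — the VG, LV, and device names are illustrative, not taken from the report:

  lvcreate -L 20G  -n cache0     vg0 /dev/nvme0n1   # cache data LV on the SSD
  lvcreate -L 100M -n cache0meta vg0 /dev/nvme0n1   # cache metadata LV
  lvconvert --type cache-pool --poolmetadata vg0/cache0meta \
            --cachemode writeback vg0/cache0
  lvconvert --type cache --cachepool vg0/cache0 vg0/vmdisk   # attach to the VM's backing LV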