search for: vfs_cache_pressur

Displaying 12 results from an estimated 12 matches for "vfs_cache_pressur".

2008 Dec 04
1
page cache keeps growing until the system runs out of memory on a MIPS platform
...eats up about 100MB and occasionally the system runs out of memory. I even tried tweaking the /proc/sys/vm settings with the following values, but it did not help. /proc/sys/vm/dirty_background_ratio = 2 /proc/sys/vm/dirty_ratio = 5 /proc/sys/vm/dirty_expire_centisecs = 1000 /proc/sys/vm/vfs_cache_pressure = 10000 I also tried copying the huge file locally from one folder to another through the USB interface using the dd oflag=direct flag (unbuffered write). But the page cache again ate away about 100MB of RAM. Has anybody here seen the same problem? Is there a possible fix? I believe it's more to do...
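A minimal sketch of what the poster describes, assuming a root shell; the tunable values are the ones quoted above, while the file paths and block size are placeholders:

  # VM writeback/reclaim tunables the poster reports trying
  echo 2     > /proc/sys/vm/dirty_background_ratio
  echo 5     > /proc/sys/vm/dirty_ratio
  echo 1000  > /proc/sys/vm/dirty_expire_centisecs
  echo 10000 > /proc/sys/vm/vfs_cache_pressure

  # Unbuffered copy over the USB disk, bypassing the page cache on the write side
  dd if=/mnt/flash/bigfile of=/mnt/usb/bigfile oflag=direct bs=1M
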
2018 Feb 05
0
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
...system cache that's actually doing the heavy lifting here. There are a couple of sysctl tunables that I've found help out with this. See here: http://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/ Contrary to what that doc says, I've found that setting vm.vfs_cache_pressure to a low value increases performance by allowing more dentries and inodes to be retained in the cache. # Set the swappiness to avoid swap when possible. vm.swappiness = 10 # Set the cache pressure to prefer inode and dentry cache over file cache. This is done to keep as many # dentries and inode...
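As a sketch, the settings described above would look like this in /etc/sysctl.conf; vm.swappiness = 10 is the value quoted in the post, while the vfs_cache_pressure value of 50 is only an illustrative "low value", since the excerpt is cut off before giving a number:

  # Avoid swap when possible (value from the post).
  vm.swappiness = 10

  # Prefer retaining dentry/inode cache over plain page cache.
  # The kernel default is 100; lower values keep more dentries and inodes
  # cached. 50 is an assumed example, not taken from the post.
  vm.vfs_cache_pressure = 50

Apply with sysctl -p (or a reboot).
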
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report, Artem. Looks like the issue is about the cache warming up. Specifically, I suspect rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a find or ls is issued we get a 'readdirp()' request, which carries the stat information along with the entries and also makes sure the cache is up to date (at the md-cache layer). Note that this is just an off-the-memory
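The work-around implied by the thread subject is to warm that cache with a recursive scan before rsync starts. A hedged sketch, assuming the volume is mounted at /mnt/glustervol (a placeholder path):

  # Trigger readdirp() over the whole tree so entry + stat data is cached
  # before rsync begins its readdir()/stat() loop.
  ls -lR /mnt/glustervol > /dev/null
  # (a plain 'find /mnt/glustervol > /dev/null' is the other variant named
  # in the thread subject)
  rsync -a /local/src/ /mnt/glustervol/dest/
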
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated from traversing a Lustre file system will cause significant system overhead for applications with high memory demands. We have seen a 50% slowdown or worse for applications. Even High Performance Linpack, which has no file I/O whatsoever, is affected. The only remedy seems to be to empty the buffer cache from memory by running
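The excerpt is cut off before the actual command; the standard Linux interface for emptying the buffer cache is /proc/sys/vm/drop_caches, so the remedy was presumably something along these lines (an assumption, not quoted from the post):

  sync                                # flush dirty pages first
  echo 3 > /proc/sys/vm/drop_caches   # 1 = page cache, 2 = dentries/inodes, 3 = both
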
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
...heavy lifting here. There are a couple of sysctl tunables that I've > found help out with this. > > See here: > http://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/ > > Contrary to what that doc says, I've found that setting > vm.vfs_cache_pressure to a low value increases performance by allowing more > dentries and inodes to be retained in the cache. > > # Set the swappiness to avoid swap when possible. > vm.swappiness = 10 > > # Set the cache pressure to prefer inode and dentry cache over file cache. > This is done to...
2013 Nov 07
0
GlusterFS with NFS client hang up some times
...> Units = cylinders of 16065 * 512 = 8225280 bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > Disk identifier: 0x000efb6f - GlusterFS: 3.4.0-8.el6 - Sysctl.conf: > vm.swappiness = 0 > vm.vfs_cache_pressure = 1000 > net.core.rmem_max = 4096000 > net.core.wmem_max = 4096000 > net.ipv4.neigh.default.gc_thresh2 = 2048 > net.ipv4.neigh.default.gc_thresh3 = 4096 > vm.dirty_background_ratio = 1 > vm.dirty_ratio = 16 I use only default config for GlusterFS (follow...
2009 Oct 16
3
Nice little performance improvement
...is building its file list, the du is warming up the file cache on the destination. Then when rsync looks to see what it needs to do on the destination, it can do this more efficiently. Looks like a keeper so far. Any other suggestions? (was thinking of a previous suggestion of setting /proc/sys/vm/vfs_cache_pressure to a low value). Thanks, Mike
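A sketch of the trick described: run a du over the destination tree while rsync is still building its file list, so the destination's dentry/inode cache is already warm by the time rsync starts comparing; the paths and the cache-pressure value here are placeholders, not from the post:

  # Warm the destination's directory/inode cache in the background
  du -s /backup/dest > /dev/null &

  # Optional follow-up to the suggestion above: keep those entries cached
  # longer (50 is an illustrative value, not one given in the thread)
  sysctl -w vm.vfs_cache_pressure=50

  rsync -a /data/src/ /backup/dest/
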
2011 Oct 05
1
Performance tuning questions for mail server
...ts = 0 net.ipv4.conf.default.send_redirects = 0 net.ipv4.icmp_ignore_bogus_error_responses = 1 net.ipv4.conf.all.log_martians = 0 net.ipv4.conf.default.log_martians = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.conf.default.accept_redirects = 0 vm.vfs_cache_pressure = 35 vm.nr_hugepages = 512 net.ipv4.tcp_max_syn_backlog = 2048 fs.aio-max-nr = 1048576 vm.dirty_background_ratio = 3 vm.dirty_ratio = 40 After making changes, do you have any recommendations on which tools to use to monitor those changes and see how they perform? I have noatime set in fstab in t...
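The monitoring question is left open in the excerpt; tools commonly used to watch the effect of these VM tunables include the following (offered as general suggestions, not as what the thread recommended):

  slabtop -o | head -20                     # dentry / inode slab sizes (vfs_cache_pressure)
  vmstat 5                                  # swap activity, free memory, block I/O over time
  sar -r 5                                  # memory utilisation history (sysstat package)
  grep -E 'Dirty|Writeback' /proc/meminfo   # writeback behaviour (dirty_*_ratio)
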
2008 Feb 01
2
Aplication slow after migration
Hi, everybody! I have been using Samba on Debian for years and I have recently migrated my file server from version 3.0.14a-3sarge2 to 3.0.24-6etch4. One of our applications stores its data in a shared folder. The data is spread over more than 29000 files of about 1k-40k each, and the application is much slower when it runs on the new server. I have thoroughly reviewed both smb.conf files, but can't see
2010 Apr 22
1
Odd behavior
Hi Y'all, I'm seeing some interesting behavior that I was hoping someone could shed some light on. Basically I'm trying to rsync a lot of files, in a series of about 60 rsyncs, from one server to another. There are about 160 million files. I'm running 3 rsyncs concurrently to increase the speed, and as each one finishes, another starts, until all 60 are done. The machine
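A hedged sketch of the scheme described (three rsyncs at a time, a new one starting as each finishes), assuming the ~60 jobs are one directory each and that GNU xargs is available; the directory list, paths and host name are placeholders:

  # dirs.txt lists the ~60 directories to copy, one per line
  xargs -P 3 -I{} rsync -a /data/{}/ backuphost:/data/{}/ < dirs.txt
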
2011 Nov 10
13
dom0 - oom-killer - memory leak somewhere ?
Hello, I work in a hosting company; we have tens of Xen dom0s running just fine, but unfortunately we do have a few that get out of control. Reported behaviour: - dom0 uses more and more memory - no process can be found using that memory - at some point, the oom killer kicks in and kills everything, until even ssh'ing into the box becomes hard - when there are really no more processes to kill, it crashes
2013 Dec 30
2
oom situation
...r_hugepages = 0 vm.nr_overcommit_hugepages = 0 vm.nr_pdflush_threads = 0 vm.overcommit_memory = 0 vm.overcommit_ratio = 50 vm.page-cluster = 3 vm.percpu_pagelist_fraction = 0 vm.scan_unevictable_pages = 0 vm.stat_interval = 1 vm.swappiness = 30 vm.user_reserve_kbytes = 131072 vm.vdso_enabled = 1 vm.vfs_cache_pressure = 100 # ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 292370 max locked memory (kbytes, -l) 64 max memory size...