search for: dirty_ratio

Displaying 20 results from an estimated 27 matches for "dirty_ratio".

2016 Dec 07
1
vm.dirty_ratio
...> > -----Original Message----- > From: CentOS-virt [mailto:centos-virt-bounces at centos.org] On Behalf Of Gokan > Atmaca > Sent: Wednesday, December 7, 2016 9:16 PM > To: Discussion about the virtualization on CentOS <centos-virt at centos.org> > Subject: [CentOS-virt] vm.dirty_ratio > > Hello > > I get the following error on the server. The problem is resolved when you > restart. But is this causing the problem? > > messages_log: > INFO: task jbd2/dm-1-8:674 blocked for more than 120 seconds echo 0 > > /proc/sys/kernel/hung_task_timeout_secs >...
2016 Dec 07
2
vm.dirty_ratio
Hello I get the following error on the server. The problem is resolved when you restart. But is this causing the problem? messages_log: INFO: task jbd2/dm-1-8:674 blocked for more than 120 seconds echo 0 > /proc/sys/kernel/hung_task_timeout_secs Thanks.
2016 Dec 07
0
vm.dirty_ratio
...r log contained OOM kill? Xlord -----Original Message----- From: CentOS-virt [mailto:centos-virt-bounces at centos.org] On Behalf Of Gokan Atmaca Sent: Wednesday, December 7, 2016 9:16 PM To: Discussion about the virtualization on CentOS <centos-virt at centos.org> Subject: [CentOS-virt] vm.dirty_ratio Hello I get the following error on the server. The problem is resolved when you restart. But is this causing the problem? messages_log: INFO: task jbd2/dm-1-8:674 blocked for more than 120 seconds echo 0 > /proc/sys/kernel/hung_task_timeout_secs Thanks. ______________________________________...
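For context on the thread above: the jbd2 "blocked for more than 120 seconds" warning usually means the journal thread sat behind a large writeback backlog. A minimal sketch of the usual mitigation, lowering the dirty-page thresholds so writeback starts earlier and stalls stay shorter (the values are only examples, not taken from the thread):

  # check the current limits
  sysctl vm.dirty_ratio vm.dirty_background_ratio
  # start background writeback sooner and cap dirty memory lower (example values)
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10
  # add the same lines to /etc/sysctl.conf to persist them across reboots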
2011 Jun 09
4
Possible to use multiple disk to bypass I/O wait?
I'm trying to resolve an I/O problem on a CentOS 5.6 server. The process basically scans through Maildirs, checking for space usage and quota. Because there are a hundred-odd user folders and several tens of thousands of small files, this sends the I/O wait % way high. The server hits a very high load level and stops responding to other requests until the crawl is done. I am wondering if I add
2012 Apr 17
1
Help needed with NFS issue
...d. Disk volumes being served are RAID-5 sets with write-back cache enabled (BBU is good). RAID controller logs are free of errors. NFS servers used dual bonded gigabit links in balance-alb mode. Turning off one interface in the bond made no difference. Relevant /etc/sysctl.conf parameters: vm.dirty_ratio = 50 vm.dirty_background_ratio = 1 vm.dirty_expire_centisecs = 1000 vm.dirty_writeback_centisecs = 100 vm.min_free_kbytes = 65536 net.core.rmem_default = 262144 net.core.rmem_max = 262144 net.core.wmem_default = 262144 net.core.wmem_max = 262144 net.core.netdev_max_backlog = 25000 net.ipv4.tcp_reor...
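The values above live in /etc/sysctl.conf; a quick sketch of re-applying and spot-checking them on a running NFS server (plain procps sysctl usage, nothing specific to this setup):

  # re-read /etc/sysctl.conf and apply everything in it
  sysctl -p
  # verify the writeback-related values actually in effect
  sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs vm.dirty_writeback_centisecs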
2011 Mar 11
3
Server locking up everyday around 3:30 AM
PJ wrote: > This may or may not be CentOS related, but am out of ideas at this point and wanted to bounce this off the list. > > I'm running a CentOS 5.5 server, running the latest kernel > 2.6.18-194.32.1.el5. > > Almost every day around 3:30 AM the server completely locks up and has to be power cycled before it will come back online. > (this means someone had to wake up
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
...d to queue depth, nr_requests and possibly VM params (the latter from https://bugzilla.redhat.com/show_bug.cgi?id=121434#c275). These are the default settings: /sys/block/sda/device/queue_depth = 254 /sys/block/sda/queue/nr_requests = 8192 /proc/sys/vm/dirty_expire_centisecs = 3000 /proc/sys/vm/dirty_ratio = 30 3Ware mentions elevator=deadline, blockdev --setra 16384 along with nr_requests=512 in their performance tuning doc - these alone seem to make no difference to the latency problem. Setting dirty_expire_centisecs = 1000 and dirty_ratio = 5 does indeed reduce the number of processes in '...
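The knobs discussed in that thread are all settable at runtime; a rough sketch of applying them to /dev/sda (device name and values are illustrative, not a recommendation):

  echo deadline > /sys/block/sda/queue/scheduler   # runtime equivalent of elevator=deadline
  blockdev --setra 16384 /dev/sda                  # read-ahead, per the 3Ware tuning doc
  echo 512 > /sys/block/sda/queue/nr_requests
  echo 1000 > /proc/sys/vm/dirty_expire_centisecs  # expire dirty pages after 10 s
  echo 5 > /proc/sys/vm/dirty_ratio                # the change that reduced the latency above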
2008 Dec 04
1
page cache keeps growing untill system runs out of memory on a MIPS platform
...(100MB) from a client on to the USB hard disk, I find that the page cache eats up almost about 100MB and occasionally the system runs out of memory. I even tried tweaking the /proc/sys/vm settings with the following values, but it did not help. /proc/sys/vm/dirty_background_ratio = 2 /proc/sys/vm/dirty_ratio = 5 /proc/sys/vm/dirty_expire_centisecs = 1000 /proc/sys/vm/vfs_cache_pressure = 10000 I also tried copying the huge file locally from one folder to another through the USB interface using dd oflag=direct flag (unbuffered write). But the page cache again ate away about 100MB RAM. Has anybody her...
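For reference, an unbuffered copy of the kind described above (dd with O_DIRECT so the data never lands in the page cache) looks roughly like this; the paths are placeholders:

  # copy without going through the page cache; bs must suit the device's alignment
  dd if=/path/to/bigfile of=/mnt/usb/bigfile bs=1M oflag=direct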
2009 Jan 27
1
paravirtualized vs HVM disk interference (85% vs 15%)
...you want a fair share of the disk transfer rate). What we are asking is if this is a consequence of the Xen design and a known behavior, and if there is a workaround to ameliorate the interference (the problem persists, to the same degree, even if both guests perform writes below dom0's dirty_ratio). Regards, Duilio J. Protti, Alejandro E. Paredes Intel Argentina Software Development Center _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
2013 Mar 02
0
Gluster-users Digest, Vol 59, Issue 15 - GlusterFS performance
...; > upgrade upgrading system very, very slow, freezing on "Unpacking > > replacement" and other io-related steps. > > If you don't have a fast connection to storage, the Linux VM will buffer write data in the kernel buffer cache until it runs out of memory for that (vm.dirty_ratio), then it will freeze any process that issues the writes. If your VM has a lot of memory relative to storage speed, this can result in very long delays. Try reducing Linux kernel vm.dirty_background_ratio to get writes going sooner and vm.dirty_ratio so that the freezes don't last as long....
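To put numbers on the "memory relative to storage speed" point (illustrative, not measured from this thread): with 8 GB of guest RAM and the default vm.dirty_ratio of 20, about 8 GB x 0.20 = 1.6 GB of dirty data can pile up before writers are stalled, and flushing that over a 20 MB/s path to storage takes on the order of 80 seconds, which is exactly the kind of freeze described above.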
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems come up with filesystem errors that took a manual 'fsck -y' after what should have been a clean reboot. This is particularly annoying on remote systems where I have to talk someone else through the recovery. Is there some time limit on the cache write with a 'reboot' (no options) command or is ext4 that
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
...ns) command or is ext4 that fragile? I'd say there's no limit on the amount of time the kernel waits until the blocks have been written to disk; it's driven by these parameters: vm.dirty_background_bytes = 0 vm.dirty_background_ratio = 10 vm.dirty_bytes = 0 vm.dirty_expire_centisecs = 3000 vm.dirty_ratio = 20 vm.dirty_writeback_centisecs = 500 i.e., if the data cached in RAM is older than 30s or larger than 10% of available RAM, the kernel will try to flush it to disk. Depending on how much data needs to be flushed at poweroff/reboot time, this could have a significant effect on the time taken. Regarding...
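Given those defaults, a rough way to make sure the cache is actually drained before issuing the reboot is to flush by hand and wait for the Dirty counter to fall (the threshold below is arbitrary, and on a busy box the counter may never reach zero):

  sync   # ask the kernel to flush all dirty pages now
  # wait until Dirty: in /proc/meminfo drops below ~4 MB
  while [ "$(awk '/^Dirty:/ {print $2}' /proc/meminfo)" -gt 4096 ]; do sleep 1; done
  reboot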
2007 Feb 13
1
RE: [PATCH][TOOLS] Reducing impact of domainsave/restore/dump on Dom0
...this is based on using fadvise64(DONTNEED) to throw > the page cache away once it has been written to disk -- with this in > place, memory usage does go up somewhat but then immediately drops > again > when the action is done and this change, in conjunction with setting > the > vm.dirty_ratio sysctl parameter, seems to give very good results. > Simon ----------------------------------- Reduce impact of saving/restoring/dumping large domains on Dom0 memory usage by means of fadvise64() to tell the OS to discard the cache pages used for the save/dump file. Signed-off-by: Simon Grah...
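A similar cache-discard effect can be had from userspace with reasonably recent GNU coreutils dd, whose nocache flag is (as far as the coreutils documentation describes it) implemented with posix_fadvise(DONTNEED); the output path here is just a placeholder:

  # write a large file and ask the kernel to drop its cached pages afterwards
  dd if=/dev/zero of=/var/tmp/dump.img bs=1M count=1024 oflag=nocache conv=fdatasync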
2013 Nov 07
0
GlusterFS with NFS client hang up some times
...ctl.conf: > vm.swappiness = 0 > vm.vfs_cache_pressure = 1000 > net.core.rmem_max = 4096000 > net.core.wmem_max = 4096000 > net.ipv4.neigh.default.gc_thresh2 = 2048 > net.ipv4.neigh.default.gc_thresh3 = 4096 > vm.dirty_background_ratio = 1 > vm.dirty_ratio = 16 I use only default config for GlusterFS (follow http://gluster.org/community/documentation/index.php/Getting_started_overview). After testing between NFS client and FUSE client, I choose NFS because the performance is much better. NFS mount options: svr385-1212.localdomain:/gv0 on /glusterf...
2015 Dec 24
0
systemd-sysctl not running on boot
...e settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file # # For more information, see sysctl.conf(5) and sysctl.d(5). net.ipv4.ip_forward = 0 kernel.panic = 20 kernel.sem = 250 65000 32 256 vm.swappiness = 10 net.ipv4.conf.all.log_martians = 1 kernel.dmesg_restrict = 1 vm.dirty_ratio = 15 net.ipv6.conf.default.disable_ipv6 = 1 net.ipv4.tcp_syncookies = 1 net.ipv6.conf.all.disable_ipv6 = 1 kernel.kptr_restrict = 1 [root at web-devel-local-1 ~]# systemctl status systemd-sysctl ● systemd-sysctl.service - Apply Kernel Variables Loaded: loaded (/usr/lib/systemd/system/systemd-s...
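A quick way to check whether the unit actually applied anything at boot, and to re-apply every sysctl fragment by hand (standard systemd and procps-ng commands):

  systemctl status systemd-sysctl      # did the unit run, and with what result?
  sysctl --system                      # re-apply /etc/sysctl.conf plus /etc/sysctl.d/*.conf
  sysctl vm.dirty_ratio vm.swappiness  # confirm the values the conf file is supposed to set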
2015 Dec 24
2
systemd-sysctl not running on boot
also in /etc/sysctl.d/ On Thu, Dec 24, 2015 at 8:58 AM, Gordon Messmer <gordon.messmer at gmail.com> wrote: > On 12/23/2015 05:08 AM, Ofer Hasson wrote: > >> By running "systemctl status systemd-sysctl" I also receive the same >> output, but a simple "cat /proc/sys/vm/swappiness" returns the default >> value, and not the one set by my conf file.
2011 Oct 05
1
Performance tuning questions for mail server
...conf.default.log_martians = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.conf.default.accept_redirects = 0 vm.vfs_cache_pressure = 35 vm.nr_hugepages = 512 net.ipv4.tcp_max_syn_backlog = 2048 fs.aio-max-nr = 1048576 vm.dirty_background_ratio = 3 vm.dirty_ratio = 40 After making changes, do you have any recommendations on which tools to use to monitor those changes and see how they perform? I have noatime set in fstab in the guest for the /var partition, where much of the SpamAssassin activity occurs. I've included below my libvirt xml config for the guest...
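On the monitoring question, a rough toolkit for watching how the dirty-page settings behave under load (the usual procps/sysstat tools, nothing specific to this mail server):

  watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'   # how much dirty data is pending
  iostat -x 5   # per-device utilisation and await (sysstat package)
  vmstat 5      # bi/bo and wa columns show writeback bursts and I/O wait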
2012 Oct 01
3
Tunning - cache write (database)
...y and recover aren't a priority for now, because this is part of a database bulk load ...after it finishes, integrity will be desirable (not an obligation, since this is a test environment). For now, performance is the main requirement... A plus: root@jdivm06:/proc/sys/fs# cat /proc/sys/vm/dirty_ratio 50 root@jdivm06:/proc/sys/fs# cat /proc/sys/vm/dirty_background_ratio 10 Thanks Cesar -- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
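For a bulk load where durability is explicitly not a concern, the usual sketch is to push those thresholds the other way so the kernel batches as much as possible before writing (example values only; this trades data safety for speed):

  echo 80 > /proc/sys/vm/dirty_ratio                  # let most of RAM hold dirty pages
  echo 50 > /proc/sys/vm/dirty_background_ratio       # delay background writeback
  echo 12000 > /proc/sys/vm/dirty_expire_centisecs    # keep dirty pages for up to 2 minutes
  echo 6000 > /proc/sys/vm/dirty_writeback_centisecs  # wake the flusher less often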
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...ost problem too? small chance probably because the first 10 - 20GB it works great!) - tried disabling SELinux, upgrading to newest kernels (elrepo ml and lt), played around with dirty_cache thingeys like /proc/sys/vm/dirty_writeback_centisecs /proc/sys/vm/dirty_expire_centisecs cat /proc/sys/vm/dirty_ratio, and the migration threshold of dmsetup, and other probably non-important stuff like vm.dirty_bytes - when in "slow state" the system's kworkers are excessively using IO (10 - 20 MB per kworker process). This seems to be the writeback process (CPY%Sync) because the cache wants to flush t...
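To confirm that dm-cache writeback is what stalls, the dirty/used block counters can be watched while the slowdown is happening; the lvs field names are assumed from lvm2's reporting options and the VG/LV names below are placeholders:

  # dm-cache status line includes used/total cache blocks and the dirty-block count
  dmsetup status vg0-cachedlv
  # the same counters via lvm reporting fields
  lvs -o lv_name,cache_dirty_blocks,cache_used_blocks,cache_total_blocks,copy_percent vg0/cachedlv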
2011 Jul 25
11
Btrfs slowdown
Hi, we are running a ceph cluster with btrfs as its base filesystem (kernel 3.0). At the beginning everything worked very well, but after a few days (2-3) things are getting very slow. When I look at the object store servers I see heavy disk-i/o on the btrfs filesystems (disk utilization is between 60% and 100%). I also did some tracing on the Ceph-Object-Store-Daemon, but I'm