search for: dirty_writeback_centisecs

Displaying 14 results from an estimated 14 matches for "dirty_writeback_centisecs".

2015 Aug 19
2
Optimum Block Size to use
...t; > Obviously the exact type of writes is important (lots of small writes written > and flushed vs fewer big unsynced writes), so you'd want to poke it with > iostat to see what kind of writes you're talking about. To address this we use (sysctl) vm.dirty_expire_centisecs and vm.dirty_writeback_centisecs; furthermore, check the fs alignment with the underlying disk ... -- LF
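The two knobs named above are expressed in centiseconds, which trips people up. A minimal read-only sketch for inspecting them and converting to seconds (assumes a Linux `/proc`; the helper function is illustrative, not a standard tool):

```shell
# Inspect the current writeback tuning; values are in centiseconds (1/100 s).
for knob in dirty_expire_centisecs dirty_writeback_centisecs; do
  f="/proc/sys/vm/$knob"
  [ -r "$f" ] && echo "$knob = $(cat "$f")"
done

# Hypothetical helper: convert a centisecond value to whole seconds.
centisecs_to_secs() { echo $(( $1 / 100 )); }
centisecs_to_secs 3000   # prints 30
```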
2012 Mar 16
1
NFS Hanging Under Heavy Load
...ackups. RPCNFSDCOUNT is set to 256. During backups from clients the system exhibits odd hangs that interfere with some of our sensitive system's backup windows. On the NFS server side we see the following in dmesg. Originally I thought it was related to dirty writeback cache, but I adjusted dirty_writeback_centisecs and am still seeing the issue. dmesg during the problem window: Mar 16 07:01:21 *****store01 kernel: __ratelimit: 11 callbacks suppressed Mar 16 07:01:21 *****store01 kernel: nfsd: page allocation failure. order:2, mode:0x20 Mar 16 07:01:21 *****store01 kernel: Pid: 6041, comm: nfsd Not tainted 2....
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems come up with filesystem errors that took a manual 'fsck -y' after what should have been a clean reboot. This is particularly annoying on remote systems where I have to talk someone else through the recovery. Is there some time limit on the cache write with a 'reboot' (no options) command or is ext4 that
2012 Apr 17
1
Help needed with NFS issue
...good). RAID controller logs are free of errors. NFS servers used dual bonded gigabit links in balance-alb mode. Turning off one interface in the bond made no difference. Relevant /etc/sysctl.conf parameters: vm.dirty_ratio = 50 vm.dirty_background_ratio = 1 vm.dirty_expire_centisecs = 1000 vm.dirty_writeback_centisecs = 100 vm.min_free_kbytes = 65536 net.core.rmem_default = 262144 net.core.rmem_max = 262144 net.core.wmem_default = 262144 net.core.wmem_max = 262144 net.core.netdev_max_backlog = 25000 net.ipv4.tcp_reordering = 127 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net....
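Settings like those quoted above are normally persisted in /etc/sysctl.conf and loaded with `sysctl -p`. A sketch of the same fragment, written to a temp file so it can be parsed and verified without root (the `get_knob` helper is illustrative, not part of sysctl):

```shell
# Hypothetical fragment mirroring the tuning quoted above; a temp file stands
# in for /etc/sysctl.conf so nothing on the system is touched.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
EOF

# `sysctl -p "$cfg"` would apply it for real (root required);
# here we only read a value back out of the file.
get_knob() { awk -F' = ' -v k="$1" '$1 == k { print $2 }' "$cfg"; }
get_knob vm.dirty_writeback_centisecs   # prints 100
```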
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
...t4 that fragile? I'd say there's no limit in the amount of time the kernel waits until the blocks have been written to disk; it's driven by these parameters: vm.dirty_background_bytes = 0 vm.dirty_background_ratio = 10 vm.dirty_bytes = 0 vm.dirty_expire_centisecs = 3000 vm.dirty_ratio = 20 vm.dirty_writeback_centisecs = 500 i.e., if the data cached in RAM is older than 30s or larger than 10% of available RAM, the kernel will try to flush it to disk. Depending on how much data needs to be flushed at poweroff/reboot time, this could have a significant effect on the time taken. Regarding systems with lots of RAM, I'v...
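The "older than 30s or larger than 10%" rule in that reply follows directly from the quoted defaults. A sketch of the arithmetic, using a hypothetical 8 GiB machine (the RAM figure is assumed for illustration only):

```shell
# The two background-writeback triggers implied by the defaults quoted above:
# age (dirty_expire_centisecs) and size (dirty_background_ratio of RAM).
total_ram_kb=8388608              # hypothetical 8 GiB machine
dirty_background_ratio=10         # quoted default
dirty_expire_centisecs=3000       # quoted default

threshold_kb=$(( total_ram_kb * dirty_background_ratio / 100 ))
expire_secs=$(( dirty_expire_centisecs / 100 ))
echo "flush when dirty data exceeds ${threshold_kb} kB or ages past ${expire_secs} s"
```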
2010 Dec 03
0
sysctl and gnome desktop
Hi all, I use CentOS 5.5 x86_64 on my laptop. I have a couple of entries in /etc/sysctl.conf according to the recommendations at http://www.lesswatts.org/projects/powertop/ When the Gnome Desktop starts, something resets them to defaults. For example 'vm.dirty_writeback_centisecs = 1500' resets back to 30. Does anybody have any idea how to work around this? Thanks. Andrej.
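One possible workaround (a sketch, not a confirmed fix for whatever Gnome component is doing the reset): re-assert the wanted value after the session starts, e.g. from a login script. The knob path is parameterised so the logic can be exercised against an ordinary file without root:

```shell
# Hypothetical helper: re-write a sysctl knob file only if it drifted.
# For the real knob (root required):
#   reassert /proc/sys/vm/dirty_writeback_centisecs 1500
reassert() {   # reassert <path> <wanted-value>
  [ -w "$1" ] || return 1                      # can't write: give up
  [ "$(cat "$1")" -ne "$2" ] && echo "$2" > "$1"
  return 0
}
```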
2015 Aug 19
4
Optimum Block Size to use
Hi All We use CentOS 6.6 for our application. I have profiled the application and found that we have a heavy requirement in terms of disk writes. On average, when our application operates at a certain load, I can observe that the disk write rate is around 2 Mbps. The block size set is 4k ******************* [root at localhost ~]# blockdev --getbsz /dev/sda3 4096
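`blockdev --getbsz` needs root and a concrete device node. A read-only cross-check sketch using GNU `stat`'s filesystem status, which works on any path (note this reports the filesystem's transfer block size, which may differ from the device's soft block size in some setups):

```shell
# Filesystem block size for the fs holding the current directory;
# on a typical ext4 setup this prints 4096, matching blockdev --getbsz.
stat -fc 'fs block size: %s' .
```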
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...kfs instead of LVM inside VM (so could be LVM inside VM on top of LVM on KVM host problem too? small chance probably because the first 10 - 20GB it works great!) - tried disabling SELinux, upgrading to newest kernels (elrepo ml and lt), played around with dirty cache tunables like /proc/sys/vm/dirty_writeback_centisecs /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio, and the migration threshold of dmsetup, and other probably non-important stuff like vm.dirty_bytes - when in "slow state" the system's kworkers are excessively using IO (10 - 20 MB per kworker process). This seems to be th...
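To see which kworker threads are actually doing the writing without iotop, per-process IO counters in /proc can be read directly. A sketch (needs permission to read other processes' /proc/<pid>/io; the `writes_of` parser is an illustrative helper):

```shell
# Hypothetical helper: extract cumulative write_bytes from a /proc/<pid>/io
# style file (lines look like "write_bytes: 4096").
writes_of() { awk '/^write_bytes/ { print $2; exit }' "$1"; }

# List visible kworker threads and how much they have written so far.
for pid in $(pgrep kworker 2>/dev/null); do
  [ -r "/proc/$pid/io" ] && echo "kworker $pid wrote $(writes_of /proc/$pid/io) bytes"
done
```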
2013 Jul 31
11
Is the checkpoint interval adjustable?
I believe 30 sec is the default for the checkpoint interval.  Is this adjustable?
2012 Aug 03
5
CentOS 6 : Tip for significantly increasing battery life / reducing power consumption (Thinkpad X220 Tablet)
Hello, I was not happy with the power consumption of CentOS 6 x86_64 on a new Lenovo Thinkpad x220 Tablet and I worked on reducing it. I just wanted to share with the list one of the changes which gave me the most significant improvement. As per http://www.williambrownstreet.net/blog/?p=387, add the following kernel arguments to the GRUB boot configuration: pcie_aspm=force
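A sketch of adding such an argument, not the author's exact procedure: CentOS 6 uses grub legacy, where arguments are appended to the `kernel` line in /boot/grub/grub.conf. The helper below only builds the edited line (the sample kernel line is hypothetical); it does not touch any file:

```shell
# Illustrative helper: append a kernel argument to a grub-legacy kernel line,
# skipping it if the argument is already present (idempotent).
append_arg() {
  case " $1 " in
    *" $2 "*) echo "$1" ;;      # already there: emit unchanged
    *)        echo "$1 $2" ;;   # otherwise append
  esac
}

append_arg 'kernel /vmlinuz-2.6.32 ro root=/dev/sda1 rhgb quiet' pcie_aspm=force
```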
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
...s no limit in the amount of time the kernel waits until > the blocks have been written to disk; driven by these parameters: > > vm.dirty_background_bytes = 0 > vm.dirty_background_ratio = 10 > vm.dirty_bytes = 0 > vm.dirty_expire_centisecs = 3000 > vm.dirty_ratio = 20 > vm.dirty_writeback_centisecs = 500 > > i.e., if the data cached in RAM is older than 30s or larger than 10% of > available RAM, the kernel will try to flush it to disk. Depending on how > much data needs to be flushed at poweroff/reboot time, this could have > a significant effect on the time taken. > > Regardin...
2013 Dec 30
2
oom situation
...DirectMap2M: 901120 kB sysctl: vm.oom_dump_tasks = 0 vm.oom_kill_allocating_task = 1 vm.panic_on_oom = 1 vm.admin_reserve_kbytes = 8192 vm.block_dump = 0 vm.dirty_background_bytes = 0 vm.dirty_background_ratio = 10 vm.dirty_bytes = 0 vm.dirty_expire_centisecs = 3000 vm.dirty_ratio = 20 vm.dirty_writeback_centisecs = 500 vm.drop_caches = 0 vm.highmem_is_dirtyable = 0 vm.hugepages_treat_as_movable = 0 vm.hugetlb_shm_group = 0 vm.laptop_mode = 0 vm.legacy_va_layout = 0 vm.lowmem_reserve_ratio = 256 32 32 vm.max_map_count = 65530 vm.min_free_kbytes = 3084 vm.mmap_min_addr = 4096 vm.nr_hugepages = 0 vm.nr_...
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
...inside VM (so could be LVM > inside VM on top of LVM on KVM host problem too? small chance probably > because the first 10 - 20GB it works great!) > > - tried disabling SELinux, upgrading to newest kernels (elrepo ml and lt), > played around with dirty cache tunables like /proc/sys/vm/dirty_writeback_centisecs > /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio, and > the migration threshold of dmsetup, and other probably non-important stuff > like vm.dirty_bytes > > - when in "slow state" the system's kworkers are excessively using IO (10 - > 20 MB per kworker proc...
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...M inside VM on top of LVM on KVM host > problem too? small chance probably because the first 10 - 20GB it > works great!) > > - tried disabling SELinux, upgrading to newest kernels (elrepo ml > and lt), played around with dirty cache tunables like > /proc/sys/vm/dirty_writeback_centisecs > /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio, > and the migration threshold of dmsetup, and other probably non-important > stuff like vm.dirty_bytes > > - when in "slow state" the system's kworkers are excessively using > IO (10 - 20...