Displaying 14 results from an estimated 14 matches for "dirty_expire_centisecs".
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
...ound makes me think that this may be related to queue
depth, nr_requests and possibly VM params (the latter from
https://bugzilla.redhat.com/show_bug.cgi?id=121434#c275). These are
the default settings:
/sys/block/sda/device/queue_depth = 254
/sys/block/sda/queue/nr_requests = 8192
/proc/sys/vm/dirty_expire_centisecs = 3000
/proc/sys/vm/dirty_ratio = 30
3Ware mentions elevator=deadline, blockdev --setra 16384 along with
nr_requests=512 in their performance tuning doc - these alone seem to
make no difference to the latency problem.
Setting dirty_expire_centisecs = 1000 and dirty_ratio = 5 does indeed
reduce...
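The tuning the poster reports helping can be sketched as a small script. The values are the thread's, not universal defaults, and `set_knob` is a hypothetical helper that prints the setting and only applies it when run as root:

```shell
#!/bin/sh
# Hypothetical helper: print the knob, and apply it only when running as root.
set_knob() {
    echo "vm.$1 = $2"
    if [ "$(id -u)" -eq 0 ]; then
        sysctl -w "vm.$1=$2" >/dev/null 2>&1
    fi
}

set_knob dirty_expire_centisecs 1000   # expire dirty pages older than 10 s
set_knob dirty_ratio 5                 # writers block at 5% of RAM dirty
```

Echoing the same values into /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio is equivalent; to persist across reboots the settings belong in /etc/sysctl.conf.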
2008 Dec 04
1
page cache keeps growing until system runs out of memory on a MIPS platform
...e USB hard disk, I find that the page cache eats up almost about 100MB and occasionally the system runs out of memory. I even tried tweaking the /proc/sys/vm settings with the following values, but it did not help.
/proc/sys/vm/dirty_background_ratio = 2
/proc/sys/vm/dirty_ratio = 5
/proc/sys/vm/dirty_expire_centisecs = 1000
/proc/sys/vm/vfs_cache_pressure = 10000
I also tried copying the huge file locally from one folder to another through the USB interface using dd oflag=direct flag (unbuffered write). But the page cache again ate away about 100MB RAM. Has anybody here seen the same problem? Is there a possi...
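One way to quantify the growth the poster describes is to sample the Cached field of /proc/meminfo around the copy; this is a minimal sketch, and the dd paths are placeholders. Note that even with oflag=direct the read side of the copy still populates the page cache unless iflag=direct is given as well, which may explain the behaviour seen here:

```shell
#!/bin/sh
# Read the current page-cache size (kB) from /proc/meminfo.
cached_kb() { awk '/^Cached:/ {print $2}' /proc/meminfo; }

before=$(cached_kb)
# The copy under test would go here, e.g. (placeholder paths):
#   dd if=/mnt/usb/big.bin of=/data/big.bin bs=1M iflag=direct oflag=direct
after=$(cached_kb)
echo "page cache grew by $(( (after - before) / 1024 )) MB"
```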
2015 Aug 19
2
Optimum Block Size to use
...ptimise
> everything.
>
> Obviously the exact type of writes is important (lots of small writes written
> and flushed vs fewer big unsynced writes), so you'd want to poke it with
> iostat to see what kind of writes you're talking about.
to address this we use (sysctl)
vm.dirty_expire_centisecs
vm.dirty_writeback_centisecs
furthermore check the fs alignment with
the underlying disk ...
--
LF
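The alignment check mentioned above reduces to arithmetic: a partition whose start (in 512-byte sectors, as reported by `fdisk -l` or /sys/block/sda/sda1/start) is not a multiple of the device's physical block size will straddle physical sectors. A minimal sketch, with the sample sector numbers as assumptions:

```shell
#!/bin/sh
# Report whether a partition start (in 512-byte sectors) is aligned to the
# physical block size (bytes, from /sys/block/<dev>/queue/physical_block_size).
check_align() {
    if [ $(( $1 * 512 % $2 )) -eq 0 ]; then
        echo aligned
    else
        echo misaligned
    fi
}

check_align 2048 4096   # aligned: starts on a 1 MiB boundary
check_align 63 4096     # misaligned: the old DOS-era default start sector
```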
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems
come up with filesystem errors that took a manual 'fsck -y' after what
should have been a clean reboot. This is particularly annoying on
remote systems where I have to talk someone else through the recovery.
Is there some time limit on the cache write with a 'reboot' (no
options) command or is ext4 that
2012 Apr 17
1
Help needed with NFS issue
...write-back cache enabled (BBU is good). RAID
controller logs are free of errors.
NFS servers used dual bonded gigabit links in balance-alb mode. Turning
off one interface in the bond made no difference.
Relevant /etc/sysctl.conf parameters:
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4...
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
...a 'reboot' (no
> options) command or is ext4 that fragile?
I'd say there's no limit on the amount of time the kernel waits until
the blocks have been written to disk; it is driven by these parameters:
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
i.e., if the data cached in RAM is older than 30 s or larger than 10% of
available RAM, the kernel will try to flush it to disk. Depending on how
much data needs to be flushed at poweroff/reboot time, this could have
a significant effect on the...
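The two thresholds quoted above convert to human units with simple arithmetic; here is a sketch (the 8 GiB machine size is just an assumed example):

```shell
#!/bin/sh
# dirty_expire_centisecs is in hundredths of a second.
centisecs_to_sec() { echo $(( $1 / 100 )); }
# dirty_background_ratio is a percentage; given a total in MB,
# compute the point where background writeback kicks in.
ratio_to_mb() { echo $(( $2 * $1 / 100 )); }

centisecs_to_sec 3000    # 30  -> pages older than 30 s are expired
ratio_to_mb 10 8192      # 819 -> ~819 MB dirty starts background writeback on 8 GiB
```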
2008 Jan 09
0
XEN server stalling .. problem spotted - solution required
...instances, typically 5Gb with 1Gb swap.
Dual / Dual Core 2.8G Xeon (4 in total) with 6Gb RAM.
Twin 500Gb SATA HDD (software RAID1)
To my way of thinking (!) when it runs out of memory, it should force a sync (or similar) and it's not, it's just sitting there. If I wait for the dirty_expire_centisecs timer to expire, I may get some life back; some instances will survive and some will have hung.
Here's a working "meminfo";
MemTotal: 860160 kB
MemFree: 22340 kB
Buffers: 49372 kB
Cached: 498416 kB
SwapCached: 15096 kB
Active: 92452 kB
Inactive: 491840 kB
SwapTotal: 41...
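In a stall like this, the meminfo fields that matter most are Dirty and Writeback, which the excerpt above truncates; a quick way to watch them:

```shell
#!/bin/sh
# Show how much data is dirty and how much is actively under writeback.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```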
2015 Aug 19
4
Optimum Block Size to use
Hi All
We use CentOS 6.6 for our application. I have profiled the application
and find that we have a heavy requirement for disk writes. On
average, when the application operates at a certain load, I observe
that the disk write rate is around 2 Mbps.
The block size set is 4k
*******************
[root at localhost ~]# blockdev --getbsz /dev/sda3
4096
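For the "poke it with iostat" suggestion earlier in the thread, a similar average-write-size figure can also be derived from the cumulative counters in /proc/diskstats (field 8 is writes completed, field 10 is sectors written, 512 bytes each); a sketch:

```shell
#!/bin/sh
# Average KB per completed write, from cumulative counters.
avg_write_kb() {   # args: writes_completed sectors_written
    if [ "$1" -gt 0 ]; then
        echo $(( $2 * 512 / $1 / 1024 ))
    else
        echo 0
    fi
}

avg_write_kb 1000 8000   # 4 -> 4 KB average, matching the 4k block size
# Live view per device:
awk '$8 > 0 { printf "%-10s %d KB/write\n", $3, $10*512/$8/1024 }' /proc/diskstats
```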
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...be LVM inside VM on top of LVM on KVM host problem too? small
chance probably because the first 10 - 20GB it works great!)
- tried disabling Selinux, upgrading to newest kernels (elrepo ml and
lt), played around with dirty_cache thingeys like
proc/sys/vm/dirty_writeback_centisecs
/proc/sys/vm/dirty_expire_centisecs, /proc/sys/vm/dirty_ratio, and the
migration threshold of dmsetup, and other probably non-important stuff
like vm.dirty_bytes
- when in "slow state" the system's kworkers are excessively using IO (10
- 20 MB per kworker process). This seems to be the writeback process
(CPY%Sync) becau...
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
...mand or is ext4 that fragile?
>
> I'd say there's no limit on the amount of time the kernel waits until
> the blocks have been written to disk; it is driven by these parameters:
>
> vm.dirty_background_bytes = 0
> vm.dirty_background_ratio = 10
> vm.dirty_bytes = 0
> vm.dirty_expire_centisecs = 3000
> vm.dirty_ratio = 20
> vm.dirty_writeback_centisecs = 500
>
> i.e., if the data cached in RAM is older than 30 s or larger than 10% of
> available RAM, the kernel will try to flush it to disk. Depending on how
> much data needs to be flushed at poweroff/reboot time, this could hav...
2013 Dec 30
2
oom situation
...gepagesize: 2048 kB
DirectMap4k: 10232 kB
DirectMap2M: 901120 kB
sysctl:
vm.oom_dump_tasks = 0
vm.oom_kill_allocating_task = 1
vm.panic_on_oom = 1
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 0
vm.highmem_is_dirtyable = 0
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 32 32
vm.max_map_count = 65530
vm.min_free_kbytes =...
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
...top of LVM on KVM host problem too? small chance probably
> because the first 10 - 20GB it works great!)
>
> - tried disabling Selinux, upgrading to newest kernels (elrepo ml and lt),
> played around with dirty_cache thingeys like proc/sys/vm/dirty_writeback_centisecs
> /proc/sys/vm/dirty_expire_centisecs, /proc/sys/vm/dirty_ratio, and the
> migration threshold of dmsetup, and other probably non-important stuff
> like vm.dirty_bytes
>
> - when in "slow state" the system's kworkers are excessively using IO (10 -
> 20 MB per kworker process). This seems to be the writeback proc...
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...roblem too? small chance probably because the first 10 - 20GB it
> works great!)
>
> - tried disabling Selinux, upgrading to newest kernels (elrepo ml
> and lt), played around with dirty_cache thingeys like
> proc/sys/vm/dirty_writeback_centisecs
> /proc/sys/vm/dirty_expire_centisecs, /proc/sys/vm/dirty_ratio,
> and the migration threshold of dmsetup, and other probably
> non-important stuff like vm.dirty_bytes
>
> - when in "slow state" the system's kworkers are excessively using
> IO (10 - 20 MB per kworker process). This seems to be th...
2013 Jul 31
11
Is the checkpoint interval adjustable?
I believe 30 sec is the default for the checkpoint interval. Is this adjustable?