Displaying 14 results from an estimated 14 matches for "dirty_background_ratio".
2008 Dec 04
1
page cache keeps growing until the system runs out of memory on a MIPS platform
...When I try to transfer huge files (above 100MB) from a client onto the USB hard disk, I find that the page cache eats up about 100MB and occasionally the system runs out of memory. I even tried tweaking the /proc/sys/vm settings with the following values, but it did not help.
/proc/sys/vm/dirty_background_ratio = 2
/proc/sys/vm/dirty_ratio = 5
/proc/sys/vm/dirty_expire_centisecs = 1000
/proc/sys/vm/vfs_cache_pressure = 10000
I also tried copying the huge file locally from one folder to another over the USB interface using dd with the oflag=direct flag (unbuffered writes). But the page cache again ate away ab...
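For reference, a minimal sketch of how those tunables and a direct-I/O copy can be applied at runtime (the file paths and block size are placeholders; the values are just the ones quoted above, not recommendations):

# same writeback thresholds as the /proc/sys/vm writes above
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_expire_centisecs=1000
# copy that bypasses the page cache entirely
dd if=/data/bigfile of=/mnt/usb/bigfile bs=1M oflag=direct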
2010 Feb 09
3
disk I/O problems with LSI Logic RAID controller
We're having a weird disk I/O problem on a 5.4 server connected to external SAS storage with an LSI Logic MegaRAID SAS 1078.
The server is used as a Samba file server.
Every time we try to copy a large file to the storage-based filesystem, disk utilization see-saws: it climbs to 100%, drops to several seconds of inactivity, then climbs back up to 100%, and so forth.
Here is a snippet from the iostat output
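For reference, that kind of extended per-device utilization trace is usually gathered with something like the following (the 2-second interval is arbitrary):

# extended per-device statistics, refreshed every 2 seconds
iostat -x -d 2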
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems
come up with filesystem errors that took a manual 'fsck -y' after what
should have been a clean reboot. This is particularly annoying on
remote systems where I have to talk someone else through the recovery.
Is there some time limit on the cache write with a 'reboot' (no
options) command or is ext4 that
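One way to see how much dirty data is still pending before issuing the reboot, and to push it out manually first (a sketch only, not a fix for the underlying problem):

# dirty and in-flight writeback data currently held in the page cache
grep -E '^(Dirty|Writeback):' /proc/meminfo
# ask the kernel to start writing it out now
sync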
2012 Apr 17
1
Help needed with NFS issue
...g
served are RAID-5 sets with write-back cache enabled (BBU is good). RAID
controller logs are free of errors.
NFS servers used dual bonded gigabit links in balance-alb mode. Turning
off one interface in the bond made no difference.
Relevant /etc/sysctl.conf parameters:
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem...
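For what it's worth, a quick sketch of how settings like these are typically loaded and spot-checked (the file path is the usual default; adjust as needed):

# reload the persistent settings
sysctl -p /etc/sysctl.conf
# verify the writeback knobs actually in effect
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs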
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
...Is there some time limit on the cache write with a 'reboot' (no
> options) command or is ext4 that fragile?
I'd say there's no limit on the amount of time the kernel waits until
the blocks have been written to disk; it's driven by these parameters:
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
i.e., if the data cached in RAM is older than 30 s or larger than 10% of
available RAM, the kernel will try to flush it to disk. Depending on how
much data needs to be flushed at poweroff/reboot...
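As a rough sketch, the same knobs can be tightened at runtime to get data onto disk sooner (the values here are only illustrative):

# consider cached data "old" after 5 s instead of 30 s
sysctl -w vm.dirty_expire_centisecs=500
# wake the writeback threads every 1 s instead of every 5 s
sysctl -w vm.dirty_writeback_centisecs=100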
2013 Nov 07
0
GlusterFS with NFS client hangs up sometimes
...b6f
- GlusterFS: 3.4.0-8.el6
- Sysctl.conf:
> vm.swappiness = 0
> vm.vfs_cache_pressure = 1000
> net.core.rmem_max = 4096000
> net.core.wmem_max = 4096000
> net.ipv4.neigh.default.gc_thresh2 = 2048
> net.ipv4.neigh.default.gc_thresh3 = 4096
> vm.dirty_background_ratio = 1
> vm.dirty_ratio = 16
I use only the default config for GlusterFS (following
http://gluster.org/community/documentation/index.php/Getting_started_overview).
After testing both the NFS client and the FUSE client, I chose NFS because
the performance is much better.
NFS mount options:
svr385-1212.l...
2013 Mar 02
0
Gluster-users Digest, Vol 59, Issue 15 - GlusterFS performance
...e Linux VM will buffer write data in the kernel buffer cache until it runs out of memory for that (vm.dirty_ratio), and then it will freeze any process issuing the writes. If your VM has a lot of memory relative to storage speed, this can result in very long delays. Try reducing the Linux kernel's vm.dirty_background_ratio to get writes going sooner, and vm.dirty_ratio so that the freezes don't last as long. You can even reduce the VM's block device queue depth. But most of all, make sure that gluster writes perform near a typical local block device speed.
> > Does glusterfs have any tuning options,...
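A minimal sketch of that kind of tuning inside the guest, assuming a virtio disk named vdX (the device name and the values are placeholders, not recommendations):

# start background writeback earlier and cap how much dirty data can pile up
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=10
# shrink the queue depth of the guest's block device
echo 32 > /sys/block/vdX/queue/nr_requests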
2011 Oct 05
1
Performance tuning questions for mail server
...all.log_martians = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
vm.vfs_cache_pressure = 35
vm.nr_hugepages = 512
net.ipv4.tcp_max_syn_backlog = 2048
fs.aio-max-nr = 1048576
vm.dirty_background_ratio = 3
vm.dirty_ratio = 40
After making changes, do you have any recommendations on which tools
to use to monitor those changes and see how they perform?
I have noatime set in fstab in the guest for the /var partition, where
much of the SpamAssassin activity occurs.
I've included below my libvirt xml c...
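For reference, a noatime entry in the guest's fstab might look something like this (the device, filesystem type and mount point are placeholders for whatever the guest actually uses):

# /etc/fstab: disable atime updates on the busy /var partition
/dev/vda3   /var   ext4   defaults,noatime   0 2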
2012 Oct 01
3
Tuning - cache write (database)
...his is part
of a database bulk load... after it finishes, integrity will be desirable
(not an obligation, since this is a test environment).
For now, performance is the main requirement...
A plus:
root@jdivm06:/proc/sys/fs# cat /proc/sys/vm/dirty_ratio
50
root@jdivm06:/proc/sys/fs# cat /proc/sys/vm/dirty_background_ratio
10
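Since integrity is not a hard requirement here, one direction would be to let far more dirty data accumulate before writeback kicks in (test environment only; the numbers below are arbitrary starting points):

# allow up to 80% of RAM to be dirty before writers are throttled
sysctl -w vm.dirty_ratio=80
# don't start background writeback until 50% is dirty
sysctl -w vm.dirty_background_ratio=50
# keep dirty data for up to 2 minutes before it is considered "old"
sysctl -w vm.dirty_expire_centisecs=12000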
Thanks
Cesar
2016 Apr 13
3
Bug#820862: xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds
...[ 1200.060576] [<ffffffff8113f352>] ? __generic_file_write_iter+0x132/0x340
-------------------------------------------------------------------------------------------------
* What exactly did you do (or not do) that was effective (or
ineffective)?
I changed the values of vm.dirty_background_ratio and vm.dirty_ratio, but it had no effect.
* What was the outcome of this action?
Nothing.
* What outcome did you expect instead?
I expected the VM not to freeze.
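For context, the 120-second threshold behind that warning and the absolute writeback limits can be inspected and tuned like this (a sketch only; the byte values are arbitrary examples, not a known fix for this bug):

# threshold behind the "blocked for more than 120 seconds" message
sysctl kernel.hung_task_timeout_secs
# cap dirty memory in absolute bytes instead of a percentage of RAM
sysctl -w vm.dirty_background_bytes=67108864   # 64 MiB
sysctl -w vm.dirty_bytes=268435456             # 256 MiB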
-- System Information:
Debian Release: 8.4
APT prefers stable-updates
APT policy: (500, 'stable-updates'...
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
...ache write with a 'reboot' (no
>> options) command or is ext4 that fragile?
>
> I'd say there's no limit on the amount of time the kernel waits until
> the blocks have been written to disk; it's driven by these parameters:
>
> vm.dirty_background_bytes = 0
> vm.dirty_background_ratio = 10
> vm.dirty_bytes = 0
> vm.dirty_expire_centisecs = 3000
> vm.dirty_ratio = 20
> vm.dirty_writeback_centisecs = 500
>
> i.e., if the data cached in RAM is older than 30 s or larger than 10% of
> available RAM, the kernel will try to flush it to disk. Depending on how
> much data...
2013 Dec 30
2
oom situation
...gePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10232 kB
DirectMap2M: 901120 kB
sysctl:
vm.oom_dump_tasks = 0
vm.oom_kill_allocating_task = 1
vm.panic_on_oom = 1
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 0
vm.highmem_is_dirtyable = 0
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 32...
2013 Jul 31
11
Is the checkpoint interval adjustable?
I believe 30 sec is the default for the checkpoint interval. Is this adjustable?
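If I remember correctly, newer kernels expose this as the commit= mount option for btrfs (treat that as an assumption and check the documentation for your kernel; /mnt/data is a placeholder mount point):

# raise the transaction commit interval from the 30 s default to 60 s
mount -o remount,commit=60 /mnt/data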
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, I'm not very impressed with my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. It seems bottlenecked on a single execution core somewhere, trying
> to facilitate reads/writes to the other bricks.
>
>
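One way to confirm a single-core bottleneck like that is to watch per-thread CPU usage of the brick process (glusterfsd is the usual brick daemon name; adjust if your setup differs):

# per-thread view of one brick daemon (pidof -s picks a single PID)
top -H -p "$(pidof -s glusterfsd)"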