Displaying 9 results from an estimated 9 matches for "dirty_byt".
2011 Sep 28
3
[PATCH] Btrfs: fix missing clear_extent_bit
We forget to clear inode's dirty_bytes and EXTENT_DIRTY at the end of write.
Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
---
fs/btrfs/file.c | 1 -
fs/btrfs/inode.c | 5 ++++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index e7872e4..3f3b4a8 100644
--- a/fs/btrf...
2015 Jan 06
2
reboot - is there a timeout on filesystem flush?
I've had a few systems with a lot of RAM and very busy filesystems
come up with filesystem errors that took a manual 'fsck -y' after what
should have been a clean reboot. This is particularly annoying on
remote systems where I have to talk someone else through the recovery.
Is there some time limit on the cache write with a 'reboot' (no
options) command or is ext4 that
2015 Jan 07
0
reboot - is there a timeout on filesystem flush?
...e cache write with a 'reboot' (no
> options) command or is ext4 that fragile?
I'd say there's no limit on the amount of time the kernel waits until
the blocks have been written to disk; it's driven by these parameters:
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
i.e., if data cached in RAM is older than 30s or larger than 10% of
available RAM, the kernel will try to flush it to disk. Depending on how
much data needs to be flushed at poweroff/reboot time, this could h...
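As a rough model of how those knobs combine, here is a sketch (not kernel code; it assumes the *_bytes settings take precedence over the corresponding *_ratio settings when non-zero, which matches the kernel's documented behaviour):

```python
# Sketch of the effective writeback thresholds implied by the sysctls above.
# Assumption: when vm.dirty_bytes / vm.dirty_background_bytes are non-zero,
# they override the corresponding *_ratio knobs.
def dirty_thresholds(total_ram_bytes,
                     dirty_bytes=0, dirty_ratio=20,
                     dirty_background_bytes=0, dirty_background_ratio=10):
    # Background writeback starts once this many bytes are dirty...
    background = dirty_background_bytes or total_ram_bytes * dirty_background_ratio // 100
    # ...and writers start being throttled once this many bytes are dirty.
    foreground = dirty_bytes or total_ram_bytes * dirty_ratio // 100
    return background, foreground

# With the defaults shown above, on a hypothetical 16 GiB machine:
bg, fg = dirty_thresholds(16 * 1024**3)
print(bg, fg)  # background ~1.6 GiB, foreground ~3.2 GiB
```

Together with vm.dirty_expire_centisecs = 3000 (30 seconds), this is where the "older than 30s or larger than 10% of available RAM" rule of thumb in the reply comes from.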
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...ing SELinux, upgrading to newest kernels (elrepo ml and
lt), played around with dirty-cache tunables like
/proc/sys/vm/dirty_writeback_centisecs,
/proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio, the
migration threshold of dmsetup, and other probably unimportant settings
like vm.dirty_bytes
- when in the "slow state", the system's kworkers are excessively using
IO (10-20 MB per kworker process). This seems to be the writeback
process (CPY%Sync) because the cache wants to flush to HDD. But the
strange thing is that after a good sync (0% left), the disk may become
slow again...
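One way to watch that writeback pressure directly is to read the kernel's own counters rather than guessing from kworker IO; a minimal sketch (assumes Linux with /proc mounted):

```python
# Sketch: report pending writeback from /proc/meminfo (Linux only).
def writeback_pressure():
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key] = int(rest.split()[0])  # values are in kB
    # Dirty = pages waiting to be written; Writeback = pages being written now.
    return fields["Dirty"], fields["Writeback"]

dirty_kb, writeback_kb = writeback_pressure()
print(f"Dirty: {dirty_kb} kB, Writeback: {writeback_kb} kB")
```

Sampling these two values while the cache is in its "slow state" would show whether the stalls coincide with a large dirty backlog being flushed.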
2015 Jan 07
5
reboot - is there a timeout on filesystem flush?
...no
>> options) command or is ext4 that fragile?
>
> I'd say there's no limit on the amount of time the kernel waits until
> the blocks have been written to disk; it's driven by these parameters:
>
> vm.dirty_background_bytes = 0
> vm.dirty_background_ratio = 10
> vm.dirty_bytes = 0
> vm.dirty_expire_centisecs = 3000
> vm.dirty_ratio = 20
> vm.dirty_writeback_centisecs = 500
>
> i.e., if data cached in RAM is older than 30s or larger than 10% of
> available RAM, the kernel will try to flush it to disk. Depending on how
> much data needs to be flushed at...
2013 Dec 30
2
oom situation
...s_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10232 kB
DirectMap2M: 901120 kB
sysctl:
vm.oom_dump_tasks = 0
vm.oom_kill_allocating_task = 1
vm.panic_on_oom = 1
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 0
vm.highmem_is_dirtyable = 0
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 32 32
vm.max_map_co...
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
...grading to newest kernels (elrepo ml and lt),
> played around with dirty-cache tunables like /proc/sys/vm/dirty_writeback_centisecs,
> /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio, the
> migration threshold of dmsetup, and other probably unimportant settings
> like vm.dirty_bytes
>
> - when in the "slow state", the system's kworkers are excessively using IO
> (10-20 MB per kworker process). This seems to be the writeback process
> (CPY%Sync) because the cache wants to flush to HDD. But the strange thing
> is that after a good sync (0% left), the disk m...
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
...(elrepo ml
> and lt), played around with dirty-cache tunables like
> /proc/sys/vm/dirty_writeback_centisecs,
> /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_ratio,
> the migration threshold of dmsetup, and other probably
> unimportant settings like vm.dirty_bytes
>
> - when in the "slow state", the system's kworkers are excessively
> using IO (10-20 MB per kworker process). This seems to be the
> writeback process (CPY%Sync) because the cache wants to flush to
> HDD. But the strange thing is that after a good sync (0% l...
2013 Jul 31
11
Is the checkpoint interval adjustable?
I believe 30 sec is the default for the checkpoint interval. Is this adjustable?
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
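(For reference: the interval asked about is btrfs's periodic transaction commit, which defaults to 30 seconds. On kernels whose btrfs supports the commit= mount option, it can be changed at mount time; the fstab fragment below is a hypothetical example, with a made-up device and mount point.)

```
# /etc/fstab fragment (hypothetical; assumes a kernel whose btrfs
# supports the commit= mount option, default interval 30s)
/dev/sdb1  /data  btrfs  defaults,commit=120  0  2
```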