Displaying 20 results from an estimated 127 matches for "drop_cache".
2015 Jun 01
1
GlusterFS 3.7 - slow/poor performances
...low performances.
For my benchmarks, as you can read below, I run a set of operations (untar, du, find, tar, rm) on the Linux kernel sources, dropping caches around each step, on distributed, replicated, distributed-replicated, and single (single-brick) volumes, as well as on the native FS of one brick.
# time (echo 3 > /proc/sys/vm/drop_caches; tar xJf ~/linux-4.1-rc5.tar.xz; sync; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; du -sh linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; find linux-4.1-rc5/|wc -l; echo 3 > /proc/sys/vm/drop_caches)
# t...
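The excerpt cuts off before the last two steps; a hedged sketch of how the tar and rm passes could look, following the same pattern (the archive name and destination below are assumptions, not the original commands):
# time (echo 3 > /proc/sys/vm/drop_caches; tar cJf /tmp/linux-4.1-rc5-copy.tar.xz linux-4.1-rc5/; sync; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; rm -rf linux-4.1-rc5/; sync; echo 3 > /proc/sys/vm/drop_caches)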
2015 Jun 02
2
GlusterFS 3.7 - slow/poor performances
...benchmarks, as you can read below, I run a set of operations (untar, du,
> find, tar, rm) on the Linux kernel sources, dropping caches around each step, on
> distributed, replicated, distributed-replicated, and single (single-brick)
> volumes, as well as on the native FS of one brick.
>
> # time (echo 3 > /proc/sys/vm/drop_caches; tar xJf
> ~/linux-4.1-rc5.tar.xz; sync; echo 3 > /proc/sys/vm/drop_caches)
> # time (echo 3 > /proc/sys/vm/drop_caches; du -sh linux-4.1-rc5/; echo
> 3 > /proc/sys/vm/drop_caches)
> # time (echo 3 > /proc/sys/vm/drop_caches; find linux-4.1-rc5/|wc -l;
> echo 3 > /...
2019 Oct 06
0
VIRTIO_BALLOON_F_FREE_PAGE_HINT
...ge cache.
One solution is to move the page cache to the hypervisor, e.g., using
emulated NVDIMMs or virtio-pmem.
> Would it be possible to add a mechanism that explicitly causes page
> cache to shrink without requiring the system to be under memory pressure?
>
We do have a sysctl "drop_caches" which calls
iterate_supers(drop_pagecache_sb, NULL) and drop_slab().
doc/Documentation/sysctl/vm.txt:
==============================================================
drop_caches
Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries an...
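For reference, the three documented values can be exercised like this; only clean caches are dropped, so a sync first makes the effect more complete:
# sync; echo 1 > /proc/sys/vm/drop_caches    (free the page cache)
# sync; echo 2 > /proc/sys/vm/drop_caches    (free reclaimable slab objects: dentries and inodes)
# sync; echo 3 > /proc/sys/vm/drop_caches    (free both)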
2013 Dec 02
2
latest sources don't include "drop_cache" option
Was there some reason that patch got dropped?
Otherwise rsync eats up all the buffer memory.
Note -- I tried direct I/O -- it didn't work due to alignment
issues -- buffers have to be aligned to sectors.
The kernel, if I remember correctly, has been on-again/off-again
about requiring alignment for direct I/O -- because most of the drivers
and devices require it for direct I/O to work, at.
"dd"
2010 Dec 13
3
Slow I/O on ocfs2 file system
Hello,
I have found that ocfs2 is very slow when doing I/O operations without
cache. See a simple test:
ng-vvv1:~# dd if=/data/verejna/dd-1G bs=1k | dd of=/dev/null
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 395.183 s, 2.7 MB/s
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 395.184 s, 2.7 MB/s
The underlying block device is quite
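One way to separate cold-cache from warm-cache numbers (and to rule out the per-call overhead of bs=1k) is a sketch along these lines, reusing the same test file:
# sync; echo 3 > /proc/sys/vm/drop_caches
# dd if=/data/verejna/dd-1G of=/dev/null bs=1M      (cold cache)
# dd if=/data/verejna/dd-1G of=/dev/null bs=1M      (warm cache, same command repeated)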
2019 Oct 04
4
VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Fri, Oct 04, 2019 at 10:06:03AM +0200, David Hildenbrand wrote:
> On 04.10.19 01:15, Tyler Sanderson wrote:
> > I was mistaken, the problem with overcommit accounting is not fixed by
> > the change to shrinker interface.
> > This means that large allocations are stopped even if they could succeed
> > by deflating the balloon.
>
> Please note that some people
2020 Feb 06
2
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Thursday, February 6, 2020 5:10 PM, David Hildenbrand wrote:
> so dropping caches (echo 3 > /proc/sys/vm/drop_caches) will no longer
> deflate the balloon when conservative_shrinker=true?
>
Should be. Need Tyler's help to test it.
Best,
Wei
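A hedged way to try this from inside the guest, assuming the RFC exposes conservative_shrinker as a writable module parameter under the standard sysfs location (both the path and runtime writability are assumptions about the patch):
# echo Y > /sys/module/virtio_balloon/parameters/conservative_shrinker
# sync; echo 3 > /proc/sys/vm/drop_caches
# grep MemFree /proc/meminfo     (MemFree should no longer jump up if the balloon stays inflated)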
2013 Sep 11
1
Possible memory leak ?
...lusterfs-3.3.1-1.el6.x86_64.rpm rpms.
I am seeing the Committed_AS memory continually increasing and the
processes using the memory are glusterfsd instances.
See http://imgur.com/K3dalTW for a graph.
Both nodes are exhibiting the same behaviour. I have tried the suggested
echo 2 > /proc/sys/vm/drop_caches
but it made no difference. Is there a known issue with 3.3.1?
Thanks
John
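To narrow down whether the growth really comes from the glusterfsd processes rather than from reclaimable cache, one option is to track the counters over time, for example:
# grep Committed_AS /proc/meminfo
# ps -o pid,rss,vsz,cmd -C glusterfsd
# sync; echo 3 > /proc/sys/vm/drop_caches    (if RSS keeps climbing afterwards, it is not page cache)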
2006 Nov 21
2
Memory leak in ocfs2/dlm?
...cfs2_initialize_super+0x55e/0xd7f [ocfs2]
size-512: 26439 ocfs2_dentry_attach_lock+0x2d1/0x423 [ocfs2]
ocfs2_inode_cache: 26450 ocfs2_alloc_inode+0x13/0x29 [ocfs2]
size-512: 52879 dlm_new_lockres+0x22/0x189 [ocfs2_dlm]
# Clear caches
[root@lnxp-1038:/mail/store/backend1]$ echo 3 > /proc/sys/vm/drop_caches
[root@lnxp-1038:/mail/store/backend1]$ echo 0 > /proc/sys/vm/drop_caches
[root@lnxp-1038:/backend1]$ cat /proc/slab_allocators | egrep '(size.512|ocfs2_inode_cache)' | grep ocfs | sort -k 2 -n
size-512: 1 o2hb_heartbeat_group_make_item+0x1b/0x79 [ocfs2_nodemanager]
size-512: 1 o2hb_m...
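A sketch of turning this into a before/after comparison (the output file names are hypothetical):
# cat /proc/slab_allocators | egrep '(size.512|ocfs2_inode_cache)' | grep ocfs | sort -k 2 -n > /tmp/slab-before
# sync; echo 3 > /proc/sys/vm/drop_caches
# cat /proc/slab_allocators | egrep '(size.512|ocfs2_inode_cache)' | grep ocfs | sort -k 2 -n > /tmp/slab-after
# diff /tmp/slab-before /tmp/slab-after      (allocations that survive the cache drop are leak candidates)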
2009 Jan 15
5
[PATCH 0/3] ocfs2: Inode Allocation Strategy Improvement.v2
...eat improvement with the second "ls -lR".
echo 'y'|mkfs.ocfs2 -b 4K -C 4K -M local /dev/sda11
mount -t ocfs2 /dev/sda11 /mnt/ocfs2/
time tar jxvf /home/taoma/linux-2.6.28.tar.bz2 -C /mnt/ocfs2/ 1>/dev/null
real 0m20.548s 0m20.106s
umount /mnt/ocfs2/
echo 2 > /proc/sys/vm/drop_caches
mount -t ocfs2 /dev/sda11 /mnt/ocfs2/
time ls -lR /mnt/ocfs2/ 1>/dev/null
real 0m13.965s 0m13.766s
umount /mnt/ocfs2/
echo 2 > /proc/sys/vm/drop_caches
mount -t ocfs2 /dev/sda11 /mnt/ocfs2/
time rm /mnt/ocfs2/linux-2.6.28/ -rf
real 0m13.198s 0m13.091s
umount /mnt/ocfs2/
echo 2 > /proc...
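Since the unmount / drop-caches / remount step repeats between measurements, a small helper keeps the runs consistent; a sketch assuming the same device and mount point as above:
cold_remount() {
    umount /mnt/ocfs2/
    sync; echo 2 > /proc/sys/vm/drop_caches
    mount -t ocfs2 /dev/sda11 /mnt/ocfs2/
}
# cold_remount; time ls -lR /mnt/ocfs2/ 1>/dev/null
# cold_remount; time rm -rf /mnt/ocfs2/linux-2.6.28/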
2020 Feb 06
1
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Thursday, February 6, 2020 5:32 PM, David Hildenbrand wrote:
>
> If the page cache is empty, a drop_slab() will deflate the whole balloon if I
> am not wrong.
>
> Especially, an echo 3 > /proc/sys/vm/drop_caches
>
> will first drop the page cache and then drop_slab()
Then that's a problem with "echo 3 > /proc/sys/vm/drop_caches" itself: it invokes the other shrinkers as well, so if that is considered an issue it needs to be tweaked in the mm code.
Best,
Wei
2014 Nov 24
1
Re: [PATCH 3/3] New APIs: bmap-file, bmap-device, bmap.
...char *device)
> +{
> + return bmap_prepare (device, device);
> +}
> +
> +static char buffer[BUFSIZ];
> +
> +int
> +do_bmap (void)
> +{
> + size_t n;
> + ssize_t r;
> + struct dirent *d;
> +
> + /* Drop caches before starting the read. */
> + if (do_drop_caches (3) == -1)
> + return -1;
> +
> + if (fd >= 0) {
> + n = statbuf.st_size;
> +
> + while (n > 0) {
> + r = read (fd, buffer, n > BUFSIZ ? BUFSIZ : n);
> + if (r == -1) {
> + reply_with_perror ("read");
> + close (fd)...
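Outside of the daemon, the same drop-then-read pattern can be reproduced from a shell to check that reads really hit the device (the file path below is hypothetical):
# sync; echo 3 > /proc/sys/vm/drop_caches
# dd if=/path/to/file of=/dev/null bs=1M    (with the cache dropped, every block is read from the device)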
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
...m overhead for applications
with high memory demands. We have seen a 50% slowdown or worse for
applications. Even High Performance Linpack, which has no file I/O whatsoever,
is affected. The only remedy seems to be to empty the buffer cache
by running "echo 3 > /proc/sys/vm/drop_caches".
Any hints on how to improve the situation are greatly appreciated.
System setup:
Client: Dual socket Sandy Bridge, with 32GB ram and infiniband connection to
lustre server. CentOS 6.4, with kernel 2.6.32-358.11.1.el6.x86_64 and lustre
v2.1.6 rpms downloaded from whamcloud download site....
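Besides dropping the cache before each job, capping the Lustre client page cache may help, if the installed client exposes the llite tunable (treat the parameter name as an assumption for this Lustre version):
# sync; echo 3 > /proc/sys/vm/drop_caches     (before launching the memory-hungry job)
# lctl get_param llite.*.max_cached_mb
# lctl set_param llite.*.max_cached_mb=8192   (limit the client page cache instead of dropping it)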
2020 Feb 10
2
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Monday, February 10, 2020 11:57 AM, Tetsuo Handa wrote:
> Then, "node-A's NR_FILE_PAGES is already 0 and node-B's NR_FILE_PAGES is
> not 0, but allocation request which triggered this shrinker wants to allocate
> from only node-A"
> would be confused by this change, for the pagecache pages for allocating
> thread's interested node are already depleted but
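Per-node page-cache occupancy, which is what NR_FILE_PAGES tracks, shows up as FilePages in the per-node meminfo files and can be inspected from userspace, e.g.:
# grep FilePages /sys/devices/system/node/node*/meminfo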
2010 Apr 15
4
new ocfs2 release?
Hi ocfs2 developers,
is there any news about the schedule for a new ocfs2 release that solves the
current bugs/limitations? I can see a 1.4.7 release tagged here:
http://oss.oracle.com/git/?p=ocfs2-1.4.git;a=summary
Is there a planned release date?
In my environment (about 5,000,000 files with 300,000 new files/deletions per day) I
see load of up to 1000 and I/O almost blocked for some hours of the
2009 Jan 27
9
rsync compression (-z) and timestamp
Hi @all!
Sorry about the many questions, but after searching and reading tons of different web sites, I didn't find exactly what I am looking for.
So, I know that with the -z option rsync compresses the files with gzip, then the files are transferred and uncompressed on the target machine.
I want to know: is there a way to see how big the compressed file is that rsync generates
2007 Dec 11
6
Where does xen cache domUs' grub.conf?
Hi all, I'm a newbie on this list. I'm using xen 3.0.3 on CentOS 5.0.
I installed a new flavor of kernel in a domU via a standard RPM, which
modified my grub.conf by adding a new entry to boot the new kernel.
When rebooting the domU via pygrub, I didn't see the new grub entry. I
tried to restart xend, with no effect.
By googling a little, I understood that kernel