similar to: GlusterFS 3.7 - slow/poor performances

Displaying 16 results from an estimated 900 matches similar to: "GlusterFS 3.7 - slow/poor performances"

2015 Jun 02
2
GlusterFS 3.7 - slow/poor performances
Hi Geoffrey, since you are saying it happens on all types of volumes, let's do the following: 1) Create a dist-repl volume 2) Set the options etc. you need 3) Enable gluster volume profiling using "gluster volume profile <volname> start" 4) Run the workload 5) Give the output of "gluster volume profile <volname> info" Repeat the steps above on new and old
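For reference, a minimal sketch of that profiling sequence (volume name, brick hosts, and paths below are placeholders, not from the thread):

    # gluster volume create testvol replica 2 srv1:/bricks/b1 srv2:/bricks/b1 srv1:/bricks/b2 srv2:/bricks/b2
    # gluster volume start testvol
    # gluster volume profile testvol start
    # ... run the workload against the mounted volume ...
    # gluster volume profile testvol info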
2006 Nov 21
2
Memory leak in ocfs2/dlm?
Hi! It seems we're facing a memory leak here. This is vanilla 2.6.19-rc6 on an x86_64 box with 4GB RAM. A simple `ls -Rn' on a filesystem with lots of files makes the box leak so much RAM that the OOM killer starts to kick in. With slab allocation debugging turned on, we see this: # mount; ls -Rn; wait some seconds; Ctrl-C [root@lnxp-1038:/backend1]$ cat /proc/slab_allocators | egrep
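A hedged sketch of that kind of inspection (the egrep pattern is cut off above; filtering on the ocfs2/dlm caches is a guess based on the subject, and /proc/slab_allocators only exists with slab leak debugging enabled):

    # cat /proc/slab_allocators | egrep 'ocfs2|dlm'
    # grep -i dlm /proc/slabinfo    # per-cache object counts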
2020 Feb 06
2
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Thursday, February 6, 2020 5:10 PM, David Hildenbrand wrote: > so dropping caches (echo 3 > /proc/sys/vm/drop_caches) will no longer > deflate the balloon when conservative_shrinker=true? Should be. Need Tyler's help to test it. Best, Wei
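If the RFC's parameter is exposed the usual way for module parameters (an assumption on my part, not confirmed in the excerpt), the behaviour could be toggled and checked like this:

    # echo 1 > /sys/module/virtio_balloon/parameters/conservative_shrinker
    # echo 3 > /proc/sys/vm/drop_caches    # should now leave the balloon inflated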
2013 Sep 11
1
Possible memory leak ?
Hi, I am using Gluster 3.3.1 on CentOS 6, installed from the glusterfs-3.3.1-1.el6.x86_64.rpm packages. I am seeing the Committed_AS memory continually increasing, and the processes using the memory are glusterfsd instances; see http://imgur.com/K3dalTW for a graph. Both nodes are exhibiting the same behaviour. I have tried the suggested echo 2 > /proc/sys/vm/drop_caches but it made no
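Two standard ways to watch that growth (generic /proc and ps usage, not from the original post):

    # grep Committed_AS /proc/meminfo
    # ps -C glusterfsd -o pid,rss,vsz,cmd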
2020 Feb 06
1
[PATCH RFC] virtio_balloon: conservative balloon page shrinking
On Thursday, February 6, 2020 5:32 PM, David Hildenbrand wrote: > > If the page cache is empty, a drop_slab() will deflate the whole balloon if I > am not wrong. > > Especially, an echo 3 > /proc/sys/vm/drop_caches > > will first drop the page cache and then drop_slab() Then that's a problem of "echo 3 > /proc/sys/vm/drop_caches" itself. It invokes
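For context, the three documented modes of that sysctl (standard kernel semantics):

    # echo 1 > /proc/sys/vm/drop_caches    # drop the page cache only
    # echo 2 > /proc/sys/vm/drop_caches    # drop reclaimable slab (dentries, inodes)
    # echo 3 > /proc/sys/vm/drop_caches    # both of the above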
2009 Jan 15
5
[PATCH 0/3] ocfs2: Inode Allocation Strategy Improvement.v2
Changelog from V1 to V2: 1. Modified some code according to Mark's advice. 2. Attached some test statistics in the commit log of patch 3 and in this e-mail as well; see below. Hi all, In ocfs2, when we create a fresh file system and create inodes in it, they are contiguous and good for readdir+stat. But if we delete all the inodes and create them again, the new inodes get spread out, and that
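A minimal way to reproduce the readdir+stat pattern the series targets (generic commands with a placeholder mount point, not the author's actual benchmark):

    # echo 3 > /proc/sys/vm/drop_caches     # start from a cold cache
    # time ls -lR /mnt/ocfs2 > /dev/null    # readdir plus a stat of every inode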
2020 Feb 14
2
[PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM
On Wed 05-02-20 17:34:02, David Hildenbrand wrote: > Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker") > changed the behavior when deflation happens automatically. Instead of > deflating when called by the OOM handler, the shrinker is used. > > However, the balloon is not simply some slab cache that should be > shrunk when under memory
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated by traversing a Lustre file system causes significant system overhead for applications with high memory demands. We have seen a 50% slowdown or worse for applications. Even High-Performance Linpack, which has no file I/O whatsoever, is affected. The only remedy seems to be to empty the buffer cache from memory by running
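The remedy the excerpt cuts off is presumably something along these lines (hedged; the exact command is not shown):

    # sync
    # echo 1 > /proc/sys/vm/drop_caches    # empty the page/buffer cache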
2014 Jun 04
1
limit samba page cache in linux
Hi, I am using Linux kernel 3.10.12 (MIPS) on my embedded router box with Samba server version 3.0.24. Every time I transfer files from a hard disk connected to the router to my Windows PC, all of the router's RAM is eaten up in the form of page cache (though it can be reclaimed by the kernel in case of need). The problem is I get a lot of page allocation failures from some kernel modules, since page reclaim isn't
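Mainline has no knob that caps the page cache outright; one common mitigation for atomic allocation failures (my suggestion, not from the thread) is to enlarge the kernel's emergency reserve:

    # sysctl -w vm.min_free_kbytes=16384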
2019 Oct 06
0
VIRTIO_BALLOON_F_FREE_PAGE_HINT
On 04.10.19 21:03, Tyler Sanderson wrote: > I think DEFLATE_ON_OOM makes sense conceptually, it's just that the > implementation doesn't play well with the rest of memory management > under memory pressure. > It could probably be fixed with enough effort, but IMO free page hinting > gets 90% of the benefit without poking the dark corners of memory > management and so is a
2010 Dec 13
3
Slow I/O on ocfs2 file system
Hello, I have found that ocfs2 is very slow when doing I/O operations without cache. See a simple test: ng-vvv1:~# dd if=/data/verejna/dd-1G bs=1k | dd of=/dev/null 1048576+0 records in 1048576+0 records out 1073741824 bytes (1.1 GB) copied, 395.183 s, 2.7 MB/s 2097152+0 records in 2097152+0 records out 1073741824 bytes (1.1 GB) copied, 395.184 s, 2.7 MB/s The underlying block device is quite
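Worth noting: bs=1k through a pipe mostly measures syscall and pipe overhead. A sketch of a more telling comparison on the same file (iflag=direct is a standard GNU dd option for O_DIRECT reads):

    # dd if=/data/verejna/dd-1G of=/dev/null bs=1M                 # buffered read
    # dd if=/data/verejna/dd-1G of=/dev/null bs=1M iflag=direct    # uncached read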
2013 Aug 07
2
libvirt possibly ignoring cache=none ?
Hi, I have an instance with 8G of RAM assigned. All block devices have caching disabled (cache=none) on the host. However, cgroup is reporting 4G of cache associated with the instance (on the host): # cgget -r memory.stat libvirt/qemu/i-000009fa libvirt/qemu/i-000009fa: memory.stat: cache 4318011392 rss 8676360192 ... When I drop all system caches on the host... # echo 3 > /proc/sys/vm/drop_caches # ..cache
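A quick way to confirm what the guest disks were actually defined with (standard libvirt tooling; the domain name is taken from the post):

    # virsh dumpxml i-000009fa | grep "cache="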
2019 Oct 04
4
VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Fri, Oct 04, 2019 at 10:06:03AM +0200, David Hildenbrand wrote: > On 04.10.19 01:15, Tyler Sanderson wrote: > > I was mistaken, the problem with overcommit accounting is not fixed by > > the change to shrinker interface. > > This means that large allocations are stopped even if they could succeed > > by deflating the balloon. > > Please note that some people
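For context, the overcommit accounting in question is visible in the guest through standard /proc interfaces (not from the thread itself):

    # cat /proc/sys/vm/overcommit_memory               # 0/1/2: heuristic/always/never
    # grep -E 'CommitLimit|Committed_AS' /proc/meminfo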
2018 Mar 08
0
fuse vs libgfapi LIO performances comparison: how to make tests?
Dear support, I need to export a gluster volume with LIO for a virtualization system. At the moment I have a very basic test configuration: 2x HP 380 G7 (2x Intel X5670 six-core @ 2.93 GHz, 72 GB RAM, RAID10 of 6x 10k rpm SAS disks, Intel X540-T2 10 GbE NIC), directly interconnected. The Gluster configuration is replica 2; the OS is Fedora 27. For my tests I used dd and I found strange results. Apparently the
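A hedged sketch of the kind of dd test described, over the FUSE mount path (host, volume, and sizes are placeholders):

    # mount -t glusterfs srv1:/gv0 /mnt/gv0
    # dd if=/dev/zero of=/mnt/gv0/testfile bs=1M count=4096 oflag=direct conv=fsync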
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly subpar compared to my experience. I did some testing with Gluster 3.12 and oVirt 3.9; on my running production cluster, when I enabled gfapi, even my "before" numbers were significantly better than what Tony is reporting. Before using gfapi: ]# dd if=/dev/urandom of=test.file bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824
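One caveat with that benchmark (an editorial note, not from the thread): /dev/urandom is CPU-bound and can throttle dd well below disk or 10 GbE speed, so a zero-fill with an explicit flush isolates the storage path better:

    # dd if=/dev/zero of=test.file bs=1M count=1024 conv=fsync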
2017 Oct 19
1
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
Michael S. Tsirkin wrote: > On Wed, Oct 18, 2017 at 07:59:23PM +0900, Tetsuo Handa wrote: > > Do you see anything wrong with the patch I used for emulating > > the VIRTIO_BALLOON_F_DEFLATE_ON_OOM path (shown below)? > > > > ---------------------------------------- > > diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c > > index