Displaying 20 results from an estimated 40000 matches similar to: "Documentation on readdir performance"
2018 Jan 26 [1] parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad,
I'm sorry, I don't want to test this again on my system just yet! It caused
too much instability for my users and I don't have enough resources for a
development environment. The only other variable that changed before the
crashes was the metadata-cache group [0], which I enabled the same day as
the parallel-readdir and readdir-ahead options:
$ gluster volume set homes
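(For reference, a minimal sketch of how those three changes are typically applied together, assuming a hypothetical volume named "myvol"; the command quoted above is truncated, so the exact settings used on "homes" are not shown here.)
$ gluster volume set myvol group metadata-cache            # applies the md-cache option group
$ gluster volume set myvol performance.readdir-ahead on
$ gluster volume set myvol performance.parallel-readdir on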
2018 Jan 07 [0] performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7
I guess it is the same as [Gluster-users] "A Problem of readdir-optimize"
http://lists.gluster.org/pipermail/gluster-users/2018-January/033170.html
but on 3.13.
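(If it is the same issue as the linked readdir-optimize thread, one quick check, assuming a hypothetical volume "myvol", is to inspect the two readdir-related options and revert readdir-optimize to its default to see whether the listing comes back.)
$ gluster volume get myvol cluster.readdir-optimize
$ gluster volume get myvol performance.readdir-ahead
$ gluster volume set myvol cluster.readdir-optimize off   # the default value, for comparison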
2018 Jan 26 [0] parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether parallel-readdir or readdir-ahead gives the
disconnects, so we know which one to disable?
parallel-readdir does magic, per the PDF from last year:
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
-v
On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote:
> By the way, on a slightly related note, I'm pretty
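(A sketch of how the two could be isolated, assuming a hypothetical volume "myvol": turn one option off at a time, run the workload for a while, and watch the fuse client log under /var/log/glusterfs/ for disconnect messages between changes.)
$ gluster volume set myvol performance.parallel-readdir off   # test with only readdir-ahead active
$ gluster volume set myvol performance.parallel-readdir on
$ gluster volume set myvol performance.readdir-ahead off      # then test the other way around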
2018 Apr 10 [0] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad,
I actually saw that post already and even asked a question 4 days ago (
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode#comment1172497_540917).
The accepted answer also seems to go against your suggestion to enable
direct-io-mode, as it says it should be disabled for better performance when
the volume is used just for file accesses.
It'd be great if someone from the Gluster team
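(For context, direct-io-mode is a fuse mount option rather than a volume option; a minimal sketch of toggling it, assuming a hypothetical server "server1", volume "myvol", and mount point /mnt/myvol.)
$ mount -t glusterfs -o direct-io-mode=disable server1:/myvol /mnt/myvol
$ mount -t glusterfs -o direct-io-mode=enable  server1:/myvol /mnt/myvol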
2018 Apr 10 [2] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
I wish I knew, or was able to get, a detailed description of those options myself.
Here is direct-io-mode:
https://serverfault.com/questions/517775/glusterfs-direct-i-o-mode
Like you, I ran tests on a large volume of files and found that the main
delays are in the attribute calls, ending up with those mount options to add
performance.
I discovered those options basically by googling this user list with
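(One way to confirm that attribute calls dominate a listing, a sketch assuming a hypothetical mount at /mnt/myvol: count syscalls while listing a directory and compare the time spent in lstat/getxattr with everything else.)
$ strace -c -f ls -l /mnt/myvol/somedir > /dev/null   # the per-syscall summary table is printed to stderr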
2018 Apr 18 [0] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:14 AM, Artem Russakovskii wrote:
> Following up here on a related issue that is very serious for us.
>
> I took down one of the 4 replicate gluster servers for maintenance
> today. There are 2 gluster volumes totaling about 600GB. Not that much
> data. After the server comes back online, it starts auto healing and
> pretty much all operations on gluster freeze for
2018 Apr 10 [0] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Vlad,
I'm using only localhost: mounts.
Can you please explain what effect each option has on the performance issues
shown in my posts?
"negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5"
From what I remember,
direct-io-mode=enable didn't make a difference in my tests, but I suppose I
can try again. The explanations about
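(For reference, a sketch of how that exact option string would look as a single /etc/fstab entry for a localhost mount, assuming a hypothetical volume "myvol" and mount point /mnt/myvol; whether each option actually helps is the open question in this thread.)
localhost:/myvol /mnt/myvol glusterfs defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 0 0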
2018 Apr 18 [0] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 10:35 AM, Artem Russakovskii wrote:
> Hi Ravi,
>
> Could you please expand on how these would help?
>
> By forcing full here, we move the logic from the CPU to network, thus
> decreasing CPU utilization, is that right?
Yes, 'diff' employs the rchecksum FOP, which does a (sha256?) checksum
that can consume CPU. So yes, it is sort of shifting the load from CPU
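(The switch being discussed, as a sketch on a hypothetical volume "myvol": 'full' copies whole files over the network during heal instead of computing checksums with the 'diff' algorithm.)
$ gluster volume set myvol cluster.data-self-heal-algorithm full
$ gluster volume get myvol cluster.data-self-heal-algorithm       # confirm the new value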
2018 Apr 18 [3] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Following up here on a related issue that is very serious for us.
I took down one of the 4 replicate gluster servers for maintenance today.
There are 2 gluster volumes totaling about 600GB. Not that much data. After
the server comes back online, it starts auto healing and pretty much all
operations on gluster freeze for many minutes.
For example, I was trying to run an ls -alrt in a folder with 7300
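(One way to watch how far the self-heal has progressed while operations are slow, assuming a hypothetical volume "myvol".)
$ gluster volume heal myvol info                      # entries still pending heal, per brick
$ gluster volume heal myvol statistics heal-count     # just the pending counts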
2018 Jan 25 [2] parallel-readdir is not recognized in GlusterFS 3.12.4
By the way, on a slightly related note, I'm pretty sure either
parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We
are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_64.
I updated my servers and clients to 3.12.4 and enabled these two options
after reading about them in the 3.10.0 and 3.11.0 release notes. In the
days after enabling these two options all of my
2018 Apr 18 [2] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Hi Ravi,
Could you please expand on how these would help?
By forcing full here, we move the logic from the CPU to network, thus
decreasing CPU utilization, is that right? This is assuming the CPU and
disk utilization are caused by the diff algorithm and not by lstat and other calls
or something.
> Option: cluster.data-self-heal-algorithm
> Default Value: (null)
> Description: Select between
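(The quoted Option / Default Value / Description block is the format printed by the CLI's built-in option help, so the full text can be pulled up directly, e.g.:)
$ gluster volume set help | grep -A 2 cluster.data-self-heal-algorithm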
2018 Apr 18 [0] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Btw, I've now noticed at least 5 variations in toggling binary option
values. Are they all interchangeable, or will using the wrong value not
work in some cases?
yes/no
true/false
True/False
on/off
enable/disable
It's quite a confusing/inconsistent practice, especially given that many
options will accept any value without erroring out or validating it.
Sincerely,
Artem
--
Founder, Android
2018 Apr 18 [1] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
On 04/18/2018 11:59 AM, Artem Russakovskii wrote:
> Btw, I've now noticed at least 5 variations in toggling binary option
> values. Are they all interchangeable, or will using the wrong value
> not work in some cases?
>
> yes/no
> true/false
> True/False
> on/off
> enable/disable
>
> It's quite a confusing/inconsistent practice, especially given that
2018 Apr 18 [2] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
Thanks for the link. Looking at the status of that doc, it isn't quite
ready yet, and there's no mention of the option.
Does it mean that whatever is ready now in 4.0.1 is incomplete but can be
enabled via granular-entry-heal=on, and when it is complete, it'll become
the default and the flag will simply go away?
Is there any risk in enabling the option now in 4.0.1?
Sincerely,
Artem
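(For reference, the flag in question is a regular volume option; a sketch of enabling it on a hypothetical volume "myvol". Whether it is safe to turn on in 4.0.1 is exactly the open question above.)
$ gluster volume set myvol cluster.granular-entry-heal on
$ gluster volume get myvol cluster.granular-entry-heal    # confirm it took effect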
2018 Feb 01 [0] Tiered volume performance degrades badly after a volume stop/start or system restart.
This problem appears to be related to the sqlite3 DB files
that are used for the tiering file access counters, stored on
each hot and cold tier brick in .glusterfs/<volname>.db.
When the tier is first created, these DB files do not exist; they are
created, and everything works fine.
On a stop/start or service restart, the .db files are already
present, albeit empty since I don't have
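(A quick way to look at the counter databases being described, assuming a hypothetical brick path /bricks/brick1; sqlite3 is only used read-only here, and no particular table names are assumed.)
# ls -lh /bricks/brick1/.glusterfs/*.db                        # whether the file exists and its size
# sqlite3 /bricks/brick1/.glusterfs/<volname>.db '.tables'     # list whatever tables are present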
2018 Feb 05 [2] Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report, Artem.
It looks like the issue is about cache warming. Specifically, I suspect rsync
is doing a 'readdir(), stat(), file operations' loop, whereas when a find or
ls is issued, we get a 'readdirp()' request, which contains the stat
information along with the entries and also makes sure the cache is up to date
(at the md-cache layer).
Note that this is just an off-the-memory
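(Based on that explanation, the workaround used elsewhere in this thread is to warm the md-cache with a readdirp-generating crawl before the rsync; a sketch assuming a hypothetical mount at /mnt/myvol.)
$ find /mnt/myvol/data > /dev/null        # ls/find issue readdirp(), which also populates the stat cache
$ rsync -a /mnt/myvol/data/ /backup/data/ # subsequent stat() calls are then served from the warmed cache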
2018 Apr 10 [2] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs
You definitely need mount options in /etc/fstab;
use the ones from here:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I went with using local mounts to achieve performance as well.
Also, the 3.12 or 3.10 branches would be preferable for production.
On Fri, Apr 6, 2018 at 4:12 AM, Artem Russakovskii <archon810 at gmail.com>
wrote:
> Hi again,
>
> I'd like to
2018 Feb 05 [0] Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Hi all,
I have seen this issue as well, on Gluster 3.12.1 (3 bricks per box, 2
boxes, distributed-replicate). My testing shows the same thing -- running a
find on a directory dramatically increases lstat performance. To add
another clue, the performance degrades again after issuing a call to reset
the system's cache of dentries and inodes:
# sync; echo 2 > /proc/sys/vm/drop_caches
I
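(A sketch of the before/after comparison being described, assuming a hypothetical mount at /mnt/myvol; the drop_caches command is the one quoted above.)
$ time ls -l /mnt/myvol/bigdir > /dev/null   # cold cache: slow
$ find /mnt/myvol/bigdir > /dev/null         # warm the cache
$ time ls -l /mnt/myvol/bigdir > /dev/null   # warm cache: fast
# sync; echo 2 > /proc/sys/vm/drop_caches    # as root: drop dentries and inodes
$ time ls -l /mnt/myvol/bigdir > /dev/null   # slow again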
2018 Apr 04 [0] Invisible files and directories
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
On Tue, Apr 3, 2018 at 6:43 PM, Serg Gulko <s.gulko at gmail.com> wrote:
> Hello!
>
> We are running a distributed volume that contains 7 bricks.
> The volume is mounted using the native fuse client.
>
> After an unexpected system reboot, some files disappeared from the fuse
> mount point but are still available
2018 Jan 18 [0] Documentation on readdir performance
All,
A GitHub issue on this (tracking mostly DHT stuff) is at: https://github.com/gluster/glusterfs/issues/117
Slides of the talk on the same topic presented at Vault 2017:
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
regards,
Raghavendra