similar to: Confusing lstat() performance

Displaying 20 results from an estimated 9000 matches similar to: "Confusing lstat() performance"

2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled: [root at dell-per730-03 ~]# gluster v info Volume Name: vmstore Type: Replicate Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: 192.168.50.1:/rhgs/brick1/vmstore Brick2:
2017 Sep 17
3
Confusing lstat() performance
On 17/09/17 18:03, Niklas Hambüchen wrote: > So far the only difference between `ls` and `bup index` I could observe > is that `bup index` chdir()s into the directory to index, ls doesn't. > > But when I `cd` into the dir and run `ls` without directory argument, it > is still much faster than bup index for each stat(). Hmm, bup uses the fchdir() syscall to go into the target
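
For reference, a minimal Python sketch of the two access patterns being compared here, assuming a hypothetical mount point /mnt/dir with at least one entry in it:

    import os

    DIR = "/mnt/dir"           # hypothetical mount point; substitute your own
    name = os.listdir(DIR)[0]  # any existing entry in the directory

    # Plain absolute-path lstat(), as `ls -l /mnt/dir` issues:
    os.lstat(os.path.join(DIR, name))

    # bup-style: open the directory, fchdir() into it, then lstat()
    # bare names so the kernel resolves them against the new cwd:
    fd = os.open(DIR, os.O_RDONLY | os.O_DIRECTORY)
    os.fchdir(fd)
    os.lstat(name)
    os.close(fd)

Both orderings end in the same lstat() on the same inode, which is consistent with the observation above that `cd` + `ls` is still fast, i.e. the chdir() alone does not explain the gap.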
2017 Sep 17
0
Confusing lstat() performance
I found the reason now, at least for this set of lstat()s I was looking at. bup first does all getdents(), obtaining all file names in the directory, and then stat()s them. Apparently this destroys some of gluster's caching, making stat()s ~100x slower. What caching could this be, and how could I convince gluster to serve these stat()s as fast as if a getdents() had been done just before
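
A minimal Python sketch of the two syscall orderings in question (with /mnt/dir standing in for the FUSE mount): the batched, bup-style pattern exhausts getdents() before the first stat(), while the interleaved pattern stats each entry while the directory stream is still being read:

    import os
    import time

    DIR = "/mnt/dir"  # hypothetical FUSE mount point

    # Batched (bup-style): read ALL names first, then lstat() each one.
    t0 = time.perf_counter()
    names = os.listdir(DIR)  # pure getdents() loop, no stats
    for name in names:
        os.lstat(os.path.join(DIR, name))
    print("batched:     %.2fs" % (time.perf_counter() - t0))

    # Interleaved (ls-style): os.scandir() reads entries in chunks and we
    # stat each entry as it is yielded, interleaving getdents() and lstat().
    t0 = time.perf_counter()
    for entry in os.scandir(DIR):
        entry.stat(follow_symlinks=False)
    print("interleaved: %.2fs" % (time.perf_counter() - t0))

If the caching hypothesis above is right, the gap between the two numbers is whatever attribute cache (presumably populated during the readdir pass) serves the interleaved stats.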
2017 Sep 18
2
Confusing lstat() performance
Hi Ben, do you know if the smallfile benchmark also does interleaved getdents() and lstat(), which is what I found to be the key difference that creates the performance gap (further down this thread)? Also, wouldn't `--threads 8` change the performance numbers by a factor of 8 versus the plain `ls` and `rsync` that I did? Would you mind running those commands directly/plainly on your cluster to
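
Since smallfile runs with `--threads 8` while `ls` and `rsync` are single-threaded, a like-for-like comparison would look at per-call latency in a single thread. A sketch of such a probe (the path is hypothetical):

    import os
    import time

    DIR = "/mnt/dir"  # hypothetical mount point
    latencies = []
    for name in os.listdir(DIR):
        t0 = time.perf_counter()
        os.lstat(os.path.join(DIR, name))
        latencies.append(time.perf_counter() - t0)

    latencies.sort()
    n = len(latencies)  # assumes a non-empty directory
    print("n=%d min=%.0fus median=%.0fus p99=%.0fus" % (
        n, latencies[0] * 1e6, latencies[n // 2] * 1e6,
        latencies[min(n - 1, int(n * 0.99))] * 1e6))

Per-call percentiles also make a ~100x slowdown on individual lstat()s directly visible, where an 8-thread aggregate throughput number can hide it.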
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com> wrote: > Hello Joe, > > > > > > I just did a mount like this (added the bold): > > > mount -t glusterfs -o > *attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache* > ,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log >
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow. I did not try the mount options yet, but I will now! With nfs (doesn't matter much, built-in version 3 or ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all, One more thing: we have 3 app servers with gluster on them, replicated on 3 different gluster nodes. (So the gluster nodes are app servers at the same time.) We could actually almost work locally if we wouldn't need to have the same files on the 3 nodes and redundancy :) The initial cluster was created like this: gluster volume create www replica 3 transport tcp
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I just did a mount like this (added the bold): mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www Results: root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay, What do you mean exactly? What info is missing? PS: I already found out that for this particular test all the difference is made by negative-timeout=600; when removing it, it's much, much slower again. Regards Jo -----Original message----- From: Vijay Bellur <vbellur at redhat.com> Sent: Tue 11-07-2017 18:16 Subject: Re: [Gluster-users] Gluster native mount is
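
negative-timeout is the FUSE option that caches failed lookups, i.e. how long the client may keep answering "no such file" without asking the servers again; web workloads tend to probe many nonexistent paths (include paths, optional config files) per request, which would explain why this option alone makes the difference. A sketch of how one could measure it (the path is hypothetical):

    import os
    import time

    MISSING = "/var/www/does-not-exist.php"  # hypothetical nonexistent path
    N = 1000

    t0 = time.perf_counter()
    for _ in range(N):
        try:
            os.lstat(MISSING)
        except FileNotFoundError:
            pass  # every iteration is a negative lookup
    per_call = (time.perf_counter() - t0) / N
    # With negative-timeout=600, everything after the first lookup should
    # be served from the FUSE client's cache; without it, each lookup can
    # go over the network to the bricks.
    print("avg negative lookup: %.0f us" % (per_call * 1e6))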
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello, Here is the volume info as requested by Soumya: # gluster volume info www Volume Name: www Type: Replicate Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 192.168.140.41:/gluster/www Brick2: 192.168.140.42:/gluster/www Brick3: 192.168.140.43:/gluster/www Options Reconfigured:
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote: > RE: [Gluster-users] Gluster native mount is really slow compared to nfs > > Hello Joe, > > I really appreciate your feedback, but I already tried the opcache > stuff (to not validate at all). It improves of course then, but not > completely somehow. Still quite slow. > > I did not try the mount options yet, but I will now!
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello, Here is a speed test with a new setup we just made with gluster 3.10; there are no other differences, except glusterfs versus nfs. The nfs is about 80 times faster: root at app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www root at app1:~/smallfile-master# ./smallfile_cli.py --top
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
My standard response to someone needing filesystem performance for www traffic is generally, "you're doing it wrong". https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ That said, you might also look at these mount options: attribute-timeout, entry-timeout, negative-timeout (set to some large amount of time), and fopen-keep-cache. On 07/11/2017 07:48 AM, Jo
2017 Sep 18
1
Confusing lstat() performance
On 18/09/17 17:23, Ben Turner wrote: > Do you want tuned or untuned? If tuned I'd like to try one of my tunings for metadata, but I will use yours if you want. (Re-CC'd list) I would be interested in both, if possible: To confirm that it's not only my machines that exhibit this behaviour given my settings, and to see what can be achieved with your tuned settings. Thank you!
2017 Sep 18
0
Confusing lstat() performance
----- Original Message ----- > From: "Niklas Hambüchen" <mail at nh2.me> > To: "Ben Turner" <bturner at redhat.com> > Cc: gluster-users at gluster.org > Sent: Sunday, September 17, 2017 9:49:10 PM > Subject: Re: [Gluster-users] Confusing lstat() performance > > Hi Ben, > > do you know if the smallfile benchmark also does interleaved
2017 Sep 15
0
Confusing lstat() performance
Hi Niklas, Out of interest have you tried testing performance with performance.stat-prefetch enabled? -- Sam McLeod @s_mcleod https://smcleod.net > On 14 Sep 2017, at 10:42 pm, Niklas Hambüchen <mail at nh2.me> wrote: > > Hi, > > I have a gluster 3.10 volume with a dir with ~1 million small files in > them, say mounted at /mnt/dir with FUSE, and I'm observing
2017 Jan 09
2
Trouble removing files in chrooted sftp
Hi, I have trouble setting up chrooted SFTP for our user. I got the basic SFTP chroot working: the user is chrooted to its home directory, and I've added a /home/userb/etc directory with dummy passwd, group and localtime files. The problem is that instead of only accessing its own files, I need the user to be able to remove another user's files. I have a web application which runs as a different user, the
2017 Sep 15
2
Confusing lstat() performance
On 15/09/17 02:45, Sam McLeod wrote: > Out of interest have you tried testing performance > with performance.stat-prefetch enabled? Not yet, because I'm still struggling to understand the current more basic setup's performance behaviour (with it being off), but it's definitely on my list and I'll report the outcome.
2017 Sep 17
0
Confusing lstat() performance
On 15/09/17 03:46, Niklas Hamb?chen wrote: >> Out of interest have you tried testing performance >> with performance.stat-prefetch enabled? I have now tested with `performance.stat-prefetch: on` but am not observing a difference. So far the only difference between `ls` and `bup index` I could observe is that `bup index` chdir()s into the directory to index, ls doesn't. But when
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi, As a quick workaround for geo-replication to work. Please configure the following option. gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true The above option will not do the lazy umount and as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. It's also visible in df output.