search for: smallfiles

Displaying 20 results from an estimated 23 matches for "smallfiles".

2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com> wrote: > Hello Joe, > > > > > > I just did a mount like this (added the bold): > > > mount -t glusterfs -o > *attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache* > ,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log >
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I just did a mount like this (added the bold): mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www Results: root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
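For readability, the mount and benchmark invocation quoted above, reflowed onto separate lines (hostname, volume name, paths, and flags are all as given in the thread; the smallfile flags after --files are cut off by the snippet, so anything beyond them is omitted):

    # Mount the volume with aggressive client-side caching
    # (option values are the ones Jo tested in this thread)
    mount -t glusterfs \
      -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log \
      192.168.140.41:/www /var/www

    # Benchmark small-file operations against the mount
    ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 \
      --threads 8 --files 5000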
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay, What do you mean exactly? What info is missing? PS: I already found out that for this particular test all the difference is made by negative-timeout=600; when removing it, it's much much slower again. Regards Jo -----Original message----- From: Vijay Bellur <vbellur at redhat.com> Sent: Tue 11-07-2017 18:16 Subject: Re: [Gluster-users] Gluster native mount is
2006 Oct 31
0
4849565 smallfile too small - change to 64 bit
Author: rbourbon Repository: /hg/zfs-crypto/gate Revision: a40f0552fb65649aa4c6751b7dfa343fad066ef8 Log message: 4849565 smallfile too small - change to 64 bit 6207772 UFS freebehind can slow application performance due to text segment paging 6279932 35% drop in SPECweb2005 Support workload performance from snv_07 to snv_08 Files: update: usr/src/uts/common/fs/ufs/ufs_vnops.c
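Note that "smallfile" in this changeset is the UFS freebehind size-cap tunable, not the smallfile benchmark discussed in the Gluster threads above. On Solaris it is adjusted via /etc/system; a minimal sketch, assuming the stock tunable names (the values shown are illustrative, not recommendations):

    * /etc/system: keep freebehind enabled (its default) and raise the
    * smallfile size cap so larger files still qualify for freebehind
    set ufs:freebehind = 1
    set ufs:smallfile = 2000000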
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote: > RE: [Gluster-users] Gluster native mount is really slow compared to nfs > > Hello Joe, > > I really appreciate your feedback, but I already tried the opcache > stuff (to not validate at all). It improves of course then, but not > completely somehow. Still quite slow. > > I did not try the mount options yet, but I will now!
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello, Here is a speedtest with a new setup we just made with gluster 3.10; there are no other differences, except glusterfs versus nfs. The nfs is about 80 times faster: root at app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www root at app1:~/smallfile-master# ./smallfile_cli.py --top
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow. I did not try the mount options yet, but I will now! With nfs (doesn't matter much, built-in version 3 or ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello, Here is the volume info as requested by soumya: # gluster volume info www Volume Name: www Type: Replicate Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 192.168.140.41:/gluster/www Brick2: 192.168.140.42:/gluster/www Brick3: 192.168.140.43:/gluster/www Options Reconfigured:
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
My standard response to someone needing filesystem performance for www traffic is generally, "you're doing it wrong". https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ That said, you might also look at these mount options: attribute-timeout, entry-timeout, negative-timeout (set to some large amount of time), and fopen-keep-cache. On 07/11/2017 07:48 AM, Jo
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all, One more thing: we have 3 app servers with gluster on them, replicated on 3 different gluster nodes. (So the gluster nodes are app servers at the same time.) We could actually almost work locally if we didn't need to have the same files on all 3 nodes, plus redundancy :) The initial cluster was created like this: gluster volume create www replica 3 transport tcp
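The create command is cut off by the search snippet; combined with the brick list in the volume info quoted earlier in this thread, the full command was presumably:

    gluster volume create www replica 3 transport tcp \
      192.168.140.41:/gluster/www \
      192.168.140.42:/gluster/www \
      192.168.140.43:/gluster/www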
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled: [root at dell-per730-03 ~]# gluster v info Volume Name: vmstore Type: Replicate Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: 192.168.50.1:/rhgs/brick1/vmstore Brick2:
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
Hello, While there are probably other interesting parameters and options in gluster itself, for us the largest difference in this speedtest and also for our website (real-world performance) was the negative-timeout value during mount. A value of only 1 already seems to solve so many problems; does anyone knowledgeable know why this is the case? Perhaps this should be the default ...? I'm still
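negative-timeout controls how long the FUSE client caches "no such file" (ENOENT) results, so even one second keeps repeated lookups for nonexistent paths from hitting the servers. A minimal sketch of such a mount, assuming the same volume and mount point used elsewhere in the thread:

    # Cache "file does not exist" answers on the client for 1 second
    mount -t glusterfs -o negative-timeout=1 192.168.140.41:/www /var/www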
2017 Oct 13
1
small files performance
...performance problems with small-file writes on Gluster. > >> The read performance has been improved in many ways in recent releases > >> (md-cache, parallel-readdir, hot-tier). > >> But write performance is more or less the same and you cannot go above > >> 10K smallfile creates - even with SSD or Optane drives. > >> Even ramdisk is not helping much here, because the bottleneck is not > >> in the storage performance. > >> Key problems I've noticed: > >> - LOOKUPs are expensive, because there is a separate query for every ...
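For reference, the read-side features named here are ordinary volume options; a sketch of enabling them on a volume called www (option names as in gluster 3.x; availability and defaults vary by release, and hot-tier attach is a separate tiering command omitted here):

    # md-cache: serve stat/xattr results from the client cache, with invalidation
    gluster volume set www features.cache-invalidation on
    gluster volume set www performance.stat-prefetch on
    gluster volume set www performance.md-cache-timeout 600
    # parallel-readdir needs readdir-ahead; fans readdir out across subvolumes
    gluster volume set www performance.readdir-ahead on
    gluster volume set www performance.parallel-readdir on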
2017 Sep 18
2
Confusing lstat() performance
Hi Ben, do you know if the smallfile benchmark also does interleaved getdents() and lstat, which is what I found to be the key difference that creates the performance gap (further down this thread)? Also, wouldn't `--threads 8` change the performance numbers by a factor of 8 versus the plain `ls` and `rsync` that I did? Would you mind running those commands directly/plainly on your cluster to
2017 Sep 14
5
Confusing lstat() performance
Hi, I have a gluster 3.10 volume with a dir with ~1 million small files in it, say mounted at /mnt/dir with FUSE, and I'm observing something weird: when I list and stat them all using rsync, the lstat() calls that rsync does are incredibly fast (23 microseconds per call on average, definitely faster than a network roundtrip between my 3 machines' bricks connected via Ethernet). But
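One way to compare the syscall patterns of the two tools from the client side is to trace them against the same directory; a sketch, assuming /mnt/dir is the FUSE mount described above and /tmp/dst is a scratch destination for rsync's dry run:

    # Count directory-reading vs. stat-family syscalls for each tool
    strace -f -c -e trace=getdents64,lstat,newfstatat,statx \
        rsync -a -n /mnt/dir/ /tmp/dst/
    strace -f -c -e trace=getdents64,lstat,newfstatat,statx \
        ls -l /mnt/dir > /dev/null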
2017 Sep 18
0
Confusing lstat() performance
----- Original Message ----- > From: "Niklas Hambüchen" <mail at nh2.me> > To: "Ben Turner" <bturner at redhat.com> > Cc: gluster-users at gluster.org > Sent: Sunday, September 17, 2017 9:49:10 PM > Subject: Re: [Gluster-users] Confusing lstat() performance > > Hi Ben, > > do you know if the smallfile benchmark also does interleaved
2009 Feb 21
1
samba 3.2.6 - Does locking.tdb have a maximum size?
Hi, I've noticed that the locking.tdb file grows over time. This happened while running the following test: - delete locking.tdb and restart samba - connect a linux client using a cifs mount - run the following script on the client: #!/bin/bash for i in `seq 1 130000`; do echo $i echo === KB_rand=$(( (RANDOM % 300 + 1) * 1000 )) dd
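A repaired, runnable version of that script (the dd target path and flags are assumptions, since the snippet truncates before them):

    #!/bin/bash
    # Create many random-sized files over the CIFS mount; locking.tdb
    # growth is then observed on the samba server.
    for i in $(seq 1 130000); do
        echo $i
        echo ===
        # random size between 1,000 and 300,000 KB, as in the original
        KB_rand=$(( (RANDOM % 300 + 1) * 1000 ))
        # assumed continuation -- the original dd line is cut off here
        dd if=/dev/zero of=/mnt/cifs/file_$i bs=1K count=$KB_rand
    done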
2017 Sep 27
0
sparse files on EC volume
Have you done any testing with replica 2/3? IIRC my replica 2/3 tests outperformed EC on smallfile workloads; it may be worth looking into if you can't get EC up to where you need it to be. -b ----- Original Message ----- > From: "Dmitri Chebotarov" <4dimach at gmail.com> > Cc: "gluster-users" <Gluster-users at gluster.org> > Sent: Tuesday,
2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote: > Is it possible to add an arbiter node on the client? I've been running in that configuration for a couple of months now with no problems. I have 6 data + 3 arbiter bricks hosting VM disk images, and all three of my arbiter bricks are on one of the kvm hosts. > Can I use multiple arbiters for the same volume? For example,
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client? Let's assume a gluster storage made with 2 storage servers. This is prone to split-brain. An arbiter node can be added, but can I put the arbiter on one of the clients? Can I use multiple arbiters for the same volume? For example, one arbiter on each client.
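For reference, arbiter bricks are declared at volume-creation time, and nothing prevents the arbiter brick from living on a client machine; a minimal sketch with hypothetical hostnames and paths:

    # replica 3 arbiter 1: two data bricks plus one metadata-only arbiter per set
    gluster volume create vol1 replica 3 arbiter 1 \
      server1:/bricks/b1 server2:/bricks/b1 client1:/bricks/arb1

In a distributed-replicate volume every third brick in the list is an arbiter, so a multi-set volume naturally has several arbiters, which can be spread across clients.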