Displaying 20 results from an estimated 100 matches similar to: "4849565 smallfile too small - change to 64 bit"
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On Tue, Jul 11, 2017 at 11:39 AM, Jo Goossens <jo.goossens at hosted-power.com>
wrote:
> Hello Joe,
>
>
>
>
>
> I just did a mount like this (added the bold):
>
>
> mount -t glusterfs -o
> *attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache*
> ,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log
>
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,

I just did a mount like this (added the bold):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

Results:

root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
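For the NFS side of the comparison the thread is about, a minimal sketch would be to remount the same volume over NFS and rerun the identical smallfile command (the vers=3 option and the assumption that the volume is exported over gNFS or NFS-Ganesha are mine, not from the thread):

umount /var/www
mount -t nfs -o vers=3 192.168.140.41:/www /var/www
./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000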
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay,

What do you mean exactly? What info is missing?

PS: I already found out that for this particular test all the difference is made by negative-timeout=600; when removing it, it's much, much slower again.

Regards
Jo
-----Original message-----
From:Vijay Bellur <vbellur at redhat.com>
Sent:Tue 11-07-2017 18:16
Subject:Re: [Gluster-users] Gluster native mount is
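A quick way to verify that negative-timeout alone accounts for the difference is an A/B remount that adds only that option (a sketch; the volume address, mount point and 600-second value are taken from earlier messages in this listing):

umount /var/www
mount -t glusterfs -o use-readdirp=no,log-level=WARNING 192.168.140.41:/www /var/www            # baseline
umount /var/www
mount -t glusterfs -o use-readdirp=no,log-level=WARNING,negative-timeout=600 192.168.140.41:/www /var/www   # only change: negative-timeout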
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote:
> RE: [Gluster-users] Gluster native mount is really slow compared to nfs
>
> Hello Joe,
>
> I really appreciate your feedback, but I already tried the opcache
> stuff (to not validate at all). It improves of course then, but not
> completely somehow. Still quite slow.
>
> I did not try the mount options yet, but I will now!
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello,

Here is a speedtest with a new setup we just made with gluster 3.10; there are no other differences except glusterfs versus nfs. The nfs is about 80 times faster:

root at app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root at app1:~/smallfile-master# ./smallfile_cli.py --top
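To see which file operations dominate during such a run, gluster's built-in profiling can be enabled on one of the server nodes (a sketch using the volume name from this thread):

gluster volume profile www start
# run the smallfile test from the client, then:
gluster volume profile www info
gluster volume profile www stop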
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,

I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow.

I did not try the mount options yet, but I will now!

With nfs (it doesn't matter much whether built-in version 3 or ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello,

Here is the volume info as requested by Soumya:

#gluster volume info www
Volume Name: www
Type: Replicate
Volume ID: 5d64ee36-828a-41fa-adbf-75718b954aff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.140.41:/gluster/www
Brick2: 192.168.140.42:/gluster/www
Brick3: 192.168.140.43:/gluster/www
Options Reconfigured:
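The 'Options Reconfigured:' section is cut off here; the full effective option list for the volume can be dumped with (a sketch, assuming a gluster release that supports 'volume get'):

gluster volume get www all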
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
My standard response to someone needing filesystem performance for www
traffic is generally, "you're doing it wrong".
https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
That said, you might also look at these mount options:
attribute-timeout, entry-timeout, negative-timeout (set to some large
amount of time), and fopen-keep-cache.
On 07/11/2017 07:48 AM, Jo
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hi all,

One more thing: we have 3 app servers with gluster on them, replicated across 3 different gluster nodes. (So the gluster nodes are app servers at the same time.) We could actually almost work locally if we didn't need to have the same files on the 3 nodes and redundancy :)

Initial cluster was created like this:

gluster volume create www replica 3 transport tcp
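The create command is truncated here; a sketch of its full form, reusing the brick paths from the volume info shown earlier in this listing, would be:

gluster volume create www replica 3 transport tcp 192.168.140.41:/gluster/www 192.168.140.42:/gluster/www 192.168.140.43:/gluster/www
gluster volume start www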
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled:
[root at dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2:
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
Hello,

While there are probably other interesting parameters and options in gluster itself, for us the largest difference in this speedtest and also for our website (real-world performance) was the negative-timeout value during mount. Even a value of only 1 seems to solve so many problems; is there anyone knowledgeable about why this is the case?

This would be better as a default, I suppose ...

I'm still
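To keep the option across reboots it can go into /etc/fstab (a sketch; defaults and _netdev are my additions, the 600-second value is the one used in this thread):

192.168.140.41:/www  /var/www  glusterfs  defaults,_netdev,use-readdirp=no,negative-timeout=600  0  0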
2017 Sep 18
2
Confusing lstat() performance
Hi Ben,
do you know if the smallfile benchmark also does interleaved getdents()
and lstat(), which is what I found to be the key difference that
creates the performance gap (further down this thread)?
Also, wouldn't `--threads 8` change the performance numbers by a factor
of 8 versus the plain `ls` and `rsync` that I did?
Would you mind running those commands directly/plainly on your cluster
to
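One way to check whether a tool interleaves directory reads with per-file stats is to trace it (a sketch; /mnt/dir is the path from this thread, and the exact syscall names in the output vary by libc, e.g. getdents64 and newfstatat on current systems):

mkdir -p /tmp/empty
strace -c ls -l /mnt/dir > /dev/null                      # per-syscall counts and totals
strace -c rsync -a -n /mnt/dir/ /tmp/empty/ > /dev/null
strace -o trace.log -tt -T ls -l /mnt/dir > /dev/null     # per-call timestamps show how getdents and stat calls interleave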
2017 Oct 13
1
small files performance
Where did you read 2k IOPS?
Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even
close to 2000 is impossible.
On 13 Oct 2017, 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote:
> Depends what you need.
> 2K iops for small file writes is not a bad result.
> In my case I had a system that was just poorly written and it was
>
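A back-of-envelope check of what spinning SATA can deliver (a sketch; the 12-spindle count and the replica 3 write penalty are assumptions for illustration only):

echo $(( 12 * 75 ))       # ~900 aggregate read IOPS from 12 SATA disks
echo $(( 12 * 75 / 3 ))   # ~300 client-visible write IOPS once every write lands on 3 replicas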
2017 Sep 14
5
Confusing lstat() performance
Hi,
I have a gluster 3.10 volume with a dir with ~1 million small files in
it, say mounted at /mnt/dir with FUSE, and I'm observing something weird:
When I list and stat them all using rsync, the lstat() calls that
rsync does are incredibly fast (23 microseconds per call on average,
definitely faster than a network roundtrip between my 3 machines hosting
the bricks, connected via Ethernet).
But
2017 Sep 18
0
Confusing lstat() performance
----- Original Message -----
> From: "Niklas Hamb?chen" <mail at nh2.me>
> To: "Ben Turner" <bturner at redhat.com>
> Cc: gluster-users at gluster.org
> Sent: Sunday, September 17, 2017 9:49:10 PM
> Subject: Re: [Gluster-users] Confusing lstat() performance
>
> Hi Ben,
>
> do you know if the smallfile benchmark also does interleaved
2009 Feb 21
1
samba 3.2.6 - Does locking.tdb has a maximum size?
Hi,
I've noticed that the locking.tdb file grows over time. This happened while running the following test:
- delete locking.tdb and restart samba
- connect a linux client using a cifs mount
- run the following script on the client:
#!/bin/bash
for i in `seq 1 130000`; do
echo $i
echo ===
KB_rand=$(( (RANDOM % 300 + 1) * 1000 ))
dd
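To watch whether locking.tdb really grows while the test runs, something like this works (a sketch; the tdb path differs per distribution, /var/lib/samba is an assumption):

watch -n 60 'ls -lh /var/lib/samba/locking.tdb; smbstatus -L | wc -l'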
2007 Jan 08
11
NFS and ZFS, a fine combination
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
____________________________________________________________________________________
Performance, Availability & Architecture Engineering
Roch Bourbonnais Sun Microsystems, Icnc-Grenoble
Senior Performance Analyst 180, Avenue De L'Europe, 38330,
Montbonnot Saint
2006 Sep 28
13
jbod questions
Folks,
We are in the process of purchasing new SAN(s) that our mail server
runs on (JES3). We have moved our mailstores to zfs and continue to
have checksum errors -- they are corrected but this improves on the
ufs inode errors that require system shutdown and fsck.
So, I am recommending that we buy small jbods, do raidz2 and let zfs
handle the raiding of these boxes. As we need more
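A raidz2 pool of the kind described would be created roughly like this (a sketch; the pool name and Solaris device names are hypothetical):

zpool create mailpool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool status mailpool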
2017 Sep 27
0
sparse files on EC volume
Have you done any testing with replica 2/3? IIRC my replica 2/3 tests outperformed EC on smallfile workloads; it may be worth looking into if you can't get EC up to where you need it to be.
-b
----- Original Message -----
> From: "Dmitri Chebotarov" <4dimach at gmail.com>
> Cc: "gluster-users" <Gluster-users at gluster.org>
> Sent: Tuesday,
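For an A/B test of the two layouts, the volumes would be created roughly like this (a sketch; server and brick names are hypothetical):

gluster volume create repvol replica 3 server{1..3}:/bricks/repvol/brick
gluster volume create ecvol disperse 6 redundancy 2 server{1..6}:/bricks/ecvol/brick
gluster volume start repvol && gluster volume start ecvol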
2017 Sep 26
2
sparse files on EC volume
Hi Xavi
At this time I'm using 'plain' bricks with XFS. I'll be moving to LVM
cached bricks.
There is no RAID for data bricks, but I'll be using hardware RAID10 for SSD
cache disks (I can use 'writeback' cache in this case).
'small file performance' is the main reason I'm looking at different
options, i.e. using formatted sparse files.
I spent considerable
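The 'formatted sparse files' idea mentioned here looks roughly like this (a sketch; sizes and paths are hypothetical):

truncate -s 100G /data/brick1.img           # create a sparse 100G file
mkfs.xfs /data/brick1.img                   # put XFS on it
mkdir -p /bricks/brick1
mount -o loop /data/brick1.img /bricks/brick1
du -h --apparent-size /data/brick1.img      # logical size: 100G
du -h /data/brick1.img                      # blocks actually allocated so far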