Displaying 20 results from an estimated 23 matches for "smallfil".
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...added the bold):
>
>
> mount -t glusterfs -o
> *attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache*
> ,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log
> 192.168.140.41:/www /var/www
>
>
> Results:
>
>
> root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test
> --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64
> --record-size 64
> smallfile version 3.0
> hosts in test : ['192.168.140.41']
> top test directory(s) : ['/...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe,
I just did a mount like this (added the bold):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

Results:

root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
smallfile version 3.0
                           hosts in test : ['192.168.140.41']
                   top test directory(s) : ['/var/www/test']
     ...
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
...wrote:
Hello Joe,
I just did a mount like this (added the bold):

mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

Results:

root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
smallfile version 3.0
                           hosts in test : ['192.168.140.41']
                   top test directory(s) : ['/var/www/test']
     ...
2006 Oct 31
0
4849565 smallfile too small - change to 64 bit
Author: rbourbon
Repository: /hg/zfs-crypto/gate
Revision: a40f0552fb65649aa4c6751b7dfa343fad066ef8
Log message:
4849565 smallfile too small - change to 64 bit
6207772 UFS freebehind can slow application performance due to text segment paging
6279932 35% drop in SPECweb2005 Support workload performance from snv_07 to snv_08
Files:
update: usr/src/uts/common/fs/ufs/ufs_vnops.c
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...added/changed while
>> testing. But it was always slow; tuning some kernel parameters
>> improved it slightly (just a few percent, nothing reasonable).
>> I also tried ceph just to compare, I got this with default
>> settings and no tweaks:
>> ./smallfile_cli.py --top /var/www/test --host-set
>> 192.168.140.41 --threads 8 --files 5000 --file-size 64
>> --record-size 64
>> smallfile version 3.0
>> hosts in test : ['192.168.140.41']
>> top test di...
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
Hello,
Here is a speed test with a new setup we just made with gluster 3.10; there are no other differences except glusterfs versus nfs. The nfs is about 80 times faster:

root at app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 500 --file-size 64 --record-size 64
smallfile version...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...m-type: auto
I started with none of them set and added/changed them while testing. But it was always slow; tuning some kernel parameters improved it slightly (just a few percent, nothing reasonable).

I also tried ceph just to compare; I got this with default settings and no tweaks:

./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
smallfile version 3.0
                           hosts in test : ['192.168.140.41']
                   top test directory(s) : ['/var/www/test']
                 ...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...luster.quorum-type: auto
I started with none of them set and added/changed them while testing. But it was always slow; tuning some kernel parameters improved it slightly (just a few percent, nothing reasonable).

I also tried ceph just to compare; I got this with default settings and no tweaks:

./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
smallfile version 3.0
                           hosts in test : ['192.168.140.41']
                   top test directory(s) : ['/var/www/test']
                         ...
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
...I started with none of them set and added/changed them while testing. But
> it was always slow; tuning some kernel parameters improved it
> slightly (just a few percent, nothing reasonable).
> I also tried ceph just to compare, I got this with default settings
> and no tweaks:
> ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41
> --threads 8 --files 5000 --file-size 64 --record-size 64
> smallfile version 3.0
> hosts in test : ['192.168.140.41']
> top test directory(s) : ['/var/www/test']
>...
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
...
>
>
> After that I tried nfs (native gluster nfs 3 and ganesha nfs 4), it was
> a crazy performance difference.
>
>
>
> e.g.: 192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0
>
>
>
> I tried a test like this to confirm the slowness:
>
>
>
> ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41
> --threads 8 --files 5000 --file-size 64 --record-size 64
>
> This test finished in around 1.5 seconds with NFS and in more than 250
> seconds without nfs (can't remember exact numbers, but I reproduced it
> several times fo...
2017 Sep 18
0
Confusing lstat() performance
...2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.50.1:/rhgs/brick1/vmstore
Brick2: 192.168.50.2:/rhgs/brick1/vmstore
Brick3: 192.168.50.3:/rhgs/ssd/vmstore (arbiter)
Options Reconfigured:
features.quota-deem-statfs: on
nfs.disable: on
features.inode-quota: on
features.quota: on
And I ran the smallfile benchmark and created 80k 64KB files. After that I cleared caches everywhere and ran a smallfile stat test:
[root at dell-per730-06-priv ~]# python /smallfile/smallfile_cli.py --files 10000 --file-size 64 --threads 8 --top /gluster-mount/s-file/ --operation stat
version...
2017 Jul 12
0
Gluster native mount is really slow compared to nfs
...timeout=1,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www

So it means only 1 second negative timeout...

In this particular test: ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 50000 --file-size 64 --record-size 64

The result is about 4 seconds with the negative timeout of 1 second defined, and many many minutes without the negative timeout (I quit after 15 minutes of waiting).

I will go over to...
2017 Oct 13
1
small files performance
...performance problems with small files writes on Gluster.
> >> The read performance has been improved in many ways in recent releases
> >> (md-cache, parallel-readdir, hot-tier).
> >> But write performance is more or less the same and you cannot go above
> >> 10K smallfiles create - even with SSD or Optane drives.
> >> Even ramdisk is not helping much here, because the bottleneck is not
> >> in the storage performance.
> >> Key problems I've noticed:
> >> - LOOKUPs are expensive, because there is separate query for every
>...
2017 Sep 18
2
Confusing lstat() performance
Hi Ben,
do you know if the smallfile benchmark also does interleaved getdents()
and lstat, which is what I found as being the key difference that
creates the performance gap (further down this thread)?
Also, wouldn't `--threads 8` change the performance numbers by factor 8
versus the plain `ls` and `rsync` that I did?
Would you...
2017 Sep 14
5
Confusing lstat() performance
Hi,
I have a gluster 3.10 volume with a dir with ~1 million small files in
them, say mounted at /mnt/dir with FUSE, and I'm observing something weird:
When I list and stat them all using rsync, then the lstat() calls that
rsync does are incredibly fast (23 microseconds per call on average,
definitely faster than a network roundtrip between my 3-machine bricks
connected via Ethernet).
But
2017 Sep 18
0
Confusing lstat() performance
...ambüchen" <mail at nh2.me>
> To: "Ben Turner" <bturner at redhat.com>
> Cc: gluster-users at gluster.org
> Sent: Sunday, September 17, 2017 9:49:10 PM
> Subject: Re: [Gluster-users] Confusing lstat() performance
>
> Hi Ben,
>
> do you know if the smallfile benchmark also does interleaved getdents()
> and lstat, which is what I found as being the key difference that
> creates the performance gap (further down this thread)?
I am not sure, you can have a look at it:
https://github.com/bengland2/smallfile
>
> Also, wouldn't `--threa...
2009 Feb 21
1
samba 3.2.6 - Does locking.tdb have a maximum size?
...locking.tdb and restart samba
- connect a linux client using cifs mount
-run the following script on the client:
#!/bin/bash
for i in $(seq 1 130000); do
    echo $i
    echo ===
    # random size between 1 KB and 300 KB (in bytes)
    KB_rand=$(( (RANDOM % 300 + 1) * 1000 ))
    dd if=/dev/urandom of=/mnt/cifs/files/smallfile$i bs=$KB_rand count=1
done
So there's only one client writing many files.
The locking.tdb file grew to 2 megs, at which point space was exhausted (I put the temp files in tmpfs, perhaps a mistake :)).
Is this to be expected? what causes this increase? does it saturate at some point...
2017 Sep 27
0
sparse files on EC volume
Have you done any testing with replica 2/3? IIRC my replica 2/3 tests outperformed EC on smallfile workloads; it may be worth looking into if you can't get EC up to where you need it to be.
-b
----- Original Message -----
> From: "Dmitri Chebotarov" <4dimach at gmail.com>
> Cc: "gluster-users" <Gluster-users at gluster.org>
> Sent: Tuesday, Septemb...
2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote:
> Is it possible to add an arbiter node on the client?
I've been running in that configuration for a couple months now with no
problems. I have 6 data + 3 arbiter bricks hosting VM disk images and
all three of my arbiter bricks are on one of the kvm hosts.
> Can I use multiple arbiters for the same volume? For example,
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client?
Let's assume a gluster storage made with 2 storage servers. This is prone to
split-brain.
An arbiter node can be added, but can I put the arbiter on one of the
clients?
Can I use multiple arbiters for the same volume? For example, one arbiter on
each client.
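For reference, Gluster does allow the arbiter brick to live on any host running glusterd, including a machine that otherwise acts as a client. A sketch of such a volume create (host names and brick paths are placeholders):

```shell
# Two data bricks on the storage servers, arbiter brick on a client host.
# "replica 3 arbiter 1" means 3 bricks per subvolume, the 3rd being the
# arbiter (it stores metadata only, so it needs very little disk space).
gluster volume create myvol replica 3 arbiter 1 \
    server1:/bricks/data1 \
    server2:/bricks/data1 \
    client1:/bricks/arbiter1
gluster volume start myvol
```

Note the "client" must also run glusterd and be probed into the trusted pool, so from Gluster's perspective it becomes a (storage-light) server.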