Jo Goossens
2017-Jul-11 09:01 UTC
[Gluster-users] Gluster native mount is really slow compared to nfs
Hello,

We tried tons of settings to get a PHP app running on a native gluster mount, e.g.:

192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable 0 0

I tried some mount variants in order to speed things up, without luck.

After that I tried NFS (native gluster NFS 3 and Ganesha NFS 4), and the performance difference was enormous. e.g.:

192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0

I ran a test like this to confirm the slowness:

./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64

This test finished in around 1.5 seconds with NFS and in more than 250 seconds without NFS (I can't remember the exact numbers, but I reproduced it several times for both). With the native gluster mount the PHP app had loading times of over 10 seconds; with the NFS mount it loaded in around 1 second or less (reproduced several times).

I tried all kinds of performance settings and variants of these, but it didn't help; the difference stayed huge. Here are some of the settings I played with, in random order:

gluster volume set www features.cache-invalidation on
gluster volume set www features.cache-invalidation-timeout 600
gluster volume set www performance.stat-prefetch on
gluster volume set www performance.cache-samba-metadata on
gluster volume set www performance.cache-invalidation on
gluster volume set www performance.md-cache-timeout 600
gluster volume set www network.inode-lru-limit 250000
gluster volume set www performance.cache-refresh-timeout 60
gluster volume set www performance.read-ahead disable
gluster volume set www performance.readdir-ahead on
gluster volume set www performance.parallel-readdir on
gluster volume set www performance.write-behind-window-size 4MB
gluster volume set www performance.io-thread-count 64
gluster volume set www performance.client-io-threads on
gluster volume set www performance.cache-size 1GB
gluster volume set www performance.quick-read on
gluster volume set www performance.flush-behind on
gluster volume set www performance.write-behind on
gluster volume set www nfs.disable on
gluster volume set www client.event-threads 3
gluster volume set www server.event-threads 3

The NFS HA setup adds a lot of complexity which we wouldn't need at all otherwise, so could you please explain what is going on here? Is NFS the only way to get acceptable performance? Did I miss one crucial setting perhaps?

We're really desperate, thanks a lot for your help!

PS: We tried with gluster 3.11 and 3.8 on Debian; both had terrible performance when not used with NFS.

Kind regards
Jo Goossens
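As a sketch of the kind of mount variant referred to above, the FUSE client's metadata-caching timeouts can be raised directly in the fstab entry; the option names below are standard mount.glusterfs options, but the timeout values are only illustrative and were not tested in this thread:

192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,attribute-timeout=30,entry-timeout=30,negative-timeout=10 0 0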
Soumya Koduri
2017-Jul-11 09:16 UTC
[Gluster-users] Gluster native mount is really slow compared to nfs
+ Ambarish

On 07/11/2017 02:31 PM, Jo Goossens wrote:
> We tried tons of settings to get a php app running on a native gluster
> mount:
[...]
> I tried all kind of performance settings and variants of this but not
> helped, the difference stayed huge, here are some of the settings
> played with in random order:

Request Ambarish & Karan (cc'ed, who have been working on evaluating the performance of the various access protocols gluster supports) to look at the below settings and provide inputs.

Thanks,
Soumya

> gluster volume set www features.cache-invalidation on
> gluster volume set www features.cache-invalidation-timeout 600
[...]
> gluster volume set www client.event-threads 3
> gluster volume set www server.event-threads 3
Jo Goossens
2017-Jul-11 09:26 UTC
[Gluster-users] Gluster native mount is really slow compared to nfs
Hi all,

One more thing: we have 3 app servers with gluster on them, replicated across 3 different gluster nodes (so the gluster nodes are app servers at the same time). We could actually almost work locally if we didn't need the same files on all 3 nodes and the redundancy. :)

The initial cluster was created like this:

gluster volume create www replica 3 transport tcp 192.168.140.41:/gluster/www 192.168.140.42:/gluster/www 192.168.140.43:/gluster/www force
gluster volume set www network.ping-timeout 5
gluster volume set www performance.cache-size 1024MB
gluster volume set www nfs.disable on # No need for NFS currently
gluster volume start www

To my understanding it still wouldn't explain why NFS has such great performance compared to the native mount...

Regards
Jo

-----Original message-----
From: Soumya Koduri <skoduri at redhat.com>
Sent: Tue 11-07-2017 11:16
Subject: Re: [Gluster-users] Gluster native mount is really slow compared to nfs
To: Jo Goossens <jo.goossens at hosted-power.com>; gluster-users at gluster.org;
CC: Ambarish Soman <asoman at redhat.com>; Karan Sandha <ksandha at redhat.com>;

+ Ambarish

On 07/11/2017 02:31 PM, Jo Goossens wrote:
> We tried tons of settings to get a php app running on a native gluster
> mount:
[...]

Request Ambarish & Karan (cc'ed, who have been working on evaluating the performance of the various access protocols gluster supports) to look at the below settings and provide inputs.

Thanks,
Soumya
[...]
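As a sketch of how the resulting volume layout and the options set above can be checked (standard gluster CLI commands, shown here only for illustration):

gluster volume info www     # replica count, brick list and reconfigured options
gluster volume status www   # status of the brick and self-heal processes on all three nodes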
lemonnierk at ulrar.net
2017-Jul-11 15:02 UTC
[Gluster-users] Gluster native mount is really slow compared to nfs
Hi,

We've been doing that for some clients. Basically it works fine if you configure your OPcache very, very aggressively: increase the RAM available to it, disable any form of OPcache validation from disk, and it'll work great, because your app won't touch gluster. Then whenever you make a change in the PHP, just restart PHP to force it to reload the sources from gluster.

For example:

zend_extension = opcache.so
[opcache]
opcache.enable = 1
opcache.enable_cli = 1
opcache.memory_consumption = 1024
opcache.max_accelerated_files = 80000
opcache.revalidate_freq = 300
opcache.validate_timestamps = 1
opcache.interned_strings_buffer = 32
opcache.fast_shutdown = 1

With that config it works well. It needs some getting used to though, since you'll need to restart PHP to see any change in the sources applied.

If you use something with an on-disk cache (Prestashop, Magento, TYPO3, ...), do think of storing that cache in Redis or something similar, never on gluster; that would kill performance. I've seen a gain of ~10 seconds just by moving the cache from gluster to Redis for Magento, for example.

On Tue, Jul 11, 2017 at 11:01:52AM +0200, Jo Goossens wrote:
> Hello,
>
> We tried tons of settings to get a php app running on a native gluster mount:
[...]
> Kind regards
>
> Jo Goossens
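As a sketch of the "restart PHP" step described above, assuming PHP-FPM on Debian (the exact service name depends on the installed PHP version and SAPI):

# restart PHP-FPM so OPcache is emptied and sources are re-read from gluster
sudo systemctl restart php7.0-fpm
# opcache_reset() would also work, but only if called from the same SAPI
# (e.g. via a small web-accessible script), not from the CLI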