Luciano, how do you enable direct-io-mode?
On Wednesday, June 22, 2016 7:09 AM, Luciano Giacchetta <ldgiacchetta at gmail.com> wrote:
Hi,
I have a similar scenario: a car classifieds site with millions of small files, mounted with the gluster native client in a replica config.
The gluster server has 16 GB RAM and 4 cores, and mounts the glusterfs with direct-io-mode=enable (see the mount sketch below). I then export it to all servers (Windows included, via CIFS) with the following volume options:
performance.cache-refresh-timeout: 60
performance.read-ahead: enable
performance.write-behind-window-size: 4MB
performance.io-thread-count: 64
performance.cache-size: 12GB
performance.quick-read: on
performance.flush-behind: on
performance.write-behind: on
nfs.disable: on
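
For reference, direct-io-mode is passed at mount time, the performance.* settings above are per-volume options, and the CIFS export would be a Samba share over the FUSE mount. A minimal sketch of how the pieces fit together; the volume name carsvol, host gserver1, and mount point /mnt/cars are placeholders, not the actual setup:

    # Native-client mount; direct-io-mode=enable bypasses the kernel page cache
    mount -t glusterfs -o direct-io-mode=enable gserver1:/carsvol /mnt/cars

    # The performance.* options listed above are applied per volume, e.g.:
    gluster volume set carsvol performance.cache-size 12GB
    gluster volume set carsvol performance.io-thread-count 64

    # CIFS export for the Windows servers: a Samba share over that mount,
    # e.g. in /etc/samba/smb.conf:
    #   [cars]
    #       path = /mnt/cars
    #       read only = no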
--
Regards, LG
On Sat, May 28, 2016 at 6:46 AM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:
> If I remember correctly, each stat() on a file needs to be sent to all hosts in the replica to check whether they are in sync.
>
> Is this true for both the gluster native client and NFS-Ganesha?
>
> Which is best for shared-hosting storage with many millions of small files, about 15,000,000 small files in 800 GB? Or even for Maildir hosting?
>
> Ganesha can be configured for HA and load balancing, so the biggest issue that was present in standard NFS is now gone.
>
> Any advantage of native gluster over Ganesha? Removing the FUSE requirement should also be a performance advantage for Ganesha over the native client.
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
On 21 Jun 2016 19:02, "Luciano Giacchetta" <ldgiacchetta at gmail.com> wrote:
> [same scenario and volume options as quoted above]

Which performance are you getting?
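
A quick way to put numbers on that is a small-file loop on the mounted volume. A minimal sketch, assuming the volume is mounted at /mnt/cars (the path and file count are placeholders):

    # Time creating and then stat()ing 10,000 small 4 KB files
    mkdir -p /mnt/cars/benchtest
    time for i in $(seq 1 10000); do
        dd if=/dev/zero of=/mnt/cars/benchtest/f$i bs=4k count=1 status=none
    done
    time for i in $(seq 1 10000); do
        stat /mnt/cars/benchtest/f$i > /dev/null
    done

The stat() pass is the interesting one here, since that is the call the thread says gets fanned out to every host in the replica.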