> On Jun 1, 2016, at 1:25 PM, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com> wrote:
>
> On 1 Jun 2016 at 22:06, "Gmail" <b.s.mikhael at gmail.com> wrote:
> > stat() on NFS is just a single stat() from the client to the storage
> > node; the storage nodes in the same replica group then talk to each
> > other using libgfapi (no FUSE overhead)
> >
> > conclusion, I'd prefer NFS over FUSE with small files.
> > drawback, NFS HA is more complicated to set up and maintain than FUSE.
>
> NFS HA with Ganesha should be easier than kernel NFS
>
> Skipping the whole FUSE stack should also be good for big files
>
with big files, I don't notice much difference in performance between NFS
and FUSE.
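
if you want to measure it yourself, something like this works (server,
volume and paths below are made up):

  # mount the same volume both ways
  mount -t glusterfs gs1:/myvol /mnt/fuse              # native FUSE client
  mount -t nfs -o vers=3,proto=tcp gs1:/myvol /mnt/nfs # gluster NFS (NFSv3)

  # rough small-file comparison: stat a tree of files on each mount
  time find /mnt/fuse/smallfiles -type f -exec stat -c %s {} + > /dev/null
  time find /mnt/nfs/smallfiles  -type f -exec stat -c %s {} + > /dev/null
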
> with NFS, is replication done directly by the gluster servers, with no
> client involved?
>
correct

> In this case it would be possible to split the gluster networks, with
> 10gb used for replication and multiple bonded 1gb links for clients.
>
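one rough sketch of that split (all names and addresses below are made
up): create the volume against hostnames that resolve on the 10gb backend,
and have the NFS clients mount a virtual IP on the bonded 1gb side. since
only the servers talk to the bricks, replication stays on the backend:

  # /etc/hosts on each gluster server: brick names on the 10gb network
  10.10.10.1  gs1-backend
  10.10.10.2  gs2-backend

  # volume created with the backend names
  gluster volume create myvol replica 2 \
      gs1-backend:/bricks/b1 gs2-backend:/bricks/b1

  # clients mount the VIP on the 1gb client network
  mount -t nfs -o vers=3,proto=tcp 192.168.1.100:/myvol /mnt/myvol
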
don't forget the complication of the Ganesha HA setup; pacemaker is a pain
in the butt.

> I can see only advantages for NFS over native gluster
>
> One question: with no gluster client that always knows on which node a
> single file is located, who tells NFS where to find the required file? Is
> NFS totally distributed, with no "gateway"/"proxy" or any centralized
> server?
>
the NFS client talks to only one NFS server (the one it mounts); the NFS HA
setup is only there to fail over a virtual IP to another healthy node. so
the NFS client will just see 3 minor timeouts and then a major timeout; by
the time that happens, the virtual IP failover will already be done.
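
for reference, the VIP piece is just one pacemaker resource, and the
client side can be tuned to match (names, addresses and timeouts below are
made up; retrans=3 gives the three minor timeouts mentioned above, and
timeo is in tenths of a second):

  # virtual IP managed by pacemaker
  pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
      ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s

  # clients mount against the VIP
  mount -t nfs -o vers=3,proto=tcp,timeo=100,retrans=3 \
      192.168.1.100:/myvol /mnt/myvol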