Hi all,

As we all know, Gluster provides NFS mounts. However, if the server named
in the mount command fails, clients lose their connection to Gluster. With
the Gluster native client, by contrast, such a failure has no effect on
clients. For example:

    mount -t glusterfs host1:/vol1 /mnt

If host1 goes down for some reason, the client keeps working and never
notices the failure (assuming we have multiple Gluster servers). However,
if we mount like this instead:

    mount -t nfs -o vers=3 host1:/vol1 /mnt

and host1 fails, the client loses its connection to the Gluster servers.

We want to use the NFS approach. Could anyone suggest a way to solve this?
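By the way, for mount-time failover with the native client there is also a
backup volfile server mount option; the exact option name varies between
Gluster versions, so treat this as a sketch (host2 stands in for any other
server in the pool):

    mount -t glusterfs -o backupvolfile-server=host2 host1:/vol1 /mnt

That way the client can still fetch the volume file from host2 if host1
happens to be down when the mount is issued.

Thanks

Zhenghua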
On Thursday, January 31, 2013 11:28:04 AM glusterzhxue wrote:
> As we all know, Gluster provides NFS mounts. However, if the server
> named in the mount command fails, clients lose their connection to
> Gluster. With the Gluster native client, by contrast, such a failure
> has no effect on clients. For example:
>
>     mount -t glusterfs host1:/vol1 /mnt
>
> If host1 goes down for some reason, the client keeps working and never
> notices the failure (assuming we have multiple Gluster servers).

The client will still fail in most cases, since host1 (if I follow you) is
part of the Gluster trusted pool. It will certainly fail if the volume is
distribute-only; maybe not if it is distributed-replicated. But if host1
goes down, the client will not be able to find a Gluster volume to mount.

> However, if we mount like this instead:
>
>     mount -t nfs -o vers=3 host1:/vol1 /mnt
>
> and host1 fails, the client loses its connection to the Gluster servers.

If the client were mounting the Gluster volume via a re-export from an
intermediate host, you might be able to fail over to another intermediate
NFS server; but if it was mounting from a Gluster host directly, it would
fail for the reasons above.

> We want to use the NFS approach. Could anyone suggest a way to solve
> this?

Multiple intermediate NFS servers with round-robin addressing? Has anyone
tried this?
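Something like the following, purely hypothetical: two intermediate
servers re-exporting the Gluster volume over NFS, behind a round-robin DNS
name (all names and addresses made up):

    ; zone file for example.com: nfs resolves to either server
    nfs    60    IN    A    10.1.1.10
    nfs    60    IN    A    10.1.1.11

Clients would mount nfs.example.com:/vol1. New mounts would spread across
the two servers, though a client that has already mounted stays pinned to
whichever address it resolved at mount time.

---
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697
Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
---
"Something must be done. [X] is something. Therefore, we must do it."
Bruce Schneier, on American response to just about anything.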
Thanks. But with NFS, even with software like keepalived, we lose
Gluster's load balancing: all client traffic first flows into, and finally
out of, the one server holding the NFS mount address, and that server
becomes a bottleneck.

Zhenghua

From: R, Robin
Date: 2013-01-31 11:47
To: glusterzhxue
Subject: Re: [Gluster-users] NFS availability

Hi,

You can use software like keepalived to manage a floating IP. You can get
an IP, say 10.1.1.3, that floats between your host 1 and host 2; only one
host holds 10.1.1.3 at a time. If host 1 dies, host 2 assumes the floating
IP. You mount your NFS against the floating IP, so the NFS mount keeps
working even if one of the hosts dies.
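A minimal keepalived.conf sketch of what I mean; the interface name,
router id, and addresses are placeholders you would adapt:

    vrrp_instance VI_1 {
        state MASTER           # BACKUP on host 2
        interface eth0         # NIC facing the NFS clients
        virtual_router_id 51
        priority 100           # use a lower priority (e.g. 90) on host 2
        advert_int 1
        virtual_ipaddress {
            10.1.1.3           # the floating IP clients mount against
        }
    }

Robin

On Wed, Jan 30, 2013 at 10:28 PM, glusterzhxue <glusterzhxue at 163.com>
wrote:
> We want to use the NFS approach. Could anyone suggest a way to solve
> this?
>
> Thanks
>
> Zhenghua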