mki-glusterfs at mozone.net
2010-Nov-13 02:06 UTC
[Gluster-users] setvolume failed (Stale NFS file handle) when volfile is changed
Hi,

When the client volume file supplied by one of the servers in a distribute/replicate setup changes, my clients can't remount the filesystem correctly. Turning on debug mode shows these messages:

[2010-11-13 01:46:45] D [client-protocol.c:6178:client_setvolume_cbk] 10.12.47.106-3: setvolume failed (Stale NFS file handle)

The config was generated using glusterfs-volgen. All I was trying to accomplish was to comment out the statprefetch volume definition and remount the fs, but remounting results in only the first primary/backup server in the replicate group getting mounted. Heck, even if I just change transport.remote-port to read report-port (a deliberate typo) and update the config, the clients can't mount the filesystem anymore. The moment I revert the config, they are fine...

This is with 3.0.4, although I've seen this happen with 3.0.5 as well. Yes, I know 3.1 is out, but I'm not comfortable moving to it just yet, so it's not an option... If I copy that exact volfile to the client and then use that to mount the filesystem, it has no problems...

Any ideas as to what is going on here? Why would changing the client volume file on the volfile server break the mount?

Thanks.

Mohan
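For reference, the kind of edit described above might look like the following volfile fragment. This is a hedged sketch in glusterfs-volgen's translator syntax; the volume and subvolume names here are assumptions, not taken from the thread:

```
# Hypothetical client volfile fragment (names assumed).
# Commenting out the stat-prefetch translator like this, then remounting,
# is the sort of change that produced the "setvolume failed" error:
#
# volume statprefetch
#   type performance/stat-prefetch
#   subvolumes iocache
# end-volume
#
# After commenting it out, whatever translator previously sat below
# statprefetch (here, "iocache") becomes the top-most volume the client
# mounts.
```

Because the client fetches this volfile from the server at mount time, any mismatch between what the server hands out and what already-mounted clients hold can surface as the error above.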
Craig Carl
2010-Nov-15 09:20 UTC
[Gluster-users] setvolume failed (Stale NFS file handle) when volfile is changed
Mohan -

All the client and server volume files must be in sync. Having different client vol files on different clients will result in these types of errors; it is also the primary cause of split-brain, so please be cautious when making these kinds of changes.

Thanks,

Craig

-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.carl at gmail.com
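A quick way to act on Craig's advice is to compare checksums of the client volfile everywhere. The sketch below is self-contained for illustration (two local copies stand in for two clients); the paths and hostnames are hypothetical, not from the thread:

```shell
# Sketch only: paths, volume names, and hostnames are assumptions.
# Identical checksums across all clients means the volfiles are in sync.
tmp=$(mktemp -d)

# Simulate two clients by creating two copies of a volfile.
cat > "$tmp/client-a.vol" <<'EOF'
volume statprefetch
  type performance/stat-prefetch
  subvolumes distribute
end-volume
EOF
cp "$tmp/client-a.vol" "$tmp/client-b.vol"

# Count distinct checksums; a single distinct value means the files match.
md5sum "$tmp/client-a.vol" "$tmp/client-b.vol" | awk '{print $1}' | sort -u | wc -l

# In practice you would run the same md5sum on every client and on the
# volfile server, e.g. (hypothetical hosts and path):
#   for h in client1 client2 server1; do
#     ssh "$h" md5sum /etc/glusterfs/glusterfs.vol
#   done
```

If the counts differ, remount the out-of-sync clients only after all copies agree, to avoid the partial-mount behavior Mohan described.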