Hi,

I'm wondering: is it safe to mount my gluster volume using the native
gluster client and NFS from the same client machine?

Example, from one client machine do

mount -o rw,rsize=8192,wsize=8192,nfsvers=3 fs1:/GFS /gfs-nfs
mount -t glusterfs fs1:/GFS /gfs

and then access both mount points simultaneously. Is that OK to do and safe?

Thanks,
Brian
gluster-users-bounces at gluster.org wrote on 12/06/2011 12:30:54 PM:

> I'm wondering is it safe to mount my gluster volume using the native
> gluster client and nfs from the same client machine?
>
> Example from one client machine do
>
> mount -o rw,rsize=8192,wsize=8192,nfsvers=3 fs1:/GFS /gfs-nfs
> mount -t glusterfs fs1:/GFS /gfs
>
> and then access both mount points simultaneously. Is that ok to do and safe?

I don't know if it's recommended, but I've done it a bit with no noticeable
problems (mainly for testing purposes). In theory it's just two different
clients accessing a volume that is designed to support multiple clients, so
it should be fine.

-greg
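If you want to convince yourself that the two mounts behave consistently, a quick check along these lines can help. This is a minimal sketch, not from the thread: the function name is made up, and the mount-point arguments are whatever paths you used in your mount commands (e.g. /gfs-nfs and /gfs from the example above).

```shell
#!/bin/sh
# Sketch: write a file through one mount point and read it back through
# the other, to sanity-check that both mounts of the same volume see the
# same data. Function name and paths are illustrative assumptions.
check_dual_mount() {
    nfs_mnt="$1"      # e.g. /gfs-nfs (NFS mount)
    fuse_mnt="$2"     # e.g. /gfs (native glusterfs mount)
    stamp="dual-mount-test-$$"

    # Write via the first mount point.
    echo "$stamp" > "$nfs_mnt/.dual_mount_test"

    # Read the same file back via the second mount point.
    readback=$(cat "$fuse_mnt/.dual_mount_test")
    rm -f "$nfs_mnt/.dual_mount_test"

    if [ "$readback" = "$stamp" ]; then
        echo "OK: both mounts see the same data"
    else
        echo "MISMATCH: read '$readback', expected '$stamp'"
    fi
}
```

Run it as, say, `check_dual_mount /gfs-nfs /gfs`. Note that NFS client-side attribute caching can delay visibility of changes made through the other mount, so a mismatch on the first try is not necessarily corruption.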
I remember seeing instructions given recently on how to revert a glusterfs
setup from 3.2.* back to 3.1.*. Does anyone remember those instructions?

Today I upgraded a cluster from glusterfs 3.1.5 to 3.2.5 and have ended up
with many log errors of this sort:

[2011-12-06 19:19:56.300768] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1011)

It also now seems to respond very slowly to the glusterfs client, and
responds so slowly to an NFS mount that the mount attempt always times out
and fails.

This is a distributed replicated setup. The OS is CentOS 5.7. "gluster peer
status" shows all servers are up.

To upgrade, I shut down the filesystem with

gluster volume stop yorp

then stopped all of the servers with

service glusterd stop

and went through each host and killed all stray gluster processes. I then
ran rpm -Uvh to install the new rpms, then did

service glusterd start

on all servers to start the daemons back up, and finally ran

gluster volume start yorp

to start up the filesystem. The filesystem actually gets mounted on the
clients by the automounter. This is all as documented in the release notes:
I did no fancy tricks.

Bill Sebok  Computer Software Manager, Univ. of Maryland, Astronomy
Internet: wls at astro.umd.edu  URL: http://furo.astro.umd.edu/
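For anyone debugging a similar flood of messages, a small helper like the following can give a rough count of how often the warning is being hit per log file. This is an illustrative sketch, not from the post: the function name is made up, and the log location is an assumption (glusterd logs are commonly under /var/log/glusterfs/).

```shell
#!/bin/sh
# Sketch: count occurrences of the "Transport endpoint is not connected"
# warning in a given gluster log file. Function name is a hypothetical
# helper, not an actual gluster tool.
count_transport_errors() {
    logfile="$1"   # e.g. a file under /var/log/glusterfs/
    grep -c "Transport endpoint is not connected" "$logfile"
}
```

Comparing counts across servers (and before/after a restart) can help narrow down whether one node is the source of the disconnects.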