When I start a glusterfs client and then kill it, an attempt to restart the glusterfs client process fails with the error below. It looks like the channel glusterfs uses to communicate with FUSE is not released. If I use a different mount-point argument when I start a client glusterfs process, it starts fine.

Is there a way to clean up resources so that a restart of the glusterfs client wouldn't fail?

$ /usr/local/sbin/glusterfs -f ./client_test.vol -N -l /dev/stdout --log-level=DEBUG /tmp/local

2008-11-11 12:48:46 D [xlator.c:115:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/1.3.12/xlator/mount/fuse.so
fuse: failed to access mountpoint /tmp/local: Transport endpoint is not connected
2008-11-11 12:48:46 E [fuse-bridge.c:2699:init] glusterfs-fuse: fuse_mount failed (Transport endpoint is not connected)
2008-11-11 12:48:46 E [glusterfs.c:547:main] glusterfs: Initializing FUSE failed
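For reference, the stale mount can be confirmed from the kernel's mount table before retrying. A minimal check, assuming a Linux host and the /tmp/local mount point from the log above:

# The error usually means the kernel still holds a FUSE mount entry
# whose userspace daemon has been killed. See whether the mount point
# is still listed:
$ grep /tmp/local /proc/mounts

# A plain stat on the directory fails the same way while the entry is stale:
$ stat /tmp/local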
Aleynikov, Serge
2008-Nov-11 19:08 UTC
[Gluster-users] SPAM WARNING!: Transport is not connected
Sorry for the noise - found the solution in the troubleshooting guide. Does the glusterfs client need to run as a root user?

________________________________
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Aleynikov, Serge
Sent: Tuesday, November 11, 2008 1:52 PM
To: gluster-users at gluster.org
Subject: SPAM WARNING!: [Gluster-users] Transport is not connected

When I start a glusterfs client and then kill it, an attempt to restart the glusterfs client process fails with the error below. [...]
Aleynikov,

Please do 'umount <mount-directory>' to release the stale mount. Killing the glusterfs client does not guarantee that the mounted directory is unmounted.

-- gowda

On Wed, Nov 12, 2008 at 12:21 AM, Aleynikov, Serge <Serge.Aleynikov at gs.com> wrote:

> When I start a glusterfs client and then kill it, an attempt to restart
> the glusterfs client process fails with the error below. [...]

--
hard work often pays off after time, but laziness always pays off now
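Put together, the recovery might look like the sketch below. It assumes the /tmp/local mount point and ./client_test.vol volfile from the original post; plain 'umount' generally needs root, while 'fusermount -u' is the unprivileged FUSE unmount:

#!/bin/sh
# Recover from a stale glusterfs FUSE mount, then restart the client.
# Mount point and volfile are taken from the original post.
MOUNTPOINT=/tmp/local
VOLFILE=./client_test.vol

# Release the stale kernel mount left behind by the killed client;
# fall back to the unprivileged FUSE unmount if plain umount fails.
umount "$MOUNTPOINT" 2>/dev/null || fusermount -u "$MOUNTPOINT"

# The client should now start cleanly (runs in the foreground with -N,
# exactly as in the original command line).
/usr/local/sbin/glusterfs -f "$VOLFILE" -N -l /dev/stdout --log-level=DEBUG "$MOUNTPOINT"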