From looking at the source, it appears that fuse_mount_sys failed.
Unfortunately, there is no logging from that code path.
Is there a way to get logging out of FUSE, or other layers, to debug this?
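In the meantime, here is what I plan to try to get more verbose output; just a sketch, assuming the standard glusterfs client options and a stock strace:

    # Run the client in the foreground with debug logging instead of
    # daemonizing, so every message lands on the terminal:
    glusterfs --debug --volfile-server=server1 --volfile-id=webs /mnt/glusterfs/

    # Trace the mount(2) syscall itself to see which arguments draw EINVAL:
    strace -f -e trace=mount mount -t glusterfs server1:/webs /mnt/glusterfs/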
Thanks,
Rob
From: Rob Leitman
Sent: Wednesday, April 16, 2014 2:03 PM
To: gluster-users@gluster.org
Subject: Gluster mount troubleshooting
I'm running glusterfs 3.4.3, built on Apr 14 2014 22:05:07.
I have created a two-brick volume, but when I try to mount it from the client machine I get an error:
# mount -t glusterfs \
    -o backupvolfile-server=server2,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log \
    server1:/webs /mnt/glusterfs/
/usr/bin/fusermount-glusterfs: mount failed: Invalid argument
/usr/bin/fusermount-glusterfs: mount failed: Invalid argument
Mount failed. Please check the log file for more details.
Can I get troubleshooting advice, please?
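In the meantime, the client-side FUSE basics seem worth ruling out; a quick sanity pass, assuming the usual fuse kernel module and device node:

    # Is the fuse module loaded, and does the device node exist?
    lsmod | grep fuse
    ls -l /dev/fuse

    # Load the module if it turns out to be missing:
    modprobe fuse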
The log file shows:
[2014-04-16 20:46:25.081640] E [mount.c:298:gf_fuse_mount] 0-glusterfs-fuse: mount of server1:/webs to /mnt/glusterfs/ (default_permissions,volfile-max-fetch-attempts=2,allow_other,max_read=131072) failed
[2014-04-16 20:46:25.085507] E [glusterfsd.c:1744:daemonize] 0-daemonize: mount failed
Given volfile:
+------------------------------------------------------------------------------+
  1: volume webs-client-0
  2:     type protocol/client
  3:     option transport-type tcp
  4:     option remote-subvolume /export/data
  5:     option remote-host server1
  6: end-volume
  7:
  8: volume webs-client-1
  9:     type protocol/client
 10:     option transport-type tcp
 11:     option remote-subvolume /export/data
 12:     option remote-host server2
 13: end-volume
 14:
 15: volume webs-replicate-0
 16:     type cluster/replicate
 17:     subvolumes webs-client-0 webs-client-1
 18: end-volume
 19:
 20: volume webs-dht
 21:     type cluster/distribute
 22:     subvolumes webs-replicate-0
 23: end-volume
 24:
 25: volume webs-write-behind
 26:     type performance/write-behind
 27:     subvolumes webs-dht
 28: end-volume
 29:
 30: volume webs-read-ahead
 31:     type performance/read-ahead
 32:     subvolumes webs-write-behind
 33: end-volume
 34:
 35: volume webs-io-cache
 36:     type performance/io-cache
 37:     option cache-size 256MB
 38:     subvolumes webs-read-ahead
 39: end-volume
 40:
 41: volume webs-quick-read
 42:     type performance/quick-read
 43:     option cache-size 256MB
 44:     subvolumes webs-io-cache
 45: end-volume
 46:
 47: volume webs-open-behind
 48:     type performance/open-behind
 49:     subvolumes webs-quick-read
 50: end-volume
 51:
 52: volume webs-md-cache
 53:     type performance/md-cache
 54:     subvolumes webs-open-behind
 55: end-volume
 56:
 57: volume webs
 58:     type debug/io-stats
 59:     option count-fop-hits off
 60:     option latency-measurement off
 61:     subvolumes webs-md-cache
 62: end-volume
+------------------------------------------------------------------------------+
[2014-04-16 20:46:25.284586] W [socket.c:514:__socket_rwv] 0-webs-client-0: readv failed (No data available)
[2014-04-16 20:46:25.296724] W [socket.c:514:__socket_rwv] 0-webs-client-1: readv failed (No data available)
[2014-04-16 20:46:25.304491] E [mount.c:298:gf_fuse_mount] 0-glusterfs-fuse: mount of server1:/webs to /mnt/glusterfs/ (default_permissions,volfile-max-fetch-attempts=2,allow_other,max_read=131072) failed
[2014-04-16 20:46:25.305488] E [glusterfsd.c:1744:daemonize] 0-daemonize: mount failed
Given volfile:
[the second mount attempt fetched a volfile identical to the one above; omitted]
[2014-04-16 20:46:25.391399] W [socket.c:514:__socket_rwv] 0-webs-client-0: readv failed (No data available)
[2014-04-16 20:46:25.397691] W [socket.c:514:__socket_rwv] 0-webs-client-1: readv failed (No data available)
[2014-04-16 20:46:25.436985] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x7f6bf417190d] (-->/lib64/libpthread.so.0(+0x7851) [0x7f6bf4803851] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x40533d]))) 0-: received signum (15), shutting down
[2014-04-16 20:46:25.456624] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x7f5a8696990d] (-->/lib64/libpthread.so.0(+0x7851) [0x7f5a86ffb851] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x40533d]))) 0-: received signum (15), shutting down
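Since both client translators report readv failures, I also plan to confirm that the client can reach glusterd and the brick ports (nc is just my assumed probe tool here):

    # glusterd listens on 24007; the bricks below sit on 49152:
    nc -zv server1 24007
    nc -zv server1 49152
    nc -zv server2 49152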
On both servers, I see good status:
# gluster volume status
Status of volume: webs
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick server1:/export/data                      49152   Y       6549
Brick server2:/export/data                      49152   Y       6536
NFS Server on localhost                         2049    Y       6791
Self-heal Daemon on localhost                   N/A     Y       6798
NFS Server on server1                           2049    Y       6724
Self-heal Daemon on server2                     N/A     Y       6731

There are no active volume tasks
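I can also post the volume definition and peer state if that would help (standard gluster CLI commands):

    gluster volume info webs
    gluster peer status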