I have a 3 node gluster cluster running:

gluster --version
glusterfs 3.3.1 built on Oct 11 2012 22:01:05

It was previously running a 2.? version, which was uninstalled before 3.3.1 was installed. The cluster appears to be working correctly:

gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data

The client previously worked. It has these RPMs installed:

rpm -qa | grep gluster
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-3.3.1-1.el6.x86_64
glusterfs-debuginfo-3.3.1-1.el6.x86_64

My fstab has this line (copied from a backup made before the client reinstall):

mseas-data:/gdata /gdata glusterfs defaults 0 0

I have also tried:

mount.glusterfs mseas-data:/gdata \gdata

From /var/log/glusterfs/gdata.log:

cat gdata.log
[2012-11-16 10:43:36.462998] I [glusterfsd.c:1666:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.1
[2012-11-16 10:43:36.471280] I [io-cache.c:1549:check_cache_size_ok] 0-gdata-quick-read: Max cache size is 25332322304
[2012-11-16 10:43:36.471347] I [io-cache.c:1549:check_cache_size_ok] 0-gdata-io-cache: Max cache size is 25332322304
[2012-11-16 10:43:36.474518] I [client.c:2142:notify] 0-gdata-client-0: parent translators are ready, attempting connect on transport
[2012-11-16 10:43:36.478166] I [client.c:2142:notify] 0-gdata-client-1: parent translators are ready, attempting connect on transport
[2012-11-16 10:43:36.481604] I [client.c:2142:notify] 0-gdata-client-2: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
 1: volume gdata-client-0
 2:   type protocol/client
 3:   option remote-host gluster-0-0
 4:   option remote-subvolume /mseas-data-0-0
 5:   option transport-type tcp
 6: end-volume
 7:
 8: volume gdata-client-1
 9:   type protocol/client
10:   option remote-host gluster-0-1
11:   option remote-subvolume /mseas-data-0-1
12:   option transport-type tcp
13: end-volume
14:
15: volume gdata-client-2
16:   type protocol/client
17:   option remote-host gluster-data
18:   option remote-subvolume /data
19:   option transport-type tcp
20: end-volume
21:
22: volume gdata-dht
23:   type cluster/distribute
24:   subvolumes gdata-client-0 gdata-client-1 gdata-client-2
25: end-volume
26:
27: volume gdata-write-behind
28:   type performance/write-behind
29:   subvolumes gdata-dht
30: end-volume
31:
32: volume gdata-read-ahead
33:   type performance/read-ahead
34:   subvolumes gdata-write-behind
35: end-volume
36:
37: volume gdata-io-cache
38:   type performance/io-cache
39:   subvolumes gdata-read-ahead
40: end-volume
41:
42: volume gdata-quick-read
43:   type performance/quick-read
44:   subvolumes gdata-io-cache
45: end-volume
46:
47: volume gdata-stat-prefetch
48:   type performance/stat-prefetch
49:   subvolumes gdata-quick-read
50: end-volume
51:
52: volume gdata
53:   type debug/io-stats
54:   subvolumes gdata-stat-prefetch
55: end-volume
+------------------------------------------------------------------------------+
[2012-11-16 10:43:36.485221] E [socket.c:1715:socket_connect_finish] 0-gdata-client-2: connection to  failed (No route to host)
[2012-11-16 10:43:36.485534] E [client-handshake.c:1717:client_query_portmap_cbk] 0-gdata-client-1: failed to get the port number for remote subvolume
[2012-11-16 10:43:36.485607] I [client.c:2090:client_rpc_notify] 0-gdata-client-1: disconnected
[2012-11-16 10:43:36.485626] E [client-handshake.c:1717:client_query_portmap_cbk] 0-gdata-client-0: failed to get the port number for remote subvolume
[2012-11-16 10:43:36.485647] I [client.c:2090:client_rpc_notify] 0-gdata-client-0: disconnected
[2012-11-16 10:43:36.492937] I [fuse-bridge.c:4191:fuse_graph_setup] 0-fuse: switched to graph 0
[2012-11-16 10:43:36.493966] I [fuse-bridge.c:3376:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.13
[2012-11-16 10:43:36.494088] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-16 10:43:36.494214] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-16 10:43:36.494231] W [fuse-bridge.c:513:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Invalid argument)
[2012-11-16 10:43:36.500651] I [fuse-bridge.c:4091:fuse_thread_proc] 0-fuse: unmounting /var/log/glusterfs/gdata
[2012-11-16 10:43:36.501036] W [glusterfsd.c:831:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x3065ae5ccd] (-->/lib64/libpthread.so.0() [0x30666077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d4d]))) 0-: received signum (15), shutting down
[2012-11-16 10:43:36.501072] I [fuse-bridge.c:4648:fini] 0-fuse: Unmounting '/var/log/glusterfs/gdata'.

It appears I am reaching the file system, but:

0-gdata-client-0: failed to get the port number for remote subvolume
0-gdata-dht: Failed to get hashed subvol for /

I have Googled and troubleshot but have been unable to find a solution. Your help would be greatly appreciated.

Thanks,
Steve Postma
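One detail worth checking in the manual mount command above, if the backslash in `\gdata` is literal and not an artifact of the archive: in a POSIX shell a backslash only escapes the next character, so `\gdata` reaches mount.glusterfs as the relative path `gdata`, not the absolute mount point `/gdata` named in the fstab line. A minimal sketch of the difference (the mount command itself is only shown as a comment, since it needs a live Gluster server):

```shell
# "\gdata" collapses to the relative path "gdata" before the command sees it:
echo \gdata
# while "/gdata" stays absolute:
echo /gdata
# So the intended manual mount would be (not run here):
# mount.glusterfs mseas-data:/gdata /gdata
```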
I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one of the actual machines in the cluster to itself, as well as from various other clients. They all seem to be failing in the same part of the process.

From the client log, with my comments inline:

Initial contact with server, no errors:

[2012-11-19 15:07:44.802826] I [glusterfsd.c:1666:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.1
[2012-11-19 15:07:44.811730] I [io-cache.c:1549:check_cache_size_ok] 0-gdata-quick-read: Max cache size is 25332322304
[2012-11-19 15:07:44.811797] I [io-cache.c:1549:check_cache_size_ok] 0-gdata-io-cache: Max cache size is 25332322304
[2012-11-19 15:07:44.815938] I [client.c:2142:notify] 0-gdata-client-0: parent translators are ready, attempting connect on transport
[2012-11-19 15:07:44.819667] I [client.c:2142:notify] 0-gdata-client-1: parent translators are ready, attempting connect on transport
[2012-11-19 15:07:44.823668] I [client.c:2142:notify] 0-gdata-client-2: parent translators are ready, attempting connect on transport

Data from server is returned, looks like proper responses:

Given volfile:
+------------------------------------------------------------------------------+
[identical to the volfile in my previous message]
+------------------------------------------------------------------------------+

Port is changed and we can no longer connect:

[2012-11-19 15:07:44.828213] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 0-gdata-client-2: changing port to 24009 (from 0)
[2012-11-19 15:07:44.828285] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 0-gdata-client-0: changing port to 24009 (from 0)
[2012-11-19 15:07:44.828329] I [rpc-clnt.c:1657:rpc_clnt_reconfig] 0-gdata-client-1: changing port to 24009 (from 0)
[2012-11-19 15:07:48.812899] W [client-handshake.c:1819:client_dump_version_cbk] 0-gdata-client-2: received RPC status error
[2012-11-19 15:07:48.812958] W [socket.c:1512:__socket_proto_state_machine] 0-gdata-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.2:24009)
[2012-11-19 15:07:48.812985] I [client.c:2090:client_rpc_notify] 0-gdata-client-2: disconnected
[2012-11-19 15:07:48.816284] W [client-handshake.c:1819:client_dump_version_cbk] 0-gdata-client-0: received RPC status error
[2012-11-19 15:07:48.816325] W [socket.c:1512:__socket_proto_state_machine] 0-gdata-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.10:24009)
[2012-11-19 15:07:48.816347] I [client.c:2090:client_rpc_notify] 0-gdata-client-0: disconnected
[2012-11-19 15:07:48.819636] W [client-handshake.c:1819:client_dump_version_cbk] 0-gdata-client-1: received RPC status error
[2012-11-19 15:07:48.819669] W [socket.c:1512:__socket_proto_state_machine] 0-gdata-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.11:24009)
[2012-11-19 15:07:48.819688] I [client.c:2090:client_rpc_notify] 0-gdata-client-1: disconnected
[2012-11-19 15:07:48.827123] I [fuse-bridge.c:4191:fuse_graph_setup] 0-fuse: switched to graph 0
[2012-11-19 15:07:48.827368] I [fuse-bridge.c:3376:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.13
[2012-11-19 15:07:48.827492] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-19 15:07:48.827607] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-19 15:07:48.827626] W [fuse-bridge.c:513:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Invalid argument)
[2012-11-19 15:07:48.827709] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-19 15:07:48.827779] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-19 15:07:48.827917] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-19 15:07:48.828063] E [dht-common.c:1372:dht_lookup] 0-gdata-dht: Failed to get hashed subvol for /
[2012-11-19 15:07:48.841568] I [fuse-bridge.c:4091:fuse_thread_proc] 0-fuse: unmounting /gdata
[2012-11-19 15:07:48.841928] W [glusterfsd.c:831:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x3065ae5ccd] (-->/lib64/libpthread.so.0() [0x30666077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d4d]))) 0-: received signum (15), shutting down
[2012-11-19 15:07:48.841965] I [fuse-bridge.c:4648:fini] 0-fuse: Unmounting '/gdata'.

netstat verifies that glusterfs is listening on 24009:

[root at mseas-data ~]# netstat --tcp --listening --programs
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address                 Foreign Address   State    PID/Program name
tcp        0      0 *:amanda                      *:*               LISTEN   13890/xinetd
tcp        0      0 *:sift-uft                    *:*               LISTEN   23324/rpc.rquotad
tcp        0      0 *:38465                       *:*               LISTEN   26629/glusterfs
tcp        0      0 *:kamanda                     *:*               LISTEN   13890/xinetd
tcp        0      0 *:nfs                         *:*               LISTEN   -
tcp        0      0 *:24007                       *:*               LISTEN   27040/glusterd
tcp        0      0 *:sunrpc                      *:*               LISTEN   2682/portmap
tcp        0      0 *:51538                       *:*               LISTEN   -
tcp        0      0 *:ssh                         *:*               LISTEN   14045/sshd
tcp        0      0 *:ipcserver                   *:*               LISTEN   19498/rpc.statd
tcp        0      0 localhost.localdomain:smtp    *:*               LISTEN   3268/sendmail
tcp        0      0 localhost.lo:x11-ssh-offset   *:*               LISTEN   23982/sshd
tcp        0      0 *:794                         *:*               LISTEN   23511/rpc.mountd
tcp        0      0 localhost.localdomain:6011    *:*               LISTEN   24022/sshd
tcp        0      0 localhost.localdomain:6012    *:*               LISTEN   9462/sshd
tcp        0      0 *:24009                       *:*               LISTEN   26625/glusterfsd
tcp        0      0 *:ssh                         *:*               LISTEN   14045/sshd
tcp        0      0 localhost6.l:x11-ssh-offset   *:*               LISTEN   23982/sshd
tcp        0      0 localhost6.localdomain:6011   *:*               LISTEN   24022/sshd
tcp        0      0 localhost6.localdomain:6012   *:*               LISTEN   9462/sshd

The iptables service has been stopped on all machines.

Any help you could give me would be greatly appreciated.

Thanks,
Steve Postma

________________________________
From: Steve Postma
Sent: Friday, November 16, 2012 10:51 AM
To: gluster-users at gluster.org
Subject: cant mount gluster volume

I have a 3 node gluster cluster running: gluster --version glusterfs 3.3.1 built on Oct 11 2012 22:01:05. It was previously running a 2.? version, uninstalled and installed 3.3.1. It appears to be working correctly. [...]
Thanks, you're right. I can telnet to both ports, 24009 and 24007.

________________________________
From: John Mark Walker [johnmark at redhat.com]
Sent: Monday, November 19, 2012 3:44 PM
To: Steve Postma
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] cant mount gluster volume

----- Original Message -----
> I connect on 24009 glusterfs and fail on 27040 glusterd
> Steve

27040 is the PID. Were you connecting to the right port? :)

-JM
________________________________
Steve - have you been to #gluster on IRC? I recommend you drop by tomorrow morning.

-JM

----- Original Message -----
> Thanks, you're right. Can telnet to both ports. 24009 and 24007
> ________________________________
> From: John Mark Walker [johnmark at redhat.com]
> Sent: Monday, November 19, 2012 3:44 PM
> To: Steve Postma
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] cant mount gluster volume
>
> ----- Original Message -----
> > I connect on 24009 glusterfs and fail on 27040 glusterd
> > Steve
>
> 27040 is the PID. Were you connecting to the right port? :)
>
> -JM
> ________________________________
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
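The port/PID mix-up above is easy to hit: in `netstat --tcp --listening --programs` output, the Local Address column carries the listen port (e.g. `*:24007`), while the final column is `PID/Program name` (e.g. `27040/glusterd`). A small sketch of pulling the two apart, using a whitespace-collapsed copy of one line from the netstat output earlier in the thread:

```shell
# One line from the netstat output above, whitespace collapsed:
line="tcp 0 0 *:24007 *:* LISTEN 27040/glusterd"

# Field 4 is address:port; field 7 is PID/program name.
port=$(echo "$line" | awk '{n=split($4,a,":"); print a[n]}')
pid=$(echo "$line" | awk '{split($7,p,"/"); print p[1]}')

echo "listen port: $port"   # prints: listen port: 24007
echo "daemon pid: $pid"     # prints: daemon pid: 27040
```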
I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a Rocks 6.2 install to the cluster. I was able to overcome those issues and mount the export on my node. Thanks to all for your help.

However, I can only view the portion of the files that is stored directly on one brick in the cluster. The other bricks do not seem to be replicating, though gluster reports the volume as up.

[root at mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data

The brick we are attaching to has this in the fstab file:

/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0

but "mount -a" does not appear to do anything. I have to run "mount -t xfs /dev/mapper/the_raid-lv_data /data" manually to mount it.

Any help with troubleshooting why we are only seeing data from 1 brick of 3 would be appreciated.

Thanks,
Steve Postma

________________________________
From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users at gluster.org
Subject: cant mount gluster volume

I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one of the actual machines in the cluster to itself, as well as from various other clients. They all seem to be failing in the same part of the process.
Steve,

The volume is a pure distribute:

> Type: Distribute

In order to have files replicate, you need:

1) A number of bricks that is a multiple of the replica count. E.g., for your three node configuration, you would need two bricks per node to set up replica 2. You could set up replica 3, but you will take a performance hit in doing so.

2) A replica count specified during volume creation, e.g.:

gluster volume create <vol name> replica 2 server1:/export server2:/export

From the volume info you provided, the export directories are different for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data

Which node are you trying to mount to /data? If it is not the gluster-data node, then it will fail if there is not a /data directory. In this case, that is a good thing, since mounting to /data on gluster-0-0 or gluster-0-1 would not accomplish what you need.

To clarify, there is a distinction to be made between the export volume mount and the gluster mount point. In this case, you are mounting the brick. In order to see all the files, you would need to mount the volume with the native client, or NFS.

For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>

Thanks,

Eco

On 11/20/2012 09:42 AM, Steve Postma wrote:
> I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.
>
> I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. I was able to overcome those issues and mount the export on my node. Thanks to all for your help.
>
> However, I can only view the portion of files that is directly stored on the one brick in the cluster. The other bricks do not seem to be replicating, tho gluster reports the volume as up.
> [...]
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
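Eco's two requirements can be made concrete. With three nodes and replica 2, the brick count must be a multiple of 2, so two bricks per node gives six bricks. The create command below is a hypothetical sketch only; the export paths are invented for illustration and are not the cluster's real brick paths:

```shell
nodes=3
replica=2
# The brick count must be a multiple of the replica count:
bricks=$((nodes * replica))
echo "bricks needed: $bricks"   # prints: bricks needed: 6

# Hypothetical create command with two bricks per node (paths are examples):
# gluster volume create gdata-rep replica 2 \
#     gluster-0-0:/export/brick1 gluster-0-0:/export/brick2 \
#     gluster-0-1:/export/brick1 gluster-0-1:/export/brick2 \
#     gluster-data:/export/brick1 gluster-data:/export/brick2
```

With replica 2, consecutive brick pairs on the command line become mirrors of each other, which is why spreading each pair across two different nodes matters.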
Hi Eco, thanks for your help.

If I run on brick 1:

mount -t glusterfs gluster-data:/gdata /gdata

it mounts, but appears as an 18 GB partition with nothing in it. I can mount it from the client, but again, there is nothing in it.

Before the upgrade this was a 50 TB gluster volume. Was that volume information lost in the upgrade? The file structure appears intact on each brick.

Steve

________________________________
From: gluster-users-bounces at gluster.org [gluster-users-bounces at gluster.org] on behalf of Eco Willson [ewillson at redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:

> Type: Distribute

In order to have files replicate, you need a number of bricks that is a multiple of the replica count, and a replica count specified during volume creation. [...]

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
________________________________
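One hedged explanation for the empty 18 GB mount, consistent with the thread so far: the fstab entry for the brick has `noauto` (which is also why "mount -a" does nothing), so if glusterd came up while /data was not mounted, the brick path would sit on the small root filesystem instead of the 50 TB xfs volume. A quick way to check which filesystem actually backs the brick path; the fallback to `/` is only there so the sketch runs on machines without a /data:

```shell
brick=/data
# df reports the filesystem that actually backs the path; if the xfs
# brick is not mounted, the root filesystem shows through instead.
df -P "$brick" 2>/dev/null || df -P /
```

If the output shows the root device rather than /dev/mapper/the_raid-lv_data, mount the brick first and then restart the volume.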
Steve,

On 11/20/2012 11:09 AM, Steve Postma wrote:
> Hi Eco, thanks for your help.
>
> If I run on brick 1:
> mount -t glusterfs gluster-data:/gdata /gdata
> it mounts but appears as a 18 GB partition with nothing in it

To confirm, are the export directories mounted properly on all three servers? Does df -h show the expected directories on each server, and do they show the expected size? Does gluster volume info show the same output on all three servers?

> I can mount it from the client, but again, there is nothing in it.
>
> Before upgrade this was a 50 TB gluster volume. Was that volume information lost with upgrade?

Do you have the old vol files from before the upgrade? It would be good to see them to make sure the volume got recreated properly.

> The file structure appears intact on each brick.

As long as the file structure is intact, you will be able to recreate the volume, although it may require a potentially painful rsync in the worst case.

- Eco

> [rest of quoted message unchanged from above]
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
They do show the expected size. I have a backup of /etc/glusterd and /etc/glusterfs from before the upgrade. It's interesting that "gluster volume info" shows the correct path for each machine. These are the correct mountpoints on each machine, and from each machine I can see the files and structure.

[root at mseas-data data]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data

________________________________
From: gluster-users-bounces at gluster.org [gluster-users-bounces at gluster.org] on behalf of Eco Willson [ewillson at redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

On 11/20/2012 11:09 AM, Steve Postma wrote:
> Hi Eco, thanks for your help.
>
> If I run on brick 1:
> mount -t glusterfs gluster-data:/gdata /gdata
>
> it mounts but appears as an 18 GB partition with nothing in it

To confirm, are the export directories mounted properly on all three servers? Does df -h show the expected directories on each server, and do they show the expected size? Does gluster volume info show the same output on all three servers?

> I can mount it from the client, but again, there is nothing in it.
>
> Before the upgrade this was a 50 TB gluster volume. Was that volume information lost with the upgrade?

Do you have the old vol files from before the upgrade? It would be good to see them to make sure the volume got recreated properly.

> The file structure appears intact on each brick.

As long as the file structure is intact, you will be able to recreate the volume, although it may require a potentially painful rsync in the worst case.
- Eco

> Steve
>
> ________________________________
> From: gluster-users-bounces at gluster.org [gluster-users-bounces at gluster.org] on behalf of Eco Willson [ewillson at redhat.com]
> Sent: Tuesday, November 20, 2012 1:29 PM
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] FW: cant mount gluster volume
>
> Steve,
>
> The volume is a pure distribute:
>
>> Type: Distribute
>
> In order to have files replicate, you need
> 1) to have a number of bricks that is a multiple of the replica count,
> e.g., for your three node configuration, you would need two bricks per
> node to set up replica two. You could set up replica 3, but you will
> take a performance hit in doing so.
> 2) to add a replica count during the volume creation, e.g.
> `gluster volume create <vol name> replica 2 server1:/export server2:/export`
>
> From the volume info you provided, the export directories are different
> for all three nodes:
>
> Brick1: gluster-0-0:/mseas-data-0-0
> Brick2: gluster-0-1:/mseas-data-0-1
> Brick3: gluster-data:/data
>
> Which node are you trying to mount to /data? If it is not the
> gluster-data node, then it will fail if there is not a /data directory.
> In this case, it is a good thing, since mounting to /data on gluster-0-0
> or gluster-0-1 would not accomplish what you need.
> To clarify, there is a distinction to be made between the export volume
> mount and the gluster mount point. In this case, you are mounting the
> brick.
> In order to see all the files, you would need to mount the volume with
> the native client or NFS.
> For the native client:
> mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
> For NFS:
> mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>
>
> Thanks,
>
> Eco
>
> [...]

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
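Eco's replica rule earlier in the thread — the brick count must be a multiple of the replica count — can be sanity-checked before running `gluster volume create`. A small sketch; the six export paths are hypothetical (two bricks per node for replica 2), not paths that exist on this cluster:

```shell
# Replica rule from the thread: the number of bricks must be a multiple of
# the replica count. Replica 2 on a 3-node cluster means two bricks per node.
replica=2
bricks="gluster-0-0:/export/b1 gluster-0-0:/export/b2 \
gluster-0-1:/export/b1 gluster-0-1:/export/b2 \
gluster-data:/export/b1 gluster-data:/export/b2"

count=$(echo $bricks | wc -w | tr -d ' ')
if [ $((count % replica)) -eq 0 ]; then
    echo "OK: $count bricks, replica $replica"
    # On a real cluster the create would then be:
    # gluster volume create gdata-rep replica $replica $bricks
else
    echo "refusing: $count bricks is not a multiple of replica $replica" >&2
fi
```

This is only a pre-flight check; glusterd itself refuses a create whose brick list is not a multiple of the replica count, with a similar complaint.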