for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data

Which node are you trying to mount to /data? If it is not the
gluster-data node, it will fail if there is no /data directory.
In this case, that is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.

To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick. In order to see all the files, you would need to mount the volume
with the native client or with NFS.

For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>

For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>

Thanks,

Eco

On 11/20/2012 09:42 AM, Steve Postma wrote:

I have a 3-node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a Rocks 6.2 install to the cluster. I was able to overcome those issues and mount the export on my node. Thanks to all for your help.

However, I can only view the portion of files that is directly stored on the one brick in the cluster.
The other bricks do not seem to be replicating, though gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data

The brick we are attaching to has this in the fstab file:
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0

but "mount -a" does not appear to do anything.
I have to run "mount -t xfs /dev/mapper/the_raid-lv_data /data"
manually to mount it.

Any help with troubleshooting why we are only seeing data from 1 brick of 3 would be appreciated,
Thanks,
Steve Postma

________________________________
From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users@gluster.org
Subject: can't mount gluster volume

I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one of the actual machines in the cluster to itself, as well as from various other clients.
They all seem to be failing in the same part of the process.

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
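On the "mount -a" question in the thread: per mount(8), the "noauto" option in the fourth fstab field tells "mount -a" to skip that entry, which is why the brick has to be mounted explicitly. A minimal sketch of that check (the fstab line is the one quoted above; the parsing is illustrative, not part of any Gluster tooling):

```shell
# fstab line from the thread; "noauto" in the options field (4th) means
# "mount -a" skips this entry, so the brick must be mounted by hand.
fstab_line='/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0'

# Pull out the mount options and test for "noauto".
opts=$(printf '%s\n' "$fstab_line" | awk '{print $4}')
case ",$opts," in
  *,noauto,*) echo "mount -a will skip /data; mount it explicitly" ;;
  *)          echo "mount -a will mount /data" ;;
esac
# prints: mount -a will skip /data; mount it explicitly
```

With the entry present in /etc/fstab, a plain "mount /data" also works, since mount(8) looks up the device, filesystem type, and options from fstab for a bare mount-point argument.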