Funny things going on...
Let's start from the beginning:
I am not familiar with Ubuntu's way of starting and stopping a service, but
what you did on iptables was to add a line that, in theory, allows any
connection to the server.
Just in case, really, REALLY disable iptables.
On a Red Hat-based environment, you'd do something like:
service iptables stop
chkconfig iptables off
If you can do that, it can help.
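On Ubuntu, if ufw is what manages the firewall, the rough equivalent
(untested on my side) would be:
sudo ufw disable
sudo iptables -F
The second command just flushes any rules that are still loaded.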
Now, on the UBUNTU SERVER running gluster, try:
mkdir -p /mnt/glustervol0
mount -t nfs localhost:/gv0 /mnt/glustervol0
This should work.
Next, still on your ubuntu server, do:
df -h
(just to make sure your volume is there)
Let's do this first, to see if gluster and the server are behaving the way
they should.
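If that local nfs mount fails, also check that gluster's built-in NFS server
is actually up for the volume; something like:
gluster volume status gv0
should show an "NFS Server on localhost" line marked Online.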
KR,
Carlos
On Tue, Mar 11, 2014 at 8:29 AM, Daniel Baker
<info at collisiondetection.biz> wrote:
> Dear Carlos
>
> I opened up the glusterfs server's firewall to accept all connections
> from my computer:
>
> iptables -I INPUT -p all -s 192.168.100.209 -j ACCEPT
>
> And on my computer to accept all connections from the glusterfs server.
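> (i.e. the mirror rule on my computer, something like
> "iptables -I INPUT -p all -s 192.168.100.170 -j ACCEPT")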
>
> When I try showmount -e localhost
>
> Export list for localhost:
> /gv0 *
>
>
> When I try the same command from my computer I get:
>
>
> showmount -e 192.168.100.170
> Export list for 192.168.100.170:
> /gv0 *
>
>
> When I try from my computer:
>
> sudo mount -t nfs 192.168.100.170:/gv0 /mnt/export
>
> mount.nfs: mount point /mnt/export does not exist
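> (presumably the mount point needs to be created first, with something like
> "sudo mkdir -p /mnt/export" ?)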
>
> And when I try from my computer:
>
> sudo mount -t glusterfs 192.168.100.170:/gv0 /mnt/export
> Usage: mount.glusterfs <volumeserver>:<volumeid/volumeport> -o
> <options> <mountpoint>
> Options:
> man 8 mount.glusterfs
>
> To display the version number of the mount helper:
> mount.glusterfs --version
>
>
> So it looks like the volume is there but it doesn't see the mount point?
>
> Then I changed tack. I found this tutorial:
>
>
> http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers
>
>
> I issued this command on my client computer:
>
> mkdir /mnt/glusterfs
>
> mount.glusterfs 192.168.100.170/gv0 /mnt/glusterfs
>
>
> This time I did not receive any messages about how to use the gluster
> commands. So it looked ok.
>
> However when I issue:
>
> mount
>
> I get this:
>
> /dev/sda2 451G 421G 7.2G 99% /
> udev 3.8G 4.0K 3.8G 1% /dev
> tmpfs 1.6G 1008K 1.6G 1% /run
> none 5.0M 0 5.0M 0% /run/lock
> none 3.8G 748K 3.8G 1% /run/shm
> cgroup 3.8G 0 3.8G 0% /sys/fs/cgroup
> /dev/sda1 93M 2.1M 91M 3% /boot/efi
> /home/kam270/.Private 451G 421G 7.2G 99% /home/kam270
>
> No mount from the server is shown.
>
> Maybe I need to try adding hostnames in my /etc/hosts ?
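> (e.g. a line like "192.168.100.170  node01.yourdomain.net", assuming that
> is the right hostname for that address ?)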
>
>
> Thanks,
>
> Dan
>
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Fri, 7 Mar 2014 17:56:08 +0100
> > From: Carlos Capriotti <capriotti.carlos at gmail.com>
> > To: Daniel Baker <info at collisiondetection.biz>
> > Cc: gluster-users <gluster-users at gluster.org>
> > Subject: Re: [Gluster-users] Testing Gluster 3.2.4 in VMware
> >
> > Hello, Daniel.
> > I am also testing gluster on VMware; in my application, it will be a
> > secondary datastore for VM images.
> >
> > So far, I've hit a couple of brick walls, like, for instance, VMware not
> > reading volumes created as striped, or striped + replicated. It simply
> > sits there, trying, for hours, without errors on either side.
> >
> > But your current configuration WILL work.
> >
> > As a suggestion, to begin your troubleshooting, try disabling the
> > firewall and SELinux. Nothing to do with your current problem, BUT it
> > will matter in the near future. After you are sure all works, go back
> > and re-enable/fine-tune them.
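> >
> > (On a Red Hat-style box, a quick temporary way to do that would be
> > something like "setenforce 0" for SELinux plus "service iptables stop"
> > for the firewall; just remember to re-enable both afterwards.)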
> >
> > Now to your problem...
> >
> > Your first syntax seems to be a bit off, unless it is a typo:
> >
> > sudo mount.glusterfs 192.168.100.170:gv0 /mnt/export
> >
> > You see, there is a slash missing after the colon. It should read:
> >
> > sudo mount.glusterfs 192.168.100.170:/gv0 /mnt/export
> >
> > For the second case, you did not post the error message, so I can only
> > suggest you try copying/pasting this:
> >
> > sudo mount -t glusterfs 192.168.100.170:/gv0 /mnt/export
> >
> > Now, here is another trick: try mounting with NFS.
> >
> > First, make sure your NFS share is really being shared:
> >
> > # showmount -e 192.168.100.170
> >
> > Alternatively, if you are on one of the gluster servers, just for
> > testing, you may try:
> >
> > # showmount -e localhost
> >
> > Make sure your gluster volume is REALLY called gv0.
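> >
> > To double-check, running "gluster volume info" on one of the servers will
> > list the volumes and their exact names.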
> >
> > Now you can try mounting with:
> >
> > sudo mount -t nfs 192.168.100.170:/gv0 /mnt/export
> >
> > Again, if you are on one of the servers, try
> >
> > sudo mount -t nfs localhost:/gv0 /mnt/export
> >
> > You might want to "sudo su" to run all commands as root, without the
> > hassle of sudoing everything.
> >
> > Give it a try. If NFS works, go for it; it is your only option for
> > VMware/ESXi anyway.
> >
> > There are a few more advanced steps on ESXi and on gluster, but let's
> > get it to work first, right?
> >
> > Cheers,
> >
> > On Fri, Mar 7, 2014 at 9:15 AM, Daniel Baker
> > <info at collisiondetection.biz> wrote:
> >
> >>
> >> Hi,
> >>
> >> I have followed your tutorial to set up glusterfs 3.4.2 in vmware.
> >>
> >>
> >>
> >> http://www.gluster.org/community/documentation/index.php/Getting_started_configure
> >>
> >> My gluster volume info is the same as this:
> >>
> >>
> >> Volume Name: gv0
> >> Type: Replicate
> >> Volume ID: 8bc3e96b-a1b6-457d-8f7a-a91d1d4dc019
> >> Status: Created
> >> Number of Bricks: 1 x 2 = 2
> >> Transport-type: tcp
> >> Bricks:
> >> Brick1: node01.yourdomain.net:/export/sdb1/brick
> >> Brick2: node02.yourdomain.net:/export/sdb1/brick
> >>
> >> In order to test replication I have installed the glusterfs-client on
> >> my Ubuntu 12.04 laptop.
> >>
> >> I issue this command:
> >>
> >> sudo mount.glusterfs 192.168.100.170:gv0 /mnt/export
> >>
> >> but I receive this error :
> >>
> >> Usage: mount.glusterfs <volumeserver>:<volumeid/volumeport> -o
> >> <options> <mountpoint>
> >> Options:
> >> man 8 mount.glusterfs
> >>
> >> To display the version number of the mount helper:
> >> mount.glusterfs --version
> >>
> >>
> >>
> >> I have also tried this variant :
> >>
> >> # mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
> >>
> >>
> >>
> >> So how do I mount the volumes and test the replication? Your getting
> >> started tutorial doesn't detail that.
> >>
> >> Thanks for your help
> >>
> >> Dan
> >>
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users at gluster.org
> >> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >>
> >
> >
> > ------------------------------
> >
> > Message: 5
> > Date: Fri, 7 Mar 2014 19:05:24 +0000
> > From: Justin Clift <justin at gluster.org>
> > To: gluster-users at gluster.org, <gluster-devel at nongnu.org>
> > Subject: Re: [Gluster-users] [Gluster-devel] Is there demand for
> > geo-replication on RHEL/CentOS/SL 5.x?
> >
> > On 04/03/2014, at 1:35 PM, Justin Clift wrote:
> >> Hi all,
> >>
> >> Is anyone interested in having geo-replication work on RHEL/CentOS/SL
> >> 5.x?
> >
> >
> > Seems pretty clear there's no demand for geo-replication on EL5,
> > so we'll disable the rpm building of it.
> >
> > Patch to do the disabling is up for review:
> >
> > http://review.gluster.org/#/c/7210/
> >
> > If anyone's got the time to do code review of it, please do (it's
> > a simple one). :)
> >
> > Regards and best wishes,
> >
> > Justin Clift
> >
> > --
> > Open Source and Standards @ Red Hat
> >
> > twitter.com/realjustinclift
> >
> >
> >
> > ------------------------------
> >
> > Message: 6
> > Date: Fri, 7 Mar 2014 11:47:06 -0800
> > From: Justin Dossey <jbd at podomatic.com>
> > To: gluster-users <gluster-users at gluster.org>
> > Subject: [Gluster-users] DNS resolution of gluster servers from
> > client?
> >
> > While testing rolling upgrades from 3.3 to 3.4, I ran into the
> > "Transport Endpoint is not connected" issue on my test client (running
> > 3.3) after rebooting two of my four test GlusterFS 3.4 servers
> > (distributed-replicated-2).
> >
> > Unmounting and remounting the volume was the only way I could get the
> > error to go away.
> >
> > As the nodes in question were actually up at the time I got the error,
> > and waiting did not help, I checked the client logs and found this:
> >
> > [2014-03-04 23:19:26.124162] E [dht-common.c:1374:dht_lookup]
> > 0-TEST1-dht: Failed to get hashed subvol for /
> > [2014-03-04 23:19:26.124434] E [dht-common.c:1374:dht_lookup]
> > 0-TEST1-dht: Failed to get hashed subvol for /
> > [2014-03-04 23:19:27.626845] I [afr-common.c:3843:afr_local_init]
> > 0-TEST1-replicate-0: no subvolumes up
> > [2014-03-04 23:19:27.626928] W [fuse-bridge.c:2525:fuse_statfs_cbk]
> > 0-glusterfs-fuse: 77: ERR => -1 (Transport endpoint is not connected)
> > [2014-03-04 23:19:27.857455] E [common-utils.c:125:gf_resolve_ip6]
> > 0-resolver: getaddrinfo failed (No address associated with hostname)
> > [2014-03-04 23:19:27.857507] E
> > [name.c:243:af_inet_client_get_remote_sockaddr] 0-TEST1-client-0: DNS
> > resolution failed on host glustertest1
> > [2014-03-04 23:19:28.047913] E [common-utils.c:125:gf_resolve_ip6]
> > 0-resolver: getaddrinfo failed (No address associated with hostname)
> > [2014-03-04 23:19:28.047963] E
> > [name.c:243:af_inet_client_get_remote_sockaddr] 0-TEST1-client-1: DNS
> > resolution failed on host glustertest2
> >
> > These log messages are interesting because although the servers in
> > question (glustertest{1,2,3,4}) are not in DNS, they *are* in the
> > /etc/hosts files on all of the hosts in question.
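> >
> > (A quick way to check what getaddrinfo actually resolves on a client,
> > using the same hostnames, is "getent hosts glustertest1"; it should print
> > the /etc/hosts entry if NSS resolution is working.)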
> >
> > Is it a bug that the client requires that all the GlusterFS servers be
> > in DNS?
> >
> >
>