Hi Carlos,
Thanks for the advice. I will try those things today and let you know
the outcome.
Best Regards,
Dan
> Date: Fri, 7 Mar 2014 17:56:08 +0100
> From: Carlos Capriotti <capriotti.carlos at gmail.com>
> To: Daniel Baker <info at collisiondetection.biz>
> Cc: gluster-users <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] Testing Gluster 3.2.4 in VMware
>
> Hello, Daniel.
> I am also testing gluster on vmware; in my application, it will be a
> secondary datastore for VM images.
>
> So far, I've hit a couple of brick walls; for instance, VMware will not
> read volumes created as striped, or striped + replicated. It simply sits
> there, trying, for hours, without errors on either side.
>
> But your current configuration WILL work.
>
> As a suggestion, to begin your troubleshooting, try disabling the firewall
> and SELinux. This has nothing to do with your current problem, BUT it will
> matter in the near future. Once you are sure everything works, go back and
> re-enable / fine-tune them.
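>
> For example, on a CentOS/RHEL 6-style node (an assumption on my part;
> adjust for your distro) that would be something like:
>
> # setenforce 0            # set SELinux to permissive until reboot
> # service iptables stop   # stop the firewall for now
>
> and re-enable both once everything mounts fine.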
>
> Now to your problem...
>
> Your first syntax seems to be a bit off, unless it is a typo:
>
> sudo mount.glusterfs 192.168.100.170:gv0 /mnt/export
>
> You see, there is a slash missing after the colon. It should read:
>
> sudo mount.glusterfs 192.168.100.170:/gv0 /mnt/export
>
> For the second case, you did not post the error message, so I can only
> suggest you try copying/pasting this:
>
> sudo mount -t glusterfs 192.168.100.170:/gv0 /mnt/export
>
> Now, here is another trick: try mounting with NFS.
>
> First, make sure your NFS share is really being shared:
>
> # showmount -e 192.168.100.170
>
> Alternatively, if you are on one of the gluster servers, just for testing,
> you may try:
>
> # showmount -e localhost
>
> Make sure your gluster volume is REALLY called gv0.
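>
> You can double-check that on one of the gluster servers with:
>
> # gluster volume info gv0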
>
> Now you can try mounting with:
>
> sudo mount -t nfs 192.168.100.170:/gv0 /mnt/export
>
> Again, if you are on one of the servers, try
>
> sudo mount -t nfs localhost:/gv0 /mnt/export
>
> You might want to "sudo su" to run all the commands as root, without
> the hassle of sudoing each one.
>
> Give it a try. If NFS works, go for it; it is your only option for
> VMware/ESXi anyway.
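>
> And if you want the NFS mount on your client to survive reboots, something
> along these lines in /etc/fstab should do (gluster's built-in NFS server
> only speaks NFSv3, hence vers=3; the path and options are just an example):
>
> 192.168.100.170:/gv0  /mnt/export  nfs  defaults,_netdev,vers=3  0 0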
>
> There are a few more advanced steps on ESXi and on gluster, but let's
> get it to work first, right?
>
> Cheers,
>
> On Fri, Mar 7, 2014 at 9:15 AM, Daniel Baker <info at collisiondetection.biz> wrote:
>
>>
>> Hi,
>>
>> I have followed your tutorial to set up glusterfs 3.4.2 in vmware.
>>
>>
>> http://www.gluster.org/community/documentation/index.php/Getting_started_configure
>>
>> My gluster volume info is the same as this:
>>
>>
>> Volume Name: gv0
>> Type: Replicate
>> Volume ID: 8bc3e96b-a1b6-457d-8f7a-a91d1d4dc019
>> Status: Created
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: node01.yourdomain.net:/export/sdb1/brick
>> Brick2: node02.yourdomain.net:/export/sdb1/brick
>>
>> In order to test replication, I have installed the glusterfs-client on my
>> Ubuntu 12.04 laptop.
>>
>> I issue this command:
>>
>> sudo mount.glusterfs 192.168.100.170:gv0 /mnt/export
>>
>> but I receive this error :
>>
>> Usage: mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
>> Options:
>> man 8 mount.glusterfs
>>
>> To display the version number of the mount helper:
>> mount.glusterfs --version
>>
>>
>>
>> I have also tried this variant:
>>
>> # mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
>>
>>
>>
>> So how do I mount the volumes and test the replication? Your getting
>> started tutorial doesn't detail that.
>>
>> Thanks for your help
>>
>> Dan
>>
>>