Apologies for sending so many messages about this! I think I may be running
into this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1105283
Would someone be so kind as to let me know which symlinks are missing when
this bug manifests, so that I can create them?
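In case it helps anyone else hitting this, here is the kind of thing I would
try in the meantime. This is guesswork, not the list from the bug report: it
assumes the problem is a source install under /usr/local while geo-rep looks
in the packaged locations.

```shell
# Hypothetical workaround sketch: link source-install binaries into the
# paths a packaged install would provide. Adjust to whatever files
# geo-rep actually reports as missing.
ln -s /usr/local/sbin/gluster /usr/sbin/gluster
ln -s /usr/local/sbin/glusterfsd /usr/sbin/glusterfsd
ln -s /usr/local/libexec/glusterfs /usr/libexec/glusterfs
```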
Thank you,
Dave
On Sun, Dec 7, 2014 at 11:01 AM, David Gibbons <david.c.gibbons at gmail.com>
wrote:
> Ok,
>
> I was able to get geo-replication configured by changing
> /usr/local/libexec/glusterfs/gverify.sh to use ssh to access the local
> machine instead of invoking bash -c directly. I then found that the hook
> script for geo-replication was missing, so I copied it over manually. I
> now have what appears to be a "configured" geo-rep setup:
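For the record, the change was along these lines. This is an illustrative
sketch, not the actual gverify.sh diff; master_cmd and output_file stand in
for whatever the script builds at that point.

```shell
# Before (sketch): run the master-side check in a plain non-login shell,
# which inherits glusterd's (possibly minimal) PATH:
#   bash -c "$master_cmd" > "$output_file"
# After (sketch): round-trip through ssh to localhost, so the command
# runs under a login shell with a normal PATH:
ssh -oStrictHostKeyChecking=no localhost "$master_cmd" > "$output_file"
```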
>
>> # gluster volume geo-replication shares gfs-a-bkp::bkpshares status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK                     SLAVE                   STATUS         CHECKPOINT STATUS    CRAWL STATUS
>> --------------------------------------------------------------------------------------------------------------------------------------
>> gfs-a-3        shares        /mnt/a-3-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-3        shares        /mnt/a-3-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-3        shares        /mnt/a-3-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-3        shares        /mnt/a-3-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-2        shares        /mnt/a-2-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-2        shares        /mnt/a-2-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-2        shares        /mnt/a-2-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-2        shares        /mnt/a-2-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-4        shares        /mnt/a-4-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-4        shares        /mnt/a-4-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-4        shares        /mnt/a-4-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-4        shares        /mnt/a-4-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-1        shares        /mnt/a-1-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-1        shares        /mnt/a-1-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-1        shares        /mnt/a-1-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>> gfs-a-1        shares        /mnt/a-1-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>>
> So that's a step in the right direction (and I can upload a patch for
> gverify to a bugzilla). However, gverify *should* have worked with bash -c,
> and I was not able to figure out why it didn't, other than that it didn't
> seem able to find some programs. I'm thinking that maybe the PATH variable
> is wrong for Gluster, and that's why gverify didn't work out of the box.
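One generic way to test that theory (nothing gluster-specific in it) is to
compare the PATH your shell has with the PATH the running glusterd daemon
inherited.

```shell
# PATH as seen by the current shell:
echo "$PATH"
# PATH as inherited by the running glusterd process. /proc/<pid>/environ
# is NUL-separated, so translate NULs to newlines before filtering:
tr '\0' '\n' < /proc/"$(pidof glusterd)"/environ | grep '^PATH='
```

If the two differ, anything glusterd spawns (gverify.sh included) searches
the daemon's shorter PATH, not yours.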
>
> When I attempt to start geo-rep now, I get the following in the geo-rep
> log:
>
>> [2014-12-07 10:52:40.893594] E [syncdutils(monitor):218:log_raise_exception] <top>: execution of "gluster" failed with ENOENT (No such file or directory)
>> [2014-12-07 10:52:40.893886] I [syncdutils(monitor):192:finalize] <top>: exiting.
>
>
> Which seems to suggest that gluster isn't running with the same PATH
> variable as my console session. Is this possible? I know I'm grasping :).
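If that is what's happening, it may be possible to point geo-rep at the
binary explicitly instead of relying on PATH. I believe geo-rep has a
gluster-command-dir config option for this, but I'm not certain it exists in
3.5.3, so treat this as a sketch (the /usr/local/sbin path is an assumption
about a source install):

```shell
gluster volume geo-replication shares gfs-a-bkp::bkpshares \
    config gluster-command-dir /usr/local/sbin/
```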
>
> Any nudge in the right direction would be very much appreciated!
>
> Cheers,
> Dave
>
>
> On Sat, Dec 6, 2014 at 10:06 AM, David Gibbons <david.c.gibbons at gmail.com>
> wrote:
>
>> Good Morning,
>>
>> I am having some trouble getting geo-replication started on a 3.5.3
>> volume.
>>
>> I have verified that password-less SSH is functional in both directions
>> between the backup gluster server and all nodes in the production
>> cluster. I have verified that all nodes in the production and backup
>> clusters are running the same version of gluster, and that name
>> resolution works in both directions.
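A scripted version of those checks, for anyone following along. This is a
sketch; root@gfs-a-bkp is an assumption about the account in use.

```shell
# BatchMode makes ssh fail immediately instead of prompting, so a broken
# key setup shows up as an error rather than a password prompt:
ssh -oBatchMode=yes root@gfs-a-bkp 'gluster --version | head -n1'
# Compare with the local version string:
gluster --version | head -n1
```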
>>
>> When I attempt to start geo-replication with this command:
>>
>>> gluster volume geo-replication shares gfs-a-bkp::bkpshares create push-pem
>>
>> I end up with the following in the logs:
>>
>>> [2014-12-06 15:02:50.284426] E [glusterd-geo-rep.c:1889:glusterd_verify_slave] 0-: Not a valid slave
>>> [2014-12-06 15:02:50.284495] E [glusterd-geo-rep.c:2106:glusterd_op_stage_gsync_create] 0-: gfs-a-bkp::bkpshares is not a valid slave volume. Error: Unable to fetch master volume details. Please check the master cluster and master volume.
>>> [2014-12-06 15:02:50.284509] E [glusterd-syncop.c:912:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost : Unable to fetch master volume details. Please check the master cluster and master volume.
>>
>>
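One way to see exactly why the slave check fails is to run the verifier
script by hand with tracing. The argument list below is a guess; the real
signature should be checked at the top of gverify.sh for your build.

```shell
# Hypothetical invocation: master volume, slave host, slave volume, log file.
bash -x /usr/local/libexec/glusterfs/gverify.sh \
    shares gfs-a-bkp bkpshares /tmp/gverify.log
```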
>> Would someone be so kind as to point me in the right direction?
>>
>> Cheers,
>> Dave
>>
>
>