Please share the tracebacks/errors from the logs (Master nodes):
/var/log/glusterfs/geo-replication/static/*.log
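For example, something like this should surface the relevant messages
(plain grep; adjust the pattern as needed):

    grep -iE "error|traceback" /var/log/glusterfs/geo-replication/static/*.log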
Regards,
Aravinda
On 10/06/2015 01:19 PM, Wade Fitzpatrick wrote:
> I am trying to set up geo-replication of a striped-replicate volume. I
> used https://github.com/aravindavk/georepsetup to configure the
> replication.
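> (If memory serves, the tool takes the master volume, slave host and
> slave volume, i.e. roughly: georepsetup static palace static)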
>
> root@james:~# gluster volume info
>
> Volume Name: static
> Type: Striped-Replicate
> Volume ID: 3f9f810d-a988-4914-a5ca-5bd7b251a273
> Status: Started
> Number of Bricks: 1 x 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: james:/data/gluster1/static/brick1
> Brick2: cupid:/data/gluster1/static/brick2
> Brick3: hilton:/data/gluster1/static/brick3
> Brick4: present:/data/gluster1/static/brick4
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> performance.readdir-ahead: on
>
> root@james:~# gluster volume geo-replication status
>
> MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                   SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> james          static        /data/gluster1/static/brick1    root          ssh://palace::static    N/A           Created    N/A             N/A
> cupid          static        /data/gluster1/static/brick2    root          ssh://palace::static    N/A           Created    N/A             N/A
> hilton         static        /data/gluster1/static/brick3    root          ssh://palace::static    N/A           Created    N/A             N/A
> present        static        /data/gluster1/static/brick4    root          ssh://palace::static    N/A           Created    N/A             N/A
>
> So of the 4 bricks, data is striped over brick1 and brick3, while
> brick1 and brick2 form one mirror pair and brick3 and brick4 form the
> other. Therefore I have no need to geo-replicate bricks 2 and 4.
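>
> (For reference, this layout corresponds to a create command along
> these lines, reconstructed from the volume info above rather than
> copied verbatim:
>
> gluster volume create static stripe 2 replica 2 \
>     james:/data/gluster1/static/brick1 cupid:/data/gluster1/static/brick2 \
>     hilton:/data/gluster1/static/brick3 present:/data/gluster1/static/brick4
>
> with adjacent bricks forming the replica pairs.)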
>
> At the other site, palace and madonna form a stripe volume (no
> replication):
>
> root@palace:~# gluster volume info
>
> Volume Name: static
> Type: Stripe
> Volume ID: 0e91c6f2-3499-4fc4-9630-9da8b7f57db5
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: palace:/data/gluster1/static/brick1
> Brick2: madonna:/data/gluster1/static/brick2
> Options Reconfigured:
> performance.readdir-ahead: on
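>
> (Again reconstructed rather than verbatim, this volume would come from
> roughly:
>
> gluster volume create static stripe 2 \
>     palace:/data/gluster1/static/brick1 madonna:/data/gluster1/static/brick2)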
>
> However, when I try to start geo-replication, it fails as below.
>
> root@james:~# gluster volume geo-replication static ssh://palace::static start
> Starting geo-replication session between static & ssh://palace::static has been successful
> root@james:~# gluster volume geo-replication status
>
> MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                   SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> james          static        /data/gluster1/static/brick1    root          ssh://palace::static    N/A           Faulty     N/A             N/A
> cupid          static        /data/gluster1/static/brick2    root          ssh://palace::static    N/A           Faulty     N/A             N/A
> hilton         static        /data/gluster1/static/brick3    root          ssh://palace::static    N/A           Faulty     N/A             N/A
> present        static        /data/gluster1/static/brick4    root          ssh://palace::static    N/A           Faulty     N/A             N/A
>
>
> What should I do to set this up properly so that
> james:/data/gluster1/static/brick1 gets replicated to
> palace:/data/gluster1/static/brick1 ; and
> hilton:/data/gluster1/static/brick3 gets replicated to
> madonna:/data/gluster1/static/brick2 ???
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users