Hi Krishna,
The glusterd log file would help here.
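As a starting point, on most packaged installs glusterd writes to the default log path below (an assumption; a custom build prefix or `--log-file` option changes it):

```shell
# Default glusterd log location on most distributions (assumption: default
# prefix; adjust if glusterfs was installed with a non-standard layout).
GLUSTERD_LOG=/var/log/glusterfs/glusterd.log
echo "glusterd log: ${GLUSTERD_LOG}"
# tail -n 200 "${GLUSTERD_LOG}"   # run on the affected node to capture recent errors
```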
Thanks,
Kotresh HR
On Thu, Sep 6, 2018 at 1:02 PM, Krishna Verma <kverma at cadence.com>
wrote:
> Hi All,
>
>
>
> I am getting an issue with a geo-replicated distributed gluster volume. The
> session status shows only one peer node instead of 2, and I am also not able
> to delete, start, or stop this session.
>
>
>
> geo-replication distributed gluster volume "glusterdist" status
>
> [root at gluster-poc-noida ~]# gluster volume status glusterdist
>
> Status of volume: glusterdist
>
> Gluster process                                     TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------------
> Brick gluster-poc-noida:/data/gluster-dist/distvol  49154     0          Y       23138
> Brick noi-poc-gluster:/data/gluster-dist/distvol    49154     0          Y       14637
>
> Task Status of Volume glusterdist
> ------------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> Geo-replication session status
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
>
> MASTER NODE      MASTER VOL   MASTER BRICK                SLAVE USER  SLAVE                        SLAVE NODE  STATUS   CRAWL STATUS  LAST_SYNCED
> -------------------------------------------------------------------------------------------------------------------------------------------------
> noi-poc-gluster  glusterdist  /data/gluster-dist/distvol  root        gluster-poc-sj::glusterdist  N/A         Stopped  N/A           N/A
>
> [root at gluster-poc-noida ~]#
>
>
>
> Can't stop/start/delete the session:
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist
> gluster-poc-sj::glusterdist stop
>
> Staging failed on localhost. Please check the log file for more details.
>
> geo-replication command failed
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist
> gluster-poc-sj::glusterdist stop force
>
> pid-file entry mising in config file and template config file.
>
> geo-replication command failed
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist
> gluster-poc-sj::glusterdist delete
>
> Staging failed on localhost. Please check the log file for more details.
>
> geo-replication command failed
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist
> gluster-poc-sj::glusterdist start
>
> Staging failed on localhost. Please check the log file for more details.
>
> geo-replication command failed
>
> [root at gluster-poc-noida ~]#
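The "pid-file entry mising" message from `stop force` suggests the session's gsyncd configuration has lost its pid-file entry. A minimal sketch to check for it, assuming the default /var/lib/glusterd layout and the usual `<mastervol>_<slavehost>_<slavevol>` session directory name (verify the actual path on your node before acting on this):

```shell
# Hedged sketch: look for the pid-file entry in the geo-rep session config.
# The directory name below is an assumption based on this master/slave pair.
CFG=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
if grep -q "pid-file" "$CFG" 2>/dev/null; then
    echo "pid-file entry present in $CFG"
else
    echo "pid-file entry missing, or $CFG is unreadable"
fi
```

Comparing that file against the template config shipped with glusterfs on the same node should show whether the entry was dropped from one or both.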
>
>
>
> gsyncd.log errors
>
> [2018-09-06 06:17:21.757195] I [monitor(monitor):269:monitor] Monitor: worker died before establishing connection    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:32.312093] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker    brick=/data/gluster-dist/distvol    slave_node=gluster-poc-sj
>
> [2018-09-06 06:17:32.441817] I [monitor(monitor):261:monitor] Monitor: Changelog Agent died, Aborting Worker    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:32.442193] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:43.1177] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker    brick=/data/gluster-dist/distvol    slave_node=gluster-poc-sj
>
> [2018-09-06 06:17:43.137794] I [monitor(monitor):261:monitor] Monitor: Changelog Agent died, Aborting Worker    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:43.138214] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:53.144072] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker    brick=/data/gluster-dist/distvol    slave_node=gluster-poc-sj
>
> [2018-09-06 06:17:53.276853] I [monitor(monitor):261:monitor] Monitor: Changelog Agent died, Aborting Worker    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:53.277327] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase    brick=/data/gluster-dist/distvol
>
>
>
> Could anyone please help?
>
>
>
> /Krishna
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
--
Thanks and Regards,
Kotresh H R