Hi Aravinda,
I patched my setup with your fix and re-ran the setup, but this time I am
getting a different error: it failed to commit the ssh-port on the other
two nodes of the master cluster, so I manually copied:

[vars]
ssh-port = 2222

into gsyncd.conf on those nodes.
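To confirm the manual edit took effect, I read the value back with the
config command (same form as used later in this thread for config-set; if
I read the CLI right, omitting the value prints the current setting):

```
gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf \
    172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port
```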
The status now reported is shown below. Any ideas how to troubleshoot this?
MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                                SLAVE USER    SLAVE                                                  SLAVE NODE      STATUS             CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    172.16.201.4    Passive            N/A             N/A
172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A             Faulty             N/A             N/A
172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A             Initializing...    N/A             N/A
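
To rule out bad XML from the slave, I am also re-checking the parse step
from the earlier ParseError with the rough sketch below. This only mimics
what syncdutils.py does around line 860 (it is not the actual gsyncd code),
using my hosts and the ssh-port from above:

```
import subprocess
import xml.etree.ElementTree as XET

# Fetch volume info over the geo-rep ssh port, the same command the
# monitor runs against the slave node.
out = subprocess.check_output([
    "ssh", "-p", "2222",
    "-i", "/var/lib/glusterd/geo-replication/secret.pem",
    "root@172.16.201.35",
    "gluster", "volume", "info", "--xml",
])

try:
    XET.fromstring(out)
    print("slave returned valid XML")
except XET.ParseError as err:
    # Empty output or a stray banner/error string fails here with
    # "syntax error: line 1, column 0", as in the traceback quoted below.
    print("not XML: %s" % err)
    print(repr(out[:200]))
```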
On Tue, Mar 26, 2019 at 1:40 PM Aravinda <avishwan at redhat.com> wrote:
> I got a chance to investigate this issue further, identified an issue
> with Geo-replication config set, and sent a patch to fix it.
>
> BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666
> Patch: https://review.gluster.org/22418
>
> On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:
> > ran this command: ssh -p 2222 -i /var/lib/glusterd/geo-replication/secret.pem
> > root@<slave node> gluster volume info --xml
> >
> > attaching the output.
> >
> >
> >
> > On Mon, Mar 25, 2019 at 2:13 PM Aravinda <avishwan at redhat.com> wrote:
> > > Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
> > > root@<slavenode> gluster volume info --xml` and parsing its output.
> > > Please try to run the command from the same node and let us know the
> > > output.
> > >
> > >
> > > On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > > > Now the error is on the same line 860, as highlighted below:
> > > >
> > > > [2019-03-25 06:11:52.376238] E
> > > > [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> > > > Traceback (most recent call last):
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > > >     func(args)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
> > > >     return monitor.monitor(local, remote)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
> > > >     return Monitor().multiplex(*distribute(local, remote))
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 386, in distribute
> > > >     svol = Volinfo(slave.volume, "localhost", prelude)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > >     vi = XET.fromstring(vix)
> > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in XML
> > > >     parser.feed(text)
> > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
> > > >     self._raiseerror(v)
> > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
> > > >     raise err
> > > > ParseError: syntax error: line 1, column 0
> > > >
> > > >
> > > > On Mon, Mar 25, 2019 at 11:29 AM Maurya M <mauryam at gmail.com> wrote:
> > > > > Sorry, my bad: I had put the print line in to debug. I am using
> > > > > gluster 4.1.7; I will remove the print line.
> > > > >
> > > > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda <avishwan at redhat.com>
> > > > > wrote:
> > > > > > The print statement below looks wrong; the latest Glusterfs code
> > > > > > doesn't have it. Please let us know which version of glusterfs
> > > > > > you are using.
> > > > > >
> > > > > > ```
> > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > > > >     print "debug varible " %vix
> > > > > > ```
> > > > > >
> > > > > > As a workaround, edit that file, comment out the print line, and
> > > > > > test the geo-rep config command.
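> > > > > >
> > > > > > For context, the failure itself is plain Python 2 string
> > > > > > formatting, not geo-rep specific: "%" with a format string that
> > > > > > has no conversion specifier raises exactly the TypeError from
> > > > > > your log. A minimal repro (not gsyncd code):
> > > > > >
> > > > > > ```
> > > > > > # No %s in the format string, so the operand is left over and
> > > > > > # this raises:
> > > > > > # TypeError: not all arguments converted during string formatting
> > > > > > "debug varible " % "any value"
> > > > > > ```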
> > > > > >
> > > > > >
> > > > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > > > > Hi Aravinda,
> > > > > > > I had the session created using: create ssh-port 2222 push-pem,
> > > > > > > and also ran:
> > > > > > >
> > > > > > > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 2222
> > > > > > >
> > > > > > > hitting this message:
> > > > > > > geo-replication config-set failed for
> > > > > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > > > > geo-replication command failed
> > > > > > >
> > > > > > > Below is a snapshot of the status:
> > > > > > >
> > > > > > > [root at k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > > > >
> > > > > > > MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                                SLAVE USER    SLAVE                                                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> > > > > > > ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > > 172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > > > 172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > > > 172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > > >
> > > > > > > Any ideas? Where can I find logs for the failed commands? I
> > > > > > > checked in gsyncd.log; the trace is as below:
> > > > > > >
> > > > > > > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> > > > > > > Traceback (most recent call last):
> > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > > > > > >     func(args)
> > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
> > > > > > >     return monitor.monitor(local, remote)
> > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
> > > > > > >     return Monitor().multiplex(*distribute(local, remote))
> > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in distribute
> > > > > > >     mvol = Volinfo(master.volume, master.host)
> > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > > > > >     print "debug varible " %vix
> > > > > > > TypeError: not all arguments converted during string formatting
> > > > > > > [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > [2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > >
> > > > > > > regards,
> > > > > > > Maurya
> > > > > > >
> > > > > > > On Mon, Mar 25, 2019 at 9:08 AM Aravinda <avishwan at redhat.com> wrote:
> > > > > > > > Use `ssh-port <port>` while creating the Geo-rep session.
> > > > > > > >
> > > > > > > > Ref:
> > > > > > > > https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session
> > > > > > > >
> > > > > > > > And set the ssh-port option before starting the session:
> > > > > > > >
> > > > > > > > ```
> > > > > > > > gluster volume geo-replication <master_volume> \
> > > > > > > >     [<slave_user>@]<slave_host>::<slave_volume> config ssh-port 2222
> > > > > > > > ```
> > > > > > > >
> --
> regards
> Aravinda
>
>