I ran this command: `ssh -p 2222 -i /var/lib/glusterd/geo-replication/secret.pem root@<slave node> gluster volume info --xml`
I am attaching the output.
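
In case it helps narrow things down, here is a minimal sketch (assuming Python 2.7, the interpreter gsyncd runs under) of the same fetch-and-parse step geo-rep performs: it runs `gluster volume info --xml` on the slave over SSH and feeds the raw output to ElementTree. A ParseError at line 1, column 0 usually means something other than the `<?xml ...>` header arrives first (a login banner, a warning, or an empty reply), so the sketch prints the first bytes to make that visible:

```
#!/usr/bin/env python2
# Minimal sketch (assumption: Python 2.7, same as gsyncd) of the fetch-and-parse
# step from syncdutils.Volinfo: run `gluster volume info --xml` on the slave
# over SSH and hand the raw bytes to ElementTree.
import subprocess
import xml.etree.ElementTree as XET

CMD = ["ssh", "-p", "2222",
       "-i", "/var/lib/glusterd/geo-replication/secret.pem",
       "root@172.16.201.35", "gluster", "volume", "info", "--xml"]

out = subprocess.check_output(CMD)
# Anything before '<?xml' here is what trips the parser at line 1, column 0.
print("first 80 bytes of output: %r" % out[:80])

try:
    root = XET.fromstring(out)
    print("parsed OK, volume count: %s" % root.findtext("volInfo/volumes/count"))
except XET.ParseError as err:
    print("ParseError: %s" % err)
```

If the slave's reply is not clean XML, running this from the same master node that runs the geo-rep monitor should reproduce the failure seen in the quoted traceback.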
On Mon, Mar 25, 2019 at 2:13 PM Aravinda <avishwan at redhat.com> wrote:
> Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
> root@<slavenode> gluster volume info --xml` and parsing its output.
> Please try to run the command from the same node and let us know the
> output.
>
>
> On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > Now the error is on the same line 860, as highlighted below:
> >
> > [2019-03-25 06:11:52.376238] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> > Traceback (most recent call last):
> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> >     func(args)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
> >     return monitor.monitor(local, remote)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
> >     return Monitor().multiplex(*distribute(local, remote))
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 386, in distribute
> >     svol = Volinfo(slave.volume, "localhost", prelude)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> >     vi = XET.fromstring(vix)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in XML
> >     parser.feed(text)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
> >     self._raiseerror(v)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
> >     raise err
> > ParseError: syntax error: line 1, column 0
> >
> >
> > On Mon, Mar 25, 2019 at 11:29 AM Maurya M <mauryam at gmail.com> wrote:
> > > Sorry, my bad: I had added that print line for debugging. I am using
> > > Gluster 4.1.7 and will remove the print line.
> > >
> > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda <avishwan at redhat.com> wrote:
> > > > The print statement below looks wrong; the latest GlusterFS code
> > > > doesn't have this print statement. Please let us know which version
> > > > of GlusterFS you are using.
> > > >
> > > >
> > > > ```
> > > > File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > >     print "debug varible " %vix
> > > > ```
> > > >
> > > > As a workaround, edit that file, comment out the print line, and
> > > > test the geo-rep config command.
> > > >
> > > >
> > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > > Hi Aravinda,
> > > > > I had created the session using `create ssh-port 2222 push-pem` and also ran:
> > > > >
> > > > > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 2222
> > > > >
> > > > > and I am hitting this message:
> > > > > geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > > geo-replication command failed
> > > > >
> > > > > Below is a snapshot of the status:
> > > > >
> > > > > [root@k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > >
> > > > > MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                                SLAVE USER    SLAVE                                                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> > > > > -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > 172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > 172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > 172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > >
> > > > > Any ideas? Where can I find the logs for the failed command? Checking
> > > > > gsyncd.log, the trace is as below:
> > > > >
> > > > > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> > > > > Traceback (most recent call last):
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > > > >     func(args)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
> > > > >     return monitor.monitor(local, remote)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
> > > > >     return Monitor().multiplex(*distribute(local, remote))
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in distribute
> > > > >     mvol = Volinfo(master.volume, master.host)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > > >     print "debug varible " %vix
> > > > > TypeError: not all arguments converted during string formatting
> > > > > [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > [2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > >
> > > > > regards,
> > > > > Maurya
> > > > >
> > > > > On Mon, Mar 25, 2019 at 9:08 AM Aravinda <avishwan at redhat.com> wrote:
> > > > > > Use `ssh-port <port>` while creating the Geo-rep session.
> > > > > >
> > > > > > Ref:
> > > > > > https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session
> > > > > >
> > > > > > And set the ssh-port option before start.
> > > > > >
> > > > > > ```
> > > > > > gluster volume geo-replication <master_volume> \
> > > > > >     [<slave_user>@]<slave_host>::<slave_volume> config ssh-port 2222
> > > > > > ```
> > > > > >
> --
> regards
> Aravinda
>
>
-------------- next part --------------
[root@k8s-agentpool1-24779565-1 geo-replication]# ssh -p 2222 -i /var/lib/glusterd/geo-replication/secret.pem root@172.16.201.35 gluster volume info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volInfo>
<volumes>
<volume>
<name>heketidbstorage</name>
<id>a24ac1cd-4ea2-423e-9d1c-ccfbe3e607f9</id>
<status>1</status>
<statusStr>Started</statusStr>
<snapshotCount>0</snapshotCount>
<brickCount>3</brickCount>
<distCount>3</distCount>
<stripeCount>1</stripeCount>
<replicaCount>3</replicaCount>
<arbiterCount>0</arbiterCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>2</type>
<typeStr>Replicate</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<brick
uuid="cf79df71-73b3-4513-bbd2-ef63d891b97e">172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_d5da700ee29c0328584170f31698b0df/brick<name>172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_d5da700ee29c0328584170f31698b0df/brick</name><hostUuid>cf79df71-73b3-4513-bbd2-ef63d891b97e</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="7eb0a2b6-c4d6-41b1-a346-0638dbf8d779">172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_3bac8cfd51861fce6cecdd0e8cc605cc/brick<name>172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_3bac8cfd51861fce6cecdd0e8cc605cc/brick</name><hostUuid>7eb0a2b6-c4d6-41b1-a346-0638dbf8d779</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="26a6272a-a4e6-4812-bbf4-c7e2b64508c1">172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_0fc1dde2b69bab57360b71db2fe79569/brick<name>172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_0fc1dde2b69bab57360b71db2fe79569/brick</name><hostUuid>26a6272a-a4e6-4812-bbf4-c7e2b64508c1</hostUuid><isArbiter>0</isArbiter></brick>
</bricks>
<optCount>3</optCount>
<options>
<option>
<name>transport.address-family</name>
<value>inet</value>
</option>
<option>
<name>nfs.disable</name>
<value>on</value>
</option>
<option>
<name>performance.client-io-threads</name>
<value>off</value>
</option>
</options>
</volume>
<volume>
<name>vol_a5aee81a873c043c99a938adcb5b5781</name>
<id>9e92ec96-2134-4163-8cba-010582433bbc</id>
<status>1</status>
<statusStr>Started</statusStr>
<snapshotCount>0</snapshotCount>
<brickCount>3</brickCount>
<distCount>3</distCount>
<stripeCount>1</stripeCount>
<replicaCount>3</replicaCount>
<arbiterCount>0</arbiterCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>2</type>
<typeStr>Replicate</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<brick
uuid="26a6272a-a4e6-4812-bbf4-c7e2b64508c1">172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_2b980230c37a1bd58c12fad92d43804d/brick<name>172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_2b980230c37a1bd58c12fad92d43804d/brick</name><hostUuid>26a6272a-a4e6-4812-bbf4-c7e2b64508c1</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="cf79df71-73b3-4513-bbd2-ef63d891b97e">172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_f805b8c7bead7189672415c048e82231/brick<name>172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_f805b8c7bead7189672415c048e82231/brick</name><hostUuid>cf79df71-73b3-4513-bbd2-ef63d891b97e</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="7eb0a2b6-c4d6-41b1-a346-0638dbf8d779">172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_6e7f6e47da96c305ece514f94caddeaf/brick<name>172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_6e7f6e47da96c305ece514f94caddeaf/brick</name><hostUuid>7eb0a2b6-c4d6-41b1-a346-0638dbf8d779</hostUuid><isArbiter>0</isArbiter></brick>
</bricks>
<optCount>3</optCount>
<options>
<option>
<name>performance.client-io-threads</name>
<value>off</value>
</option>
<option>
<name>nfs.disable</name>
<value>on</value>
</option>
<option>
<name>transport.address-family</name>
<value>inet</value>
</option>
</options>
</volume>
<volume>
<name>vol_a7568f9c87d4aaf20bc56d8909390e24</name>
<id>310b2b01-cd9a-4c1e-810b-700d78158a82</id>
<status>1</status>
<statusStr>Started</statusStr>
<snapshotCount>0</snapshotCount>
<brickCount>3</brickCount>
<distCount>3</distCount>
<stripeCount>1</stripeCount>
<replicaCount>3</replicaCount>
<arbiterCount>0</arbiterCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>2</type>
<typeStr>Replicate</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<brick
uuid="26a6272a-a4e6-4812-bbf4-c7e2b64508c1">172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_152482ad653bf54a2c2a9978d8e5f65a/brick<name>172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_152482ad653bf54a2c2a9978d8e5f65a/brick</name><hostUuid>26a6272a-a4e6-4812-bbf4-c7e2b64508c1</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="cf79df71-73b3-4513-bbd2-ef63d891b97e">172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_26b82d524b427664ba3bb43de0d441aa/brick<name>172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_26b82d524b427664ba3bb43de0d441aa/brick</name><hostUuid>cf79df71-73b3-4513-bbd2-ef63d891b97e</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="7eb0a2b6-c4d6-41b1-a346-0638dbf8d779">172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_893bb9c4942f9e765e83b9cb5d81b6ce/brick<name>172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_893bb9c4942f9e765e83b9cb5d81b6ce/brick</name><hostUuid>7eb0a2b6-c4d6-41b1-a346-0638dbf8d779</hostUuid><isArbiter>0</isArbiter></brick>
</bricks>
<optCount>3</optCount>
<options>
<option>
<name>transport.address-family</name>
<value>inet</value>
</option>
<option>
<name>nfs.disable</name>
<value>on</value>
</option>
<option>
<name>performance.client-io-threads</name>
<value>off</value>
</option>
</options>
</volume>
<volume>
<name>vol_e783a730578e45ed9d51b9a80df6c33f</name>
<id>301b00a8-162a-4d90-a978-4c7f7b048fec</id>
<status>1</status>
<statusStr>Started</statusStr>
<snapshotCount>0</snapshotCount>
<brickCount>3</brickCount>
<distCount>3</distCount>
<stripeCount>1</stripeCount>
<replicaCount>3</replicaCount>
<arbiterCount>0</arbiterCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>2</type>
<typeStr>Replicate</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<brick
uuid="cf79df71-73b3-4513-bbd2-ef63d891b97e">172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_26c7eac980667092e6f84dc1398a8337/brick<name>172.16.201.4:/var/lib/heketi/mounts/vg_6c5d501fdff3889cd28fc85f2b373e85/brick_26c7eac980667092e6f84dc1398a8337/brick</name><hostUuid>cf79df71-73b3-4513-bbd2-ef63d891b97e</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="7eb0a2b6-c4d6-41b1-a346-0638dbf8d779">172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_0c54a3aecaf24de1b2a5a1337145bacf/brick<name>172.16.201.35:/var/lib/heketi/mounts/vg_971b9ef8e2977cca69bebd5ad82f604c/brick_0c54a3aecaf24de1b2a5a1337145bacf/brick</name><hostUuid>7eb0a2b6-c4d6-41b1-a346-0638dbf8d779</hostUuid><isArbiter>0</isArbiter></brick>
<brick
uuid="26a6272a-a4e6-4812-bbf4-c7e2b64508c1">172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_cc777eaebd1c3a64e8483ee88bfa6b43/brick<name>172.16.201.66:/var/lib/heketi/mounts/vg_be107dc7d94595ff30b5b4b6cceab663/brick_cc777eaebd1c3a64e8483ee88bfa6b43/brick</name><hostUuid>26a6272a-a4e6-4812-bbf4-c7e2b64508c1</hostUuid><isArbiter>0</isArbiter></brick>
</bricks>
<optCount>3</optCount>
<options>
<option>
<name>transport.address-family</name>
<value>inet</value>
</option>
<option>
<name>nfs.disable</name>
<value>on</value>
</option>
<option>
<name>performance.client-io-threads</name>
<value>off</value>
</option>
</options>
</volume>
<count>4</count>
</volumes>
</volInfo>
</cliOutput>