hi Aravinda,
I had created the session using: create ssh-port 2222 push-pem, and also ran:

gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf \
    172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f \
    config ssh-port 2222

but am hitting this message:

geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
geo-replication command failed

Below is a snapshot of the status:
[root@k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status

MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                                SLAVE USER    SLAVE                                                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
Any ideas? Where can I find logs for the failed command? Checking gsyncd.log, the trace is below:
[2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
    func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
    return monitor.monitor(local, remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
    return Monitor().multiplex(*distribute(local, remote))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in distribute
    mvol = Volinfo(master.volume, master.host)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
    print "debug varible " %vix
TypeError: not all arguments converted during string formatting
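
The last frame is the actual failure: a leftover Python 2 debug line applies % to a string that has no format specifier, so Python raises "not all arguments converted during string formatting" before the monitor can start. A minimal way to confirm it on the affected node (the grep target is the line quoted above; the corrected print is my suggestion, not shipped code):

```
# Locate the stray debug line (line 860 per the traceback):
grep -n 'debug varible' /usr/libexec/glusterfs/python/syncdaemon/syncdutils.py

# Broken:  print "debug varible " %vix    <- no %s, so the % fails
# Fixed:   print("debug variable %s" % vix)
# Simply deleting the line should also let the monitor proceed.
```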
[2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
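
(For reference, the failed CLI commands themselves log to cli.log and glusterd.log, while the per-session worker logs, i.e. the gsyncd.log quoted above, sit under the session directory. A quick way to follow both, assuming default log locations:

```
# master side: per-session geo-rep log (dir name is <mastervol>_<slavehost>_<slavevol>)
tail -f /var/log/glusterfs/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.log

# CLI and management-daemon logs, for the failed config-set itself
tail -f /var/log/glusterfs/cli.log /var/log/glusterfs/glusterd.log
```
)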
regards,
Maurya
On Mon, Mar 25, 2019 at 9:08 AM Aravinda <avishwan@redhat.com> wrote:
> Use `ssh-port <port>` while creating the Geo-rep session
>
> Ref:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session
>
> And set the ssh-port option before starting.
>
> ```
> gluster volume geo-replication <master_volume> \
>     [<slave_user>@]<slave_host>::<slave_volume> \
>     config ssh-port 2222
> ```
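>
> For completeness, a hedged sketch of the create step with the custom
> port (the same syntax used earlier in this thread; placeholders as
> above):
>
> ```
> gluster volume geo-replication <master_volume> \
>     [<slave_user>@]<slave_host>::<slave_volume> \
>     create ssh-port 2222 push-pem
> ```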
>
> --
> regards
> Aravinda
> http://aravindavk.in
>
>
> On Sun, 2019-03-24 at 17:13 +0530, Maurya M wrote:
> > I did all the suggestions mentioned in the log trace. I have another
> > setup using the root user, but there I hit an issue with the ssh
> > command: my servers (Azure aks-engine) are configured to use port
> > 2222, I am unable to change them back to the default 22, and
> > restarting the ssh service gives me an error!
> >
> > Is this the correct syntax to configure the ssh-command:
> > gluster volume geo-replication vol_041afbc53746053368a1840607636e97 \
> >     xxx.xx.xxx.xx::vol_a5aee81a873c043c99a938adcb5b5781 \
> >     config ssh-command '/usr/sbin/sshd -D -p 2222'
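> >
> > (Side note: /usr/sbin/sshd -D -p 2222 starts an SSH *server*; the
> > ssh-command option holds the *client* command gsyncd invokes, and
> > ssh-port, as Aravinda suggests above, is the documented way to change
> > the port. A hedged sketch, assuming this version accepts client flags
> > in ssh-command:)
> >
> > ```
> > gluster volume geo-replication vol_041afbc53746053368a1840607636e97 \
> >     xxx.xx.xxx.xx::vol_a5aee81a873c043c99a938adcb5b5781 \
> >     config ssh-command 'ssh -p 2222'
> > ```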
> >
> > On Sun, Mar 24, 2019 at 4:38 PM Maurya M <mauryam@gmail.com> wrote:
> > > I did give the permissions on both "/var/log/glusterfs/" &
> > > "/var/lib/glusterd/" too, but it seems the directory I mounted
> > > using heketi is having issues:
> > >
> > > [2019-03-22 09:48:21.546308] E [syncdutils(worker /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):305:log_raise_exception] <top>: connection to peer is broken
> > > [2019-03-22 09:48:21.546662] E [syncdutils(worker /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):309:log_raise_exception] <top>: getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please remove all the public keys added by geo-replication from authorized_keys file in slave nodes and run Geo-replication create command again.
> > > [2019-03-22 09:48:21.546736] E [syncdutils(worker /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):316:log_raise_exception] <top>: If `gsec_create container` was used, then run `gluster volume geo-replication <MASTERVOL> [<SLAVEUSER>@]<SLAVEHOST>::<SLAVEVOL> config remote-gsyncd <GSYNCD_PATH>` (Example GSYNCD_PATH: `/usr/libexec/glusterfs/gsyncd`)
> > > [2019-03-22 09:48:21.546858] E [syncdutils(worker /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-OaPGc3/c784230c9648efa4d529975bd779c551.sock azureuser@172.16.201.35 /nonexistent/gsyncd slave vol_041afbc53746053368a1840607636e97 azureuser@172.16.201.35::vol_a5aee81a873c043c99a938adcb5b5781 --master-node 172.16.189.4 --master-node-id dd4efc35-4b86-4901-9c00-483032614c35 --master-brick /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick --local-node 172.16.201.35 --local-node-id 7eb0a2b6-c4d6-41b1-a346-0638dbf8d779 --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin error=127
> > > [2019-03-22 09:48:21.546977] E [syncdutils(worker /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):805:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
> > > [2019-03-22 09:48:21.565583] I [repce(agent /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):80:service_loop] RepceServer: terminating on reaching EOF.
> > > [2019-03-22 09:48:21.565745] I [monitor(monitor):266:monitor] Monitor: worker died before establishing connection brick=/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick
> > > [2019-03-22 09:48:21.579195] I [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
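> > >
> > > The 316:log_raise_exception entry above spells out the likely fix;
> > > a sketch built directly from that hint, with this thread's volume
> > > names filled in:
> > >
> > > ```
> > > gluster volume geo-replication vol_041afbc53746053368a1840607636e97 \
> > >     azureuser@172.16.201.35::vol_a5aee81a873c043c99a938adcb5b5781 \
> > >     config remote-gsyncd /usr/libexec/glusterfs/gsyncd
> > > ```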
> > >
> > > On Fri, Mar 22, 2019 at 10:23 PM Sunny Kumar <sunkumar@redhat.com> wrote:
> > > > Hi Maurya,
> > > >
> > > > Looks like the hook script failed to set permissions for
> > > > azureuser on "/var/log/glusterfs".
> > > > You can assign the permissions manually for the directory and
> > > > then it will work.
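> > > >
> > > > A hedged sketch of the manual fix (assuming azureuser is the
> > > > geo-rep user on the slave; plain chown/chmod would work too):
> > > >
> > > > ```
> > > > # On each slave node: let the non-root geo-rep user write the logs
> > > > setfacl -R -m u:azureuser:rwX /var/log/glusterfs
> > > > # default ACL so newly created log files stay writable
> > > > setfacl -R -d -m u:azureuser:rwX /var/log/glusterfs
> > > > ```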
> > > >
> > > > -Sunny
> > > >
> > > > On Fri, Mar 22, 2019 at 2:07 PM Maurya M <mauryam@gmail.com> wrote:
> > > > >
> > > > > hi Sunny,
> > > > > Passwordless ssh to:
> > > > >
> > > > > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35
> > > > >
> > > > > logs in fine, but when the whole command is run I am getting permission issues again:
> > > > >
> > > > > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35 gluster --xml --remote-host=localhost volume info vol_a5aee81a873c043c99a938adcb5b5781 -v
> > > > > ERROR: failed to create logfile "/var/log/glusterfs/cli.log" (Permission denied)
> > > > > ERROR: failed to open logfile /var/log/glusterfs/cli.log
> > > > >
> > > > > Any idea here?
> > > > >
> > > > > thanks,
> > > > > Maurya
> > > > >
> > > > >
> > > > > On Thu, Mar 21, 2019 at 2:43 PM Maurya M <mauryam@gmail.com> wrote:
> > > > >>
> > > > >> hi Sunny,
> > > > >> I did use the [1] link for the setup, but encountered this
> > > > >> error during ssh-copy-id (so I set up the passwordless ssh by
> > > > >> manually copying the private/public keys to all the nodes,
> > > > >> both master & slave):
> > > > >>
> > > > >> [root@k8s-agentpool1-24779565-1 ~]# ssh-copy-id geouser@xxx.xx.xxx.x
> > > > >> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
> > > > >> The authenticity of host 'xxx.xx.xxx.x (xxx.xx.xxx.x)' can't be established.
> > > > >> ECDSA key fingerprint is SHA256:B2rNaocIcPjRga13oTnopbJ5KjI/7l5fMANXc+KhA9s.
> > > > >> ECDSA key fingerprint is MD5:1b:70:f9:7a:bf:35:33:47:0c:f2:c1:cd:21:e2:d3:75.
> > > > >> Are you sure you want to continue connecting (yes/no)? yes
> > > > >> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
> > > > >> /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
> > > > >> Permission denied (publickey).
> > > > >>
> > > > >> To start afresh, what all needs to be torn down / deleted? Do
> > > > >> we have any script for it? Where all do I need to delete the
> > > > >> pem keys?
> > > > >>
> > > > >> thanks,
> > > > >> Maurya
> > > > >>
> > > > >> On Thu, Mar 21, 2019 at 2:12 PM Sunny Kumar <sunkumar@redhat.com> wrote:
> > > > >>>
> > > > >>> Hey, you can start afresh; I think you are not following the
> > > > >>> proper setup steps.
> > > > >>>
> > > > >>> Please follow these steps [1] to create the geo-rep session;
> > > > >>> you can delete the old one and do a fresh start. Alternatively,
> > > > >>> you can use this tool [2] to set up geo-rep.
> > > > >>>
> > > > >>> [1]. https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> > > > >>> [2]. http://aravindavk.in/blog/gluster-georep-tools/
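> > > > >>>
> > > > >>> A hedged sketch of the teardown before recreating (stop may
> > > > >>> complain if the session never started, hence force):
> > > > >>>
> > > > >>> ```
> > > > >>> gluster volume geo-replication <master_volume> \
> > > > >>>     [<slave_user>@]<slave_host>::<slave_volume> stop force
> > > > >>> gluster volume geo-replication <master_volume> \
> > > > >>>     [<slave_user>@]<slave_host>::<slave_volume> delete
> > > > >>> # then remove the geo-rep public keys from authorized_keys on
> > > > >>> # the slave nodes, as the MISCONFIGURATION log line advises
> > > > >>> ```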
> > > > >>>
> > > > >>>
> > > > >>> /Sunny
> > > > >>>
> > > > >>> On Thu, Mar 21, 2019 at 11:28 AM Maurya M <mauryam@gmail.com> wrote:
> > > > >>> >
> > > > >>> > Hi Sunny,
> > > > >>> > I did run this on the slave node:
> > > > >>> > /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
> > > > >>> > and got this message: "/home/azureuser/common_secret.pem.pub
> > > > >>> > not present. Please run geo-replication command on master
> > > > >>> > with push-pem option to generate the file"
> > > > >>> >
> > > > >>> > So I went back and created the session again; no change. I
> > > > >>> > then manually copied common_secret.pem.pub to /home/azureuser/,
> > > > >>> > but set_geo_rep_pem_keys.sh still looks for the pem file
> > > > >>> > under a different name:
> > > > >>> > COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub.
> > > > >>> > After renaming the pem, I ran the command again:
> > > > >>> >
> > > > >>> > /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
> > > > >>> > Successfully copied file.
> > > > >>> > Command executed successfully.
> > > > >>> >
> > > > >>> > I then went back, created the session, and started the
> > > > >>> > geo-replication, but am still seeing the same error in the
> > > > >>> > logs. Any ideas?
> > > > >>> >
> > > > >>> > thanks,
> > > > >>> > Maurya
> > > > >>> >
> > > > >>> >
> > > > >>> >
> > > > >>> > On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar <sunkumar@redhat.com> wrote:
> > > > >>> >>
> > > > >>> >> Hi Maurya,
> > > > >>> >>
> > > > >>> >> I guess you missed the last trick to distribute keys to the
> > > > >>> >> slave nodes. I see this is a non-root geo-rep setup, so
> > > > >>> >> please try this:
> > > > >>> >>
> > > > >>> >> Run the following command as root on any one of the Slave
> > > > >>> >> nodes:
> > > > >>> >>
> > > > >>> >> /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh <slave_user> <master_volume> <slave_volume>
> > > > >>> >>
> > > > >>> >> - Sunny
> > > > >>> >>
> > > > >>> >> On Wed, Mar 20, 2019 at 10:47 PM Maurya M <mauryam@gmail.com> wrote:
> > > > >>> >> >
> > > > >>> >> > Hi all,
> > > > >>> >> > I have set up 3 master nodes - 3 slave nodes (gluster 4.1)
> > > > >>> >> > for geo-replication, but once geo-replication is
> > > > >>> >> > configured the status is always "Created", even after
> > > > >>> >> > force-starting the session.
> > > > >>> >> >
> > > > >>> >> > On close inspection of the logs on the master node I see
> > > > >>> >> > this error:
> > > > >>> >> >
> > > > >>> >> > "E [syncdutils(monitor):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@xxxxx.xxxx..xxx. gluster --xml --remote-host=localhost volume info vol_a5ae34341a873c043c99a938adcb5b5781 error=255"
> > > > >>> >> >
> > > > >>> >> > Any ideas what the issue is?
> > > > >>> >> >
> > > > >>> >> > thanks,
> > > > >>> >> > Maurya
> > > > >>> >> >
> > > > >>> >> >