rick sanchez
2017-Sep-29 16:00 UTC
[Gluster-users] Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes.
I have set up two replica 2 arbiter 1 volumes, each with 9 bricks.
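For reference, volumes with this layout are created roughly as follows (a sketch based on the brick lists in the volume info below; the exact "replica ... arbiter" syntax may differ between gluster versions):

# master volume, bricks in the order shown by "gluster volume info"
gluster volume create gfsvol replica 3 arbiter 1 \
    gfs2:/gfs/brick1/gv0 gfs3:/gfs/brick1/gv0 gfs1:/gfs/arbiter/gv0 \
    gfs1:/gfs/brick1/gv0 gfs3:/gfs/brick2/gv0 gfs2:/gfs/arbiter/gv0 \
    gfs1:/gfs/brick2/gv0 gfs2:/gfs/brick2/gv0 gfs3:/gfs/arbiter/gv0
gluster volume start gfsvol
# the slave volume gfsvol_rep is created the same way with the gfs4/gfs5/gfs6 bricks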
[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2: gfs3:/gfs/brick1/gv0
Brick3: gfs1:/gfs/arbiter/gv0 (arbiter)
Brick4: gfs1:/gfs/brick1/gv0
Brick5: gfs3:/gfs/brick2/gv0
Brick6: gfs2:/gfs/arbiter/gv0 (arbiter)
Brick7: gfs1:/gfs/brick2/gv0
Brick8: gfs2:/gfs/brick2/gv0
Brick9: gfs3:/gfs/arbiter/gv0 (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[root@gfs4 ~]# gluster volume info
Volume Name: gfsvol_rep
Type: Distributed-Replicate
Volume ID: 42bfa062-ad0d-4242-a813-63389be1c404
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs5:/gfs/brick1/gv0
Brick2: gfs6:/gfs/brick1/gv0
Brick3: gfs4:/gfs/arbiter/gv0 (arbiter)
Brick4: gfs4:/gfs/brick1/gv0
Brick5: gfs6:/gfs/brick2/gv0
Brick6: gfs5:/gfs/arbiter/gv0 (arbiter)
Brick7: gfs4:/gfs/brick2/gv0
Brick8: gfs5:/gfs/brick2/gv0
Brick9: gfs6:/gfs/arbiter/gv0 (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
I set up passwordless ssh login from all the master servers to all the
slave servers, then created and started the geo-replicated volume.
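The geo-replication session itself was created and started along these lines (a rough reconstruction, run from one master node; the mountbroker/geo-rep-user setup on the slave side is omitted here):

gluster system:: execute gsec_create
gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep create push-pem
gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep start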
When I check the status, the workers switch between Active with History Crawl
and Faulty with N/A every few seconds.
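The table below is the output of a status command along these lines:

gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep status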
MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER      SLAVE                            SLAVE NODE    STATUS    CRAWL STATUS     LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------
gfs1           gfsvol        /gfs/arbiter/gv0    geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs1           gfsvol        /gfs/brick1/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    gfs6          Active    History Crawl    2017-09-28 23:30:19
gfs1           gfsvol        /gfs/brick2/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs3           gfsvol        /gfs/brick1/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs3           gfsvol        /gfs/brick2/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs3           gfsvol        /gfs/arbiter/gv0    geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs2           gfsvol        /gfs/brick1/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs2           gfsvol        /gfs/arbiter/gv0    geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
gfs2           gfsvol        /gfs/brick2/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A              N/A
Here is the output of the geo-replication log file:
[root@gfs1 ~]# tail -n 100 $(gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep config log-file)
[2017-09-29 15:53:29.785386] I [master(/gfs/brick2/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0357 num_files=1 job=3 return_code=12
[2017-09-29 15:53:29.785615] E [resource(/gfs/brick2/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user@gfs6:/proc/17554/cwd error=12
[2017-09-29 15:53:29.797259] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:29.799386] I [repce(/gfs/brick2/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:29.799570] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:30.105407] I [monitor(monitor):280:monitor] Monitor:
starting gsyncd worker brick=/gfs/brick1/gv0
slave_node=ssh://geo-rep-user@gfs6:gluster://localhost:gfsvol_rep
[2017-09-29 15:53:30.232007] I
[resource(/gfs/brick1/gv0):1772:connect_remote] SSH: Initializing SSH
connection between master and slave...
[2017-09-29 15:53:30.232738] I
[changelogagent(/gfs/brick1/gv0):73:__init__] ChangelogAgent: Agent
listining...
[2017-09-29 15:53:30.248094] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/brick2/gv0
[2017-09-29 15:53:30.252793] I
[gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
[2017-09-29 15:53:30.742058] I [master(/gfs/arbiter/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
[2017-09-29 15:53:30.742360] I
[resource(/gfs/arbiter/gv0):1654:service_loop] GLUSTER: Register time
time=1506700410
[2017-09-29 15:53:30.754738] I
[gsyncdstatus(/gfs/arbiter/gv0):275:set_active] GeorepStatus: Worker Status
Change status=Active
[2017-09-29 15:53:30.756040] I
[gsyncdstatus(/gfs/arbiter/gv0):247:set_worker_crawl_status] GeorepStatus:
Crawl Status Change status=History Crawl
[2017-09-29 15:53:30.756280] I [master(/gfs/arbiter/gv0):1429:crawl]
_GMaster: starting history crawl turns=1 stime=(1506637819, 0)
entry_stime=None etime=1506700410
[2017-09-29 15:53:31.758335] I [master(/gfs/arbiter/gv0):1458:crawl]
_GMaster: slave's time stime=(1506637819, 0)
[2017-09-29 15:53:31.939471] I
[resource(/gfs/brick1/gv0):1779:connect_remote] SSH: SSH connection between
master and slave established. duration=1.7073
[2017-09-29 15:53:31.939665] I [resource(/gfs/brick1/gv0):1494:connect]
GLUSTER: Mounting gluster volume locally...
[2017-09-29 15:53:32.284754] I [master(/gfs/arbiter/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0372 num_files=1 job=3 return_code=12
[2017-09-29 15:53:32.284996] E [resource(/gfs/arbiter/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-i_wIMu/5f1d38555e12d0018fb6ed1e6bd63023.sock --compress
geo-rep-user@gfs5:/proc/8334/cwd error=12
[2017-09-29 15:53:32.300786] I [syncdutils(/gfs/arbiter/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:32.303261] I [repce(/gfs/arbiter/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:32.303452] I [syncdutils(/gfs/arbiter/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:32.732858] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/arbiter/gv0
[2017-09-29 15:53:32.736538] I
[gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
[2017-09-29 15:53:33.35219] I [resource(/gfs/brick1/gv0):1507:connect]
GLUSTER: Mounted gluster volume duration=1.0954
[2017-09-29 15:53:33.35403] I [gsyncd(/gfs/brick1/gv0):799:main_i] <top>:
Closing feedback fd, waking up the monitor
[2017-09-29 15:53:35.50920] I [master(/gfs/brick1/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/f0393acbf9a1583960edbbd2f1dfb6b4
[2017-09-29 15:53:35.51227] I [resource(/gfs/brick1/gv0):1654:service_loop]
GLUSTER: Register time time=1506700415
[2017-09-29 15:53:35.64343] I
[gsyncdstatus(/gfs/brick1/gv0):275:set_active] GeorepStatus: Worker Status
Change status=Active
[2017-09-29 15:53:35.65696] I
[gsyncdstatus(/gfs/brick1/gv0):247:set_worker_crawl_status] GeorepStatus:
Crawl Status Change status=History Crawl
[2017-09-29 15:53:35.65915] I [master(/gfs/brick1/gv0):1429:crawl]
_GMaster: starting history crawl turns=1 stime=(1506637819, 0)
entry_stime=None etime=1506700415
[2017-09-29 15:53:36.68135] I [master(/gfs/brick1/gv0):1458:crawl]
_GMaster: slave's time stime=(1506637819, 0)
[2017-09-29 15:53:36.578717] I [master(/gfs/brick1/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0376 num_files=1 job=1 return_code=12
[2017-09-29 15:53:36.578946] E [resource(/gfs/brick1/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-2pGnVA/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user@gfs6:/proc/17648/cwd error=12
[2017-09-29 15:53:36.590887] I [syncdutils(/gfs/brick1/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:36.596421] I [repce(/gfs/brick1/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:36.596635] I [syncdutils(/gfs/brick1/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:37.41075] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/brick1/gv0
[2017-09-29 15:53:37.44637] I [gsyncdstatus(monitor):242:set_worker_status]
GeorepStatus: Worker Status Change status=Faulty
[2017-09-29 15:53:40.351263] I [monitor(monitor):280:monitor] Monitor:
starting gsyncd worker brick=/gfs/brick2/gv0
slave_node=ssh://geo-rep-user@gfs6:gluster://localhost:gfsvol_rep
[2017-09-29 15:53:40.484637] I
[resource(/gfs/brick2/gv0):1772:connect_remote] SSH: Initializing SSH
connection between master and slave...
[2017-09-29 15:53:40.497215] I
[changelogagent(/gfs/brick2/gv0):73:__init__] ChangelogAgent: Agent
listining...
[2017-09-29 15:53:42.278539] I
[resource(/gfs/brick2/gv0):1779:connect_remote] SSH: SSH connection between
master and slave established. duration=1.7936
[2017-09-29 15:53:42.278747] I [resource(/gfs/brick2/gv0):1494:connect]
GLUSTER: Mounting gluster volume locally...
[2017-09-29 15:53:42.851296] I [monitor(monitor):280:monitor] Monitor:
starting gsyncd worker brick=/gfs/arbiter/gv0
slave_node=ssh://geo-rep-user@gfs5:gluster://localhost:gfsvol_rep
[2017-09-29 15:53:42.985567] I
[resource(/gfs/arbiter/gv0):1772:connect_remote] SSH: Initializing SSH
connection between master and slave...
[2017-09-29 15:53:42.986390] I
[changelogagent(/gfs/arbiter/gv0):73:__init__] ChangelogAgent: Agent
listining...
[2017-09-29 15:53:43.377480] I [resource(/gfs/brick2/gv0):1507:connect]
GLUSTER: Mounted gluster volume duration=1.0986
[2017-09-29 15:53:43.377681] I [gsyncd(/gfs/brick2/gv0):799:main_i] <top>:
Closing feedback fd, waking up the monitor
[2017-09-29 15:53:44.767873] I
[resource(/gfs/arbiter/gv0):1779:connect_remote] SSH: SSH connection
between master and slave established. duration=1.7821
[2017-09-29 15:53:44.768059] I [resource(/gfs/arbiter/gv0):1494:connect]
GLUSTER: Mounting gluster volume locally...
[2017-09-29 15:53:45.393150] I [master(/gfs/brick2/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/1eb15856c627f181513bf23f8bf2f9d0
[2017-09-29 15:53:45.393373] I
[resource(/gfs/brick2/gv0):1654:service_loop] GLUSTER: Register time
time=1506700425
[2017-09-29 15:53:45.404992] I
[gsyncdstatus(/gfs/brick2/gv0):275:set_active] GeorepStatus: Worker Status
Change status=Active
[2017-09-29 15:53:45.406404] I
[gsyncdstatus(/gfs/brick2/gv0):247:set_worker_crawl_status] GeorepStatus:
Crawl Status Change status=History Crawl
[2017-09-29 15:53:45.406660] I [master(/gfs/brick2/gv0):1429:crawl]
_GMaster: starting history crawl turns=1 stime=(1506637819, 0)
entry_stime=None etime=1506700425
[2017-09-29 15:53:45.863256] I [resource(/gfs/arbiter/gv0):1507:connect]
GLUSTER: Mounted gluster volume duration=1.0950
[2017-09-29 15:53:45.863430] I [gsyncd(/gfs/arbiter/gv0):799:main_i]
<top>:
Closing feedback fd, waking up the monitor
[2017-09-29 15:53:46.408814] I [master(/gfs/brick2/gv0):1458:crawl]
_GMaster: slave's time stime=(1506637819, 0)
[2017-09-29 15:53:46.920937] I [master(/gfs/brick2/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0363 num_files=1 job=3 return_code=12
[2017-09-29 15:53:46.921140] E [resource(/gfs/brick2/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-DCruqU/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user@gfs6:/proc/17747/cwd error=12
[2017-09-29 15:53:46.937288] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:46.940479] I [repce(/gfs/brick2/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:46.940772] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:47.151477] I [monitor(monitor):280:monitor] Monitor:
starting gsyncd worker brick=/gfs/brick1/gv0
slave_node=ssh://geo-rep-user@gfs6:gluster://localhost:gfsvol_rep
[2017-09-29 15:53:47.303791] I
[resource(/gfs/brick1/gv0):1772:connect_remote] SSH: Initializing SSH
connection between master and slave...
[2017-09-29 15:53:47.316878] I
[changelogagent(/gfs/brick1/gv0):73:__init__] ChangelogAgent: Agent
listining...
[2017-09-29 15:53:47.382605] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/brick2/gv0
[2017-09-29 15:53:47.387926] I
[gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
[2017-09-29 15:53:47.876825] I [master(/gfs/arbiter/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
[2017-09-29 15:53:47.877044] I
[resource(/gfs/arbiter/gv0):1654:service_loop] GLUSTER: Register time
time=1506700427
[2017-09-29 15:53:47.888930] I
[gsyncdstatus(/gfs/arbiter/gv0):275:set_active] GeorepStatus: Worker Status
Change status=Active
[2017-09-29 15:53:47.890043] I
[gsyncdstatus(/gfs/arbiter/gv0):247:set_worker_crawl_status] GeorepStatus:
Crawl Status Change status=History Crawl
[2017-09-29 15:53:47.890285] I [master(/gfs/arbiter/gv0):1429:crawl]
_GMaster: starting history crawl turns=1 stime=(1506637819, 0)
entry_stime=None etime=1506700427
[2017-09-29 15:53:48.891966] I [master(/gfs/arbiter/gv0):1458:crawl]
_GMaster: slave's time stime=(1506637819, 0)
[2017-09-29 15:53:48.998140] I
[resource(/gfs/brick1/gv0):1779:connect_remote] SSH: SSH connection between
master and slave established. duration=1.6942
[2017-09-29 15:53:48.998330] I [resource(/gfs/brick1/gv0):1494:connect]
GLUSTER: Mounting gluster volume locally...
[2017-09-29 15:53:49.406749] I [master(/gfs/arbiter/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0383 num_files=1 job=2 return_code=12
[2017-09-29 15:53:49.406999] E [resource(/gfs/arbiter/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-5VeNKp/5f1d38555e12d0018fb6ed1e6bd63023.sock --compress
geo-rep-user@gfs5:/proc/8448/cwd error=12
[2017-09-29 15:53:49.426301] I [syncdutils(/gfs/arbiter/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:49.428428] I [repce(/gfs/arbiter/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:49.428618] I [syncdutils(/gfs/arbiter/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:49.868974] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/arbiter/gv0
[2017-09-29 15:53:49.872705] I
[gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
[2017-09-29 15:53:50.78377] I [resource(/gfs/brick1/gv0):1507:connect]
GLUSTER: Mounted gluster volume duration=1.0799
[2017-09-29 15:53:50.78643] I [gsyncd(/gfs/brick1/gv0):799:main_i] <top>:
Closing feedback fd, waking up the monitor
[2017-09-29 15:53:52.93027] I [master(/gfs/brick1/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/f0393acbf9a1583960edbbd2f1dfb6b4
[2017-09-29 15:53:52.93331] I [resource(/gfs/brick1/gv0):1654:service_loop]
GLUSTER: Register time time=1506700432
[2017-09-29 15:53:52.107558] I
[gsyncdstatus(/gfs/brick1/gv0):275:set_active] GeorepStatus: Worker Status
Change status=Active
[2017-09-29 15:53:52.108943] I
[gsyncdstatus(/gfs/brick1/gv0):247:set_worker_crawl_status] GeorepStatus:
Crawl Status Change status=History Crawl
[2017-09-29 15:53:52.109178] I [master(/gfs/brick1/gv0):1429:crawl]
_GMaster: starting history crawl turns=1 stime=(1506637819, 0)
entry_stime=None etime=1506700432
[2017-09-29 15:53:53.111017] I [master(/gfs/brick1/gv0):1458:crawl]
_GMaster: slave's time stime=(1506637819, 0)
[2017-09-29 15:53:53.622422] I [master(/gfs/brick1/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0369 num_files=1 job=2 return_code=12
[2017-09-29 15:53:53.622683] E [resource(/gfs/brick1/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-DBB9pL/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user@gfs6:/proc/17837/cwd error=12
[2017-09-29 15:53:53.635057] I [syncdutils(/gfs/brick1/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:53.639909] I [repce(/gfs/brick1/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:53.640172] I [syncdutils(/gfs/brick1/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:54.85591] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/brick1/gv0
[2017-09-29 15:53:54.89509] I [gsyncdstatus(monitor):242:set_worker_status]
GeorepStatus: Worker Status Change status=Faulty
I think the error has to do with this part:
rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids
--no-implied-dirs --existing --xattrs --acls . -e ssh
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-DBB9pL/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user@gfs6:/proc/17837/cwd
especially the ssh part, since I notice a lot of failed login attempts while
geo-replication is running.
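As a sanity check on the key, the same ssh invocation gsyncd uses can be tried by hand (a hypothetical test that just reuses the options from the logged command; on a setup like this the slave's authorized_keys usually forces the gsyncd command, so a successful key exchange may only print gsyncd usage rather than give a shell):

ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
    -i /var/lib/glusterd/geo-replication/secret.pem -p 22 \
    geo-rep-user@gfs6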
Can anybody please advise what to do in this situation?
Aravinda VK
2017-Oct-06 UTC
[Gluster-users] Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I set up passwordless ssh login from all the master servers to all the
> slave servers, then created and started the geo-replicated volume.

Passwordless SSH is not required between all nodes; it is only required from
one master node to one slave node (from the master node where you run the
create command to the slave node specified in that create command).
Alternatively you can use a tool called "gluster-georep-setup", which doesn't
require the initial passwordless step.

http://aravindavk.in/blog/gluster-georep-tools/
https://github.com/aravindavk/gluster-georep-tools

> I think the error has to do with this part:
> rsync ... geo-rep-user@gfs6:/proc/17837/cwd
> especially the ssh part, since I notice a lot of failed login attempts while
> geo-replication is running.

I suspect this is related to the ssh keys; please let us know if re-setting up
with the above mentioned steps helps.

> Can anybody please advise what to do in this situation?

--
regards
Aravinda VK
http://aravindavk.in