Displaying 10 results from an estimated 10 matches for "georepstatus".
2017 Sep 29
1
Gluster geo replication volume is faulty
...15:53:30.232738] I
[changelogagent(/gfs/brick1/gv0):73:__init__] ChangelogAgent: Agent
listining...
[2017-09-29 15:53:30.248094] I [monitor(monitor):363:monitor] Monitor:
worker died in startup phase brick=/gfs/brick2/gv0
[2017-09-29 15:53:30.252793] I
[gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
[2017-09-29 15:53:30.742058] I [master(/gfs/arbiter/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
[2017-09-29 15:53:30...
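When a worker dies in the startup phase like this, a reasonable first step is to check which bricks are Faulty and read the per-brick worker log. A minimal sketch, taking the volume and slave names from the working-dir path in the log above (gfsvol and geo-rep-user@10.1.1.104::gfsvol_rep); the log directory layout varies between Gluster versions:
# Show per-brick worker state (Active/Passive/Faulty) and last-synced time
gluster volume geo-replication gfsvol geo-rep-user@10.1.1.104::gfsvol_rep status detail
# Read the worker log for the failing brick on the master node
less /var/log/glusterfs/geo-replication/gfsvol*/*.log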
2017 Oct 06
0
Gluster geo replication volume is faulty
...logagent(/gfs/brick1/gv0):73:__init__] ChangelogAgent: Agent
> listining...
> [2017-09-29 15:53:30.248094] I [monitor(monitor):363:monitor] Monitor:
> worker died in startup phase brick=/gfs/brick2/gv0
> [2017-09-29 15:53:30.252793] I
> [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker
> Status Change status=Faulty
> [2017-09-29 15:53:30.742058] I
> [master(/gfs/arbiter/gv0):1515:register] _GMaster: Working
> dir path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f...
2024 Jan 24
1
Geo-replication status is getting Faulty after a few seconds
...e
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the geo-replication, but I am getting the same error.
[2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}]
[2024-01-24 19:51:24.158021] I [resource(worker /opt/tier1data2019/brick):1387:connect_remote] S...
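For a session that keeps flipping from Initializing... to Faulty, it can help to stop it cleanly, start it again, and watch the worker log while it cycles. A minimal sketch using the session names from the commands above; the log path is an assumption based on the usual mastervol_slavehost_slavevol naming and may differ by version:
# force is only needed if some nodes are down or already Faulty
gluster volume geo-replication tier1data drtier1data::drtier1data stop force
gluster volume geo-replication tier1data drtier1data::drtier1data start
gluster volume geo-replication tier1data drtier1data::drtier1data status detail
# Follow the worker log on the master whose worker goes Faulty
tail -f /var/log/glusterfs/geo-replication/tier1data_drtier1data_drtier1data/gsyncd.log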
2024 Jan 27
1
Geo-replication status is getting Faulty after a few seconds
...ster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the geo-replication, but I am getting the same error.
[2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}]
[2024-01-24 19:51:24.158021] I [resource(worker /opt/tier1data2019/brick):1387:connect_remote]...
2024 Jan 27
1
Geo-replication status is getting Faulty after a few seconds
...ster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the geo-replication, but I am getting the same error.
[2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}]
[2024-01-24 19:51:24.158021] I [resource(worker /opt/tier1data2019/brick):1387:connect_remote]...
2018 Mar 06
1
geo replication
...p; started the session with:
gluster volume geo-replication testtomcat stogfstest11::testtomcat create no-verify
gluster volume geo-replication testtomcat stogfstest11::testtomcat start
getting the following logs:
master:
[2018-03-06 08:32:46.767544] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-03-06 08:32:46.872857] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/gfs/testtomcat/mount slave_node=ssh://root@stogfstest11:gluster://localhost:testtomcat
[2018-03-06 08:32:46.9611...
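create no-verify skips the pre-checks and assumes the pem keys have already been copied to the slave by hand. If that step was missed, a commonly documented alternative (an assumption here, not something stated in the thread) is to let Gluster distribute the keys itself; force is needed because a session already exists:
gluster system:: execute gsec_create
gluster volume geo-replication testtomcat stogfstest11::testtomcat create push-pem force
gluster volume geo-replication testtomcat stogfstest11::testtomcat start
gluster volume geo-replication testtomcat stogfstest11::testtomcat status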
2024 Jan 22
1
Geo-replication status is getting Faulty after a few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
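With one slave node and three masters, only one master worker should be Active, and it must be able to reach the slave over the geo-replication ssh key. A quick connectivity check from the master whose worker goes Faulty, assuming the default root-user setup and the stock key location used by gsyncd:
# The identity file gsyncd uses for its control connection to the slave
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@drtier1data
# Confirm the slave volume is reachable and the session sees all three bricks
gluster volume geo-replication tier1data drtier1data::drtier1data status detail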
2017 Aug 17
0
Extended attributes not supported by the backend storage
...[2017-08-16 12:57:45.279946] I [repce(/mnt/storage/lapbacks):92:service_loop] RepceServer: terminating on reaching EOF.
[2017-08-16 12:57:45.280275] I [syncdutils(/mnt/storage/lapbacks):252:finalize] <top>: exiting.
[2017-08-16 12:57:45.302642] I [gsyncdstatus(monitor):241:set_worker_status] GeorepStatus: Worker Status: Faulty
The session status becomes Faulty, then it retries and enters a loop. The backend storage on both the master and the slave is ext4.
Could you please help me?
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
...[2017-08-16 12:57:45.279946] I [repce(/mnt/storage/lapbacks):92:service_loop] RepceServer: terminating on reaching EOF.
[2017-08-16 12:57:45.280275] I [syncdutils(/mnt/storage/lapbacks):252:finalize] <top>: exiting.
[2017-08-16 12:57:45.302642] I [gsyncdstatus(monitor):241:set_worker_status] GeorepStatus: Worker Status: Faulty
The session status becomes Faulty, then it retries and enters a loop. The backend storage on both the master and the slave is ext4.
Could you please help me?
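The error above is gsyncd failing its extended-attribute probe on the backend filesystem. A quick manual check on the affected mount, using the /mnt/storage path from the log above: a user.* probe shows whether xattrs work at all (gsyncd itself writes trusted.* attributes, which additionally require root, and on older kernels an ext4 mount may need the user_xattr option):
touch /mnt/storage/xattr_probe
setfattr -n user.georep.probe -v ok /mnt/storage/xattr_probe   # fails with "Operation not supported" if xattrs are off
getfattr -n user.georep.probe /mnt/storage/xattr_probe         # should print the value back
rm /mnt/storage/xattr_probe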
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
...itor] Monitor: Changelog Agent died, Aborting Worker brick=/urd-gds/gluster
[2018-07-11 18:43:10.88613] I [monitor(monitor):353:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster
[2018-07-11 18:43:20.112435] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=inconsistent
[2018-07-11 18:43:20.112885] E [syncdutils(monitor):331:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 361, in twrap
except:
File "/usr/li...