Anant Saraswat
2024-Jan-22 21:00 UTC
[Gluster-users] Geo-replication status is getting Faulty after a few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave
node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
Brick3: master3:/opt/tier1data2019/brick
master1 |
master2 |------------------ geo-replication ------------------> drtier1data
master3 |
We added the master3 node a few months back; the initial setup consisted of two
master nodes and one geo-replicated slave (drtier1data).
Our geo-replication was functioning well with the initial two master nodes
(master1 and master2), where master1 was active and master2 was in passive mode.
However, today we started experiencing issues where geo-replication suddenly
stopped and became stuck in a loop of Initializing... -> Active -> Faulty on
master1, while master2 remained in passive mode.
Upon checking the gsyncd.log on the master1 node, we observed the following
error (please refer to the attached logs for more details):
E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception]
<top>: Gluster Mount process exited [{error=ENOTCONN}]
# gluster volume geo-replication tier1data status
MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE USER    SLAVE                             SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------
master1        tier1data     /opt/tier1data2019/brick    root          ssh://drtier1data::drtier1data    N/A           Faulty     N/A             N/A
master2        tier1data     /opt/tier1data2019/brick    root          ssh://drtier1data::drtier1data    N/A           Passive    N/A             N/A
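As a side note, the geo-replication worker writes a separate log for its
auxiliary gluster mount, which usually records why the mount dropped. The paths
below assume the default log location; the session directory and mnt-*.log
names are my guesses, derived from the volume names and the brick path:

# ls /var/log/glusterfs/geo-replication/tier1data_drtier1data_drtier1data/
# tail -n 50 /var/log/glusterfs/geo-replication/tier1data_drtier1data_drtier1data/mnt-opt-tier1data2019-brick.log

Any disconnect or ENOTCONN messages in there should point at the component that
is actually failing.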
Suspecting an issue on drtier1data (the slave), I attempted to restart Gluster
on the slave node, and also restarted the drtier1data server itself, without any
luck.
After that, I ran the following command on master1 to get the log file for
geo-replication, and got the following error.
# gluster volume geo-replication tier1data drtier1data::drtier1data config log-file
Staging failed on master3. Error: Geo-replication session between tier1data and
drtier1data::drtier1data does not exist.
geo-replication command failed
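From what I understand, a "Staging failed on <node>" error means that node has
no local copy of the geo-replication session files, so glusterd rejects the
command cluster-wide. A quick way to check is to compare the glusterd state
directory on each master (default path shown; the session directory name is
derived from the volume names):

# ls /var/lib/glusterd/geo-replication/

On master1 and master2 this should list a tier1data_drtier1data_drtier1data
directory with a gsyncd.conf inside; if that directory is missing on master3,
master3 cannot stage the command.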
Master3 is the node we added a few months back; geo-replication kept working
until today, and we never added this node to the geo-replication session.
After that, I forcefully stopped geo-replication, thinking that restarting it
might fix the issue. However, geo-replication now fails to start and gives the
same error.
# gluster volume geo-replication tier1data drtier1data::drtier1data start force
Staging failed on master3. Error: Geo-replication session between tier1data and
drtier1data::drtier1data does not exist.
geo-replication command failed
Can anyone please suggest what I should do next to resolve this issue? As there
is 5 TB of data in this volume, I don't want to resync all of it to
drtier1data; I want to resume the sync from where it last stopped.
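For what it's worth, my understanding is that the last-synced position (stime)
is stored as extended attributes on the brick roots rather than in the session
config, so re-creating the session should resume from the saved position
instead of re-syncing the full 5 TB. A rough way to confirm the markers are
still present on a brick (the exact xattr name is an assumption; it normally
embeds the two volume UUIDs, so grepping for "stime" should find it):

# getfattr -d -m . -e hex /opt/tier1data2019/brick | grep stime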
Thanks in advance for any guidance/help.
Kind regards,
Anant
Attachments:
- commands (application/octet-stream, 3375 bytes):
  <http://lists.gluster.org/pipermail/gluster-users/attachments/20240122/332ece21/attachment.obj>
- gsyncd.log (text/x-log, 8453 bytes):
  <http://lists.gluster.org/pipermail/gluster-users/attachments/20240122/332ece21/attachment.bin>
- changes-opt-tier1data2019-brick.log (text/x-log, 3027 bytes):
  <http://lists.gluster.org/pipermail/gluster-users/attachments/20240122/332ece21/attachment-0001.bin>
Anant Saraswat
2024-Jan-24 23:07 UTC
[Gluster-users] Geo-replication status is getting Faulty after a few seconds
Hi All,
I ran the following commands on master3, and that added master3 to the
geo-replication session.
# gluster system:: execute gsec_create
# gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
# gluster volume geo-replication tier1data drtier1data::drtier1data stop
# gluster volume geo-replication tier1data drtier1data::drtier1data start
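For anyone following along: gsec_create gathers the SSH public keys of all
nodes in the cluster into a common secret file, and create push-pem force
re-creates the session and pushes those keys to the slave, which is presumably
why it picked up the previously missing master3. To confirm that all three
masters are now part of the session:

# gluster volume geo-replication tier1data drtier1data::drtier1data status

All three masters should now be listed in the output.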
Now I am able to start the geo-replication, but I am getting the same error.
[2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status]
GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting
gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}]
[2024-01-24 19:51:24.158021] I [resource(worker
/opt/tier1data2019/brick):1387:connect_remote] SSH: Initializing SSH connection
between master and slave...
[2024-01-24 19:51:25.951998] I [resource(worker
/opt/tier1data2019/brick):1436:connect_remote] SSH: SSH connection between
master and slave established. [{duration=1.7938}]
[2024-01-24 19:51:25.952292] I [resource(worker
/opt/tier1data2019/brick):1116:connect] GLUSTER: Mounting gluster volume
locally...
[2024-01-24 19:51:26.986974] I [resource(worker
/opt/tier1data2019/brick):1139:connect] GLUSTER: Mounted gluster volume
[{duration=1.0346}]
[2024-01-24 19:51:26.987137] I [subcmds(worker
/opt/tier1data2019/brick):84:subcmd_worker] <top>: Worker spawn
successful. Acknowledging back to monitor
[2024-01-24 19:51:29.139131] I [master(worker
/opt/tier1data2019/brick):1662:register] _GMaster: Working dir
[{path=/var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker
/opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time
[{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker
/opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change
[{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker
/opt/tier1data2019/brick):253:set_worker_crawl_status] GeorepStatus: Crawl
Status Change [{status=History Crawl}]
[2024-01-24 19:51:29.174558] I [master(worker
/opt/tier1data2019/brick):1576:crawl] _GMaster: starting history crawl
[{turns=1}, {stime=(1705935991, 0)}, {etime=1706125889},
{entry_stime=(1705935991, 0)}]
[2024-01-24 19:51:30.251965] I [master(worker
/opt/tier1data2019/brick):1605:crawl] _GMaster: slave's time
[{stime=(1705935991, 0)}]
[2024-01-24 19:51:30.376715] E [syncdutils(worker
/opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount
process exited [{error=ENOTCONN}]
[2024-01-24 19:51:30.991856] I [monitor(monitor):228:monitor] Monitor: worker
died in startup phase [{brick=/opt/tier1data2019/brick}]
[2024-01-24 19:51:30.993608] I [gsyncdstatus(monitor):248:set_worker_status]
GeorepStatus: Worker Status Change [{status=Faulty}]
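Since the worker mounts the volume fine and only dies with ENOTCONN once the
history crawl starts, one sanity check is to mount the master volume by hand on
master1, the same way the worker does, and see whether the mount stays
connected. This is just a rough sketch; the mount point is arbitrary:

# mkdir -p /mnt/georep-test
# mount -t glusterfs master1:/tier1data /mnt/georep-test
# ls /mnt/georep-test
# umount /mnt/georep-test

If the manual mount also drops with ENOTCONN, the problem is client/brick
connectivity on the master side rather than geo-replication itself.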
Any idea why it's stuck in this loop?
Thanks,
Anant