I've never had such a situation and I don't recall anyone sharing something
similar.
Most probably it's easier to remove the node from the TSP and re-add it. Of
course, test the case in VMs first, just to validate that it's possible to
add a node to a cluster with snapshots.
I have a vague feeling that you will need to delete all snapshots.
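If you go down either route, the rough shape would be something like this
(untested sketch; <node> stands for the affected peer's hostname):

    # drop every snapshot in the cluster - the CLI has a built-in for this
    gluster snapshot delete all

    # then remove the node from the TSP and re-add it
    # (detach is refused while the peer still hosts bricks)
    gluster peer detach <node>
    gluster peer probe <node>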
Best Regards,
Strahil Nikolov
On Thursday, August 10, 2023, 4:36 AM, Sebastian Neustein <sebastian.neustein
at arc-aachen.de> wrote:
Hi
Due to an outage of one node, after bringing it up again, the node has some
orphaned snapshots which have already been deleted on the other nodes.
How can I delete these orphaned snapshots? Trying the normal way produces
these errors:
[2023-08-08 19:34:03.667109 +0000] E [MSGID: 106115]
[glusterd-mgmt.c:118:gd_mgmt_v3_collate_errors] 0-management: Pre Validation
failed on B742. Please check log file for details.
[2023-08-08 19:34:03.667184 +0000] E [MSGID: 106115]
[glusterd-mgmt.c:118:gd_mgmt_v3_collate_errors] 0-management: Pre Validation
failed on B741. Please check log file for details.
[2023-08-08 19:34:03.667210 +0000] E [MSGID: 106121]
[glusterd-mgmt.c:1083:glusterd_mgmt_v3_pre_validate] 0-management: Pre
Validation failed on peers
[2023-08-08 19:34:03.667236 +0000] E [MSGID: 106121]
[glusterd-mgmt.c:2875:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre
Validation Failed
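For clarity, the "normal way" above means the standard CLI delete, shown here
with a hypothetical snapshot name:

    gluster snapshot delete <snapname>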
Even worse: I followed the Red Hat Gluster snapshot troubleshooting guide and
deleted one of the directories defining a snapshot. Now I receive this on the
CLI:
run-gluster-snaps-e4dcd4166538414c849fa91b0b3934d7-brick6-brick[297342]:
[2023-08-09 08:59:41.107243 +0000] M [MSGID: 113075]
[posix-helpers.c:2161:posix_health_check_thread_proc]
0-e4dcd4166538414c849fa91b0b3934d7-posix: health-check failed, going down
run-gluster-snaps-e4dcd4166538414c849fa91b0b3934d7-brick6-brick[297342]:
[2023-08-09 08:59:41.107292 +0000] M [MSGID: 113075]
[posix-helpers.c:2179:posix_health_check_thread_proc]
0-e4dcd4166538414c849fa91b0b3934d7-posix: still alive! -> SIGTERM
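After a failed health check the brick process takes itself down, which is
what the SIGTERM line shows. Assuming a systemd-based node, restarting the
management daemon should respawn the snapshot brick processes:

    systemctl restart glusterd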
What are my options?
- is there an easy way to remove all those snapshots?
- or would it be easier to remove and rejoin the node to the gluster cluster?
Thank you for any help!
Seb
--
Sebastian Neustein
Airport Research Center GmbH
Bismarckstraße 61
52066 Aachen
Germany
Phone: +49 241 16843-23
Fax: +49 241 16843-19
e-mail: sebastian.neustein at arc-aachen.de
Website: http://www.airport-consultants.com
Register Court: Amtsgericht Aachen HRB 7313
Ust-Id-No.: DE196450052
Managing Director:
Dipl.-Ing. Tom Alexander Heuer