You've probably got multiple glusterfsd brick processes running. It's possible
to track them down and kill them from a shell: do a gluster vol status to see
which one got registered last with glusterd, then ps -ax | grep glusterfsd | grep
"<volume name>" and kill any extra ones whose PID is not the PID
reported by vol status.
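
Concretely, the cleanup looks something like this (a sketch only, using your
isos volume as the example; substitute your own volume name, and note the
stale PID below is a placeholder):

===============================>
# Ask glusterd which brick PID it currently has registered
gluster volume status isos

# List every glusterfsd process actually serving that volume
ps -ax | grep glusterfsd | grep isos

# Kill any PID from the ps output that does NOT match the PID
# reported by volume status
kill <stale PID>
===============================>

After that, a force start of the volume should leave you with a single clean
brick process.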
And upgrade to gluster 6. I'm not all the way through that process, but so far it
seems to resolve that problem for me.
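
For reference, a per-node rolling upgrade on plain CentOS 7 would look roughly
like the sketch below. This is only an outline: the centos-release-gluster6
repo package name is my assumption (check what the Storage SIG actually
publishes), and oVirt Node hosts may need the image-based update path instead.

===============================>
# Stop Gluster on this node only (one node at a time)
systemctl stop glusterd
pkill glusterfs        # also matches glusterfsd brick processes

# Enable the Gluster 6 repo and update (assumed SIG package name)
yum install -y centos-release-gluster6
yum update -y 'glusterfs*'

# Restart and wait for self-heal before touching the next node
systemctl start glusterd
gluster volume heal isos info
===============================>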
> On Apr 7, 2019, at 8:48 AM, Strahil <hunter86_bg at yahoo.com> wrote:
>
> Hi,
>
> After hardware maintenance (GPU removed) I powered on my oVirt node
> running gluster 5.5 and noticed that one volume has no brick running locally.
>
> After forcefully starting the volume, the brick is up, but almost instantly
> I got the following on my CentOS 7 terminal.
> ===============================>
> [root@ovirt2 ~]# gluster volume heal isos full
> Broadcast message from systemd-journald@ovirt2.localdomain (Sun 2019-04-07 16:41:30 EEST):
>
> gluster_bricks-isos-isos[6884]: [2019-04-07 13:41:30.148365] M [MSGID: 113075] [posix-helpers.c:1957:posix_health_check_thread_proc] 0-isos-posix: health-check failed, going down
>
> Broadcast message from systemd-journald@ovirt2.localdomain (Sun 2019-04-07 16:41:30 EEST):
>
> gluster_bricks-isos-isos[6884]: [2019-04-07 13:41:30.148934] M [MSGID: 113075] [posix-helpers.c:1975:posix_health_check_thread_proc] 0-isos-posix: still alive! -> SIGTERM
>
> Message from syslogd@ovirt2 at Apr 7 16:41:30 ...
> gluster_bricks-isos-isos[6884]:[2019-04-07 13:41:30.148365] M [MSGID: 113075] [posix-helpers.c:1957:posix_health_check_thread_proc] 0-isos-posix: health-check failed, going down
>
> Message from syslogd@ovirt2 at Apr 7 16:41:30 ...
> gluster_bricks-isos-isos[6884]:[2019-04-07 13:41:30.148934] M [MSGID: 113075] [posix-helpers.c:1975:posix_health_check_thread_proc] 0-isos-posix: still alive! -> SIGTERM
>
> ===============================>
> Restarting glusterd.service didn't help.
> How should I debug it?
>
> Best Regards,
> Strahil Nikolov
>