Hi Max,

On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:

> As soon as I made the configuration change and restarted CTDB, it crashes.
>
> Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
> Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
> Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: connect() failed, errno=111
> Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: Failed to connect to CTDB daemon (/var/run/ctdb/ctdbd.socket)
> Oct 2 11:05:22 hq-6pgluster01 ctdbd_wrapper: Error while shutting down CTDB

Is there anything in the log file to suggest that this is an early
failure instead of an actual crash? We do a lot of testing on CentOS 7
(although not with Ganesha, and with our own CTDB packages) and we
haven't seen any crashes in recent times.

If this is a crash, are you able to get a core dump and generate a
backtrace for me?

Thanks...

peace & happiness,
martin

Looks like this is the actual error:
2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started
2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0
2019/10/04 09:51:29.175021 ctdbd[17244]: Recovery lock configuration inconsistent: recmaster has NULL, this node has /run/gluster/shared_storage/.CTDB-lockfile, shutting down
2019/10/04 09:51:29.175045 ctdbd[17244]: Shutdown sequence commencing.
2019/10/04 09:51:29.175056 ctdbd[17244]: Set runstate to SHUTDOWN (6)
I'm attaching the full log from this startup.
The other thing that baffles me is that I have most of the legacy scripts
disabled, yet the startup shows that it's running them all. I also have no
idea why it's listing the legacy scripts twice here, and the two lists are
different.
[[LAColo-Prod] root at hq-6pgluster01 ~]# ctdb event script list legacy
* 00.ctdb
01.reclock
05.system
06.nfs
* 10.interface
11.natgw
11.routing
13.per_ip_routing
20.multipathd
31.clamd
40.vsftpd
41.httpd
49.winbind
50.samba
60.nfs
70.iscsi
91.lvs

* 01.reclock
05.system
* 06.nfs
11.natgw
11.routing
13.per_ip_routing
20.multipathd
31.clamd
40.vsftpd
41.httpd
49.winbind
50.samba
* 60.nfs
70.iscsi
91.lvs
On 10/2/19, 8:48 PM, "Martin Schwenke" <martin at meltin.net> wrote:
NOTE: This email originated from outside of the organization.
Hi Max,
On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:
> As soon as I made the configuration change and restarted CTDB, it crashes.
>
> Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
> Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
> Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: connect() failed, errno=111
> Oct 2 11:05:21 hq-6pgluster01 ctdbd_wrapper: Failed to connect to CTDB daemon (/var/run/ctdb/ctdbd.socket)
> Oct 2 11:05:22 hq-6pgluster01 ctdbd_wrapper: Error while shutting down CTDB
Is there anything in the log file to suggest that this is an early
failure instead of an actual crash? We do a lot of testing on CentOS
7 (although not with Ganesha, and with our own CTDB packages) and we
haven't seen any crashes in recent times.
If this is a crash, are you able to get a core dump and generate a
backtrace for me?
Thanks...
peace & happiness,
martin

Hi Max,

On Fri, 4 Oct 2019 14:01:22 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:

> Looks like this is the actual error:
>
> 2019/10/04 09:51:29.174870 ctdbd[17244]: Recovery has started
> 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0
> 2019/10/04 09:51:29.175021 ctdbd[17244]: Recovery lock configuration inconsistent: recmaster has NULL, this node has /run/gluster/shared_storage/.CTDB-lockfile, shutting down
> 2019/10/04 09:51:29.175045 ctdbd[17244]: Shutdown sequence commencing.
> 2019/10/04 09:51:29.175056 ctdbd[17244]: Set runstate to SHUTDOWN (6)

Yep. CTDB refuses to work if the recovery lock is configured to be
different on different nodes, since that is an important
misconfiguration. If you make it the same on all nodes then it will
get past this.

> I'm attaching the full log from this startup.
>
> The other thing that baffles me is that I have most of the legacy scripts disabled, yet the startup shows that it's running them all. I also have no idea why it's listing the legacy scripts twice here, and the two lists are different.
>
> [[LAColo-Prod] root at hq-6pgluster01 ~]# ctdb event script list legacy
> * 00.ctdb
> 01.reclock
> 05.system
> 06.nfs
> * 10.interface
> 11.natgw
> 11.routing
> 13.per_ip_routing
> 20.multipathd
> 31.clamd
> 40.vsftpd
> 41.httpd
> 49.winbind
> 50.samba
> 60.nfs
> 70.iscsi
> 91.lvs
>
> * 01.reclock
> 05.system
> * 06.nfs
> 11.natgw
> 11.routing
> 13.per_ip_routing
> 20.multipathd
> 31.clamd
> 40.vsftpd
> 41.httpd
> 49.winbind
> 50.samba
> * 60.nfs
> 70.iscsi
> 91.lvs

This is strange. I can explain the above, but I can't explain why all
of the scripts without stars are running.

The first list is the scripts installed with CTDB in
/usr/share/ctdb/events/legacy/. These are enabled via a symlink to
/etc/ctdb/events/legacy/, so you should see 2 symlinks.

The second list is "custom" scripts installed directly into
/etc/ctdb/events/legacy/, or perhaps linked to some other place. I
don't know how either of these things could have happened.

What does:

  ls -l /etc/ctdb/events/legacy/

show?

Thanks...

peace & happiness,
martin
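
For reference, here is a minimal sketch of the setting in question,
assuming the new-style /etc/ctdb/ctdb.conf used by recent CTDB releases
(older installs set CTDB_RECOVERY_LOCK in /etc/ctdb/ctdbd.conf instead).
The lock path is the one from the log above; the second hostname is a
placeholder:

  # /etc/ctdb/ctdb.conf -- must be identical on every node in the cluster
  [cluster]
      recovery lock = /run/gluster/shared_storage/.CTDB-lockfile

  # quick consistency check from any node (hostnames are placeholders)
  for n in hq-6pgluster01 hq-6pgluster02; do
      echo "== $n =="
      ssh "$n" "grep -F -A1 '[cluster]' /etc/ctdb/ctdb.conf"
  done

If every node prints the same "recovery lock" line, the "Recovery lock
configuration inconsistent" shutdown shown above should not recur.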
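
Similarly, a sketch of how the legacy event scripts are normally enabled
and disabled on recent CTDB, which may help when interpreting the listing
above. The exact file names and the .script suffix depend on how CTDB was
packaged, so treat the paths below as assumptions:

  # enabling/disabling manages symlinks under /etc/ctdb/events/legacy/
  ctdb event script enable legacy 01.reclock
  ctdb event script disable legacy 50.samba

  # the listing Martin asks for would then be expected to show only
  # symlinks to the scripts shipped by CTDB, e.g. (assumed layout):
  ls -l /etc/ctdb/events/legacy/
  # 01.reclock.script -> /usr/share/ctdb/events/legacy/01.reclock.script

Any script sitting in /etc/ctdb/events/legacy/ as a regular file, rather
than a symlink to the CTDB-shipped scripts, is what would show up in the
second ("custom") list.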