Thanks for the reply.
I updated the storage pool to 7.5 and restarted all 3 nodes sequentially.
All nodes now appear in Connected state from every node, and "gluster volume
list" shows all 74 volumes.
SSL log lines are still flooding the glusterd log file on all nodes but don't
appear in the brick log files. As these lines carry no information about the
volume or the client, I'm not able to check whether a particular volume
produces this error or not.
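In case it helps to narrow this down, a rough sketch to count which log files actually contain the repeating SSL line (the log directory is the Debian default and may differ; the function name is just illustrative):

```shell
# Sketch: print "<count> <file>" for every glusterfs log file containing
# the repeating SSL setup line, to see which daemon emits the flood.
# $1 is the log directory (Debian default: /var/log/glusterfs).
count_ssl_lines() {
  dir=${1:-/var/log/glusterfs}
  find "$dir" -name '*.log' 2>/dev/null | while read -r f; do
    # grep -c exits non-zero when the count is 0; "|| true" keeps us going
    n=$(grep -c 'ssl_setup_connection_params' "$f" || true)
    if [ "$n" -gt 0 ]; then
      printf '%s %s\n' "$n" "$f"
    fi
  done
}
```

If only glusterd.log shows up with a large count, that would confirm the flood is management-path only and not tied to any one volume.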
I also tried pstack after installing the Debian package glusterfs-dbg, but I'm
still getting a "No symbols" error.
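One possible explanation, as a sketch: Debian's old-style -dbg packages install detached symbols under /usr/lib/debug/<original path>, which gdb picks up automatically but the classic pstack often does not. The helper name debug_file_for below is purely illustrative:

```shell
# Sketch: map a binary to the detached debug file an old-style Debian
# -dbg package would install for it. If that file exists, gdb (unlike
# pstack) should resolve symbols, e.g.:
#   gdb -batch -p <brick-pid> -ex 'thread apply all bt'
debug_file_for() {
  printf '/usr/lib/debug%s\n' "$1"
}
debug_file_for /usr/sbin/glusterfsd
```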
I found that 5 brick processes didn't start on node 2 and 1 on node 3:
[2020-04-27 11:54:23.622659] I [MSGID: 100030] [glusterfsd.c:2867:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 7.5 (args:
/usr/sbin/glusterfsd -s glusterDevVM2 --volfile-id
svg_pg_wed_dev_bkp.glusterDevVM2.bricks-svg_pg_wed_dev_bkp-brick1-data -p
/var/run/gluster/vols/svg_pg_wed_dev_bkp/glusterDevVM2-bricks-svg_pg_wed_dev_bkp-brick1-data.pid
-S /var/run/gluster/5023d38a22a8a874.socket --brick-name
/bricks/svg_pg_wed_dev_bkp/brick1/data -l
/var/log/glusterfs/bricks/bricks-svg_pg_wed_dev_bkp-brick1-data.log
--xlator-option *-posix.glusterd-uuid=7f6c3023-144b-4db2-9063-d90926dbdd18
--process-name brick --brick-port 49206 --xlator-option
svg_pg_wed_dev_bkp-server.listen-port=49206)
[2020-04-27 11:54:23.632870] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of
current running process is 5331
[2020-04-27 11:54:23.636679] I [socket.c:4350:ssl_setup_connection_params]
0-socket.glusterfsd: SSL support for glusterd is ENABLED
[2020-04-27 11:54:23.636745] I [socket.c:4360:ssl_setup_connection_params]
0-socket.glusterfsd: using certificate depth 1
[2020-04-27 11:54:23.637580] I [socket.c:958:__socket_server_bind]
0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2020-04-27 11:54:23.637932] I [socket.c:4347:ssl_setup_connection_params]
0-glusterfs: SSL support on the I/O path is ENABLED
[2020-04-27 11:54:23.637949] I [socket.c:4350:ssl_setup_connection_params]
0-glusterfs: SSL support for glusterd is ENABLED
[2020-04-27 11:54:23.637960] I [socket.c:4360:ssl_setup_connection_params]
0-glusterfs: using certificate depth 1
[2020-04-27 11:54:23.639324] I [MSGID: 101190]
[event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 0
[2020-04-27 11:54:23.639380] I [MSGID: 101190]
[event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2020-04-27 11:54:28.933102] E [glusterfsd-mgmt.c:2217:mgmt_getspec_cbk]
0-glusterfs: failed to get the 'volume file' from server
[2020-04-27 11:54:28.933134] E [glusterfsd-mgmt.c:2416:mgmt_getspec_cbk] 0-mgmt:
failed to fetch volume file
(key:svg_pg_wed_dev_bkp.glusterDevVM2.bricks-svg_pg_wed_dev_bkp-brick1-data)
[2020-04-27 11:54:28.933361] W [glusterfsd.c:1596:cleanup_and_exit]
(-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe5d1) [0x7f2b08ec35d1]
-->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x8d0) [0x55d46cb5a110]
-->/usr/sbin/glusterfsd(cleanup_and_exit+0x54) [0x55d46cb51ec4] ) 0-:
received signum (0), shutting down
I tried to stop the volume, but gluster commands are still locked ("Another
transaction is in progress.").
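Once the stale lock clears (restarting glusterd on the nodes one at a time is a common way to drop it, and it does not touch the brick processes or client I/O), the missing bricks can usually be respawned with "volume start force". A sketch; force_start is only an illustrative helper that prints the command to run:

```shell
# Sketch: "start force" respawns only the brick processes that are down,
# leaving the already-running bricks alone.
# To drop the stale CLI lock first, on each node in turn:
#   systemctl restart glusterd
force_start() {
  printf 'gluster volume start %s force\n' "$1"
}
force_start svg_pg_wed_dev_bkp   # volume name taken from the brick log above
# then verify with: gluster volume status svg_pg_wed_dev_bkp
```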
Best regards,
Nicolas.
From: "Nikhil Ladha" <nladha at redhat.com>
To: nico at furyweb.fr
Cc: "gluster-users" <gluster-users at gluster.org>
Sent: Monday, 27 April 2020 13:34:47
Subject: Re: [Gluster-users] never ending logging
Hi,
Since you mentioned that node 2 is in a "semi-connected" state, I think that is
why the volume locking is failing; and since it fails on one of the volumes,
the transaction does not complete and you see a transaction error on another
volume.
Moreover, regarding the repeated log lines:
"SSL support on the I/O path is ENABLED", "SSL support for glusterd is
ENABLED", and "using certificate depth 1",
could you try creating a volume without SSL enabled and then check whether the
same log messages appear?
Also, if you update to 7.5 and find any change in the log messages with SSL
enabled, please do share that.
Regards
Nikhil Ladha