Hi,
You can follow the steps in these two links: removing a node is covered in Managing
Trusted Storage Pools - Gluster Docs
<https://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/>
and safely adding/removing a brick in Managing Volumes - Gluster Docs
<https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/>.
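As a minimal sketch of that sequence, assuming a replica volume named myvol and a retiring node named node2 (both placeholder names; substitute your own volume, host, and brick path):

```shell
# Reduce the replica count by removing node2's brick from the volume.
# 'force' is required when the removal lowers the replica count.
gluster volume remove-brick myvol replica 2 node2:/data/brick/myvol force

# Once no volume uses any brick on node2, detach it from the trusted pool.
gluster peer detach node2
```

Repeat the remove-brick step for every volume that has a brick on the node before detaching it.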
Regards
Nikhil Ladha
On Wed, May 6, 2020 at 1:14 PM <nico at furyweb.fr> wrote:
> Hi all.
>
> I'm still stuck with my problem; the only solution I can see is to
> rebuild the whole cluster, but that means moving 6TB of data with a very long downtime.
> Is there a way to fully remove a node without first removing its bricks?
> Perhaps I could remove the arbiter and node 2, keeping only node 1, then fully
> rebuild the VMs, format the bricks, reinstall Gluster, and add each node back to the cluster?
>
> Please advise.
>
> Regards,
> Nicolas.
>
> ------------------------------
> *From: *nico at furyweb.fr
> *To: *"Sanju Rakonde" <srakonde at redhat.com>
> *Cc: *"gluster-users" <gluster-users at gluster.org>, "Nikhil Ladha" <nladha at redhat.com>
> *Sent: *Thursday, April 30, 2020 07:40:01
> *Subject: *Re: [Gluster-users] never ending logging
>
> I've checked all the info files and they all contain the same attributes:
> # for d in node1/vols/*; do
>     v=${d##*/}
>     for n in 1 2 3; do
>         sort node$n/vols/$v/info >/tmp/info.node$n
>     done
>     diff -q /tmp/info.node1 /tmp/info.node2 && diff -q /tmp/info.node3 /tmp/info.node2 && echo "$v:Ok" || md5sum node?/vols/$v/info
> done
> ...
> centreon_sondes:Ok
> data_export:Ok
> data_recette:Ok
> documents:Ok
> fw_dev:Ok
> fwtmp_uat:Ok
> fw_uat:Ok
> gcnuat:Ok
> home_dev:Ok
> pfwa:Ok
> rrddata:Ok
> sauvegarde_git:Ok
> share_data:Ok
> svg_pg_asys_dev_arch:Ok
> svg_pg_asys_dev_bkp:Ok
> svg_pg_asys_rct_arch:Ok
> svg_pg_asys_rct_bkp:Ok
> testformation:Ok
> tmp:Ok
> userfiles:Ok
> xlog_dq_uat:Ok
> ...
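The quoted loop above can also be scripted in Python, which makes the per-volume result easy to reuse. A minimal sketch, assuming the same layout of per-node copies under node1/, node2/, node3/ (the function and path names are illustrative, not part of Gluster):

```python
import hashlib
from pathlib import Path

def check_info_consistency(root, nodes=("node1", "node2", "node3")):
    """Compare vols/<vol>/info across nodes, order-insensitively.

    Returns a dict mapping volume name -> True when every node holds
    an identical (line-order-insensitive) copy of the info file.
    """
    base = Path(root)
    result = {}
    for vol_dir in sorted((base / nodes[0] / "vols").iterdir()):
        vol = vol_dir.name
        digests = set()
        for node in nodes:
            info = base / node / "vols" / vol / "info"
            # Sort lines so attribute ordering differences don't count,
            # mirroring the `sort` before `diff` in the shell loop.
            lines = sorted(info.read_text().splitlines())
            digests.add(hashlib.md5("\n".join(lines).encode()).hexdigest())
        result[vol] = len(digests) == 1
    return result
```

Volumes mapped to False are the ones whose info files diverge between nodes and deserve a closer look.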
>
> No script running, no cron declared, no one connected except me on all 3
> nodes.
>
> All nodes connected from every node
>
> Hostname: glusterDevVM1
> Uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb
> State: Peer in Cluster (Connected)
>
> Hostname: glusterDevVM3
> Uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974
> State: Peer in Cluster (Connected)
>
> Hostname: glusterDevVM2
> Uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18
> State: Peer in Cluster (Connected)
>
> Certificates check
> root at glusterDevVM1:/opt/pfwa/tmp# openssl s_client -connect
> glusterdevvm1:24007 -CAfile /etc/ssl/glusterfs.ca
> CONNECTED(00000003)
> depth=1 C = FR, ST = MidiPy, L = Toulouse, O = xxx, OU = xxx, CN = adminca.local, emailAddress = pfwa_int at xxx.fr
> verify return:1
> depth=0 emailAddress = pfwa_int at xxx.fr, C = FR, ST = Occitanie, L = Toulouse, O = xxx, OU = xxx, CN = glusterDevVM1
> verify return:1
> ---
> Certificate chain
> 0 s:/emailAddress=pfwa_int at sii.fr/C=FR/ST=Occitanie/L=Toulouse/O=xxx/OU=xxx/CN=glusterDevVM1
>   i:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> 1 s:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
>   i:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> ---
> Server certificate
> -----BEGIN CERTIFICATE-----
> xxx
> -----END CERTIFICATE-----
> subject=/emailAddress=pfwa_int at sii.fr/C=FR/ST=Occitanie/L=Toulouse/O=xxx/OU=xxx/CN=glusterDevVM1
> issuer=/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> ---
> No client certificate CA names sent
> Client Certificate Types: RSA sign, DSA sign, ECDSA sign
> Requested Signature Algorithms:
>
RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
> Shared Requested Signature Algorithms:
>
RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
> Peer signing digest: SHA512
> Server Temp Key: ECDH, P-256, 256 bits
> ---
> SSL handshake has read 2529 bytes and written 314 bytes
> Verification: OK
> ---
> New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
> Server public key is 2048 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> No ALPN negotiated
> SSL-Session:
> Protocol : TLSv1.2
> Cipher : ECDHE-RSA-AES256-GCM-SHA384
> Session-ID:
> 7EED4B3C732F1A93401F031E5A1A72587213B23B7EBE4D6FA96ADD88EB7E5E71
> Session-ID-ctx:
> Master-Key:
>
AEF5756C7C85BBD5D56C45BDCF8CEF07188C51D184651351C5B80C40B8DF6DC290C8322FA351C683E370394ADA3BF035
> PSK identity: None
> PSK identity hint: None
> SRP username: None
> Start Time: 1588223746
> Timeout : 7200 (sec)
> Verify return code: 0 (ok)
> Extended master secret: yes
> ---
> read:errno=0
> root at glusterDevVM1:/opt/pfwa/tmp# openssl s_client -connect
> glusterdevvm2:24007 -CAfile /etc/ssl/glusterfs.ca
> CONNECTED(00000003)
> depth=1 C = FR, ST = MidiPy, L = Toulouse, O = xxx, OU = xxx, CN = adminca.local, emailAddress = pfwa_int at xxx.fr
> verify return:1
> depth=0 emailAddress = pfwa_int at xxx.fr, C = FR, ST = Occitanie, L = Toulouse, O = xxx, OU = xxx, CN = glusterDevVM2
> verify return:1
> ---
> Certificate chain
> 0 s:/emailAddress=pfwa_int at xxx.fr/C=FR/ST=Occitanie/L=Toulouse/O=xxx/OU=xxx/CN=glusterDevVM2
>   i:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> 1 s:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=SII/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
>   i:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> ---
> Server certificate
> -----BEGIN CERTIFICATE-----
> xxx
> -----END CERTIFICATE-----
> subject=/emailAddress=pfwa_int at xxx.fr/C=FR/ST=Occitanie/L=Toulouse/O=xxx/OU=xxx/CN=glusterDevVM2
> issuer=/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> ---
> No client certificate CA names sent
> Client Certificate Types: RSA sign, DSA sign, ECDSA sign
> Requested Signature Algorithms:
>
RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
> Shared Requested Signature Algorithms:
>
RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
> Peer signing digest: SHA512
> Server Temp Key: ECDH, P-256, 256 bits
> ---
> SSL handshake has read 2529 bytes and written 314 bytes
> Verification: OK
> ---
> New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
> Server public key is 2048 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> No ALPN negotiated
> SSL-Session:
> Protocol : TLSv1.2
> Cipher : ECDHE-RSA-AES256-GCM-SHA384
> Session-ID:
> 6D7ACC100780C059A21330955FE7E8A003F9BBCE9AEB0E0E6E9C957A51206CC1
> Session-ID-ctx:
> Master-Key:
>
9E2B21511D531EA9894F91ED77F17AFCAEF3F74202AE0AD3F6B6B5F85E5D11027E4AAE8863601E7C7C01EED56273498D
> PSK identity: None
> PSK identity hint: None
> SRP username: None
> Start Time: 1588223716
> Timeout : 7200 (sec)
> Verify return code: 0 (ok)
> Extended master secret: yes
> ---
> read:errno=0
> root at glusterDevVM1:/opt/pfwa/tmp# openssl s_client -connect
> glusterdevvm3:24007 -CAfile /etc/ssl/glusterfs.ca
> CONNECTED(00000003)
> depth=1 C = FR, ST = MidiPy, L = Toulouse, O = xxx, OU = xxx, CN = adminca.local, emailAddress = pfwa_int at xxx.fr
> verify return:1
> depth=0 emailAddress = pfwa_int at xxx.fr, C = FR, ST = Occitanie, L = Toulouse, O = xxx, OU = xxx, CN = glusterDevVM3
> verify return:1
> ---
> Certificate chain
> 0 s:/emailAddress=pfwa_int at xxx.fr/C=FR/ST=Occitanie/L=Toulouse/O=xxx/OU=xxx/CN=glusterDevVM3
>   i:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> 1 s:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
>   i:/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> ---
> Server certificate
> -----BEGIN CERTIFICATE-----
> xxx
> -----END CERTIFICATE-----
> subject=/emailAddress=pfwa_int at xxx.fr/C=FR/ST=Occitanie/L=Toulouse/O=xxx/OU=xxx/CN=glusterDevVM3
> issuer=/C=FR/ST=MidiPy/L=Toulouse/O=xxx/OU=xxx/CN=adminca.local/emailAddress=pfwa_int at xxx.fr
> ---
> No client certificate CA names sent
> Client Certificate Types: RSA sign, DSA sign, ECDSA sign
> Requested Signature Algorithms:
>
RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
> Shared Requested Signature Algorithms:
>
RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
> Peer signing digest: SHA512
> Server Temp Key: ECDH, P-256, 256 bits
> ---
> SSL handshake has read 2529 bytes and written 314 bytes
> Verification: OK
> ---
> New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
> Server public key is 2048 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> No ALPN negotiated
> SSL-Session:
> Protocol : TLSv1.2
> Cipher : ECDHE-RSA-AES256-GCM-SHA384
> Session-ID:
> 057CEFC654A09132871316BF1CD9400B5BFEB11FE94F6B2E6F660140990C717B
> Session-ID-ctx:
> Master-Key:
>
EA4476F7EA01DBE20BB401773D1EF89BF1322C845E1B562B851C97467F79C8D6AF77E48035A2634F27DC751701D22931
> PSK identity: None
> PSK identity hint: None
> SRP username: None
> Start Time: 1588223753
> Timeout : 7200 (sec)
> Verify return code: 0 (ok)
> Extended master secret: yes
> ---
> read:errno=0
>
> *Status command on 3 nodes one by one*
> root at glusterDevVM1:~# date
> jeudi 30 avril 2020, 06:53:19 (UTC+0200)
> root at glusterDevVM1:~# gluster volume status tmp
> Locking failed on glusterDevVM3. Please check log file for details.
> root at glusterDevVM1:~# date
> jeudi 30 avril 2020, 06:53:34 (UTC+0200)
> root at glusterDevVM2:~# gluster volume status tmp
> Locking failed on glusterDevVM3. Please check log file for details.
> root at glusterDevVM2:~# date
> jeudi 30 avril 2020, 06:53:43 (UTC+0200)
> root at glusterDevVM3:~# gluster volume status tmp
> Another transaction is in progress for tmp. Please try again after some
> time.
> root at glusterDevVM3:~# date
> jeudi 30 avril 2020, 06:53:48 (UTC+0200)
>
> *Log file extracts for this time period*
> node1
> glusterDevVM1# sed -n '/SSL support.*ENABLED/d;/using certificate depth/d;/^.2020-04-30 04:53/p' /var/log/glusterfs/{cmd_history,glusterd}.log
> /var/log/glusterfs/cmd_history.log:[2020-04-30 04:53:30.142202] : volume
> status tmp : FAILED : Locking failed on glusterDevVM3. Please check log
> file for details.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.132395] I [MSGID:
> 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.134594] E
> [socket.c:241:ssl_dump_error_stack] 0-management: error:1417C086:SSL
> routines:tls_process_client_certificate:certificate verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.134622] E
> [socket.c:241:ssl_dump_error_stack] 0-management: error:140A4044:SSL
> routines:SSL_clear:internal error
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.134824] I [MSGID:
> 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management:
> Peer <glusterDevVM3> (<0d8a3686-9e37-4ce7-87bf-c85d1ec40974>),
in state
> <Peer in Cluster>, has disconnected from glusterd.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135010] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol axwayrct not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135042] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for axwayrct
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135068] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol bacarauat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135079] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for bacarauat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135103] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol bigdatauat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135114] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for bigdatauat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135134] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol centreon_sondes not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135145] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for centreon_sondes
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135166] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol data_export not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135176] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for data_export
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135195] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol data_recette not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135205] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for data_recette
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135224] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol desireuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135235] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for desireuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135254] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol documents not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135264] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for documents
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135288] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol euruat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135298] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for euruat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135317] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol flowfmkuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135327] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for flowfmkuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135346] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol fw_dev not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135356] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fw_dev
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135381] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol fw_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135392] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fw_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135411] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol fwtmp_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135421] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fwtmp_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135440] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol gcnuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135450] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for gcnuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135469] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol home_dev not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135479] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for home_dev
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135497] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol nichtest not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135512] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for nichtest
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135532] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol oprwede1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135542] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for oprwede1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135561] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol parcwebUAT not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135571] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for parcwebUAT
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135590] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol pfwa not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135600] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for pfwa
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135619] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol primeuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135629] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for primeuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135651] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol pub_landigger not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135662] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for pub_landigger
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135682] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol rrddata not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135692] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for rrddata
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135710] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol s4cmsuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135729] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for s4cmsuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135749] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol sauvegarde_git not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135759] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sauvegarde_git
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135778] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol scicsuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135788] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for scicsuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135807] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol share_data not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135818] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for share_data
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135836] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol sifuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135846] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sifuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135865] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol sltwede1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135875] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sltwede1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135894] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fe61a835119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fe61a83faae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fe61a8f2293] ) 0-management: Lock for vol svg_my_epad not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135904] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for svg_my_epad
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135924] W [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fe61a835119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fe61a83faae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fe61a8f2293] ) 0-management: Lock for vol svg_pg_asys_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.135935] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for svg_pg_asys_dev_arch
>
> [Identical "Lock for vol <name> not held" / "Lock not released for <name>" warning pairs, each with the same glusterd.so backtrace, follow between 04:53:30.135958 and 04:53:30.136950 for: svg_pg_asys_dev_bkp, svg_pg_asys_rct_arch, svg_pg_asys_rct_bkp, svg_pg_csctls_e1_rct_arch, svg_pg_csctls_e1_rct_bkp, svg_pg_csctls_secure_e1_rct_arch, svg_pg_csctls_secure_e1_rct_bkp, svg_pg_csctls_secure_uat_rct_arch, svg_pg_csctls_secure_uat_rct_bkp, svg_pg_fwsi_dev_arch, svg_pg_fwsi_dev_bkp, svg_pg_fwsi_rct_arch, svg_pg_fwsi_rct_bkp, svg_pg_spot_dev_arch, svg_pg_spot_dev_bkp, svg_pg_stat_dev_arch, svg_pg_stat_dev_bkp, svg_pg_stat_rct_arch, svg_pg_stat_rct_bkp, svg_pg_uat_rct_arch, svg_pg_uat_rct_bkp, svg_pg_wasac_dev_arch, svg_pg_wasac_dev_bkp, svg_pg_wed_dev_arch, svg_pg_wed_dev_bkp, svg_pg_wed_rct_arch, svg_pg_wed_rct_bkp, svg_pg_woip_dev_arch, svg_pg_woip_dev_bkp, svg_pg_woip_rct_arch, svg_pg_woip_rct_bkp, sysabo, testformation.]
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.136975] W [glusterd-locks.c:807:glusterd_mgmt_v3_unlock] (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fe61a835119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fe61a83faae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf155) [0x7fe61a8f2155] ) 0-management: Lock owner mismatch. Lock for vol tmp held by e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.136987] W [MSGID: 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management: Lock not released for tmp
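The "Lock owner mismatch" entry just above names the peer UUID that still holds the stale lock on volume tmp. A minimal sketch for mapping that UUID back to a node, assuming the standard glusterd layout (the `command -v` guard makes it a no-op on machines without gluster installed):

```shell
# UUID copied from the "Lock owner mismatch" log line above.
uuid=e2263e4d-a307-45d5-9cec-e1791f7a45fb

if command -v gluster >/dev/null 2>&1; then
    # This node's own identity:
    grep -i '^UUID=' /var/lib/glusterd/glusterd.info
    # Peers' identities; the line printed before a matching "Uuid:" is its "Hostname:":
    gluster peer status | grep -B1 -i "uuid: $uuid"
fi
```

Whichever node reports the matching UUID is the one still holding the lock.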
> [The same paired "Lock for vol <name> not held" / "Lock not released for <name>" warnings, with the same glusterd.so backtrace, continue between 04:53:30.137006 and 04:53:30.137292 for: userfiles, wed, wedfmkuat, xlog_dq_uat, xlog_fwsi_uat, xlog_wasac_e1, xlog_wasac_secure_e1, xlog_wasac_secure_uat, xlog_wasac_uat, xlog_woip_uat.]
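With this many volumes repeating the same warning pair, it is easier to pull the distinct volume names out of the log than to read the entries one by one. A self-contained sketch, shown on a two-line sample so it runs anywhere (on a node, feed it `grep 'Lock not released' /var/log/glusterfs/glusterd.log` instead of the `printf`):

```shell
# List each volume glusterd failed to release a lock for, once.
printf '%s\n' \
  '... 0-management: Lock not released for tmp' \
  '... 0-management: Lock not released for userfiles' |
  sed -n 's/.*Lock not released for //p' | sort -u
# prints:
# tmp
# userfiles
```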
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.140779] E [rpc-clnt.c:346:saved_frames_unwind] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x138)[0x7fe62042dde8] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcd97)[0x7fe6201d3d97] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcebe)[0x7fe6201d3ebe] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)[0x7fe6201d4e93] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xea18)[0x7fe6201d5a18] ))))) 0-management: forced unwinding frame type(glusterd mgmt v3) op(--(1)) called at 2020-04-30 04:53:30.133661 (xid=0x18)
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.140804] E [MSGID:
> 106115] [glusterd-mgmt.c:117:gd_mgmt_v3_collate_errors] 0-management:
> Locking failed on glusterDevVM3. Please check log file for details.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:30.140938] E [MSGID:
> 106150] [glusterd-syncop.c:1918:gd_sync_task_begin] 0-management: Locking
> Peers Failed.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.544853] I [MSGID:
> 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd:
> Received ACC from uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974, host:
> glusterDevVM3, port: 0
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.557587] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=axwayrct) to existing process with pid 104799
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.557617] I [MSGID:
> 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req]
> 0-management: not connected yet
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.557664] I [MSGID:
> 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid:
> 0d8a3686-9e37-4ce7-87bf-c85d1ec40974
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.567583] I [MSGID:
> 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.574381] I [MSGID:
> 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
> Received ACC from uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:34.557720] I [MSGID:
> 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req]
> 0-management: not connected yet
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.267170] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:1417C086:SSL routines:tls_process_client_certificate:certificate
> verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.267212] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:140A4044:SSL routines:SSL_clear:internal error
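The recurring "certificate verify failed" errors are worth ruling out separately from the lock issue: they mean a peer presented a certificate that does not chain to the CA bundle glusterd trusts. The verification itself is plain OpenSSL; the sketch below demonstrates it with a throwaway self-signed certificate (the `/etc/ssl/glusterfs.*` paths in the trailing comment are the usual GlusterFS TLS locations, but confirm them for your install):

```shell
# Throwaway self-signed cert, verified against itself as its own CA.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
openssl verify -CAfile "$tmp/ca.pem" "$tmp/ca.pem"    # expect: .../ca.pem: OK
rm -rf "$tmp"
# On a real node the equivalent check would be, e.g.:
#   openssl verify -CAfile /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.pem
```

Run it on every node; a certificate that fails this check (or has expired, per `openssl x509 -noout -enddate`) will reproduce exactly the SSL errors logged above.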
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:44.322623] I [MSGID:
> 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 70200
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:44.558634] I [MSGID:
> 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req]
> 0-management: not connected yet
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.557149] I [MSGID:
> 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req]
> 0-glusterd: Received probe from uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.585459] I [MSGID:
> 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to glusterDevVM2 (0), ret: 0, op_ret: 0
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.605414] I [MSGID:
> 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req]
> 0-management: not connected yet
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.605480] I [MSGID:
> 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid:
> 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.605514] I [MSGID:
> 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.468728] I [MSGID:
> 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
> Received ACC from uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.605547] I [MSGID:
> 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req]
> 0-management: not connected yet
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:47.605677] I [MSGID:
> 106498] [glusterd-svc-helper.c:747:__glusterd_send_svc_configure_req]
> 0-management: not connected yet
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:48.605764] W [MSGID:
> 106617] [glusterd-svc-helper.c:948:glusterd_attach_svc] 0-glusterd: attach
> failed for glustershd(volume=axwayrct)
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:48.605869] E [MSGID:
> 106048] [glusterd-shd-svc.c:482:glusterd_shdsvc_start] 0-glusterd: Failed
> to attach shd svc(volume=axwayrct) to pid=104799
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:48.606010] E [MSGID:
> 106615] [glusterd-shd-svc.c:638:glusterd_shdsvc_restart] 0-management:
> Couldn't start shd for vol: axwayrct on restart
>
> node2
> glusterDevVM2:~# sed -n '/SSL support.*ENABLED/d;/using certificate
> depth/d;/^.2020-04-30 04:53/p'
> /var/log/glusterfs/{cmd_history,glusterd}.log
> /var/log/glusterfs/cmd_history.log:[2020-04-30 04:53:41.297188] : volume
> status tmp : FAILED : Locking failed on glusterDevVM3. Please check log
> file for details.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:40.014460] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:1417C086:SSL routines:tls_process_client_certificate:certificate
> verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:40.014529] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:140A4044:SSL routines:SSL_clear:internal error
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.264365] I [MSGID:
> 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.266348] E [MSGID:
> 106115] [glusterd-mgmt.c:117:gd_mgmt_v3_collate_errors] 0-management:
> Locking failed on glusterDevVM3. Please check log file for details.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.266644] E [MSGID:
> 106150] [glusterd-syncop.c:1918:gd_sync_task_begin] 0-management: Locking
> Peers Failed.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.267758] I [MSGID:
> 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management:
> Peer <glusterDevVM1> (<e2263e4d-a307-45d5-9cec-e1791f7a45fb>), in state
> <Peer in Cluster>, has disconnected from glusterd.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269560] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol axwayrct not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269610] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for axwayrct
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269677] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol bacarauat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269708] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for bacarauat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269771] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol bigdatauat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269799] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for bigdatauat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269858] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol centreon_sondes not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269887] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for centreon_sondes
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269941] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol data_export not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.269969] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for data_export
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270022] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol data_recette not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270069] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for data_recette
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270126] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol desireuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270154] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for desireuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270212] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol documents not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270240] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for documents
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270294] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol euruat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270322] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for euruat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270375] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol flowfmkuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270402] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for flowfmkuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270457] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol fw_dev not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270485] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fw_dev
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270539] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol fw_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270567] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fw_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270619] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol fwtmp_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270646] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fwtmp_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270712] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol gcnuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270741] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for gcnuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270796] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol home_dev not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270823] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for home_dev
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270875] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol nichtest not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270903] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for nichtest
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270955] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol oprwede1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.270982] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for oprwede1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271039] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol parcwebUAT not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271066] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for parcwebUAT
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271118] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol pfwa not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271146] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for pfwa
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271198] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol primeuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271225] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for primeuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271278] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol pub_landigger not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271318] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for pub_landigger
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271391] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol rrddata not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271420] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for rrddata
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271473] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol s4cmsuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271500] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for s4cmsuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271614] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol sauvegarde_git not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271645] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sauvegarde_git
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271699] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol scicsuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271727] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for scicsuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271779] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol share_data not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271807] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for share_data
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271859] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol sifuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271887] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sifuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271939] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol sltwede1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.271979] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sltwede1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272035] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_my_epad not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272063] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_my_epad
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272117] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272146] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272204] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272232] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272287] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272320] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272372] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272401] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272454] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_e1_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272482] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_e1_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272535] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_e1_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272564] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_e1_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272632] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_secure_e1_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272663] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_e1_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272718] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_secure_e1_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272747] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_e1_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272804] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_secure_uat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272832] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_uat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272885] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_secure_uat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272914] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_uat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272967] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.272996] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273049] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273077] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273131] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273159] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273224] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273253] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273307] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_spot_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273349] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_spot_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273405] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_spot_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273433] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_spot_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273486] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273515] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273568] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273596] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273649] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273677] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273730] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273758] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273826] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_uat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273856] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_uat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273910] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_uat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273938] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_uat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.273998] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wasac_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274026] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wasac_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274078] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wasac_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274106] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wasac_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274159] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274187] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274240] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274268] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274325] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274354] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274420] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274450] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274502] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274530] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274583] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274611] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274668] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274697] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274750] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274778] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274831] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol sysabo not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274858] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sysabo
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274911] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol testformation not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.274939] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for testformation
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275001] W
> [glusterd-locks.c:807:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf155) [0x7fc9c0a53155] ) 0-management: Lock owner mismatch. Lock for vol tmp held by 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275042] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275098] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol userfiles not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275127] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for userfiles
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275180] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol wed not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275207] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for wed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275259] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol wedfmkuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275286] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for wedfmkuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275345] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_dq_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275389] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_dq_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275444] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_fwsi_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275473] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_fwsi_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275563] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_e1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275599] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_e1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275655] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_secure_e1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275697] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_secure_e1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275753] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_secure_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275782] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_secure_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275836] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275864] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275917] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_woip_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.275945] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_woip_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.283875] E
> [rpc-clnt.c:346:saved_frames_unwind] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x138)[0x7fc9c658ede8] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcd97)[0x7fc9c6334d97] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcebe)[0x7fc9c6334ebe] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)[0x7fc9c6335e93] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xea18)[0x7fc9c6336a18] )))))
> 0-management: forced unwinding frame type(glusterd mgmt v3) op(--(6))
> called at 2020-04-30 04:53:41.266974 (xid=0x17)
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.283992] E [MSGID:
> 106115] [glusterd-mgmt.c:117:gd_mgmt_v3_collate_errors] 0-management:
> Unlocking failed on glusterDevVM1. Please check log file for details.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.284473] I [MSGID:
> 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management:
> Peer <glusterDevVM3> (<0d8a3686-9e37-4ce7-87bf-c85d1ec40974>), in state <Peer in Cluster>, has disconnected from glusterd.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.284915] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol axwayrct not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.284976] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for axwayrct
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285113] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol bacarauat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285152] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for bacarauat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285229] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol bigdatauat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285260] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for bigdatauat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285315] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol centreon_sondes not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285344] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for centreon_sondes
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285399] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol data_export not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285428] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for data_export
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285481] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol data_recette not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285510] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for data_recette
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285563] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol desireuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285591] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for desireuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285645] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol documents not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285673] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for documents
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285725] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol euruat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285753] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for euruat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285806] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol flowfmkuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285852] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for flowfmkuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285908] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol fw_dev not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285936] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fw_dev
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.285989] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol fw_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286016] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fw_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286068] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol fwtmp_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286096] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for fwtmp_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286148] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol gcnuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286176] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for gcnuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286228] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol home_dev not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286256] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for home_dev
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286309] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol nichtest not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286336] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for nichtest
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286388] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol oprwede1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286429] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for oprwede1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286485] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119) [0x7fc9c0996119] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae) [0x7fc9c09a0aae] -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293) [0x7fc9c0a53293] ) 0-management: Lock for vol parcwebUAT not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286513] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for parcwebUAT
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286566] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol pfwa not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286593] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for pfwa
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286645] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol primeuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286673] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for primeuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286725] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol pub_landigger not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286752] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for pub_landigger
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286804] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol rrddata not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286832] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for rrddata
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286884] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol s4cmsuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286911] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for s4cmsuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286963] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol sauvegarde_git not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.286991] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sauvegarde_git
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287065] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol scicsuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287094] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for scicsuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287147] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol share_data not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287175] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for share_data
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287227] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol sifuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287254] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sifuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287307] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol sltwede1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287346] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sltwede1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287403] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_my_epad not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287431] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_my_epad
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287484] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287540] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287601] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287632] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287685] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287734] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287791] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_asys_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287819] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_asys_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287872] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_e1_rct_arch not
> held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287900] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_e1_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287953] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_csctls_e1_rct_bkp not
> held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.287981] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_e1_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288034] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol
> svg_pg_csctls_secure_e1_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288062] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_e1_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288119] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol
> svg_pg_csctls_secure_e1_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288147] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_e1_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288201] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol
> svg_pg_csctls_secure_uat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288229] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_uat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288284] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol
> svg_pg_csctls_secure_uat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288323] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_csctls_secure_uat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288380] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288408] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288461] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288488] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288541] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288569] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288621] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288648] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288700] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_spot_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288728] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_spot_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288780] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_spot_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288807] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_spot_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288859] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288914] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288969] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.288998] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289050] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289078] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289130] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289158] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289210] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_uat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289237] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_uat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289289] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_uat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289316] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_uat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289368] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wasac_dev_arch not
held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289395] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wasac_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289447] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wasac_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289486] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wasac_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289541] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289569] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289622] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289649] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289701] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289729] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289782] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289809] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289861] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289889] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289941] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.289969] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290020] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290061] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290116] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290144] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290197] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol sysabo not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290225] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sysabo
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290277] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol testformation not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290305] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for testformation
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290375] W
> [glusterd-locks.c:807:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf155)
> [0x7fc9c0a53155] ) 0-management: Lock owner mismatch. Lock for vol tmp held
> by 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290407] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290460] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol userfiles not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.290488] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for userfiles
> [... identical "Lock for vol <name> not held" / "Lock not released for
> <name>" warning pairs repeat at 04:53:41 for the volumes: wed, wedfmkuat,
> xlog_dq_uat, xlog_fwsi_uat, xlog_wasac_e1, xlog_wasac_secure_e1,
> xlog_wasac_secure_uat, xlog_wasac_uat and xlog_woip_uat ...]
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.296804] E
> [rpc-clnt.c:346:saved_frames_unwind] (-->
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x138)[0x7fc9c658ede8]
> (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcd97)[0x7fc9c6334d97] (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcebe)[0x7fc9c6334ebe] (-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)[0x7fc9c6335e93]
> (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xea18)[0x7fc9c6336a18] )))))
> 0-management: forced unwinding frame type(glusterd mgmt v3) op(--(6))
> called at 2020-04-30 04:53:41.267144 (xid=0x10)
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.296864] E [MSGID:
> 106115] [glusterd-mgmt.c:117:gd_mgmt_v3_collate_errors] 0-management:
> Unlocking failed on glusterDevVM3. Please check log file for details.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.297047] E [MSGID:
> 106151] [glusterd-syncop.c:1626:gd_unlock_op_phase] 0-management: Failed to
> unlock on some peer(s)
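A quick way to see which volumes show these stale-lock warnings is to
deduplicate the log lines. A minimal sketch, assuming the default glusterd
log path used in the messages above:

```shell
# List the volumes named in "Lock for vol <name> not held" warnings,
# deduplicated. Pass a different log path as the first argument if yours
# lives elsewhere.
LOG=${1:-/var/log/glusterfs/glusterd.log}
grep -o 'Lock for vol [^ ]* not held' "$LOG" | awk '{print $4}' | sort -u
```

Running this on each node lets you compare which volumes are affected where.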
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:43.024729] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:1417C086:SSL routines:tls_process_client_certificate:certificate
> verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:43.024800] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:140A4044:SSL routines:SSL_clear:internal error
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.556804] I [MSGID:
> 106004] [glusterd-handler.c:6204:__glusterd_peer_rpc_notify] 0-management:
> Peer <glusterDevVM3> (<0d8a3686-9e37-4ce7-87bf-c85d1ec40974>), in state
> <Peer in Cluster>, has disconnected from glusterd.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.557051] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol axwayrct not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.557073] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for axwayrct
> [... identical "Lock for vol <name> not held" / "Lock not released for
> <name>" warning pairs repeat at 04:53:45 for the volumes: bacarauat,
> bigdatauat, centreon_sondes, data_export, data_recette, desireuat,
> documents, euruat, flowfmkuat, fw_dev, fw_uat, fwtmp_uat, gcnuat,
> home_dev, nichtest, oprwede1, parcwebUAT, pfwa, primeuat, pub_landigger,
> rrddata, s4cmsuat, sauvegarde_git, scicsuat, share_data, sifuat,
> sltwede1, svg_my_epad, svg_pg_asys_dev_arch, svg_pg_asys_dev_bkp,
> svg_pg_asys_rct_arch, svg_pg_asys_rct_bkp, svg_pg_csctls_e1_rct_arch,
> svg_pg_csctls_e1_rct_bkp, svg_pg_csctls_secure_e1_rct_arch,
> svg_pg_csctls_secure_e1_rct_bkp, svg_pg_csctls_secure_uat_rct_arch,
> svg_pg_csctls_secure_uat_rct_bkp, svg_pg_fwsi_dev_arch,
> svg_pg_fwsi_dev_bkp and svg_pg_fwsi_rct_arch ...]
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558784] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_fwsi_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558797] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_fwsi_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558823] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_spot_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558836] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_spot_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558861] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_spot_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558875] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_spot_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558900] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558914] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558939] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558953] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558978] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.558997] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559024] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_stat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559038] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_stat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559063] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_uat_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559077] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_uat_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559102] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_uat_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559122] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_uat_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559151] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wasac_dev_arch not
held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559165] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wasac_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559191] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wasac_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559205] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wasac_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559230] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559244] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559269] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559289] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559316] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559330] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559355] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_wed_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559375] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_wed_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559402] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_dev_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559416] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_dev_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559789] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_dev_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559807] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_dev_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559834] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_rct_arch not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559848] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_rct_arch
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559874] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol svg_pg_woip_rct_bkp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559887] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for svg_pg_woip_rct_bkp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559913] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol sysabo not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559933] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for sysabo
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559960] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol testformation not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.559974] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for testformation
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560000] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol tmp not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560013] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560039] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol userfiles not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560052] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for userfiles
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560077] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol wed not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560091] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for wed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560116] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol wedfmkuat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560129] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for wedfmkuat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560154] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_dq_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560168] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_dq_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560193] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_fwsi_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560207] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_fwsi_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560239] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_e1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560254] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_e1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560288] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_secure_e1 not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560305] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_secure_e1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560332] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_secure_uat not
held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560346] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_secure_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560371] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_wasac_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560385] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_wasac_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560410] W
> [glusterd-locks.c:796:glusterd_mgmt_v3_unlock]
>
(-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x22119)
> [0x7fc9c0996119]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x2caae)
> [0x7fc9c09a0aae]
>
-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdf293)
> [0x7fc9c0a53293] ) 0-management: Lock for vol xlog_woip_uat not held
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560424] W [MSGID:
> 106117] [glusterd-handler.c:6225:__glusterd_peer_rpc_notify] 0-management:
> Lock not released for xlog_woip_uat
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.560632] E
> [rpc-clnt.c:346:saved_frames_unwind] (-->
>
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x138)[0x7fc9c658ede8]
> (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcd97)[0x7fc9c6334d97]
(-->
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xcebe)[0x7fc9c6334ebe] (-->
>
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3)[0x7fc9c6335e93]
> (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xea18)[0x7fc9c6336a18]
)))))
> 0-management: forced unwinding frame type(Peer mgmt) op(--(2)) called at
> 2020-04-30 04:53:45.548964 (xid=0x14)
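To see at a glance which volumes generate the repeated warnings and how often, the log can be summarized with a short shell pipeline. This is only a sketch: the default log path `/var/log/glusterfs/glusterd.log` and the sample lines written to `/tmp/glusterd.sample` are assumptions for illustration; point `summarize` at the real log instead.

```shell
# Count "Lock not released" warnings per volume, most frequent first.
# The volume name is the last field of each matching log line.
summarize() {
  grep 'Lock not released for' "$1" | awk '{print $NF}' | sort | uniq -c | sort -rn
}

# Self-contained demo input (in practice use /var/log/glusterfs/glusterd.log).
printf '%s\n' \
  '[2020-04-30 04:53:45.558581] W [MSGID: 106117] Lock not released for tmp' \
  '[2020-04-30 04:53:45.558595] W [MSGID: 106117] Lock not released for tmp' \
  '[2020-04-30 04:53:45.558620] W [MSGID: 106117] Lock not released for userfiles' \
  > /tmp/glusterd.sample

summarize /tmp/glusterd.sample
```

The same pipeline works for the "Lock for vol ... not held" lines by changing the grep pattern.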
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.590404] I [MSGID:
> 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd:
> Received ACC from uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb, host:
> glusterDevVM1, port: 0
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.605795] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=axwayrct) to existing process with pid 13968
> [The same "adding svc glustershd (volume=<name>) to existing process with
> pid 13968" message from
> [glusterd-svc-helper.c:901:glusterd_attach_svc] repeats between
> 04:53:45.607702 and 04:53:46.106536 for the volumes bacarauat,
> bigdatauat, centreon_sondes, data_export, data_recette, desireuat,
> documents, euruat, flowfmkuat, fw_dev, fw_uat, fwtmp_uat, gcnuat,
> home_dev, nichtest, oprwede1, parcwebUAT, pfwa, primeuat, pub_landigger,
> rrddata, s4cmsuat, sauvegarde_git, scicsuat, share_data, sifuat,
> sltwede1, svg_my_epad, svg_pg_asys_dev_arch, svg_pg_asys_dev_bkp,
> svg_pg_asys_rct_arch, svg_pg_asys_rct_bkp, svg_pg_csctls_e1_rct_arch,
> svg_pg_csctls_e1_rct_bkp, svg_pg_csctls_secure_e1_rct_arch,
> svg_pg_csctls_secure_e1_rct_bkp, svg_pg_csctls_secure_uat_rct_arch,
> svg_pg_csctls_secure_uat_rct_bkp, svg_pg_fwsi_dev_arch,
> svg_pg_fwsi_dev_bkp, svg_pg_fwsi_rct_arch, svg_pg_fwsi_rct_bkp,
> svg_pg_spot_dev_arch and svg_pg_spot_dev_bkp.]
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.118467] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.129270] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.140630] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.161897] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.172702] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_uat_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.183861] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_uat_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.194910] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wasac_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.206284] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wasac_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.217630] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.229499] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.242008] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.253692] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.265927] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.280293] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.291126] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.301830] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.313908] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=sysabo) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.325862] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=testformation) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.338066] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=tmp) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.350849] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=userfiles) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.362816] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=wed) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.373710] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=wedfmkuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.385547] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_dq_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.397006] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_fwsi_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.408271] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_e1) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.421135] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_secure_e1) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.432232] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_secure_uat) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.444636] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.457260] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_woip_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.459012] I [MSGID:
> 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid:
> e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.468268] I [MSGID:
> 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.473232] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume bacarauat attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.473316] I [MSGID:
> 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
> Received ACC from uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.477878] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume bigdatauat attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.482693] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume axwayrct attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.487866] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume centreon_sondes attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.492983] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume data_export failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.497915] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume data_recette failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.501917] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume desireuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.506923] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume documents failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.512438] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume euruat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.518040] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume flowfmkuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.522926] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume fw_dev failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.527659] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume fw_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.532753] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume fwtmp_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.537001] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume gcnuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.541751] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume home_dev failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.547362] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume nichtest failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.552212] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume oprwede1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.556994] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume parcwebUAT failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.562588] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume pfwa failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.567076] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume primeuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.572840] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume pub_landigger failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.577506] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume rrddata failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.582343] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume s4cmsuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.588770] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sauvegarde_git failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.594366] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume scicsuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.599377] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume share_data failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.606017] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sifuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.611335] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sltwede1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.616729] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_my_epad failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.621603] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.626638] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.632579] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.637450] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.642303] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_e1_rct_arch failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.647140] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_e1_rct_bkp failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.652358] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_e1_rct_arch failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.657585] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_e1_rct_bkp failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.662443] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_uat_rct_arch failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.668309] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_uat_rct_bkp failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.673999] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.678921] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.684052] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.690447] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.696317] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_spot_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.701607] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_spot_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.706421] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.711689] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.716051] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.721992] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.727388] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_uat_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.732008] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_uat_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.736973] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wasac_dev_arch failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.742294] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wasac_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.747477] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.752384] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.757284] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.762644] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.767639] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.772793] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.778061] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.783302] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.788061] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sysabo failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.793207] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume testformation failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.798291] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume tmp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.803416] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume userfiles failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.808308] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume wed failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.814324] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume wedfmkuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.819107] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_dq_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.823288] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_fwsi_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.828690] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_e1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.833488] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_secure_e1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.838268] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_secure_uat failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.843147] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:46.848072] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_woip_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.313556] I [MSGID:
> 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd:
> Received ACC from uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974, host:
> glusterDevVM3, port: 0
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.329381] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=axwayrct) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.331387] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=bacarauat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.333029] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=bigdatauat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.334597] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=centreon_sondes) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.339182] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=data_export) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.350103] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=data_recette) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.360630] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=desireuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.371890] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=documents) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.382506] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=euruat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.394164] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=flowfmkuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.404872] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=fw_dev) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.415855] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=fw_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.426897] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=fwtmp_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.437462] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=gcnuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.449651] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=home_dev) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.460781] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=nichtest) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.471934] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=oprwede1) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.482599] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=parcwebUAT) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.493873] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=pfwa) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.504653] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=primeuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.516799] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=pub_landigger) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.527883] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=rrddata) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.539208] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=s4cmsuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.550367] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=sauvegarde_git) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.561922] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=scicsuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.572642] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=share_data) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.584469] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=sifuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.595899] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=sltwede1) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.607186] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_my_epad) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.618934] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_asys_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.630429] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_asys_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.641808] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_asys_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.653225] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_asys_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.664938] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_csctls_e1_rct_arch) to existing process with
> pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.675713] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_csctls_e1_rct_bkp) to existing process with
> pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.687374] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_csctls_secure_e1_rct_arch) to existing
> process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.699251] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_csctls_secure_e1_rct_bkp) to existing process
> with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.710350] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_csctls_secure_uat_rct_arch) to existing
> process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.721452] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_csctls_secure_uat_rct_bkp) to existing
> process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.733278] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_fwsi_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.746155] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_fwsi_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.756870] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_fwsi_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.767721] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_fwsi_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.778731] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_spot_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.789891] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_spot_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.800824] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.811368] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.821870] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.832614] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_stat_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.843248] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_uat_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.854360] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_uat_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.865363] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wasac_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.876405] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wasac_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.887644] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.898755] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.909846] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.921089] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_wed_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.932033] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_dev_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.943601] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_dev_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.954545] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_rct_arch) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.965672] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=svg_pg_woip_rct_bkp) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.976431] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=sysabo) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.987562] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=testformation) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.998514] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=tmp) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.009442] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=userfiles) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.019909] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=wed) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.030875] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=wedfmkuat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.042410] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_dq_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.053263] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_fwsi_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.064913] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_e1) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.075672] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_secure_e1) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.086074] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_secure_uat) to existing process with pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.097069] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_wasac_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.107716] I [MSGID:
> 106618] [glusterd-svc-helper.c:901:glusterd_attach_svc] 0-glusterd: adding
> svc glustershd (volume=xlog_woip_uat) to existing process with pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.115507] I [MSGID:
> 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid:
> 0d8a3686-9e37-4ce7-87bf-c85d1ec40974
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.115576] I [MSGID:
> 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.118770] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume axwayrct attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.118854] I [MSGID:
> 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
> Received ACC from uuid: 0d8a3686-9e37-4ce7-87bf-c85d1ec40974
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.126469] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume bacarauat attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.136117] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume bigdatauat attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.146479] I [MSGID:
> 106617] [glusterd-svc-helper.c:680:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume centreon_sondes attached successfully to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.155857] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume data_export failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.165439] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume data_recette failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.174466] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume desireuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.184963] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume documents failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.193557] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:1417C086:SSL routines:tls_process_client_certificate:certificate
> verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.193615] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:140A4044:SSL routines:SSL_clear:internal error
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.194156] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume euruat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.204137] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume flowfmkuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.213284] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume fw_dev failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.222847] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume fw_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.232232] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume fwtmp_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.242077] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume gcnuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.253083] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume home_dev failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.260861] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume nichtest failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.269709] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume oprwede1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.279133] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume parcwebUAT failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.287840] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume pfwa failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.297333] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume primeuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.306648] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume pub_landigger failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.316242] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume rrddata failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.324951] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume s4cmsuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.333094] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sauvegarde_git failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.340171] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume scicsuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.349468] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume share_data failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.357675] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sifuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.365797] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sltwede1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.374008] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_my_epad failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.381902] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.390826] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.399462] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.407405] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_asys_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.415667] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_e1_rct_arch failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.423676] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_e1_rct_bkp failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.431865] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_e1_rct_arch failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.438165] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_e1_rct_bkp failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.443055] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_uat_rct_arch failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.448072] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_csctls_secure_uat_rct_bkp failed to attach
> to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.452946] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.457677] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.462471] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.467317] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_fwsi_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.472088] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_spot_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.477232] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_spot_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.481092] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.486204] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.490982] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.496615] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_stat_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.501516] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_uat_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.506503] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_uat_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.510742] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wasac_dev_arch failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.516568] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wasac_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.521437] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.526419] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.530684] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.536198] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_wed_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.541424] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_dev_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.546293] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_dev_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.551174] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_rct_arch failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.556095] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume svg_pg_woip_rct_bkp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.561341] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume sysabo failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.565789] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume testformation failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.567081] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume tmp failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.567463] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume userfiles failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.568015] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume wed failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.568271] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume wedfmkuat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.568463] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_dq_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.568549] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_fwsi_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.569502] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_e1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.569738] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_secure_e1 failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.569917] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_secure_uat failed to attach to pid
> 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.570093] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_wasac_uat failed to attach to pid 13968
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.570181] E [MSGID:
> 106617] [glusterd-svc-helper.c:684:glusterd_svc_attach_cbk] 0-management:
> svc glustershd of volume xlog_woip_uat failed to attach to pid 13968
>
>
> node3 glusterDevVM3:~# sed -n '/SSL support.*ENABLED/d;/using
> certificate depth/d;/^.2020-04-30 04:53/p'
> /var/log/glusterfs/{cmd_history,glusterd}.log
> /var/log/glusterfs/cmd_history.log:[2020-04-30 04:53:45.081953] : volume
> status tmp : FAILED : Another transaction is in progress for tmp. Please
> try again after some time.
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.149428] I [MSGID:
> 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 70200
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.516320] I [MSGID:
> 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req]
> 0-glusterd: Received probe from uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.544746] I [MSGID:
> 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to glusterDevVM1 (0), ret: 0, op_ret: 0
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.564474] I [MSGID:
> 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid:
> e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.564508] I [MSGID:
> 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:33.574315] I [MSGID:
> 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
> Received ACC from uuid: e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.266060] W
> [glusterd-locks.c:579:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x41016)
> [0x7fd373dee016]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0x3203c)
> [0x7fd373ddf03c]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdebca)
> [0x7fd373e8bbca] ) 0-management: Lock for tmp held by
> e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.266089] E [MSGID:
> 106118] [glusterd-op-sm.c:3958:glusterd_op_ac_lock] 0-management: Unable to
> acquire lock for tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.266168] E [MSGID:
> 106376] [glusterd-op-sm.c:7861:glusterd_op_sm] 0-management: handler
> returned: -1
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.267447] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:1417C086:SSL routines:tls_process_client_certificate:certificate
> verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.267500] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:140A4044:SSL routines:SSL_clear:internal error
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:41.267518] W
> [socket.c:1502:__socket_read_simple_msg] 0-socket.management: reading from
> socket failed. Error (Input/output error), peer (10.57.106.158:49081)
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:44.323453] I [MSGID:
> 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 70200
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.081794] I [MSGID:
> 106499] [glusterd-handler.c:4264:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.081913] W
> [glusterd-locks.c:579:glusterd_mgmt_v3_lock]
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xd9c8c)
> [0x7fd373e86c8c]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xd9a95)
> [0x7fd373e86a95]
> -->/usr/lib/x86_64-linux-gnu/glusterfs/7.5/xlator/mgmt/glusterd.so(+0xdebca)
> [0x7fd373e8bbca] ) 0-management: Lock for tmp held by
> e2263e4d-a307-45d5-9cec-e1791f7a45fb
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.081931] E [MSGID:
> 106118] [glusterd-syncop.c:1883:gd_sync_task_begin] 0-management: Unable to
> acquire lock for tmp
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.549090] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:1417C086:SSL routines:tls_process_client_certificate:certificate
> verify failed
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:45.549142] E
> [socket.c:241:ssl_dump_error_stack] 0-socket.management:
> error:140A4044:SSL routines:SSL_clear:internal error
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:48.577494] I [MSGID:
> 106163] [glusterd-handshake.c:1433:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 70200
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.277115] I [MSGID:
> 106490] [glusterd-handler.c:2434:__glusterd_handle_incoming_friend_req]
> 0-glusterd: Received probe from uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.304944] I [MSGID:
> 106493] [glusterd-handler.c:3715:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to glusterDevVM2 (0), ret: 0, op_ret: 0
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.324935] I [MSGID:
> 106492] [glusterd-handler.c:2619:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid:
> 7f6c3023-144b-4db2-9063-d90926dbdd18
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:49.334823] I [MSGID:
> 106502] [glusterd-handler.c:2660:__glusterd_handle_friend_update]
> 0-management: Received my uuid as Friend
> /var/log/glusterfs/glusterd.log:[2020-04-30 04:53:50.119067] I [MSGID:
> 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management:
> Received ACC from uuid: 7f6c3023-144b-4db2-9063-d90926dbdd18
>
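The "Lock for tmp held by e2263e4d-…" line above names the lock holder by peer UUID. A minimal sketch of mapping that UUID to a hostname through glusterd's peer store follows; a temp dir with demo data stands in for the real /var/lib/glusterd/peers, and the hostname shown is an assumption, not taken from this cluster. Restarting glusterd on the holding peer is the usual way a stale lock gets cleared.

```shell
# Illustrative: glusterd peer-store files are named by UUID and contain
# uuid=/state=/hostname1= lines. A temp dir with demo data stands in for
# the real /var/lib/glusterd/peers.
peers=$(mktemp -d)
printf 'uuid=e2263e4d-a307-45d5-9cec-e1791f7a45fb\nstate=3\nhostname1=glusterDevVM3\n' \
    > "$peers/e2263e4d-a307-45d5-9cec-e1791f7a45fb"

# Which peer holds the lock? (hostname1=glusterDevVM3 is demo data here)
holder=$(grep -h '^hostname1=' \
    "$peers/e2263e4d-a307-45d5-9cec-e1791f7a45fb" | cut -d= -f2)
echo "$holder"
```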
> Best regards,
> Nicolas.
>
> ------------------------------
> *From: *"Sanju Rakonde" <srakonde at redhat.com>
> *To: *nico at furyweb.fr
> *Cc: *"Hari Gowtham" <hgowtham at redhat.com>, "Nikhil Ladha" <nladha at redhat.com>, "gluster-users" <gluster-users at gluster.org>
> *Sent: *Thursday, April 30, 2020 03:34:04
> *Subject: *Re: [Gluster-users] never ending logging
>
> Response Inline.
>
> On Wed, Apr 29, 2020 at 10:22 PM <nico at furyweb.fr> wrote:
>
>> Thanks Hari.
>>
>> First of all, there is no longer a rejected node today.
>>
>> Which volfile must be replicated on all nodes? These ones? Must all of
>> them be the same?
>>
> The volume info files located at /var/lib/glusterd/vols/<volname>/info
> should be consistent across the cluster, not the below-mentioned
> volfiles; a difference between those volfiles is expected.
>
> Looking at the "Another transaction is in progress" issue, this shouldn't
> be seen if only one transaction is running at a time. Please check whether
> any script is executing commands in the background. If you are still
> experiencing the same after ensuring that no script is running, please
> share cmd_history.log and glusterd.log from all the nodes around the
> timestamp when the error is seen.
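To share only the relevant window, the logs can be trimmed around the timestamp first. A sketch with demo data; the temp file stands in for /var/log/glusterfs/glusterd.log, and the 04:5x pattern matches the errors quoted in this thread:

```shell
# Illustrative: trim glusterd.log / cmd_history.log to the minutes around
# the error before sharing. A temp file with two demo lines stands in for
# the real log.
log=$(mktemp)
cat > "$log" <<'EOF'
[2020-04-30 03:10:00.000000] I unrelated earlier line
[2020-04-30 04:53:41.266089] E [MSGID: 106118] Unable to acquire lock for tmp
EOF
# Keep everything stamped 04:50:00-04:59:59 on 2020-04-30:
grep '^\[2020-04-30 04:5' "$log"
```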
>
> 2b4a85d2fef7661be0f38b670b69cd02
>> node1/vols/tmp/tmp.glusterDevVM1.bricks-tmp-brick1-data.vol
>> dec273968c263e3aa12a8be073bea534
>> node2/vols/tmp/tmp.glusterDevVM1.bricks-tmp-brick1-data.vol
>> dec273968c263e3aa12a8be073bea534
>> node3/vols/tmp/tmp.glusterDevVM1.bricks-tmp-brick1-data.vol
>> dec273968c263e3aa12a8be073bea534
>> node1/vols/tmp/tmp.glusterDevVM2.bricks-tmp-brick1-data.vol
>> 2b4a85d2fef7661be0f38b670b69cd02
>> node2/vols/tmp/tmp.glusterDevVM2.bricks-tmp-brick1-data.vol
>> dec273968c263e3aa12a8be073bea534
>> node3/vols/tmp/tmp.glusterDevVM2.bricks-tmp-brick1-data.vol
>> a71fabf46ea02ce41c5cc7c3dd3cb86d
>> node1/vols/tmp/tmp.glusterDevVM3.bricks-tmp-brick1-data.vol
>> a71fabf46ea02ce41c5cc7c3dd3cb86d
>> node2/vols/tmp/tmp.glusterDevVM3.bricks-tmp-brick1-data.vol
>> 3a6a8cb4804be3453826dd9bdf865859
>> node3/vols/tmp/tmp.glusterDevVM3.bricks-tmp-brick1-data.vol
>>
>> # diff node1/vols/tmp/tmp.glusterDevVM1.bricks-tmp-brick1-data.vol
>> node2/vols/tmp/tmp.glusterDevVM1.bricks-tmp-brick1-data.vol
>> 5c5
>> < option shared-brick-count 1
>> ---
>> > option shared-brick-count 0
>>
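The diff above shows the only drift in those tmp volfiles is the per-node shared-brick-count value, which is the expected difference. One way to confirm nothing else differs is to checksum the volfiles with that line masked; a sketch on demo files (the real paths would be under /var/lib/glusterd/vols/<volname>/):

```shell
# Illustrative: mask the per-node shared-brick-count before checksumming so
# only unexpected differences change the digest. Temp files carry demo
# content mirroring the diff shown above.
v1=$(mktemp); v2=$(mktemp)
printf 'volume tmp-posix\n    option shared-brick-count 1\nend-volume\n' > "$v1"
printf 'volume tmp-posix\n    option shared-brick-count 0\nend-volume\n' > "$v2"
mask() { sed 's/shared-brick-count [0-9][0-9]*/shared-brick-count X/' "$1"; }
h1=$(mask "$v1" | md5sum | cut -d' ' -f1)
h2=$(mask "$v2" | md5sum | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "only shared-brick-count differs"
```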
>> Example of another volume :
>> 2aebab43f5829785e53704328f965f01
>> node1/vols/rrddata/rrddata.glusterDevVM1.bricks-rrddata-brick1-data.vol
>> 35a0ada7ba6542e2ac1bd566f84603c6
>> node2/vols/rrddata/rrddata.glusterDevVM1.bricks-rrddata-brick1-data.vol
>> 35a0ada7ba6542e2ac1bd566f84603c6
>> node3/vols/rrddata/rrddata.glusterDevVM1.bricks-rrddata-brick1-data.vol
>> 35a0ada7ba6542e2ac1bd566f84603c6
>> node1/vols/rrddata/rrddata.glusterDevVM2.bricks-rrddata-brick1-data.vol
>> 2aebab43f5829785e53704328f965f01
>> node2/vols/rrddata/rrddata.glusterDevVM2.bricks-rrddata-brick1-data.vol
>> 35a0ada7ba6542e2ac1bd566f84603c6
>> node3/vols/rrddata/rrddata.glusterDevVM2.bricks-rrddata-brick1-data.vol
>> 5b6b3453be116a2e1489c6ffc6b6fa86
>> node1/vols/rrddata/rrddata.glusterDevVM3.bricks-rrddata-brick1-data.vol
>> 5b6b3453be116a2e1489c6ffc6b6fa86
>> node2/vols/rrddata/rrddata.glusterDevVM3.bricks-rrddata-brick1-data.vol
>> 158d3a073596133643c634c1ecd603ba
>> node3/vols/rrddata/rrddata.glusterDevVM3.bricks-rrddata-brick1-data.vol
>>
>> Best regards,
>> Nicolas.
>>
>> ------------------------------
>> *From: *"Hari Gowtham" <hgowtham at redhat.com>
>> *To: *nico at furyweb.fr, "Sanju Rakonde" <srakonde at redhat.com>
>> *Cc: *"Nikhil Ladha" <nladha at redhat.com>, "gluster-users" <gluster-users at gluster.org>
>> *Sent: *Wednesday, April 29, 2020 14:11:29
>> *Subject: *Re: [Gluster-users] never ending logging
>>
>> Hi Nicolas,
>>
>> I would like to mention 2 things here.
>>
>> 1) The bricks not coming online might be because of the glusterd not
>> being in the right state.
>> As you mentioned above, the node rejection is most probably because of
>> the volfile mismatch.
>> During the glusterd reboot, we check for the volfile in each of the
nodes
>> to be the same.
>> If any node has a difference in volfile, you will notice that node to
be
>> rejected in the peer status.
>> To overcome this, we usually delete/copy-paste the volfile from the
>> correct machine to the rest
>> and then restart glusterd on each node one after the other. @Sanju
>> Rakonde <srakonde at redhat.com> can explain more on this.
>>
>> 2) "Another transaction is in progress" is a message you usually get
>> when you try to run gluster commands from different nodes at the same
>> time. This would rarely hit the locking part and throw this message.
>> If you see it a number of times, then it might be that glusterd
>> is hanging somewhere.
>> If you saw this after the above rejection issue, please do fix
>> glusterd as per the above steps and then try again.
>> That should fix it. Else we need to restart glusterd for this particular
>> issue.
>>
>> Looks like the resultant state is because of the volfile mismatch, so
>> please do check and fix that. This will in turn fix the
>> peer rejection. Once the rejection is fixed, the rest of the things should
>> fall into the right place.
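A rough sketch of that copy-and-restart procedure; the temp directories here stand in for /var/lib/glusterd/vols on the trusted and the rejected node, and the commands for the live cluster (hostname assumed) are shown only as comments since they must run one node at a time:

```shell
# Illustrative only: mirror the trusted node's vols/ tree over the rejected
# node's copy, then restart glusterd -- one node at a time. Temp dirs stand
# in for the real paths.
good=$(mktemp -d)    # stands in for good-node:/var/lib/glusterd/vols
stale=$(mktemp -d)   # stands in for rejected-node:/var/lib/glusterd/vols
printf 'option shared-brick-count 1\n' > "$good/info"
printf 'option shared-brick-count 0\n' > "$stale/info"

# On the real rejected node (hostname assumed):
#   systemctl stop glusterd
#   scp -r glusterDevVM1:/var/lib/glusterd/vols/ /var/lib/glusterd/
#   systemctl start glusterd
cp -a "$good/." "$stale/"
cmp -s "$good/info" "$stale/info" && echo "vols in sync"
```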
>>
>>
>>
>> On Wed, Apr 29, 2020 at 5:01 PM <nico at furyweb.fr> wrote:
>>
>>> Thanks for feedback.
>>>
>>> No problem at all before the node rejection; all worked fine, but I
>>> had to renew the SSL certificates, which were about to expire. The
>>> node rejection occurred after the certificate renewal.
>>>
>>> I'll send you links for sosreports (Debian) in separate mail.
>>>
>>> Regards,
>>> Nicolas.
>>>
>>>
>>> ------------------------------
>>> *From: *"Nikhil Ladha" <nladha at redhat.com>
>>> *To: *nico at furyweb.fr
>>> *Cc: *"gluster-users" <gluster-users at gluster.org>
>>> *Sent: *Wednesday, April 29, 2020 12:38:00
>>> *Subject: *Re: [Gluster-users] never ending logging
>>>
>>> Hi,
>>> It seems that using the steps mentioned here
>>> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/
>>> is the reason you are facing so many problems. You said that the vol
>>> files on all the nodes are different, which should never happen: it is
>>> the purpose of shd to heal the vol files in case of any differences.
>>> But the repeated deletion of the files, restarting of the glusterd
>>> service, and then disconnection of the nodes have led to a state where
>>> that task is left incomplete. Whenever you again try to start the
>>> volume or get its status, that heal runs but is not able to complete,
>>> as it cannot find a proper head from which to make the heal.
>>> Before the node rejection, and before you applied those steps to
>>> rectify it, did everything work fine?
>>> Also, can you share the sos-reports, or if not, the vol files which
>>> you see as different, so that I can be sure of that.
>>>
>>> Regards
>>> Nikhil Ladha
>>>
>>> On Wed, Apr 29, 2020 at 12:54 PM <nico at furyweb.fr> wrote:
>>>
>>>> I made another test: I restarted glusterd on all 3 nodes, and right
>>>> after the restart I can get a partial volume status, but "Locking
>>>> failed" occurs a few seconds later.
>>>>
>>>> Example of output on nodes 2 and 3:
>>>> root at glusterDevVM2:~# systemctl restart glusterd
>>>> root at glusterDevVM2:~# gluster volume status tmp
>>>> Status of volume: tmp
>>>> Gluster process TCP Port RDMA Port
>>>> Online Pid
>>>>
>>>>
------------------------------------------------------------------------------
>>>> Brick glusterDevVM2:/bricks/tmp/brick1/data N/A N/A
>>>> N N/A
>>>> Self-heal Daemon on localhost N/A N/A
>>>> N N/A
>>>>
>>>> Task Status of Volume tmp
>>>>
>>>>
------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> root at glusterDevVM2:~# gluster volume status tmp
>>>> Locking failed on glusterDevVM1. Please check log file for
details.
>>>> root at glusterDevVM2:~# gluster volume status tmp
>>>> Status of volume: tmp
>>>> Gluster process TCP Port RDMA Port
>>>> Online Pid
>>>>
>>>>
------------------------------------------------------------------------------
>>>> Brick glusterDevVM1:/bricks/tmp/brick1/data 49215 0
>>>> Y 5335
>>>> Brick glusterDevVM2:/bricks/tmp/brick1/data 49215 0
>>>> Y 5239
>>>> Self-heal Daemon on localhost N/A N/A
>>>> N N/A
>>>> Self-heal Daemon on glusterDevVM1 N/A N/A
>>>> N N/A
>>>>
>>>> Task Status of Volume tmp
>>>>
>>>>
------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> root at glusterDevVM2:~# gluster volume status tmp
>>>> Locking failed on glusterDevVM3. Please check log file for
details.
>>>> Locking failed on glusterDevVM1. Please check log file for
details.
>>>>
>>>>
>>>> root at glusterDevVM3:~# systemctl restart glusterd
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Status of volume: tmp
>>>> Gluster process TCP Port RDMA Port
>>>> Online Pid
>>>>
>>>>
------------------------------------------------------------------------------
>>>> Brick glusterDevVM1:/bricks/tmp/brick1/data 49215 0
>>>> Y 5335
>>>> Brick glusterDevVM2:/bricks/tmp/brick1/data 49215 0
>>>> Y 5239
>>>> Brick glusterDevVM3:/bricks/tmp/brick1/data 49215 0
>>>> Y 3693
>>>> Self-heal Daemon on localhost N/A N/A
>>>> N N/A
>>>> Self-heal Daemon on glusterDevVM2 N/A N/A
>>>> N N/A
>>>> Self-heal Daemon on glusterDevVM1 N/A N/A
>>>> Y 102850
>>>>
>>>> Task Status of Volume tmp
>>>>
>>>>
------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Locking failed on glusterDevVM2. Please check log file for
details.
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Locking failed on glusterDevVM1. Please check log file for
details.
>>>> Locking failed on glusterDevVM2. Please check log file for
details.
>>>> root at glusterDevVM3:~# systemctl restart glusterd
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Status of volume: tmp
>>>> Gluster process TCP Port RDMA Port
>>>> Online Pid
>>>>
>>>>
------------------------------------------------------------------------------
>>>> Brick glusterDevVM3:/bricks/tmp/brick1/data N/A N/A
>>>> N N/A
>>>> Self-heal Daemon on localhost N/A N/A
>>>> N N/A
>>>>
>>>> Task Status of Volume tmp
>>>>
>>>>
------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Another transaction is in progress for tmp. Please try again
after some
>>>> time.
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Another transaction is in progress for tmp. Please try again
after some
>>>> time.
>>>> root at glusterDevVM3:~# gluster volume status tmp
>>>> Another transaction is in progress for tmp. Please try again
after some
>>>> time.
>>>>
>>>>
>>>> ------------------------------
>>>> *From: *"Nikhil Ladha" <nladha at redhat.com>
>>>> *To: *nico at furyweb.fr
>>>> *Cc: *"gluster-users" <gluster-users at gluster.org>
>>>> *Sent: *Tuesday, April 28, 2020 14:17:46
>>>> *Subject: *Re: [Gluster-users] never ending logging
>>>>
>>>> Hi,
>>>> It says syntax error in the log you shared, so there must be some
>>>> mistake in what you are passing as an argument, or a spelling mistake.
>>>> Otherwise, how come it runs on one of them and not on the other with
>>>> the same configuration?
>>>> Also, can you please share the complete log file.
>>>> And try restarting all the nodes in the TSP (trusted storage pool),
>>>> then execute the commands.
>>>>
>>>> Regards
>>>> Nikhil Ladha
>>>>
>>>> On Tue, Apr 28, 2020 at 5:20 PM <nico at furyweb.fr>
wrote:
>>>>
>>>>> Hi.
>>>>>
>>>>> It didn't really work well: I restarted node 2 at least a dozen
>>>>> times until almost all bricks went online, but the Rejected state
>>>>> did disappear after applying the fix.
>>>>> I'm not able to create a volume, as all gluster commands issue
>>>>> the "Another transaction is in progress" error.
>>>>> All pings are less than 0.5 ms.
>>>>>
>>>>> I noticed another error in brick logs for a failed brick :
>>>>> [2020-04-28 10:58:59.009933] E [MSGID: 101021]
>>>>> [graph.y:364:graphyyerror] 0-parser: syntax error: line 140
(volume
>>>>> 'data_export-server'): "!SSLv2"
>>>>> allowed tokens are 'volume', 'type',
'subvolumes', 'option',
>>>>> 'end-volume'()
>>>>>
>>>>> root at glusterDevVM2:/var/lib/glusterd/vols/data_export#
grep -n SSLv2 *
>>>>> data_export.gfproxyd.vol:8: option
transport.socket.ssl-cipher-list
>>>>> HIGH:\!SSLv2
>>>>> data_export.gfproxyd.vol:26: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> data_export.gfproxyd.vol:44: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>>
data_export.glusterDevVM1.bricks-data_export-brick1-data.vol:140:
>>>>> option transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>>
data_export.glusterDevVM2.bricks-data_export-brick1-data.vol:140:
>>>>> option transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>>
data_export.glusterDevVM3.bricks-data_export-brick1-data.vol:145:
>>>>> option transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> data_export-shd.vol:7: option
transport.socket.ssl-cipher-list
>>>>> HIGH:\!SSLv2
>>>>> data_export-shd.vol:24: option
transport.socket.ssl-cipher-list
>>>>> HIGH:\!SSLv2
>>>>> data_export-shd.vol:41: option
transport.socket.ssl-cipher-list
>>>>> HIGH:\!SSLv2
>>>>> data_export.tcp-fuse.vol:8: option
transport.socket.ssl-cipher-list
>>>>> HIGH:\!SSLv2
>>>>> data_export.tcp-fuse.vol:24: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> data_export.tcp-fuse.vol:40: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> info:22:ssl.cipher-list=HIGH:\!SSLv2
>>>>> trusted-data_export.tcp-fuse.vol:8: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> trusted-data_export.tcp-fuse.vol:26: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> trusted-data_export.tcp-fuse.vol:44: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>> trusted-data_export.tcp-gfproxy-fuse.vol:8: option
>>>>> transport.socket.ssl-cipher-list HIGH:\!SSLv2
>>>>>
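The parser error on `"!SSLv2"` together with the literal backslash stored in the info file (`ssl.cipher-list=HIGH:\!SSLv2`) suggests the cipher list was originally set with a shell escape that got stored verbatim. A quoting sketch follows; the `gluster volume set` line is an assumed fix and is shown only as a comment, since it needs the live cluster:

```shell
# How the backslash can sneak into the stored value: in a POSIX shell,
# double quotes preserve a backslash before '!', while unquoted or
# single-quoted forms pass a clean value.
printf '%s\n' HIGH:\!SSLv2      # shell eats the backslash -> HIGH:!SSLv2
printf '%s\n' 'HIGH:!SSLv2'     # single quotes, no escape needed -> HIGH:!SSLv2
printf '%s\n' "HIGH:\!SSLv2"    # backslash kept -> HIGH:\!SSLv2 (likely culprit)
# Assumed fix on the live cluster, re-storing the value without a backslash:
#   gluster volume set data_export ssl.cipher-list 'HIGH:!SSLv2'
```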
>>>>> Another volume with the same parameters doesn't show this error:
>>>>> root at glusterDevVM2:/var/lib/glusterd/vols/userfiles#
grep '2020-04-28
>>>>> 10:5[89]:'
/var/log/glusterfs/bricks/bricks-userfiles-brick1-data.log
>>>>> [2020-04-28 10:58:53.427441] I [MSGID: 100030]
>>>>> [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfsd: Started
running
>>>>> /usr/sbin/glusterfsd version 7.5 (args:
/usr/sbin/glusterfsd -s
>>>>> glusterDevVM2 --volfile-id
>>>>> userfiles.glusterDevVM2.bricks-userfiles-brick1-data -p
>>>>>
/var/run/gluster/vols/userfiles/glusterDevVM2-bricks-userfiles-brick1-data.pid
>>>>> -S /var/run/gluster/072c6be1df6e31e4.socket --brick-name
>>>>> /bricks/userfiles/brick1/data -l
>>>>> /var/log/glusterfs/bricks/bricks-userfiles-brick1-data.log
--xlator-option
>>>>> *-posix.glusterd-uuid=7f6c3023-144b-4db2-9063-d90926dbdd18
--process-name
>>>>> brick --brick-port 49216 --xlator-option
userfiles-server.listen-port=49216)
>>>>> [2020-04-28 10:58:53.428426] I
[glusterfsd.c:2594:daemonize]
>>>>> 0-glusterfs: Pid of current running process is 5184
>>>>> [2020-04-28 10:58:53.432337] I
>>>>> [socket.c:4350:ssl_setup_connection_params]
0-socket.glusterfsd: SSL
>>>>> support for glusterd is ENABLED
>>>>> [2020-04-28 10:58:53.436982] I
>>>>> [socket.c:4360:ssl_setup_connection_params]
0-socket.glusterfsd: using
>>>>> certificate depth 1
>>>>> [2020-04-28 10:58:53.437873] I
[socket.c:958:__socket_server_bind]
>>>>> 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
>>>>> [2020-04-28 10:58:53.438830] I
>>>>> [socket.c:4347:ssl_setup_connection_params] 0-glusterfs:
SSL support on the
>>>>> I/O path is ENABLED
>>>>> [2020-04-28 10:58:53.439206] I
>>>>> [socket.c:4350:ssl_setup_connection_params] 0-glusterfs:
SSL support for
>>>>> glusterd is ENABLED
>>>>> [2020-04-28 10:58:53.439238] I
>>>>> [socket.c:4360:ssl_setup_connection_params] 0-glusterfs:
using certificate
>>>>> depth 1
>>>>> [2020-04-28 10:58:53.441296] I [MSGID: 101190]
>>>>> [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll:
Started thread
>>>>> with index 0
>>>>> [2020-04-28 10:58:53.441434] I [MSGID: 101190]
>>>>> [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll:
Started thread
>>>>> with index 1
>>>>> [2020-04-28 10:59:01.609052] I
>>>>> [rpcsvc.c:2690:rpcsvc_set_outstanding_rpc_limit]
0-rpc-service: Configured
>>>>> rpc.outstanding-rpc-limit with value 64
>>>>> [2020-04-28 10:59:01.609353] I
>>>>> [socket.c:4347:ssl_setup_connection_params]
0-tcp.userfiles-server: SSL
>>>>> support on the I/O path is ENABLED
>>>>> [2020-04-28 10:59:01.609373] I
>>>>> [socket.c:4350:ssl_setup_connection_params]
0-tcp.userfiles-server: SSL
>>>>> support for glusterd is ENABLED
>>>>> [2020-04-28 10:59:01.609388] I
>>>>> [socket.c:4360:ssl_setup_connection_params]
0-tcp.userfiles-server: using
>>>>> certificate depth 1
>>>>> [2020-04-28 10:59:01.609403] I
>>>>> [socket.c:4363:ssl_setup_connection_params]
0-tcp.userfiles-server: using
>>>>> cipher list HIGH:!SSLv2
>>>>> [2020-04-28 10:59:01.644924] I
>>>>> [socket.c:4350:ssl_setup_connection_params]
0-socket.userfiles-changelog:
>>>>> SSL support for glusterd is ENABLED
>>>>> [2020-04-28 10:59:01.644958] I
>>>>> [socket.c:4360:ssl_setup_connection_params]
0-socket.userfiles-changelog:
>>>>> using certificate depth 1
>>>>>
>>>>>
>>>>> ------------------------------
>>>>> *From: *"Nikhil Ladha" <nladha at redhat.com>
>>>>> *To: *nico at furyweb.fr
>>>>> *Cc: *"gluster-users" <gluster-users at gluster.org>
>>>>> *Sent: *Tuesday, April 28, 2020 09:31:45
>>>>> *Subject: *Re: [Gluster-users] never ending logging
>>>>>
>>>>> Hi,
>>>>> Okay. So, after applying the fix, everything worked well? Meaning
>>>>> all the peers were in the Connected state?
>>>>> If so, can you try creating a new volume without enabling SSL and
>>>>> share the log? Also, for the volume that is not starting, can you
>>>>> try the steps mentioned here
>>>>> https://docs.gluster.org/en/latest/Troubleshooting/troubleshooting-glusterd/
>>>>> (only the "Common issues and how to resolve them" section) and
>>>>> report what logs you get?
>>>>> Also, could you ping-test all the peers?
>>>>> The error "failed to fetch volume file" occurs when a node is not
>>>>> able to fetch the vol file from its peers; all the peers in the
>>>>> cluster share the same vol file for a volume.
>>>>>
>>>>> Regards
>>>>> Nikhil Ladha
>>>>>
>>>>> On Tue, Apr 28, 2020 at 12:37 PM <nico at furyweb.fr>
wrote:
>>>>>
>>>>>> Hi.
>>>>>>
>>>>>> No operation on any volume or brick; the only change was the SSL
>>>>>> certificate renewal on the 3 nodes and all clients. Then node 2 was
>>>>>> rejected, and I applied the following steps to fix it:
>>>>>> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/
>>>>>> I also saw
>>>>>> https://docs.gluster.org/en/latest/Troubleshooting/troubleshooting-glusterd/
>>>>>> but that solution wasn't applicable, as cluster.max-op-version
>>>>>> doesn't exist and the op-versions are the same on all 3 nodes.
>>>>>>
>>>>>> The strange thing is that the "failed to fetch volume file" error
>>>>>> occurs on the node owning the brick; does it mean it can't access
>>>>>> its own brick?
>>>>>>
>>>>>> Regards,
>>>>>> Nicolas.
>>>>>>
>>>>>> ------------------------------
>>>>>> *From: *"Nikhil Ladha" <nladha at redhat.com>
>>>>>> *To: *nico at furyweb.fr
>>>>>> *Cc: *"gluster-users" <gluster-users at gluster.org>
>>>>>> *Sent: *Tuesday, April 28, 2020 07:43:20
>>>>>> *Subject: *Re: [Gluster-users] never ending logging
>>>>>>
>>>>>> Hi,
>>>>>> Since all things are working fine except for a few bricks which are
>>>>>> not coming up, I doubt there is any issue with gluster itself. Did
>>>>>> you by chance make any changes to those bricks, or to the volume or
>>>>>> the node to which they are linked?
>>>>>> As far as the SSL logs are concerned, I am looking into that matter.
>>>>>>
>>>>>> Regards
>>>>>> Nikhil Ladha
>>>>>>
>>>>>> On Mon, Apr 27, 2020 at 7:17 PM <nico at
furyweb.fr> wrote:
>>>>>>
>>>>>>> Thanks for the reply.
>>>>>>>
>>>>>>> I updated the storage pool to 7.5 and restarted all 3 nodes
>>>>>>> sequentially.
>>>>>>> All nodes now appear in the Connected state from every node, and
>>>>>>> gluster volume list shows all 74 volumes.
>>>>>>> SSL log lines are still flooding the glusterd log file on all
>>>>>>> nodes but don't appear in the brick log files. As there is no
>>>>>>> information about the volume or client on these lines, I'm not
>>>>>>> able to check whether a certain volume produces this error or not.
>>>>>>> I also tried pstack after installing the Debian package
>>>>>>> glusterfs-dbg, but I still get a "No symbols" error.
>>>>>>>
>>>>>>> I found that 5 brick processes didn't start on node 2, and 1 on
>>>>>>> node 3:
>>>>>>> [2020-04-27 11:54:23.622659] I [MSGID: 100030]
>>>>>>> [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfsd:
Started running
>>>>>>> /usr/sbin/glusterfsd version 7.5 (args:
/usr/sbin/glusterfsd -s
>>>>>>> glusterDevVM2 --volfile-id
>>>>>>>
svg_pg_wed_dev_bkp.glusterDevVM2.bricks-svg_pg_wed_dev_bkp-brick1-data -p
>>>>>>>
/var/run/gluster/vols/svg_pg_wed_dev_bkp/glusterDevVM2-bricks-svg_pg_wed_dev_bkp-brick1-data.pid
>>>>>>> -S /var/run/gluster/5023d38a22a8a874.socket
--brick-name
>>>>>>> /bricks/svg_pg_wed_dev_bkp/brick1/data -l
>>>>>>>
/var/log/glusterfs/bricks/bricks-svg_pg_wed_dev_bkp-brick1-data.log
>>>>>>> --xlator-option
*-posix.glusterd-uuid=7f6c3023-144b-4db2-9063-d90926dbdd18
>>>>>>> --process-name brick --brick-port 49206
--xlator-option
>>>>>>> svg_pg_wed_dev_bkp-server.listen-port=49206)
>>>>>>> [2020-04-27 11:54:23.632870] I
[glusterfsd.c:2594:daemonize]
>>>>>>> 0-glusterfs: Pid of current running process is 5331
>>>>>>> [2020-04-27 11:54:23.636679] I
>>>>>>> [socket.c:4350:ssl_setup_connection_params]
0-socket.glusterfsd: SSL
>>>>>>> support for glusterd is ENABLED
>>>>>>> [2020-04-27 11:54:23.636745] I
>>>>>>> [socket.c:4360:ssl_setup_connection_params]
0-socket.glusterfsd: using
>>>>>>> certificate depth 1
>>>>>>> [2020-04-27 11:54:23.637580] I
[socket.c:958:__socket_server_bind]
>>>>>>> 0-socket.glusterfsd: closing (AF_UNIX) reuse check
socket 9
>>>>>>> [2020-04-27 11:54:23.637932] I
>>>>>>> [socket.c:4347:ssl_setup_connection_params]
0-glusterfs: SSL support on the
>>>>>>> I/O path is ENABLED
>>>>>>> [2020-04-27 11:54:23.637949] I
>>>>>>> [socket.c:4350:ssl_setup_connection_params]
0-glusterfs: SSL support for
>>>>>>> glusterd is ENABLED
>>>>>>> [2020-04-27 11:54:23.637960] I
>>>>>>> [socket.c:4360:ssl_setup_connection_params]
0-glusterfs: using certificate
>>>>>>> depth 1
>>>>>>> [2020-04-27 11:54:23.639324] I [MSGID: 101190]
>>>>>>> [event-epoll.c:682:event_dispatch_epoll_worker]
0-epoll: Started thread
>>>>>>> with index 0
>>>>>>> [2020-04-27 11:54:23.639380] I [MSGID: 101190]
>>>>>>> [event-epoll.c:682:event_dispatch_epoll_worker]
0-epoll: Started thread
>>>>>>> with index 1
>>>>>>> [2020-04-27 11:54:28.933102] E
>>>>>>> [glusterfsd-mgmt.c:2217:mgmt_getspec_cbk]
0-glusterfs: failed to get the
>>>>>>> 'volume file' from server
>>>>>>> [2020-04-27 11:54:28.933134] E
>>>>>>> [glusterfsd-mgmt.c:2416:mgmt_getspec_cbk] 0-mgmt:
failed to fetch volume
>>>>>>> file
>>>>>>>
(key:svg_pg_wed_dev_bkp.glusterDevVM2.bricks-svg_pg_wed_dev_bkp-brick1-data)
>>>>>>> [2020-04-27 11:54:28.933361] W
[glusterfsd.c:1596:cleanup_and_exit]
>>>>>>>
(-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe5d1) [0x7f2b08ec35d1]
>>>>>>> -->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x8d0)
[0x55d46cb5a110]
>>>>>>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x54)
[0x55d46cb51ec4] ) 0-:
>>>>>>> received signum (0), shutting down
>>>>>>>
>>>>>>> I tried to stop the volume, but gluster commands are still locked
>>>>>>> ("Another transaction is in progress").
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Nicolas.
>>>>>>>
>>>>>>> ------------------------------
>>>>>>> *From: *"Nikhil Ladha" <nladha at redhat.com>
>>>>>>> *To: *nico at furyweb.fr
>>>>>>> *Cc: *"gluster-users" <gluster-users at gluster.org>
>>>>>>> *Sent: *Monday, April 27, 2020 13:34:47
>>>>>>> *Subject: *Re: [Gluster-users] never ending logging
>>>>>>>
>>>>>>> Hi,
>>>>>>> As you mentioned that node 2 is in a "semi-connected" state, I
>>>>>>> think the locking of the volume is failing because of that, and
>>>>>>> since it is failing on one of the volumes, the transaction is not
>>>>>>> complete and you are seeing a transaction error on another volume.
>>>>>>> Moreover, regarding the repeatedly logged lines:
>>>>>>> "SSL support on the I/O path is ENABLED", "SSL support for
>>>>>>> glusterd is ENABLED", and "using certificate depth 1":
>>>>>>> please try creating a volume without SSL enabled and then check
>>>>>>> whether the same log messages appear.
>>>>>>> Also, if you update to 7.5 and find any change in the log messages
>>>>>>> with SSL enabled, then please do share that.
>>>>>>>
>>>>>>> Regards
>>>>>>> Nikhil Ladha
>>>>>>>
>>>>>> ________
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>> --
>> Regards,
>> Hari Gowtham.
>>
>
>
> --
> Thanks,
> Sanju
>