Aravinda
2015-Dec-07 13:45 UTC
[Gluster-users] Gluster 3.7.5 - S57glusterfind-delete-post.py error
It looks like glusterd failed to execute the cleanup script as part of the volume delete.

Please run the following commands on the failed node and let us know the output and the return code:

/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA
echo $?

This error can be ignored if you are not using Glusterfind.

regards
Aravinda

On 11/11/2015 06:55 PM, Marco Lorenzo Crociani wrote:
> Hi,
> I removed one volume from the oVirt console.
> oVirt 3.5.4
> Gluster 3.7.5
> CentOS release 6.7
>
> In the logs there were these errors:
>
> [2015-11-11 13:03:29.783491] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) [0x7fc7d6002c75] -->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x4cc) [0x7fc7d60920bc] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) [0x7fc7e162868e] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh --volname=VOL_ZIMBRA --last=no
> [2015-11-11 13:03:29.789594] E [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0x5fc75) [0x7fc7d6002c75] -->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) [0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) [0x7fc7e162868e] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh --volname=VOL_ZIMBRA --last=no
> [2015-11-11 13:03:29.790807] I [MSGID: 106132] [glusterd-utils.c:1371:glusterd_service_stop] 0-management: brick already stopped
> [2015-11-11 13:03:31.108959] I [MSGID: 106540] [glusterd-utils.c:4105:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully
> [2015-11-11 13:03:31.109881] I [MSGID: 106540] [glusterd-utils.c:4114:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully
> [2015-11-11 13:03:31.110725] I [MSGID: 106540] [glusterd-utils.c:4123:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully
> [2015-11-11 13:03:31.111562] I [MSGID: 106540] [glusterd-utils.c:4132:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully
> [2015-11-11 13:03:31.112396] I [MSGID: 106540] [glusterd-utils.c:4141:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully
> [2015-11-11 13:03:31.113225] I [MSGID: 106540] [glusterd-utils.c:4150:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully
> [2015-11-11 13:03:32.212071] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
> [2015-11-11 13:03:32.212862] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
> [2015-11-11 13:03:32.213099] I [MSGID: 106144] [glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick /gluster/VOL_ZIMBRA/brick on port 49191
> [2015-11-11 13:03:32.282685] I [MSGID: 106144] [glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick /gluster/VOL_ZIMBRA/brick3 on port 49168
> [2015-11-11 13:03:32.364079] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=1 total=1
> [2015-11-11 13:03:32.364111] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=1 total=1
> [2015-11-11 13:03:32.374604] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=1 total=1
> [2015-11-11 13:03:32.374640] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=1 total=1
> [2015-11-11 13:03:41.906892] I [MSGID: 106495] [glusterd-handler.c:3049:__glusterd_handle_getwd] 0-glusterd: Received getwd req
> [2015-11-11 13:03:41.910931] E [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(+0xef3d2) [0x7fc7d60923d2] -->/usr/lib64/glusterfs/3.7.5/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x470) [0x7fc7d6092060] -->/usr/lib64/libglusterfs.so.0(runner_log+0x11e) [0x7fc7e162868e] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
Marco Lorenzo Crociani
2015-Dec-09 10:05 UTC
[Gluster-users] Gluster 3.7.5 - S57glusterfind-delete-post.py error
Hi,

# /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA
Traceback (most recent call last):
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 60, in <module>
    main()
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 43, in main
    for session in os.listdir(glusterfind_dir):
OSError: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind'

# which glusterfind
/usr/bin/glusterfind

Regards,

Marco Crociani

On 07/12/2015 14:45, Aravinda wrote:
> Looks like failed to execute the Cleanup script as part of Volume delete.
>
> Please run the following command in the failed node and let us know
> the output and return code.
>
> /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
> --volname=VOL_ZIMBRA
> echo $?
>
> This error can be ignored if not using Glusterfind.
> regards
> Aravinda
> On 11/11/2015 06:55 PM, Marco Lorenzo Crociani wrote:
>> Hi,
>> I removed one volume from the ovirt console.
>> oVirt 3.5.4
>> Gluster 3.7.5
>> CentOS release 6.7
>>
>> [log snipped -- quoted in full in the previous message]

-- 
Marco Crociani
Prisma Telecom Testing S.r.l.
via Petrocchi, 4
20127 MILANO
ITALY
Phone: +39 02 26113507
Fax: +39 02 26113597
e-mail: marcoc at prismatelecomtesting.com
web: http://www.prismatelecomtesting.com
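The traceback above comes down to `os.listdir` being called on a path that does not exist. A minimal reproduction (using a temporary path as a stand-in for /var/lib/glusterd/glusterfind, so nothing on a real system is touched):

```python
import errno
import os
import tempfile

# Hypothetical stand-in for /var/lib/glusterd/glusterfind on a node where
# glusterfind session state was never created (or was removed).
missing_dir = os.path.join(tempfile.mkdtemp(), "glusterfind")

try:
    for session in os.listdir(missing_dir):  # same call as line 43 of the hook
        pass
except OSError as e:
    # [Errno 2] No such file or directory: '.../glusterfind'
    print(e.errno == errno.ENOENT)  # prints True
```

This is why the hook exits non-zero whenever glusterfind has never been used on the node, even though there is nothing to clean up.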
Aravinda
2015-Dec-09 11:44 UTC
[Gluster-users] Gluster 3.7.5 - S57glusterfind-delete-post.py error
Thanks. I will fix the issue.

Was the /var/lib/glusterd directory deleted after installation (during any cleanup process)? The cleanup script expects the /var/lib/glusterd/glusterfind directory to be present; I will update the script to ignore the case where that directory does not exist.

I have opened a bug for this and sent a patch to fix the issue. (Once review is complete, it will be made available in the 3.7.7 release.)

Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1289935
Patch: http://review.gluster.org/#/c/12923/

Thanks for reporting the issue.

regards
Aravinda

On 12/09/2015 03:35 PM, Marco Lorenzo Crociani wrote:
> Hi,
>
> # /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA
> Traceback (most recent call last):
>   File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 60, in <module>
>     main()
>   File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 43, in main
>     for session in os.listdir(glusterfind_dir):
> OSError: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind'
>
> # which glusterfind
> /usr/bin/glusterfind
>
> Regards,
>
> Marco Crociani
>
>> [earlier messages snipped -- quoted in full above]
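The actual fix is in the Gerrit change linked above; as a rough sketch of the guard it needs, the hook's session loop can simply treat a missing glusterfind directory as "nothing to clean up". The `cleanup_sessions` name and the per-session/per-volume directory layout are assumptions based on the traceback, not the script's real structure:

```python
import os
import shutil

# Default location of glusterfind session state (assumed from the traceback).
GLUSTERFIND_DIR = "/var/lib/glusterd/glusterfind"

def cleanup_sessions(volname, glusterfind_dir=GLUSTERFIND_DIR):
    """Remove per-session state for a deleted volume, tolerating a
    missing glusterfind directory (e.g. glusterfind was never used)."""
    if not os.path.isdir(glusterfind_dir):
        return  # nothing to clean up; hook exits 0 instead of crashing
    for session in os.listdir(glusterfind_dir):
        # Assumed layout: <glusterfind_dir>/<session>/<volname>/
        volume_dir = os.path.join(glusterfind_dir, session, volname)
        if os.path.isdir(volume_dir):
            shutil.rmtree(volume_dir, ignore_errors=True)
```

With the `os.path.isdir` guard in place, running the hook on a node without /var/lib/glusterd/glusterfind returns cleanly, so the volume-delete hook no longer logs "Failed to execute script".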