Sam McLeod
2017-Sep-18 12:14 UTC
[Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Thanks Milind,

Yes, I'm hanging out for CentOS's Storage / Gluster SIG to release the packages for 3.12.1 - I can see the packages were built a week ago, but they're still not on the repo :(

--
Sam

> On 18 Sep 2017, at 9:57 pm, Milind Changire <mchangir at redhat.com> wrote:
>
> Sam,
> You might want to give glusterfs-3.12.1 a try instead.
>
>> On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod <mailinglists at smcleod.net> wrote:
>> Howdy,
>>
>> I'm setting up several Gluster 3.12 clusters on CentOS 7 and am having issues with glusterd.log and glustershd.log both being filled with errors about null clients and client-callback functions.
>>
>> They seem to be related to high CPU usage across the nodes, although I don't have a way of confirming that (suggestions welcomed!).
>>
>> In /var/log/glusterfs/glusterd.log:
>>
>> csvc_request_init+0x7f) [0x7f382007b93f] -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] ) 0-client_t: null client [Invalid argument]
>> [2017-09-15 00:54:14.454022] E [client_t.c:324:gf_client_ref] (-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7f382007e7e8] -->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7f382007b93f] -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] ) 0-client_t: null client [Invalid argument]
>>
>> This is repeated _thousands_ of times and is especially noisy when any node runs gluster volume set <volname> <option> <value>.
>>
>> I'm unsure if it's related, but in /var/log/glusterfs/glustershd.log:
>>
>> [2017-09-15 00:36:21.654242] W [MSGID: 114010] [client-callback.c:28:client_cbk_fetchspec] 0-my_volume_name-client-0: this function should not be called
>>
>> ---
>>
>> Cluster configuration:
>>
>> Gluster 3.12
>> CentOS 7.4
>> Replica 3, Arbiter 1
>> NFS disabled (using Kubernetes with the FUSE client)
>> Each node is 8 x Xeon E5-2660 with 16GB RAM, virtualised on XenServer 7.2
>>
>> root at int-gluster-03:~ # gluster get-state
>> glusterd state dumped to /var/run/gluster/glusterd_state_20170915_110532
>>
>> [Global]
>> MYUUID: 0b42ffb2-217a-4db6-96bf-cf304a0fa1ae
>> op-version: 31200
>>
>> [Global options]
>> cluster.brick-multiplex: enable
>>
>> [Peers]
>> Peer1.primary_hostname: int-gluster-02.fqdn.here
>> Peer1.uuid: e614686d-0654-43c9-90ca-42bcbeda3255
>> Peer1.state: Peer in Cluster
>> Peer1.connected: Connected
>> Peer1.othernames:
>> Peer2.primary_hostname: int-gluster-01.fqdn.here
>> Peer2.uuid: 9b0c82ef-329d-4bd5-92fc-95e2e90204a6
>> Peer2.state: Peer in Cluster
>> Peer2.connected: Connected
>> Peer2.othernames:
>>
>> (Then volume options are listed)
>>
>> ---
>>
>> Volume configuration:
>>
>> root at int-gluster-03:~ # gluster volume info my_volume_name
>>
>> Volume Name: my_volume_name
>> Type: Replicate
>> Volume ID: 6574a963-3210-404b-97e2-bcff0fa9f4c9
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: int-gluster-01.fqdn.here:/mnt/gluster-storage/my_volume_name
>> Brick2: int-gluster-02.fqdn.here:/mnt/gluster-storage/my_volume_name
>> Brick3: int-gluster-03.fqdn.here:/mnt/gluster-storage/my_volume_name
>> Options Reconfigured:
>> performance.stat-prefetch: true
>> performance.parallel-readdir: true
>> performance.client-io-threads: true
>> network.ping-timeout: 5
>> diagnostics.client-log-level: WARNING
>> diagnostics.brick-log-level: WARNING
>> cluster.readdir-optimize: true
>> cluster.lookup-optimize: true
>> transport.address-family: inet
>> nfs.disable: on
>> cluster.brick-multiplex: enable
>>
>> --
>> Sam McLeod
>> @s_mcleod
>> https://smcleod.net
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> Milind
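In answer to "suggestions welcomed" above, one way to put numbers on the log noise and to see which daemon is actually consuming CPU is a small shell helper. The log path and error string come straight from the report; the pidstat suggestion assumes the sysstat package is installed, which the thread does not confirm:

```shell
# Count occurrences of the null-client error in a glusterd log file.
count_null_client() {
    grep -c '0-client_t: null client' "$1"
}

# Usage on an affected node (log path as in the report above):
#   count_null_client /var/log/glusterfs/glusterd.log
#
# To see which gluster process is burning CPU, pidstat (sysstat package
# on CentOS 7) can sample per-PID usage, e.g. every 5s for 3 samples:
#   pidstat -u -p "$(pgrep -d, gluster)" 5 3
```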
Sam McLeod
2017-Sep-25 00:57 UTC
[Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
FYI - I've been testing the Gluster 3.12.1 packages with the help of the SIG maintainer and I can confirm that the logs are no longer being filled with NFS or null client errors after the upgrade.

--
Sam McLeod
@s_mcleod
https://smcleod.net

> On 18 Sep 2017, at 10:14 pm, Sam McLeod <mailinglists at smcleod.net> wrote:
>
> Thanks Milind,
>
> Yes, I'm hanging out for CentOS's Storage / Gluster SIG to release the packages for 3.12.1 - I can see the packages were built a week ago, but they're still not on the repo :(
>
> [snip - full report quoted earlier in this thread]
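The "no longer being filled" observation above can be backed up with a simple before/after sample of the error count; this is a sketch of one way to check, not something from the thread itself:

```shell
# Count the null-client error twice, N seconds apart, and report the
# delta. After a successful upgrade the delta should stay at zero.
error_delta() {
    # $1: log file, $2: seconds between samples
    before=$(grep -c '0-client_t: null client' "$1")
    sleep "$2"
    after=$(grep -c '0-client_t: null client' "$1")
    echo $((after - before))
}

# Usage: error_delta /var/log/glusterfs/glusterd.log 300
```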
Sam McLeod
2017-Sep-26 00:31 UTC
[Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
The 3.12.1 packages from the CentOS Storage SIG have now been released: http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.12/

Big thank you to Niels de Vos from Red Hat!

--
Sam McLeod
@s_mcleod
https://smcleod.net

> On 25 Sep 2017, at 10:57 am, Sam McLeod <mailinglists at smcleod.net> wrote:
>
> FYI - I've been testing the Gluster 3.12.1 packages with the help of the SIG maintainer and I can confirm that the logs are no longer being filled with NFS or null client errors after the upgrade.
>
> [snip - full report quoted earlier in this thread]
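For anyone following along, picking up 3.12.1 from the SIG repo is roughly the following; the repo package name is my assumption of the usual CentOS Storage SIG layout, so verify against the mirror URL above before running anything:

```shell
# Enable the CentOS Storage SIG Gluster 3.12 repo, update, and restart the
# management daemon - one node at a time on a live cluster:
#   yum install -y centos-release-gluster312
#   yum update -y glusterfs-server
#   systemctl restart glusterd

# Confirm the running version afterwards by parsing "glusterfs X.Y.Z"
# from the first line of `gluster --version` output.
gluster_version() {
    awk 'NR==1 {print $2}'
}
# Usage: gluster --version | gluster_version
```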