You can update the packages with the ones built from source. You will need
to update both the client- and server-side NRPE packages with the modified
payload limit to resolve this:
- nagios-plugins-nrpe
- nrpe
Have you done that?
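
For reference, a minimal sketch of that rebuild from the upstream NRPE
tarball (the version number is illustrative; on NRPE 2.x the limit lives in
include/common.h):

  # unpack the sources and raise the payload limit from 1024 to 8192
  tar xzf nrpe-2.15.tar.gz && cd nrpe-2.15
  sed -i 's/MAX_PACKETBUFFER_LENGTH[[:space:]]*1024/MAX_PACKETBUFFER_LENGTH 8192/' include/common.h
  ./configure
  make all
  # both ends must carry the same limit, since the buffer size is part
  # of the wire protocol:
  make install-daemon    # on the monitored gluster nodes
  make install-plugin    # installs check_nrpe on the Nagios server

If you prefer to stay on RPMs, the same one-line change can go into the
spec file as a patch before rebuilding the nrpe and nagios-plugins-nrpe
packages.
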
On 10/09/2015 07:17 AM, Punit Dambiwal wrote:
> Hi Ramesh,
>
> Even after recompile nrpe with increased value still the same issue...
>
> Thanks,
> Punit
>
> On Fri, Oct 9, 2015 at 9:21 AM, Punit Dambiwal <hypunit at gmail.com> wrote:
>
> Hi Ramesh,
>
>     Thanks for the update... as I have installed nagios and nrpe via
>     yum, should I remove nrpe and reinstall it from the source
>     package?
>
> Thanks,
> Punit
>
>     On Thu, Oct 8, 2015 at 6:49 PM, Ramesh Nachimuthu <rnachimu at redhat.com> wrote:
>
>         Looks like you are hitting the NRPE payload issue. Standard
>         NRPE packages from EPEL/Fedora have a 1024-byte payload limit.
>         We have to increase this to 8192 to fix it. You can see
>         more info at
>         http://serverfault.com/questions/613288/truncating-return-data-as-it-is-bigger-then-nrpe-allows.
>
>
>         Let me know if you need any more info.
>
> Regards,
> Ramesh
>
>
> On 10/08/2015 02:48 PM, Punit Dambiwal wrote:
>> Hi,
>>
>> I am getting the following error :-
>>
>> ----------------
>> [root at monitor-001 yum.repos.d]# /usr/lib64/nagios/plugins/gluster/discovery.py -c ssd -H stor1
>> Traceback (most recent call last):
>>   File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 510, in <module>
>>     clusterdata = discoverCluster(args.hostip, args.cluster, args.timeout)
>>   File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 88, in discoverCluster
>>     componentlist = discoverVolumes(hostip, timeout)
>>   File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 56, in discoverVolumes
>>     timeout=timeout)
>>   File "/usr/lib64/nagios/plugins/gluster/server_utils.py", line 107, in execNRPECommand
>>     resultDict = json.loads(outputStr)
>>   File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
>>     return _default_decoder.decode(s)
>>   File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
>>     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>>   File "/usr/lib64/python2.6/json/decoder.py", line 336, in raw_decode
>>     obj, end = self._scanner.iterscan(s, **kw).next()
>>   File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
>>     rval, next_pos = action(m, context)
>>   File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
>>     value, end = iterscan(s, idx=end, context=context).next()
>>   File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
>>     rval, next_pos = action(m, context)
>>   File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
>>     value, end = iterscan(s, idx=end, context=context).next()
>>   File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
>>     rval, next_pos = action(m, context)
>>   File "/usr/lib64/python2.6/json/decoder.py", line 217, in JSONArray
>>     value, end = iterscan(s, idx=end, context=context).next()
>>   File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
>>     rval, next_pos = action(m, context)
>>   File "/usr/lib64/python2.6/json/decoder.py", line 183, in JSONObject
>>     value, end = iterscan(s, idx=end, context=context).next()
>>   File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
>>     rval, next_pos = action(m, context)
>>   File "/usr/lib64/python2.6/json/decoder.py", line 155, in JSONString
>>     return scanstring(match.string, match.end(), encoding, strict)
>> ValueError: ('Invalid control character at: line 1 column 1023 (char
>> 1023)', '{"ssd": {"name": "ssd", "disperseCount": "0", "bricks":
>> [{"brickpath": "/bricks/b/vol1", "brickaddress": "stor1", "hostUuid":
>> "5fcb5150-f0a5-4af8-b383-11fa5d3f82f0"}, {"brickpath": "/bricks/b/vol1",
>> "brickaddress": "stor2", "hostUuid": "b78d42c1-6ad7-4044-b900-3ccfe915859f"},
>> {"brickpath": "/bricks/b/vol1", "brickaddress": "stor3", "hostUuid":
>> "40500a9d-418d-4cc0-aec5-6efbfb3c24e5"}, {"brickpath": "/bricks/b/vol1",
>> "brickaddress": "stor4", "hostUuid": "5886ef94-df5e-4845-a54c-0e01546d66ea"},
>> {"brickpath": "/bricks/c/vol1", "brickaddress": "stor1", "hostUuid":
>> "5fcb5150-f0a5-4af8-b383-11fa5d3f82f0"}, {"brickpath": "/bricks/c/vol1",
>> "brickaddress": "stor2", "hostUuid": "b78d42c1-6ad7-4044-b900-3ccfe915859f"},
>> {"brickpath": "/bricks/c/vol1", "brickaddress": "stor3", "hostUuid":
>> "40500a9d-418d-4cc0-aec5-6efbfb3c24e5"}, {"brickpath": "/bricks/c/vol1",
>> "brickaddress": "stor4", "hostUuid": "5886ef94-df5e-4845-a54c-0e01546d66ea"},
>> {"brickpath": "/bricks/d/vol1", "brickaddress": "stor1", "hostUuid":
>> "5fcb5150-f0a5-4a\n')
>> [root at monitor-001 yum.repos.d]#
>> -------------------------
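>>
>> (As a quick check, sketched on the assumption that the payload limit is
>> the culprit: count the bytes check_nrpe hands back; a count at or just
>> over 1024 means the reply fills NRPE's default buffer exactly.)
>>
>> /usr/lib64/nagios/plugins/check_nrpe -H stor1 -c discover_volume_list | wc -c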
>>
>> --------------
>> [root at monitor-001 yum.repos.d]# /usr/lib64/nagios/plugins/check_nrpe -H stor1 -c discover_volume_list
>> {"ssd": {"type": "DISTRIBUTED_REPLICATE", "name": "ssd"},
>> "lockvol": {"type": "REPLICATE", "name": "lockvol"}}
>> [root at monitor-001 yum.repos.d]#
>> --------------
>>
>> Please help me to solve this issue...
>>
>> Thanks,
>> Punit
>>
>> On Fri, Oct 2, 2015 at 12:15 AM, Sahina Bose <sabose at redhat.com> wrote:
>>
>>     The gluster-nagios packages have not been tested on Ubuntu.
>>
>> Looking at the error below, it looks like the rpm has not
>> updated the nrpe.cfg correctly. You may need to edit the
>> spec file for the config file paths on Ubuntu and rebuild.
>>
>>
>> On 10/01/2015 05:45 PM, Amudhan P wrote:
>>> The OSError: [Errno 2] No such file or directory is now
>>> sorted out by changing NRPE_PATH in "constants.py".
>>>
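>>> (On Ubuntu that is, as a sketch, a one-line change along these lines:)
>>>
>>> # constants.py: point NRPE_PATH at the distro's check_nrpe location
>>> NRPE_PATH = '/usr/lib/nagios/plugins/check_nrpe'
>>>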
>>> Now if I run discovery.py:
>>>
>>> testusr at gfsovirt:/usr/local/lib/nagios/plugins/gluster$ sudo python discovery.py -c vm-gfs -H 192.168.1.11
>>> Failed to execute NRPE command 'discover_volume_list' in host '192.168.1.11'
>>> Error : NRPE: Command 'discover_volume_list' not defined
>>> Make sure NRPE server in host '192.168.1.11' is configured to accept requests from Nagios server
>>>
>>>
>>> testusr at gfsovirt:/usr/local/lib/nagios/plugins/gluster$ /usr/lib/nagios/plugins/check_nrpe -H 192.168.1.11 -c discover_volume_list
>>> NRPE: Command 'discover_volume_list' not defined
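>>>
>>> (What is missing is a command definition on the monitored host. As a
>>> hedged sketch only: the real entries ship with gluster-nagios-addons,
>>> and the script path here is illustrative.)
>>>
>>> # in nrpe.cfg on the monitored host, e.g. /etc/nagios/nrpe.cfg on Ubuntu
>>> command[discover_volume_list]=sudo /usr/local/lib/nagios/plugins/gluster/discover_volumes.py
>>> # reload the daemon afterwards, e.g.: sudo service nagios-nrpe-server restart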
>>>
>>>
>>> My client is responding to other NRPE commands:
>>> testusr at gfsovirt:/usr/local/lib/nagios/plugins/gluster$ /usr/lib/nagios/plugins/check_nrpe -H 192.168.1.11 -c check_load
>>> OK - load average: 0.01, 0.03, 0.10|load1=0.010;15.000;30.000;0;
>>> load5=0.030;10.000;25.000;0; load15=0.100;5.000;20.000;0;
>>>
>>>
>>>
>>> On Thu, Oct 1, 2015 at 5:20 PM, Sahina Bose <sabose at redhat.com> wrote:
>>>
>>>     Looks like a conflict in the versions of python and
>>>     python-cpopen. Can you give us the versions of these packages?
>>>
>>>     Also, what's the output of
>>>     /usr/lib64/nagios/plugins/check_nrpe -H 192.168.1.11 -c discover_volume_list
>>>
>>>
>>>
>>>
>>> On 10/01/2015 04:10 PM, Amudhan P wrote:
>>>> Hi,
>>>>
>>>> I am getting an error when I run discovery.py.
>>>>
>>>> discovery.py -c vm-gfs -H 192.168.1.11
>>>>
>>>> Traceback (most recent call last):
>>>>   File "discovery.py", line 541, in <module>
>>>>     clusterdata = discoverCluster(args.hostip, args.cluster, args.timeout)
>>>>   File "discovery.py", line 90, in discoverCluster
>>>>     componentlist = discoverVolumes(hostip, timeout)
>>>>   File "discovery.py", line 53, in discoverVolumes
>>>>     timeout=timeout)
>>>>   File "/usr/local/lib/nagios/plugins/gluster/server_utils.py", line 114, in execNRPECommand
>>>>     (returncode, outputStr, err) = utils.execCmd(nrpeCmd, raw=True)
>>>>   File "/usr/lib/python2.7/dist-packages/glusternagios/utils.py", line 403, in execCmd
>>>>     deathSignal=deathSignal, childUmask=childUmask)
>>>>   File "/usr/local/lib/python2.7/dist-packages/cpopen/__init__.py", line 63, in __init__
>>>>     **kw)
>>>>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
>>>>     errread, errwrite)
>>>>   File "/usr/local/lib/python2.7/dist-packages/cpopen/__init__.py", line 82, in _execute_child_v276
>>>>     restore_sigpipe=restore_sigpipe
>>>>   File "/usr/local/lib/python2.7/dist-packages/cpopen/__init__.py", line 107, in _execute_child_v275
>>>>     restore_sigpipe
>>>> OSError: [Errno 2] No such file or directory
>>>>
>>>> Gluster version : 3.7.4
>>>> OS : Ubuntu 14.04
>>>> Compiled from the source tar file.
>>>>
>>>>
>>>> regards
>>>> Amudhan
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Sep 30, 2015 at 6:21 PM, Humble Devassy Chirammal <humble.devassy at gmail.com> wrote:
>>>>
>>>>     The EL7 rpms of gluster-nagios are available @
>>>>     http://download.gluster.org/pub/gluster/glusterfs-nagios/1.1.0/
>>>>
>>>> Hope it helps!
>>>>
>>>> --Humble
>>>>
>>>>
>>>>     On Tue, Sep 29, 2015 at 10:56 AM, Sahina Bose <sabose at redhat.com> wrote:
>>>>
>>>>         We will publish the EL7 builds soon.
>>>>
>>>>         The source tarballs are now available at
>>>>         http://download.gluster.org/pub/gluster/glusterfs-nagios/
>>>>
>>>> thanks
>>>> sahina
>>>>
>>>>
>>>>         On 09/25/2015 12:55 PM, Humble Devassy Chirammal wrote:
>>>>>         Hi Michael,
>>>>>
>>>>>         Yes, only el6 packages are available @
>>>>>         http://download.gluster.org/pub/gluster/glusterfs-nagios/ .
>>>>>         I am looping the nagios project team leads into this
>>>>>         thread. Let's wait for them to respond.
>>>>>
>>>>> --Humble
>>>>>
>>>>>
>>>>>         On Sun, Sep 20, 2015 at 2:32 PM, Prof. Dr. Michael Schefczyk <michael at schefczyk.net> wrote:
>>>>>
>>>>>             Dear All,
>>>>>
>>>>>             In June 2014, the gluster-nagios team
>>>>>             (thanks!) published the availability
>>>>>             of gluster-nagios-common and
>>>>>             gluster-nagios-addons on this list. As
>>>>>             far as I can tell, this quite
>>>>>             extensive gluster nagios monitoring
>>>>>             tool is available for el6 only. Are
>>>>>             there known plans to make this
>>>>>             available for el7 outside the
>>>>>             RHEL repos
>>>>>             (http://ftp.redhat.de/pub/redhat/linux/enterprise/7Server/en/RHS/SRPMS/),
>>>>>             e.g. for use with oVirt / CentOS 7
>>>>>             also? It would be good to be able to
>>>>>             monitor gluster without playing around
>>>>>             with scripts from sources other than
>>>>>             an rpm repo.
>>>>>
>>>>>             Regards,
>>>>>
>>>>>             Michael