Hi Nithya,
We have a setup where a file is repeatedly copied to, and then deleted
from, the gluster mount point so that the latest version of the file is
always in place. We noticed that this causes the memory usage of the
glusterfsd process to keep increasing.
To find the memory leak we ran the processes under valgrind, but that
did not help, which is why we contacted the glusterfs community.
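
For reference, this is roughly how we run the daemons under valgrind (a
minimal sketch, assuming glusterd is started by hand in the foreground;
the log path is a placeholder, not the exact command we used):

valgrind --leak-check=full --show-leak-kinds=all --trace-children=yes \
         --log-file=/tmp/valgrind-%p.log /usr/sbin/glusterd -N

With --trace-children=yes the brick (glusterfsd) processes spawned by
glusterd are traced as well, each writing its own %p-numbered log.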
Regards,
Abhishek
On Thu, Jun 6, 2019, 16:08 Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi Abhishek,
>
> I am still not clear as to the purpose of the tests. Can you clarify why
> you are using valgrind and why you think there is a memory leak?
>
> Regards,
> Nithya
>
> On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL <abhishpaliwal at gmail.com>
> wrote:
>
>> Hi Nithya,
>>
>> Here are the setup details and the test we are running:
>>
>>
>> One client, two gluster servers.
>> The client writes and deletes one file every 15 minutes using the
>> script test_v4.15.sh.
>>
>> IP
>> Server side:
>> 128.224.98.157 /gluster/gv0/
>> 128.224.98.159 /gluster/gv0/
>>
>> Client side:
>> 128.224.98.160 /gluster_mount/
>>
>> Server side:
>> gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/
>> 128.224.98.159:/gluster/gv0/ force
>> gluster volume start gv0
>>
>> root at 128:/tmp/brick/gv0# gluster volume info
>>
>> Volume Name: gv0
>> Type: Replicate
>> Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 128.224.98.157:/gluster/gv0
>> Brick2: 128.224.98.159:/gluster/gv0
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
>> exec script: ./ps_mem.py -p 605 -w 61 > log
>> root at 128:/# ./ps_mem.py -p 605
>> Private + Shared = RAM used Program
>> 23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
>> ---------------------------------
>> 24856.0 KiB
>> =================================
>>
>> Client side:
>> mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0
>> /gluster_mount
>>
>>
>> We are using the below script to write and delete the file.
>>
>> *test_v4.15.sh*
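>>
>> (A minimal sketch of what the loop does; the source file path and the
>> file name are placeholders, the actual script is attached:)
>>
>> #!/bin/sh
>> # Copy a fresh file to the gluster mount, keep it for 15 minutes,
>> # then delete it, and repeat.
>> SRC=/tmp/testfile   # placeholder for the real source file
>> while true; do
>>     cp "$SRC" /gluster_mount/testfile
>>     sleep 900       # 15 minutes
>>     rm -f /gluster_mount/testfile
>> done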
>>
>> We also use the below script to watch the memory increase while the
>> above script is running in the background.
>>
>> *ps_mem.py*
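>>
>> (As a cross-check of ps_mem.py, a simple loop like the following can
>> sample the resident set size of the brick process straight from /proc;
>> PID 605 is the glusterfsd process shown above:)
>>
>> # Log VmRSS of glusterfsd (PID 605 here) once a minute.
>> while true; do
>>     grep VmRSS /proc/605/status >> /tmp/glusterfsd_rss.log
>>     sleep 60
>> done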
>>
>> I am attaching the script files as well as the results obtained after
>> testing the scenario.
>>
>> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran <nbalacha at redhat.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Writing to a volume should not affect glusterd. The stack you have
>>> shown in the valgrind output looks like the memory used to initialise
>>> the structures glusterd uses, and it will be freed only when glusterd
>>> is stopped.
>>>
>>> Can you provide more details as to what it is you are trying to test?
>>>
>>> Regards,
>>> Nithya
>>>
>>>
>>> On Tue, 4 Jun 2019 at 15:41, ABHISHEK PALIWAL
>>> <abhishpaliwal at gmail.com> wrote:
>>>
>>>> Hi Team,
>>>>
>>>> Please respond to the issue I raised.
>>>>
>>>> Regards,
>>>> Abhishek
>>>>
>>>> On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL
>>>> <abhishpaliwal at gmail.com> wrote:
>>>>
>>>>> Anyone please reply....
>>>>>
>>>>> On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL
>>>>> <abhishpaliwal at gmail.com> wrote:
>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> I uploaded some valgrind logs from my gluster 5.4 setup. The test
>>>>>> writes to the volume every 15 minutes. I stopped glusterd and then
>>>>>> copied away the logs. The test was running for some simulated days.
>>>>>> They are zipped in valgrind-54.zip.
>>>>>>
>>>>>> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
>>>>>> glusterfs and even some definitely lost bytes.
>>>>>>
>>>>>> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record 391 of 391
>>>>>> ==2737==    at 0x4C29C25: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
>>>>>> ==2737==    by 0xA22485E: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737==    by 0xA217C94: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737==    by 0xA21D9F8: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737==    by 0xA21DED9: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737==    by 0xA21E685: ??? (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737==    by 0xA1B9D8C: init (in /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737==    by 0x4E511CE: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
>>>>>> ==2737==    by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
>>>>>> ==2737==    by 0x4E8AAB3: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
>>>>>> ==2737==    by 0x409C35: glusterfs_process_volfp (in /usr/sbin/glusterfsd)
>>>>>> ==2737==    by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
>>>>>> ==2737==
>>>>>> ==2737== LEAK SUMMARY:
>>>>>> ==2737==    definitely lost: 1,053 bytes in 10 blocks
>>>>>> ==2737==    indirectly lost: 317 bytes in 3 blocks
>>>>>> ==2737==    possibly lost: 2,374,971 bytes in 524 blocks
>>>>>> ==2737==    still reachable: 53,277 bytes in 201 blocks
>>>>>> ==2737==    suppressed: 0 bytes in 0 blocks
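>>>>>>
>>>>>> The excerpt above can be pulled out of the full log with something
>>>>>> like the following (the file name matches the logs in the zip):
>>>>>>
>>>>>> # Show the leak summary and the largest loss records.
>>>>>> grep -A 5 "LEAK SUMMARY" valgrind-2730.log
>>>>>> grep "possibly lost in loss record" valgrind-2730.log | tail -n 5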
>>>>>>
>>>>>> --
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Abhishek Paliwal
>>>>>>
>>>>>
>>>>
>>>> --
>>>>
>>>>
>>>>
>>>>
>>>> Regards
>>>> Abhishek Paliwal
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>