search for: statedump

Displaying 20 results from an estimated 49 matches for "statedump".

2018 Jan 29
0
Run away memory with gluster mount
...> >>> > >>> This is obviously a contrived app environment. With my intended > >>> application load it takes about a week or so for the memory to get > >>> high enough to invoke the oom killer. > >> > >> Can you try debugging with the statedump > >> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) > >> of > >> the fuse mount process and see what member is leaking? Take the > >> statedumps in succession, maybe once initially during the I/O and > >> once the...
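
A minimal sketch of the statedump suggestion quoted above, assuming SIGUSR1 triggers the dump as described in the linked guide; the volume name in the pgrep pattern is hypothetical (borrowed from the "GlusterWWW" volume mentioned later in this thread) and should be adjusted to match the actual mount:

    # Find the fuse client process for the mount and ask it for a statedump.
    FUSE_PID=$(pgrep -f 'glusterfs.*GlusterWWW' | head -n 1)   # hypothetical volume name; adjust
    kill -USR1 "$FUSE_PID"            # first dump, taken during normal I/O
    # ...repeat once memory is high, then compare the two dumps...
    ls -lt /var/run/gluster | head    # default statedump directory on most installs
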
2017 Nov 09
1
glusterfs brick server use too high memory
...cs=617382139 max_size=1941483443 max_num_allocs=617382139 total_allocs=661873332 Time: 2017.11.9 17:15 (Today) [features/locks.www-volume-locks - usage-type gf_common_mt_strdup memusage] size=792538295 num_allocs=752904534 max_size=792538295 max_num_allocs=752904534 total_allocs=800889589 The statedump files are in the attachment. Thanks again. At 2017-11-09 17:22:03, "Nithya Balachandran" <nbalacha at redhat.com> wrote: On 8 November 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote: Hi all, I'm glad to join the glusterfs community. I have a glusterfs cluster...
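
A rough sketch of how memusage records like the ones quoted above can be ranked once a dump file is in hand, assuming the usual one-field-per-line layout of a statedump (the dump file name below is a placeholder):

    DUMP=/var/run/gluster/glusterdump.12345.dump.1510000000   # placeholder file name
    # Print the current "size" of every memusage section, largest first.
    awk '/memusage\]$/ { section = $0; next }
         /^size=/ && section != "" { split($0, kv, "=");
                                      printf "%14d  %s\n", kv[2], section;
                                      section = "" }' "$DUMP" | sort -rn | head -20
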
2017 Nov 09
0
glusterfs brick server use too high memory
...s as > clients connecting to glusterfs. But the memory of the glusterfsd process is > too high, gradually increasing to 100G on every node. > Then I reboot the glusterfsd process, but the memory climbs back up over > approximately a week. > How can I debug the problem? > > Hi, Please take statedumps at intervals (a minimum of 2, an hour apart) of a brick process for which you see the memory increasing and send them to us. [1] describes how to take statedumps. Regards, Nithya [1] http://docs.gluster.org/en/latest/Troubleshooting/statedump/ <http://docs.gluster.org/en/latest/Trou...
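
A small sketch of the interval dumps requested above; the volume name is a placeholder and the dump directory is assumed to be the default /var/run/gluster:

    # Take a statedump of every brick of the volume once an hour, three times.
    for i in 1 2 3; do
        gluster volume statedump myvolume    # placeholder volume name
        sleep 3600
    done
    ls -lt /var/run/gluster                  # one dump file per brick per invocation
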
2018 Jan 29
2
Run away memory with gluster mount
... >>>>> This is obviously a contrived app environment. With my intended >>>>> application load it takes about a week or so for the memory to get >>>>> high enough to invoke the oom killer. >>>> >>>> Can you try debugging with the statedump >>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) >>>> of >>>> the fuse mount process and see what member is leaking? Take the >>>> statedumps in succession, maybe once initially during the I/O and >>...
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all, I'm glad to join the glusterfs community. I have a glusterfs cluster: Nodes: 4 System: Centos7.1 Glusterfs: 3.8.9 Each Node: CPU: 48 core Mem: 128GB Disk: 1*4T There is one Distributed Replicated volume. There are ~160 k8s pods as clients connecting to glusterfs. But the memory of the glusterfsd process is too high, gradually increasing to 100G on every node. Then, I reboot the glusterfsd
2018 Jan 27
6
Run away memory with gluster mount
...>> slowly growing again. >>> >>> This is obviously a contrived app environment. With my intended >>> application load it takes about a week or so for the memory to get >>> high enough to invoke the oom killer. >> >> Can you try debugging with the statedump >> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) >> of >> the fuse mount process and see what member is leaking? Take the >> statedumps in succession, maybe once initially during the I/O and >> once the memory gets high enoug...
2018 Feb 03
0
Run away memory with gluster mount
...ntended > application load it takes about a week > or so for the memory to get > high enough to invoke the oom killer. > > > Can you try debugging with the statedump > (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump > <https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump>) > of >...
2018 Feb 02
3
Run away memory with gluster mount
...tended >>>>>>>> application load it takes about a week or so for the memory to get >>>>>>>> high enough to invoke the oom killer. >>>>>>>> >>>>>>> >>>>>>> Can you try debugging with the statedump >>>>>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/st >>>>>>> atedump/#read-a-statedump) >>>>>>> of >>>>>>> the fuse mount process and see what member is leaking? Take the >>>>>>>...
2018 Feb 21
1
Run away memory with gluster mount
... application load it takes about a week >> or so for the memory to get >> high enough to invoke the oom killer. >> >> >> Can you try debugging with the statedump >> >> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump >> >> >> <https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump>) >> ...
2018 Jan 30
1
Run away memory with gluster mount
...> This is obviously a contrived app environment. With my intended > >>>>> application load it takes about a week or so for the memory to get > >>>>> high enough to invoke the oom killer. > >>>> > >>>> Can you try debugging with the statedump > >>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) > >>>> of > >>>> the fuse mount process and see what member is leaking? Take the > >>>> statedumps in succession, maybe once initially during the...
2018 Feb 05
1
Run away memory with gluster mount
...application load it takes about a week > > or so for the memory to get > > high enough to invoke the oom killer. > > > > > > Can you try debugging with the statedump > > (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump > > <https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump>) > > of ...
2019 Feb 01
1
Help analyse statedumps
...performance.io-thread-count: 8 server.allow-insecure: on cluster.read-hash-mode: 0 cluster.lookup-unhashed: auto cluster.choose-local: on I believe there's a memory leak somewhere; memory just keeps going up until it hangs one or more nodes, sometimes taking the whole cluster down. I have taken 2 statedumps on one of the nodes, one where the memory is too high and another just after a reboot with the app running and the volume fully healed. https://pmcdigital.sharepoint.com/:u:/g/EYDsNqTf1UdEuE6B0ZNVPfIBf_I-AbaqHotB1lJOnxLlTg?e=boYP09 (high memory) https://pmcdigital.sharepoint.com/:u:/g/EWZBsnET2x...
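
One way to compare two dumps like the ones linked above, assuming they have been downloaded locally under the placeholder names used here, is to rank the allocation counts in each and look for the types that only grow between the two:

    for d in glusterdump.high-memory glusterdump.after-reboot; do   # placeholder file names
        echo "== $d =="
        awk '/memusage\]$/ { section = $0 }
             /^num_allocs=/ { split($0, kv, "="); print kv[2], section }' "$d" \
            | sort -rn | head -10            # top allocators in this dump
    done
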
2018 Feb 01
0
Run away memory with gluster mount
...obviously a contrived app environment. With my intended >>>>>>> application load it takes about a week or so for the memory to get >>>>>>> high enough to invoke the oom killer. >>>>>> >>>>>> Can you try debugging with the statedump >>>>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) >>>>>> of >>>>>> the fuse mount process and see what member is leaking? Take the >>>>>> statedumps in succession, maybe once initiall...
2018 Jan 29
0
Run away memory with gluster mount
Csaba, Could this be the problem of the inodes not getting freed in the fuse process? Daniel, as Ravi requested, please provide access to the statedumps. You can strip out the filepath information. Does your data set include a lot of directories? Thanks, Nithya On 27 January 2018 at 10:23, Ravishankar N <ravishankar at redhat.com> wrote: > > > On 01/27/2018 02:29 AM, Dan Ragle wrote: > >> >> On 1/25/2018 8:21 PM,...
2018 Jan 26
2
Run away memory with gluster mount
...e. Restart the > test script and the memory begins slowly growing again. > > This is obviously a contrived app environment. With my intended > application load it takes about a week or so for the memory to get > high enough to invoke the oom killer. Can you try debugging with the statedump (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) of the fuse mount process and see what member is leaking? Take the statedumps in succession, maybe once initially during the I/O and once the memory gets high enough to hit the OOM mark. Share the dumps here....
2018 Jan 26
0
Run away memory with gluster mount
...rt the test script and the memory begins slowly growing again. >> >> This is obviously a contrived app environment. With my intended application load it takes about a week or so for the memory to get >> high enough to invoke the oom killer. > > Can you try debugging with the statedump (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) of > the fuse mount process and see what member is leaking? Take the statedumps in succession, maybe once initially during the I/O and > once the memory gets high enough to hit the OOM mark. > Share the...
2017 Aug 25
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
...after a while. > We see this on the fuse client, but not when we use nfs. So the question I > am interested in seeing an answer to is: in what way is nfs different from > fuse that could cause this? > > My suspicion is that it is locking related. > > Would it be possible to obtain a statedump of the native client when the application becomes completely unresponsive? A statedump can help in understanding operations within the gluster stack. The log file of the native client might also offer some clues. Regards, Vijay ...
2017 Sep 22
0
BUG: After stop and start wrong port is advertised
...reason glusterd's portmap is referring to a stale port (IMO) whereas the brick is still listening on the correct port. But ideally, when the glusterd service is restarted, the whole in-memory portmap is rebuilt. I'd request the following details from you so we can start analysing it: 1. glusterd statedump output from 192.168.140.43. You can use kill -SIGUSR2 <pid of glusterd> to request a statedump; the file will be available in /var/run/gluster 2. glusterd and brick logfiles for 192.168.140.43:/gluster/public from 192.168.140.43 3. cmd_history logfile from all the nodes. 4. Content of /va...
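
A short sketch of step 1 as described above (SIGUSR2 sent to glusterd, with the dump written to /var/run/gluster):

    kill -SIGUSR2 "$(pidof glusterd)"    # ask glusterd to write a statedump
    sleep 1                              # give it a moment to finish writing
    ls -lt /var/run/gluster | head       # the newest file is the glusterd dump
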
2018 Jan 25
0
Run away memory with gluster mount
*sigh* trying again to correct formatting ... apologize for the earlier mess. Having a memory issue with Gluster 3.12.4 and not sure how to troubleshoot. I don't *think* this is expected behavior. This is on an updated CentOS 7 box. The setup is a simple two node replicated layout where the two nodes act as both server and client. The volume in question: Volume Name: GlusterWWW Type:
2018 Jan 20
3
Stale locks on shards
...er investigation we found out that there were lots of images that still had an active lock held by the crashed hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all nodes. Here is the part of the statedump that shows a shard having an active lock from the crashed node: [xlator.features.locks.zone2-ssd1-vmstor1-locks.inode] path=/.shard/75353c17-d6b8-485d-9baf-fd6c700e39a1.21 mandatory=0 inodelk-count=1 lock-dump.domain.domain=zone2-ssd1-vmstor1-replicate-0:metadata lock-dump.domain.domain=zone2-ssd1-vmstor1-re...
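
A rough sketch for scanning a brick statedump for inodes that still hold locks, matching the lock section format shown in the excerpt above (the dump file path is a placeholder):

    # Print every locks section whose inodelk-count is non-zero, with its path.
    awk '/^\[xlator\.features\.locks\./ { section = $0; path = "" }
         /^path=/          { path = $0 }
         /^inodelk-count=/ { split($0, kv, "=");
                             if (kv[2] > 0) print section, path, $0 }' \
        /var/run/gluster/zone2-brick.dump   # placeholder path to the brick statedump

Entries held on behalf of the crashed hypervisor are the stale ones; whether they can then be cleared (for instance with the gluster CLI's clear-locks subcommand) is what the rest of this thread discusses and depends on the running version.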