
Displaying 20 results from an estimated 49 matches for "statedumps".

2018 Jan 29
0
Run away memory with gluster mount
...voke the oom killer. > >> > >> Can you try debugging with the statedump > >> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) > >> of > >> the fuse mount process and see what member is leaking? Take the > >> statedumps in succession, maybe once initially during the I/O and > >> once the memory gets high enough to hit the OOM mark. > >> Share the dumps here. > >> > >> Regards, > >> Ravi > > > > Thanks for the reply. I noticed yesterday that an update (3.12.5...
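
For reference, the statedump advice quoted throughout this thread boils down to something like the following sketch for a fuse mount. It assumes the default dump directory /var/run/gluster and uses the GlusterWWW volume name mentioned later in the thread; adjust both for your setup.

    # Find the glusterfs client (fuse mount) process for the volume.
    FUSE_PID=$(pgrep -f 'glusterfs.*GlusterWWW' | head -n 1)

    # SIGUSR1 asks a glusterfs process to write a statedump; by default the
    # dump lands under /var/run/gluster (the exact path can vary by build).
    kill -USR1 "$FUSE_PID"

    # Take another dump later, once RSS is close to the OOM threshold,
    # then compare the two files.
    ls -lt /var/run/gluster/
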
2017 Nov 09
1
glusterfs brick server use too high memory
...d volume. There are ~160 k8s pods as clients connecting to glusterfs. But the memory of the glusterfsd process is too high, gradually increasing to 100G on every node. Then I restart the glusterfsd process, but the memory climbs back up over approximately a week. How can I debug the problem? Hi, Please take statedumps at intervals (a minimum of 2, an hour apart) of a brick process for which you see the memory increasing and send them to us. [1] describes how to take statedumps. Regards, Nithya [1] http://docs.gluster.org/en/latest/Troubleshooting/statedump/ Thanks. ___________________...
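
The interval-based brick statedumps requested here would look roughly like the sketch below; the volume name is a placeholder and the dump directory assumes the default.

    VOLNAME=myvol   # placeholder; use the affected volume's name

    # First statedump of the volume's brick processes.
    gluster volume statedump "$VOLNAME"

    # A minimum of two dumps, roughly an hour apart, while memory is growing.
    sleep 3600
    gluster volume statedump "$VOLNAME"

    # Dumps are written on each brick node, by default under /var/run/gluster;
    # the location is configurable via the server.statedump-path volume option.
    ls -lt /var/run/gluster/
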
2017 Nov 09
0
glusterfs brick server use too high memory
...s as > clients connecting to glusterfs. But, the memory of glusterfsd process is > too high, gradually increase to 100G every node. > Then, I reboot the glusterfsd process. But the memory increase during > approximate a week. > How can I debug the problem? > > Hi, Please take statedumps at intervals (a minimum of 2 at intervals of an hour) of a brick process for which you see the memory increasing and send them to us. [1] describes how to take statedumps. Regards, Nithya [1] http://docs.gluster.org/en/latest/Troubleshooting/statedump/ <http://docs.gluster.org/en/latest/Troub...
2018 Jan 29
2
Run away memory with gluster mount
...r. >>>> >>>> Can you try debugging with the statedump >>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) >>>> of >>>> the fuse mount process and see what member is leaking? Take the >>>> statedumps in succession, maybe once initially during the I/O and >>>> once the memory gets high enough to hit the OOM mark. >>>> Share the dumps here. >>>> >>>> Regards, >>>> Ravi >>> >>> Thanks for the reply. I noticed yesterday t...
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all, I'm glad to join the glusterfs community. I have a glusterfs cluster: Nodes: 4 System: Centos7.1 Glusterfs: 3.8.9 Each Node: CPU: 48 core Mem: 128GB Disk: 1*4T There is one Distributed Replicated volume. There are ~160 k8s pods as clients connecting to glusterfs. But the memory of the glusterfsd process is too high, gradually increasing to 100G on every node. Then I restart the glusterfsd
2018 Jan 27
6
Run away memory with gluster mount
...;>> high enough to invoke the oom killer. >> >> Can you try debugging with the statedump >> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) >> of >> the fuse mount process and see what member is leaking? Take the >> statedumps in succession, maybe once initially during the I/O and >> once the memory gets high enough to hit the OOM mark. >> Share the dumps here. >> >> Regards, >> Ravi > > Thanks for the reply. I noticed yesterday that an update (3.12.5) had > been posted so I went a...
2018 Feb 03
0
Run away memory with gluster mount
...<https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump>) > of > the fuse mount process and see what member > is leaking? Take the > statedumps in succession, maybe once > initially during the I/O and > once the memory gets high enough to hit the > OOM mark. > Share the dumps here. > >...
2018 Feb 02
3
Run away memory with gluster mount
...>>>>>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/st >>>>>>> atedump/#read-a-statedump) >>>>>>> of >>>>>>> the fuse mount process and see what member is leaking? Take the >>>>>>> statedumps in succession, maybe once initially during the I/O and >>>>>>> once the memory gets high enough to hit the OOM mark. >>>>>>> Share the dumps here. >>>>>>> >>>>>>> Regards, >>>>>>> Ravi >>&...
2018 Feb 21
1
Run away memory with gluster mount
....readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump>) >> >> of >> the fuse mount process and see what member >> is leaking? Take the >> statedumps in succession, maybe once >> initially during the I/O and >> once the memory gets high enough to hit the >> OOM mark. >> Share the dumps here. >> >> ...
2018 Jan 30
1
Run away memory with gluster mount
...>>>> Can you try debugging with the statedump > >>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) > >>>> of > >>>> the fuse mount process and see what member is leaking? Take the > >>>> statedumps in succession, maybe once initially during the I/O and > >>>> once the memory gets high enough to hit the OOM mark. > >>>> Share the dumps here. > >>>> > >>>> Regards, > >>>> Ravi > >>> > >>> Thanks f...
2018 Feb 05
1
Run away memory with gluster mount
...luster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump>) > > of > > the fuse mount process and see what member > > is leaking? Take the > > statedumps in succession, maybe once > > initially during the I/O and > > once the memory gets high enough to hit the > > OOM mark. > > Share the dumps here. > > >...
2019 Feb 01
1
Help analyse statedumps
...performance.io-thread-count: 8 server.allow-insecure: on cluster.read-hash-mode: 0 cluster.lookup-unhashed: auto cluster.choose-local: on I believe there's a memory leak somewhere; it just keeps going up until it hangs one or more nodes, sometimes taking the whole cluster down. I have taken 2 statedumps on one of the nodes: one where the memory is too high and another just after a reboot, with the app running and the volume fully healed. https://pmcdigital.sharepoint.com/:u:/g/EYDsNqTf1UdEuE6B0ZNVPfIBf_I-AbaqHotB1lJOnxLlTg?e=boYP09 (high memory) https://pmcdigital.sharepoint.com/:u:/g/EWZBsnET2xB...
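
As a rough sketch of how two such dumps can be compared, assuming the documented memusage section layout (size=/num_allocs= lines under a "... memusage]" header) and hypothetical filenames for the two dumps:

    # Hypothetical filenames for the two dumps linked above.
    HIGH=statedump.high-memory
    FRESH=statedump.after-reboot

    # Pair every "size=" counter with the memusage section header above it and
    # sort numerically, so the largest allocation types end up at the bottom.
    top_memusage() {
        awk '/ memusage\]$/ { sec = $0 }
             /^size=/       { v = $0; sub(/^size=/, "", v); printf "%12s  %s\n", v, sec }' "$1" |
        sort -n | tail -n 20
    }

    top_memusage "$FRESH"
    top_memusage "$HIGH"
    # Usage-types that are large in the high-memory dump but small in the
    # fresh one are the leak candidates worth reporting.
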
2018 Feb 01
0
Run away memory with gluster mount
...> Can you try debugging with the statedump >>>>>> (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) >>>>>> of >>>>>> the fuse mount process and see what member is leaking? Take the >>>>>> statedumps in succession, maybe once initially during the I/O and >>>>>> once the memory gets high enough to hit the OOM mark. >>>>>> Share the dumps here. >>>>>> >>>>>> Regards, >>>>>> Ravi >>>>> >>...
2018 Jan 29
0
Run away memory with gluster mount
Csaba, Could this be the problem of the inodes not getting freed in the fuse process? Daniel, as Ravi requested, please provide access to the statedumps. You can strip out the filepath information. Does your data set include a lot of directories? Thanks, Nithya On 27 January 2018 at 10:23, Ravishankar N <ravishankar at redhat.com> wrote: > > > On 01/27/2018 02:29 AM, Dan Ragle wrote: > >> >> On 1/25/2018 8:21 PM, R...
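
A hedged sketch of the filepath stripping mentioned here, assuming paths show up on "path=" lines (grep for a known directory name afterwards to confirm nothing slipped through); the dump filename and the /var/www check are hypothetical:

    # Redact anything that looks like a filepath before sharing the dump.
    sed -E 's|^(path=).*|\1<redacted>|' glusterdump.12345.dump.1517000000 > statedump.redacted
    grep -n '/var/www' statedump.redacted || echo "no obvious paths left"
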
2018 Jan 26
2
Run away memory with gluster mount
...n load it takes about a week or so for the memory to get > high enough to invoke the oom killer. Can you try debugging with the statedump (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) of the fuse mount process and see what member is leaking? Take the statedumps in succession, maybe once initially during the I/O and once the memory gets high enough to hit the OOM mark. Share the dumps here. Regards, Ravi > > Is there potentially something misconfigured here? > > I did see a reference to a memory leak in another thread in this list, > but...
2018 Jan 26
0
Run away memory with gluster mount
...s about a week or so for the memory to get >> high enough to invoke the oom killer. > > Can you try debugging with the statedump (https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/#read-a-statedump) of > the fuse mount process and see what member is leaking? Take the statedumps in succession, maybe once initially during the I/O and > once the memory gets high enough to hit the OOM mark. > Share the dumps here. > > Regards, > Ravi Thanks for the reply. I noticed yesterday that an update (3.12.5) had been posted so I went ahead and updated and repeated the t...
2017 Aug 25
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
On Thu, Aug 24, 2017 at 9:01 AM, Krist van Besien <krist at redhat.com> wrote: > Hi > This is gluster 3.8.4. Volume options are out of the box. Sharding is off > (and I don't think enabling it would matter) > > I haven't done much performance tuning. For one thing, using a simple > script that just creates files I can easily flood the network, so I don't >
2017 Sep 22
0
BUG: After stop and start wrong port is advertised
I've already replied to your earlier email. In case you've not seen it in your mailbox, here it goes: This looks like a bug to me. For some reason glusterd's portmap is referring to a stale port (IMO), whereas the brick is still listening on the correct port. But ideally, when the glusterd service is restarted, the in-memory portmap is rebuilt. I'd request the following details from
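
A quick way to see the mismatch being described, i.e. the port glusterd's portmap advertises versus the port the brick process actually listens on (the volume name is a placeholder):

    VOLNAME=myvol   # placeholder volume name

    # Port glusterd advertises for each brick (the portmap view).
    gluster volume status "$VOLNAME"

    # Port(s) the brick process (glusterfsd) is actually listening on.
    ss -tlnp | grep glusterfsd

    # The reply notes that restarting glusterd should rebuild the in-memory
    # portmap if the two disagree after a stop/start.
    systemctl restart glusterd
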
2018 Jan 25
0
Run away memory with gluster mount
*sigh* trying again to correct formatting ... apologize for the earlier mess. Having a memory issue with Gluster 3.12.4 and not sure how to troubleshoot. I don't *think* this is expected behavior. This is on an updated CentOS 7 box. The setup is a simple two node replicated layout where the two nodes act as both server and client. The volume in question: Volume Name: GlusterWWW Type:
2018 Jan 20
3
Stale locks on shards
Hi all! One hypervisor in our virtualization environment crashed and now some of the VM images cannot be accessed. After investigation we found out that there were lots of images that still had an active lock held by the crashed hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all
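
Since the rest of this page is about statedumps, it is worth noting that a brick statedump also lists the locks held on each inode, which is one way to identify the stale holders from the crashed hypervisor. A hedged sketch with a placeholder volume name:

    VOLNAME=myvol   # placeholder volume name

    # Dump the state of the brick processes; the dumps include per-inode
    # lock information (granted/blocked entries from the locks translator).
    gluster volume statedump "$VOLNAME"

    # Look for lock entries belonging to the crashed client.
    grep -iE 'inodelk|posixlk|blocked|granted' /var/run/gluster/*.dump.* | less
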