similar to: glusterfs brick server use too high memory

Displaying 20 results from an estimated 300 matches similar to: "glusterfs brick server use too high memory"

2017 Nov 09
0
glusterfs brick server use too high memory
On 8 November 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote: > Hi all, > I'm glad to join the glusterfs community. > > I have a glusterfs cluster: > Nodes: 4 > System: Centos7.1 > Glusterfs: 3.8.9 > Each Node: > CPU: 48 core > Mem: 128GB > Disk: 1*4T > > There is one Distributed Replicated volume. There are ~160 k8s pods as > clients
2017 Nov 09
1
glusterfs brick server use too high memory
Thank you very much for your reply. I think I found the key to the problem, but I'd like you to confirm whether this is the issue. My gluster version is 3.8.9, and I found a bug in it that was fixed in 3.8.11: https://bugzilla.redhat.com/show_bug.cgi?id=1431592 A large amount of memory is allocated for gf_common_mt_strdup. Everything else seems to be all right. My cluster:
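The message above points at one memory-accounting type dominating a statedump. As a rough way to confirm that kind of diagnosis, one can grep the per-type memusage sections of a statedump and sort them by size. The excerpt below is invented for illustration only (the section layout follows gluster's statedump conventions; the sizes are made up):

```shell
# Create a toy excerpt in the statedump memory-accounting layout
# (field names follow gluster's statedump format; values are invented).
cat > /tmp/glusterdump.example <<'EOF'
[global.glusterfs - Memory usage]
num_types=2

[global.glusterfs - usage-type gf_common_mt_strdup memusage]
size=1073741824
num_allocs=2500000

[global.glusterfs - usage-type gf_common_mt_asprintf memusage]
size=1048576
num_allocs=2048
EOF

# Pair each usage-type header with its size line and list the
# biggest consumers first.
grep -A1 'usage-type' /tmp/glusterdump.example \
  | grep -E 'usage-type|size=' \
  | paste - - \
  | sort -t= -k2 -rn
```

With real dumps the same pipeline quickly shows whether one type (here gf_common_mt_strdup) dwarfs the rest.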
2018 Jan 27
6
Run away memory with gluster mount
On 01/27/2018 02:29 AM, Dan Ragle wrote: > > On 1/25/2018 8:21 PM, Ravishankar N wrote: >> >> >> On 01/25/2018 11:04 PM, Dan Ragle wrote: >>> *sigh* trying again to correct formatting ... apologize for the >>> earlier mess. >>> >>> Having a memory issue with Gluster 3.12.4 and not sure how to >>> troubleshoot. I don't
2018 Jan 29
0
Run away memory with gluster mount
Csaba, Could this be the problem of the inodes not getting freed in the fuse process? Daniel, as Ravi requested, please provide access to the statedumps. You can strip out the filepath information. Does your data set include a lot of directories? Thanks, Nithya On 27 January 2018 at 10:23, Ravishankar N <ravishankar at redhat.com> wrote: > > > On 01/27/2018 02:29 AM, Dan Ragle
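For reference, statedumps of the kind requested here are typically produced with the gluster CLI for brick processes and SIGUSR1 for the fuse client. This is a sketch only: the volume name, the pgrep pattern, and the output paths are hypothetical, not taken from the thread, and the commands need a live gluster installation.

```shell
# Brick processes: dumps normally land under /var/run/gluster/
# (volume name "gvol1" is a placeholder).
gluster volume statedump gvol1

# Fuse client: SIGUSR1 makes the mount process write a statedump.
kill -USR1 "$(pgrep -f 'glusterfs.*fuse')"

# Strip file-path lines before sharing, as suggested above.
grep -v '^path=' /var/run/gluster/*.dump.* > statedumps-scrubbed.txt
```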
2018 Jan 26
2
Run away memory with gluster mount
On 01/25/2018 11:04 PM, Dan Ragle wrote: > *sigh* trying again to correct formatting ... apologize for the > earlier mess. > > Having a memory issue with Gluster 3.12.4 and not sure how to > troubleshoot. I don't *think* this is expected behavior. > > This is on an updated CentOS 7 box. The setup is a simple two node > replicated layout where the two nodes act as
2019 Feb 01
1
Help analise statedumps
Hi, I have a 3x replicated cluster running 4.1.7 on ubuntu 16.04.5, all 3 replicas are also clients hosting a Node.js/Nginx web server. The current configuration is as such: Volume Name: gvol1 Type: Replicate Volume ID: XXXXXX Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: vm000000:/srv/brick1/gvol1 Brick2: vm000001:/srv/brick1/gvol1 Brick3:
2018 Jan 29
2
Run away memory with gluster mount
On 1/29/2018 2:36 AM, Raghavendra Gowdappa wrote: > > > ----- Original Message ----- >> From: "Ravishankar N" <ravishankar at redhat.com> >> To: "Dan Ragle" <daniel at Biblestuph.com>, gluster-users at gluster.org >> Cc: "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>,
2018 Jan 29
0
Run away memory with gluster mount
----- Original Message ----- > From: "Ravishankar N" <ravishankar at redhat.com> > To: "Dan Ragle" <daniel at Biblestuph.com>, gluster-users at gluster.org > Cc: "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>, "Nithya Balachandran" <nbalacha at redhat.com>, > "Raghavendra
2018 Feb 02
3
Run away memory with gluster mount
Hi Dan, It sounds like you might be running into [1]. The patch has been posted upstream and the fix should be in the next release. In the meantime, I'm afraid there is no way to get around this without restarting the process. Regards, Nithya [1]https://bugzilla.redhat.com/show_bug.cgi?id=1541264 On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com> wrote: > >
2018 Jan 26
0
Run away memory with gluster mount
On 1/25/2018 8:21 PM, Ravishankar N wrote: > > > On 01/25/2018 11:04 PM, Dan Ragle wrote: >> *sigh* trying again to correct formatting ... apologize for the earlier mess. >> >> Having a memory issue with Gluster 3.12.4 and not sure how to troubleshoot. I don't *think* this is expected behavior. >> >> This is on an updated CentOS 7 box. The setup is a
2018 Feb 03
0
Run away memory with gluster mount
On 2/2/2018 2:13 AM, Nithya Balachandran wrote: > Hi Dan, > > It sounds like you might be running into [1]. The patch has been posted > upstream and the fix should be in the next release. > In the meantime, I'm afraid there is no way to get around this without > restarting the process. > > Regards, > Nithya > >
2018 Jan 30
1
Run away memory with gluster mount
----- Original Message ----- > From: "Dan Ragle" <daniel at Biblestuph.com> > To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com> > Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>, "Nithya >
2018 Feb 01
0
Run away memory with gluster mount
On 1/30/2018 6:31 AM, Raghavendra Gowdappa wrote: > > > ----- Original Message ----- >> From: "Dan Ragle" <daniel at Biblestuph.com> >> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com> >> Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>,
2018 Feb 21
1
Run away memory with gluster mount
On 2/3/2018 8:58 AM, Dan Ragle wrote: > > > On 2/2/2018 2:13 AM, Nithya Balachandran wrote: >> Hi Dan, >> >> It sounds like you might be running into [1]. The patch has been >> posted upstream and the fix should be in the next release. >> In the meantime, I'm afraid there is no way to get around this without >> restarting the process. >>
2018 Feb 05
1
Run away memory with gluster mount
Hi Dan, I had a suggestion and a question in my previous response. Let us know whether the suggestion helps and please let us know about your data-set (like how many directories/files and how these directories/files are organised) to understand the problem better. <snip> > In the > meantime can you remount glusterfs with options > --entry-timeout=0 and
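The remount suggestion quoted above (the snippet is truncated after --entry-timeout=0) might look roughly like the following. Only the --entry-timeout=0 flag comes from the message; the server, volume, and mountpoint names are placeholders, and it assumes a cluster to mount against:

```shell
# Placeholder names throughout; only --entry-timeout=0 is from the thread.
umount /mnt/gluster
glusterfs --volfile-server=server1 --volfile-id=gvol1 \
  --entry-timeout=0 /mnt/gluster
```

Disabling the entry timeout forces fresh lookups through the fuse layer instead of serving cached dentries, which is why it was suggested as a diagnostic here.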
2018 Jan 25
0
Run away memory with gluster mount
*sigh* trying again to correct formatting ... apologize for the earlier mess. Having a memory issue with Gluster 3.12.4 and not sure how to troubleshoot. I don't *think* this is expected behavior. This is on an updated CentOS 7 box. The setup is a simple two node replicated layout where the two nodes act as both server and client. The volume in question: Volume Name: GlusterWWW Type:
2018 Jan 25
2
Run away memory with gluster mount
Having a memory issue with Gluster 3.12.4 and not sure how to troubleshoot. I don't *think* this is expected behavior. This is on an updated CentOS 7 box. The setup is a simple two node replicated layout where the two nodes act as both server and client. The volume in question: Volume Name: GlusterWWW Type: Replicate Volume ID: 8e9b0e79-f309-4d9b-a5bb-45d065faaaa3 Status: Started Snapshot
2017 Nov 09
8
[Bug 1201] New: Some filters randomly do not work since version 0.8
https://bugzilla.netfilter.org/show_bug.cgi?id=1201 Bug ID: 1201 Summary: Some filters randomly do not work since version 0.8 Product: nftables Version: unspecified Hardware: x86_64 OS: Gentoo Status: NEW Severity: major Priority: P5 Component: nft Assignee: pablo at
2017 Aug 25
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
On Thu, Aug 24, 2017 at 9:01 AM, Krist van Besien <krist at redhat.com> wrote: > Hi > This is gluster 3.8.4. Volume options are out of the box. Sharding is off > (and I don't think enabling it would matter) > > I haven't done much performance tuning. For one thing, using a simple > script that just creates files I can easily flood the network, so I don't >
2017 Nov 09
2
Adding a slack for communication?
> On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda <amye at redhat.com> wrote: > From today's community meeting, we had an item from the issue queue: > https://github.com/gluster/community/issues/13 > > Should we have a Gluster Community slack team? I'm interested in