Displaying 20 results from an estimated 26 matches for "num_allocations".
2018 Jan 27
6
Run away memory with gluster mount
On 01/27/2018 02:29 AM, Dan Ragle wrote:
>
> On 1/25/2018 8:21 PM, Ravishankar N wrote:
>>
>>
>> On 01/25/2018 11:04 PM, Dan Ragle wrote:
>>> *sigh* trying again to correct formatting ... apologize for the
>>> earlier mess.
>>>
>>> Having a memory issue with Gluster 3.12.4 and not sure how to
>>> troubleshoot. I don't
2018 Jan 29
0
Run away memory with gluster mount
Csaba,
Could this be the problem of the inodes not getting freed in the fuse
process?
Daniel,
as Ravi requested, please provide access to the statedumps. You can strip
out the filepath information.
Does your data set include a lot of directories?
Thanks,
Nithya
On 27 January 2018 at 10:23, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 01/27/2018 02:29 AM, Dan Ragle
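For reference, on fuse clients of this era a statedump is generated by sending SIGUSR1 to the glusterfs client process, and the dump is written under /var/run/gluster by default (the exact path can vary by build). A minimal sketch, assuming a single glusterfs mount process:

    pgrep -af glusterfs    # identify the fuse client process for the mount
    kill -USR1 <pid>       # request a statedump, written under /var/run/gluster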
2018 Jan 26
0
Run away memory with gluster mount
On 1/25/2018 8:21 PM, Ravishankar N wrote:
>
>
> On 01/25/2018 11:04 PM, Dan Ragle wrote:
>> *sigh* trying again to correct formatting ... apologize for the earlier mess.
>>
>> Having a memory issue with Gluster 3.12.4 and not sure how to troubleshoot. I don't *think* this is expected behavior.
>>
>> This is on an updated CentOS 7 box. The setup is a
2018 Jan 29
0
Run away memory with gluster mount
----- Original Message -----
> From: "Ravishankar N" <ravishankar at redhat.com>
> To: "Dan Ragle" <daniel at Biblestuph.com>, gluster-users at gluster.org
> Cc: "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>, "Nithya Balachandran" <nbalacha at redhat.com>,
> "Raghavendra
2018 Jan 26
2
Run away memory with gluster mount
On 01/25/2018 11:04 PM, Dan Ragle wrote:
> *sigh* trying again to correct formatting ... apologize for the
> earlier mess.
>
> Having a memory issue with Gluster 3.12.4 and not sure how to
> troubleshoot. I don't *think* this is expected behavior.
>
> This is on an updated CentOS 7 box. The setup is a simple two node
> replicated layout where the two nodes act as
2018 Jan 29
2
Run away memory with gluster mount
On 1/29/2018 2:36 AM, Raghavendra Gowdappa wrote:
>
>
> ----- Original Message -----
>> From: "Ravishankar N" <ravishankar at redhat.com>
>> To: "Dan Ragle" <daniel at Biblestuph.com>, gluster-users at gluster.org
>> Cc: "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>,
2018 Feb 02
3
Run away memory with gluster mount
Hi Dan,
It sounds like you might be running into [1]. The patch has been posted
upstream and the fix should be in the next release.
In the meantime, I'm afraid there is no way to get around this without
restarting the process.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1541264
On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com> wrote:
>
>
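In other words, the only mitigation until the fix lands is to cycle the fuse mount and let the client process restart. A minimal sketch, with a hypothetical volume myvol on server1 mounted at /mnt/myvol:

    umount /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol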
2018 Jan 30
1
Run away memory with gluster mount
----- Original Message -----
> From: "Dan Ragle" <daniel at Biblestuph.com>
> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>
> Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>, "Nithya
>
2018 Feb 01
0
Run away memory with gluster mount
On 1/30/2018 6:31 AM, Raghavendra Gowdappa wrote:
>
>
> ----- Original Message -----
>> From: "Dan Ragle" <daniel at Biblestuph.com>
>> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>
>> Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>,
2018 Feb 03
0
Run away memory with gluster mount
On 2/2/2018 2:13 AM, Nithya Balachandran wrote:
> Hi Dan,
>
> It sounds like you might be running into [1]. The patch has been posted
> upstream and the fix should be in the next release.
> In the meantime, I'm afraid there is no way to get around this without
> restarting the process.
>
> Regards,
> Nithya
>
>
2018 Feb 21
1
Run away memory with gluster mount
On 2/3/2018 8:58 AM, Dan Ragle wrote:
>
>
> On 2/2/2018 2:13 AM, Nithya Balachandran wrote:
>> Hi Dan,
>>
>> It sounds like you might be running into [1]. The patch has been
>> posted upstream and the fix should be in the next release.
>> In the meantime, I'm afraid there is no way to get around this without
>> restarting the process.
>>
2018 Feb 05
1
Run away memory with gluster mount
Hi Dan,
I had a suggestion and a question in my previous response. Let us know whether the suggestion helps, and please tell us about your data set (how many directories/files there are and how they are organised) so we can understand the problem better.
<snip>
> In the
> meantime can you remount glusterfs with options
> --entry-timeout=0 and
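A sketch of what such a remount might look like, assuming a hypothetical volume myvol served from server1; only the entry-timeout option comes from the message above, the rest is illustrative:

    umount /mnt/myvol
    glusterfs --volfile-server=server1 --volfile-id=myvol --entry-timeout=0 /mnt/myvol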
2017 Nov 09
1
glusterfs brick server use too high memory
Thank you very much for your reply.
I think I found the key to the problem, but I'd like you to confirm whether this is the issue.
My gluster version is 3.8.9, and I found there is a bug in it that was fixed in 3.8.11:
https://bugzilla.redhat.com/show_bug.cgi?id=1431592
A large amount of memory is allocated for gf_common_mt_strdup. Everything else seems to be all right.
My cluster:
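To see which allocation type dominates, the statedump's memusage sections can be grepped; each section records size and num_allocs counters for one memory type. A minimal sketch, assuming dumps in the default /var/run/gluster location:

    grep -A4 'gf_common_mt_strdup memusage' /var/run/gluster/*.dump.*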
2020 Sep 15
0
[PATCH RFC v1 09/18] x86/hyperv: provide a bunch of helper functions
...with type checking.
> +
> +/*
> + * Deposits exact number of pages
> + * Must be called with interrupts enabled
> + * Max 256 pages
> + */
> +int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages)
> +{
> + struct page **pages;
> + int *counts;
> + int num_allocations;
> + int i, j, page_count;
> + int order;
> + int desired_order;
> + int status;
> + int ret;
> + u64 base_pfn;
> + struct hv_deposit_memory *input_page;
> + unsigned long flags;
> +
> + if (num_pages > HV_DEPOSIT_MAX)
> + return -EINVAL;
> + if (!num_pages)...
2017 Nov 09
0
glusterfs brick server use too high memory
On 8 November 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote:
> Hi all,
> I'm glad to join the glusterfs community.
>
> I have a glusterfs cluster:
> Nodes: 4
> System: Centos7.1
> Glusterfs: 3.8.9
> Each Node:
> CPU: 48 core
> Mem: 128GB
> Disk: 1*4T
>
> There is one Distributed Replicated volume. There are ~160 k8s pods as
> clients
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all,
I'm glad to join the glusterfs community.
I have a glusterfs cluster:
Nodes: 4
System: Centos7.1
Glusterfs: 3.8.9
Each Node:
CPU: 48 core
Mem: 128GB
Disk: 1*4T
There is one Distributed Replicated volume. There are ~160 k8s pods as clients connecting to glusterfs. But the memory usage of the glusterfsd process is too high, gradually increasing to 100G on every node.
Then, I reboot the glusterfsd
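For brick processes such as glusterfsd, a statedump can be requested through the CLI instead of signalling each process; on this release line the following should write dumps under /var/run/gluster on each brick node (volume name hypothetical):

    gluster volume statedump myvol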
2011 Jul 26
0
[PATCH] Btrfs: use bytes_may_use for all ENOSPC reservations
We have been using bytes_reserved for metadata reservations, which is wrong
since we use that to keep track of outstanding reservations from the allocator.
This resulted in us doing a lot of silly things to make sure we don't allocate a
bunch of metadata chunks since we never had a real view of how much space was
actually in use by metadata.
There are a lot of fixes in here to make this
2011 Jul 27
0
[PATCH] Btrfs: use bytes_may_use for all ENOSPC reservations V2
We have been using bytes_reserved for metadata reservations, which is wrong
since we use that to keep track of outstanding reservations from the allocator.
This resulted in us doing a lot of silly things to make sure we don't allocate a
bunch of metadata chunks since we never had a real view of how much space was
actually in use by metadata.
There are a lot of fixes in here to make this
2009 Oct 06
1
[PATCH 2.6.32-rc3] net: VMware virtual Ethernet NIC driver: vmxnet3
Ethernet NIC driver for VMware's vmxnet3
From: Shreyas Bhatewara <sbhatewara at vmware.com>
This patch adds driver support for VMware's virtual Ethernet NIC: vmxnet3
Guests running on VMware hypervisors supporting vmxnet3 device will thus have
access to improved network functionalities and performance.
Signed-off-by: Shreyas Bhatewara <sbhatewara at vmware.com>