search for: num_allocs

Displaying 19 results from an estimated 26 matches for "num_allocs".

2018 Jan 27
6
Run away memory with gluster mount
...ate it'll take several days before it > runs the box out of memory. But I took your suggestion and made some > statedumps today anyway, about 2 hours apart, 4 total so far. It looks > like there may already be some actionable information. These are the > only registers where the num_allocs have grown with each of the four > samples: > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] > ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 > ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 > ---...
2018 Jan 29
0
Run away memory with gluster mount
...l take several days before it runs >> the box out of memory. But I took your suggestion and made some statedumps >> today anyway, about 2 hours apart, 4 total so far. It looks like there may >> already be some actionable information. These are the only registers where >> the num_allocs have grown with each of the four samples: >> >> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 >> ---> num_allocs at Fri Jan 26 12:55:15 2018:...
2018 Jan 26
0
Run away memory with gluster mount
...hours or so. At that rate it'll take several days before it runs the box out of memory. But I took your suggestion and made some statedumps today anyway, about 2 hours apart, 4 total so far. It looks like there may already be some actionable information. These are the only registers where the num_allocs have grown with each of the four samples: [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 ---> num_allocs at Fri Jan 26...
2018 Jan 29
0
Run away memory with gluster mount
...ke several days before it > > runs the box out of memory. But I took your suggestion and made some > > statedumps today anyway, about 2 hours apart, 4 total so far. It looks > > like there may already be some actionable information. These are the > > only registers where the num_allocs have grown with each of the four > > samples: > > > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] > > ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > > ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 > > ---> num_allocs at Fri Jan 26...
2018 Jan 26
2
Run away memory with gluster mount
On 01/25/2018 11:04 PM, Dan Ragle wrote: > *sigh* trying again to correct formatting ... apologize for the > earlier mess. > > Having a memory issue with Gluster 3.12.4 and not sure how to > troubleshoot. I don't *think* this is expected behavior. > > This is on an updated CentOS 7 box. The setup is a simple two node > replicated layout where the two nodes act as
2018 Jan 29
2
Run away memory with gluster mount
...ays before it >>> runs the box out of memory. But I took your suggestion and made some >>> statedumps today anyway, about 2 hours apart, 4 total so far. It looks >>> like there may already be some actionable information. These are the >>> only registers where the num_allocs have grown with each of the four >>> samples: >>> >>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >>> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 >>> ---> num...
2018 Feb 02
3
Run away memory with gluster mount
...box out of memory. But I took your suggestion and made some >>>>>> statedumps today anyway, about 2 hours apart, 4 total so far. It looks >>>>>> like there may already be some actionable information. These are the >>>>>> only registers where the num_allocs have grown with each of the four >>>>>> samples: >>>>>> >>>>>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >>>>>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >>>>>> ---> num_allo...
2018 Jan 30
1
Run away memory with gluster mount
...>>> runs the box out of memory. But I took your suggestion and made some > >>> statedumps today anyway, about 2 hours apart, 4 total so far. It looks > >>> like there may already be some actionable information. These are the > >>> only registers where the num_allocs have grown with each of the four > >>> samples: > >>> > >>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] > >>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > >>> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 ...
2018 Feb 01
0
Run away memory with gluster mount
...> runs the box out of memory. But I took your suggestion and made some >>>>> statedumps today anyway, about 2 hours apart, 4 total so far. It looks >>>>> like there may already be some actionable information. These are the >>>>> only registers where the num_allocs have grown with each of the four >>>>> samples: >>>>> >>>>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >>>>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >>>>> ---> num_allocs at Fri Jan 26 10:...
2018 Feb 03
0
Run away memory with gluster mount
...statedumps today anyway, about 2 hours apart, 4 > total so far. It looks > like there may already be some actionable > information. These are the > only registers where the num_allocs have grown > with each of the four > samples: > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t > memusage] > ---> num_allocs at Fri Jan 26 08:57:31 20...
2018 Feb 21
1
Run away memory with gluster mount
...heers! > > Dan > FYI, this looks like it's fixed in 3.12.6. Ran the test setup with repeated ls listings for just shy of 48 hours with no increase in RAM usage. Next will try my production application load for awhile to see if it holds steady. The gf_dht_mt_dht_layout_t memusage num_allocs went quickly up to 105415 and then stayed there for the entire 48 hours. Thanks for the quick response, Dan >> >> On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com >> <mailto:daniel at biblestuph.com>> wrote: >> >> >> >> ...
2018 Feb 05
1
Run away memory with gluster mount
...tatedumps today anyway, about 2 hours apart, 4 > > total so far. It looks > > like there may already be some actionable > > information. These are the > > only registers where the num_allocs have grown > > with each of the four > > samples: > > > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t > > memusage] > > ---> num_al...
2017 Nov 09
1
glusterfs brick server use too high memory
.../show_bug.cgi?id=1431592 A large amount of memory is allocated for gf_common_mt_strdup. Everything else seems to be all right. My cluster: Yesterday afternoon (I'm sorry, I forgot the specific time): [features/locks.www-volume-locks - usage-type gf_common_mt_strdup memusage] size=1941483443 num_allocs=617382139 max_size=1941483443 max_num_allocs=617382139 total_allocs=661873332 Time: 2017.11.9 17:15 (Today) [features/locks.www-volume-locks - usage-type gf_common_mt_strdup memusage] size=792538295 num_allocs=752904534 max_size=792538295 max_num_allocs=752904534 total_allocs=800889589 The st...
2020 Sep 15
0
[PATCH RFC v1 09/18] x86/hyperv: provide a bunch of helper functions
Wei Liu <wei.liu at kernel.org> writes: > They are used to deposit pages into Microsoft Hypervisor and bring up > logical and virtual processors. > > Signed-off-by: Lillian Grassin-Drake <ligrassi at microsoft.com> > Signed-off-by: Sunil Muthuswamy <sunilmut at microsoft.com> > Signed-off-by: Nuno Das Neves <nudasnev at microsoft.com> >
2017 Nov 09
0
glusterfs brick server use too high memory
On 8 November 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote: > Hi all, > I'm glad to add glusterfs community. > > I have a glusterfs cluster: > Nodes: 4 > System: Centos7.1 > Glusterfs: 3.8.9 > Each Node: > CPU: 48 core > Mem: 128GB > Disk: 1*4T > > There is one Distributed Replicated volume. There are ~160 k8s pods as > clients
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all, I'm glad to add glusterfs community. I have a glusterfs cluster: Nodes: 4 System: Centos7.1 Glusterfs: 3.8.9 Each Node: CPU: 48 core Mem: 128GB Disk: 1*4T There is one Distributed Replicated volume. There are ~160 k8s pods as clients connecting to glusterfs. But, the memory of glusterfsd process is too high, gradually increase to 100G every node. Then, I reboot the glusterfsd
2011 Jul 26
0
[PATCH] Btrfs: use bytes_may_use for all ENOSPC reservations
We have been using bytes_reserved for metadata reservations, which is wrong since we use that to keep track of outstanding reservations from the allocator. This resulted in us doing a lot of silly things to make sure we don't allocate a bunch of metadata chunks since we never had a real view of how much space was actually in use by metadata. There are a lot of fixes in here to make this
2011 Jul 27
0
[PATCH] Btrfs: use bytes_may_use for all ENOSPC reservations V2
We have been using bytes_reserved for metadata reservations, which is wrong since we use that to keep track of outstanding reservations from the allocator. This resulted in us doing a lot of silly things to make sure we don't allocate a bunch of metadata chunks since we never had a real view of how much space was actually in use by metadata. There are a lot of fixes in here to make this
2009 Oct 06
1
[PATCH 2.6.32-rc3] net: VMware virtual Ethernet NIC driver: vmxnet3
Ethernet NIC driver for VMware's vmxnet3 From: Shreyas Bhatewara <sbhatewara at vmware.com> This patch adds driver support for VMware's virtual Ethernet NIC: vmxnet3 Guests running on VMware hypervisors supporting vmxnet3 device will thus have access to improved network functionalities and performance. Signed-off-by: Shreyas Bhatewara <sbhatewara at vmware.com>