search for: memusag

Displaying 19 results from an estimated 19 matches for "memusag".

2017 Nov 09
1
glusterfs brick server use too high memory
...ttps://bugzilla.redhat.com/show_bug.cgi?id=1431592 A large amount of memory allocated for gf_common_mt_strdup. Everything else seems to be all right. My cluster: Yesterday Afternoon (I'm sorry, I forgot the specific time.) [features/locks.www-volume-locks - usage-type gf_common_mt_strdup memusage] size=1941483443 num_allocs=617382139 max_size=1941483443 max_num_allocs=617382139 total_allocs=661873332 Time: 2017.11.9 17:15 (Today) [features/locks.www-volume-locks - usage-type gf_common_mt_strdup memusage] size=792538295 num_allocs=752904534 max_size=792538295 max_num_allocs=752904534 tot...
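The bracketed record quoted above is the per-allocation-type "memusage" section of a glusterfs statedump: a header naming the translator and usage type, followed by size, num_allocs, max_size, max_num_allocs and total_allocs counters. Below is a minimal Python sketch for pulling those sections out of a dump file; the dump path is only a placeholder, and the one-key=value-per-line layout is assumed from the format quoted in this thread.

#!/usr/bin/env python3
# Minimal sketch: parse the "memusage" sections of a glusterfs statedump.
# Assumes each section starts with a header such as
#   [features/locks.www-volume-locks - usage-type gf_common_mt_strdup memusage]
# followed by key=value lines (size, num_allocs, ...). The path is an example.
import re
import sys

def parse_memusage(path):
    """Return {section header: {field: int}} for every memusage section."""
    records, current = {}, None
    with open(path) as dump:
        for raw in dump:
            line = raw.strip()
            if line.startswith('[') and line.endswith('memusage]'):
                current = line.strip('[]')
                records[current] = {}
            elif line.startswith('['):
                current = None                      # some other section type
            elif current and re.match(r'^\w+=\d+$', line):
                key, value = line.split('=', 1)
                records[current][key] = int(value)
    return records

if __name__ == '__main__':
    dump_file = sys.argv[1] if len(sys.argv) > 1 else 'glusterdump.12345'   # placeholder name
    by_size = sorted(parse_memusage(dump_file).items(),
                     key=lambda item: item[1].get('size', 0), reverse=True)
    for header, fields in by_size:
        print(f"{fields.get('size', 0):>15} {fields.get('num_allocs', 0):>12}  {header}")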
2018 Jan 27
6
Run away memory with gluster mount
...> statedumps today anyway, about 2 hours apart, 4 total so far. It looks > like there may already be some actionable information. These are the > only registers where the num_allocs have grown with each of the four > samples: > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] > ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 > ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 > ---> num_allocs at Fri Jan 26 14:58:27 2018: 908 > > [mount/fuse.fuse - usage-type gf_common_mt_fd_lk_ctx_t...
2014 Dec 18
4
Samba4 on Ubuntu server
On 18/12/14 16:19, Germ van Eck wrote: > Not sure about the high CPU load, but you have the [netlogon] share > twice in your smb.conf. Your first matches mine, have you added the > second yourself? > The second one looks weird with 2 path definitions. > Cj Tibbetts wrote on Thu 18-12-2014 at 08:59 [-0700]: >> New to linux and new to Samba so any direction in troubleshooting
2018 Jan 29
0
Run away memory with gluster mount
...>> today anyway, about 2 hours apart, 4 total so far. It looks like there may >> already be some actionable information. These are the only registers where >> the num_allocs have grown with each of the four samples: >> >> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 >> ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 >> ---> num_allocs at Fri Jan 26 14:58:27 2018: 908 >> >> [mount/fuse.fuse - usage-type g...
2018 Jan 26
0
Run away memory with gluster mount
...your suggestion and made some statedumps today anyway, about 2 hours apart, 4 total so far. It looks like there may already be some actionable information. These are the only registers where the num_allocs have grown with each of the four samples: [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 ---> num_allocs at Fri Jan 26 14:58:27 2018: 908 [mount/fuse.fuse - usage-type gf_common_mt_fd_lk_ctx_t memusage] ---> num_a...
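The troubleshooting step described here, taking several statedumps a couple of hours apart and flagging the usage types whose num_allocs keeps climbing, is easy to script. The sketch below assumes the parse_memusage() helper shown after the 2017 Nov 09 result has been saved as a (hypothetical) parse_statedump.py module; the default dump file names are placeholders.

#!/usr/bin/env python3
# Sketch of the comparison described above: given several statedump files
# taken a few hours apart (oldest first), print the usage types whose
# num_allocs increased in every sample. File names are placeholders and
# parse_memusage() is the hypothetical helper sketched earlier on this page.
import sys
from parse_statedump import parse_memusage

def growing_registers(paths):
    samples = [parse_memusage(p) for p in paths]
    common = set(samples[0]).intersection(*samples[1:])
    for header in sorted(common):
        counts = [s[header].get('num_allocs', 0) for s in samples]
        if all(a < b for a, b in zip(counts, counts[1:])):   # strictly growing
            yield header, counts

if __name__ == '__main__':
    dumps = sys.argv[1:] or ['glusterdump.0857', 'glusterdump.1055',
                             'glusterdump.1255', 'glusterdump.1458']
    for header, counts in growing_registers(dumps):
        print(f'[{header}]')
        for path, count in zip(dumps, counts):
            print(f'  ---> num_allocs in {path}: {count}')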
2018 Feb 21
1
Run away memory with gluster mount
...; > Cheers! > > Dan > FYI, this looks like it's fixed in 3.12.6. Ran the test setup with repeated ls listings for just shy of 48 hours with no increase in RAM usage. Next will try my production application load for awhile to see if it holds steady. The gf_dht_mt_dht_layout_t memusage num_allocs went quickly up to 105415 and then stayed there for the entire 48 hours. Thanks for the quick response, Dan >> >> On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com >> <mailto:daniel at biblestuph.com>> wrote: >> >> >> &...
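The follow-up check reported here, confirming that gf_dht_mt_dht_layout_t settles at a fixed num_allocs rather than growing, can reuse the same hypothetical parser; the dump file names and the substring matched against the section header are assumptions.

# Watch one suspect counter across several statedumps and confirm it plateaus.
# parse_memusage() is the hypothetical helper sketched earlier; file names
# and the matched substring are examples only.
from parse_statedump import parse_memusage

SUSPECT = 'gf_dht_mt_dht_layout_t memusage'

for dump in ('glusterdump.day1', 'glusterdump.day2', 'glusterdump.day3'):
    for header, fields in parse_memusage(dump).items():
        if SUSPECT in header:
            print(f"{dump}: {header} num_allocs={fields.get('num_allocs', 0)}")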
2014 Dec 19
0
Samba4 on Ubuntu server
...es, does >"samba-tool dbcheck --cross-ncs" show any errors? > >I'd also be interested in seeing the output from ps_mem.py run every 5 >mins. You can do this by downloading the script, doing crontab -e, and >putting in a line like > >*/5 * * * * date >> /root/memusage.txt && /path/to/ps_mem.py | grep >"samba\|mbd" >> /root/memusage.txt && echo -e "\n\n\n" >> >/root/memusage.txt > >You can download ps_mem at >https://raw.githubusercontent.com/pixelb/ps_mem/master/ps_mem.py >This will give you a f...
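The crontab recipe above appends a date stamp plus the grep'ed ps_mem.py rows to /root/memusage.txt every five minutes. A rough sketch for summarizing that log afterwards follows; the assumed row layout ("private + shared = total<TAB>program") can differ between ps_mem versions, so treat the parsing as an example only.

#!/usr/bin/env python3
# Rough sketch: summarize the log written by the crontab line quoted above.
# Assumes each sample starts with one `date` line followed by ps_mem rows
# whose text after '=' looks like " 16.6 MiB<TAB>smbd (3)". Adjust the
# parsing if your ps_mem version or locale prints something different.
import re

LOG = '/root/memusage.txt'          # path used in the crontab example

def samples(path=LOG):
    stamp = None
    with open(path) as log:
        for line in log:
            line = line.rstrip()
            if not line:
                continue
            if re.match(r'^[A-Z][a-z]{2} [A-Z][a-z]{2} ', line):    # `date` output
                stamp = line
            elif '=' in line and stamp:
                total, _, program = line.split('=', 1)[1].strip().partition('\t')
                yield stamp, program.strip() or '(unknown)', total.strip()

if __name__ == '__main__':
    for stamp, program, total in samples():
        print(f'{stamp}  {program:<15} {total}')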
2018 Feb 02
3
Run away memory with gluster mount
...> like there may already be some actionable information. These are the >>>>>> only registers where the num_allocs have grown with each of the four >>>>>> samples: >>>>>> >>>>>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >>>>>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >>>>>> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 >>>>>> ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 >>>>>> ---> num_allocs at Fri Jan...
2018 Feb 03
0
Run away memory with gluster mount
...rmation. These are the > only registers where the num_allocs have grown > with each of the four > samples: > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t > memusage] > ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 > ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 > ---> n...
2018 Jan 29
0
Run away memory with gluster mount
...anyway, about 2 hours apart, 4 total so far. It looks > > like there may already be some actionable information. These are the > > only registers where the num_allocs have grown with each of the four > > samples: > > > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] > > ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > > ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 > > ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 > > ---> num_allocs at Fri Jan 26 14:58:27 2018: 908 > > > > [mount/fuse.fuse - usage-...
2018 Jan 29
2
Run away memory with gluster mount
...2 hours apart, 4 total so far. It looks >>> like there may already be some actionable information. These are the >>> only registers where the num_allocs have grown with each of the four >>> samples: >>> >>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >>> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 >>> ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 >>> ---> num_allocs at Fri Jan 26 14:58:27 2018: 908 >>> >>> [mo...
2018 Feb 01
0
Run away memory with gluster mount
...>>>>> like there may already be some actionable information. These are the >>>>> only registers where the num_allocs have grown with each of the four >>>>> samples: >>>>> >>>>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] >>>>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 >>>>> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 >>>>> ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 >>>>> ---> num_allocs at Fri Jan 26 14:58:27 201...
2018 Jan 26
2
Run away memory with gluster mount
On 01/25/2018 11:04 PM, Dan Ragle wrote: > *sigh* trying again to correct formatting ... apologize for the > earlier mess. > > Having a memory issue with Gluster 3.12.4 and not sure how to > troubleshoot. I don't *think* this is expected behavior. > > This is on an updated CentOS 7 box. The setup is a simple two node > replicated layout where the two nodes act as
2015 Oct 30
0
GUEST Memory statistics secret revealed ...
...got these values: VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON: active_balloon (1024000.000000) active_balloon (2048000.000000) VIR_DOMAIN_MEMORY_STAT_RSS: rss(1194195.312500) rss(2039269.531250) 3) By using cgroup counters "memory.usage_in_bytes" for both Guests, I got these values: memusage (1 207 207 031 250) memusage (2 834 503 906 250) How is it possible to have a balloon size equal to memcurrent? What does "memory.usage_in_bytes" stand for (it's the maximum value of the three metrics I got for each GUEST)? FYI, I use a 1.2.8 libvirt. Thanks. Regards, J.P. Ribe...
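The comparison described in this post, the balloon and RSS figures from the libvirt memory statistics versus the guest's cgroup memory.usage_in_bytes, can be reproduced with libvirt-python. A minimal sketch under stated assumptions: the domain name is a placeholder, and the cgroup path shown is the v1 layout used by older libvirt releases, which varies between distributions and versions.

#!/usr/bin/env python3
# Minimal sketch: fetch balloon size and RSS via libvirt and compare them
# with the guest's memory cgroup counter. The domain name is a placeholder
# and the cgroup v1 path differs between libvirt versions and distributions.
import libvirt

GUEST = 'guest01'                                   # placeholder domain name
CGROUP = f'/sys/fs/cgroup/memory/machine/{GUEST}.libvirt-qemu/memory.usage_in_bytes'

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName(GUEST)

stats = dom.memoryStats()                           # values reported in KiB
print('actual balloon:', stats.get('actual', 0), 'KiB')
print('rss           :', stats.get('rss', 0), 'KiB')

try:
    with open(CGROUP) as counter:
        print('cgroup usage  :', int(counter.read()) // 1024, 'KiB')
except FileNotFoundError:
    print('cgroup usage  : counter not found at', CGROUP)

conn.close()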
2018 Jan 30
1
Run away memory with gluster mount
...far. It looks > >>> like there may already be some actionable information. These are the > >>> only registers where the num_allocs have grown with each of the four > >>> samples: > >>> > >>> [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t memusage] > >>> ---> num_allocs at Fri Jan 26 08:57:31 2018: 784 > >>> ---> num_allocs at Fri Jan 26 10:55:50 2018: 831 > >>> ---> num_allocs at Fri Jan 26 12:55:15 2018: 877 > >>> ---> num_allocs at Fri Jan 26 14:58:27 2018: 908 > >...
2018 Feb 05
1
Run away memory with gluster mount
...t; only registers where the num_allocs have grown > > with each of the four > > samples: > > > > [mount/fuse.fuse - usage-type gf_fuse_mt_gids_t > > memusage] > > ---> num_allocs at Fri Jan 26 08:57:31 2018: > > 784 > > ---> num_allocs at Fri Jan 26 10:55:50 2018: > > 831 > > ---...
2017 Nov 09
0
glusterfs brick server use too high memory
On 8 November 2017 at 17:16, Yao Guotao <yaoguo_tao at 163.com> wrote: > Hi all, > I'm glad to join the glusterfs community. > > I have a glusterfs cluster: > Nodes: 4 > System: Centos7.1 > Glusterfs: 3.8.9 > Each Node: > CPU: 48 core > Mem: 128GB > Disk: 1*4T > > There is one Distributed Replicated volume. There are ~160 k8s pods as > clients
2014 Dec 22
2
Samba4 on Ubuntu server
...dbcheck --cross-ncs" show any errors? > > > >I'd also be interested in seeing the output from ps_mem.py ran every 5 > >mins. You can do this by downloading the script, doing crontab -e, and > >putting in a line like > > > >*/5 * * * * date >> /root/memusage.txt && /path/to/ps_mem.py | grep > >"samba\|mbd" >> /root/memusage.txt && echo -e "\n\n\n" >> > >/root/memusage.txt > > > >You can download ps_mem at > >https://raw.githubusercontent.com/pixelb/ps_mem/master/ps_mem.py &g...
2017 Nov 08
2
glusterfs brick server use too high memory
Hi all, I'm glad to join the glusterfs community. I have a glusterfs cluster: Nodes: 4 System: Centos7.1 Glusterfs: 3.8.9 Each Node: CPU: 48 core Mem: 128GB Disk: 1*4T There is one Distributed Replicated volume. There are ~160 k8s pods as clients connecting to glusterfs. But the memory of the glusterfsd process is too high, gradually increasing to 100G on every node. Then, I reboot the glusterfsd