Displaying 4 results from an estimated 4 matches for "batchcount".
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
...AS: 43236 kB
PageTables: 468 kB
VmallocTotal: 696312 kB
VmallocUsed: 856 kB
VmallocChunk: 695416 kB
/proc/slabinfo
slabinfo - version: 2.1
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
rpc_buffers            8      8   2048    2    1 : tunables   24   12    8 : slabdata      4      4      0
rpc_tasks              8     15    256   15    1 : tunables  120   60    8 : slabdata      1      1      0
rpc_inode_cache        0      0...
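The batchcount field the search matched is one of the per-cache tunables in the slabinfo header above. For tracking growth like the 128MB/day reported here, the useful columns are <num_objs> and <objsize>; a minimal sketch (Python, assuming the standard /proc/slabinfo 2.1 layout shown above rather than anything specific to the original post) that ranks caches by approximate footprint:

#!/usr/bin/env python3
# Rough sketch: rank slab caches by the memory they pin, using the same
# columns shown in the slabinfo excerpt above (num_objs * objsize per cache).
# Reading /proc/slabinfo typically requires root; nothing here is taken
# from the original thread.

def read_slabinfo(path="/proc/slabinfo"):
    caches = []
    with open(path) as f:
        for line in f:
            if line.startswith(("slabinfo", "#")):
                continue                      # skip the version and header lines
            fields = line.split()
            name = fields[0]
            num_objs = int(fields[2])         # <num_objs>
            objsize = int(fields[3])          # <objsize> in bytes
            caches.append((name, num_objs * objsize))
    return caches

if __name__ == "__main__":
    top = sorted(read_slabinfo(), key=lambda c: c[1], reverse=True)[:10]
    for name, nbytes in top:
        print(f"{name:<24} {nbytes / 1024 / 1024:8.1f} MB")

Comparing two such snapshots a day apart would show which caches account for the reported growth.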
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our Lustre 1.8.1 clients.
It looks like when they submitted a single job at a time, the run time was about
4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a single
client node with 192GB of memory, the run time for each job exceeded 3-4X
that of the single job. They also noticed that the swap space
2012 Nov 15
3
Likely mem leak in 3.7
Starting with 3.7-rc1, my workstation seems to lose RAM.
Up until (and including) 3.6, used-(buffers+cached) was roughly the same
as sum(rss) (taking shared into account). Now there is an approx 6GB gap.
When the box first starts, it is clearly less swappy than with <= 3.6; I
can't tell whether that is related. The reduced swappiness persists.
It seems to get worse when I update
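The used-(buffers+cached) versus sum(rss) comparison in this report can be reproduced straight from /proc. A rough sketch (Python; the field names are the standard /proc/meminfo and /proc/[pid]/status keys, and summing VmRSS over-counts shared pages, which is the "taking shared into account" caveat):

#!/usr/bin/env python3
# Rough reproduction of the accounting the poster describes: compare
# MemTotal - MemFree - Buffers - Cached against the sum of per-process RSS.
# Summing VmRSS double-counts shared pages, so sum(rss) is only an upper bound.

import glob

def meminfo_kb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])       # values are in kB
    return info

def total_rss_kb():
    total = 0
    for status in glob.glob("/proc/[0-9]*/status"):
        try:
            with open(status) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total += int(line.split()[1])
                        break
        except OSError:
            pass                                    # process exited while scanning
    return total

if __name__ == "__main__":
    mi = meminfo_kb()
    used = mi["MemTotal"] - mi["MemFree"] - mi["Buffers"] - mi["Cached"]
    rss = total_rss_kb()
    print(f"used - (buffers+cached): {used / 1024:10.1f} MB")
    print(f"sum(rss):                {rss / 1024:10.1f} MB")
    print(f"gap:                     {(used - rss) / 1024:10.1f} MB")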
2011 Nov 10
13
dom0 - oom-killer - memory leak somewhere ?
Hello,
I work at a hosting company; we have tens of Xen dom0s running just fine,
but unfortunately we do have a few that get out of control.
Reported behaviour:
- dom0 uses more and more memory
- no process can be found using that memory
- at some point, the oom-killer kicks in and kills everything, until even
SSHing into the box becomes hard
- when there is really no process left to kill, it crashes
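When no process accounts for the usage, the usual next step is to look at kernel-side allocations, as in the slab thread at the top of this listing. A small sketch (Python; the keys are standard /proc/meminfo fields, not taken from this report) that prints the kernel-memory counters worth watching on such a dom0:

#!/usr/bin/env python3
# Print the kernel-side memory counters from /proc/meminfo; if these grow
# while per-process RSS stays flat, the "missing" memory is in the kernel
# (slab caches, page tables, vmalloc), not in userspace.

with open("/proc/meminfo") as f:
    info = {k: int(v.split()[0]) for k, v in (line.split(":", 1) for line in f)}

for key in ("Slab", "SReclaimable", "SUnreclaim", "PageTables", "VmallocUsed"):
    if key in info:
        print(f"{key:<14} {info[key] / 1024:8.1f} MB")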