search for: nfs_inode_cache

Displaying 4 results from an estimated 4 matches for "nfs_inode_cache".

2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
...195 60 65 1 : tunables 120 60 8 : slabdata 3 3 0
nfs_write_data 36 42 512 7 1 : tunables 54 27 8 : slabdata 6 6 0
nfs_read_data 32 35 512 7 1 : tunables 54 27 8 : slabdata 5 5 0
nfs_inode_cache 0 0 580 7 1 : tunables 54 27 8 : slabdata 0 0 0
nfs_page 0 0 64 61 1 : tunables 120 60 8 : slabdata 0 0 0
isofs_inode_cache 0 0 348 11 1 : tunables 54 27 8 : slabdata 0...
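[The excerpt above is /proc/slabinfo in its version 2.x layout: each row is <cache name> <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>, followed by the tunables and slabdata fields. To see which cache accounts for growth like the 128MB/day reported here, multiply num_objs by objsize per cache. A minimal sketch of that calculation, assuming the slabinfo 2.x layout shown above; this script is illustrative and not part of the original thread:

    # Rank slab caches by approximate memory footprint.
    # Assumes the /proc/slabinfo 2.x layout:
    #   name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables ... : slabdata ...
    # (/proc/slabinfo is usually readable only by root.)
    def slab_usage(path="/proc/slabinfo"):
        usage = {}
        with open(path) as f:
            for line in f:
                if line.startswith(("slabinfo", "#")):
                    continue  # skip the version line and the column header
                fields = line.split()
                name, num_objs, objsize = fields[0], int(fields[2]), int(fields[3])
                usage[name] = num_objs * objsize  # approximate bytes held by this cache
        return usage

    if __name__ == "__main__":
        for name, nbytes in sorted(slab_usage().items(), key=lambda kv: -kv[1])[:10]:
            print(f"{name:30s} {nbytes / 2**20:8.1f} MiB")

For scale: at the 580 bytes/object shown for nfs_inode_cache above, roughly 230,000 objects would account for 128MB of growth.]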
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
...0 0 96 40
crq_pool 0 0 44 84
deadline_drq 0 0 48 78
as_arq 1197 1260 60 63
nfs_write_data 36 36 448 9
nfs_read_data 32 36 448 9
nfs_inode_cache 0 0 560 7
nfs_page 0 0 64 59
isofs_inode_cache 0 0 340 11
ext2_inode_cache 0 0 420 9
dnotify_cache 0 0 20 169
eventpoll_pwq 0 0 36 1...
2012 Nov 15
3
Likely mem leak in 3.7
Starting with 3.7 rc1, my workstation seems to lose RAM. Up until (and including) 3.6, used-(buffers+cached) was roughly the same as sum(rss) (taking shared into account). Now there is an approx 6G gap. When the box first starts, it is clearly less swappy than with <= 3.6; I can't tell whether that is related. The reduced swappiness persists. It seems to get worse when I update
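[The gap the poster describes can be measured directly: compare MemTotal - MemFree - Buffers - Cached from /proc/meminfo against the summed VmRSS of all processes. Summing RSS double-counts shared pages, so sum(rss) overstates process memory; if "used" is still far above it, the memory is held by the kernel (e.g. slab) rather than by user processes. A minimal sketch of that check, not from the thread, with hypothetical helper names:

    # Compare kernel-reported "used" memory (minus buffers/cache)
    # against the summed RSS of all processes.
    import glob

    def meminfo_kb():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.split()[0])  # values are reported in kB
        return info

    def total_rss_kb():
        total = 0
        for status in glob.glob("/proc/[0-9]*/status"):
            try:
                with open(status) as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            total += int(line.split()[1])
                            break
            except OSError:
                pass  # process exited while we were scanning
        return total

    if __name__ == "__main__":
        mi = meminfo_kb()
        used = mi["MemTotal"] - mi["MemFree"] - mi["Buffers"] - mi["Cached"]
        rss = total_rss_kb()
        print(f"used-(buffers+cached): {used / 1024:.0f} MiB")
        print(f"sum(rss):              {rss / 1024:.0f} MiB")
        print(f"gap:                   {(used - rss) / 1024:.0f} MiB")

A persistent multi-gigabyte gap, like the ~6G reported here, would then point at the kernel rather than userspace.]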
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our Lustre 1.8.1 clients. When they submit a single job at a time, the run time is about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a single node with 192GB of memory, the run time for each job exceeded 3-4X the run time of the single process. They also noticed that the swap space