search for: fs_cache

Displaying 7 results from an estimated 7 matches for "fs_cache".

2011 Sep 01
1
No buffer space available - loses network connectivity
…% 0.06K  11  59   44K pid
600 227  37% 0.09K  15  40   60K journal_head
590 298  50% 0.06K  10  59   40K delayacct_cache
496 424  85% 0.50K  62   8  248K size-512
413 156  37% 0.06K   7  59   28K fs_cache
404  44  10% 0.02K   2 202    8K biovec-1
390 293  75% 0.12K  13  30   52K bio
327 327 100% 4.00K 327   1 1308K size-4096
320 190  59% 0.38K  32  10  128K ip_dst_cache
308 227  73% 0.50K  44   7…
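The excerpt above is `slabtop` output, where each row lists object counts, object size, slab counts, the total cache size, and the cache name. A minimal sketch of how such rows can be post-processed with awk, assuming that column layout (the sample rows are copied from the excerpt; on a live machine you would pipe in `slabtop -o` instead):

```shell
# Sample rows in slabtop's layout:
# OBJS ACTIVE USE OBJ-SIZE SLABS OBJ/SLAB CACHE-SIZE NAME
slab_sample='600 227 37% 0.09K 15 40 60K journal_head
413 156 37% 0.06K 7 59 28K fs_cache
327 327 100% 4.00K 327 1 1308K size-4096'

# Sum the total cache size (column 7, in KiB) and report the largest consumer.
printf '%s\n' "$slab_sample" | awk '
  { sz = $7; sub(/K$/, "", sz); total += sz
    if (sz + 0 > max) { max = sz + 0; name = $8 } }
  END { printf "total %dK, largest: %s (%dK)\n", total, name, max }'
# → total 1396K, largest: size-4096 (1308K)
```

This kind of one-liner makes it easy to spot which cache dominates when a snapshot like the one above is all you have.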
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
…filp            2734   2820  192  20 1
names_cache        25     25 4096   1 1
idr_layer_cache   204    232  136  29 1
buffer_head    456669 459936   52  72 1
mm_struct         109    126  448   9 1
vm_area_struct   5010   5632   88  44 1
fs_cache          109    177   64  59 1
files_cache        94    135  448   9 1
signal_cache      159    160  384  10 1
sighand_cache     147    147 1344   3 1
task_struct       175    175 1376   5 2
anon_vma         2355   2540   12 254 1
pgd…
2011 Sep 01
0
No buffer space available - loses network connectivity
…45% 0.06K  11  59   44K pid
600 227  37% 0.09K  15  40   60K journal_head
590 298  50% 0.06K  10  59   40K delayacct_cache
496 424  85% 0.50K  62   8  248K size-512
413 156  37% 0.06K   7  59   28K fs_cache
404  44  10% 0.02K   2 202    8K biovec-1
390 293  75% 0.12K  13  30   52K bio
327 327 100% 4.00K 327   1 1308K size-4096
320 190  59% 0.38K  32  10  128K ip_dst_cache
308 227  73% 0.50K  44   7…
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
…18954 48 81 1 : tunables 120 60 8 : slabdata 234 234 0
mm_struct        60   60 640  6 1 : tunables  54 27 8 : slabdata 10 10 0
vm_area_struct 1266 1530  88 45 1 : tunables 120 60 8 : slabdata 34 34 0
fs_cache         50  122  64 61 1 : tunables 120 60 8 : slabdata  2  2 0
files_cache      51   63 512  7 1 : tunables  54 27 8 : slabdata  9  9 0
signal_cache    101  120 384 10 1 : tunables  54 27 8 : slabdata 1…
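This excerpt uses the 2.6-era `/proc/slabinfo` layout: name, active_objs, num_objs, objsize, objperslab, pagesperslab, then the tunables and slabdata groups, where the second slabdata value is the number of allocated slabs. A minimal sketch of estimating each cache's memory footprint from such lines, assuming that layout and a 4 KiB page size (the sample rows are taken from the excerpt):

```shell
# Two rows in the 2.6 /proc/slabinfo layout:
# name active num objsize objperslab pagesperslab : tunables ... : slabdata active_slabs num_slabs sharedavail
slabinfo_sample='fs_cache 50 122 64 61 1 : tunables 120 60 8 : slabdata 2 2 0
files_cache 51 63 512 7 1 : tunables 54 27 8 : slabdata 9 9 0'

# Footprint ≈ num_slabs ($15) * pagesperslab ($6) * 4 KiB per page.
printf '%s\n' "$slabinfo_sample" | awk '
  { kib = $15 * $6 * 4
    printf "%s %dK\n", $1, kib }'
# → fs_cache 8K
#   files_cache 36K
```

Summing that product over all rows is a quick way to see where the daily slab growth reported in this thread is actually going.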
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
…2 2 4096 1
idr_layer_cache  209   232  136  29
Cache            Num Total Size Pages
buffer_head    25038 28938   48  78
mm_struct         87    90  448   9
vm_area_struct  1782  2478   92  42
fs_cache          84   565   32 113
files_cache       85   135  448   9
signal_cache     193   200  384  10
sighand_cache    191   204 1344   3
task_struct      208   303 1280   3
anon_vma         751  2034…
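In this older "Cache Num Total Size Pages" layout, Num is the count of objects in use and Total the count allocated, so a large gap between them points at slab bloat of the kind that can precede an OOM. A minimal sketch of flagging such caches, assuming that layout (sample rows copied from the excerpt; the 50% threshold is an arbitrary illustration, not a kernel heuristic):

```shell
# Rows in the "Cache Num Total Size Pages" layout shown above.
cache_sample='fs_cache 84 565 32 113
files_cache 85 135 448 9
task_struct 208 303 1280 3'

# Flag caches where more than half of the allocated objects sit idle.
printf '%s\n' "$cache_sample" | awk '
  { idle = $3 - $2
    pct = 100 * idle / $3
    if (pct > 50) printf "%s: %d of %d objects idle (%.0f%%)\n", $1, idle, $3, pct }'
# → fs_cache: 481 of 565 objects idle (85%)
```

In the excerpt, fs_cache stands out this way: only 84 of 565 allocated objects are in use.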
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our Lustre 1.8.1 clients. When they submit a single job at a time, the run time is about 4.5 minutes. However, when they run multiple jobs (10 or fewer) on a client with 192GB of memory on a single node, the run time for each job exceeds 3-4x the run time of the single process. They also noticed that the swap space…
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an OCFS2 filesystem which is mounted by 2 nodes. OS = RH5ES, OCFS2 = 1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers, and I run "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 while at the same time running a "du -hs /mnt/test.a", it takes about 5 seconds for du -hs to execute: 270M
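The test described in this post can be sketched as a small script. This is a scaled-down, single-host approximation (the original wrote ~1 GB onto a shared OCFS2 volume with the reader on a second node; here FILE is a placeholder under /tmp and the write is only ~10 MB so the sketch runs anywhere):

```shell
# Placeholder path; on the real setup this would sit on the shared LV (/mnt/test.a).
FILE="${TMPDIR:-/tmp}/ocfs2_du_probe.$$"
: > "$FILE"                      # create the file so du never races with dd

# Writer (server 1 in the report): stream zeros into the file in the background.
dd if=/dev/zero of="$FILE" bs=1024 count=10000 2>/dev/null &
dd_pid=$!

# Reader (server 2 in the report): time du while the write is still in flight.
time du -hs "$FILE"

wait "$dd_pid"
rm -f "$FILE"
```

On a clustered filesystem the interesting number is how much longer the timed `du` takes under a concurrent write than against an idle file.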