Displaying 7 results from an estimated 7 matches for "bdev_cache".
2007 Feb 15 · 2 · Re: [Linux-HA] OCFS2 - Memory hog?
...acpi_state 0 0 48 78 1
delayacct_cache 183 390 48 78 1
taskstats_cache 9 32 236 16 1
proc_inode_cache 49 170 372 10 1
sigqueue 96 135 144 27 1
radix_tree_node 16046 16786 276 14 1
bdev_cache 56 56 512 7 1
sysfs_dir_cache 4831 4876 40 92 1
mnt_cache 30 60 128 30 1
inode_cache 1041 1276 356 11 1
dentry_cache 11588 13688 132 29 1
filp 2734 2820 192 20 1
names_cach...
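The columns in the slabinfo excerpt above are: cache name, active objects, total objects, object size in bytes, objects per slab, and pages per slab. A rough per-cache footprint is total objects times object size; for the bdev_cache row, 56 * 512 bytes is about 28 KiB. The sketch below is a minimal Python reading of /proc/slabinfo under that column-order assumption; it is an illustration, not part of any of the threads, and slabtop(1) shows essentially the same figures interactively.

# Estimate the memory footprint of each slab cache, assuming the column
# order shown above: name, active_objs, num_objs, objsize, objperslab,
# pagesperslab. Slab overhead and partially filled slabs are ignored.
with open("/proc/slabinfo") as f:
    for line in f:
        fields = line.split()
        if len(fields) < 6 or not fields[2].isdigit():
            continue                      # skip the version and header lines
        name, num_objs, objsize = fields[0], int(fields[2]), int(fields[3])
        kib = num_objs * objsize // 1024
        if kib >= 1024:                   # only report caches above ~1 MiB
            print(f"{name:<24} {kib:8d} KiB")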
2006 Apr 09 · 0 · Slab memory usage on dom0 increases by 128MB/day
...456 336 12 1 : tunables 54 27 8 : slabdata 38 38 0
sigqueue 27 27 148 27 1 : tunables 120 60 8 : slabdata 1 1 0
radix_tree_node 3508 3528 276 14 1 : tunables 54 27 8 : slabdata 252 252 0
bdev_cache 26 28 512 7 1 : tunables 54 27 8 : slabdata 4 4 0
sysfs_dir_cache 3389 3424 36 107 1 : tunables 120 60 8 : slabdata 32 32 0
mnt_cache 16 31 128 31 1 : tunables 120 60 8 : slabdata 1...
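The thread above is about slab usage creeping up by roughly 128MB per day, and the usual way to pin that down is to take two /proc/slabinfo snapshots some time apart and see which cache accounts for the growth. A minimal sketch under the same column assumptions as the previous example; the 600-second interval is arbitrary:

# Diff two /proc/slabinfo snapshots and report the fastest-growing caches.
# Per-cache footprint is approximated as num_objs * objsize, as above.
import time

def snapshot():
    sizes = {}
    with open("/proc/slabinfo") as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6 or not fields[2].isdigit():
                continue
            sizes[fields[0]] = int(fields[2]) * int(fields[3])
    return sizes

before = snapshot()
time.sleep(600)                           # sampling interval: pick anything
after = snapshot()
deltas = sorted(((after[n] - before.get(n, 0), n) for n in after), reverse=True)
for delta, name in deltas[:10]:
    print(f"{name:<24} {delta // 1024:8d} KiB growth")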
2013 Nov 19 · 5 · xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
...255 255 80 51
Acpi-Namespace 1428 1428 40 102
task_delay_info 1704 1704 168 24
taskstats 144 144 328 24
proc_inode_cache 1632 1664 976 16
sigqueue 250 250 160 25
bdev_cache 192 192 1344 24
sysfs_dir_cache 23817 23968 144 28
filp 1807 2400 320 25
inode_cache 1861 2244 912 17
dentry 27252 29440 248 16
buffer_head 67283 102063 104...
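The failure in that subject line is an order:4 allocation, i.e. a request for 2^4 physically contiguous pages. Assuming the usual 4 KiB page size, that is a single 64 KiB chunk, which can fail under memory fragmentation even while plenty of memory remains free in smaller pieces:

# An order-n allocation asks for 2**n physically contiguous pages.
PAGE_SIZE = 4096                 # assumed 4 KiB; check with `getconf PAGESIZE`
order = 4
print(f"order:{order} = {(PAGE_SIZE << order) // 1024} KiB contiguous")   # 64 KiB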
2007 Aug 05 · 3 · OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
...0 0 44 84
acpi_parse 0 0 28 127
acpi_state 0 0 48 78
proc_inode_cache 17 48 328 12
sigqueue 54 54 144 27
radix_tree_node 810 1456 276 14
bdev_cache 82 90 448 9
sysfs_dir_cache 13325 13340 40 92
mnt_cache 21 30 128 30
inode_cache 2416 2976 312 12
dentry_cache 4407 12989 124 31
filp 710 1060 192...
2012 Nov 15 · 3 · Likely mem leak in 3.7
Starting with 3.7 rc1, my workstation seems to lose RAM.
Up until (and including) 3.6, used-(buffers+cached) was roughly the same
as sum(rss) (taking shared into account). Now there is an approx 6G gap.
When the box first starts, it is clearly less swappy than with <= 3.6; I
can't tell whether that is related. The reduced swappiness persists.
It seems to get worse when I update
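The comparison described in the post above, used memory minus buffers and cache versus the summed RSS of all processes, can be reproduced from /proc. A persistent multi-gigabyte gap suggests memory held outside process address spaces, e.g. by the kernel. A rough sketch, assuming 4 KiB pages for the statm conversion and accepting that shared pages make sum(RSS) an overestimate rather than an underestimate:

# Compare used-(buffers+cached) from /proc/meminfo with the sum of
# per-process RSS from /proc/<pid>/statm (second field, in pages).
import glob

def meminfo_kib(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])
    return 0

used_kib = (meminfo_kib("MemTotal") - meminfo_kib("MemFree")
            - meminfo_kib("Buffers") - meminfo_kib("Cached"))

rss_kib = 0
for path in glob.glob("/proc/[0-9]*/statm"):
    try:
        with open(path) as f:
            rss_kib += int(f.read().split()[1]) * 4   # pages -> KiB, 4 KiB pages assumed
    except (OSError, ValueError):
        pass

print(f"used - (buffers+cached): {used_kib // 1024} MiB")
print(f"sum(rss):                {rss_kib // 1024} MiB")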
2010 Apr 19 · 20 · Lustre Client - Memory Issue
Hi Guys,
My users are reporting some memory issues on our Lustre 1.8.1 clients.
When they submit a single job at a time, the run time is about
4.5 minutes. However, when they run multiple jobs (10 or fewer) on a
client with 192GB of memory on a single node, the run time for each job
exceeds 3-4X the run time of the single job. They also noticed that
the swap space
2013 Apr 19 · 14 · [GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and