Displaying 5 results from an estimated 5 matches for "arp_cache".
2007 Feb 15 (2 replies): Re: [Linux-HA] OCFS2 - Memory hog?
...ip_mrt_cache        0     0   128   30   1
tcp_bind_bucket      14   203    16  203   1
inet_peer_cache      81   118    64   59   1
secpath_cache         0     0   128   30   1
xfrm_dst_cache        0     0   384   10   1
ip_dst_cache        176   240   256   15   1
arp_cache             6    30   256   15   1
RAW                   3     7   512    7   1
UDP                  29    42   512    7   1
tw_sock_TCP           0     0   128   30   1
request_sock_TCP      0     0    64   59   1
TCP                  19    35  1152    7   2
flow_cach...
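For dumps in this 2.6-era format, the columns are: cache name, active objects, total objects, object size in bytes, objects per slab, and pages per slab. A rough per-cache memory figure is total objects times object size. A minimal sketch, using two rows copied from the dump above (the KB arithmetic is mine, not from the post):

```shell
# Estimate per-cache memory as num_objs * objsize.
# Column layout assumed: name active_objs num_objs objsize objs_per_slab pages_per_slab
awk '{ printf "%-16s %6.1f KB\n", $1, $3 * $4 / 1024 }' <<'EOF'
ip_dst_cache 176 240 256 15 1
arp_cache 6 30 256 15 1
EOF
# ip_dst_cache: 240 objects * 256 B = 60.0 KB; arp_cache: 30 * 256 B = 7.5 KB
```

On a live box you would feed /proc/slabinfo itself instead of the here-document.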
2005 Jun 03 (0 replies): Triple /proc/net/stat/ip_conntrack files
...at]# uname -a
Linux tcs 2.6.9-5.0.5.ELsmp #1 SMP Wed Apr 20 00:16:40 BST 2005 i686
i686 i386 GNU/Linux
[root at tcs stat]# pwd
/proc/net/stat
[root at tcs stat]# ls -al
total 0
dr-xr-xr-x 2 root root 0 Jun 3 18:51 .
dr-xr-xr-x 5 root root 0 May 31 23:12 ..
-r--r--r-- 1 root root 0 Jun 3 18:51 arp_cache
-r--r--r-- 1 root root 0 Jun 3 18:51 ip_conntrack
-r--r--r-- 1 root root 0 Jun 3 18:51 ip_conntrack
-r--r--r-- 1 root root 0 Jun 3 18:51 ip_conntrack
-r--r--r-- 1 root root 0 Jun 3 18:51 ndisc_cache
-r--r--r-- 1 root root 0 Jun 3 18:51 rt_cache
Anyone getting the above triplication?
Che...
2006 Apr 09 (0 replies): Slab memory usage on dom0 increases by 128MB/day
...226 16 226 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 16 119 32 119 1 : tunables 120 60 8 : slabdata 1 1 0
ip_dst_cache 451 720 256 15 1 : tunables 120 60 8 : slabdata 48 48 0
arp_cache 11 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0
RAW 3 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
UDP 6 7 512 7 1 : tunables 54 27 8 : slabdata 1...
2007 Aug 05 (3 replies): OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
...0                  0     0   128   30
tcp_bind_bucket      49   203    16  203
inet_peer_cache       1    59    64   59
secpath_cache         0     0   128   30
xfrm_dst_cache        0     0   320   12
ip_dst_cache         15    30   256   15
arp_cache             7    30   128   30
RAW                   2     9   448    9
UDP                   5     9   448    9
tw_sock_TCP           3    30   128   30
request_sock_TCP      0     0    64   59
Cache Num Total Siz...
2010 Apr 19 (20 replies): Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our Lustre 1.8.1 clients.
It looks like when they submitted a single job at a time, the run time was
about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a
client with 192GB of memory on a single node, each job's run time exceeded
3-4X the single-job run time. They also noticed that the swap space