Displaying 4 results from an estimated 4 matches for "45712".
2010 Oct 20
0
Increased memory usage between 4.8 and 5.5
...s according to this page:
http://www.greenend.org.uk/rjk/2009/dataseg.html
Another oddity I found that was only in CentOS 5.5 was a lot of heap
memory being marked as Private_Dirty according to /proc/<pid>/smaps:
2b37c9826000-2b37cc4ca000 rw-p 2b37c9826000 00:00 0 [heap]
Size: 45712 kB
Rss: 45424 kB
Shared_Clean: 0 kB
Shared_Dirty: 1580 kB
Private_Clean: 0 kB
Private_Dirty: 43844 kB
Swap: 0 kB
Pss: 43913 kB
The highest amount of Private_Dirty showing up on 4.8 is just 1628 kB.
Does anyone have any ideas on why...
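The per-mapping Private_Dirty fields quoted above can be totalled programmatically to compare the two kernels. A minimal Python sketch (the function name and sample text are illustrative, not from the original post) that sums every Private_Dirty field in a /proc/&lt;pid&gt;/smaps dump:

```python
def total_private_dirty_kb(smaps_text):
    """Sum all Private_Dirty fields (kB) across mappings in smaps output."""
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Private_Dirty:"):
            # Field format: "Private_Dirty:   43844 kB"
            total += int(line.split()[1])
    return total

# Sample fragment based on the heap mapping quoted above.
sample = """\
2b37c9826000-2b37cc4ca000 rw-p 2b37c9826000 00:00 0 [heap]
Size:          45712 kB
Rss:           45424 kB
Shared_Clean:      0 kB
Shared_Dirty:   1580 kB
Private_Clean:     0 kB
Private_Dirty: 43844 kB
"""
print(total_private_dirty_kb(sample))  # 43844
```

Run against the live smaps files of the same process on 4.8 and 5.5, this gives a single number per kernel to compare.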
2012 Nov 21
1
Listing elements of a 4D array
Dear list,
I'm having trouble seeing how the elements of my 4-dimensional array are
listed.
For example, I generated the following array:
junk.melt=melt(occ.data,id.var=c("Especie", "Site", "Rep", "Año"),
measure.var="Pres")
y=cast(junk.melt, Site ~ Rep ~ Especie ~ Año)
Now, I want to be able to look at how my species (Especie) are listed, in
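In R, `dimnames(y)` should show the level labels for each of the four cast dimensions (Site, Rep, Especie, Año), with the third component giving the species ordering. A rough Python analogue of that inspection, using a nested dict keyed in the same dimension order (all names and values here are invented for illustration):

```python
# Hypothetical presence/absence data indexed [site][rep][species][year],
# mirroring the Site ~ Rep ~ Especie ~ Año cast above.
sites = ["S1", "S2"]
reps = ["R1", "R2"]
species = ["sp.a", "sp.b", "sp.c"]
years = [2010, 2011]

occ = {
    s: {r: {sp: {y: 0 for y in years} for sp in species} for r in reps}
    for s in sites
}

# The third index level corresponds to species, so its ordering is:
species_order = list(occ["S1"]["R1"].keys())
print(species_order)  # ['sp.a', 'sp.b', 'sp.c']
```

The point is the same in either language: the ordering of a dimension's labels is a property of the array's index structure, and can be read off directly rather than guessed from printed output.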
2013 Nov 23
1
Maildir issue.
We brought up a test cluster to investigate GlusterFS.
Using the Quick Start instructions, we brought up a two-server, one-brick
replicated setup and mounted it from a third box with the FUSE mount
(all version 3.4.1)
# gluster volume info
Volume Name: mailtest
Type: Replicate
Volume ID: 9e412774-b8c9-4135-b7fb-bc0dd298d06a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
2006 Oct 31
9
Problems with mongrel dying
...us, 0.0% sy, 0.0% ni, 100.0% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 262316k total, 239700k used, 22616k free, 3412k buffers
Swap: 0k total, 0k used, 0k free, 88320k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18001 root 16 0 45712 39m 2584 S 0.0 15.6 0:20.79
mongrel_rails
18004 root 16 0 43624 38m 2524 S 0.0 15.0 0:25.93
mongrel_rails
2632 mysql 16 0 109m 27m 3100 S 0.0 10.8 5:37.37 mysqld
After the restart, memory usage rapidly approaches above values while
the application runs normally.
* Yester...
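Note that top(1) mixes units in the memory columns above: bare numbers such as the mongrel VIRT (45712) are kB, while suffixed values like "39m" and "109m" are megabytes. A small Python helper (the name is invented here) to normalize such fields to kB for comparison:

```python
def top_field_to_kb(field):
    """Convert a top(1) memory field ('45712', '39m', '1.5g') to kB."""
    field = field.strip().lower()
    if field.endswith("g"):
        return int(float(field[:-1]) * 1024 * 1024)
    if field.endswith("m"):
        return int(float(field[:-1]) * 1024)
    return int(field)  # bare numbers are already kB

# The first mongrel_rails row above: VIRT in kB, RES with an 'm' suffix.
print(top_field_to_kb("45712"))  # 45712
print(top_field_to_kb("39m"))    # 39936
```

With the fields normalized, the ~39 MB resident size against 256 MB total memory matches the ~15.6 %MEM top reports for each mongrel_rails process.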