search for: 207m

Displaying 5 results from an estimated 5 matches for "207m".

2010 Nov 04
1
orphan inodes deleted issue
...            35M  8.3G   8% /
/dev/md7  38G  6.7G   30G  19% /var
/dev/md6  15G  4.5G  9.1G  33% /usr
/dev/md5 103G   45G   54G  46% /backup
/dev/md3 284G   42G  228G  16% /home
/dev/md2 2.0G  214M  1.7G  12% /tmp
/dev/md0 243M   24M  207M  11% /boot

I've been searching on Google but I can't find an explanation for this problem. Is it a bug? :D Thank you very much :D

--
Best regards,
David
http://blog.pnyet.web.id
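The `df -h` listing above can be checked mechanically. As an illustrative sketch (not part of the original thread; the threshold and helper names are my own), a small Python script that parses `df -h`-style lines and flags filesystems above a usage percentage:

```python
# Sketch (not from the thread): parse df -h style lines and flag
# filesystems whose Use% exceeds a threshold.

def parse_df(lines):
    """Parse lines like '/dev/md0 243M 24M 207M 11% /boot' into dicts."""
    rows = []
    for line in lines:
        parts = line.split()
        if len(parts) != 6:
            continue  # skip headers or truncated lines
        fs, size, used, avail, pct, mount = parts
        rows.append({
            "filesystem": fs,
            "use_pct": int(pct.rstrip("%")),
            "mount": mount,
        })
    return rows

def over_threshold(rows, pct=80):
    """Return mount points whose usage exceeds pct percent."""
    return [r["mount"] for r in rows if r["use_pct"] > pct]

sample = [
    "/dev/md2 2.0G 214M 1.7G 12% /tmp",
    "/dev/md0 243M 24M 207M 11% /boot",
    "/dev/md5 103G 45G 54G 46% /backup",
]
print(over_threshold(parse_df(sample), pct=40))  # → ['/backup']
```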
2002 Oct 28
1
R-1.6.0 crashing on RedHat6.3
Hi, I'm having what I believe are OS problems with my installation of R-1.6.0. The machine I'm using was previously running R-1.4.0. The install seemed fine to me: I didn't note any error messages in 'make' or 'make check'. My problem is that I've been running some memory-intensive code, and code that ran under 1.4.0 won't run under R-1.6.0 (or under
2002 Oct 29
0
PCA with n << p (was R-1.6.0 crashing on RedHat6.3)
> > ...m with about 0.75 GB and got an out of memory error. A 144x2500
> > problem is currently running in
> >
> >   PID USER PRI NI SIZE RSS  SHARE STAT %CPU %MEM TIME COMMAND
> > 16111 pd    17  0 207M 163M 18536 R    99.8 65.8 1:58 R.bin
> >
> > and seems to be staying there....
>
> Yep, it's 144x5300. The machine has 2GB of RAM, and this uses about 1.5GB.

Hmm. My half-siz...
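The sizes quoted in this thread line up with simple back-of-the-envelope arithmetic. As a sketch (the assumption that the memory goes to a dense p x p intermediate is mine, not stated in the excerpt), with n << p the n x p data matrix itself is tiny, so the bulk of the memory must go to p x p intermediates such as a covariance or distance matrix:

```python
# Back-of-the-envelope memory arithmetic for an n << p PCA problem.
# Assumption (not from the thread): numeric data is stored as 8-byte
# doubles, and the covariance route builds a dense p x p matrix.

BYTES_PER_DOUBLE = 8

def matrix_mib(rows, cols):
    """MiB needed for a dense rows x cols matrix of doubles."""
    return rows * cols * BYTES_PER_DOUBLE / 2**20

# The 144 x 5300 data matrix itself is tiny:
print(round(matrix_mib(144, 5300), 1))    # → 5.8

# but a dense 5300 x 5300 intermediate is not:
print(round(matrix_mib(5300, 5300), 1))   # → 214.3

# With n << p, working via the n x n cross-product instead stays small:
print(round(matrix_mib(144, 144), 3))     # → 0.158
```

This is why the standard n << p trick is to diagonalize the small n x n matrix X X^T rather than the p x p matrix X^T X.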
2002 Jul 25
0
scp hangs
...
[pid 1438] rt_sigaction(SIGQUIT, {0x80538a4, [], SA_RESTART|0x4000000}, {SIG_DFL}, 8) = 0
[pid 1438] rt_sigaction(SIGTERM, {0x80538a4, [], SA_RESTART|0x4000000}, {SIG_DFL}, 8) = 0
[pid 1438] select(7, [3], [3], NULL, NULL) = 1 (out [3])
[pid 1438] write(3, "\234\231\277\177\16\315\317\207m\306\3522F\203<\275w\326"..., 64) = 64
[pid 1438] select(7, [3], [], NULL, NULL) = 1 (in [3])
[pid 1438] read(3, "\353\246&.\207\10\276\333K\203\325\267N\207\247\266\212"..., 8192) = 48
[pid 1438] getsockname(3, {sin_family=AF_INET, sin_port=htons(2019), sin_addr=inet_addr...
2013 Jun 13
4
puppet: 3.1.1 -> 3.2.1 load increase
Hi, I recently updated from puppet 3.1.1 to 3.2.1 and noticed quite a bit of increased load on the puppetmaster machine. I'm using the Apache/passenger/rack way of puppetmastering. The main symptom is higher load on the puppetmaster machine (8 cores):
- 3.1.1: around 4
- 3.2.1: around 9-10
Any idea why there's more load on the machine with 3.2.1?