Displaying 7 results from an estimated 7 matches for "20000k".
2000 Nov 09
3 replies
maximum of nsize=20000k ??
Dear R-ers,
somehow it is not possible to increase nsize to more than
20000k. When I specify e.g.
> R --vsize=10M --nsize=21000K
the result is:
          free   total (Mb)
Ncells   99658  350000  6.7
Vcells 1219173 1310720 10.0
Maybe I have overlooked something...
Marcus
--
+-------------------------------------------------------
| Marcus Eger
| E-Mail: eger.m at...
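The gc() table above can be sanity-checked with a little arithmetic. This is a sketch; the 20-byte cons-cell and 8-byte vector-cell sizes are assumptions inferred from the (Mb) column of that table, not taken from R documentation:

```shell
# Assumed cell sizes (inferred from the (Mb) column above):
# 350000 Ncells * 20 bytes ~= 6.7 Mb, and 1310720 Vcells * 8 bytes
# is exactly 10 Mb, matching --vsize=10M. The Ncells total, however,
# is far below the 21000K cells requested via --nsize.
ncells=350000
vcells=1310720
echo "Ncells heap: $((ncells * 20 / 1024)) Kb"
echo "Vcells heap: $((vcells * 8 / 1024 / 1024)) Mb"
```

So the vector heap honoured the flag while the cons-cell count did not, which is consistent with Marcus's report of a cap somewhere around 20000k.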
2001 Mar 12
4 replies
1.2.2 under M$ Windows 2000: lots of plots, out of memory?
...e(prompt="enter for nextplot")
dev.off()
}
~
the check print gets to 256MB and R fails; I think I am freeing everything with the rm(...) and dev.off().
Am I failing to free something, or can I force R to do garbage collection?
I start R with
P:\r\base\rw1022\bin\rgui.exe --vsize=300M --nsize=20000k
and turn off buffered output.
I may have a hosed up windows 2000 installation as hibernating with R running on my acer 350 gives the
blue screen of death (The only blue screen of death I've ever had on the updated 2000)
thanx
Rober L Sandefur
Principal Geostatistician
Pincock Allen & Holt...
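One way to sidestep leaks like this, sketched here with modern tooling that did not exist in the rw1022 era: render each batch of plots in its own short-lived process, so the operating system reclaims everything on exit instead of relying on rm() and gc() inside one long-lived Rgui session. `plot_batch.r` is a hypothetical script name.

```shell
# Hypothetical driver: one fresh R process per batch of plots.
# echo is used so the sketch runs even without R installed; drop it
# to actually invoke Rscript on the (assumed) plot_batch.r script.
for batch in 1 2 3; do
  echo "Rscript plot_batch.r $batch"
done
```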
2016 May 27
0 replies
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 05/25/2016 09:54 AM, Kelly Lesperance wrote:
> What we're seeing is that when the weekly raid-check script executes, performance nose dives, and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/Sec. dev.raid.speed sysctls are at the defaults:
It looks like some pretty heavy writes are going on at the time. I'm not
sure what you mean by "nose dives", but I'd expect *some* performance
impac...
2012 Apr 20
1 reply
quota not being calculated
...h/.Spam:INDEX=/var/spool/mail/dovecot-control/indexes/%1u/%2u/%u/.Spam:CONTROL=/var/spool/mail/dovecot-control/%1u/%2u/%u/.Spam
  prefix = Spam/
  separator = /
  subscriptions = no
  type = private
}
plugin {
  quota = fs:User quota
  quota2 = maildir:Spam quota:ns=Spam/
  quota2_rule = *:storage=20000K
  sieve = /var/spool/mail/dovecot-control/sieve/%1u/%2u/%u/dovecot.sieve
  sieve_before = /etc/sieve/before
  sieve_dir = /var/spool/mail/dovecot-control/sieve/%1u/%2u/%u/scripts
  trash = /etc/dovecot/conf.d/dovecot-trash.conf.ext
}
(full config: http://pastebin.com/Mui4X7Zh)
2016 May 25
6 replies
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
...CentOS 6.5, with Kafka 0.8.1, without issue. We recently upgraded to CentOS 7.2 and Kafka 0.9, and that's when the trouble started.
What we're seeing is that when the weekly raid-check script executes, performance nose dives, and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/Sec. dev.raid.speed sysctls are at the defaults:
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
Here's 10 seconds of iostat output, which illustrates the issue:
[root at r1k1log] # iostat 1 1...
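If the check itself is being throttled rather than merely competing with writes, the quoted sysctls are the usual knob. A sketch only: the file name is an assumption, and raising speed_limit_min trades application I/O latency for check speed.

```
# /etc/sysctl.d/90-raid-check.conf  (hypothetical file name)
# Defaults quoted above: min = 1000, max = 200000 (KB/s per device).
dev.raid.speed_limit_min = 20000
dev.raid.speed_limit_max = 200000
```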
2016 May 27
2 replies
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
...s-bounces at centos.org on behalf of gordon.messmer at gmail.com> wrote:
>On 05/25/2016 09:54 AM, Kelly Lesperance wrote:
>> What we're seeing is that when the weekly raid-check script executes, performance nose dives, and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/Sec. dev.raid.speed sysctls are at the defaults:
>
>It looks like some pretty heavy writes are going on at the time. I'm not
>sure what you mean by "nose dives", but I'd expect *some* perfor...
2016 Jun 01
0 replies
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
...centos.org on behalf of gordon.messmer at gmail.com> wrote:
>
>>On 05/25/2016 09:54 AM, Kelly Lesperance wrote:
>>> What we're seeing is that when the weekly raid-check script executes, performance nose dives, and I/O wait skyrockets. The raid check starts out fairly fast (20000K/sec - the limit that's been set), but then quickly drops down to about 4000K/Sec. dev.raid.speed sysctls are at the defaults:
>>
>>It looks like some pretty heavy writes are going on at the time. I'm not
>>sure what you mean by "nose dives", but I'd expect *...