Displaying 7 results from an estimated 7 matches for "frames_".
2004 May 12
0
New tutorial (Forgot the address)!
Sorry guys. I was in a hurry and forgot to include the url.
Just follow this url:
http://wxruby.rubyforge.org/wiki/wiki.pl?Frames_(Part_1)
Ugh!
Robert
2019 Feb 03
1
Inefficiency in df$col
...es slower than pqR-2019-01-25.
(For a partial match, like df$xy, R-3.5.2 is 34 times slower.)
I wasn't surprised that pqR was faster, but I didn't expect this big a
difference. Then I remembered having seen a NEWS item from R-3.1.0:
* Partial matching when using the $ operator _on data frames_ now
throws a warning and may become defunct in the future. If partial
matching is intended, replace foo$bar by foo[["bar", exact =
FALSE]].
and having looked at the code then:
`$.data.frame` <- function(x, name) {
    a <- x[[name]]
    if (!is.null(a)) return(a)...
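A minimal sketch of the partial-matching behaviour under discussion, assuming a throwaway data frame whose column name xyz is made up for illustration:

df <- data.frame(xyz = 1:3)
df$xy                      # partial match: returns the xyz column (warns on recent R)
df[["xy"]]                 # exact match only: returns NULL
df[["xy", exact = FALSE]]  # explicit partial match, as the NEWS item suggests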
2005 Aug 08
1
Reading large files in R
Dear R-listers:
I am trying to work with a big (262 MB) file but apparently hit a
memory limit using R on Mac OS X as well as on a Unix machine.
This is the script:
> type <- list(a = 0, b = 0, c = 0)
> tmp <- scan(file = "coastal_gebco_sandS_blend.txt", what = type,
+             sep = "\t", quote = "\"", dec = ".", skip = 1,
+             na.strings = "-99", nmax = 13669628)
2007 May 15
2
Optimized File Reading with R
Dear All,
Hope I am not bumping into a FAQ, but so far my online search has been fruitless.
I need to read some data file using R. I am using the (I think)
standard command:
data_150 <- read.table("y_complete06000", header = FALSE)
where y_complete06000 is a 6000 by 40 table of numbers.
I am puzzled that R takes several minutes to read this file.
First I thought it may
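The usual first remedy, sketched here from the details in the post (6000 rows, 40 numeric columns), is to spare read.table the work of guessing column types and growing its result:

data_150 <- read.table("y_complete06000", header = FALSE,
                       colClasses = rep("numeric", 40),
                       nrows = 6000, comment.char = "")

Since every column is numeric, scan() into a matrix, as the threads below suggest, is usually faster still.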
2007 Jan 10
1
Fw: Memory problem on a linux cluster using a large data set [Broadcast]
...w names on sub-setting) that
> can be problematic in terms of memory use. Probably better to
> use a matrix, for which:
>
> 'read.table' is not the right tool for reading large matrices,
> especially those with many columns: it is designed to read _data
> frames_ which may have columns of very different classes. Use
> 'scan' instead.
>
> (from the help page for read.table). I'm not sure of the
> details of the algorithms you'll invoke, but it might be a
> false economy to try to get scan to read in 'small' vers...
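Made concrete for the dataset in this thread (320,000 rows by 1000 columns of 0/1/2 values, per the original post below), the help-page advice might look like the following sketch; the file name and separator are assumptions:

x <- matrix(scan("dataset.txt", what = integer(), sep = "\t", quiet = TRUE),
            nrow = 320000, ncol = 1000, byrow = TRUE)

Reading with what = integer() rather than the default double roughly halves the memory needed, since the values are only 0, 1, and 2.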
2006 Dec 18
1
Memory problem on a linux cluster using a large data set
Hello,
I have a large data set: 320,000 rows and 1000 columns. All entries take the values 0, 1, or 2.
I wrote a script to remove all the rows with more than 46 missing values. This works perfectly on a smaller dataset, but the problem arises when I try to run it on the larger data set: I get an error "cannot allocate vector size 1240 kb". I've searched through previous posts and found out that it might
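With the data held as a matrix x (as in the scan() sketch above), the filtering step itself needs no per-row loop. A sketch using the post's threshold of 46 missing values:

keep <- rowSums(is.na(x)) <= 46   # count NAs in each row
x <- x[keep, , drop = FALSE]      # drop rows with more than 46 missing values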
2006 Dec 21
1
Memory problem on a linux cluster using a large data set [Broadcast]
...w names on sub-setting) that
> can be problematic in terms of memory use. Probably better to
> use a matrix, for which:
>
> 'read.table' is not the right tool for reading large matrices,
> especially those with many columns: it is designed to read _data
> frames_ which may have columns of very different classes. Use
> 'scan' instead.
>
> (from the help page for read.table). I'm not sure of the
> details of the algorithms you'll invoke, but it might be a
> false economy to try to get scan to read in 'small' vers...