Hi Andy,
On Jul 27, 2009, at 12:18 PM, Andrew Aldersley wrote:
>
> Hi all,
>
> I sent a request round last week asking for help with using a "for"
> loop to read and separate a large dataset. The response I got worked
> great, but now I have another problem with using my loop.
>
> Basically I have a number of different files containing columned
> data. There are 132 datasets, named such that I have something in
> the form...
>
> precip_colxxx.txt
>
> ...where xxx is a number ranging from 1 to 132. What I want to do is
> read in every 13th table and extract the third column, and then
> place this in a new dataset. The new dataset will thus consist of 11
> columns of data. I have written the following bit of script to read
> in every 13th table separately; however, I'm not sure how to do the
> next step of creating a new data frame and "dumping" the third
> column of my tables into this data frame. Is there a chance I will
> have to use a nested loop?
>
> for (i in seq(1,120,13)) {
>
> nm <- sprintf('precip_col%03d.txt', i)
>
> precip <- read.table(nm, header=T)
>
> }
You can do this by "building up" your columns into a list, then using
a combo of do.call and cbind.
For example:
mydata <- list()
for (i in seq(1, 120, 13)) {  # every 13th file; seq(1, 132, 13) would cover all 132 files (11 columns)
  nm <- sprintf('precip_col%03d.txt', i)
  precip <- read.table(nm, header=TRUE)
  mydata[[length(mydata) + 1]] <- precip[, 3]  # append the 3rd column to the list
}
mydata <- do.call(cbind, mydata)  # bind the collected columns into one matrix
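do.call(cbind, mydata) gives you a matrix; if you'd rather end up with a
data.frame whose columns are named after the source files, one way to
finish up (just a sketch, reusing the same file numbering as above) is:
colnames(mydata) <- sprintf('precip_col%03d', seq(1, 120, 13))  # label each column by its source file
mydata <- as.data.frame(mydata)  # convert the matrix to a data.frame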
The first argument to do.call is the function you want to call; the
second is a *list* of the arguments you'd like to pass to that
function.
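A tiny illustration with some made-up vectors, just to show what
do.call is doing:
x <- list(a = 1:3, b = 4:6)
do.call(cbind, x)  # identical to calling cbind(a = 1:3, b = 4:6) directly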
-steve
--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact