All-

Thank you in advance for any help you might be able to lend. Here is my
issue. I am trying to open a fairly large .dat file. The file was
originally downloaded as a GZ file, but I unzipped it (with 7-Zip) into
its current 1.86 GB .dat format. I know that the data is "just a plain
ASCII file with 720 columns and 360 rows per time step (month). It
should be readable by anything!" There are 1272 steps. Here is what
happens when I try to assign the file to an object:

> clds <- read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat", header = TRUE, row.names = 1)
Error in read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat",  :
  duplicate 'row.names' are not allowed
In addition: There were 45 warnings (use warnings() to see them)
> warnings()
1: In scan(file, what, nmax, sep, dec, quote, skip, nlines, ... :
   Reached total allocation of 1535Mb: see help(memory.size)   (x 25)
26: In type.convert(data[[i]], as.is = as.is[i], dec = dec, ... :
   Reached total allocation of 1535Mb: see help(memory.size)

--
Carson Baughman
Cell: (406) 360-9254
University of Alaska Fairbanks
Scenarios Network for Alaska Planning
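A quick sanity check one might run before committing to a full read.table()
on a 1.86 GB file, using the path given above: peek at the first couple of
lines to confirm the delimiter and whether a header row is actually present.

## readLines() with n = 2 reads only the first two lines, not the whole file.
head.lines <- readLines("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat",
                        n = 2)
head.lines         # eyeball the delimiter and check for a header row
nchar(head.lines)  # a 720-value grid row should be fairly wide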
On May 3, 2010, at 6:16 PM, Carson Baughman wrote:

> All-
> Thank you in advance for any help you might be able to lend. Here is
> my issue. I am trying to open a fairly large .dat file. The file was
> originally downloaded as a GZ file, but I unzipped it (with 7-Zip) into
> its current 1.86 GB .dat format. I know that the data is "just a plain
> ASCII file with 720 columns and 360 rows per time step (month). It
> should be readable by anything!" There are 1272 steps.

What's a step?

> Here is what happens when I try to assign the file to an object:
>
>> clds <- read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat", header = TRUE, row.names = 1)
> Error in read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat",  :
>   duplicate 'row.names' are not allowed

If you want to assign row names that are different from the default
sequential integers, you need to offer a vector of unique character
values. Perhaps you wanted to specify row.names = NULL?

> In addition: There were 45 warnings (use warnings() to see them)
>> warnings()
> 1: In scan(file, what, nmax, sep, dec, quote, skip, nlines, ... :
>    Reached total allocation of 1535Mb: see help(memory.size)   (x 25)
> 26: In type.convert(data[[i]], as.is = as.is[i], dec = dec, ... :
>    Reached total allocation of 1535Mb: see help(memory.size)

Sounds like you may be in trouble with memory resources. As the message
says, see help(memory.size).

> --
> Carson Baughman
> Cell: (406) 360-9254
> University of Alaska Fairbanks
> Scenarios Network for Alaska Planning

David Winsemius, MD
West Hartford, CT
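Putting the row.names = NULL suggestion into code: a minimal sketch of a
restricted test read, assuming the grid is whitespace-delimited with no
header row and purely numeric values (assumptions, not confirmed by the
post). nrows and colClasses keep memory use down while checking that the
file parses at all.

## Test read of the first time step only (360 rows), with default row names.
clds1 <- read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat",
                    header     = FALSE,     # assumed: no header row in the raw grid
                    row.names  = NULL,      # keep default integer row names
                    colClasses = "numeric", # skips type.convert() on every column
                    nrows      = 360)       # one 360 x 720 time step
dim(clds1)  # should be 360 x 720 if the layout is as described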
Hi,

on the one hand you write "fairly large," on the other hand you write
"should be readable by anything." The warnings indicate that you simply
run out of memory at some point. Not too surprising, given that your
dataset has about 450,000 rows and 720 columns. You may want to search
the R-help archives first for how to allocate memory and how to read
large files, since these questions are asked frequently.

The error, however, comes from the fact that the column you are using
for row names (row.names = 1) contains duplicate values, which is not
allowed.

Daniel

-------------------------
cuncta stricte discussurus
-------------------------

-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org] On Behalf Of Carson Baughman
Sent: Monday, May 03, 2010 6:17 PM
To: R-help at r-project.org
Subject: [R] advice?

All-

Thank you in advance for any help you might be able to lend. Here is my
issue. I am trying to open a fairly large .dat file. The file was
originally downloaded as a GZ file, but I unzipped it (with 7-Zip) into
its current 1.86 GB .dat format. I know that the data is "just a plain
ASCII file with 720 columns and 360 rows per time step (month). It
should be readable by anything!" There are 1272 steps. Here is what
happens when I try to assign the file to an object:

> clds <- read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat", header = TRUE, row.names = 1)
Error in read.table("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat",  :
  duplicate 'row.names' are not allowed
In addition: There were 45 warnings (use warnings() to see them)
> warnings()
1: In scan(file, what, nmax, sep, dec, quote, skip, nlines, ... :
   Reached total allocation of 1535Mb: see help(memory.size)   (x 25)
26: In type.convert(data[[i]], as.is = as.is[i], dec = dec, ... :
   Reached total allocation of 1535Mb: see help(memory.size)

--
Carson Baughman
Cell: (406) 360-9254
University of Alaska Fairbanks
Scenarios Network for Alaska Planning

______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
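Along the lines of the pointer above to the large-file threads, a sketch of
reading the file one time step (month) at a time with scan() on an open
connection, so that only a single 360 x 720 grid is held in memory. The
path, the whitespace delimiter, and the absence of a header row are all
assumptions carried over from the post; the processing step is a placeholder.

## Open the file once and pull 360 lines (one month) per iteration.
con <- file("C:\\CRU Data\\TS3.0\\Cloud\\cru_ts_3_00.1901.2006.cld.dat",
            open = "r")
n.steps <- 1272  # 106 years (1901-2006) x 12 months
for (i in seq_len(n.steps)) {
  vals <- scan(con, what = double(), nlines = 360, quiet = TRUE)
  grid <- matrix(vals, nrow = 360, ncol = 720, byrow = TRUE)
  ## ... summarise or write out 'grid' here rather than keeping all steps ...
}
close(con)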