Dear list,

I need to read a big txt file (around 130 MB; 23800 rows and 49 columns) for downstream clustering analysis.

I first used

    Tumor <- read.table("Tumor.txt", header = TRUE, sep = "\t")

but it took a long time and failed. However, there was no problem when I read in only 3 columns of the data.

Is there any way to load this big file?

Thanks for any suggestions!

Sincerely,
Alex
An easy solution would be to split your big txt file with a text editor, e.g. into pieces of 5000 rows each, read each piece into R, and then combine the resulting data frames into one.
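A minimal sketch of the combine step for that approach, assuming (hypothetically) that the editor produced tab-delimited pieces named Tumor_part1.txt through Tumor_part5.txt, with the header row kept only in the first piece:

    ## Read each piece and stack them into one data frame.
    ## The file names and the number of pieces are hypothetical.
    files <- sprintf("Tumor_part%d.txt", 1:5)
    first <- read.table(files[1], header = TRUE, sep = "\t")
    rest  <- lapply(files[-1], read.table,
                    header = FALSE, sep = "\t", col.names = names(first))
    Tumor <- do.call(rbind, c(list(first), rest))

Note that this still ends up with the whole data set in memory; it only works around the single long read.table call, not the memory requirement.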
Alex,

See the 'R Data Import/Export' manual (Version 2.5.0, 2007-04-23) and search for 'large' or 'scan'.

Usually, taking care with the arguments nlines, what, quote and comment.char is enough to get scan() to cooperate.

You will need around 1 GB of RAM to store the result, so if you are working on a machine with less, you will need to upgrade. Consider storing the result as a numeric matrix. If any of the columns are long strings not needed in your computation, be sure to skip over them.

Read the 'Details' section of the help page for scan() carefully.

Chuck
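A minimal sketch of that scan() route, assuming a tab-delimited file with one header row, a quoted character ID in column 1 and 49 numeric sample columns after it (the layout Alex describes later in the thread):

    ## Read the header row separately to get the column names.
    hdr <- scan("Tumor.txt", what = "", sep = "\t", nlines = 1, quiet = TRUE)

    ## 'what' describes one row: one character field followed by 49 doubles.
    tmpl <- c(list(""), rep(list(0), 49))
    dat  <- scan("Tumor.txt", what = tmpl, sep = "\t", skip = 1,
                 quote = "\"", comment.char = "", quiet = TRUE)

    ## Store the numeric part as a matrix, with the SNP IDs as row names.
    Tumor <- do.call(cbind, dat[-1])
    dimnames(Tumor) <- list(dat[[1]], hdr[-1])

The matrix form avoids the per-column overhead of a data frame and is what clustering tools such as dist() and kmeans() take as input anyway.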
Dear Jim,

Thanks a lot! The size of the text file is 189,588,541 bytes. It consists of 238305 rows (including the header) and 50 columns (the first column is the ID and the remaining 49 are the samples).

The first row (the header) looks like:

    "ID"
    AIRNS_p_Sty5_Mapping250K_Sty_A09_50156.cel  AIRNS_p_Sty5_Mapping250K_Sty_A11_50188.cel
    AIRNS_p_Sty5_Mapping250K_Sty_A12_50204.cel  AIRNS_p_Sty5_Mapping250K_Sty_B09_50158.cel
    AIRNS_p_Sty5_Mapping250K_Sty_C01_50032.cel  AIRNS_p_Sty5_Mapping250K_Sty_C12_50208.cel
    AIRNS_p_Sty5_Mapping250K_Sty_D03_50066.cel  AIRNS_p_Sty5_Mapping250K_Sty_D08_50146.cel
    AIRNS_p_Sty5_Mapping250K_Sty_F03_50070.cel  AIRNS_p_Sty5_Mapping250K_Sty_F12_50214.cel
    AIRNS_p_Sty5_Mapping250K_Sty_G09_50168.cel  DOLCE_p_Sty7_Mapping250K_Sty_B04_53892.cel
    DOLCE_p_Sty7_Mapping250K_Sty_B06_53924.cel  DOLCE_p_Sty7_Mapping250K_Sty_C05_53910.cel
    DOLCE_p_Sty7_Mapping250K_Sty_C10_53990.cel  DOLCE_p_Sty7_Mapping250K_Sty_D05_53912.cel
    DOLCE_p_Sty7_Mapping250K_Sty_E01_53850.cel  DOLCE_p_Sty7_Mapping250K_Sty_G12_54030.cel
    DOLCE_p_Sty7_Mapping250K_Sty_H06_53936.cel  DOLCE_p_Sty7_Mapping250K_Sty_H08_53968.cel
    DOLCE_p_Sty7_Mapping250K_Sty_H11_54016.cel  DOLCE_p_Sty7_Mapping250K_Sty_H12_54032.cel
    GUSTO_p_Sty20_Mapping250K_Sty_C08_81736.cel GUSTO_p_Sty20_Mapping250K_Sty_E03_81660.cel
    GUSTO_p_Sty20_Mapping250K_Sty_H02_81650.cel HEWED_p_250KSty_Plate_20060123_GOOD_B01_46246.cel
    HEWED_p_250KSty_Plate_20060123_GOOD_C06_46328.cel HEWED_p_250KSty_Plate_20060123_GOOD_F02_46270.cel
    HEWED_p_250KSty_Plate_20060123_GOOD_G04_46304.cel HOCUS_p_Sty4_Mapping250K_Sty_B05_55060.cel
    HOCUS_p_Sty4_Mapping250K_Sty_B12_55172.cel  HOCUS_p_Sty4_Mapping250K_Sty_E05_55066.cel
    SOARS_p_Sty23_Mapping250K_Sty_B07_89024.cel SOARS_p_Sty23_Mapping250K_Sty_C01_88930.cel
    SOARS_p_Sty23_Mapping250K_Sty_C11_89090.cel SOARS_p_Sty23_Mapping250K_Sty_F07_89032.cel
    SOARS_p_Sty23_Mapping250K_Sty_H08_89052.cel SOARS_p_Sty23_Mapping250K_Sty_H10_89084.cel
    VINOS_p_Sty8_Mapping250K_Sty_A04_54082.cel  VINOS_p_Sty8_Mapping250K_Sty_A07_54130.cel
    VINOS_p_Sty8_Mapping250K_Sty_B08_54148.cel  VINOS_p_Sty8_Mapping250K_Sty_D01_54040.cel
    VINOS_p_Sty8_Mapping250K_Sty_D05_54104.cel  VINOS_p_Sty8_Mapping250K_Sty_E04_54090.cel
    VINOS_p_Sty8_Mapping250K_Sty_E12_54218.cel  VINOS_p_Sty8_Mapping250K_Sty_G01_54046.cel
    VINOS_p_Sty8_Mapping250K_Sty_G12_54222.cel  VOLTS_p_Sty9_Mapping250K_Sty_G09_57916.cel
    VOLTS_p_Sty9_Mapping250K_Sty_H12_57966.cel

and the second row looks like:

    "SNP_A-1780271"
    1.8564200401306 1.5095599889755 1.7315399646759 1.530769944191  1.6576000452042
    1.474179983139  2.1564099788666 1.775720000267  1.5979499816895 2.1641499996185
    1.980849981308  2.180370092392  1.8782299757004 2.1485500335693 1.5325000286102
    1.7232999801636 2.2281200885773 1.9381999969482 1.8546999692917 2.1590900421143
    2.1928400993347 2.0253200531006 2.6680200099945 2.7435901165009 2.0804998874664
    3.2142300605774 2.1001501083374 2.147579908371  3.5244200229645 1.374480009079
    1.6613099575043 3.1606800556183 2.0917000770569 1.8727999925613 1.8952000141144
    1.813570022583  1.8180899620056 2.2553699016571 1.9273999929428 1.6766400337219
    1.3424600362778 1.5666999816895 1.7180800437927 1.9548699855804 1.4444999694824
    2.2242999076843 1.7591500282288 2.0480198860168 2.638689994812

Thanks a lot!

Sincerely,
Alex

On 6/6/07, jim holtman <jholtman@gmail.com> wrote:
> It would be useful if you could post the first couple of rows of the data
> so we can see what it looks like.
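Given the layout described in the last message (a character ID column followed by 49 numeric columns, 238304 data rows plus a header), another option is to keep read.table but declare the column classes and an upper bound on the row count, so it does not have to guess them. A sketch under those assumptions, not something proposed verbatim in the thread:

    ## Assumes the tab-delimited layout described above; nrows is a slight
    ## overestimate of the 238304 data rows, which is all read.table needs.
    Tumor <- read.table("Tumor.txt", header = TRUE, sep = "\t",
                        quote = "\"", comment.char = "",
                        colClasses = c("character", rep("numeric", 49)),
                        nrows = 240000)

Declaring colClasses spares read.table its type-guessing pass over every column, which is usually a large part of what makes it slow on files of this size; setting comment.char = "" helps a little as well.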