Is there a way to get around R's memory-bound limitation by interfacing
with a Hadoop database, or should I look at products like SAS or JMP to
work with data that has hundreds of thousands of records? Any help is
appreciated.

--
__________________________
*Barry E. King, Ph.D.*
Analytics Modeler
Qualex Consulting Services, Inc.
Barry.King at qlx.com
O: (317)940-5464
M: (317)507-0661
__________________________
On Tue, Sep 16, 2014 at 6:40 AM, Barry King <barry.king at qlx.com> wrote:
> Is there a way to get around R's memory-bound limitation by interfacing
> with a Hadoop database or should I look at products like SAS or JMP to
> work with data that has hundreds of thousands of records? Any help is
> appreciated.
> __________________________
> *Barry E. King, Ph.D.*
> Analytics Modeler

Please change your email to plain text only, per forum standards.

You might want to look at bigmemory:
http://cran.revolutionanalytics.com/web/packages/bigmemory/index.html

--
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! <><
John McKown
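For what it's worth, a minimal sketch of the bigmemory approach (the file
names data.csv, data.bin, and data.desc are placeholders, and bigmemory
matrices must hold a single numeric type):

    # install.packages("bigmemory")  # if not already installed
    library(bigmemory)

    # Read a large all-numeric CSV into a file-backed big.matrix; the
    # data live on disk, so R's RAM usage stays small regardless of
    # the row count.
    x <- read.big.matrix("data.csv", header = TRUE, type = "double",
                         backingfile = "data.bin",
                         descriptorfile = "data.desc")

    nrow(x)        # row count, without pulling everything into RAM
    head(x[, 1])   # columns are indexed like an ordinary matrix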
If you need to start your question with a false dichotomy, by all means
choose the option you seem to have already chosen and stop trolling us. If
you actually want an answer here, try Googling the topic first (is
"R hadoop" so un-obvious?) and then phrase a specific question so someone
has a chance to help you.

---------------------------------------------------------------------------
Jeff Newmiller                        DCN: <jdnewmil at dcn.davis.ca.us>
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---------------------------------------------------------------------------
Sent from my phone. Please excuse my brevity.

On September 16, 2014 4:40:29 AM PDT, Barry King <barry.king at qlx.com> wrote:
> Is there a way to get around R's memory-bound limitation by interfacing
> with a Hadoop database or should I look at products like SAS or JMP to
> work with data that has hundreds of thousands of records? Any help is
> appreciated.
>
> --
> __________________________
> *Barry E. King, Ph.D.*
> Analytics Modeler
> Qualex Consulting Services, Inc.
> Barry.King at qlx.com
> O: (317)940-5464
> M: (317)507-0661
> __________________________
Hundreds of thousands of records usually fit into memory fine.

Hadley

On Tue, Sep 16, 2014 at 12:40 PM, Barry King <barry.king at qlx.com> wrote:
> Is there a way to get around R's memory-bound limitation by interfacing
> with a Hadoop database or should I look at products like SAS or JMP to
> work with data that has hundreds of thousands of records? Any help is
> appreciated.
>
> --
> __________________________
> *Barry E. King, Ph.D.*
> Analytics Modeler
> Qualex Consulting Services, Inc.
> Barry.King at qlx.com
> O: (317)940-5464
> M: (317)507-0661
> __________________________

--
http://had.co.nz/
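A quick back-of-envelope check of the footprint, using made-up dimensions
of 500,000 rows and 10 numeric columns:

    # Each double is 8 bytes, so 5e5 rows * 10 cols * 8 bytes is about
    # 40 MB -- comfortably in-memory on any modern machine.
    df <- as.data.frame(matrix(rnorm(5e5 * 10), ncol = 10))
    print(object.size(df), units = "MB")   # roughly 38 MB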