> Hello R Community,
>
> I would like to be able to download recent (yesterday's close is fine)
> stock and mutual fund prices from somewhere and use them for a personal
> finance project. Ideally, I would do this at a website of my choosing
> (e.g. morningstar.com) and have the possibility of getting a wide range
> of other information about the security as well. I see there is a
> package httpRequest that should allow me to retrieve the webpage with a
> posted security symbol. I looked at package XML for processing the
> returned webpage, but it didn't work on an example *.asp file that I
> tried. I also checked out the nascent package fBasics, but I didn't
> find what I'm looking for. I know there is a Perl module
> Finance::Quote; should I go that route and avoid trying to parse
> webpages? I was hoping to learn how to extract information from
> webpages in general so I could apply the techniques for other
> purposes too.
>
> Thanks,
> Scott Waichler
> scott at lifetime.oregonstate.edu
> ***********************************************************************
Check out get.hist.quote in package tseries.

Scott Waichler <scott.waichler <at> verizon.net> writes:

: [snip]
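The suggestion above might be used along these lines (a sketch only: it
needs the tseries package installed and a working network connection, and
the ticker, date range, and Yahoo provider here are assumptions, not part
of the original reply):

```r
library(tseries)  # provides get.hist.quote()

# Download daily closing prices for IBM from the default Yahoo provider.
# Dates are given as "YYYY-MM-DD"; adjust ticker and range as needed.
x <- get.hist.quote(instrument = "ibm",
                    start = "2004-01-01", end = "2004-04-20",
                    quote = "Close", provider = "yahoo")

# The result is a time series object, so the usual methods apply.
plot(x)
```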
On Tue, 20 Apr 2004, Scott Waichler wrote:

[snip]
> > should I go that route and avoid trying to parse webpages? I was
> > hoping to learn how to extract information from webpages in general
> > so I could apply the techniques for other purposes too.

From the Introduction to R Data Import/Export (1.8.1):

.... In general, statistical systems like R are not particularly well
suited to manipulations of large-scale data. ... And ... it can be
rewarding to use tools such as `awk' and `perl' to manipulate data
before import or after export. ...

You imply that you wish to learn a general technique, therefore I'd vote
for a general purpose language. Perl is a good choice.

HTH,
Itay

--------------------------------------------------------------
itayf at fhcrc.org
Fred Hutchinson Cancer Research Center
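That said, the general fetch-and-parse technique asked about can also be
sketched in base R with no extra packages. Everything below is
illustrative: the URL is a placeholder, and the "Last Trade" pattern is a
hypothetical piece of page text that would have to be adapted to the
actual HTML source of whatever site is scraped.

```r
# Fetch a web page as a character vector, one element per line.
# The URL is a placeholder, not a real quote service.
page <- readLines(url("http://www.example.com/quote?symbol=IBM"))

# Keep the lines matching a hypothetical marker near the price;
# the pattern must be adapted to the real page source.
hits <- grep("Last Trade", page, value = TRUE)

# Strip everything but digits and the decimal point from the first hit.
price <- as.numeric(gsub("[^0-9.]", "", hits[1]))
```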