similar to: Downloading an HTML table

Displaying 20 results from an estimated 1100 matches similar to: "Downloading an HTML table"

2012 May 26
3
Problem with readHTMLTable
Hello All, I was trying to simply run readHTMLTable on the example published in the package, and on a page I was working on. So running: u = "http://en.wikipedia.org/wiki/List_of_countries_by_population" tables = readHTMLTable(u) returns the following error: Error in tb[["thead"]] : subscript out of bounds. Looking up this error on the web didn't give me any hint. Is
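A minimal sketch of one common workaround, assuming the error is triggered by a single malformed table on the page: parse the document once, then read each table node separately inside tryCatch so one bad table does not abort the whole call.

    library(XML)

    u     <- "http://en.wikipedia.org/wiki/List_of_countries_by_population"
    doc   <- htmlParse(u)                   # parse the page once
    nodes <- getNodeSet(doc, "//table")     # every <table> node

    # read each table on its own; drop any that readHTMLTable cannot handle
    tables <- lapply(nodes, function(tb)
      tryCatch(readHTMLTable(tb), error = function(e) NULL))
    tables <- Filter(Negate(is.null), tables)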
2012 Jun 14
1
readHTMLTable function - unable to find an inherited method ~ for signature "NULL"
Hi R experts, I have been playing with library(XML) recently and found that readHTMLTable works flawlessly for some websites, but it gives me an error like the one below ... Error in function (classes, fdef, mtable) : unable to find an inherited method for function "readHTMLTable", for signature "NULL" let's say..for example, this code works fine a
2009 Nov 26
1
How to suppress errors generated by readHTMLTable?
library(XML) download.file('http://polya.umdnj.edu/polya_db2/gene.php?llid=109079&unigene=&submit=Submit','index.html') tables=readHTMLTable("index.html",error=function(...){}) tables readHTMLTable gives me the following errors. Could somebody let me know how to suppress them? Opening and ending tag mismatch: center and table htmlParseEntityRef: expecting
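The "tag mismatch" and "htmlParseEntityRef" messages come from the underlying HTML parser rather than from readHTMLTable itself, so the handler has to go on htmlParse. A minimal sketch under that assumption:

    library(XML)

    download.file('http://polya.umdnj.edu/polya_db2/gene.php?llid=109079&unigene=&submit=Submit',
                  'index.html')

    # silence the libxml2 parser messages, then extract the tables
    doc    <- htmlParse('index.html', error = function(...) {})
    tables <- readHTMLTable(doc)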
2013 Mar 20
1
htmlParse (from XML library) working sporadically in the same code
I am using htmlParse from the XML library on a particular website. Sometimes the code fails, sometimes it works; most of the time it doesn't and I cannot see why. The file I am trying to parse is http://www.londonstockexchange.com/exchange/prices-and-markets/international-markets/indices/home/sp-500.html?page=0 Sometimes the following code works n<-readHTMLTable(htmlParse(url)) But most of the
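One guess (and it is only a guess) is that the server sometimes rejects requests that do not look like they come from a browser. A sketch that fetches the page with RCurl, sending an explicit user agent, and parses the returned text:

    library(RCurl)
    library(XML)

    url <- "http://www.londonstockexchange.com/exchange/prices-and-markets/international-markets/indices/home/sp-500.html?page=0"

    # fetch with a browser-like user agent, then parse the text we got back
    txt <- getURL(url, useragent = "Mozilla/5.0", followlocation = TRUE)
    doc <- htmlParse(txt, asText = TRUE)
    n   <- readHTMLTable(doc)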
2013 Jan 15
1
readHTMLTable (XML package)
Hi, I am using XML::readHTMLTable and getting the below error. Does anyone know why? Does this function not work with https? I didn't see anything in help about that. > library(XML) > wampage<-readHTMLTable('https://hr-workforce-analytics.llnl.gov/wf_pi_pop.html',1) Error in htmlParse(doc) : File https://hr-workforce-analytics.llnl.gov/wf_pi_pop.html does not exist Dan
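htmlParse in the XML package cannot fetch https URLs on its own; the usual workaround is to download the page with RCurl and pass the text along. A minimal sketch:

    library(RCurl)
    library(XML)

    u <- 'https://hr-workforce-analytics.llnl.gov/wf_pi_pop.html'

    # fetch over https with RCurl, then hand the HTML text to the parser
    txt     <- getURL(u, ssl.verifypeer = FALSE)
    wampage <- readHTMLTable(htmlParse(txt, asText = TRUE), which = 1)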
2010 Mar 18
1
Do colClasses in readHTMLTable (XML Package) work?
Hi, I can't get the colClasses option to work in the readHTMLTable function of the XML package. Here's a code fragment: require("XML") doc <- "http://www.nber.org/cycles/cyclesmain.html" table <- getNodeSet(htmlParse(doc),"//table") [[2]] # The main table is the second one because it's embedded in the page table. xt
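For reference, a sketch of how which and colClasses are normally combined in a recent version of the XML package; the number of columns given to colClasses is an assumption about the page layout, not something taken from the NBER table itself.

    library(XML)

    doc <- htmlParse("http://www.nber.org/cycles/cyclesmain.html")

    # which = 2 picks the second table; colClasses needs one entry per
    # column of that table (four is assumed here)
    xt <- readHTMLTable(doc, which = 2,
                        colClasses = rep("character", 4),
                        stringsAsFactors = FALSE)
    str(xt)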
2012 Jun 07
1
How to set cookies in RCurl
Hi, I am trying to access a website and read its content. The website is a restricted-access website that I reach through a proxy server (which therefore requires me to enable cookies). I have problems allowing RCurl to receive and send cookies. The following lines give me: library(RCurl) library(XML) url <- "http://www.theurl.com" content <- readHTMLTable(url) content
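A minimal sketch of turning on RCurl's cookie engine with a reusable handle; proxy options are left out, and the URL is the placeholder from the original message.

    library(RCurl)
    library(XML)

    url <- "http://www.theurl.com"

    # cookiefile = "" enables the cookie engine without needing a file on disk;
    # cookies received on one request are resent on the next
    curl <- getCurlHandle(cookiefile = "", cookiejar = "cookies.txt",
                          followlocation = TRUE)

    page    <- getURL(url, curl = curl)
    content <- readHTMLTable(htmlParse(page, asText = TRUE))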
2017 Jul 10
2
Problems with time formats when importing data using readHTMLTable
Hi, I am extracting positions data from the marine traffic website. The table has a "Timestamp" column which, in the browser, appears with the format yyyy-mm-dd HH:MM (UTC), e.g. 2017-07-10 14:04 (UTC). When I import the table, the same date "2017-07-10 14:04 (UTC)" appears as "1499696500149969650021 minutes ago". This is the most recent date and time. Older
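A sketch of the conversion once the raw text is available, assuming the cell reduces either to the visible "yyyy-mm-dd HH:MM (UTC)" string or to a 10-digit Unix timestamp (the long digit run in the imported value looks like a repeated epoch time, but that is a guess).

    # visible browser format
    x <- "2017-07-10 14:04 (UTC)"
    as.POSIXct(sub(" \\(UTC\\)$", "", x), format = "%Y-%m-%d %H:%M", tz = "UTC")

    # if the cell actually carries a Unix timestamp
    as.POSIXct(1499696500, origin = "1970-01-01", tz = "UTC")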
2011 May 04
1
issue with "strange" characters (locale settings)
WinXP-x32, R-2.13.0 Dear list, I have a problem that (I think) relates to the interaction between Windows and R. I am trying to scrape a table with data on the Hawaiian Islands. This is my code: library(XML) u <- "http://en.wikipedia.org/wiki/Hawaii" tables <- readHTMLTable(u) Islands <- tables[[5]] The output is (first set of columns):
2011 May 05
1
issue with "strange" characters (readHTMLTable)
Thank you. The line of code you give certainly resolves several of the issues. I didn't realize that font support is such a difficult matter to get right. Let me express my gratitude to those who provide this for us in R. On 04-05-11, Prof Brian Ripley <ripley at stats.ox.ac.uk> wrote: Oh, please! This is about the contributed package XML, not R and not Windows. Some of
2012 Aug 09
2
read htm table error
Hi, I am using R version 2.15 and I haven't been able to read an HTML table. Following are my code and the error message. Error in htmlParse(doc) : error in creating parser for http://en.wikipedia.org/wiki/Brazil_national_football_team theurl <- "http://en.wikipedia.org/wiki/Brazil_national_football_team" tables <- readHTMLTable(theurl) Regards, Kiung
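"error in creating parser for <url>" usually means the parser could not fetch the page itself; one common workaround is to retrieve the page first and parse the text that comes back. A minimal sketch:

    library(RCurl)
    library(XML)

    theurl <- "http://en.wikipedia.org/wiki/Brazil_national_football_team"

    # fetch the page ourselves, then parse the returned HTML text
    raw    <- getURL(theurl)
    tables <- readHTMLTable(htmlParse(raw, asText = TRUE))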
2012 Sep 19
1
scraping with session cookies
Hi, I am starting to code in R and one of the things that I want to do is scrape some data from the web. The problem I am having is that I cannot get past the disclaimer page (which produces a session cookie). I have been able to collect some ideas and combine them in the code below but I don't get past the disclaimer page. I am trying to accept the disclaimer with postForm and write
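A minimal sketch of the usual pattern, assuming the disclaimer is an ordinary HTML form: share one curl handle between postForm and the later request so the session cookie set by the disclaimer page is sent back. The URLs and the form field name are placeholders, not the real site.

    library(RCurl)
    library(XML)

    # one handle reused for every request keeps the session cookie alive
    curl <- getCurlHandle(cookiefile = "", followlocation = TRUE)

    postForm("http://www.example.com/disclaimer",
             agree = "yes",                       # hypothetical form field
             style = "POST", curl = curl)

    page   <- getURL("http://www.example.com/data", curl = curl)
    tables <- readHTMLTable(htmlParse(page, asText = TRUE))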
2017 Jul 10
0
Problems with time formats when importing data using readHTMLTable
Not reproducible. [1][2][3] If our answers don't seem to apply to your situation, it will likely be because you did not explain your question clearly. Not plain text. This is a plain text mailing list, and the best-case scenario when you let your email program send HTML is that what you saw is not what we see (worst case is your email is scrambled on our end). Have you read the
2010 Nov 04
3
postForm() in RCurl and library RHTMLForms
Hi R users, Suppose I want to see the data on the website url <- "http://www.nseindia.com/content/indices/ind_histvalues.htm" for the index "S&P CNX NIFTY" for dates "FromDate"="01-11-2010","ToDate"="02-11-2010" and then read the HTML table from the page using readHTMLTable(). I am using this code webpage <-
2011 May 26
2
What am I doing wrong with sapply ?
Statement 9 using sapply does not seem to give the correct answer (or at least not the one I expect). Yet I do what I think is the same thing with statement 11 and I get the answer I'm looking for. 9 : s <-sapply(unlist(v[c(1:length(v))]), max) 11: for(i in 1 :length(v)) v1[i] <- max(unlist(v[i])) Shouldn't I get the same answer? library(XML) rm(list=ls()) url <-
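The difference is that unlist() in statement 9 flattens v into one long vector before sapply runs, so max is taken element by element; applying the function per list component reproduces the loop. A short sketch:

    # statement 9 effectively does sapply(<flattened vector>, max)
    # take the max within each component of v instead:
    s <- sapply(v, function(x) max(unlist(x)))

    # which matches statement 11:
    # for (i in 1:length(v)) v1[i] <- max(unlist(v[i]))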
2017 Jul 11
2
Problems with time formats when importing data using readHTMLTable
Dear Jeff, I am sorry, I didn't notice that it was not plain text. I hope that it is now in the correct format. I explain the problem again, now with more details. I am collecting the track positions of our research vessel from www.marinetraffic.com. On the page, the data appear in a table: Timestamp Source Speed (kn) Latitude (°) Longitude (°) Course (°) Show on Map
2013 Apr 03
7
Canadian political party colours in ggplot2
A stupid question, but does anyone know how to express the actual colours used by the main Canadian political parties? I want to do a couple of ggplot2 plots and have lines or rectangles that accurately reflect the party colours. I can probably play around with RColorBrewer or something to figure it out, but if someone already has them it would save me some time, especially with the NDP
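A sketch of mapping parties to fixed colours with scale_fill_manual; the hex codes are rough guesses at the brand colours, not official values, and the seat numbers are toy data for illustration only.

    library(ggplot2)

    # approximate party colours -- treat these as placeholders
    party_cols <- c(Liberal      = "#D71920",
                    Conservative = "#1A4782",
                    NDP          = "#F37021",
                    Bloc         = "#33B2CC",
                    Green        = "#3D9B35")

    df <- data.frame(party = names(party_cols),
                     seats = c(10, 40, 25, 5, 2))   # toy numbers

    ggplot(df, aes(x = party, y = seats, fill = party)) +
      geom_bar(stat = "identity") +
      scale_fill_manual(values = party_cols)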
2016 Jun 21
2
Problems with accents and other characters in R and RStudio
Hello. I have some kind of problem with accented characters when working in R or RStudio that I don't know how to solve. Trying to reproduce, on two different PCs, both running Windows 7, one of the latest exercises Carlos Gil Bellosta published on his blog ( https://www.datanalytics.com/2016/06/20/6602-767-km-alrededor-de-espana-para-visitar-todas-sus-capitales-de-provincia/), I find that when
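A sketch of the usual first steps on a Windows 7 box, assuming the trouble is that the script or data file is UTF-8 while the session default is latin1; the file names are placeholders.

    # tell R explicitly that the script is UTF-8 instead of the Windows default
    source("ejercicio.R", encoding = "UTF-8")

    # same idea for a data file
    datos <- read.csv("capitales.csv", fileEncoding = "UTF-8",
                      stringsAsFactors = FALSE)

    # check what the session is currently assuming
    Sys.getlocale("LC_CTYPE")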
2012 Sep 17
1
memory leak using XML readHTMLTable
Hi, I'm using the XML package to scrape data and I'm trying to figure out how to eliminate the memory leak I'm currently experiencing. In the searches I've done, it sounds like the existence of the leak is fairly well known. What isn't as clear is exactly how to solve it. The general process I'm using is this: require(XML) myFunction <- function(URL) { html
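One commonly suggested mitigation, assuming the leak comes from parsed documents that are never released at the C level: keep the parse and the table extraction together and call free() on the document before returning. A sketch along the lines of the function above:

    require(XML)

    myFunction <- function(URL) {
      doc  <- htmlParse(URL)
      tbls <- readHTMLTable(doc)
      free(doc)    # explicitly release the underlying libxml2 document
      tbls
    }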
2010 Oct 06
2
Converting scraped data
Dear Colleagues, I used this code to scrape data from the URL contained within. This code should be reproducible. require("XML") library(XML) theurl <- "http://www.queensu.ca/cora/_trends/mip_2006.htm" tables <- readHTMLTable(theurl) n.rows <- unlist(lapply(tables, function(t) dim(t)[1])) class(tables) test<-data.frame(tables, stringsAsFactors=FALSE)
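A sketch of one way to go from the list that readHTMLTable returns to a usable data frame: pick the table of interest instead of wrapping the whole list in data.frame(), and convert the character columns that are really numeric. Passing stringsAsFactors = FALSE to readHTMLTable is assumed to work in the installed XML version.

    require(XML)

    theurl <- "http://www.queensu.ca/cora/_trends/mip_2006.htm"
    tables <- readHTMLTable(theurl, stringsAsFactors = FALSE)

    # keep the largest table (readHTMLTable returns one element per <table>)
    n.rows <- sapply(tables, function(t) if (is.null(t)) 0L else nrow(t))
    test   <- tables[[which.max(n.rows)]]

    # columns that are entirely numeric-looking get converted, others are kept
    test[] <- lapply(test, function(col) {
      num <- suppressWarnings(as.numeric(col))
      if (!anyNA(num[!is.na(col)])) num else col
    })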