Displaying 20 results from an estimated 400 matches similar to: "Analyzing Publications from Pubmed via XML"
2012 Dec 27
1
Conjunction and disjunction in pubmed query
Hi:
I am trying to query pubmed abstracts using the following syntax:
url= "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
search = paste(url, "db=pubmed&term=", queryTerm1, "+AND+",
queryTerm2,"+OR+",queryTerm3, "+OR+", queryTerm4,
"[abstract]&retmax=100&usehistory=y", sep="")
docId <-
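A minimal sketch (assuming RCurl and XML, with placeholder terms standing in for queryTerm1..queryTerm4) of one way to express the intended precedence: parentheses group the OR'd terms so the AND applies to the whole disjunction, and URLencode() takes care of the spaces and brackets.
library(RCurl)
library(XML)

base <- "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
# placeholder terms standing in for queryTerm1..queryTerm4
term <- "COL4A1[abstract] AND (stroke[abstract] OR retinal[abstract] OR ocular[abstract])"
query <- paste0(base, "db=pubmed&term=", URLencode(term, reserved = TRUE),
                "&retmax=100&usehistory=y")

doc <- xmlTreeParse(getURL(query), useInternalNodes = TRUE)
sapply(getNodeSet(doc, "//IdList/Id"), xmlValue)   # matching PMIDs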
2012 Dec 11
1
query multiple terms in PubMed abstract
Hi:
I am trying to search for PubMed abstracts which contain BOTH of two terms:
COL4A1 AND Ocular. I am using the following code:
url= "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
search = paste(url,
"db=pubmed&term=COL4A1+AND+Ocular[abstract]&retmax=300", sep="")
docId <- xmlTreeParse(getURL(paste(url, search, sep="")),
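A minimal sketch, assuming RCurl and XML: the field tag has to be attached to each term (otherwise [abstract] only qualifies "Ocular"), and note that `search` already contains `url`, so it should not be pasted onto the base URL a second time.
library(RCurl)
library(XML)

url    <- "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
search <- paste0(url, "db=pubmed&term=",
                 "COL4A1[abstract]+AND+Ocular[abstract]&retmax=300")

doc <- xmlTreeParse(getURL(search), useInternalNodes = TRUE)
xmlValue(getNodeSet(doc, "/eSearchResult/Count")[[1]])   # number of matches
sapply(getNodeSet(doc, "//IdList/Id"), xmlValue)         # their PMIDs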
2005 May 02
2
"Special" characters in URI
Hello!
I am crossposting this to R-help and BioC, since it is relevant to both
groups.
I wrote a wrapper for the Entrez search utility (the link for this is provided below),
which can add some new search functionality to existing code in Bioconductor's
package 'annotate'*.
http://eutils.ncbi.nlm.nih.gov/entrez/query/static/esearch_help.html
The Entrez search utility returns an XML document
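A minimal sketch of the percent-encoding step in base R (the query term is a made-up example):
term    <- 'Smith J[au] AND "gene expression"[tiab]'   # made-up query
encoded <- URLencode(term, reserved = TRUE)
encoded
# "Smith%20J%5Bau%5D%20AND%20%22gene%20expression%22%5Btiab%5D"
paste0("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?",
       "db=pubmed&term=", encoded)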
2005 May 08
2
Extract just some fields from XML
Hello!
I am trying to get specific fields from an XML document and I am totally
puzzled. I hope someone can help me.
# URL
URL<-"http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=11877539,11822933,11871444&retmode=xml&rettype=citation"
# download an XML file
tmp <- xmlTreeParse(URL, isURL = TRUE)
tmp <- xmlRoot(tmp)
Now I want to extract only
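A minimal sketch of pulling a few fields out with XPath instead of walking the tree by hand, assuming the same URL object and the XML package (the tag names follow the PubMed citation XML):
library(XML)

doc <- xmlTreeParse(URL, isURL = TRUE, useInternalNodes = TRUE)

pmids    <- xpathSApply(doc, "//MedlineCitation/PMID", xmlValue)
titles   <- xpathSApply(doc, "//ArticleTitle", xmlValue)
journals <- xpathSApply(doc, "//Journal/Title", xmlValue)

data.frame(pmid = pmids, journal = journals, title = titles)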
2005 Jun 14
1
protection stack overflow??
Hi dear Rers,
I am using the SSOAP package to access the SOAP service at NCBI.
I followed the example code in SSOAP but failed.
> z <- .SOAP("http://www.ncbi.nlm.nih.gov/entrez/eutils/soap/soap_adapter.cgi", method="run_eInfo", db="pubmed", action = I("einfo"))
Error: protect(): protection stack overflow
what's wrong?
Thanks very much.
Regards
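Not a fix for the .SOAP call itself, but a minimal sketch of fetching the same einfo information over the plain REST interface with RCurl and XML, which sidesteps the SOAP layer entirely:
library(RCurl)
library(XML)

doc <- xmlTreeParse(
  getURL("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/einfo.fcgi?db=pubmed"),
  useInternalNodes = TRUE)

# e.g. the searchable fields that db=pubmed exposes
xpathSApply(doc, "//FieldList/Field/Name", xmlValue)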
2008 Feb 18
2
Huge number
Hi,
I'm trying to calculate p-values to find out which genes are differentially
expressed when comparing situation A to situation B.
I got this data (this is part of the data) from a whole organism, and each
number is an expression value (that means we could think gene 'a'
is 13 in situation A, and it turns to 30 in situation B).
To find the probability, I'm going to use the Audic-Claverie method. (The
significance
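The Audic-Claverie probability has a closed form; a minimal sketch for the simplified case of equal library sizes (the function names are made up, and the one-sided tail convention is one choice among several):
# Probability of seeing y counts in B given x counts in A, assuming equal
# library sizes: p(y | x) = choose(x + y, x) / 2^(x + y + 1).
# Computed in log space so large counts do not overflow.
ac_prob <- function(x, y) {
  exp(lchoose(x + y, x) - (x + y + 1) * log(2))
}

# One-sided tail probability P(Y >= y | x): p(. | x) sums to 1 over all
# non-negative counts, so sum the lower tail and subtract from 1.
ac_pvalue <- function(x, y) {
  if (y == 0) return(1)
  1 - sum(ac_prob(x, 0:(y - 1)))
}

ac_prob(13, 30)     # the example from the post: 13 in A, 30 in B
ac_pvalue(13, 30)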
2010 Sep 08
1
XML getNodeSet syntax for PUBMED XML export
I am looking for the syntax to capture XML tags marked with
/DescriptorName MajorTopicYN="Y"/ , but the combination of the internal
space (between "Name" and "Major") and the embedded quote marks is
defeating me. I can get all the "DescriptorName" tags, but these include
both MajorTopicYN = "Y" and "N" variants. Any suggestions?
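A minimal sketch: putting the attribute into an XPath predicate sidesteps both the internal space and the quote marks (here `doc` stands for the already-parsed PubMed export):
library(XML)

major <- getNodeSet(doc, "//DescriptorName[@MajorTopicYN='Y']")
sapply(major, xmlValue)                    # the major-topic headings
sapply(major, xmlGetAttr, "MajorTopicYN")  # all "Y" by construction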
2017 Jun 09
1
efetch result not in character format
Hi,
I want to use reutils to obtain the accession numbers of a query search in
character format. When I use efetch, the accession number isn't in a
character format, and I'm not sure if the number is accurate, because I get
the error:
Error in file.exists(destfile) : object 'destfile' not found
This is what I tried:
UIDs<-esearch( "Methylation" )
accession_numbers
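A minimal sketch, assuming the reutils package: request the accessions as plain text with rettype = "acc" and pull the response body out with content(..., as = "text"); the argument names are from the reutils documentation as I recall them, so treat them as assumptions to verify against ?efetch and ?content.
library(reutils)

uids <- esearch("Methylation", db = "nuccore", retmax = 20)
acc  <- efetch(uids, db = "nuccore", rettype = "acc", retmode = "text")

# split the returned text body into one accession number per element
accession_numbers <- strsplit(content(acc, as = "text"), "\n")[[1]]
head(accession_numbers)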
2005 Jun 14
0
question about SSOAP
Dear R folks:
I am trying to use the SSOAP package (version 0.2-2) in R (version
2.1.0, Linux) to access the SOAP service at NCBI
(http://www.ncbi.nlm.nih.gov)
its WSDL file is at http://www.ncbi.nlm.nih.gov/entrez/eutils/soap/eutils.wsdl
but some errors occurred:
> ncbi <- processWSDL("http://www.ncbi.nlm.nih.gov/entrez/eutils/soap/eutils.wsdl")
> ff <-
2005 May 10
0
Fwd: Extract just some fields from XML]
Duncan, you are a king!
Thanks a lot for this cookie. It really helped me. Thanks for the code
as well as detailed explanation at the end.
>Hi Gregor.
>
>Here is a function that will collect all of the nodes in the
>XML document whose names are in the vector elementNames
>
>getElements =
>function(elementNames)
>{
> els = list()
>
> startElement = function(node,
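The quoted handler approach collects the nodes while the tree is being built; as an equivalent sketch with internal nodes and an XPath union (assuming the URL from the earlier "Extract just some fields from XML" post and example element names):
library(XML)

doc    <- xmlTreeParse(URL, isURL = TRUE, useInternalNodes = TRUE)
wanted <- c("PMID", "ArticleTitle")                # example element names
xp     <- paste0("//", wanted, collapse = " | ")   # "//PMID | //ArticleTitle"

els <- getNodeSet(doc, xp)
sapply(els, xmlName)    # which element each match is
sapply(els, xmlValue)   # its text content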
2013 Jan 15
0
paper - download - pubmed
Hi,
I actually need to download PDFs through R code.
What I want to do is search for a paper in PubMed,
which is possible using the GetPubMed function in the package "NCBI2R".
GetPubMed(searchterm, file = "", download = TRUE , showurl = FALSE,
xldiv = ";", hyper = "HYPERLINK",
MaxRet = 30000, sme = FALSE, smt = FALSE, quiet = TRUE,
2008 Apr 10
5
Extending Bluecloth/Redcloth
I'd like to extend BlueCloth or RedCloth to support custom tags, e.g. I
want to use markup like this:
[pubmed:18332676]
which shall be extended to:
<a href="http://www.ncbi.nlm.nih.gov/pubmed/18332676">Behav Pharmacol.
2008 Mar;19(2):121-128.</a>
Does anyone know if this is possible and have some hints on how to do
this? I have not decided whether I want to use
2011 Mar 30
1
Package XML: Parse Garmin *.tcx file problems
I'm struggling with package XML to parse a Garmin file (named *.tcx).
I wonder if its form is incomplete, but I am understandably reluctant to paste
even a shortened version.
The output below shows I can get nodes, but an attempt to get the value of a
single node comes up empty (even though there is data there).
One question: Has anybody succeeded parsing Garmin .tcx (xml) files?
Thanks!
Michael
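A minimal sketch of the usual cause: .tcx files declare a default XML namespace, so an unprefixed XPath such as //Trackpoint matches nothing. Binding the namespace to a prefix (here "t", an arbitrary name) and using it in the query is the standard workaround; the file name is a placeholder.
library(XML)

doc <- xmlParse("activity.tcx")   # placeholder file name
ns  <- c(t = "http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2")

hr <- xpathSApply(doc, "//t:Trackpoint/t:HeartRateBpm/t:Value",
                  xmlValue, namespaces = ns)
head(as.integer(hr))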
2007 Nov 18
4
Read HTML table
You can use htmlTreeParse and xpathApply from the XML library.
something like:
xpathApply(htmlTreeParse("http://blabla", useInternalNodes = TRUE), "//td",
           function(x) xmlValue(x))
should do it.
Gamma wrote:
>
> Anyone care to explain how to read an HTML table? It's streaming data
> (updated every second) and I am looking for a suitable function.
>
> The imported html
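If the page holds an ordinary <table>, a minimal sketch of the shorter route in the same XML package (the URL is the placeholder from the reply above):
library(XML)

tbls <- readHTMLTable("http://blabla", stringsAsFactors = FALSE)
length(tbls)       # one data frame per <table> on the page
head(tbls[[1]])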
2008 May 02
1
How to parse XML
I would like to learn how to parse a mixed text/xml document I
downloaded from the sec.gov website (see example below). I would like
to parse this to get the value for each xml tag and then access it
within R, but I don't know much about xml so I don't even know where to
start debugging the errors I am getting in this example code. Can
anyone help me get started?
Thanks, Roger
ftp
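A minimal sketch under the assumption that the filing is plain text with one embedded XML block; the file name and tag names below are illustrative guesses, not taken from the post.
library(XML)

lines <- readLines("filing.txt")                 # placeholder file name
start <- grep("<ownershipDocument>", lines)[1]   # assumed root element
end   <- grep("</ownershipDocument>", lines)[1]

doc <- xmlParse(paste(lines[start:end], collapse = "\n"), asText = TRUE)
xpathSApply(doc, "//issuer/issuerName", xmlValue)  # example field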
2005 Jul 19
1
Minor "bug" in source()
For R v2.1.1 patched and R v2.2.0 devel:
Calling source(file, chdir=TRUE) with is.character(file) != TRUE, that
is, with 'file' as a connection, will generate an error. Example:
> file <- textConnection("cat('Hello world\n')")
> source(file, chdir=TRUE)
Error in source(file, chdir = TRUE) : Object "ofile" not found
Of course, it does not make
2008 Jun 10
1
Parse XML
Could someone provide a link or examples of parsing an XML document in R? A few
specific questions below:
For instance I can retrieve specific nodes using this:
node <- xpathApply(xml, "//" %+% xtag, xmlValue)
1) I want to be able to retrieve the parent node of this node; how can I do
this? getParentNode() does not seem to cut it.
2) How can I retrieve the children nodes of a particular
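A minimal sketch for both questions, using internal nodes so that xmlParent() and xmlChildren() can walk up and down from a matched node (the file and tag names are placeholders):
library(XML)

doc   <- xmlParse("document.xml")       # placeholder file
nodes <- getNodeSet(doc, "//SomeTag")   # placeholder tag

# 1) the enclosing (parent) node of the first match
xmlName(xmlParent(nodes[[1]]))

# 2) the child nodes of the first match
sapply(xmlChildren(nodes[[1]]), xmlName)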
2011 Mar 29
2
Scrap java scripts and styles from an html document
Hi,
I am working on developing a web crawler in R and I need some help with
removing JavaScript and style sheets from the HTML document of
a web page.
I tried using the XML package, hence the function xpathApply:
library(XML)
txt =
xpathApply(html,"//body//text()[not(ancestor::script)][not(ancestor::style)]",
xmlValue)
The output comes out as text lines, without any html
2009 Sep 03
1
encoding problem using xml package
Dear list
I tried to read an XML file using the XML package. Unfortunately, some encoding problems occur; e.g. German umlauts are not read correctly. I assume this occurs due to (internal?) conversion to UTF-8. To illustrate the problem, I have written two XML files.
File Test 1
-----------
<?xml version="1.0" encoding="ISO-8859-1"?>
<Daten>
<ITEM>
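A minimal sketch of passing the file's encoding to the parser explicitly and checking what comes back (the file name is a placeholder; the XML package converts text to UTF-8 internally):
library(XML)

doc  <- xmlTreeParse("test1.xml", useInternalNodes = TRUE,
                     encoding = "ISO-8859-1")
vals <- xpathSApply(doc, "//ITEM", xmlValue)
Encoding(vals)   # typically "UTF-8" after parsing
vals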
2008 Jun 12
1
XML parameters to Column Headers for importing into a dataset
Dear List,
Do you know of any way I can convert XML parameters into column headers? My
data is in a csv file, with each row containing data in XML form with
multiple parameters
( <param1> data_val1 </param1> , <param2> data_val2 </param2> ).
I want to convert it so that each row corresponds to one record and each
parameter becomes a different column.
param1
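A minimal sketch, assuming each csv row holds one self-contained XML fragment and that every row carries the same parameters (the column and file names are made up): wrap the fragment in a dummy root, parse it, and turn the element names and values into one data-frame row.
library(XML)

rows <- read.csv("data.csv", stringsAsFactors = FALSE)$xml   # placeholder column

row_to_record <- function(fragment) {
  doc  <- xmlParse(paste0("<root>", fragment, "</root>"), asText = TRUE)
  kids <- getNodeSet(doc, "/root/*")     # element children only
  setNames(as.list(trimws(sapply(kids, xmlValue))), sapply(kids, xmlName))
}

records <- lapply(rows, row_to_record)
do.call(rbind, lapply(records, as.data.frame, stringsAsFactors = FALSE))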