Unsubscribe
-----Original Message-----
From: r-help-request at r-project.org
Date: Wed, 05 May 2010 12:00:09
To: <r-help at r-project.org>
Subject: R-help Digest, Vol 87, Issue 5
Send R-help mailing list submissions to
r-help at r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request at r-project.org
You can reach the person managing the list at
r-help-owner at r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. aregImpute (Hmisc package) : error in matxv(X, xcof)...
(Marc Carpentier)
2. Agreement (HB8)
3. Re: How to rbind listed data frames?
(it-r-help at ml.epigenomics.com)
4. All possible paths between two nodes in a flowgraph using
igraphs? (jcano)
5. superscript (Kay Cichini)
6. All possible paths between two nodes in a flowgraph using
igraphs? (jcano)
7. Re: superscript (Jorge Ivan Velez)
8. Re: superscript (Duncan Murdoch)
9. Re: aregImpute (Hmisc package) : error in matxv(X, xcof)...
(Uwe Ligges)
10. Re: superscript (Kay Cichini)
11. Re: superscript (Kay Cichini)
12. Re: All possible paths between two nodes in a flowgraph using
igraphs? (Nikhil Kaza)
13. How to replace all <NA> values in a data.frame with another (
not 0) value (Nevil Amos)
14. Agreement (HB8)
15. Show number at each bar in barchart? (someone)
16. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (Lanna Jin)
17. Re: aregImpute (Hmisc package) : error in matxv(X, xcof)...
(Frank E Harrell Jr)
18. Re: Need help on having multiple distributions in one graph
(Frank E Harrell Jr)
19. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (Lanna Jin)
20. Re: Problem with vignette compilation during R CMD check
(pomchip at free.fr)
21. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (Muhammad Rahiz)
22. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (John Kane)
23. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (Muhammad Rahiz)
24. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (Bart Joosen)
25. Re: Show number at each bar in barchart? (John Kane)
26. Re: error in La.svd Lapack routine 'dgesdd' (Douglas Bates)
27. make a column from the row names (Mohan L)
28. Re: Plotting legend outside of multiple panels (Patrick Lenon)
29. Using R with screenreading software (Rainer Scheuchenpflug)
30. Re: Show number at each bar in barchart? (Jorge Ivan Velez)
31. Re: make a column from the row names (John Kane)
32. R for web browser (Lanna Jin)
33. Re: Using R with screenreading software (Duncan Murdoch)
34. Odp: How to replace all <NA> values in a data.frame with
another ( not 0) value (Petr PIKAL)
35. Idiomatic looping over list name, value pairs in R (Luis N)
36. Re: Show number at each bar in barchart? (David Winsemius)
37. Memory issues using R withing Eclipse-StatET (Harsh)
38. Kernel density estimate plot for 3-dimensional data
(Pascal Martin)
39. Re: Idiomatic looping over list name, value pairs in R
(Christos Argyropoulos)
40. Re: Idiomatic looping over list name, value pairs in R
(Duncan Murdoch)
41. fit printed output onto a single page (Abiel X Reinhart)
42. Lazy evaluation in function call (Thorn)
43. Re: Kernel density estimate plot for 3-dimensional data
(Duncan Murdoch)
44. strange behavior of RODBC and/or ssconvert (stefan.duke at gmail.com)
45. Re: Idiomatic looping over list name, value pairs in R (Luis N)
46. Re: 3D version of triax.plot (package plotrix) (Gabriele Esposito)
47. Re: Kernel density estimate plot for 3-dimensional data
(Pascal Martin)
48. Re: Kernel density estimate plot for 3-dimensional data
(Duncan Murdoch)
49. Avoiding for-loop for splitting vector into subvectors based
on positions (Joris Meys)
50. Re: Avoiding for-loop for splitting vector into subvectors
based on positions (jim holtman)
51. Re: R for web browser (Tal Galili)
52. Re: / Operator not meaningful for factors (John Kane)
53. Re: ISO Eric Kort (rtiff) (cgw at witthoft.com)
54. read.table: skipping trailing delimiters (Marshall Feldman)
55. Re: strange behavior of RODBC and/or ssconvert
(Gabor Grothendieck)
56. Flushing print buffer (Marshall Feldman)
57. Re: read.table: skipping trailing delimiters (Marc Schwartz)
58. Re: read.table: skipping trailing delimiters (Gabor Grothendieck)
59. Re: Flushing print buffer (jim holtman)
60. Re: Flushing print buffer (jim holtman)
61. Package Rsafd (Bo Li)
62. legend with lines and points (threshold)
63. Re: Package Rsafd (David Winsemius)
64. Re: Package Rsafd (David Winsemius)
65. Re: R for web browser (j verzani)
66. help overlay scatterplot to effects plot (Anderson, Chris)
67. How to make predictions with the predict() method on an
arimax object using arimax() from TSA library (a a)
68. Re : aregImpute (Hmisc package) : error in matxv(X, xcof)...
(Marc Carpentier)
69. unsubcribe (Galois Theory)
70. Re: Re : aregImpute (Hmisc package) : error in matxv(X,
xcof)... (David Winsemius)
71. Re: unsubcribe (Cedrick W. Johnson)
72. randomforests - how to classify (pdb)
73. installing a package in linux (Fahim Md)
74. Re: randomforests - how to classify (Changbin Du)
75. R formula language---a min and max function? (ivo welch)
76. Re: R formula language---a min and max function? (David Winsemius)
77. Re: R formula language---a min and max function? (ivo welch)
78. Re: R formula language---a min and max function?
(Gabor Grothendieck)
79. Re: R formula language---a min and max function? (David Winsemius)
80. rgl: plane3d or abline() analog (Michael Friendly)
81. Re: rgl: plane3d or abline() analog (David Winsemius)
82. Re: generating correlated random variables from different
distributions (Greg Snow)
83. Re: Agreement (Tobias Verbeke)
84. Error when invoking x11() (Alex Chelminsky)
85. Re: Avoiding for-loop for splitting vector into subvectors
based on positions (Joris Meys)
86. Two Questions on R (call by reference and pre-compilation)
(Ruihong Huang)
87. timing a function (pdb)
88. Re: timing a function (mohamed.lajnef at inserm.fr)
89. Re: R formula language---a min and max function? (ivo welch)
90. Re: timing a function (Joris Meys)
91. Re: R formula language---a min and max function? (David Winsemius)
92. Re: Lazy evaluation in function call (Joris Meys)
93. Re: Show number at each bar in barchart? (Carl Witthoft)
94. Re: Two Questions on R (call by reference and
pre-compilation) (Steve Lianoglou)
95. Re: Lazy evaluation in function call (Bert Gunter)
96. Openings in the Consulting Department of XLSolutions Corp
(sue at xlsolutions-corp.com)
97. Re: R formula language---a min and max function?
(Gabor Grothendieck)
98. Re: R formula language---a min and max function?
(Gabor Grothendieck)
99. readLines with space-delimiter? (Seth)
100. Re: rgl: plane3d or abline() analog (Duncan Murdoch)
101. Re: Two Questions on R (call by reference and
pre-compilation) (Duncan Murdoch)
102. Re: installing a package in linux (Tengfei Yin)
103. Symbolic eigenvalues and eigenvectors (John Mesheimer)
104. Visualizing binary response data? (Kim Jung Hwa)
105. Re: Symbolic eigenvalues and eigenvectors (Steve Lianoglou)
106. Re: Symbolic eigenvalues and eigenvectors (John Mesheimer)
107. Re: rgl: plane3d or abline() analog (Michael Friendly)
108. Re: Symbolic eigenvalues and eigenvectors (Gabor Grothendieck)
109. Re: Visualizing binary response data? (Thomas Stewart)
110. Re: Visualizing binary response data? (Frank E Harrell Jr)
111. Re: readLines with space-delimiter? (jim holtman)
112. Re: Cross-checking a custom function for separability
indices (Nikos Alexandris)
113. Re: Symbolic eigenvalues and eigenvectors (Steve Lianoglou)
114. Re: Delete rows with duplicate field... (kMan)
115. converting an objects list (Anthony Fristachi)
116. masking of objects between mtrace() and getYahooData() (zerdna)
117. Re: How to make predictions with the predict() method on an
arimax object using arimax() from TSA library (Dennis Murphy)
118. question about 'write.table' (karena)
119. Re : Re : aregImpute (Hmisc package) : error in matxv(X,
xcof)... (Marc Carpentier)
120. Re: Estimating theta for negative binomial model (Tim Clark)
121. Re: Errors when trying to open odfWeave documents (Paul)
122. Creating Crosstabs using a sparse table (merrittr)
123. Re: How to replace all <NA> values in a data.frame with
another ( not 0) value (Nevil Amos)
124. Help with dummy.coef (James M. Curran)
125. Re: readLines with space-delimiter? (Seth)
126. better way to trick data frame structure? (Seth)
127. Odp: better way to trick data frame structure? (Petr PIKAL)
128. Re: Odp: better way to trick data frame structure? (Seth)
129. Converting dollar value (factors) to numeric (Wang, Kevin (SYD))
130. Re: Lazy evaluation in function call (Thorn)
131. Re: Two Questions on R (call by reference and
pre-compilation) (Ruihong Huang)
132. Re: installing a package in linux (Ruihong Huang)
133. Re: Converting dollar value (factors) to numeric (Ruihong Huang)
134. Re: Converting dollar value (factors) to numeric
(Fredrik Karlsson)
135. A question regarding the loess function (Scott MacDonald)
136. Re: Converting dollar value (factors) to numeric (Phil Spector)
137. help with restart (Wincent)
138. Re: Two Questions on R (call by reference and
pre-compilation) (Seth)
139. Re: converting an objects list (Jim Lemon)
140. Re: fit printed output onto a single page (Jim Lemon)
141. concatenate values of two columns (n.vialma at libero.it)
142. Memory issue (Alex van der Spek)
143. puzzles with assign() (David.Epstein)
----------------------------------------------------------------------
Message: 1
Date: Tue, 4 May 2010 03:23:52 -0700 (PDT)
From: Marc Carpentier <marc.carpentier at ymail.com>
To: r-help at r-project.org
Subject: [R] aregImpute (Hmisc package) : error in matxv(X, xcof)...
Message-ID: <554581.31018.qm at web28212.mail.ukl.yahoo.com>
Content-Type: text/plain
Dear r-help list,
I'm trying to use multiple imputation for my MSc thesis.
Having seen good examples using the Hmisc package, I tried the aregImpute function.
But with my own dataset, I get the following error (originally in a French locale):
Error in matxv(X, xcof) : columns in a (51) must be <= length of b (50)
In addition: Warning message:
In f$xcoef[, 1] * f$xcenter :
  longer object length is not a multiple of shorter object length
I first tried to "I()" all the continuous variables, but the same error
occurs with different numbers:
Error in matxv(X, xcof) : columns in a (37) must be <= length of b (36)...
I'm a student and I'm not familiar with the constraints a dataset must satisfy
to be imputed successfully. I just found this previous message, where the
author's follow-up to his own post suggests that particular distributions might
explain the algorithm's failure:
http://www.mail-archive.com/r-help at r-project.org/msg53534.html
Does anyone know whether these messages reflect a specific problem in my dataset?
And whether the numbers mentioned might give me a hint about which column to look
at (and maybe transform or ignore for the imputation)?
Thanks for any advice you might have.
Marc
[[alternative HTML version deleted]]
------------------------------
Message: 2
Date: Tue, 4 May 2010 13:06:57 +0200
From: HB8 <hb8hb8 at gmail.com>
To: r-help at r-project.org
Subject: [R] Agreement
Message-ID:
<n2v943ea5311005040406gf927899z98fb8d1aca643ff4 at mail.gmail.com>
Content-Type: text/plain
Hi,
Has Lawrence Lin's code been ported to R?
http://tigger.uic.edu/~hedayat/sascode.html
Regards, Gregoire Thomas
[[alternative HTML version deleted]]
------------------------------
Message: 3
Date: Tue, 04 May 2010 13:11:05 +0200
From: it-r-help at ml.epigenomics.com
To: Phil Wieland <phwiel at gmx.de>
Subject: Re: [R] How to rbind listed data frames?
Message-ID: <4BE000C9.7070702 at epigenomics.com>
Content-Type: text/plain; charset=ISO-8859-1
assuming all data frames have the same format
do.call("rbind", dataList)
will concatenate all data frames contained in your list object.
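For instance, a minimal sketch with two made-up data frames (not the poster's data):
dataList <- list(
  data.frame(est = c(400, 300), cond = "exo_depth_65", targets = c("A", "B"),
             stringsAsFactors = FALSE),
  data.frame(est = c(400, 220), cond = "control", targets = c("A", "B"),
             stringsAsFactors = FALSE))
combined <- do.call("rbind", dataList)  # stacks the rows of every list element, in order
str(combined)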
Phil Wieland wrote, On 05/04/10 09:51:
> I made a list (dataList) of data frames. The list looks like this (the
> first two elements):
>
> [[1]]
> est cond targets
> 1 400 exo_depth_65 Hautklinik
> 2 300 exo_depth_65 Ostturm_UKM
> 3 200 exo_depth_65 Kreuzung_Roxeler/Albert_Schweizer
> ...
>
>
> [[2]]
> est cond targets
> 1 400 control Hautklinik
> 2 220 control Ostturm_UKM
> 3 300 control Kreuzung_Roxeler/Albert_Schweizer
> ...
>
> Now I would like to merge the data frames with rbind. It works fine this
> way:
>
> rbind(dataList[[1]], dataList[[2]], ...)
>
> but I would like to use lapply or a for loop to get rid of specifying
> the subscripts. The output of lapply(dataList, rbind) is always
>
> [,1] [,2]
> [1,] List,3 List,3
>
> Thanks for help...
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Matthias Burger Project Manager/ Biostatistician
Epigenomics AG Kleine Praesidentenstr. 1 10178 Berlin, Germany
phone:+49-30-24345-0 fax:+49-30-24345-555
http://www.epigenomics.com matthias.burger at epigenomics.com
--
Epigenomics AG Berlin Amtsgericht Charlottenburg HRB 75861
Vorstand: Geert Nygaard (CEO/Vorsitzender)
Oliver Schacht PhD (CFO)
Aufsichtsrat: Prof. Dr. Dr. hc. Rolf Krebs (Chairman/Vorsitzender)
------------------------------
Message: 4
Date: Tue, 4 May 2010 04:21:00 -0700 (PDT)
From: jcano <javier.cano at urjc.es>
To: r-help at r-project.org
Subject: [R] All possible paths between two nodes in a flowgraph using
igraphs?
Message-ID: <1272972060823-2125321.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi all
Is there any systematic way to compute all possible paths, first-order loops
and j-th order loops between two given nodes in a flowgraph (directed graph
with cycles) - preferably using the igraph library in R? I have checked the
igraph documentation but I can't figure out any direct and systematic way to
do so. Any ideas?
I use the following definitions from Butler, R. and A. Huzurbazar (1997).
Stochastic Network Models for Survival Analysis. Journal of the American
Statistical Association 92 (437), 246-257.
- A path from node i to j is any possible sequence of nodes from i to j
which does not pass through any intermediate node more than once.
- A first-order loop is any closed path in the flowgraph that returns to the
initial node of the loop without passing through any intermediate node more
than once.
- A jth-order loop consists of j nontouching first-order loops.
For example, in the flowgraph below
http://n4.nabble.com/file/n2125321/flowgraph_subsume.jpg
there are 18 paths between nodes 1 and a:
- 1a;
- 12a, 124a, 1243a, 1245a, 12436a, 124365a, 12456a, 124563a;
- 13a, 134a, 136a, 1342a, 1345a, 13456a, 1365a, 13654a, 136542a.
3 first-order loops:
- 12431, 1245631, 45634;
and no loops of order two or more.
Thanks in advance
jcano
--
View this message in context:
http://r.789695.n4.nabble.com/All-possible-paths-between-two-nodes-in-a-flowgraph-using-igraphs-tp2125321p2125321.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 5
Date: Tue, 4 May 2010 04:33:10 -0700 (PDT)
From: Kay Cichini <Kay.Cichini at uibk.ac.at>
To: r-help at r-project.org
Subject: [R] superscript
Message-ID: <1272972790381-2125341.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
hello,
I need to add the legend text "4th-root transformation", with the "th"
superscripted.
I have tried a lot, but nothing has worked.
thanks for any hints,
kay
-----
------------------------
Kay Cichini
Postgraduate student
Institute of Botany
Univ. of Innsbruck
------------------------
--
View this message in context:
http://r.789695.n4.nabble.com/superscript-tp2125341p2125341.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 6
Date: Tue, 4 May 2010 04:34:53 -0700 (PDT)
From: jcano <javier.cano at urjc.es>
To: r-help at r-project.org
Subject: [R] All possible paths between two nodes in a flowgraph using
igraphs?
Message-ID: <1272972893939-2125347.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi all
Is there any systematic way to compute all possible paths, first-order loops
and j-th order loops between two given nodes in a flowgraph (directed graph
with cycles) - preferably using the igraph library in R? I have checked the
igraph documentation but I can't figure out any direct and systematic way to
do so. Any ideas?
I use the following definitions from Butler, R. and A. Huzurbazar (1997).
Stochastic Network Models for Survival Analysis. Journal of the American
Statistical Association 92 (437), 246-257.
- A path from node i to j is any possible sequence of nodes from i to j
which does not pass through any intermediate node more than once.
- A first-order loop is any closed path in the flowgraph that returns to the
initial node of the loop without passing through any intermediate node more
than once.
- A jth-order loop consists of j nontouching first-order loops.
For example, in the flowgraph below
there are 18 paths between nodes 1 and a:
- 1a;
- 12a, 124a, 1243a, 1245a, 12436a, 124365a, 12456a, 124563a;
- 13a, 134a, 136a, 1342a, 1345a, 13456a, 1365a, 13654a, 136542a.
6 first-order loops:
- 12431, 13421, 1245631, 1365421, 45634, 43654;
and no loops of order two or more.
Thanks in advance
jcano http://n4.nabble.com/file/n2125347/flowgraph_subsume.jpg
--
View this message in context:
http://r.789695.n4.nabble.com/All-possible-paths-between-two-nodes-in-a-flowgraph-using-igraphs-tp2125347p2125347.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 7
Date: Tue, 4 May 2010 07:37:48 -0400
From: Jorge Ivan Velez <jorgeivanvelez at gmail.com>
To: Kay Cichini <Kay.Cichini at uibk.ac.at>
Cc: r-help at r-project.org
Subject: Re: [R] superscript
Message-ID:
<t2q317737de1005040437u5f5ba509odf8d8eb822e39ed3 at mail.gmail.com>
Content-Type: text/plain
Hi Kay,
Try
> plot(1:10)
> legend('topleft', expression(4^th * "-root transformation"))
HTH,
Jorge
On Tue, May 4, 2010 at 7:33 AM, Kay Cichini <> wrote:
>
> hello,
>
> i need to add legend text: "4th-root transformation", with the
"th"
> superscripted -
> tried much - but nothing worked..
>
> thanks for any hints,
> kay
>
> -----
> ------------------------
> Kay Cichini
> Postgraduate student
> Institute of Botany
> Univ. of Innsbruck
> ------------------------
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/superscript-tp2125341p2125341.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 8
Date: Tue, 04 May 2010 07:44:14 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: Kay Cichini <Kay.Cichini at uibk.ac.at>
Cc: r-help at r-project.org
Subject: Re: [R] superscript
Message-ID: <4BE0088E.804 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Kay Cichini wrote:
> hello,
>
> i need to add legend text: "4th-root transformation", with the
"th"
> superscripted -
> tried much - but nothing worked..
>
This puts it in the title for the plot:
plot(1, main=expression(paste("4"^"th", " root transformation")))
This puts it in a legend:
legend("topleft", pch=1, expression(paste("4"^"th", " root transformation")))
Duncan Murdoch
------------------------------
Message: 9
Date: Tue, 04 May 2010 13:52:31 +0200
From: Uwe Ligges <ligges at statistik.tu-dortmund.de>
To: Marc Carpentier <marc.carpentier at ymail.com>
Cc: r-help at r-project.org
Subject: Re: [R] aregImpute (Hmisc package) : error in matxv(X,
xcof)...
Message-ID: <4BE00A7F.2050806 at statistik.tu-dortmund.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Having a reproducible example, including data and the actual call that led to
the error, would make it much easier to help.
Uwe Ligges
On 04.05.2010 12:23, Marc Carpentier wrote:
> Dear r-help list,
> I'm trying to use multiple imputation for my MSc thesis.
> Having good exemples using the Hmisc package, I tried the aregImpute
function. But with my own dataset, I have the following error :
>
> Error in matxv(X, xcof) : columns in a (51) must be <= length of b (50)
> In addition: Warning message:
> In f$xcoef[, 1] * f$xcenter :
>   longer object length is not a multiple of shorter object length
>
> I first tried to "I()" all the continuous variables but the same
> error occurs with different numbers:
> Error in matxv(X, xcof) : columns in a (37) must be <= length of b (36)...
>
> I'm a student and I'm not familiar with possible constraints in a
dataset to be effectively imputed. I just found this previous message, where the
author's autoreply suggests that particular distributions might be an
explanation of algorithms failure :
> http://www.mail-archive.com/r-help at r-project.org/msg53534.html
>
> Does anyone know if these messages reflect a specific problem in my dataset
? And if the number mentioned might give me a hint on which column to look at
(and maybe transform or ignore for the imputation) ?
> Thanks for any advice you might have.
>
> Marc
>
>
>
>
> [[alternative HTML version deleted]]
>
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 10
Date: Tue, 4 May 2010 05:05:08 -0700 (PDT)
From: Kay Cichini <Kay.Cichini at uibk.ac.at>
To: r-help at r-project.org
Subject: Re: [R] superscript
Message-ID: <1272974708504-2125384.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
thanks a lot!
-----
------------------------
Kay Cichini
Postgraduate student
Institute of Botany
Univ. of Innsbruck
------------------------
--
View this message in context:
http://r.789695.n4.nabble.com/superscript-tp2125341p2125384.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 11
Date: Tue, 4 May 2010 05:05:44 -0700 (PDT)
From: Kay Cichini <Kay.Cichini at uibk.ac.at>
To: r-help at r-project.org
Subject: Re: [R] superscript
Message-ID: <1272974744881-2125386.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
thanks a lot!
-----
------------------------
Kay Cichini
Postgraduate student
Institute of Botany
Univ. of Innsbruck
------------------------
--
View this message in context:
http://r.789695.n4.nabble.com/superscript-tp2125341p2125386.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 12
Date: Tue, 4 May 2010 08:17:13 -0400
From: Nikhil Kaza <nikhil.list at gmail.com>
To: jcano <javier.cano at urjc.es>
Cc: r-help at r-project.org
Subject: Re: [R] All possible paths between two nodes in a flowgraph
using igraphs?
Message-ID: <AB131EF2-B524-4866-8859-C1C8D0201A1D at gmail.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Finding all paths between two nodes in a general graph is very hard.
If your graph is sparse you may be able to construct the list of paths
provided of course you take care not to get stuck in a cycle.
But for most practical purposes you may just need edge disjoint path
or vertex disjoint paths.
I am not sure about cycles. But I suppose you can just use the minimum
spanning tree and iteratively add the remaining edges to get the cycles.
Nikhil Kaza
Asst. Professor,
City and Regional Planning
University of North Carolina
nikhil.list at gmail.com
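A minimal sketch of the path-enumeration part, assuming a recent version of
igraph (which provides all_simple_paths()); the small directed graph below is
invented for illustration and is not the poster's flowgraph:
library(igraph)
g <- graph_from_literal(A -+ B, A -+ C, B -+ D, C -+ D, D -+ A, D -+ E)
paths <- all_simple_paths(g, from = "A", to = "E", mode = "out")
lapply(paths, as_ids)  # each element is one simple path from A to E
Note that the first-order loops (simple cycles) are not returned by
all_simple_paths() and would still have to be enumerated separately.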
On May 4, 2010, at 7:34 AM, jcano wrote:
>
> Hi all
>
> Is there any systematic way to compute all possible paths, first-
> order loops
> and j-th order loops between two given nodes in a flowgraph
> (directed graph
> with cycles) - preferably using the igraph library in R? I have
> checked the
> igraph documentation but I can't figure out any direct and
> systematic way to
> do so. Any ideas?
> I use the following definitions from Butler, R. and A. Huzurbazar
> (1997).
> Stochastic Network Models for Survival Analysis. Journal of the
> American
> Statistical Association 92 (437), 246-257.
> - A path from node i to j is any possible sequence of nodes from i
> to j
> which does not pass through any intermediate node more than once.
> - A first-order loop is any closed path in the flowgraph that
> returns to the
> initial node of the loop without passing through any intermediate
> node more
> than once.
> - A jth-order loop consists of j nontouching first-order loops.
>
> For example, in the flowgraph below
> there are 18 paths between nodes 1 and a:
> - 1a;
> - 12a, 124a, 1243a, 1245a, 12436a, 124365a, 12456a, 124563a;
> - 13a, 134a, 136a, 1342a, 1345a, 13456a, 1365a, 13654a, 136542a.
> 6 first-order loops:
> - 12431, 13421, 1245631, 1365421, 45634, 43654;
> and no loops of order two or more.
>
> Thanks in advance
>
> jcano http://n4.nabble.com/file/n2125347/flowgraph_subsume.jpg
> --
> View this message in context:
http://r.789695.n4.nabble.com/All-possible-paths-between-two-nodes-in-a-flowgraph-using-igraphs-tp2125347p2125347.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 13
Date: Tue, 04 May 2010 22:54:14 +1000
From: Nevil Amos <nevil.amos at gmail.com>
To: r-help at stat.math.ethz.ch
Subject: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <4BE018F6.7050900 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
I need to replace <NA> occurrences in multiple columns in a data.frame
with "000/000"
how do I achieve this?
Thanks
Nevil Amos
------------------------------
Message: 14
Date: Tue, 4 May 2010 12:59:48 +0200
From: HB8 <hb8hb8 at gmail.com>
To: r-help at R-project.org
Subject: [R] Agreement
Message-ID:
<j2j943ea5311005040359nb0209c84xc069f01fb540a3f9 at mail.gmail.com>
Content-Type: text/plain
Hi,
Has Lawrence Lin's code been ported to R?
http://tigger.uic.edu/~hedayat/sascode.html
Regards, Gregoire Thomas
[[alternative HTML version deleted]]
------------------------------
Message: 15
Date: Tue, 4 May 2010 05:41:02 -0700 (PDT)
From: someone <vonhoffen at t-online.de>
To: r-help at r-project.org
Subject: [R] Show number at each bar in barchart?
Message-ID: <1272976862738-2125438.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
When I plot a barchart with 5 bars, one bar is pretty long and the others are
much smaller, like (20, 80, 20, 5, 2).
Is there a way of displaying the number corresponding to each bar next to it,
like the panel option N in a bwplot?
--
View this message in context:
http://r.789695.n4.nabble.com/Show-number-at-each-bar-in-barchart-tp2125438p2125438.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 16
Date: Tue, 4 May 2010 06:02:29 -0700 (PDT)
From: Lanna Jin <lannajin at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <1272978149379-2125464.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Try: "x[which(is.na(x)),] <- 000/000", where is x is your data
frame
-----
Lanna Jin
lannajin at gmail.com
510-898-8525
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-replace-all-NA-values-in-a-data-frame-with-another-not-0-value-tp2125458p2125464.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 17
Date: Tue, 4 May 2010 08:05:24 -0500
From: Frank E Harrell Jr <f.harrell at Vanderbilt.Edu>
To: <r-help at r-project.org>
Subject: Re: [R] aregImpute (Hmisc package) : error in matxv(X,
xcof)...
Message-ID: <4BE01B94.8050508 at vanderbilt.edu>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
On 05/04/2010 06:52 AM, Uwe Ligges wrote:
> Having reproducible examples including data and the actual call that
> lead to the error would be really helpful to be able to help.
>
> Uwe Ligges
In addition to that, this kind of message usually means that you have a
singularity somewhere, e.g., you are using too many knots for spline
terms or have a tiny cell in a categorical variable.
Frank
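A quick, generic way to look for such problems is to tabulate the categorical
variables and count the distinct values of the continuous ones; a sketch with
invented data (not the poster's) follows:
mydata <- data.frame(x = rnorm(50),
                     g = factor(c(rep("a", 48), "b", "c")))  # "b" and "c" are tiny cells
catvars <- sapply(mydata, is.factor)
lapply(mydata[catvars], table)                    # look for very small counts
sapply(mydata[!catvars], function(v) length(unique(na.omit(v))))  # near-constant columns?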
>
> On 04.05.2010 12:23, Marc Carpentier wrote:
>> Dear r-help list,
>> I'm trying to use multiple imputation for my MSc thesis.
>> Having good exemples using the Hmisc package, I tried the aregImpute
>> function. But with my own dataset, I have the following error :
>>
>> Error in matxv(X, xcof) : columns in a (51) must be <= length of b (50)
>> In addition: Warning message:
>> In f$xcoef[, 1] * f$xcenter :
>>   longer object length is not a multiple of shorter object length
>>
>> I first tried to "I()" all the continuous variables but the
same error
>> occurs with different numbers :
>> Error in matxv(X, xcof) : columns in a (37) must be <= length of b (36)...
>>
>> I'm a student and I'm not familiar with possible constraints in
a
>> dataset to be effectively imputed. I just found this previous message,
>> where the author's autoreply suggests that particular distributions
>> might be an explanation of algorithms failure :
>> http://www.mail-archive.com/r-help at r-project.org/msg53534.html
>>
>> Does anyone know if these messages reflect a specific problem in my
>> dataset ? And if the number mentioned might give me a hint on which
>> column to look at (and maybe transform or ignore for the imputation) ?
>> Thanks for any advice you might have.
>>
>> Marc
>>
>>
>>
>>
>> [[alternative HTML version deleted]]
>>
>>
--
Frank E Harrell Jr Professor and Chairman School of Medicine
Department of Biostatistics Vanderbilt University
------------------------------
Message: 18
Date: Tue, 4 May 2010 08:06:09 -0500
From: Frank E Harrell Jr <f.harrell at Vanderbilt.Edu>
To: <r-help at r-project.org>
Subject: Re: [R] Need help on having multiple distributions in one
graph
Message-ID: <4BE01BC1.4080904 at vanderbilt.edu>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
On 05/03/2010 11:14 PM, Jorge Ivan Velez wrote:
> Hi Joseph,
>
> How about this?
>
> matplot(cbind(m0, m1, m3, m4), type = 'l', lty = 1)
> legend('topright', paste('m', c(0, 1, 3, 4), sep = ""), lty = 1, col = 1:4)
>
> See ?matplot and ?legend for details.
>
> HTH,
> Jorge
Also see the labcurve function in the Hmisc package, which will draw
curves and label them where they are most separated.
Frank
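As an aside, the densities themselves can be computed with dpois() rather than
by hand; a minimal sketch using the means from the original post:
y  <- 0:19
pm <- sapply(c(0.5, 1, 3, 5), function(m) dpois(y, m))
matplot(y, pm, type = "l", lty = 1, xlab = "y", ylab = "P(Y = y)")
legend("topright", paste("mean =", c(0.5, 1, 3, 5)), lty = 1, col = 1:4)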
>
>
> On Mon, May 3, 2010 at 6:42 PM,<> wrote:
>
>> R-listers:
>>
>> I have searched the help files and everything I have related to R
graphics.
>> I cannot find how to graph y against
>> several distributions on a single graph. Here is code for creating 4
>> Poisson distributions with different mean values, although I would
prefer
>> having it in a loop: The top of the y axis for the first distribution,
with
>> count of 0, is .6, which is the highest point for any of the
distributions.
>>
>> obs <- 1:20; y <- obs - 1
>> m0<- (exp(-.5) * .5^y)/factorial(y)
>> m1<- (exp(-1) * 1^y)/factorial(y)
>> m3<- (exp(-3) * 3^y)/factorial(y)
>> m4<- (exp(-5) * 5^y)/factorial(y)
>>
>> How do I plot the graph of each distribution on y, all on a single
graph? I
>> have spent so many hours on this,
>> which is really quite simple in applications such as Stata. Thanks very
>> much for the assistance:
>>
>> Joseph Hilbe
>> hilbe at asu.edu or jhilbe at aol.com
>>
--
Frank E Harrell Jr Professor and Chairman School of Medicine
Department of Biostatistics Vanderbilt University
------------------------------
Message: 19
Date: Tue, 4 May 2010 06:06:25 -0700 (PDT)
From: Lanna Jin <lannajin at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <1272978385694-2125471.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Whoops, my bad. Maybe try using "gsub"
-----
Lanna Jin
lannajin at gmail.com
510-898-8525
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-replace-all-NA-values-in-a-data-frame-with-another-not-0-value-tp2125458p2125471.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 20
Date: Tue, 4 May 2010 15:17:40 +0200 (CEST)
From: pomchip at free.fr
To: r-help at r-project.org
Subject: Re: [R] Problem with vignette compilation during R CMD check
Message-ID:
<326524792.3027911272979060267.JavaMail.root at
zimbra20-e3.priv.proxad.net>
Content-Type: text/plain; charset=utf-8
Thanks Uwe
------------------------------
Message: 21
Date: Tue, 04 May 2010 14:20:03 +0100
From: Muhammad Rahiz <muhammad.rahiz at ouce.ox.ac.uk>
To: "nevil.amos at sci.monash.edu.au" <nevil.amos at
sci.monash.edu.au>
Cc: "r-help at stat.math.ethz.ch" <r-help at stat.math.ethz.ch>
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <4BE01F03.8040204 at ouce.ox.ac.uk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi Nevil,
You can try a method like this
x <- c(rnorm(5), rep(NA, 3), rnorm(5))   # sample data
dat <- data.frame(x, x)                  # make sample data frame
dat2 <- as.matrix(dat)                   # convert to matrix
y <- which(is.na(dat))                   # get index of NA values
dat2[y] <- "000/000"                     # replace all NA values with "000/000"
Muhammad
Nevil Amos wrote:
> I need to replace <NA> occurrences in multiple columns in a
data.frame
> with "000/000"
>
> how do I achieve this?
>
> Thanks
>
> Nevil Amos
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 22
Date: Tue, 4 May 2010 06:21:53 -0700 (PDT)
From: John Kane <jrkrideau at yahoo.ca>
To: r-help at stat.math.ethz.ch, nevil.amos at sci.monash.edu.au
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <271402.74922.qm at web38405.mail.mud.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
?replace
Something like this should work
replace(df1, is.na(df1), "000/000")
--- On Tue, 5/4/10, Nevil Amos <nevil.amos at gmail.com> wrote:
> From: Nevil Amos <nevil.amos at gmail.com>
> Subject: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
> To: r-help at stat.math.ethz.ch
> Received: Tuesday, May 4, 2010, 8:54 AM
> I need to replace <NA>
> occurrences in multiple columns in a data.frame with
> "000/000"
>
> how do I achieve this?
>
> Thanks
>
> Nevil Amos
>
> ______________________________________________
> R-help at r-project.org
> mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
> reproducible code.
>
------------------------------
Message: 23
Date: Tue, 04 May 2010 14:25:15 +0100
From: Muhammad Rahiz <muhammad.rahiz at ouce.ox.ac.uk>
To: Lanna Jin <lannajin at gmail.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <4BE0203B.8030505 at ouce.ox.ac.uk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
000/000 returns NaN, which is no different from NA for this purpose, unless you
want it as a string, i.e. "000/000".
Muhammad
Lanna Jin wrote:
> Try: "x[which(is.na(x)),] <- 000/000", where x is your data frame
>
> -----
> Lanna Jin
>
> lannajin at gmail.com
> 510-898-8525
>
------------------------------
Message: 24
Date: Tue, 4 May 2010 06:25:38 -0700 (PDT)
From: Bart Joosen <bartjoosen at hotmail.com>
To: r-help at r-project.org
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <1272979538232-2125509.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
try x[is.na(x)] <- "000/000"
Bart
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-replace-all-NA-values-in-a-data-frame-with-another-not-0-value-tp2125458p2125509.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 25
Date: Tue, 4 May 2010 06:29:49 -0700 (PDT)
From: John Kane <jrkrideau at yahoo.ca>
To: r-help at r-project.org, someone <vonhoffen at t-online.de>
Subject: Re: [R] Show number at each bar in barchart?
Message-ID: <795962.72804.qm at web38403.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Try this. My apologies for not giving the attribution, but I forget who wrote
it.
my.values=100000:100005
x <- barplot(my.values, ylim=c(0,110000))
text(x, my.values, my.values, pos=3)
text(x, my.values, "wibble", pos=3)
--- On Tue, 5/4/10, someone <vonhoffen at t-online.de> wrote:
> From: someone <vonhoffen at t-online.de>
> Subject: [R] Show number at each bar in barchart?
> To: r-help at r-project.org
> Received: Tuesday, May 4, 2010, 8:41 AM
>
> when i plot a barchart with 5 bars there is one bar pretty
> long and the
> others get smaller
> like (20, 80, 20, 5, 2)
> is there a way of displaying the number accoirding to each
> bar next to it?
> like in a bwplot the panel option N?
> --
> View this message in context:
http://r.789695.n4.nabble.com/Show-number-at-each-bar-in-barchart-tp2125438p2125438.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org
> mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
> reproducible code.
>
------------------------------
Message: 26
Date: Tue, 4 May 2010 08:32:03 -0500
From: Douglas Bates <bates at stat.wisc.edu>
To: steven mosher <moshersteven at gmail.com>
Cc: r-help <r-help at r-project.org>
Subject: Re: [R] error in La.svd Lapack routine 'dgesdd'
Message-ID:
<x2q40e66e0b1005040632k6ee76d56w1dbbaf49f2e7076f at mail.gmail.com>
Content-Type: text/plain; charset=windows-1252
Google the name dgesdd to get the documentation where you will find
that the error code indicates that the SVD algorithm failed to
converge. Evaluation of the singular values and vectors is done via
an iterative optimization and on some occasions will fail to converge.
Frequently this is related to the scaling of the matrix. If some rows or
columns are of very large magnitude relative to others, the convergence of
the optimization can be impeded.
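For illustration only (this does not reproduce the failure), a sketch of the
rescaling idea, standardizing columns of wildly different magnitude before the
decomposition:
set.seed(1)
X <- cbind(rnorm(100), 1e8 * rnorm(100), 1e-8 * rnorm(100))  # badly scaled columns
svd(scale(X))$d   # columns are centered and scaled to unit variance first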
Providing a reproducible example of such an error condition will help
in diagnosing what is happening.
If you wonder why the error message is so enigmatic, it is because the
underlying code is Fortran and does not provide much flexibility for
informative error trapping.
On Tue, May 4, 2010 at 1:24 AM, steven mosher <moshersteven at gmail.com>
wrote:
> Error in La.svd(x, nu, nv) : error code 1 from Lapack routine 'dgesdd'
>
> what resources are there to track down errors like this
>
> [[alternative HTML version deleted]]
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
------------------------------
Message: 27
Date: Tue, 4 May 2010 18:36:46 +0530
From: Mohan L <l.mohanphy at gmail.com>
To: r-help at r-project.org
Subject: [R] make a column from the row names
Message-ID:
<n2ha827c4211005040606n2c73cff0w3be82d38b4c5aa08 at mail.gmail.com>
Content-Type: text/plain
Dear All,
> avglog
01/11/09 02/11/09 03/11/09 04/11/09
9.750000 4.500000 4.500000 8.666667
> avglog1 <- data.frame(avglog)
> avglog1
avglog
01/11/09 9.750000
02/11/09 4.500000
03/11/09 4.500000
04/11/09 8.666667
The first column isn't really a column; it's the row names. I am making a column
from the row names using the following:
> value1$Day <- rownames(value1)
> value1
avglog Day
01/11/09 9.750000 01/11/09
02/11/09 4.500000 02/11/09
03/11/09 4.500000 03/11/09
04/11/09 8.666667 04/11/09
But I want like this :
Day avglog Index
1 1 9.750000 9.750000*100
2 2 4.500000 4.500000*100
3 3 4.500000 4.500000*100
4 4 8.666667 8.666667*100
How to achieve it? Any help will be appreciated.
Thanks & Rg
Mohan L
[[alternative HTML version deleted]]
------------------------------
Message: 28
Date: Tue, 04 May 2010 08:22:04 -0500
From: Patrick Lenon <lenon at fstrf-wi.org>
To: r-help at r-project.org, jrp.capecod at gmail.com
Subject: Re: [R] Plotting legend outside of multiple panels
Message-ID: <4BE01F7C.2030904 at fstrf-wi.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Another solution I've used is to set up an additional layout space and
put the legend in there with no graph. You print a blank dummy graph
and then add the legend to the "blank" layout panel like so:
if (floatLegend) {
# We want to float the legend independently
# so we have to add it here as the only visible component of a
# dummy graph.
legText <- yourLegendNames
# create a blank graph -- automatically scales -1 to +1 on both axes
op <- par(mar=plotMargins)
tsFake <- barplot(0,0, axes=FALSE)
legend(x=1, y=0,
legend=legText,
# set fill, angle, density to match your real graph scheme
xjust=1,
yjust=0.5)
par(op)
}
Hope that helps.
--
Patrick Lenon
Database Engineer
Frontier Science and Technology Foundation
(608)441-2947
------------------------------
Message: 29
Date: Tue, 4 May 2010 15:41:41 +0200
From: "Rainer Scheuchenpflug"
<scheuchenpflug at psychologie.uni-wuerzburg.de>
To: <r-help at r-project.org>
Subject: [R] Using R with screenreading software
Message-ID: <002e01caeb8f$8736b2c0$95a41840$@uni-wuerzburg.de>
Content-Type: text/plain; charset="iso-8859-1"
Dear R-Experts,
A student of mine is trying to use the Windows R console with screen-reading
software (she is blind), and cannot access the command line (menus are OK).
The company which produces her screen reader tells her that this is due to
the cursor used in Rconsole, which is static, not blinking. They maintain
that if the cursor could be changed to a blinking one, she should be able to
access the command line and outputs.
For my last exam she used R in a Dosbox as workaround, but encountered other
problems, esp. with scrolling. So: Is it possible to change the cursor
type/behavior in R-Console?
She uses R 2.8.1, Windows 2000, and screenreader Virgo 4.6 from Baum Retec,
if that is any help.
Your assistance with this problem and any other tips for teaching R to blind
users will be much appreciated,
Rainer Scheuchenpflug
Dr. Rainer Scheuchenpflug
Lehrstuhl für Psychologie III
Röntgenring 11
97070 Würzburg
Tel: 0931-31-82185
Fax: 0931-31-82616
Mail: scheuchenpflug at psychologie.uni-wuerzburg.de
Web: http://www.izvw.de
------------------------------
Message: 30
Date: Tue, 4 May 2010 09:40:46 -0400
From: Jorge Ivan Velez <jorgeivanvelez at gmail.com>
To: someone <vonhoffen at t-online.de>
Cc: r-help at r-project.org
Subject: Re: [R] Show number at each bar in barchart?
Message-ID:
<k2g317737de1005040640n4a24fb94waa8383fe3f621b42 at mail.gmail.com>
Content-Type: text/plain
Hi someone,
Try this:
x <- c(20, 80, 20, 5, 2)
b <- barplot(x, ylim = c(0, 85), las = 1)
text(b, x+2, pch = x)
HTH,
Jorge
On Tue, May 4, 2010 at 8:41 AM, someone <> wrote:
>
> when i plot a barchart with 5 bars there is one bar pretty long and the
> others get smaller
> like (20, 80, 20, 5, 2)
> is there a way of displaying the number accoirding to each bar next to it?
> like in a bwplot the panel option N?
> --
> View this message in context:
>
http://r.789695.n4.nabble.com/Show-number-at-each-bar-in-barchart-tp2125438p2125438.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 31
Date: Tue, 4 May 2010 06:44:44 -0700 (PDT)
From: John Kane <jrkrideau at yahoo.ca>
To: r-help at r-project.org, Mohan L <l.mohanphy at gmail.com>
Subject: Re: [R] make a column from the row names
Message-ID: <908651.90602.qm at web38405.mail.mud.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
Have a look at ?substring
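A minimal sketch of what the desired output seems to be, recreating the data
from the post (the Day numbering 1..4 and Index = avglog * 100 are guesses at
the intent):
avglog  <- c("01/11/09" = 9.75, "02/11/09" = 4.5,
             "03/11/09" = 4.5,  "04/11/09" = 8.666667)
avglog1 <- data.frame(Day = seq_along(avglog),
                      avglog = as.numeric(avglog),
                      Index = as.numeric(avglog) * 100,
                      row.names = NULL)
avglog1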
--- On Tue, 5/4/10, Mohan L <l.mohanphy at gmail.com> wrote:
> From: Mohan L <l.mohanphy at gmail.com>
> Subject: [R] make a column from the row names
> To: r-help at r-project.org
> Received: Tuesday, May 4, 2010, 9:06 AM
> Dear All,
>
> > avglog
> 01/11/09 02/11/09 03/11/09 04/11/09
> 9.750000 4.500000 4.500000 8.666667
> > avglog1 <- data.frame(avglog)
> > avglog1
>            avglog
> 01/11/09 9.750000
> 02/11/09 4.500000
> 03/11/09 4.500000
> 04/11/09 8.666667
>
> The first column isnt a column, It's the row names. I
> makeing a column from
> the row names by using the following
>
> > value1$Day <- rownames(value1)
> > value1
>             avglog      Day
> 01/11/09 9.750000 01/11/09
> 02/11/09 4.500000 02/11/09
> 03/11/09 4.500000 03/11/09
> 04/11/09 8.666667 04/11/09
>
> But I want like this :
>
>     Day    avglog        Index
> 1    1     9.750000    9.750000*100
> 2    2     4.500000    4.500000*100
> 3    3     4.500000    4.500000*100
> 4    4     8.666667    8.666667*100
>
>
> How to achieve it? Any help will be appreciated.
>
> Thanks & Rg
> Mohan L
>
>     [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org
> mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
> reproducible code.
>
------------------------------
Message: 32
Date: Tue, 4 May 2010 06:51:58 -0700 (PDT)
From: Lanna Jin <lannajin at gmail.com>
To: r-help at r-project.org
Subject: [R] R for web browser
Message-ID: <1272981118791-2125571.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi Everyone,
Does anyone know of any projects for running an interactive R session within
a web browser?
I'm looking for something similar to the one on the Ruby website
(http://tryruby.org), except for R.
Thanks for your responses in advance!
Lanna
-----
Lanna Jin
lannajin at gmail.com
510-898-8525
--
View this message in context:
http://r.789695.n4.nabble.com/R-for-web-browser-tp2125571p2125571.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 33
Date: Tue, 04 May 2010 09:52:26 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: Rainer Scheuchenpflug
<scheuchenpflug at psychologie.uni-wuerzburg.de>
Cc: r-help at r-project.org
Subject: Re: [R] Using R with screenreading software
Message-ID: <4BE0269A.2070906 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 04/05/2010 9:41 AM, Rainer Scheuchenpflug wrote:
> Dear R-Experts,
>
> a student of mine tries to use the Windows-Rconsole with screen reading
> software (she is blind), and cannot access the command line (Menus are ok).
> The company which produces her screen reader tells her that this is due to
> the cursor used in Rconsole, which is static, not blinking. They maintain
> that if the cursor could be changed to a blinking one, she should be able
to
> access the command line and outputs.
>
> For my last exam she used R in a Dosbox as workaround, but encountered
other
> problems, esp. with scrolling. So: Is it possible to change the cursor
> type/behavior in R-Console?
> She uses R 2.8.1, Windows 2000, and screenreader Virgo 4.6 from Baum Retec,
> if that is any help.
>
> Your assistance with this problem and any other tips for teaching R to
blind
> users will be much appreciated,
> Rainer Scheuchenpflug
We are aware of problems when using the Windows Rgui with screen reading
software, but nobody in R Core has expertise in this area. If you know
of any programmers who do and who could contribute code to the project,
I think we'd appreciate it.
In the meantime, using Rterm in a command window is one solution. There
are also other front ends available that may work: running R from
within Emacs, or using the JGR front end (see the article on p. 9 of
http://stat-computing.org/newsletter/issues/scgn-16-2.pdf).
Duncan Murdoch
------------------------------
Message: 34
Date: Tue, 4 May 2010 16:06:38 +0200
From: Petr PIKAL <petr.pikal at precheza.cz>
To: nevil.amos at sci.monash.edu.au
Cc: r-help-bounces at r-project.org, r-help at stat.math.ethz.ch
Subject: [R] Odp: How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID:
<OFBC50462C.00B36925-ONC1257719.004D12EB-C1257719.004D8DF4 at
precheza.cz>
Content-Type: text/plain; charset="US-ASCII"
Hi
r-help-bounces at r-project.org napsal dne 04.05.2010 14:54:14:
> I need to replace <NA> occurrences in multiple columns in a
data.frame
> with "000/000"
Be careful if you replace NA in numeric columns.
> str(test)
'data.frame': 10 obs. of 3 variables:
$ mp: num 20.9 19.9 20.1 20.2 18.9 ...
$ so: num 18.8 18.6 18.2 17.9 18.1 ...
$ le: num 48 49.1 48.8 42.6 46.1 ...
> test[2,2] <- NA
> test[is.na(test)] <- "000/000"
> str(test)
'data.frame': 10 obs. of 3 variables:
$ mp: num 20.9 19.9 20.1 20.2 18.9 ...
$ so: chr "18.75" "000/000" "18.25" "17.89" ...
$ le: num 48 49.1 48.8 42.6 46.1 ...
The numeric column is now character, and you cannot use it for further
analysis without some fiddling around.
Regards
Petr
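A minimal, type-preserving sketch is to replace NA only in character or factor
columns and leave numeric columns alone (the small data frame below is invented
for illustration):
df <- data.frame(a = c(1, NA, 3), b = c("x", NA, "z"), stringsAsFactors = FALSE)
chr <- sapply(df, function(col) is.character(col) || is.factor(col))
df[chr] <- lapply(df[chr], function(col) {
  col <- as.character(col)          # factors become character here
  col[is.na(col)] <- "000/000"
  col
})
str(df)   # a stays numeric (with its NA), b now contains "000/000"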
>
> how do I achieve this?
>
> Thanks
>
> Nevil Amos
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 35
Date: Tue, 4 May 2010 23:24:11 +0900
From: Luis N <globophobe at gmail.com>
To: r-help at r-project.org
Subject: [R] Idiomatic looping over list name, value pairs in R
Message-ID:
<p2wc6e072941005040724u2c2e66e4u3a20edca1a8c82a9 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Considering the python code:
for k, v in d.items(): do_something(k); do_something_else(v)
I have the following for R:
for (i in c(1:length(d))) { do_something(names(d[i]));
do_something_else(d[[i]]) }
This does not seem idiomatic. What is the best way of doing the
same in R?
Thanks.
Luis
------------------------------
Message: 36
Date: Tue, 4 May 2010 10:31:46 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Jorge Ivan Velez <jorgeivanvelez at gmail.com>
Cc: r-help at r-project.org, someone <vonhoffen at t-online.de>
Subject: Re: [R] Show number at each bar in barchart?
Message-ID: <982FA4DD-6CC1-469B-843E-9A28D9CD3E4C at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On May 4, 2010, at 9:40 AM, Jorge Ivan Velez wrote:
> Hi someone,
>
> Try this:
>
> x <- c(20, 80, 20, 5, 2)
> b <- barplot(x, ylim = c(0, 85), las = 1)
> text(b, x+2, pch = x)
>
I suspect he wanted the "counts" in the label:
x <- c(20, 80, 20, 5, 2)
b <- barplot(x, ylim = c(0, 85), las = 1)
text(b, x + 2, labels = x)   # labels= puts the counts above the bars
... although perhaps his specification by analogy to bwplot with
panel option "N" was more meaningful to you than it was to me. I have
no idea what it was supposed to suggest.
--
David.
> HTH,
> Jorge
>
> On Tue, May 4, 2010 at 8:41 AM, someone <> wrote:
>
>>
>> when i plot a barchart with 5 bars there is one bar pretty long and
>> the
>> others get smaller
>> like (20, 80, 20, 5, 2)
>> is there a way of displaying the number accoirding to each bar next
>> to it?
>> like in a bwplot the panel option N?
>> --
>> View this message in context:
>>
http://r.789695.n4.nabble.com/Show-number-at-each-bar-in-barchart-tp2125438p2125438.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 37
Date: Tue, 4 May 2010 20:02:28 +0530
From: Harsh <singhalblr at gmail.com>
To: r-help at r-project.org
Subject: [R] Memory issues using R withing Eclipse-StatET
Message-ID:
<n2re0bbde351005040732jf5b3aa25q34d7c652d54735cc at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi useRs,
I use R within Eclipse via StatET, and it seems to me that some memory
intensive tasks fail due to this environment.
For example: I was trying to find the distance matrix of a matrix with
(10000 rows and 500 columns), and it failed in StatET, whereas it
worked in vanilla R.
I'm using R 2.10.1 on WinXP.
Thanks for any help in this matter.
Regards,
Harsh
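For a sense of scale only (this says nothing about why StatET behaves
differently from vanilla R): the lower triangle of a dist object for 10000
rows already needs on the order of 400 MB, before any intermediate copies:
n <- 10000
n * (n - 1) / 2 * 8 / 1024^2   # bytes of doubles -> megabytes, roughly 381 MB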
------------------------------
Message: 38
Date: Tue, 4 May 2010 14:33:29 +0000 (GMT)
From: Pascal Martin <pascal15martin at yahoo.de>
To: r-help at r-project.org
Subject: [R] Kernel density estimate plot for 3-dimensional data
Message-ID: <956952.67901.qm at web25501.mail.ukl.yahoo.com>
Content-Type: text/plain
Hi!
I have a problem with a kernel density estimate plot for 3-dimensional data
using the ks package.
Here is the example:
library(ks); library(spatstat); library(rgl)  # rgl provides axes3d()
# three-dimensional kernel density of B
B <- pp3(runif(300), runif(300), runif(300), box3(c(0,1)))
x <- unclass(B$data)$df
H <- Hpi(x)
fhat <- kde(x, H=H)
plot(fhat)
plot(fhat, axes=FALSE, box=FALSE, drawpoints=TRUE);
axes3d(c('x','y','z'))
If I try to insert my own coordinates instead of the artificial 3D-pattern, it
does not work.
It would be great, if anybody could help me!
Thanks
Pascal
[[alternative HTML version deleted]]
------------------------------
Message: 39
Date: Tue, 4 May 2010 17:34:53 +0300
From: Christos Argyropoulos <argchris at hotmail.com>
To: <globophobe at gmail.com>, <r-help at r-project.org>
Subject: Re: [R] Idiomatic looping over list name, value pairs in R
Message-ID: <BLU149-W4751993EF4A405A77558BD8F30 at phx.gbl>
Content-Type: text/plain
Can you give an example of what the python code is supposed to do?
Some of us are not familiar with python, and the R code is not particularly
informative. You seem to encode information on both the values and the names of
the elements of the vector "d". If this is the case, why don't you
create a data frame (or a matrix) and call apply on the columns?
Christos
> Date: Tue, 4 May 2010 23:24:11 +0900
> From: globophobe at gmail.com
> To: r-help at r-project.org
> Subject: [R] Idiomatic looping over list name, value pairs in R
>
> Considering the python code:
>
> for k, v in d.items(): do_something(k); do_something_else(v)
>
> I have the following for R:
>
> for (i in c(1:length(d))) { do_something(names(d[i]));
> do_something_else(d[[i]]) }
>
> This does not seem seems idiomatic. What is the best way of doing the
> same with R?
>
> Thanks.
>
> Luis
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]
------------------------------
Message: 40
Date: Tue, 04 May 2010 10:38:40 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: Luis N <globophobe at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Idiomatic looping over list name, value pairs in R
Message-ID: <4BE03170.5090500 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 04/05/2010 10:24 AM, Luis N wrote:
> Considering the python code:
>
> for k, v in d.items(): do_something(k); do_something_else(v)
>
> I have the following for R:
>
> for (i in c(1:length(d))) { do_something(names(d[i]));
> do_something_else(d[[i]]) }
>
> This does not seem idiomatic. What is the best way of doing the
> same with R?
>
You could do it as
for (name in names(d)) {
  do_something(name)
  do_something_else(d[[name]])
}
or
sapply(names(d), function(name) {
  do_something(name)
  do_something_else(d[[name]])
})
or
do_both <- function(name) {
  do_something(name)
  do_something_else(d[[name]])
}
sapply(names(d), do_both)
My choice would be the first version, but yours might differ.
Duncan Murdoch
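A further minimal sketch (editorial, not part of the reply above), assuming
do_something() and do_something_else() are just placeholder functions: Map()
walks the names and the values of d in parallel, which is close to the Python
idiom.
# Editorial sketch; d, do_something and do_something_else are made up here.
d <- list(a = 1, b = 2, c = 3)
do_something      <- function(k) cat("key:", k, "\n")
do_something_else <- function(v) cat("value:", v, "\n")
# Map() pairs names(d) with the elements of d, like Python's dict.items();
# invisible() suppresses printing of the returned list.
invisible(Map(function(k, v) {
  do_something(k)
  do_something_else(v)
}, names(d), d))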
------------------------------
Message: 41
Date: Tue, 4 May 2010 10:12:29 -0400
From: Abiel X Reinhart <abiel.x.reinhart at jpmchase.com>
To: "r-help at r-project.org" <r-help at r-project.org>
Subject: [R] fit printed output onto a single page
Message-ID:
<DF11712721890A4880B3FA1B046893720DED3B9231 at
EMARC105VS01.exchad.jpmchase.net>
Content-Type: text/plain; charset="us-ascii"
Is there a way to force a certain block of captured output to fit onto a single
printed page, where one can specify the properties of the page (dimensions,
margins, etc)? For example, I might want to generate 10 different cuts of a data
table and then capture all the output into a PDF, ensuring that each run of the
data table fits onto a single page (i.e. the font-size of the output may have to
be dynamically adjusted).
Thanks,
Abiel
------------------------------
Message: 42
Date: Tue, 4 May 2010 14:26:43 +0000 (UTC)
From: Thorn <thorn.thaler at rdls.nestle.com>
To: r-help at stat.math.ethz.ch
Subject: [R] Lazy evaluation in function call
Message-ID: <loom.20100504T161307-498 at post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Hi everybody,
how is it possible to refer to an argument passed to a function within the
same function call? What I would like to do is something like
f <- function(x,y) x+y
f(2, x) # should give 4
The problem is of course that x is only known inside the function. Of course I
could specify something like
f(z<-2,z)
but I'm just curious whether it is possible to use a fancy combination of
eval, substitute or quote ;)
BR, thorn
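A minimal sketch (not a reply from the thread, and only one of several
possible tricks): capture the unevaluated expression supplied for y with
substitute() and evaluate it in the function's own frame, so that the name x
resolves to the formal argument rather than to anything in the caller.
# Editorial sketch: evaluate y's expression inside f, where x is bound to 2.
f <- function(x, y) {
  yval <- eval(substitute(y), environment())
  x + yval
}
f(2, x)   # 4, even if no object x exists in the calling environment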
------------------------------
Message: 43
Date: Tue, 04 May 2010 10:46:03 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: Pascal Martin <pascal15martin at yahoo.de>
Cc: r-help at r-project.org
Subject: Re: [R] Kernel density estimate plot for 3-dimensional data
Message-ID: <4BE0332B.30503 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 04/05/2010 10:33 AM, Pascal Martin wrote:
> Hi!
>
> I have a problem with Kernel density estimate plot for 3-dimensional data
in ks-package.
> Here the example:
>
> # load ks, spatstat
> # three-dimensional kernel density of B
> B <- pp3(runif(300), runif(300), runif(300), box3(c(0,1)))
> x <- unclass(B$data)$df
> H <- Hpi(x)
> fhat <- kde(x, H=H)
> plot(fhat)
> plot(fhat, axes=FALSE, box=FALSE, drawpoints=TRUE);
axes3d(c('x','y','z'))
>
> If I try to insert my own coordinates instead of the artificial 3D-pattern,
it does not work.
> It would be great, if anybody could help me!
You need to be more explicit about what "does not work" means. The
example above works (though I don't like the axes in the first plot; I
prefer what you get
with plot(fhat, box=FALSE)). What problem are you having?
Duncan Murdoch
------------------------------
Message: 44
Date: Tue, 4 May 2010 16:49:29 +0200
From: "stefan.duke at gmail.com" <stefan.duke at gmail.com>
To: r-help at r-project.org
Subject: [R] strange behavior of RODBC and/or ssconvert
Message-ID:
<q2sa211af3b1005040749pf81ff29dl3f7461241d0df4cd at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear All,
I have the following problem when reading files (a lot of them) in the
spreadsheetML format into R. The spreadsheetML format is an xml format
to allow easy import of multisheet data in Excel. As far as I can see,
a direct import into R (using the XML package) is not feasible. I use
the software ssconvert (included in Gnumeric) and call it from R. It
converts the spreadsheetML into xls format.
When I import the newly created xls file using the RODBC package, the
last row of each sheet is missing. However, when I open the xls file,
the last row is present (hence, ssconvert doesn't delete it). When I
then save the xls file and import it again using the RODBC package,
the data are complete.
Any idea what to do about that? My main problem is to get the
spreadsheetML into R, so I tried the other file formats that ssconvert
can convert to, but only Excel supports multiple sheets.
Best,
Stefan
Example code:
system(paste('ssconvert "excelcohortdata_men_reference scenario.xml"',
             '"excelcohortdata_men_reference scenario22.xls"'))
channel1 <- odbcConnectExcel("excelcohortdata_men_reference scenario10.xls")
odbcGetInfo(channel1)
sqlTables(channel1)
sqlQuery(channel1, "select * from \"age 9in 2010$\"")
sqlFetch(channel1, "age 9in 2010")
------------------------------
Message: 45
Date: Tue, 4 May 2010 23:58:44 +0900
From: Luis N <globophobe at gmail.com>
To: r-help at r-project.org
Subject: Re: [R] Idiomatic looping over list name, value pairs in R
Message-ID:
<o2vc6e072941005040758xa0013828s69195004143c1475 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Thank you. Your response was enlightening.
On Tue, May 4, 2010 at 11:38 PM, Duncan Murdoch
<murdoch.duncan at gmail.com> wrote:
> On 04/05/2010 10:24 AM, Luis N wrote:
>>
>> Considering the python code:
>>
>> for k, v in d.items(): do_something(k); do_something_else(v)
>>
>> I have the following for R:
>>
>> for (i in c(1:length(d))) { do_something(names(d[i]));
>> do_something_else(d[[i]]) }
>>
>> This does not seem idiomatic. What is the best way of doing the
>> same with R?
>>
>
> You could do it as
>
> for (name in names(d)) {
>   do_something(name)
>   do_something_else(d[[name]])
> }
>
> or
>
> sapply(names(d), function(name) {
>   do_something(name)
>   do_something_else(d[[name]])
> })
>
> or
>
> do_both <- function(name) {
>   do_something(name)
>   do_something_else(d[[name]])
> }
> sapply(names(d), do_both)
>
> My choice would be the first version, but yours might differ.
>
> Duncan Murdoch
------------------------------
Message: 46
Date: Tue, 4 May 2010 17:01:14 +0200
From: Gabriele Esposito <gabriele.esposito at gmail.com>
To: Jim Lemon <jim at bitwrit.com.au>
Cc: r-help at r-project.org
Subject: Re: [R] 3D version of triax.plot (package plotrix)
Message-ID:
<o2u3e4df97e1005040801l3a99265audf715755094788de at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi Jim,
thanks! With just the rgl package I cannot do that. For the moment, I
merged two values into one and use triax.plot in the package plotrix,
but that's not fully satisfactory. If you find something, let me know,
thank you a lot!
Gabriele
On Mon, May 3, 2010 at 2:29 PM, Jim Lemon <jim at bitwrit.com.au>
wrote:
> On 05/03/2010 09:43 PM, Gabriele Esposito wrote:
>>
>> Good afternoon,
>>
>> I am looking for a way to do a scatterplot of 4 values summing to 1
>> inside a 3D simplex, i.e. an equilateral pyramid. With the function
>> triax.plot I can do that with 3 values summing to 1, but I can't find
>> an equivalent with an extra dimension.
>>
> Hi Gabriele,
>
> I don't have time at the moment to do anything but point you to the rgl
> package. Might be able to look at it more closely tomorrow.
>
> Jim
>
------------------------------
Message: 47
Date: Tue, 4 May 2010 15:04:41 +0000 (GMT)
From: Pascal Martin <pascal15martin at yahoo.de>
To: r-help at r-project.org
Subject: Re: [R] Kernel density estimate plot for 3-dimensional data
Message-ID: <99666.78839.qm at web25502.mail.ukl.yahoo.com>
Content-Type: text/plain
________________________________
To: Duncan Murdoch <murdoch.duncan at gmail.com>
Sent: Tuesday, 4 May 2010, 17:03:46
Subject: RE: [R] Kernel density estimate plot for 3-dimensional data
#B <- pp3(runif(300), runif(300), runif(300), box3(c(0,1)))
creates a 3d pattern with random points.
But I want it to create a Kernel density estimate plot with my coordinates.
I show it in an example:
> x <- scan()
1: 1 2 3 4 5 6 7 8 9 10
11:
Read 10 items
> y <- scan()
1: 10 9 8 7 6 5 4 3 2 1
11:
Read 10 items
> z <- scan()
1: 6 5 7 4 8 3 9 2 10 1
11:
Read 10 items
> B <- pp3(x, y, z, c(0,10), c(0,10), c(0,10))
> x <- unclass(B$data)$df
> H <- Hpi(x)
From this point, it shows an error in chol.default(S12)
and accordingly the rest does not go on.
> fhat <- kde(x, H=H)
> plot(fhat)
> plot(fhat, axes=FALSE, box=FALSE, drawpoints=TRUE);
axes3d(c('x','y','z'))
________________________________
From: Duncan Murdoch <murdoch.duncan at gmail.com>
CC: r-help at r-project.org
Sent: Tuesday, 4 May 2010, 16:46:03
Subject: Re: [R] Kernel density estimate plot for 3-dimensional data
On 04/05/2010 10:33 AM, Pascal Martin wrote:
> Hi!
>
> I have a problem with Kernel density estimate plot for 3-dimensional data
in ks-package.
> Here the example:
>
> # load ks, spatstat
> # three-dimensional kernel density of B
> B <- pp3(runif(300), runif(300), runif(300), box3(c(0,1)))
> x <- unclass(B$data)$df
> H <- Hpi(x)
> fhat <- kde(x, H=H)
> plot(fhat)
> plot(fhat, axes=FALSE, box=FALSE, drawpoints=TRUE);
axes3d(c('x','y','z'))
>
> If I try to insert my own coordinates instead of the artificial 3D-pattern,
it does not work.
[[elided Yahoo spam]]
You need to be more explicit about what "does not work" means. The
example above works (though I don't like the axes in the first plot; I
prefer what you get
with plot(fhat, box=FALSE)). What problem are you having?
Duncan Murdoch
[[alternative HTML version deleted]]
------------------------------
Message: 48
Date: Tue, 04 May 2010 11:17:39 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: Pascal Martin <pascal15martin at yahoo.de>
Cc: R-Help <r-help at r-project.org>
Subject: Re: [R] Kernel density estimate plot for 3-dimensional data
Message-ID: <4BE03A93.3030304 at gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 04/05/2010 11:03 AM, Pascal Martin wrote:
> #B <- pp3(runif(300), runif(300), runif(300), box3(c(0,1)))
> creates a 3d pattern with random points.
> But I want it to create a Kernel density estimate plot with my coordinates.
> I show it in an example:
>
> > x<- scan()
> 1: 1 2 3 4 5 6 7 8 9 10
> 11:
> Read 10 items
> > y<- scan()
> 1: 10 9 8 7 6 5 4 3 2 1
> 11:
> Read 10 items
> > z<- scan()
> 1: 6 5 7 4 8 3 9 2 10 1
> 11:
> Read 10 items
> > B<- pp3(x,y,z, c(0,10), c(0,10), c(0,10))
>
> > x <- unclass(B$data)$df
> > H <- Hpi(x)
>
> From this point, it shows an error in chol.default(S12)
> and accordingly the rest does not go on.
>
Those points all lie in a plane (y = 11-x); I imagine that causes the
density estimate to overflow. I get the same problem with your data,
but not
with non-planar data.
Duncan Murdoch
> > fhat <- kde(x, H=H)
>
> > plot(fhat)
> > plot(fhat, axes=FALSE, box=FALSE, drawpoints=TRUE);
axes3d(c('x','y','z'))
>
>
>
> ________________________________
> Von: Duncan Murdoch <murdoch.duncan at gmail.com>
> An: Pascal Martin <pascal15martin at yahoo.de>
> CC: r-help at r-project.org
> Gesendet: Dienstag, den 4. Mai 2010, 16:46:03 Uhr
> Betreff: Re: [R] Kernel density estimate plot for 3-dimensional data
>
> On 04/05/2010 10:33 AM, Pascal Martin wrote:
> > Hi!
> >
> > I have a problem with Kernel density estimate plot for 3-dimensional
data in ks-package.
> > Here the example:
> >
> > # load ks, spatstat
> > # three-dimensional kernel density of B
> > B <- pp3(runif(300), runif(300), runif(300), box3(c(0,1)))
> > x <- unclass(B$data)$df
> > H <- Hpi(x)
> > fhat <- kde(x, H=H)
> > plot(fhat)
> > plot(fhat, axes=FALSE, box=FALSE, drawpoints=TRUE);
axes3d(c('x','y','z'))
> >
> > If I try to insert my own coordinates instead of the artificial
3D-pattern, it does not work.
> > It would be great, if anybody could help me!
>
> You need to be more explicit about what "does not work" means.
The
> example above works (though I don't like the axes in the first plot; I
> prefer what you get
> with plot(fhat, box=FALSE)). What problem are you having?
>
> Duncan Murdoch
>
>
>
>
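A minimal sketch of the planar-data point made above (editorial, not from the
thread; the small jitter is an assumption used only for illustration, not
advice from the ks authors): with coordinates that lie exactly in a plane,
Hpi() reportedly fails as described, while slightly perturbed, non-planar
coordinates avoid the singular covariance.
# Editorial sketch; uses the same toy coordinates as the post above.
library(ks)
x <- 1:10
y <- 11 - x                           # exactly planar: y = 11 - x
z <- c(6, 5, 7, 4, 8, 3, 9, 2, 10, 1)
planar <- cbind(x, y, z)
# Hpi(planar) reportedly gives the chol.default(S12) error mentioned above,
# because the points are linearly dependent in one direction.
set.seed(1)
nonplanar <- planar + matrix(rnorm(length(planar), sd = 0.1), ncol = 3)
H <- Hpi(nonplanar)                   # non-planar data avoid the singularity
fhat <- kde(nonplanar, H = H)
plot(fhat)                            # 3D plot; needs the rgl/misc3d machinery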
------------------------------
Message: 49
Date: Tue, 4 May 2010 17:29:06 +0200
From: Joris Meys <jorismeys at gmail.com>
To: R mailing list <r-help at r-project.org>
Subject: [R] Avoiding for-loop for splitting vector into subvectors
based on positions
Message-ID:
<j2mb5e1ab9a1005040829i13ed5ecax7a2da028163027e2 at mail.gmail.com>
Content-Type: text/plain
Dear all,
I'm trying to optimize code and want to avoid for-loops as much as possible.
I'm applying a calculation on subvectors from a big one, and I get the
subvectors by using a vector of starting positions:
x <- 1:10
pos <- c(1,4,7)
n <- length(x)
I try to do something like this :
pos2 <- c(pos, n+1)
out <- c()
for(i in 1:n){
tmp <- x[pos2[i]:pos2[i+1]]
out <- c(out, length(tmp))
}
Never mind the length function, I apply a far more complicated one. It's
about the use of the indices in the for-loop. I didn't see any way of doing
that with an apply, unless there is a very convenient way of splitting my
vector in a list of the subvectors or so.
Anybody an idea?
Cheers
--
Joris Meys
Statistical Consultant
Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control
Coupure Links 653
B-9000 Gent
tel : +32 9 264 59 87
Joris.Meys at Ugent.be
-------------------------------
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
[[alternative HTML version deleted]]
------------------------------
Message: 50
Date: Tue, 4 May 2010 11:50:42 -0400
From: jim holtman <jholtman at gmail.com>
To: Joris Meys <jorismeys at gmail.com>
Cc: R mailing list <r-help at r-project.org>
Subject: Re: [R] Avoiding for-loop for splitting vector into
subvectors based on positions
Message-ID:
<h2u644e1f321005040850k52c0cc31n18d3d0c5048cc0af at mail.gmail.com>
Content-Type: text/plain
Try this:
> x <- 1:10
> pos <- c(1,4,7)
> pat <- rep(seq_along(pos), times=diff(c(pos, length(x) + 1)))
> split(x, pat)
$`1`
[1] 1 2 3
$`2`
[1] 4 5 6
$`3`
[1] 7 8 9 10
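A short editorial follow-up (not part of the reply above): once the vector is
split this way, the real per-subvector computation can be applied with
sapply() instead of the explicit loop; length() below is just a stand-in for
the more complicated function mentioned in the question.
# Editorial sketch building on the split() idea above.
x   <- 1:10
pos <- c(1, 4, 7)
pat <- rep(seq_along(pos), times = diff(c(pos, length(x) + 1)))
sapply(split(x, pat), length)   # 3 3 4, same as the loop in the question
sapply(split(x, pat), sum)      # any other per-subvector function works too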
On Tue, May 4, 2010 at 11:29 AM, Joris Meys <jorismeys at gmail.com>
wrote:
> Dear all,
>
> I'm trying to optimize code and want to avoid for-loops as much as
> possible.
> I'm applying a calculation on subvectors from a big one, and I get the
> subvectors by using a vector of starting positions:
>
> x <- 1:10
> pos <- c(1,4,7)
> n <- length(x)
>
> I try to do something like this :
> pos2 <- c(pos, n+1)
>
> out <- c()
> for(i in 1:n){
> tmp <- x[pos2[i]:pos2[i+1]]
> out <- c(out, length(tmp))
> }
>
> Never mind the length function, I apply a far more complicated one.
It's
> about the use of the indices in the for-loop. I didn't see any way of
doing
> that with an apply, unless there is a very convenient way of splitting my
> vector in a list of the subvectors or so.
>
> Anybody an idea?
> Cheers
> --
> Joris Meys
> Statistical Consultant
>
> Ghent University
> Faculty of Bioscience Engineering
> Department of Applied mathematics, biometrics and process control
>
> Coupure Links 653
> B-9000 Gent
>
> tel : +32 9 264 59 87
> Joris.Meys at Ugent.be
> -------------------------------
> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
[[alternative HTML version deleted]]
------------------------------
Message: 51
Date: Tue, 4 May 2010 18:57:05 +0300
From: Tal Galili <tal.galili at gmail.com>
To: Lanna Jin <lannajin at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] R for web browser
Message-ID:
<i2o440d7af41005040857v79e5336dm72ffbfa0ecd3664e at mail.gmail.com>
Content-Type: text/plain
I wrote about R-Node last month; it offers what you are talking about:
http://www.r-statistics.com/2010/04/r-node-a-web-front-end-to-r-with-protovis/
----------------Contact
Details:-------------------------------------------------------
Contact me: Tal.Galili at gmail.com | 972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
----------------------------------------------------------------------------------------------
On Tue, May 4, 2010 at 4:51 PM, Lanna Jin <lannajin at gmail.com> wrote:
>
> Hi Everyone,
>
> Does anyone know of any projects for running an interactive R session
> within
> a web browser?
> I'm looking for something similar to the one on the Ruby website
> (http://tryruby.org), except for R.
>
> Thanks for your responses in advance!
>
> Lanna
>
> -----
> Lanna Jin
>
> lannajin at gmail.com
> 510-898-8525
> --
> View this message in context:
> http://r.789695.n4.nabble.com/R-for-web-browser-tp2125571p2125571.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 52
Date: Tue, 4 May 2010 09:05:48 -0700 (PDT)
From: John Kane <jrkrideau at yahoo.ca>
To: Petr PIKAL <petr.pikal at precheza.cz>
Cc: r-help at r-project.org
Subject: Re: [R] / Operator not meaningful for factors
Message-ID: <722384.77840.qm at web38404.mail.mud.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
--- On Tue, 5/4/10, Petr PIKAL <petr.pikal at precheza.cz> wrote:
> From: Petr PIKAL <petr.pikal at precheza.cz>
> Subject: Re: [R] / Operator not meaningful for factors
> To: "John Kane" <jrkrideau at yahoo.ca>
> Cc: r-help at r-project.org, "vincent.deluard"
<vincent.deluard at trimtabs.com>
> Received: Tuesday, May 4, 2010, 3:38 AM
> Hi
>
> r-help-bounces at r-project.org
> napsal dne 04.05.2010 00:50:00:
>
> > I think that you are correct. R has the
> annoying habit of converting
> > character data to factors when you don't want it to
> while it is
> importing
> > data. This is because the option
> "stringsAsFactors" is set to
> TRUE for
> > some weird historical reasons.
>
> It is a matter of opinion. I consider it a quite useful
> feature. If I see by
>
> str(some.data) or summary(data) that numeric columns are
> factors I know
> something is wrong with the input.
I'm not denying that it can be useful, but IIRC from a discussion a couple of
years ago it was a fairly arbitrary decision.
On the other hand it can be very annoying when one has some kinds of data.
>
> and when I want to use ggplot, xyplot or just plot my data
> with different
> colours/sizes/pchs/.... it is quite easy to use
> as.numeric(my.factor) to
> get numeric representation of levels.
>
> Finally you can easily change labels, concatenate levels
> and so on.
>
> Just my 2 cents.
>
> Regards
> Petr
>
>
>
> >
> > Try the command str(insert name of data) and see what
> happens. It
> should show
> > you which columns of data are being treated as
> factors.
> >
> > You can convert them back to character or to
> numeric. See the FAQ Part 7
> "How
> > do I convert factors to numeric?" or you can set the
> stringsAsFactors
> > option in read.table to FALSE
> >
> > Something like this should work, I think, but it's not
> tested
> > read.table("C:/rdata/trees.csv",
> stringsAsFactors=FALSE)
> >
> >
> >
> >
> >
> > --- On Mon, 5/3/10, vincent.deluard <vincent.deluard at
trimtabs.com>
>
> wrote:
> >
> > > From: vincent.deluard <vincent.deluard at trimtabs.com>
> > > Subject: Re: [R] / Operator not meaningful for
> factors
> > > To: r-help at r-project.org
> > > Received: Monday, May 3, 2010, 6:22 PM
> > >
> > > Hi there,
> > >
> > > This will sound very stupid because I just
> started using R
> > > but I see you had
> > > similar problems.
> > >
> > > I just loaded a very large dataset (2950*6602)
> from csv
> > > into R. The format
> > > is ticker=row, date=column.
> > > Every time I want to compute basic operations, R
> returns
> > > "In Ops.factor: not
> > > meaningful for factors"
> > >
> > > I believe it is because R does not read the data
> as numbers
> > > but I am not
> > > sure. Can anybody help?
> > >
> > > Thanks!
> > > --
> > > View this message in context:
> http://r.789695.n4.nabble.com/Operator-not-
> > meaningful-for-factors-tp791563p2124697.html
> > > Sent from the R help mailing list archive at
> Nabble.com.
> > >
> > > ______________________________________________
> > > R-help at r-project.org
> > > mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > > and provide commented, minimal, self-contained,
> > > reproducible code.
> > >
> >
> > ______________________________________________
> > R-help at r-project.org
> mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained,
> reproducible code.
>
>
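A small editorial sketch tying the points above together (not part of the
original exchange): str() reveals columns that were read as factors, and the
FAQ idiom mentioned above converts them back to numbers via character, not
directly, otherwise you get the internal level codes.
# Editorial sketch with a made-up data frame.
DF <- data.frame(price = c("10.5", "11.2", "n/a"), stringsAsFactors = TRUE)
str(DF)                                  # price has been turned into a factor
DF$price <- as.numeric(as.character(DF$price))
str(DF)                                  # numeric now; "n/a" becomes NA (with a warning)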
------------------------------
Message: 53
Date: Tue, 4 May 2010 16:11:31 +0000 (GMT)
From: cgw at witthoft.com
To: ripley at stats.ox.ac.uk
Cc: r-help at r-project.org, carl at witthoft.com
Subject: Re: [R] ISO Eric Kort (rtiff)
Message-ID: <1684018604.50808.1272989491608.JavaMail.mail at webmail01>
Content-Type: text/plain; charset=UTF-8
Thanks, Brian. I can see where to mod readTiff to return the original data
ranges; and where to mod writeTiff so it writes files with something better than
the current 0:255 resolution range.
I have found an additional problem with readTiff, so is there someone I can
write to about it?
What I found was, for some tiff images created within my company, readTiff does
not convert the source data correctly. The files contain greyscale data, 16
bits per pixel (i.e. 0 to 4095). Whether it's a mistake in the way the tags
were originally written to the file, or a mistake in the way that readTiff
interprets the libtiff outputs, I don't know, but readTiff only reads the
upper byte of each pixel. This produces data with a range of 0 to 15 (prior to
being autoscaled into pixmap's [0,1] space ). I can dig up the values
returned by tools like tiffdump, so if someone out there in R-help land can
point me to the pertinent values, I'll do all I can to help solve this
problem.
Thanks again
Carl
May 4, 2010 01:46:05 AM, ripley at stats.ox.ac.uk wrote:
==========================================
On Mon, 3 May 2010, Carl Witthoft wrote:
> I wanted to ask Eric a question or two about the rtiff package, but his
> listed email address bounces w/ 550 error. Does anyone know how to reach
> him, or whether he's actively maintaining rtiff?
He is not. The latest version of rtiff was done by the CRAN team to
fix a number of errors and keep it building on the CRAN platforms --
you will see it was packaged by me.
> If anyone's interested, my primary desire is for rtiff (or other tool)
to
> provide me with the raw range of pixel values in a tiff file. rtiff dumps
> straight into a pixmap object, so the data are autoscaled into [0,1] .
It is a very simple package, easy for you to modify -- this could be
done in readTiff in a couple of minutes.
An alternative is BioC package EBImage (provided your ImageMagick
installation supports TIFF).
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 54
Date: Tue, 04 May 2010 12:11:38 -0400
From: Marshall Feldman <marsh at uri.edu>
To: r-help at r-project.org
Subject: [R] read.table: skipping trailing delimiters
Message-ID: <4BE0473A.8060608 at uri.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi,
I am trying to read a tab-delimited file that has trailing tab
delimiters. It's a simple file with two legitimate fields. I'm using the
first as row.names, and the second should be the only column in the
resulting data frame.
Initially, R was filling the last column with NA's, but I was able to
stop that by setting
colClasses=c("character","character",NULL). Still,
the data frame is coming in with an extra column, only now its values
are set to "".
Is there any way to skip the trailing delimited field entirely? I've
searched for an answer without luck.
Thanks.
Marsh Feldman
------------------------------
Message: 55
Date: Tue, 4 May 2010 12:20:46 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: "stefan.duke at gmail.com" <stefan.duke at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] strange behavior of RODBC and/or ssconvert
Message-ID:
<n2z971536df1005040920k69d873c5x7547688bfada3851 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
The original seems not to have gotten through. Here it is again.
On Tue, May 4, 2010 at 11:14 AM, Gabor Grothendieck
<ggrothendieck at gmail.com> wrote:
> Try a few of the solutions here:
> http://rwiki.sciviews.org/doku.php?id=tips:data-io:ms_windows
> and see if they all give you the same result.
>
------------------------------
Message: 56
Date: Tue, 04 May 2010 12:23:45 -0400
From: Marshall Feldman <marsh at uri.edu>
To: r-help at r-project.org
Subject: [R] Flushing print buffer
Message-ID: <4BE04A11.3050802 at uri.edu>
Content-Type: text/plain
Hello,
I have a function with these lines:
test <- function(object,...){
  cat("object: has ",nrow(object),"labels\n")
  cat("Head:\n")
  head(object,...)
  cat("\nTail:\n")
  tail(object,...)
}
If I feed it a data frame object, it only prints out the tail part. If I
comment out the last two lines of the function, it does print the head
part. Obviously there's a buffer not being flushed between the head and
the tail calls, but I don't know how to flush it. Can someone help me?
Thanks.
Marsh Feldman
[[alternative HTML version deleted]]
------------------------------
Message: 57
Date: Tue, 04 May 2010 11:27:30 -0500
From: Marc Schwartz <marc_schwartz at me.com>
To: Marshall Feldman <marsh at uri.edu>
Cc: r-help at r-project.org
Subject: Re: [R] read.table: skipping trailing delimiters
Message-ID: <2E000477-9F9D-4DE2-8240-E8F0D1D22AB4 at me.com>
Content-Type: text/plain; charset=us-ascii
On May 4, 2010, at 11:11 AM, Marshall Feldman wrote:
> Hi,
>
> I am trying to read a tab-delimited file that has trailing tab delimiters.
It's a simple file with two legitimate fields. I'm using the first as
row.names, and the second should be the only column in the resulting data frame.
>
> Initially, R was filling the last column with NA's, but I was able to
stop that by setting
colClasses=c("character","character",NULL). Still, the data
frame is coming in with an extra column, only now its values are set to
"".
>
> Is there any way to skip the trailing delimited field entirely? I've
searched for an answer without luck.
>
> Thanks.
> Marsh Feldman
The easiest way to remove a single final column is to post-process the data
frame that you imported. So if your imported data frame is called 'DF':
DF.New <- DF[, -ncol(DF)]
See ?ncol and ?Extract
You could also do more complex sub-setting using the ?subset function or
consider pre-processing the file to be imported with command line tools such as
cut or awk.
For example, using the 'iris' data set:
> str(iris)
'data.frame': 150 obs. of 5 variables:
$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
$ Species : Factor w/ 3 levels
"setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
> str(iris[, -ncol(iris)])
'data.frame': 150 obs. of 4 variables:
$ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
$ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
$ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
$ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
HTH,
Marc Schwartz
------------------------------
Message: 58
Date: Tue, 4 May 2010 12:34:15 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: Marshall Feldman <marsh at uri.edu>
Cc: r-help at r-project.org
Subject: Re: [R] read.table: skipping trailing delimiters
Message-ID:
<n2i971536df1005040934je49dded9y54de383190244dd2 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Re-read the colClasses section of ?read.table. Use "NULL", not NULL.
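A minimal sketch of the distinction (editorial, with a made-up two-field file
that has a trailing tab on every line): the character string "NULL" in
colClasses tells read.table to skip that column, whereas the R value NULL
simply vanishes when the vector is built, the remaining classes are recycled,
and the trailing column is kept.
# Editorial sketch; the data are invented for illustration.
DF <- read.table(textConnection("a\t1\t\nb\t2\t"), sep = "\t",
                 colClasses = c("character", "numeric", "NULL"))
str(DF)   # two columns; the empty trailing field has been skipped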
On Tue, May 4, 2010 at 12:11 PM, Marshall Feldman <marsh at uri.edu>
wrote:
> Hi,
>
> I am trying to read a tab-delimited file that has trailing tab delimiters.
> It's a simple file with two legitimate fields. I'm using the first
as
> row.names, and the second should be the only column in the resulting data
> frame.
>
> Initially, R was filling the last column with NA's, but I was able to
stop
> that by setting
colClasses=c("character","character",NULL). Still, the data
> frame is coming in with an extra column, only now its values are set to
"".
>
> Is there any way to skip the trailing delimited field entirely? I've
> searched for an answer without luck.
>
>    Thanks.
>    Marsh Feldman
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 59
Date: Tue, 4 May 2010 12:43:55 -0400
From: jim holtman <jholtman at gmail.com>
To: Marshall Feldman <marsh at uri.edu>
Cc: r-help at r-project.org
Subject: Re: [R] Flushing print buffer
Message-ID:
<h2n644e1f321005040943h5a53ceaeqd5c165bff2d00f33 at mail.gmail.com>
Content-Type: text/plain
explicitly print your data:
print(head(object,...))
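A minimal reworked version of the function from the question (a sketch along
the lines of the advice above, keeping the original names): wrapping both
head() and tail() in print() makes the output appear, since auto-printing only
happens for expressions evaluated at top level.
test <- function(object, ...) {
  cat("object: has ", nrow(object), " labels\n")
  cat("Head:\n")
  print(head(object, ...))    # explicit print inside a function
  cat("\nTail:\n")
  print(tail(object, ...))
}
test(iris, n = 3)             # hypothetical call on a built-in data frame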
On Tue, May 4, 2010 at 12:23 PM, Marshall Feldman <marsh at uri.edu>
wrote:
> Hello,
>
> I have a function with these lines:
>
> test <- function(object,...){
> cat("object: has ",nrow(object),"labels\n")
> cat("Head:\n")
> head(object,...)
> cat("\nTail:\n")
> tail(object,...)
> }
>
> If I feed it a data frame object, it only prints out the tail part. If I
> comment out the last two lines of the function, it does print the head
> part. Obviously there's a buffer not being flushed between the head and
> the tail calls, but I don't know how to flush it. Can someone help me?
>
> Thanks.
>
> Marsh Feldman
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
[[alternative HTML version deleted]]
------------------------------
Message: 60
Date: Tue, 4 May 2010 12:47:40 -0400
From: jim holtman <jholtman at gmail.com>
To: Marshall Feldman <marsh at uri.edu>
Cc: r-help at r-project.org
Subject: Re: [R] Flushing print buffer
Message-ID:
<n2q644e1f321005040947wf10b7ed0i70959ccca6216a41 at mail.gmail.com>
Content-Type: text/plain
I should have also had you read FAQ 7.16
On Tue, May 4, 2010 at 12:43 PM, jim holtman <jholtman at gmail.com>
wrote:
> explicitly print your data:
>
> print(head(object,...))
> On Tue, May 4, 2010 at 12:23 PM, Marshall Feldman <marsh at uri.edu>
wrote:
>
>> Hello,
>>
>> I have a function with these lines:
>>
>> test <- function(object,...){
>> cat("object: has
",nrow(object),"labels\n")
>> cat("Head:\n")
>> head(object,...)
>> cat("\nTail:\n")
>> tail(object,...)
>> }
>>
>> If I feed it a data frame object, it only prints out the tail part. If
I
>> comment out the last two lines of the function, it does print the head
>> part. Obviously there's a buffer not being flushed between the head
and
>> the tail calls, but I don't know how to flush it. Can someone help
me?
>>
>> Thanks.
>>
>> Marsh Feldman
>>
>>
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>>
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Jim Holtman
> Cincinnati, OH
> +1 513 646 9390
>
> What is the problem that you are trying to solve?
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
[[alternative HTML version deleted]]
------------------------------
Message: 61
Date: Tue, 4 May 2010 16:58:59 +0000
From: Bo Li <libodeer at hotmail.com>
To: <r-help at r-project.org>
Subject: [R] Package Rsafd
Message-ID: <SNT108-W64E85F2A04E299AFB71317D1F30 at phx.gbl>
Content-Type: text/plain
Dear R community,
I am looking for the package "Rsafd". It is not listed in the CRAN
directory. I am wondering whether anyone has experience with this package. Thanks a lot!
Bo
[[alternative HTML version deleted]]
------------------------------
Message: 62
Date: Tue, 4 May 2010 10:11:57 -0700 (PDT)
From: threshold <r.kozarski at gmail.com>
To: r-help at r-project.org
Subject: [R] legend with lines and points
Message-ID: <1272993117007-2125971.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi, say there are x and y given as:
level x y
3 0.112 0.012
2 0.432 0.111
1 0.415 0.053
3 0.38 0.005
2 0.607 0.01
1 NA NA
3 0.572 0.01
2 0.697 0.039
1 0.377 0.006
3 NA NA
2 0.571 0.003
1 0.646 0.014
3 0.063 0.024
2 0.115 0.017
1 0.035 0.042
3 0.426 0
I did the following plot:
plot(y ~ x, pch=c(1,2,3), col=c('red', 'green', 'blue'))
abline(lm(y ~ x), col='red')
lines(lowess.na(y, x), col='blue')
abline(lm(y ~ x), col='red')
legend('topright', c('Top','Middle','Bottom'),
       col=c('red', 'green', 'blue'), pch=c(1,2,3))
legend('right', c('linear','LOWESS'),
       col=c('red','blue'), lty=c(1,2))
where:
lowess.na <- function(x, y) {  # do lowess with missing data
  x1 <- subset(x, (!is.na(x)) & (!is.na(y)))
  y1 <- subset(y, (!is.na(x)) & (!is.na(y)))
  lowess.na <- lowess(x1 ~ y1)
}
I want ONE legend to involve points (empirical) and lines from linear and
lowess fit together. I guess it is simple but.....
best, robert
--
View this message in context:
http://r.789695.n4.nabble.com/legend-with-lines-and-points-tp2125971p2125971.html
Sent from the R help mailing list archive at Nabble.com.
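A minimal sketch of one combined legend (editorial, not a reply from the
thread; it assumes the plot from the question has already been drawn): NA
entries in pch and lty suppress the symbol or line for legend rows that do not
need them, so points and lines can share a single legend.
# Editorial sketch: one legend covering the three point groups and both fits.
legend('topright',
       legend = c('Top', 'Middle', 'Bottom', 'linear', 'LOWESS'),
       col    = c('red', 'green', 'blue', 'red', 'blue'),
       pch    = c(1, 2, 3, NA, NA),
       lty    = c(NA, NA, NA, 1, 2))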
------------------------------
Message: 63
Date: Tue, 4 May 2010 13:12:50 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Bo Li <libodeer at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Package Rsafd
Message-ID: <2781F169-A5DB-4CE4-BDF4-96C25AF75905 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Try R-Forge
On May 4, 2010, at 12:58 PM, Bo Li wrote:
>
> Dear R community,
>
> I am looking for the package "Rsafd". It is not listed in the CRAN
> directory. I am wondering whether anyone has experience with this package.
> Thanks a lot!
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 64
Date: Tue, 4 May 2010 13:28:31 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: David Winsemius <dwinsemius at comcast.net>
Cc: r-help at r-project.org
Subject: Re: [R] Package Rsafd
Message-ID: <061C66C4-C680-4F55-BE0A-04EBDCF1C0AF at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On May 4, 2010, at 1:12 PM, David Winsemius wrote:
> Try R-Forge
I made that suggestion based on finding links to R-Forge searching for
"rsafd" with RSeek but that was misleading. Try instead:
http://orfe.princeton.edu/~rcarmona/SVbook/svbook.html
>
> On May 4, 2010, at 12:58 PM, Bo Li wrote:
>
>>
>> Dear R community,
>>
>> I am looking for the package "Rsafd". It is not listed in the CRAN
>> directory. I am wondering whether anyone has experience with this package.
>> Thanks a lot!
>
> David Winsemius, MD
> West Hartford, CT
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 65
Date: Tue, 4 May 2010 18:00:52 +0000 (UTC)
From: j verzani <jverzani at gmail.com>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] R for web browser
Message-ID: <loom.20100504T195707-898 at post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Lanna Jin <lannajin <at> gmail.com> writes:
>
>
> Hi Everyone,
>
> Does anyone know of any projects for running an interactive R session
within
> a web browser?
> I'm looking for something similar to the one on the Ruby website
> (http://tryruby.org), except for R.
>
> Thanks for your responses in advance!
>
You can run R code through the sage software. (Sage is a CAS and also an
interface to numerous open-source software packages.) The main interface for
sage is through a notebook within a web browser. A freely accessible notebook
server can be found at www.sagenb.org. Recent work involves integrating R's
plotting features within the notebook.
> Lanna
>
> -----
> Lanna Jin
>
> lannajin <at> gmail.com
> 510-898-8525
------------------------------
Message: 66
Date: Tue, 4 May 2010 10:05:48 -0700
From: "Anderson, Chris" <chris.anderson at paradigmcorp.com>
To: "'r-help at R-project.org'" <r-help at
r-project.org>
Subject: [R] help overlay scatterplot to effects plot
Message-ID:
<FA84E830E1587543BB3DB47D68908FCB6423EFFA55 at
conmail02.paradigmhealth.com>
Content-Type: text/plain
I have a process where I am creating an effects plot similar to the Cowles
example in the effects package. I would like to add the point estimates to the
effects plot; can someone show me the correct syntax? I have included the
effects example from R below so you can demonstrate with it. Thanks
mod.cowles <- glm(volunteer ~ sex + neuroticism*extraversion,
data=Cowles, family=binomial)
eff.cowles <- allEffects(mod.cowles, xlevels=list(neuroticism=0:24,
extraversion=seq(0, 24, 6)), given.values=c(sexmale=0.5))
eff.cowles
plot(eff.cowles, 'neuroticism:extraversion',
ylab="Prob(Volunteer)",
ticks=list(at=c(.1,.25,.5,.75,.9)))
Chris Anderson
Data Analyst
Medical Affairs
wk: 925-677-4870
cell: 707-315-8486
Fax:925-677-4670
[[alternative HTML version deleted]]
------------------------------
Message: 67
Date: Tue, 4 May 2010 16:18:51 +0000
From: a a <aahmad31 at hotmail.com>
To: <r-help at r-project.org>
Subject: [R] How to make predictions with the predict() method on an
arimax object using arimax() from TSA library
Message-ID: <SNT115-W274A92DE1DC3EE5F21A8B8CDF30 at phx.gbl>
Content-Type: text/plain
Hi R Users,
I'm fairly new to R (about 3 months use thus far.)
I want to use the arimax function from the TSA library to incorporate some
exogenous inputs into the basic underlying arima model. Then, with that new
model of class arimax, I would like to make a prediction.
To avoid being bogged down with issues specific to my own work, I would like to
refer readers to the example given in the TSA documentation, which will also
clarify my own issues:
>library(TSA)
>
>data(airmiles)
>air.m1=arimax(log(airmiles),order=c(0,1,1),seasonal=list(order=c(0,1,1),
>period=12),xtransf=data.frame(I911=1*(seq(airmiles)==69),
>I911=1*(seq(airmiles)==69)),
>transfer=list(c(0,0),c(1,0)),xreg=data.frame(Dec96=1*(seq(airmiles)==12),
>Jan97=1*(seq(airmiles)==13),Dec02=1*(seq(airmiles)==84)),method='ML')
OK, so I've run the above code and an object called air.m1 has now been
created, of class arimax. According to the documentation this is the same
type as arima. So now I make a prediction, say 15 time steps ahead:
>forecast=predict(air.m1, n.ahead=15)
The following error is produced:
Error in predict.Arima(air.m1, n.ahead = 15) :
'xreg' and 'newxreg' have different numbers of columns
--------------------------------------------------------------------------------------------------------------------
The question is how to get a prediction correctly using predict. (I've seen
the DSE package, but that seems overkill for making just a simple prediction.)
Thank you in advance for any responses.
A.A
Part time student at BBK college,UofL.
[[alternative HTML version deleted]]
------------------------------
Message: 68
Date: Tue, 4 May 2010 18:13:08 +0000 (GMT)
From: Marc Carpentier <marc.carpentier at ymail.com>
To: Uwe Ligges <ligges at statistik.tu-dortmund.de>
Cc: r-help at r-project.org
Subject: [R] Re : aregImpute (Hmisc package) : error in matxv(X,
xcof)...
Message-ID: <606503.14770.qm at web28210.mail.ukl.yahoo.com>
Content-Type: text/plain; charset="iso-8859-1"
Ok. I was afraid to refer to a known and obvious error.
Here is a testing dataset (pb1.csv) and commented code (pb1.R) with the
problems.
Thanks for any help.
Marc
________________________________
From: Uwe Ligges <ligges at statistik.tu-dortmund.de>
To: Marc Carpentier <marc.carpentier at ymail.com>
Cc: r-help at r-project.org
Sent: Tue, 4 May 2010, 13:52:31
Subject: Re: [R] aregImpute (Hmisc package) : error in matxv(X, xcof)...
Having reproducible examples including data and the actual call that
lead to the error would be really helpful to be able to help.
Uwe Ligges
On 04.05.2010 12:23, Marc Carpentier wrote:
> Dear r-help list,
> I'm trying to use multiple imputation for my MSc thesis.
> Having good examples using the Hmisc package, I tried the aregImpute
function. But with my own dataset, I have the following error :
>
> Error in matxv(X, xcof) : columns in a (51) must be <= length of b (50)
> In addition: Warning message:
> In f$xcoef[, 1] * f$xcenter :
> longer object length is not a multiple of shorter object length
>
> I first tried to "I()" all the continuous variables but the same
error occurs with different numbers :
> Erreur dans matxv(X, xcof) : columns in a (37) must be<= length of b
(36)...
>
> I'm a student and I'm not familiar with possible constraints in a
dataset to be effectively imputed. I just found this previous message, where the
author's autoreply suggests that particular distributions might be an
explanation of algorithms failure :
> http://www.mail-archive.com/r-help at r-project.org/msg53534.html
>
> Does anyone know if these messages reflect a specific problem in my dataset
? And if the number mentioned might give me a hint on which column to look at
(and maybe transform or ignore for the imputation) ?
> Thanks for any advice you might have.
>
> Marc
>
>
>
>
> [[alternative HTML version deleted]]
>
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 69
Date: Tue, 4 May 2010 14:25:59 -0400
From: Galois Theory <tgalois at gmail.com>
To: R-help at r-project.org
Subject: [R] unsubscribe
Message-ID:
<AANLkTinEwYSBULEPHFQQBnEhBwDTVDA1a71FWK4A3Exq at mail.gmail.com>
Content-Type: text/plain
[[alternative HTML version deleted]]
------------------------------
Message: 70
Date: Tue, 4 May 2010 14:28:53 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Marc Carpentier <marc.carpentier at ymail.com>
Cc: r-help at r-project.org, Uwe Ligges <ligges at
statistik.tu-dortmund.de>
Subject: Re: [R] Re : aregImpute (Hmisc package) : error in matxv(X,
xcof)...
Message-ID: <4AEBE35C-F938-40C0-BDF5-7540AECD34EE at comcast.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
On May 4, 2010, at 2:13 PM, Marc Carpentier wrote:
> Ok. I was afraid to refer to a known and obvious error.
> Here is a testing dataset (pb1.csv) and commented code (pb1.R) with
> the problems.
> Thanks for any help.
Nothing attached. In all likelihood, had you given these file names
with extensions of .txt, they would have made it through the server
filter.
> Marc
>
> ________________________________
> From: Uwe Ligges <ligges at statistik.tu-dortmund.de>
> To: Marc Carpentier <marc.carpentier at ymail.com>
> Cc: r-help at r-project.org
> Sent: Tue, 4 May 2010, 13:52:31
> Subject: Re: [R] aregImpute (Hmisc package) : error in matxv(X,
> xcof)...
>
> Having reproducible examples including data and the actual call that
> lead to the error would be really helpful to be able to help.
>
> Uwe Ligges
>
> On 04.05.2010 12:23, Marc Carpentier wrote:
>> Dear r-help list,
>> I'm trying to use multiple imputation for my MSc thesis.
>> Having good examples using the Hmisc package, I tried the
>> aregImpute function. But with my own dataset, I have the following
>> error :
>>
>> Error in matxv(X, xcof) : columns in a (51) must be <= length of
>> b (50)
>> In addition: Warning message:
>> In f$xcoef[, 1] * f$xcenter :
>> longer object length is not a multiple of shorter object length
>>
>> I first tried to "I()" all the continuous variables but the
same
>> error occurs with different numbers :
>> Error in matxv(X, xcof) : columns in a (37) must be <= length of
>> b (36)...
>>
>> I'm a student and I'm not familiar with possible constraints in
a
>> dataset to be effectively imputed. I just found this previous
>> message, where the author's autoreply suggests that particular
>> distributions might be an explanation of algorithms failure :
>> http://www.mail-archive.com/r-help at r-project.org/msg53534.html
>>
>> Does anyone know if these messages reflect a specific problem in my
>> dataset ? And if the number mentioned might give me a hint on which
>> column to look at (and maybe transform or ignore for the
>> imputation) ?
>> Thanks for any advice you might have.
>>
>> Marc
>>
>
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 71
Date: Tue, 04 May 2010 14:48:17 -0400
From: "Cedrick W. Johnson" <cedrick at cedrickjohnson.com>
To: Galois Theory <tgalois at gmail.com>
Cc: R-help at r-project.org
Subject: Re: [R] unsubscribe
Message-ID: <4BE06BF1.8020100 at cedrickjohnson.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
https://stat.ethz.ch/mailman/listinfo/r-help
On 5/4/2010 2:25 PM, Galois Theory wrote:
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 72
Date: Tue, 4 May 2010 12:07:43 -0700 (PDT)
From: pdb <philb at philbrierley.com>
To: r-help at r-project.org
Subject: [R] randomforests - how to classify
Message-ID: <1273000063927-2126166.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I'm experimenting with random forests and want to perform a binary
classification task.
I've tried some of the sample codes in the help files and things run, but I
get a message to the effect 'you don't have very many unique values in
the
target - are you sure you want to do regression?' (sorry, don't know
exact
message but r is busy now so can't check).
In reading the help files I see 2 examples, one for classification and one
for regression. To the uninformed - these don't seem much different to each
other. How does rf know to do regression or classification?
## Classification:
##data(iris)
set.seed(71)
iris.rf <- randomForest(Species ~ ., data=iris, importance=TRUE,
proximity=TRUE)
## Regression:
## data(airquality)
set.seed(131)
ozone.rf <- randomForest(Ozone ~ ., data=airquality, mtry=3,
importance=TRUE, na.action=na.omit)
My target variable only has 2 values - why does it want to do regression?
I've entered code just like that in the classification example above. Also
when it asks me 'are you sure you want to do regression' - how do I say
'NO,
do classification please'?
--
View this message in context:
http://r.789695.n4.nabble.com/randomforests-how-to-classify-tp2126166p2126166.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 73
Date: Tue, 4 May 2010 15:25:34 -0400
From: Fahim Md <fahim.md at gmail.com>
To: r-help at r-project.org
Subject: [R] installing a package in linux
Message-ID:
<r2l90e7dca51005041225uc5748d95pa85a5dbcd85d322 at mail.gmail.com>
Content-Type: text/plain
I recently started using ubuntu 9.10 and I am using gedit editor and R
plugin for writing R code. To install any package I need to do:
$ install.packages()
//window pop-up for mirror selection
//then another window pop up for package selection.
After this, as long as I do not exit, the functions of the newly installed
packages are available.
After I exit from R (I usually answer 'no' to the 'save workspace' option),
if I want to work in R again, I have to repeat the package installation.
This reinstallation problem was not there on Windows (I was using Tinn-R as
the editor; I just needed to call require('package-name') to use its functions).
Is there any way to avoid reinstalling the packages?
thanks
--Fahim
[[alternative HTML version deleted]]
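A short editorial sketch (an assumption about the likely cause, not a reply
from the thread): install.packages() is only needed once per library; later
sessions only need library() or require(). If packages still disappear, check
where they were installed with .libPaths().
# Editorial sketch; the package name is just an example.
install.packages("ggplot2")   # one-time download and install
.libPaths()                   # the library directories R searches
library(ggplot2)              # all that is needed in every later session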
------------------------------
Message: 74
Date: Tue, 4 May 2010 12:27:46 -0700
From: Changbin Du <changbind at gmail.com>
To: pdb <philb at philbrierley.com>
Cc: r-help at r-project.org
Subject: Re: [R] randomforests - how to classify
Message-ID:
<q2g840048831005041227v90ac747bi251d5b04d9ed7498 at mail.gmail.com>
Content-Type: text/plain
Use as.factor(target) ~ ., data = your.data, ... in the randomForest call.
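A minimal sketch of that suggestion (editorial, with a made-up 0/1 target):
coercing the response to a factor makes randomForest fit a classification
forest rather than a regression one.
# Editorial sketch; data are simulated for illustration only.
library(randomForest)
set.seed(1)
dat <- data.frame(y = rbinom(100, 1, 0.5), x1 = rnorm(100), x2 = rnorm(100))
rf <- randomForest(as.factor(y) ~ ., data = dat)
rf$type   # "classification"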
On Tue, May 4, 2010 at 12:07 PM, pdb <philb at philbrierley.com> wrote:
>
> Hi,
>
> I'm experimenting with random forests and want to perform a binary
> classification task.
> I've tried some of the sample codes in the help files and things run,
but I
> get a message to the effect 'you don't have very many unique values
in the
> target - are you sure you want to do regression?' (sorry, don't
know exact
> message but r is busy now so can't check).
>
>
> In reading the help files I see 2 examples, one for classification and one
> for regression. To the uninformed - these don't seem much different to
each
> other. How does rf know to do regression or classification?
>
> ## Classification:
> ##data(iris)
> set.seed(71)
> iris.rf <- randomForest(Species ~ ., data=iris, importance=TRUE,
> proximity=TRUE)
>
>
> ## Regression:
> ## data(airquality)
> set.seed(131)
> ozone.rf <- randomForest(Ozone ~ ., data=airquality, mtry=3,
> importance=TRUE, na.action=na.omit)
>
>
> My target variable only has 2 values - why does it want to do regression?
> I've entered code just like that in the classification example above.
Also
> when it asks me 'are you sure you want to do regression' - how do I
say
> 'NO,
> do classification please'?
>
>
>
>
> --
> View this message in context:
>
http://r.789695.n4.nabble.com/randomforests-how-to-classify-tp2126166p2126166.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Sincerely,
Changbin
--
[[alternative HTML version deleted]]
------------------------------
Message: 75
Date: Tue, 4 May 2010 15:33:06 -0400
From: ivo welch <ivowel at gmail.com>
To: r-help <r-help at stat.math.ethz.ch>
Subject: [R] R formula language---a min and max function?
Message-ID:
<n2l50d1c22d1005041233ncebbac94vb8aedef1783b4439 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear R experts---I would like to estimate a non-linear least squares
expression that looks something like
y ~ a+b*min(c,x)
where a, b, and c are the three parameters. How do I define a min
function in the formula language of R? Advice appreciated.
sincerely,
/iaw
------------------------------
Message: 76
Date: Tue, 4 May 2010 15:40:46 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: ivo welch <ivowel at gmail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID: <FDB0214C-BFB4-4DC2-915D-F7E5AA73D554 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed
On May 4, 2010, at 3:33 PM, ivo welch wrote:
> Dear R experts---I would like to estimate a non-linear least squares
> expression that looks something like
>
> y ~ a+b*min(c,x)
>
> where a, b, and c are the three parameters. how do I define a min
> function in the formula language of R? advice appreciated.
?pmin
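pmin is the parallel (element-wise) minimum, which is why it can be used
inside a model formula; min would collapse its arguments to a single number.
A quick illustration:
pmin(10, 1:20)   # caps each element of the sequence at 10
min(10, 1:20)    # returns just the single value 1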
>
> sincerely,
>
> /iaw
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 77
Date: Tue, 4 May 2010 15:52:09 -0400
From: ivo welch <ivo.welch at gmail.com>
To: David Winsemius <dwinsemius at comcast.net>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID:
<z2t50d1c22d1005041252r50c91b47v92a8cda638a5de63 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
thank you, david. indeed. works great (almost). an example for
anyone else googling this in the future:
> x=1:20
> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
0.002142 : 2 3 10
0.002115 : 2.004 3.000 10.000
0.002114 : 2.006 2.999 10.001
0.002084 : 2.005 2.999 10.000
...
0.002079 : 2.005 2.999 10.000
Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c = 10), :
step factor 0.000488281 reduced below 'minFactor' of 0.000976562
strange error, but unrelated to my question. will figure this one out next.
regards,
/iaw
On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius at
comcast.net> wrote:
>
> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>
>> Dear R experts---I would like to estimate a non-linear least squares
>> expression that looks something like
>>
>> ?y ~ a+b*min(c,x)
>>
>> where a, b, and c are the three parameters. ?how do I define a min
>> function in the formula language of R? ?advice appreciated.
>
> ?pmin
>
>>
>> sincerely,
>>
>> /iaw
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> David Winsemius, MD
> West Hartford, CT
>
>
------------------------------
Message: 78
Date: Tue, 4 May 2010 15:57:05 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: ivo welch <ivo.welch at gmail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID:
<u2g971536df1005041257s27e7ee8bg567a6284a244e20f at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
You need to use set.seed first so that your example is reproducible.
Using set.seed(1) there is no error:
> set.seed(1)
> x=1:20
> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
0.001657260 : 2 3 10
0.00153709 : 1.998312 3.000547 9.999568
0.001509616 : 1.996222 3.001117 9.998197
0.001509616 : 1.996222 3.001117 9.998197
> r1
Nonlinear regression model
model: y ~ a + b * pmin(c, x)
data: parent.frame()
a b c
1.996 3.001 9.998
residual sum-of-squares: 0.001510
Number of iterations to convergence: 3
Achieved convergence tolerance: 3.195e-09
On Tue, May 4, 2010 at 3:52 PM, ivo welch <ivo.welch at gmail.com>
wrote:
> thank you, david.  indeed.  works great (almost).  an example for
> anyone else googling this in the future:
>
>> x=1:20
>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
> 0.002142 : ? 2 ?3 10
> 0.002115 : ? 2.004 ?3.000 10.000
> 0.002114 : ? 2.006 ?2.999 10.001
> 0.002084 : ? 2.005 ?2.999 10.000
> ...
> 0.002079 : ? 2.005 ?2.999 10.000
> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c = 10), ?:
> ?step factor 0.000488281 reduced below 'minFactor' of 0.000976562
>
> strange error, but unrelated to my question. ?will figure this one out
next.
>
> regards,
>
> /iaw
>
>
> On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius at
comcast.net> wrote:
>>
>> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>>
>>> Dear R experts---I would like to estimate a non-linear least
squares
>>> expression that looks something like
>>>
>>> ?y ~ a+b*min(c,x)
>>>
>>> where a, b, and c are the three parameters. ?how do I define a min
>>> function in the formula language of R? ?advice appreciated.
>>
>> ?pmin
>>
>>>
>>> sincerely,
>>>
>>> /iaw
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>> David Winsemius, MD
>> West Hartford, CT
>>
>>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 79
Date: Tue, 4 May 2010 15:59:58 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: ivo welch <ivo.welch at gmail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID: <44C0857E-519E-446F-8606-DD737E87A384 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On May 4, 2010, at 3:52 PM, ivo welch wrote:
> thank you, david. indeed. works great (almost). an example for
> anyone else googling this in the future:
>
>> x=1:20
>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
> 0.002142 : 2 3 10
> 0.002115 : 2.004 3.000 10.000
> 0.002114 : 2.006 2.999 10.001
> 0.002084 : 2.005 2.999 10.000
> ...
> 0.002079 : 2.005 2.999 10.000
> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c =
> 10), :
> step factor 0.000488281 reduced below 'minFactor' of 0.000976562
>
> strange error, but unrelated to my question. will figure this one
> out next.
I get no error. May be difficult to sort out unless you can reproduce
after setting a random seed.
> x=1:20
> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
0.001560045 : 2 3 10
0.001161253 : 2.003824 2.998973 10.000388
0.001161253 : 2.003824 2.998973 10.000388
--
David.
>
> regards,
>
> /iaw
>
>
> On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius at
comcast.net
> > wrote:
>>
>> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>>
>>> Dear R experts---I would like to estimate a non-linear least
squares
>>> expression that looks something like
>>>
>>> y ~ a+b*min(c,x)
>>>
>>> where a, b, and c are the three parameters. how do I define a min
>>> function in the formula language of R? advice appreciated.
>>
>> ?pmin
>>
>>>
>>> sincerely,
>>>
>>> /iaw
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>> David Winsemius, MD
>> West Hartford, CT
>>
>>
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 80
Date: Tue, 04 May 2010 16:09:23 -0400
From: Michael Friendly <friendly at yorku.ca>
To: R-Help <r-help at stat.math.ethz.ch>
Subject: [R] rgl: plane3d or abline() analog
Message-ID: <4BE07EF3.9060005 at yorku.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
For use with rgl, I'm looking for a function to draw a plane in an rgl
scene that would function
sort of like abline(a, b) does in base graphics, where abline(0, 1)
draws a line of unit slope through
the origin. Analogously, I'd like to have a plane3d function, so that
plane3d(0, 1, 1) draws a
plane through the origin with unit slopes in x & y and plane3d(3, 0, 0)
draws a horizontal plane
at z=3.
I see that scatterplot3d in the scatterplot3d package returns a
plane3d() *function* for a given
plot. I could probably try to adapt this, but before I do, I wonder if
something like this for
rgl exists that I haven't found.
-Michael
--
Michael Friendly Email: friendly AT yorku DOT ca
Professor, Psychology Dept.
York University Voice: 416 736-5115 x66249 Fax: 416 736-5814
4700 Keele Street http://www.math.yorku.ca/SCS/friendly.html
Toronto, ONT M3J 1P3 CANADA
------------------------------
Message: 81
Date: Tue, 4 May 2010 16:19:34 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Michael Friendly <friendly at yorku.ca>
Cc: R-Help <r-help at stat.math.ethz.ch>
Subject: Re: [R] rgl: plane3d or abline() analog
Message-ID: <B2DB7B71-621F-4EEC-B671-1F3FF9EFD7C2 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On May 4, 2010, at 4:09 PM, Michael Friendly wrote:
> For use with rgl, I'm looking for a function to draw a plane in an
> rgl scene that would function
> sort of like abline(a, b) does in base graphics, where abline(0, 1)
> draws a line of unit slope through
> the origin. Analogously, I'd like to have a plane3d function, so
> that plane3d(0, 1, 1) draws a
> plane through the origin with unit slopes in x & y and plane3d(3, 0,
> 0) draws a horizontal plane
> at z=3.
>
> I see that scatterplot3d in the scatterplot3d package returns a
> plane3d() *function* for a given
> plot. I could probably try to adapt this, but before I do, I wonder
> if something like this for
> rgl exists that I haven't found.
?quads3d
>
> -Michael
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 82
Date: Tue, 4 May 2010 14:20:59 -0600
From: Greg Snow <Greg.Snow at imail.org>
To: Richard and Barbara Males <rbmales at gmail.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] generating correlated random variables from different
distributions
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC6335CD080A at LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="iso-8859-1"
Transforming variables will generally change the correlation, so your method
will give you correlated variables, but not exactly at the correlations you
specify (though with some trial and error you may be able to get close). If you
are happy with those results, then your problem is solved; if you need more
control over the relationship, then something like a Gibbs sampler may be what
you need.
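A rough sketch of the inverse-CDF (Gaussian copula) route Dick describes
below; the marginal distributions and parameters here are only illustrative,
and the achieved Pearson correlations will be close to, but not exactly, the
target matrix:
set.seed(1)
corMat <- matrix(c(1, 0.6, 0.3,
                   0.6, 1, 0.5,
                   0.3, 0.5, 1), 3, 3)
z <- matrix(rnorm(10000 * 3), ncol = 3) %*% chol(corMat)  # correlated N(0,1)
u <- pnorm(z)                                             # correlated U(0,1)
cost     <- qnorm(u[, 1], mean = 20000, sd = 5000)
wetlands <- qunif(u[, 2], min = 10, max = 14)
birds    <- qgamma(u[, 3], shape = 2, scale = 3)
cor(cbind(cost, wetlands, birds))   # close to corMat, shifted slightly by the margins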
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: Richard and Barbara Males [mailto:rbmales at gmail.com]
> Sent: Sunday, May 02, 2010 1:37 PM
> To: Greg Snow
> Cc: r-help at r-project.org
> Subject: Re: [R] generating correlated random variables from different
> distributions
>
> Thank you for your reply. The application is a Monte Carlo simulation
> in environmental planning. Different possible remediation measures
> have different costs, and produce different results. For example, a
> $20,000 plan may add 10 acres of wetlands and 12 acres of bird
> habitat. The desire is to describe the uncertainty in the cost and
> the outputs (acres of wetlands, acres of bird habitat) by
> distributions. The cost may be described by a normal distribution,
> mean $20k, $5k SD, and the 12 acres of birds may be described by a
> uniform distribution (10 to 14). [These are just examples, not
> representative of a real problem]. We may know (or think) that
> wetlands and bird habitat are positively correlated (0.6), and that
> there is a stronger correlation of both with cost (0.85). So the
> effort is to generate, through MCS, values at each iteration of cost,
> acres of wetland, and acres of bird habitat, such that the resultant
> values give the same correlation, and the values of cost, bird habitat
> and wetland habitat return the input distributions. The overall
> desire is compare different remediation measures, taking into account
> uncertainty in costs and results.
>
> One possible approach (although I have not tried it yet, but will do
> so in the near future) is to generate, for each iteration, three
> independent (0,1) random variables, correlate them via the Cholesky
> approach, and use them as input to the inverse normal, inverse
> uniform, etc. to get the three variables for each iteration. The
> primary distributions of interest are normal, uniform, triangular,
> gamma, and arbitrary cdf, so this approach seems plausible in that
> inverse distributions are readily available.
>
> Thanks in advance.
>
> Dick Males
> Cincinnati, OH, USA
>
> On Thu, Apr 29, 2010 at 12:31 PM, Greg Snow <Greg.Snow at imail.org>
> wrote:
> > The method you are using (multiply by cholesky) works for normal
> > distributions, but not necessarily for others (if you want different
> > means/sd, then add/multiply after transforming).
> >
> > For other distributions this process can sometimes give the
> > correlation you want, but may change the variable(s) to no longer have
> > the desired distribution.
> >
> > The short answer to your question is "It Depends", the full long
> > answer could fill a full semester course.  If you tell us more of your
> > goal we may be able to give a more useful answer.  The copula package
> > is one possibility.  If you know the conditional distribution of each
> > variable given the others then you can use gibbs sampling.
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > Statistical Data Center
> > Intermountain Healthcare
> > greg.snow at imail.org
> > 801.408.8111
> >
> >
> >> -----Original Message-----
> >> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> >> project.org] On Behalf Of Richard and Barbara Males
> >> Sent: Thursday, April 29, 2010 9:18 AM
> >> To: r-help at r-project.org
> >> Subject: [R] generating correlated random variables from different
> >> distributions
> >>
> >> I need to generate a set of correlated random variables for a Monte
> >> Carlo simulation.  The solutions I have found
> >> (http://www.stat.uiuc.edu/stat428/cndata.html,
> >> http://www.sitmo.com/doc/Generating_Correlated_Random_Numbers), using
> >> Cholesky Decomposition, seem to work only if the variables come from
> >> the same distribution with the same parameters.  My situation is that
> >> each variable may be described by a different distribution (or
> >> different parameters of the same distribution).  This approach does
> >> not seem to work, see code and results below.  Am I missing something
> >> here?  My math/statistics is not very good, will I need to generate
> >> correlated uniform random variables on (0,1) and then use the inverse
> >> distributions to get the desired results I am looking for?  That is
> >> acceptable, but I would prefer to just generate the individual
> >> distributions and then correlate them.  Any advice much appreciated.
> >> Thanks in advance
> >>
> >> R. Males
> >> Cincinnati, Ohio, USA
> >>
> >> Sample Code:
> >> # Testing Correlated Random Variables
> >>
> >> # reference
> >> http://www.sitmo.com/doc/Generating_Correlated_Random_Numbers
> >> # reference http://www.stat.uiuc.edu/stat428/cndata.html
> >> # create the correlation matrix
> >> corMat=matrix(c(1,0.6,0.3,0.6,1,0.5,0.3,0.5,1),3,3)
> >> cholMat=chol(corMat)
> >> # create the matrix of random variables
> >> set.seed(1000)
> >> nValues=10000
> >>
> >> # generate some random values
> >>
> >> matNormalAllSame=cbind(rnorm(nValues),rnorm(nValues),rnorm(nValues))
> >> matNormalDifferent=cbind(rnorm(nValues,1,1.5),rnorm(nValues,2,0.5),rnorm(nValues,6,1.8))
> >> matUniformAllSame=cbind(runif(nValues),runif(nValues),runif(nValues))
> >> matUniformDifferent=cbind(runif(nValues,1,1.5),runif(nValues,2,3.5),runif(nValues,6,10.8))
> >>
> >> # bind to a matrix
> >> print("correlation Matrix")
> >> print(corMat)
> >> print("Cholesky Decomposition")
> >> print (cholMat)
> >>
> >> # test same normal
> >>
> >> resultMatNormalAllSame=matNormalAllSame%*%cholMat
> >> print("correlation matNormalAllSame")
> >> print(cor(resultMatNormalAllSame))
> >>
> >> # test different normal
> >>
> >> resultMatNormalDifferent=matNormalDifferent%*%cholMat
> >> print("correlation matNormalDifferent")
> >> print(cor(resultMatNormalDifferent))
> >>
> >> # test same uniform
> >> resultMatUniformAllSame=matUniformAllSame%*%cholMat
> >> print("correlation matUniformAllSame")
> >> print(cor(resultMatUniformAllSame))
> >>
> >> # test different uniform
> >> resultMatUniformDifferent=matUniformDifferent%*%cholMat
> >> print("correlation matUniformDifferent")
> >> print(cor(resultMatUniformDifferent))
> >>
> >> and results
> >>
> >> [1] "correlation Matrix"
> >> ? ? ?[,1] [,2] [,3]
> >> [1,] ?1.0 ?0.6 ?0.3
> >> [2,] ?0.6 ?1.0 ?0.5
> >> [3,] ?0.3 ?0.5 ?1.0
> >> [1] "Cholesky Decomposition"
> >> ? ? ?[,1] [,2] ? ? ?[,3]
> >> [1,] ? ?1 ?0.6 0.3000000
> >> [2,] ? ?0 ?0.8 0.4000000
> >> [3,] ? ?0 ?0.0 0.8660254
> >> [1] "correlation matNormalAllSame" <== ok
> >> ? ? ? ? ? [,1] ? ? ?[,2] ? ? ?[,3]
> >> [1,] 1.0000000 0.6036468 0.3013823
> >> [2,] 0.6036468 1.0000000 0.5005440
> >> [3,] 0.3013823 0.5005440 1.0000000
> >> [1] "correlation matNormalDifferent" <== no good
> >> ? ? ? ? ? [,1] ? ? ?[,2] ? ? ?[,3]
> >> [1,] 1.0000000 0.9141472 0.2676162
> >> [2,] 0.9141472 1.0000000 0.2959178
> >> [3,] 0.2676162 0.2959178 1.0000000
> >> [1] "correlation matUniformAllSame" <== ok
> >> ? ? ? ? ? [,1] ? ? ?[,2] ? ? ?[,3]
> >> [1,] 1.0000000 0.5971519 0.2959195
> >> [2,] 0.5971519 1.0000000 0.5011267
> >> [3,] 0.2959195 0.5011267 1.0000000
> >> [1] "correlation matUniformDifferent" <== no good
> >> ? ? ? ? ? [,1] ? ? ?[,2] ? ? ?[,3]
> >> [1,] 1.0000000 0.2312000 0.0351460
> >> [2,] 0.2312000 1.0000000 0.1526293
> >> [3,] 0.0351460 0.1526293 1.0000000
> >> >
> >>
> >> ______________________________________________
> >> R-help at r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide http://www.R-project.org/posting-
> >> guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >
------------------------------
Message: 83
Date: Tue, 04 May 2010 22:23:35 +0200
From: Tobias Verbeke <tobias.verbeke at openanalytics.eu>
To: HB8 <hb8hb8 at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Agreement
Message-ID: <4BE08247.2000500 at openanalytics.eu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi Grégoire,
HB8 wrote:
> Has Lawrence Lin's code been ported to R?
> http://tigger.uic.edu/~hedayat/sascode.html
One of Lin's methods (CCC) is available in function
epi.ccc of the epiR package.
Best,
Tobias
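A small sketch of that with made-up paired measurements (if I recall the
interface correctly, the estimate and its confidence limits come back in the
rho.c component):
library(epiR)
x <- rnorm(50)
y <- x + rnorm(50, sd = 0.2)
ccc <- epi.ccc(x, y)
ccc$rho.c   # Lin's concordance correlation coefficient with its 95% limits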
------------------------------
Message: 84
Date: Tue, 4 May 2010 16:37:00 -0400
From: Alex Chelminsky <achelminsky at csc.com>
To: r-help at r-project.org
Subject: [R] Error when invoking x11()
Message-ID:
<OFC9BC4CDD.C251E036-ON85257719.0070FE69-85257719.00714063 at csc.com>
Content-Type: text/plain; charset=US-ASCII
I'm running an instance of R under Solaris 10.(sun4u sparc). When I invoke
the x11() interface, I get the following error:
> x11()
Error in x11() : X11 module cannot be loaded
In addition: Warning message:
In x11() :
unable to load shared library '/usr/local/lib/R/modules//R_X11.so':
ld.so.1: R: fatal: libpangocairo-1.0.so.0: open failed: No such file or
directory)
The module is not present in my environment. Is there anything I need to
install for this to work?
Thanks
Alexander Chelminsky
Principal
CSC
GBS | p: +1 781 290 1620 | f: +1 781 890 1208 | m: +1 617 650 5453 |
achelminsky at csc.com | www.csc.com
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery.
NOTE: Regardless of content, this e-mail shall not operate to bind CSC to
any order or other contract unless pursuant to explicit written agreement
or government initiative expressly permitting the use of e-mail for such
purpose.
------------------------------
Message: 85
Date: Tue, 4 May 2010 23:02:00 +0200
From: Joris Meys <jorismeys at gmail.com>
To: jim holtman <jholtman at gmail.com>
Cc: R mailing list <r-help at r-project.org>
Subject: Re: [R] Avoiding for-loop for splitting vector into
subvectors based on positions
Message-ID:
<p2ob5e1ab9a1005041402ybadc548eud702c2a2d0b7e157 at mail.gmail.com>
Content-Type: text/plain
Thanks, works nicely. I still have to do some timing to see how big the
improvement is, but I certainly learnt something again.
Attentive readers might have noticed that my initial code contains an error:
tmp <- x[pos2[i]:pos2[i+1]]
should be:
tmp <- x[pos2[i]:(pos2[i+1]-1)]
of course...
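For completeness, the split() version from Jim's suggestion below, with a
function applied to each piece (length() stands in for the real calculation):
x <- 1:10
pos <- c(1, 4, 7)
pat <- rep(seq_along(pos), times = diff(c(pos, length(x) + 1)))
sapply(split(x, pat), length)
#  1  2  3
#  3  3  4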
On Tue, May 4, 2010 at 5:50 PM, jim holtman <jholtman at gmail.com> wrote:
> Try this:
>
> > x <- 1:10
> > pos <- c(1,4,7)
> > pat <- rep(seq_along(pos), times=diff(c(pos, length(x) + 1)))
> > split(x, pat)
> $`1`
> [1] 1 2 3
> $`2`
> [1] 4 5 6
> $`3`
> [1] 7 8 9 10
>
>
>
> On Tue, May 4, 2010 at 11:29 AM, Joris Meys <jorismeys at gmail.com>
wrote:
>
>> Dear all,
>>
>> I'm trying to optimize code and want to avoid for-loops as much as
>> possible.
>> I'm applying a calculation on subvectors from a big one, and I get
the
>> subvectors by using a vector of starting positions:
>>
>> x <- 1:10
>> pos <- c(1,4,7)
>> n <- length(x)
>>
>> I try to do something like this :
>> pos2 <- c(pos, n+1)
>>
>> out <- c()
>> for(i in 1:n){
>> tmp <- x[pos2[i]:pos2[i+1]]
>> out <- c(out, length(tmp))
>> }
>>
>> Never mind the length function, I apply a far more complicated one.
It's
>> about the use of the indices in the for-loop. I didn't see any way
of
>> doing
>> that with an apply, unless there is a very convenient way of splitting
my
>> vector in a list of the subvectors or so.
>>
>> Anybody an idea?
>> Cheers
>> --
>> Joris Meys
>> Statistical Consultant
>>
>> Ghent University
>> Faculty of Bioscience Engineering
>> Department of Applied mathematics, biometrics and process control
>>
>> Coupure Links 653
>> B-9000 Gent
>>
>> tel : +32 9 264 59 87
>> Joris.Meys at Ugent.be
>> -------------------------------
>> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>>
http://www.R-project.org/posting-guide.html<http://www.r-project.org/posting-guide.html>
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Jim Holtman
> Cincinnati, OH
> +1 513 646 9390
>
> What is the problem that you are trying to solve?
>
--
Joris Meys
Statistical Consultant
Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control
Coupure Links 653
B-9000 Gent
tel : +32 9 264 59 87
Joris.Meys at Ugent.be
-------------------------------
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
[[alternative HTML version deleted]]
------------------------------
Message: 86
Date: Tue, 04 May 2010 23:05:17 +0200
From: Ruihong Huang <ruihong.huang at wiwi.hu-berlin.de>
To: r-help at r-project.org
Subject: [R] Two Questions on R (call by reference and
pre-compilation)
Message-ID: <4BE08C0D.8020500 at wiwi.hu-berlin.de>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
Hi All,
I have two questions on R. Could you please explain them to me? Thank you!
1) When calling a function, R typically copies the values to the formal
arguments (call by value). This is very costly if I would like to pass a
huge data set to a function. Are there any situations in which R doesn't
copy the data, besides passing the data inside an environment object?
2) Does R pre-compile the objective function to binary when running
"optim"? I have found that R's "optim" is much slower than MATLAB's
"fmincon" function. I don't know whether MATLAB does any pre-compilation
of the script for the objective function or not, but perhaps we could
increase R's performance with some sort of pre-compilation at run time.
Thanks in advance.
Best Regards,
Ruihong
------------------------------
Message: 87
Date: Tue, 4 May 2010 14:07:55 -0700 (PDT)
From: pdb <philb at philbrierley.com>
To: r-help at r-project.org
Subject: [R] timing a function
Message-ID: <1273007275946-2126319.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I want to time how long a function takes to execute. Any clues on what to
search for to achieve this?
Thanks in advance.
--
View this message in context:
http://r.789695.n4.nabble.com/timing-a-function-tp2126319p2126319.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 88
Date: Tue, 04 May 2010 23:17:45 +0200
From: mohamed.lajnef at inserm.fr
To: pdb <philb at philbrierley.com>
Cc: r-help at r-project.org
Subject: Re: [R] timing a function
Message-ID: <20100504231745.iyx5rb8lz44o04ow at imp.inserm.fr>
Content-Type: text/plain; charset=UTF-8; DelSp="Yes";
format="flowed"
Hi,
?proc.time() for more help
regards
Ml
pdb <philb at philbrierley.com> wrote:
>
> Hi,
> I want to time how long a function takes to execute. Any clues on what to
> search for to achieve this?
>
> Thanks in advance.
> --
> View this message in context:
> http://r.789695.n4.nabble.com/timing-a-function-tp2126319p2126319.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 89
Date: Tue, 4 May 2010 17:25:19 -0400
From: ivo welch <ivo.welch at gmail.com>
To: David Winsemius <dwinsemius at comcast.net>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID:
<x2j50d1c22d1005041425s3b7e4475w9f8af224ef7424e9 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
thank you, david and gabor. very much appreciated. I should have
thought of setting the seed. this was only an example, of course.
alas, such intermittent errors could still be of concern to me,
because I need to simulate this nls() to find out its properties under
the NULL, so I can't easily tolerate errors. fortunately, I had the
window still open, so getting my y's out was easy, and the rounded
figures produce the same nls error.
> cbind(x,round(y,3))
x y
[1,] 1 5.017
[2,] 2 7.993
[3,] 3 11.014
[4,] 4 13.998
[5,] 5 17.003
[6,] 6 19.977
[7,] 7 23.011
[8,] 8 25.991
[9,] 9 29.003
[10,] 10 32.014
[11,] 11 31.995
[12,] 12 32.004
[13,] 13 32.012
[14,] 14 31.994
[15,] 15 31.998
[16,] 16 32.000
[17,] 17 32.009
[18,] 18 31.995
[19,] 19 32.000
[20,] 20 31.982
> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
0.002138 : 2 3 10
0.002117 : 2.004 3.000 9.999
0.002113 : 2.006 2.999 10.001
0.002082 : 2.005 2.999 10.000
0.002077 : 2.005 2.999 10.000
0.002077 : 2.005 2.999 10.000
Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c = 10), :
step factor 0.000488281 reduced below 'minFactor' of 0.000976562
I really don't care about this example, of course---only about
learning how to keep nls() from dying on me. So any advice would be
appreciated.
regards,
/iaw
----
Ivo Welch (ivo.welch at brown.edu, ivo.welch at gmail.com)
On Tue, May 4, 2010 at 3:59 PM, David Winsemius <dwinsemius at
comcast.net> wrote:
>
> On May 4, 2010, at 3:52 PM, ivo welch wrote:
>
>> thank you, david. ?indeed. ?works great (almost). ?an example for
>> anyone else googling this in the future:
>>
>>> x=1:20
>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
>>
>> 0.002142 : ? 2 ?3 10
>> 0.002115 : ? 2.004 ?3.000 10.000
>> 0.002114 : ? 2.006 ?2.999 10.001
>> 0.002084 : ? 2.005 ?2.999 10.000
>> ...
>> 0.002079 : ? 2.005 ?2.999 10.000
>> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c =
10),
>> ?:
>> ?step factor 0.000488281 reduced below 'minFactor' of
0.000976562
>>
>> strange error, but unrelated to my question. ?will figure this one out
>> next.
>
> I get no error. May be difficult to sort out unless you can reproduce after
> setting a random seed.
>
>> x=1:20
>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
> 0.001560045 : ? 2 ?3 10
> 0.001161253 : ? 2.003824 ?2.998973 10.000388
> 0.001161253 : ? 2.003824 ?2.998973 10.000388
>
> --
> David.
>
>>
>> regards,
>>
>> /iaw
>>
>>
>> On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius at
comcast.net>
>> wrote:
>>>
>>> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>>>
>>>> Dear R experts---I would like to estimate a non-linear least
squares
>>>> expression that looks something like
>>>>
>>>> ?y ~ a+b*min(c,x)
>>>>
>>>> where a, b, and c are the three parameters. ?how do I define a
min
>>>> function in the formula language of R? ?advice appreciated.
>>>
>>> ?pmin
>>>
>>>>
>>>> sincerely,
>>>>
>>>> /iaw
>>>>
>>>> ______________________________________________
>>>> R-help at r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide
>>>> http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained, reproducible
code.
>>>
>>> David Winsemius, MD
>>> West Hartford, CT
>>>
>>>
>
> David Winsemius, MD
> West Hartford, CT
>
>
------------------------------
Message: 90
Date: Tue, 4 May 2010 23:36:25 +0200
From: Joris Meys <jorismeys at gmail.com>
To: pdb <philb at philbrierley.com>
Cc: r-help at r-project.org
Subject: Re: [R] timing a function
Message-ID:
<u2xb5e1ab9a1005041436m51024a7fsc2a5c3c9ea0dbf43 at mail.gmail.com>
Content-Type: text/plain
?system.time can help too.
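For example (the function here is just a stand-in):
f <- function(n) { s <- 0; for (i in seq_len(n)) s <- s + sqrt(i); s }
system.time(f(1e6))                           # user/system/elapsed for one call
t0 <- proc.time(); f(1e6); proc.time() - t0   # the same, done by hand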
On Tue, May 4, 2010 at 11:07 PM, pdb <philb at philbrierley.com> wrote:
>
> Hi,
> I want to time how long a function takes to execute. Any clues on what to
> search for to achieve this?
>
> Thanks in advance.
> --
> View this message in context:
> http://r.789695.n4.nabble.com/timing-a-function-tp2126319p2126319.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Joris Meys
Statistical Consultant
Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control
Coupure Links 653
B-9000 Gent
tel : +32 9 264 59 87
Joris.Meys at Ugent.be
-------------------------------
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
[[alternative HTML version deleted]]
------------------------------
Message: 91
Date: Tue, 4 May 2010 17:49:28 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: ivo welch <ivo.welch at gmail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID: <8EC04414-B604-4EC1-BD9B-15038EF86087 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On May 4, 2010, at 5:25 PM, ivo welch wrote:
> thank you, david and gabor. very much appreciated. I should have
> thought of setting the seed. this was only an example, of course.
> alas, such intermittent errors could still be of concern to me,
> because I need to simulate this nls() to find out its properties under
> the NULL, so I can't easily tolerate errors. fortunately, I had the
> window still open, so getting my y's out was easy, and the rounded
> figures produce the same nls error.
There would of course be try()
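A minimal sketch of wrapping the fit for a simulation loop (using the x and y
from the example above):
fit <- try(nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c = 10)),
           silent = TRUE)
if (inherits(fit, "try-error")) NA else coef(fit)   # record NA and move on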
But ... Again, No error on my device:
> xy <- scan()
1: 1 5.017
3: 2 7.993
5: 3 11.014
7: 4 13.998
9: 5 17.003
11: 6 19.977
13: 7 23.011
15: 8 25.991
17: 9 29.003
19: 10 32.014
21: 11 31.995
23: 12 32.004
25: 13 32.012
27: 14 31.994
29: 15 31.998
31: 16 32.000
33: 17 32.009
35: 18 31.995
37: 19 32.000
39: 20 31.982
41:
Read 40 items
> xym <-matrix(xy, ncol=2, byrow=TRUE)
> colnames(xym)<-c("x","y")
> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
0.001505770 : 2 3 10
0.001361737 : 2.003063 2.999924 10.000135
Plotting predict(r1) confirmed a very close fit.
> sessionInfo()
R version 2.10.1 RC (2009-12-09 r50695)
x86_64-apple-darwin9.8.0
locale:
[1] en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] misc3d_0.7-0 rgl_0.91 spatstat_1.18-3 deldir_0.0-12
[5] mgcv_1.6-1 ks_1.6.12 mvtnorm_0.9-9
KernSmooth_2.23-3
[9] lattice_0.18-3
loaded via a namespace (and not attached):
[1] grid_2.10.1 Matrix_0.999375-38 nlme_3.1-96
tools_2.10.1
>
>> cbind(x,round(y,3))
> x y
> [1,] 1 5.017
> [2,] 2 7.993
> [3,] 3 11.014
> [4,] 4 13.998
> [5,] 5 17.003
> [6,] 6 19.977
> [7,] 7 23.011
> [8,] 8 25.991
> [9,] 9 29.003
> [10,] 10 32.014
> [11,] 11 31.995
> [12,] 12 32.004
> [13,] 13 32.012
> [14,] 14 31.994
> [15,] 15 31.998
> [16,] 16 32.000
> [17,] 17 32.009
> [18,] 18 31.995
> [19,] 19 32.000
> [20,] 20 31.982
>
>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
> 0.002138 : 2 3 10
> 0.002117 : 2.004 3.000 9.999
> 0.002113 : 2.006 2.999 10.001
> 0.002082 : 2.005 2.999 10.000
> 0.002077 : 2.005 2.999 10.000
> 0.002077 : 2.005 2.999 10.000
> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c =
> 10), :
> step factor 0.000488281 reduced below 'minFactor' of 0.000976562
>
> I really don't care about this example, of course---only about
> learning how to avoid nls() from dying on me. so, any advice would be
> appreciated.
>
> regards,
>
> /iaw
>
>
>
> ----
> Ivo Welch (ivo.welch at brown.edu, ivo.welch at gmail.com)
>
>
>
> On Tue, May 4, 2010 at 3:59 PM, David Winsemius <dwinsemius at
comcast.net
> > wrote:
>>
>> On May 4, 2010, at 3:52 PM, ivo welch wrote:
>>
>>> thank you, david. indeed. works great (almost). an example for
>>> anyone else googling this in the future:
>>>
>>>> x=1:20
>>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10),
trace=TRUE )
>>>
>>> 0.002142 : 2 3 10
>>> 0.002115 : 2.004 3.000 10.000
>>> 0.002114 : 2.006 2.999 10.001
>>> 0.002084 : 2.005 2.999 10.000
>>> ...
>>> 0.002079 : 2.005 2.999 10.000
>>> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c
>>> = 10),
>>> :
>>> step factor 0.000488281 reduced below 'minFactor' of
0.000976562
>>>
>>> strange error, but unrelated to my question. will figure this one
>>> out
>>> next.
>>
>> I get no error. May be difficult to sort out unless you can
>> reproduce after
>> setting a random seed.
>>
>>> x=1:20
>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
>> 0.001560045 : 2 3 10
>> 0.001161253 : 2.003824 2.998973 10.000388
>> 0.001161253 : 2.003824 2.998973 10.000388
>>
>> --
>> David.
>>
>>>
>>> regards,
>>>
>>> /iaw
>>>
>>>
>>> On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius at
comcast.net
>>> >
>>> wrote:
>>>>
>>>> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>>>>
>>>>> Dear R experts---I would like to estimate a non-linear
least
>>>>> squares
>>>>> expression that looks something like
>>>>>
>>>>> y ~ a+b*min(c,x)
>>>>>
>>>>> where a, b, and c are the three parameters. how do I
define a min
>>>>> function in the formula language of R? advice appreciated.
>>>>
>>>> ?pmin
>>>>
>>>>>
>>>>> sincerely,
>>>>>
>>>>> /iaw
>>
>>
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 92
Date: Tue, 4 May 2010 23:51:45 +0200
From: Joris Meys <jorismeys at gmail.com>
To: Thorn <thorn.thaler at rdls.nestle.com>
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] Lazy evaluation in function call
Message-ID:
<j2xb5e1ab9a1005041451ief20310cwa25d4f9b4bfd236d at mail.gmail.com>
Content-Type: text/plain
I think you'll have to code it a bit differently. I'd do:
f <- function(x, y) {
  if (missing(y)) y <- x
  x + y
}
> f(2)
[1] 4
> f(2, 3)
[1] 5
On Tue, May 4, 2010 at 4:26 PM, Thorn <thorn.thaler at rdls.nestle.com>
wrote:
> Hi everybody,
>
> how is it possible to refer to an argument passed to a function in the
> function call? What I like to do, is something like
>
> f <- function(x,y) x+y
> f(2, x) # should give 4
>
> The problem is of course that x is only known inside the function. Of
> course I
> could specify something like
>
> f(z<-2,z)
>
> but I'm just curious whether it is possible to use a fancy combination
of
> eval, substitute or quote ;)
>
> BR, thorn
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Joris Meys
Statistical Consultant
Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control
Coupure Links 653
B-9000 Gent
tel : +32 9 264 59 87
Joris.Meys at Ugent.be
-------------------------------
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
[[alternative HTML version deleted]]
------------------------------
Message: 93
Date: Tue, 04 May 2010 17:59:49 -0400
From: Carl Witthoft <carl at witthoft.com>
To: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] Show number at each bar in barchart?
Message-ID: <4BE098D5.1090800 at witthoft.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
But before you go any further, please read some of Edward Tufte's
material on clarity and simplicity in graphs.
> On Tue, May 4, 2010 at 8:41 AM, someone <> wrote:
>
>>
>> when i plot a barchart with 5 bars there is one bar pretty long and the
>> others get smaller, like (20, 80, 20, 5, 2)
>> is there a way of displaying the number according to each bar next to it?
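A quick base-graphics sketch of one way to do that (the bar heights are the
ones from the question):
h  <- c(20, 80, 20, 5, 2)
bp <- barplot(h, ylim = c(0, max(h) * 1.1))
text(bp, h, labels = h, pos = 3)   # value printed just above each bar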
------------------------------
Message: 94
Date: Tue, 4 May 2010 18:08:38 -0400
From: Steve Lianoglou <mailinglist.honeypot at gmail.com>
To: Ruihong Huang <ruihong.huang at wiwi.hu-berlin.de>
Cc: r-help at r-project.org
Subject: Re: [R] Two Questions on R (call by reference and
pre-compilation)
Message-ID:
<y2ybbdc7ed01005041508i5eb44ae8h175c384cdc3737ac at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi,
On Tue, May 4, 2010 at 5:05 PM, Ruihong Huang
<ruihong.huang at wiwi.hu-berlin.de> wrote:
> Hi All,
>
> I have two questions on R. Could you please explain them to me? Thank you!
>
> 1) When call a function, R typically copys the values to formal arguments
> (call by value).
This is technically incorrect.
As far as I know, R has "copy-on-write" semantics. It will only make a
copy of the passed in object if you modify it within your function.
> This is very cost, if I would like to pass a huge data set
> to a function. Is there any situations that R doesn't copy the data,
besides
> pass data in an environment object.
This question comes up quite often, you could try searching the
archives to get more info about that (using gmane might be helpful).
Check out this SO thread as well:
http://stackoverflow.com/questions/2603184/r-pass-by-reference
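One way to see this directly, assuming your R was built with memory profiling
enabled (the CRAN binary builds are):
x <- rnorm(1e6)
tracemem(x)
f <- function(v) sum(v)             # read-only use: no copy is reported
f(x)
g <- function(v) { v[1] <- 0; v }   # the first modification triggers the copy
invisible(g(x))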
> 2) Does R pre-compile the object function to binary when running
"optim"? I
> experienced the R "optim" is much slower than the MATLAB
"fmincon" function.
> I don't know MATLAB has done any pre-compilation on the script for
object
> function or not. But perhaps, we can increase R performance by some sort of
> pre-compilation during running time.
If I had to guess, I'd guess that it doesn't, but let's see what the
gurus say ...
--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact
------------------------------
Message: 95
Date: Tue, 4 May 2010 15:35:33 -0700
From: Bert Gunter <gunter.berton at gene.com>
To: "'Joris Meys'" <jorismeys at gmail.com>,
"'Thorn'"
<thorn.thaler at rdls.nestle.com>
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] Lazy evaluation in function call
Message-ID: <000901caebda$1c070360$dfd81f0a at gne.windows.gene.com>
Content-Type: text/plain; charset="us-ascii"
Inline below.
-- Bert
Bert Gunter
Genentech Nonclinical Statistics
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
On
Behalf Of Joris Meys
Sent: Tuesday, May 04, 2010 2:52 PM
To: Thorn
Cc: r-help at stat.math.ethz.ch
Subject: Re: [R] Lazy evaluation in function call
I think you'll have to code it a bit different. I'd do :
f <- function(x,y){
if(missing(y)) y <-x
x+y
}
> f(2)
[1] 4
> f(2, 3)
[1] 5
On Tue, May 4, 2010 at 4:26 PM, Thorn <thorn.thaler at rdls.nestle.com>
wrote:
> Hi everybody,
>
> how is it possible to refer to an argument passed to a function in the
> function call? What I like to do, is something like
>
> f <- function(x,y) x+y
> f(2, x) # should give 4
-- No.
f <- function(x, y = x) x + y  ## lazy evaluation enables this
> f(2)
[1] 4
> f(2, 3)
[1] 5
-- Bert
------------------------------
Message: 96
Date: Tue, 04 May 2010 16:16:09 -0700
From: "sue at xlsolutions-corp.com" <sue at
xlsolutions-corp.com>
To: r-help at r-project.org
Subject: [R] Openings in the Consulting Department of XLSolutions Corp
Message-ID:
<20100504161609.aa8924c5d28ca71e2a043bb294e795eb.740ccf3d77.wbe at
mobilemail.secureserver.net>
Content-Type: text/plain; charset="us-ascii"
Dear useRs,
Please help me find the ideal candidates for 2 R programming consulting
positions at XLSolutions Corp.
The job entails building custom applications for our end users. R
programming is vital and skill in data analysis is important. If you are
interested in working with our consulting group, please contact me.
Please pass this along to your contacts, and if you have questions, please
email me. Telecommuting is possible for these openings.
Regards
Jennifer McDonald
Assistant to Mary RITZ
Research and Consulting
XLSolutions Corporation
North American Division
1700 7th Ave
Suite 2100
Seattle, WA 98101
Phone: 206-686-1578
Email: jen at xlsolutions-corp.com
web: www.xlsolutions-corp.com/rcourses
------------------------------
Message: 97
Date: Tue, 4 May 2010 19:39:41 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: ivo welch <ivo.welch at gmail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID:
<p2x971536df1005041639q43e69b46vebc36cc17466f6dd at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
For fixed c, y is linear in pmin(c, x), so the first two statements find c
(by minimizing the residual sum of squares as a function of c) and the last
solves the remaining linear problem:
f <- function(c) sum((y - fitted(lm(y ~ pmin(c, x))))^2)
fit.c <- optimize(f, c(0, max(x))); fit.c
lm(y ~ pmin(fit.c$minimum, x))
On Tue, May 4, 2010 at 5:25 PM, ivo welch <ivo.welch at gmail.com>
wrote:
> thank you, david and gabor.  very much appreciated.  I should have
> thought of setting the seed. ?this was only an example, of course.
> alas, such intermittent errors could still be of concern to me,
> because I need to simulate this nls() to find out its properties under
> the NULL, so I can't easily tolerate errors. ?fortunately, I had the
> window still open, so getting my y's out was easy, and the rounded
> figures produce the same nls error.
>
>> cbind(x,round(y,3))
> ? ? ? x ? ? ?y
> ?[1,] ?1 ?5.017
> ?[2,] ?2 ?7.993
> ?[3,] ?3 11.014
> ?[4,] ?4 13.998
> ?[5,] ?5 17.003
> ?[6,] ?6 19.977
> ?[7,] ?7 23.011
> ?[8,] ?8 25.991
> ?[9,] ?9 29.003
> [10,] 10 32.014
> [11,] 11 31.995
> [12,] 12 32.004
> [13,] 13 32.012
> [14,] 14 31.994
> [15,] 15 31.998
> [16,] 16 32.000
> [17,] 17 32.009
> [18,] 18 31.995
> [19,] 19 32.000
> [20,] 20 31.982
>
>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
> 0.002138 : ? 2 ?3 10
> 0.002117 : ? 2.004 ?3.000 ?9.999
> 0.002113 : ? 2.006 ?2.999 10.001
> 0.002082 : ? 2.005 ?2.999 10.000
> 0.002077 : ? 2.005 ?2.999 10.000
> 0.002077 : ? 2.005 ?2.999 10.000
> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c = 10), ?:
> ?step factor 0.000488281 reduced below 'minFactor' of 0.000976562
>
> I really don't care about this example, of course---only about
> learning how to avoid nls() from dying on me. ?so, any advice would be
> appreciated.
>
> regards,
>
> /iaw
>
>
>
> ----
> Ivo Welch (ivo.welch at brown.edu, ivo.welch at gmail.com)
>
>
>
> On Tue, May 4, 2010 at 3:59 PM, David Winsemius <dwinsemius at
comcast.net> wrote:
>>
>> On May 4, 2010, at 3:52 PM, ivo welch wrote:
>>
>>> thank you, david. ?indeed. ?works great (almost). ?an example for
>>> anyone else googling this in the future:
>>>
>>>> x=1:20
>>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10),
trace=TRUE )
>>>
>>> 0.002142 : ? 2 ?3 10
>>> 0.002115 : ? 2.004 ?3.000 10.000
>>> 0.002114 : ? 2.006 ?2.999 10.001
>>> 0.002084 : ? 2.005 ?2.999 10.000
>>> ...
>>> 0.002079 : ? 2.005 ?2.999 10.000
>>> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c =
10),
>>> ?:
>>> ?step factor 0.000488281 reduced below 'minFactor' of
0.000976562
>>>
>>> strange error, but unrelated to my question. ?will figure this one
out
>>> next.
>>
>> I get no error. May be difficult to sort out unless you can reproduce
after
>> setting a random seed.
>>
>>> x=1:20
>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
>> 0.001560045 : ? 2 ?3 10
>> 0.001161253 : ? 2.003824 ?2.998973 10.000388
>> 0.001161253 : ? 2.003824 ?2.998973 10.000388
>>
>> --
>> David.
>>
>>>
>>> regards,
>>>
>>> /iaw
>>>
>>>
>>> On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius at
comcast.net>
>>> wrote:
>>>>
>>>> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>>>>
>>>>> Dear R experts---I would like to estimate a non-linear
least squares
>>>>> expression that looks something like
>>>>>
>>>>> ?y ~ a+b*min(c,x)
>>>>>
>>>>> where a, b, and c are the three parameters. ?how do I
define a min
>>>>> function in the formula language of R? ?advice appreciated.
>>>>
>>>> ?pmin
>>>>
>>>>>
>>>>> sincerely,
>>>>>
>>>>> /iaw
>>>>>
>>>>> ______________________________________________
>>>>> R-help at r-project.org mailing list
>>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>>> PLEASE do read the posting guide
>>>>> http://www.R-project.org/posting-guide.html
>>>>> and provide commented, minimal, self-contained,
reproducible code.
>>>>
>>>> David Winsemius, MD
>>>> West Hartford, CT
>>>>
>>>>
>>
>> David Winsemius, MD
>> West Hartford, CT
>>
>>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 98
Date: Tue, 4 May 2010 19:53:20 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: ivo welch <ivo.welch at gmail.com>
Cc: r-help <r-help at stat.math.ethz.ch>
Subject: Re: [R] R formula language---a min and max function?
Message-ID:
<m2s971536df1005041653sdfbc3252oad8bebdc56905ec7 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Tue, May 4, 2010 at 7:39 PM, Gabor Grothendieck
<ggrothendieck at gmail.com> wrote:
> For fixed c, y is linear in pmin(c, x) so the first two statements
> find c and the next solves the remaining linear problem
>
Here is a slightly shorter f:
f <- function(c) sum(resid(lm(y ~ pmin(c, x)))^2)
> f <- function(c) sum((y - fitted(lm(y ~ pmin(c, x))))^2)
> fit.c <- optimize(f, c(0, max(DF$x))); fit.c
>
> lm(y ~ pmin(fit.c$minimum, x))
>
>
> as a function of c
>
> On Tue, May 4, 2010 at 5:25 PM, ivo welch <ivo.welch at gmail.com>
wrote:
>> thank you, david and gabor. ?very much appreciated. ?I should have
>> thought of setting the seed. ?this was only an example, of course.
>> alas, such intermittent errors could still be of concern to me,
>> because I need to simulate this nls() to find out its properties under
>> the NULL, so I can't easily tolerate errors. ?fortunately, I had
the
>> window still open, so getting my y's out was easy, and the rounded
>> figures produce the same nls error.
>>
>>> cbind(x,round(y,3))
>> ? ? ? x ? ? ?y
>> ?[1,] ?1 ?5.017
>> ?[2,] ?2 ?7.993
>> ?[3,] ?3 11.014
>> ?[4,] ?4 13.998
>> ?[5,] ?5 17.003
>> ?[6,] ?6 19.977
>> ?[7,] ?7 23.011
>> ?[8,] ?8 25.991
>> ?[9,] ?9 29.003
>> [10,] 10 32.014
>> [11,] 11 31.995
>> [12,] 12 32.004
>> [13,] 13 32.012
>> [14,] 14 31.994
>> [15,] 15 31.998
>> [16,] 16 32.000
>> [17,] 17 32.009
>> [18,] 18 31.995
>> [19,] 19 32.000
>> [20,] 20 31.982
>>
>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10), trace=TRUE )
>> 0.002138 : ? 2 ?3 10
>> 0.002117 : ? 2.004 ?3.000 ?9.999
>> 0.002113 : ? 2.006 ?2.999 10.001
>> 0.002082 : ? 2.005 ?2.999 10.000
>> 0.002077 : ? 2.005 ?2.999 10.000
>> 0.002077 : ? 2.005 ?2.999 10.000
>> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3, c =
10), ?:
>> ?step factor 0.000488281 reduced below 'minFactor' of
0.000976562
>>
>> I really don't care about this example, of course---only about
>> learning how to avoid nls() from dying on me. ?so, any advice would be
>> appreciated.
>>
>> regards,
>>
>> /iaw
>>
>>
>>
>> ----
>> Ivo Welch (ivo.welch at brown.edu, ivo.welch at gmail.com)
>>
>>
>>
>> On Tue, May 4, 2010 at 3:59 PM, David Winsemius <dwinsemius at
comcast.net> wrote:
>>>
>>> On May 4, 2010, at 3:52 PM, ivo welch wrote:
>>>
>>>> thank you, david. ?indeed. ?works great (almost). ?an example
for
>>>> anyone else googling this in the future:
>>>>
>>>>> x=1:20
>>>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10),
trace=TRUE )
>>>>
>>>> 0.002142 : ? 2 ?3 10
>>>> 0.002115 : ? 2.004 ?3.000 10.000
>>>> 0.002114 : ? 2.006 ?2.999 10.001
>>>> 0.002084 : ? 2.005 ?2.999 10.000
>>>> ...
>>>> 0.002079 : ? 2.005 ?2.999 10.000
>>>> Error in nls(y ~ a + b * pmin(c, x), start = list(a = 2, b = 3,
c = 10),
>>>> ?:
>>>> ?step factor 0.000488281 reduced below 'minFactor' of
0.000976562
>>>>
>>>> strange error, but unrelated to my question. ?will figure this
one out
>>>> next.
>>>
>>> I get no error. May be difficult to sort out unless you can
reproduce after
>>> setting a random seed.
>>>
>>>> x=1:20
>>>> y= 2+3*ifelse(x>10, 10, x)+rnorm(20,0,0.01)
>>>> r1= nls( y~ a+b*pmin(c,x), start=list(a=2, b=3, c=10),
trace=TRUE )
>>> 0.001560045 : ? 2 ?3 10
>>> 0.001161253 : ? 2.003824 ?2.998973 10.000388
>>> 0.001161253 : ? 2.003824 ?2.998973 10.000388
>>>
>>> --
>>> David.
>>>
>>>>
>>>> regards,
>>>>
>>>> /iaw
>>>>
>>>>
>>>> On Tue, May 4, 2010 at 3:40 PM, David Winsemius <dwinsemius
at comcast.net>
>>>> wrote:
>>>>>
>>>>> On May 4, 2010, at 3:33 PM, ivo welch wrote:
>>>>>
>>>>>> Dear R experts---I would like to estimate a non-linear
least squares
>>>>>> expression that looks something like
>>>>>>
>>>>>> ?y ~ a+b*min(c,x)
>>>>>>
>>>>>> where a, b, and c are the three parameters. ?how do I
define a min
>>>>>> function in the formula language of R? ?advice
appreciated.
>>>>>
>>>>> ?pmin
>>>>>
>>>>>>
>>>>>> sincerely,
>>>>>>
>>>>>> /iaw
>>>>>>
>>>>>> ______________________________________________
>>>>>> R-help at r-project.org mailing list
>>>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>>>> PLEASE do read the posting guide
>>>>>> http://www.R-project.org/posting-guide.html
>>>>>> and provide commented, minimal, self-contained,
reproducible code.
>>>>>
>>>>> David Winsemius, MD
>>>>> West Hartford, CT
>>>>>
>>>>>
>>>
>>> David Winsemius, MD
>>> West Hartford, CT
>>>
>>>
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
------------------------------
Message: 99
Date: Tue, 4 May 2010 17:10:10 -0700 (PDT)
From: Seth <sjmyers at syr.edu>
To: r-help at r-project.org
Subject: [R] readLines with space-delimiter?
Message-ID: <1273018210713-2130255.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I am reading a large space-delimited text file into R (41 columns and many
rows) and need to run each row's values through another R object and then
write them to another text file. So far, using readLines and writeLines seems
to be the best bet. I've gotten the data exchange working, except that each
row is read in as one 'chunk', meaning the row holds all the values in a
single quoted string ("41 numbers"). I need to split these based upon the
spaces between them. What is the simplest means of doing this?
Code so far.
datin<-file("C:\\rforest\\data\\aoidry_predictors_85.txt", open="rt")
datout<-file("C:\\rforest\\prob85.txt",open="wt")
x<-readLines(datin,n=1)
writeLines(x,con=datout)
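One sketch of the missing splitting step, using strsplit on runs of
whitespace (this assumes no leading spaces on the line):
vals <- as.numeric(strsplit(x, "[[:space:]]+")[[1]])   # 41 numbers per row
# ... run vals through the other R object here, then write the result:
writeLines(paste(vals, collapse = " "), con = datout)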
Thanks,
Seth
--
View this message in context:
http://r.789695.n4.nabble.com/readLines-with-space-delimiter-tp2130255p2130255.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 100
Date: Tue, 04 May 2010 20:39:19 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: David Winsemius <dwinsemius at comcast.net>
Cc: Michael Friendly <friendly at yorku.ca>, R-Help
<r-help at stat.math.ethz.ch>
Subject: Re: [R] rgl: plane3d or abline() analog
Message-ID: <4BE0BE37.9050906 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 04/05/2010 4:19 PM, David Winsemius wrote:
> On May 4, 2010, at 4:09 PM, Michael Friendly wrote:
>
>
>> For use with rgl, I'm looking for a function to draw a plane in an
>> rgl scene that would function
>> sort of like abline(a, b) does in base graphics, where abline(0, 1)
>> draws a line of unit slope through
>> the origin. Analogously, I'd like to have a plane3d function, so
>> that plane3d(0, 1, 1) draws a
>> plane through the origin with unit slopes in x & y and plane3d(3,
0,
>> 0) draws a horizontal plane
>> at z=3.
>>
>> I see that scatterplot3d in the scatterplot3d package returns a
>> plane3d() *function* for a given
>> plot. I could probably try to adapt this, but before I do, I wonder
>> if something like this for
>> rgl exists that I haven't found.
>>
>
> ?quads3d
>
>
It's harder than that, because a plane intersecting with the bounding
box of the data doesn't necessarily produce a quadrilateral: some are
other polygons (e.g. an
intersection near a corner can be a triangle). Plus, you want to still
see a plane if you add new data and change the bounding box. So this is
something that really needs to be done at the C++ level, and though I've
wanted one every now and then, I've never got around to adding it.
Maybe soon.
Duncan Murdoch
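In the meantime, a rough user-level approximation is to evaluate the plane at
the corners of the current x-y bounding box and draw it with quads3d; this has
none of the clipping or resizing behaviour Duncan describes, and the function
name and argument order (intercept, x-slope, y-slope) are only illustrative:
library(rgl)
plane3dSketch <- function(a, bx, by, ...) {
  bb <- par3d("bbox")                 # xmin, xmax, ymin, ymax, zmin, zmax
  xx <- bb[c(1, 2, 2, 1)]
  yy <- bb[c(3, 3, 4, 4)]
  quads3d(xx, yy, a + bx * xx + by * yy, ...)
}
# e.g. plane3dSketch(3, 0, 0, col = "grey", alpha = 0.4) for a plane at z = 3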
------------------------------
Message: 101
Date: Tue, 04 May 2010 20:43:04 -0400
From: Duncan Murdoch <murdoch.duncan at gmail.com>
To: Ruihong Huang <ruihong.huang at wiwi.hu-berlin.de>
Cc: r-help at r-project.org
Subject: Re: [R] Two Questions on R (call by reference and
pre-compilation)
Message-ID: <4BE0BF18.8000406 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 04/05/2010 5:05 PM, Ruihong Huang wrote:
> Hi All,
>
> I have two questions on R. Could you please explain them to me? Thank you!
>
> 1) When call a function, R typically copys the values to formal
> arguments (call by value). This is very cost, if I would like to pass a
> huge data set to a function. Is there any situations that R doesn't
copy
> the data, besides pass data in an environment object.
>
R doesn't copy data unless it needs to, for example if your function
modifies its copy. So don't worry about the cost, there usually isn't
much of one.
> 2) Does R pre-compile the object function to binary when running
> "optim"? I experienced the R "optim" is much slower
than the MATLAB
> "fmincon" function. I don't know MATLAB has done any
pre-compilation on
> the script for object function or not. But perhaps, we can increase R
> performance by some sort of pre-compilation during running time.
>
There's an experimental compiler, but I don't know if there's a
predicted release date for it. R is not an easy language to compile.
Duncan Murdoch
>
> Thanks in advance.
>
>
> Best Regards,
> Ruihong
>
> ------------------------------------------------------------------------
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
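To see the copy-on-modify behaviour Duncan describes above, a small sketch (assuming an R build with memory profiling enabled, so that tracemem() reports duplications):

x <- rnorm(1e6)
f <- function(v) sum(v)                  # only reads its argument: no copy
g <- function(v) { v[1] <- 0; sum(v) }   # modifies it: a copy is triggered
tracemem(x)
f(x)   # no duplication message
g(x)   # prints a duplication message; the copy exists only inside g()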
------------------------------
Message: 102
Date: Tue, 4 May 2010 19:44:14 -0500
From: Tengfei Yin <yintengfei at gmail.com>
To: Fahim Md <fahim.md at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] installing a package in linux
Message-ID:
<r2h316751b31005041744m2e37e154kde5c9c05ec9ab892 at mail.gmail.com>
Content-Type: text/plain
Hi
Installing R packages always works fine on my laptop (also Ubuntu); you don't
need to reinstall anything once you have installed a package. Did you do it
in your terminal like this?
$ R                                # enter an R session
> install.packages('package name')
> q()
Then every time you enter an R session you just call library('package name'),
and that should work. I don't know if it is something about user privileges;
do you use R on your own computer or on other servers?
Regards
Tengfei
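If the packages are going into a library the user cannot write to (only a guess, not confirmed in the thread), installing into a personal library usually makes them persist across sessions; a minimal sketch (the path and package name are just examples):

.libPaths()                                    # library trees R currently knows about
dir.create("~/Rlibs", showWarnings = FALSE)    # a user-writable library
install.packages("ggplot2", lib = "~/Rlibs")   # install there explicitly
library(ggplot2, lib.loc = "~/Rlibs")          # and load from there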
On Tue, May 4, 2010 at 2:25 PM, Fahim Md <fahim.md at gmail.com> wrote:
> I recently started using Ubuntu 9.10, and I am using the gedit editor with
> its R plugin for writing R code. To install any package I need to do:
> $ install.packages()
> //window pop-up for mirror selection
> //then another window pop up for package selection.
> After this as long as I am not exiting, the function of the newly installed
> packages are available.
>
> After I exit from R (I choose 'no' at the 'save workspace' option), if I
> want to work in R again, I have to repeat the package installation.
> This reinstallation problem was not there on Windows (I was using Tinn-R as
> the editor; I just needed to call require('package-name') to use its
> functions).
>
> Is there any way to avoid reinstalling the package every time?
> thanks
> --Fahim
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Tengfei Yin
MCDB PhD student
1620 Howe Hall, 2274,
Iowa State University
Ames, IA,50011-2274
Homepage: www.tengfei.name
[[alternative HTML version deleted]]
------------------------------
Message: 103
Date: Tue, 4 May 2010 20:54:13 -0400
From: John Mesheimer <john.mesheimer at gmail.com>
To: r-help at r-project.org
Subject: [R] Symbolic eigenvalues and eigenvectors
Message-ID:
<i2pe76a0a371005041754vd6950ae0pf543a8467a82515f at mail.gmail.com>
Content-Type: text/plain
Let's say I had a matrix like this:
library(Ryacas)
x<-Sym("x")
m<-matrix(c(cos (x), sin(x), -sin(x), cos(x)), ncol=2)
How can I use R to obtain the eigenvalues and eigenvectors?
Thanks,
John
[[alternative HTML version deleted]]
------------------------------
Message: 104
Date: Tue, 4 May 2010 21:04:02 -0400
From: Kim Jung Hwa <kimhwamaillist at gmail.com>
To: r-help at r-project.org
Subject: [R] Visualizing binary response data?
Message-ID:
<t2i34af5fc91005041804icc2a0784scc511a2112fbe439 at mail.gmail.com>
Content-Type: text/plain
Hi All,
I'm dealing with binary response data for the first time, and I'm
confused
about what kind of graphics I could explore in order to pick relevant
predictors and their relation with response variable.
I have 8-10 continuous predictors and 4-5 categorical predictors. Can anyone
suggest what kind of graphics I can explore to see how predictors behave
w.r.t. response variable...
Any help would be greatly appreciated, thanks,
Kim
[[alternative HTML version deleted]]
------------------------------
Message: 105
Date: Tue, 4 May 2010 21:06:17 -0400
From: Steve Lianoglou <mailinglist.honeypot at gmail.com>
To: John Mesheimer <john.mesheimer at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Symbolic eigenvalues and eigenvectors
Message-ID:
<y2lbbdc7ed01005041806n561720et1506b135bf484ce9 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi,
On Tue, May 4, 2010 at 8:54 PM, John Mesheimer <john.mesheimer at
gmail.com> wrote:
> Let's say I had a matrix like this:
>
> library(Ryacas)
> x<-Sym("x")
> m<-matrix(c(cos (x), sin(x), -sin(x), cos(x)), ncol=2)
>
> How can I use R to obtain the eigenvalues and eigenvectors?
R> help.search('eigenvalues')
is a good start
--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact
------------------------------
Message: 106
Date: Tue, 4 May 2010 21:23:36 -0400
From: John Mesheimer <john.mesheimer at gmail.com>
To: Steve Lianoglou <mailinglist.honeypot at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Symbolic eigenvalues and eigenvectors
Message-ID:
<w2he76a0a371005041823pa365e116y7d49b88fc2937f24 at mail.gmail.com>
Content-Type: text/plain
Thanks, Steve. Can you please show me at least one hit that answers my
question?
On Tue, May 4, 2010 at 9:06 PM, Steve Lianoglou <
mailinglist.honeypot at gmail.com> wrote:
> Hi,
>
> On Tue, May 4, 2010 at 8:54 PM, John Mesheimer <john.mesheimer at
gmail.com>
> wrote:
> > Let's say I had a matrix like this:
> >
> > library(Ryacas)
> > x<-Sym("x")
> > m<-matrix(c(cos (x), sin(x), -sin(x), cos(x)), ncol=2)
> >
> > How can I use R to obtain the eigenvalues and eigenvectors?
>
> R> help.search('eigenvalues')
>
> is a good start
>
> --
> Steve Lianoglou
> Graduate Student: Computational Systems Biology
> | Memorial Sloan-Kettering Cancer Center
> | Weill Medical College of Cornell University
> Contact Info:
http://cbio.mskcc.org/~lianos/contact<http://cbio.mskcc.org/%7Elianos/contact>
>
[[alternative HTML version deleted]]
------------------------------
Message: 107
Date: Tue, 04 May 2010 21:27:46 -0400
From: Michael Friendly <friendly at yorku.ca>
To: Duncan Murdoch <murdoch.duncan at gmail.com>
Cc: R-Help <r-help at stat.math.ethz.ch>
Subject: Re: [R] rgl: plane3d or abline() analog
Message-ID: <4BE0C992.6070303 at yorku.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Duncan Murdoch wrote:
> On 04/05/2010 4:19 PM, David Winsemius wrote:
>> On May 4, 2010, at 4:09 PM, Michael Friendly wrote:
>>
>>
>>> For use with rgl, I'm looking for a function to draw a plane in
an
>>> rgl scene that would function
>>> sort of like abline(a, b) does in base graphics, where abline(0, 1)
>>> draws a line of unit slope through
>>> the origin. Analogously, I'd like to have a plane3d function,
so
>>> that plane3d(0, 1, 1) draws a
>>> plane through the origin with unit slopes in x & y and
plane3d(3,
>>> 0, 0) draws a horizontal plane
>>> at z=3.
>>>
>>> I see that scatterplot3d in the scatterplot3d package returns a
>>> plane3d() *function* for a given
>>> plot. I could probably try to adapt this, but before I do, I
>>> wonder if something like this for
>>> rgl exists that I haven't found.
>>>
>>
>> ?quads3d
>>
>>
> It's harder than that, because a plane intersecting with the bounding
> box of the data doesn't necessarily produce a quadrilateral: some are
> other polygons (e.g. an
> intersection near a corner can be a triangle). Plus, you want to
> still see a plane if you add new data and change the bounding box. So
> this is something that really needs to be done at the C++ level, and
> though I've wanted one every now and then, I've never got around to
> adding it. Maybe soon.
If this makes it easier, what I was thinking of was a grid of lines
parallel to the x & y axes which could be clipped
if necessary to the bounding box.
--
Michael Friendly Email: friendly at yorku.ca
Professor, Psychology Dept.
York University Voice: 416 736-5115 x66249 Fax: 416 736-5814
4700 Keele Street http://www.math.yorku.ca/SCS/friendly.html
Toronto, ONT M3J 1P3 CANADA
------------------------------
Message: 108
Date: Tue, 4 May 2010 21:46:48 -0400
From: Gabor Grothendieck <ggrothendieck at gmail.com>
To: John Mesheimer <john.mesheimer at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Symbolic eigenvalues and eigenvectors
Message-ID:
<l2u971536df1005041846pfb6a8d15m71cc50feb878ff35 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Eigenvalues are not directly supported by the Ryacas interface but you
can send this directly to yacas like this:
> yacas("EigenValues({{Cos(x), -Sin(x)}, {Sin(x), Cos(x)}})")
[1] "Starting Yacas!"
expression(Roots((cos(x) - xx)^2 + sin(x)^2))
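As a numeric cross-check of that symbolic result (a sketch; the angle is arbitrary), the rotation matrix's eigenvalues at a fixed x are the complex pair cos(x) +/- i*sin(x):

x0 <- pi/6
eigen(matrix(c(cos(x0), sin(x0), -sin(x0), cos(x0)), ncol = 2))$values
# a conjugate pair, roughly 0.866 +/- 0.5i, i.e. exp(+1i*x0) and exp(-1i*x0),
# consistent with Roots((cos(x) - lambda)^2 + sin(x)^2)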
On Tue, May 4, 2010 at 8:54 PM, John Mesheimer <john.mesheimer at
gmail.com> wrote:
> Let's say I had a matrix like this:
>
> library(Ryacas)
> x<-Sym("x")
> m<-matrix(c(cos (x), sin(x), -sin(x), cos(x)), ncol=2)
>
> How can I use R to obtain the eigenvalues and eigenvectors?
>
> Thanks,
> John
>
> ? ? ? ?[[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 109
Date: Tue, 4 May 2010 22:12:04 -0400
From: Thomas Stewart <tgstewart at gmail.com>
To: Kim Jung Hwa <kimhwamaillist at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Visualizing binary response data?
Message-ID:
<i2t298ff68e1005041912m94f1e5cl23e21859b6976d9f at mail.gmail.com>
Content-Type: text/plain
For binary w.r.t. continuous, how about a smoothing spline? As in,
x<-rnorm(100)
y<-rbinom(100,1,exp(.3*x-.07*x^2)/(1+exp(.3*x-.07*x^2)))
plot(x,y)
lines(smooth.spline(x,y))
OR how about a more parametric approach, logistic regression? As in,
glm1<-glm(y~x+I(x^2),family=binomial)
plot(x,y)
lines(sort(x),predict(glm1,newdata=data.frame(x=sort(x)),type="response"))
FOR binary w.r.t. categorical it depends. Are the categories ordinal (is
there a natural ordering?) or are the categories nominal (no ordering)? For
nominal categories, the data is essentially a contingency table, and
"strength of the predictor" is a test of independence. You can still
do a
graphical exploration: maybe plotting the proportion of Y=1 for each
category of X. As in,
z<-cut(x,breaks=-3:3)
plot(tapply(y,z,mean))
If your goal is to find strong predictors of Y, you may want to consider
graphical measures that look at the predictors jointly. Maybe with a
generalized additive model (gam)?
There is probably a lot more you can do. Be creative.
-tgs
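For the gam idea above, a minimal sketch (assuming the mgcv package and the x, y simulated earlier; with real data one would add a smooth term per continuous predictor):

library(mgcv)
fit <- gam(y ~ s(x), family = binomial)
plot(fit, shade = TRUE)   # estimated smooth effect of x on the logit scale
summary(fit)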
On Tue, May 4, 2010 at 9:04 PM, Kim Jung Hwa <kimhwamaillist at
gmail.com>wrote:
> Hi All,
>
> I'm dealing with binary response data for the first time, and I'm
confused
> about what kind of graphics I could explore in order to pick relevant
> predictors and their relation with response variable.
>
> I have 8-10 continuous predictors and 4-5 categorical predictors. Can
> anyone
> suggest what kind of graphics I can explore to see how predictors behave
> w.r.t. response variable...
>
> Any help would be greatly appreciated, thanks,
> Kim
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 110
Date: Tue, 4 May 2010 21:17:53 -0500
From: Frank E Harrell Jr <f.harrell at Vanderbilt.Edu>
To: <r-help at r-project.org>
Subject: Re: [R] Visualizing binary response data?
Message-ID: <4BE0D551.4030604 at vanderbilt.edu>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
On 05/04/2010 09:12 PM, Thomas Stewart wrote:
> For binary w.r.t. continuous, how about a smoothing spline? As in,
>
> x<-rnorm(100)
> y<-rbinom(100,1,exp(.3*x-.07*x^2)/(1+exp(.3*x-.07*x^2)))
> plot(x,y)
> lines(smooth.spline(x,y))
>
> OR how about a more parametric approach, logistic regression? As in,
>
> glm1<-glm(y~x+I(x^2),family=binomial)
> plot(x,y)
>
lines(sort(x),predict(glm1,newdata=data.frame(x=sort(x)),type="response"))
>
> FOR binary w.r.t. categorical it depends. Are the categories ordinal (is
> there a natural ordering?) or are the categories nominal (no ordering)?
For
> nominal categories, the data is essentially a contingency table, and
> "strength of the predictor" is a test of independence. You can
still do a
> graphical exploration: maybe plotting the proportion of Y=1 for each
> category of X. As in,
>
> z<-cut(x,breaks=-3:3)
> plot(tapply(y,z,mean))
>
> If your goal is to find strong predictors of Y, you may want to consider
> graphical measures that look at the predictors jointly. Maybe with a
> generalized additive model (gam)?
>
> There is probably a lot more you can do. Be creative.
>
> -tgs
And you have to decide why you would look to a graph to select
predictors. This can badly distort later inferences (confidence
intervals, P-values, biased regression coefficients, biased R^2, etc.).
Frank
>
>
>
> On Tue, May 4, 2010 at 9:04 PM, Kim Jung Hwa<kimhwamaillist at
gmail.com>wrote:
>
>> Hi All,
>>
>> I'm dealing with binary response data for the first time, and
I'm confused
>> about what kind of graphics I could explore in order to pick relevant
>> predictors and their relation with response variable.
>>
>> I have 8-10 continuous predictors and 4-5 categorical predictors. Can
>> anyone
>> suggest what kind of graphics I can explore to see how predictors
behave
>> w.r.t. response variable...
>>
>> Any help would be greatly appreciated, thanks,
>> Kim
>>
--
Frank E Harrell Jr Professor and Chairman School of Medicine
Department of Biostatistics Vanderbilt University
------------------------------
Message: 111
Date: Tue, 4 May 2010 22:27:33 -0400
From: jim holtman <jholtman at gmail.com>
To: Seth <sjmyers at syr.edu>
Cc: r-help at r-project.org
Subject: Re: [R] readLines with space-delimiter?
Message-ID:
<n2l644e1f321005041927gc713769bt8237d93c778dc524 at mail.gmail.com>
Content-Type: text/plain
Have you considered 'scan' or 'read.table'? This is what is
mostly used in
these situations. Read the chapter in the Intro to R on reading in data.
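To make that concrete, a small sketch using the connection objects from Seth's code below (the whitespace-splitting pattern is an assumption about the file's format):

# (1) split each line read with readLines() into its 41 fields
x <- readLines(datin, n = 1)
vals <- as.numeric(strsplit(x, "[[:space:]]+")[[1]])
# (2) or let scan() do the splitting, one row at a time, on the same connection
vals2 <- scan(datin, what = numeric(), nlines = 1, quiet = TRUE)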
On Tue, May 4, 2010 at 8:10 PM, Seth <sjmyers at syr.edu> wrote:
>
> Hi,
> I am reading a large space-delimited text file into R (41 columns and many
> rows) and need to run each row's values through another R object and then
> write to another text file. So far, using readLines and writeLines seems
> to be the best bet. I've gotten the data exchange working, except that each
> row is read in as one 'chunk', meaning the row has all values between two
> quotes ("41 numbers"). I need to split these based upon the spaces between
> them. What is the simplest means of doing this?
>
> Code so far.
>
> datin<-file("C:\\rforest\\data\\aoidry_predictors_85.txt",
open="rt")
> datout<-file("C:\\rforest\\prob85.txt",open="wt")
> x<-readLines(datin,n=1)
> writeLines(x,con=datout)
>
> Thanks,
> Seth
> --
> View this message in context:
>
http://r.789695.n4.nabble.com/readLines-with-space-delimiter-tp2130255p2130255.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html<http://www.r-project.org/posting-guide.html>
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
[[alternative HTML version deleted]]
------------------------------
Message: 112
Date: Wed, 5 May 2010 04:58:13 +0200
From: Nikos Alexandris <nikos.alexandris at felis.uni-freiburg.de>
To: r-help at r-project.org
Subject: Re: [R] Cross-checking a custom function for separability
indices
Message-ID:
<201005050458.13450.nikos.alexandris at felis.uni-freiburg.de>
Content-Type: text/plain; charset="iso-8859-1"
Nikos:
> Hi list!
>
> I have prepared a custom function (below) in order to calculate
> separability indices (Divergence, Bhattacharyya, Jeffries-Matusita,
> Transformed divergene) between two samples of (spectral land cover)
> classes.
I've found a mistake: initially I used "log10" instead of "log" in the
definition of the Bhattacharyya index.
Thanks to a friend we cross-compared results of the (corrected) custom
function(s) (posted here) and the results derived from ERDAS Imagine (a
proprietary remote sensing box) and they agree concerning the Bhattacharyya
index (and successively the Jeffries-Matusita index). Not sure about the
Divergence (and successively the Transformed Divergence) though.
There is a subtle difference worth mentioning:
ERDAS Imagine rejected an observation (when calculating the "mean" of the
sample(s)). One observation less due to... "pixel inclusion/exclusion" rules?
No idea. Needless to say, with R one has total control (given that one knows
what is going on).
Also, I've expanded this mini personal project by writing more functions in
order to get an output that fits my current needs. Functions are attached as
".R" files and explanations (along with some questions of course) are
given
below [*]. Some things are hardcoded since I have little experience in writing
generic functions.
It would be nice if someone is interested in this and could provide assistance
to make it better, more generic and more useful.
Thanks, Nikos
---
> # two samples
> sample.1 <- c (1362, 1411, 1457, 1735, 1621, 1621, 1791, 1863, 1863, 1838)
> sample.2 <- c (1354, 1458, 1458, 1458, 1550, 1145, 1428, 1573, 1573, 1657)
> # running the custom function (below)
> separability.measures ( sample.1 , sample.2 )
> Divergence: 1.5658
> Bhattacharryya: 0.1683
> Jeffries-Matusita: 0.3098
> Transformed divergence: 0.3555
After correction the results are:
Divergence: 1.5658
Transformed divergence: 0.3555
Bhattacharryya: 0.1805
Jeffries-Matusita: 0.3302
---
[*] Custom functions for separability measures. Note: the working directory,
location(s) for results export (as csv) as well as the location of each script
that is to be sourced are hard-coded.
1. separability.measures <- function ( Vector.1 , Vector.2 )
comments:
- converts input vectors to matrices
2. separability.measures.info <- function (
Data.Frame.1 ,
Data.Frame.2 ,
Data.Frame.1.label ,
Data.Frame.2.label ,
i ,
print = FALSE
)
comments:
- calls "separability.measures()" and prints out more info about
what is
compared
3. separability.measures.class <- function (
Data.Frame.1 ,
Data.Frame.2 ,
print = FALSE
)
comments:
- calls "separability.measures.info.R()"
- calculates separability measures between samples in "data.frames"
with
identical structure
4. separability.matrix <- function (Data.Frame.M1,Data.Frame.M2)
comments:
- calls "separability.measures.class()"
- constructs a matrix where
- rows are separability indices
- columns equal the "dim (data.frame) [2]" (any of the identical
input
data.frames) and hold... well, the separability measures!
5. separability.matrices <- function ( Reference.Data.Frame ,
Target.Data.Frame.A ,
Target.Data.Frame.B
)
comments:
- calls "separability.matrix()"
- print out something that is meant to ease off reading/comparing the
results _in case_ one wants to compare differences between one (call it)
reference sample and two (call them) target samples, e.g. calculate indices
between:
- sample.1 and sample.2
- sample.1 and sample.3
- compare the above results
(
The output of "separability.matrix()" displays the separability
measures on a
per row-basis ( each row holds results for one index) where:
- in the 1st column are the results (separabilities) for "sample.1 vs.
sample.2", for variable 1 of the (identically structured) input data.frames
- in the 2nd column are the ones for "sample.1 vs. sample.3" for
variable 1
of the (identically structured) input data.frames
- in the 3rd column are the ones for "sample.1 vs. sample.2" for
variable 2
of the (identically structured) input data.frames
- etc.
)
- what is missing is a mechanism that gives proper names for the columns. I
imagine(d) names like (for column 1) "sample.1 vs sample.2,
variable1",
"sample.1 vs sample.3, variable 1" , "sample.1 vs sample.2,
variable 2", etc.
But (a) I am not sure how to implement it and (b) they are _too_ long. So they
just are numbered as usual [1] [2] [3]...
6. separability.matrices_multiref <- function ( Reference.Data.Frame.A ,
Target.Data.Frame.A ,
Reference.Data.Frame.B ,
Target.Data.Frame.B
)
comments:
- same as function 5 (above) but accepts 4 different samples (as data.frames
of course).
Questions:
If you made it read this post till here, maybe you could give hints on how to
handle:
1. the definition of the directory path that leads to the custom functions
that are required to be source(d)? Currently hard-coded at a specific
directory where I store custom functions. Is it better to make 1 _big_ script
which includes all functions? What is good coding practice in such occasions ?
2. "column naming" after a column by column merging of three different
matrices (data.frames)? This concerns "separability.matrices()".
3. the long command built up using "assign()" and "else" ? For some reason
the function fails to run if the "else (...)" clause is moved to the next
line. For this reason the script "separability.matrix.R" does not respect
the "78 characters" width limit.
-------------- next part --------------
# Custom function for various Separability Measures
# by Nikos Alexandris, Freiburg, 8.04.2010
# ( based on Divergence and Jeffries-Matusita, requires input variables "as.matrices" )
separability.measures <- function ( Vector.1 , Vector.2 ) {
# convert vectors to matrices in case they are not
Matrix.1 <- as.matrix (Vector.1)
Matrix.2 <- as.matrix (Vector.2)
# define means
mean.Matrix.1 <- mean ( Matrix.1 )
mean.Matrix.2 <- mean ( Matrix.2 )
# define difference of means
mean.difference <- mean.Matrix.1 - mean.Matrix.2
# define covariances for supplied matrices
cv.Matrix.1 <- cov ( Matrix.1 )
cv.Matrix.2 <- cov ( Matrix.2 )
# define the halfsum of cv's as "p"
p <- ( cv.Matrix.1 + cv.Matrix.2 ) / 2
# --%<------------------------------------------------------------------------
# calculate the Bhattacharryya index
bh.distance <- 0.125 *
t ( mean.difference ) *
p^ ( -1 ) *
mean.difference +
0.5 * log (
det ( p ) / sqrt (
det ( cv.Matrix.1 ) *
det ( cv.Matrix.2 )
)
)
# --%<------------------------------------------------------------------------
# calculate Jeffries-Matusita
# following formula is bound between 0 and 2.0
jm.distance <- 2 * ( 1 - exp ( -bh.distance ) )
# also found in the bibliography:
# jm.distance <- 1000 * sqrt ( 2 * ( 1 - exp ( -bh.distance ) ) )
# the latter formula is bound between 0 and 1414.0
# --%<------------------------------------------------------------------------
# calculate the divergence
# trace (is the sum of the diagonal elements) of a square matrix
trace.of.matrix <- function ( SquareMatrix ) {
sum ( diag ( SquareMatrix ) ) }
# term 1
divergence.term.1 <- 1/2 *
trace.of.matrix (
( cv.Matrix.1 - cv.Matrix.2 ) *
( cv.Matrix.2^ (-1) - cv.Matrix.1^ (-1) )
)
# term 2
divergence.term.2 <- 1/2 *
trace.of.matrix (
( cv.Matrix.1^ (-1) + cv.Matrix.2^ (-1) ) *
( mean.Matrix.1 - mean.Matrix.2 ) *
t ( mean.Matrix.1 - mean.Matrix.2 )
)
# divergence
divergence <- divergence.term.1 + divergence.term.2
# --%<------------------------------------------------------------------------
# and the transformed divergence
transformed.divergence <- 2 * ( 1 - exp ( - ( divergence / 8 ) ) )
# --%<------------------------------------------------------------------------
# print results --- look at "separability.measures.pp"
# get "column" names (hopefully they exist) --- to use for/with...
??? ---
name.of.Matrix.1 <- colnames ( Matrix.1 )
name.of.Matrix.2 <- colnames ( Matrix.2 )
# create new objects to be used with... ?
assign ( "divergence.vector" , divergence
,
envir = .GlobalEnv)
assign ( "jm.distance.vector" , jm.distance
,
envir = .GlobalEnv)
assign ( "bh.distance.vector" , bh.distance
,
envir = .GlobalEnv )
assign ( "transformed.divergence.vector" , transformed.divergence
,
envir = .GlobalEnv )
# print some message --- ?
# a "just do it" solution using cat()
cat ( paste ( "Divergence: " ,
round ( divergence , digits = 4 ) , sep = "" ) , "\n")
cat ( paste ( "Transformed divergence: " ,
round ( transformed.divergence , digits = 4 ) , sep = "" ) ,
"\n" )
cat ( paste ( "Bhattacharryya: " ,
round ( bh.distance , digits = 4 ) , sep = "" ) , "\n")
cat ( paste ( "Jeffries-Matusita: " ,
round ( jm.distance , digits = 4 ) , sep = "" ) , "\n")
cat ("\n")
}
# --%<------------------------------------------------------------------------
# Sources
# Richards, John A., Jia, X., "Remote Sensing Digital Image Analysis: ...", Springer
# Wikipedia
# --%<------------------------------------------------------------------------
# Details
# Bhattacharyya distance measures the similarity of two discrete or
# continuous probability distributions.
# (from: http://en.wikipedia.org/wiki/Bhattacharyya_distance)
# Divergence [in vector calculus] is an operator that measures the magnitude
# of a vector field's source or sink at a given point...
# (from: http://en.wikipedia.org/wiki/Divergence)
# Jeffries-Matusita ...
# add info...
# Transformed divergence ...
# add info...
# --%<------------------------------------------------------------------------
# Acknowledgements
# Augustin Lobo for initial help with the Bhattacharryya Index
# Nikos Koutsias
# useRs
# many more people...
-------------- next part --------------
# Custom function for various Separability Measures
# separability measures between data.frames with identical structure
# by Nikos Alexandris, Freiburg, April 2010
# Code ---------------------------------------------------------------------->
separability.measures.class <- function (
Data.Frame.1 ,
Data.Frame.2 ,
print = FALSE
) {
# where to look for custom (sub-)functions?
setwd ("~/R/code/functions/")
# DEPENDS on:
# separability.measures.info()
source ("separability.measures.info.R")
# create labels from names of input(ted) data.frames
Data.Frame.1.label <- deparse ( substitute ( Data.Frame.1 ) )
Data.Frame.2.label <- deparse ( substitute ( Data.Frame.2 ) )
# create new vectors to carry results that will be fed in the final matrix
assign ( "divergence.vectorS" ,
numeric(0) , envir = .GlobalEnv )
assign ( "transformed.divergence.vectorS" ,
numeric(0) , envir = .GlobalEnv )
assign ( "bh.distance.vectorS" ,
numeric(0) , envir = .GlobalEnv )
assign ( "jm.distance.vectorS" ,
numeric(0) , envir = .GlobalEnv )
# loop over number of columns of the data.frame(s)
for ( i in 1 : dim ( Data.Frame.1 ) [2] )
# it does not matter which data.frame ( 1st or 2nd ) is used
# they should be of the same structure otherwise its pointless
# call "separability.measures.info" which calls
"separability.measures"
separability.measures.info (
Data.Frame.1 ,
Data.Frame.2 ,
Data.Frame.1.label ,
Data.Frame.2.label ,
i
)
}
#< ----------------------------------------------------------------------- Code
-------------- next part --------------
# Custom function for various Separability Measures
# construct a bi-sample matrix comparing separabilities column-by-column... !@#?
# "rows = separability.indices" and "columns = ( dim (data.frame)
[2] ) * 2"
# by Nikos Alexandris, Freiburg, April 2010
# Code ----------------------------------------------------------------------
>
# requires 3 input data.frames
separability.matrices <- function ( Reference.Data.Frame ,
Target.Data.Frame.A ,
Target.Data.Frame.B
) {
# suppress output -----------------------------------------------------------/
sink ( "/dev/null" )
# where
# Data.Frame.1 -> reference
# Data.Frame.2 -> target A
# Data.Frame.2 -> target B
# UNCOMMENT below to be used for writing results to a csv file -- -- -- -- --
# function name --- to be used for the exported csv filename
separability.function.name <- sys.call (1)
#print ( separability.function.name )
separability.function.name <- gsub ( " " , "_" ,
separability.function.name )
#print ( separability.function.name )
separability.function.name <- gsub ( "," , "_" ,
separability.function.name )
#print ( separability.function.name )
# filename
separability.filename <- paste (
c (separability.function.name) ,
collapse = "_"
)
#print ( separability.filename )
separability.filename.csv <- paste ( separability.filename ,
".csv" ,
sep = ""
)
#print ( separability.filename.csv )
# -- -- -- -- UNCOMMENT above to be used for writing results to a csv file --
# names of rows --- pass as an argument to matrix() directly ---
separability.indices <- list ( c ( "Divergence" ,
"Transformed divergence" ,
"Bhattacharryya" ,
"Jeffries-Matusita"
)
)
# number of rows is "fixed" !
separability.Matrix.rows <- length ( unlist ( separability.indices ) )
# create empty matrix to carry
# assign (
# "separability.Matrix.X" ,
# matrix (
# numeric (0) ,
# nrow = separability.Matrix.rows ,
# ncol = dim ( Reference.Data.Frame) [2] * 2 ,
# ) ,
# envir = .GlobalEnv
# )
# is it there?
# print ( separability.Matrix.X )
# get separabilities between reference and each target
# source the "parent" function
source ( "~/R/code/functions/separability.matrix.R" )
# reference ~ target A
separability.matrix ( Reference.Data.Frame , Target.Data.Frame.A )
separability.Matrix.A <- separability.Matrix.X
# reference ~ target B
separability.matrix ( Reference.Data.Frame , Target.Data.Frame.B )
separability.Matrix.B <- separability.Matrix.X
# combine results in on matrix
assign ( "separability.Matrices",
matrix (
rbind (
separability.Matrix.A ,
separability.Matrix.B
) ,
ncol = dim (separability.Matrix.A) [2] * 2
)
)
# UNsuppress output ---------------------------------------------------------/
sink ()
# row names
rownames (separability.Matrices) <- unlist (separability.indices)
# write to csv
write.csv (
round (separability.Matrices , digits = 3) ,
file = separability.filename.csv ,
eol = "\n",
)
# print out
round ( separability.Matrices , digits = 3 )
}
#< ----------------------------------------------------------------------- Code
# Sources
# combine matrices column by column
# <http://promberger.info/linux/2007/07/09/r-combine-two-matrices-column-by-column/comment-page-1/#comment-37240>
-------------- next part --------------
# Custom function for various Separability Measures
# more info --- used within a for() loop
# by Nikos Alexandris, Freiburg, April 2010
# Code ---------------------------------------------------------------------->
separability.measures.info <- function (
Data.Frame.1 ,
Data.Frame.2 ,
Data.Frame.1.label ,
Data.Frame.2.label ,
i ,
print = FALSE
) {
# where to find custom (sub-)function(s)
setwd ( "~/R/code/functions/" )
# DEPENDS on:
# separability.measures()
source ("separability.measures.R")
# pretty print ( is it really pretty? )
cat (
"Separability measures between" ,
"\n" , "\n" , " - \"" , colnames (
Data.Frame.1 ) [i] ,
"\" of \"" , Data.Frame.1.label ,
"\"" ,
"\n" , " and" ,
"\n" , " - \"" , colnames ( Data.Frame.2 ) [i]
,
"\" of \"" , Data.Frame.2.label ,
"\"" ,
sep = ""
)
# add empty (new)line
cat (
"\n" , "\n"
)
# calculate separability measures
separability.measures ( Data.Frame.1 [i] , Data.Frame.2 [i] )
# fill the required vectors for the final matrix
assign ( "divergence.vectorS" ,
append ( divergence.vectorS , divergence.vector ) ,
envir = .GlobalEnv )
assign ( "transformed.divergence.vectorS" ,
append ( transformed.divergence.vectorS ,
transformed.divergence.vector) ,
envir = .GlobalEnv )
assign ( "bh.distance.vectorS" ,
append ( bh.distance.vectorS , bh.distance.vector ) ,
envir = .GlobalEnv )
assign ( "jm.distance.vectorS" ,
append ( jm.distance.vectorS , jm.distance.vector ) ,
envir = .GlobalEnv )
# print something to ease off visual separation of output
cat (
"--%<---" ,
"\n"
)
}
#< --------------------------------------------------------------------- Code
#
-------------- next part --------------
# Custom function for various Separability Measures
# construct a matrix with:
# "rows = separability.indices" and "columns = dim (data.frame)
[2]"
# by Nikos Alexandris, Freiburg, April 2010
# Code ----------------------------------------------------------------------
>
separability.matrix <- function (Data.Frame.M1,Data.Frame.M2) {
# where to look for custom (sub-)functions?
setwd ( "~/R/code/functions/" )
# DEPENDS on: separability.measures.class()
source ( "separability.measures.class.R" , local = FALSE )
# UNCOMMENT below to be used for writing results to a csv file ---------------
# function name --- to be used for the exported csv filename
separability.function.name <- sys.call (1)
#print ( separability.function.name )
separability.function.name <- gsub ( " " , "_" ,
separability.function.name )
#print ( separability.function.name )
separability.function.name <- gsub ( "," , "_" ,
separability.function.name )
#print ( separability.function.name )
# filename
separability.filename <- paste (
c (separability.function.name) ,
collapse = "_"
)
#print ( separability.filename )
separability.filename.csv <- paste ( separability.filename ,
".csv" ,
sep = ""
)
#print ( separability.filename.csv )
# ------------ UNCOMMENT above to be used for writing results to a csv file --
# names of rows --- pass as an argument to matrix() directly ---
separability.indices <- list ( c ( "Divergence" ,
"Transformed divergence" ,
"Bhattacharryya" ,
"Jeffries-Matusita"
)
)
# number of rows is "fixed" !
separability.Matrix.rows.number <- length ( unlist ( separability.indices )
)
# number of columns depends on the data frames
if ( dim (Data.Frame.M1) [2] == ( dim (Data.Frame.M2) [2] ) )
assign ( "separability.Matrix.columns.number" , dim
(Data.Frame.M1) [2] , envir = .GlobalEnv ) else cat ( "Warning: number of
columns differs between the 2 data frames (while it should be the same?)."
, "\n" )
# construct a matrix of [ Matrix.rows x Matrix.columns ]
assign (
"separability.Matrix" ,
matrix (
numeric (0) ,
nrow = separability.Matrix.rows.number ,
ncol = separability.Matrix.columns.number ,
dimnames = separability.indices
) ,
envir = .GlobalEnv
)
# remove unnecessary objects
rm ( separability.Matrix.rows.number )
#rm ( separability.Matrix.columns.number )
rm ( separability.indices )
# names of columns derived from the data frames
if ( any ( colnames (Data.Frame.M1) == colnames (Data.Frame.M2) ) )
colnames (separability.Matrix) <- colnames (Data.Frame.M1) else cat (
"Warning: names of columns differ between the 2 data frames (while they
should be the same?)." , "\n" , "\n" )
# suppress print-outs
sink ( "/dev/null" )
# get separability measures
separability.measures.class ( Data.Frame.M1 , Data.Frame.M2 )
# un-suppress print-outs
sink ()
# fill the Matrix
separability.Matrix [1, ] <- divergence.vectorS
separability.Matrix [2, ] <- transformed.divergence.vectorS
separability.Matrix [3, ] <- bh.distance.vectorS
separability.Matrix [4, ] <- jm.distance.vectorS
# ---------------------------------------------------------------------------/
# # make it available globally (for the next function): is this a bad idea?
assign (
"separability.Matrix.X" ,
separability.Matrix ,
envir = .GlobalEnv
)
# ---------------------------------------------------------------------------/
# write to csv file --- UNCOMMENT to write to file
# getwd()
# hardcoded: set _some_ working directory
setwd ( "~/R/data/separabilities/" )
# write to csv
write.csv (
round (separability.Matrix , digits = 3) ,
file = separability.filename.csv ,
eol = "\n",
)
# print the Matrix
round ( separability.Matrix , digits = 3 )
}
#< ----------------------------------------------------------------------- Code
# Comments
# The Jeffries-Matusita (and successively the Bhattacharryya) distance
# has been cross-checked with ERDAS Imagine's respective tool.
# The results are highly accurate. ERDAS "prefers", for some reason, _not_ to
# include all observations for the calculations of mean(s) and
# variance(s)-covariance(s)
-------------- next part --------------
# Custom function for various Separability Measures
# construct a bi-sample matrix comparing separabilities column-by-column... !@#?
# "rows = separability.indices" and "columns = ( dim (data.frame)
[2] ) * 2"
# for samples derived from different "sources" <<< explain
better please!!!
# by Nikos Alexandris, Freiburg, April 2010
# Code ----------------------------------------------------------------------
>
# requires 4 input data.frames
separability.matrices_multiref <- function ( Reference.Data.Frame.A ,
Target.Data.Frame.A ,
Reference.Data.Frame.B ,
Target.Data.Frame.B
) {
# suppress output -----------------------------------------------------------/
sink ( "/dev/null" )
# where
# Reference.Data.Frame.A -> reference.A
# Target.Data.Frame.A -> target A
# Reference.Data.Frame.B -> reference.B
# Target.Data.Frame.B -> target B
# names of rows --- pass as an argument to matrix() directly ---
separability.indices <- list ( c ( "Divergence" ,
"Transformed divergence" ,
"Bhattacharryya" ,
"Jeffries-Matusita"
)
)
# number of rows is "fixed" !
separability.Matrix.rows <- length ( unlist ( separability.indices ) )
# get separabilities between reference and each target
# source the "parent" function
source ( "~/R/code/functions/separability.matrix.R" )
# reference ~ target A
separability.matrix ( Reference.Data.Frame.A , Target.Data.Frame.A )
separability.Matrix.A <- separability.Matrix.X
# reference ~ target B
separability.matrix ( Reference.Data.Frame.B , Target.Data.Frame.B )
separability.Matrix.B <- separability.Matrix.X
# sort and print out results
# combine results in on matrix
assign ( "separability.Matrices",
matrix (
rbind (
separability.Matrix.A ,
separability.Matrix.B
) ,
ncol = dim (separability.Matrix.A) [2] * 2
)
)
# UNsuppress output ---------------------------------------------------------/
sink ()
# row names
rownames (separability.Matrices) <- unlist (separability.indices)
# print out
round ( separability.Matrices , digits = 3 )
}
#< ----------------------------------------------------------------------- Code
------------------------------
Message: 113
Date: Tue, 4 May 2010 23:01:54 -0400
From: Steve Lianoglou <mailinglist.honeypot at gmail.com>
To: John Mesheimer <john.mesheimer at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Symbolic eigenvalues and eigenvectors
Message-ID:
<r2mbbdc7ed01005042001ic9eb675aj58b302eec9f3e377 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi John,
It's common courtesy to keep all discussions originating from R-help
back on list so it can act as a helpful resource for others to peruse
when they have similar questions.
With that in mind, I think you forgot to CC the list with your very
informative follow-up reply to me:
On Tue, May 4, 2010 at 9:22 PM, John Mesheimer <john.mesheimer at
gmail.com> wrote:
> Well, that reply was meant for the list.
>
> As for a personal reply.. well, you's pretty much a dumbass.
In this case, I thought it'd be helpful to let everyone here know how
nice you are when someone tries to help you out.
I was on my way out the door from school when I tried to shoot off a
quick email to help you find the answer to a problem that I
unfortunately misunderstood in your original email. I apologize for
not taking the time to appreciate the full details of your problem.
As for you, I guess you can apologize for, well, being you.
Good luck in your endeavors,
-steve
> On Tue, May 4, 2010 at 9:21 PM, John Mesheimer <john.mesheimer at
gmail.com>
> wrote:
>>
>> Really? Is it? Hadn't thought of that.
>>
>> Anybody else?
>>
>>
>> On Tue, May 4, 2010 at 9:06 PM, Steve Lianoglou
>> <mailinglist.honeypot at gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> On Tue, May 4, 2010 at 8:54 PM, John Mesheimer <john.mesheimer
at gmail.com>
>>> wrote:
>>> > Let's say I had a matrix like this:
>>> >
>>> > library(Ryacas)
>>> > x<-Sym("x")
>>> > m<-matrix(c(cos (x), sin(x), -sin(x), cos(x)), ncol=2)
>>> >
>>> > How can I use R to obtain the eigenvalues and eigenvectors?
>>>
>>> R> help.search('eigenvalues')
>>>
>>> is a good start
>>>
>>> --
>>> Steve Lianoglou
>>> Graduate Student: Computational Systems Biology
>>> ?| Memorial Sloan-Kettering Cancer Center
>>> ?| Weill Medical College of Cornell University
>>> Contact Info: http://cbio.mskcc.org/~lianos/contact
>>
>
>
--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact
------------------------------
Message: 114
Date: Tue, 4 May 2010 22:01:37 -0600
From: "kMan" <kchamberln at gmail.com>
To: "'someone'" <vonhoffen at t-online.de>
Cc: r-help at r-project.org
Subject: Re: [R] Delete rows with duplicate field...
Message-ID: <01d701caec07$aab81f40$00285dc0$@gmail.com>
Content-Type: text/plain; charset="us-ascii"
Dear someone,
Jorge's solution is excellent, assuming it is what you had in mind. Please
note that the help page for unique() has duplicated() listed in its "See
Also" section. Thus, when you studied ?unique(), it would have made sense
to
read about duplicated() as well. Or perhaps you did look into it, but have
not yet seen how to use logical indexes, and as a result, you did not think
duplicated() was relevant for what you wanted.
I gather that unique() does not work for you because you want the rest of
the information, not just the levels of a particular column to be listed?
Keep in mind that you got responses that did not make sense because it was
difficult to make sense out of what you were asking. This is a common theme
on the list, not just you (I get called on it from time to time, too), and
is a normal part of learning the art of asking for R-help. Part of the
solution for not only the "R noob" paradox, but also the
cross-cultural/language ones, is to include clear examples showing what is
needed, in addition to the narrative. For example:
-----------------------------------------
# Data to test
> (df <- data.frame(ID=c("userA", "userB", "userA", "userC"),
>                   OS=c("Win", "OSX", "Win", "Win64"),
>                   time=c("12:22", "23:22", "04:44", "12:28")))
# Output of df
ID OS time
1 userA Win 12:22
2 userB OSX 23:22
3 userA Win 04:44
4 userC Win64 12:28
# Desired outcome of manipulation (ID as unique; unique(df) and
unique(df$ID) do NOT work for this)
1 userA Win 12:22
2 userB OSX 23:22
4 userC Win64 12:28
--------------------------------------------
would have helped readers orient to your question better. If you tried other
things, showing them would also help readers provide more useful replies.
# Cannot use the results of unique() as an index either
> df[unique(df$ID),]
ID OS time
1 userA Win 12:22
2 userB OSX 23:22
3 userA Win 04:44 # <- not unique, expected "userC"
# duplicated() on the entire data.frame() shows no duplicates
[1] FALSE FALSE FALSE FALSE
> df[!duplicated(df),] # same as unique(df)
The above would have effectively oriented readers to the specific problem
far better than the approximate meaning conveyed in narrative. It also tends
to organize how readers respond, meaning your effort in showing clear
examples/details/working code can make a longer reply feel more equitable
for someone responding.
>duplicated(df$ID) # makes sense, so Jorge suggested using this as an index
[1] FALSE FALSE TRUE FALSE
# using the raw results, only the duplicated row is returned, hence why
# Jorge inverted the values
> df[duplicated(df$ID),]
ID OS time
3 userA Win 04:44
> df[!duplicated(df$ID),] # selects just the not-duplicated rows; this is what you needed, right?
ID OS time
1 userA Win 12:22
2 userB OSX 23:22
4 userC Win64 12:28 # note that the original row labels are intact
rownames(df[!duplicated(df$ID),]) # not sure why I'd do this, though it
occurs that I could, given the above
By the way, if you keep your indexes, then all you have to store in the R
environment is the original dataset, and your indexes. It helps me stay
organized.
Welcome, someone, aka "R noob" (as you put it). Keep after it, you
won't be
a "noob" for long.
Sincerely,
KeithC.
------------------------------
Message: 115
Date: Tue, 4 May 2010 16:45:30 -0600
From: Anthony Fristachi <afristachi at portageinc.com>
To: "r-help at r-project.org" <r-help at r-project.org>
Subject: [R] converting an objects list
Message-ID:
<0D0C286F67358D4B8BA789B9655A67180190A0740368 at
pe-exchange.portageinc.com>
Content-Type: text/plain
Hello,
I would like to convert an object listing such as that from objects() or ls(),
which outputs "a101" "a102" "a104" "a107" "a109",
to something usable within a list statement, as follows: list(a101,a102,a104,a107,a109)
Thanks
Tony
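One possible approach (a sketch, not from the thread; the name pattern is an assumption): mget() fetches the objects that ls() names and returns them in a named list:

objs <- mget(ls(pattern = "^a10"), envir = globalenv())
# objs is a list with components a101, a102, a104, a107, a109 --
# usable wherever list(a101, a102, a104, a107, a109) was wanted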
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Tony Fristachi
Risk Assessor
Portage, Inc.
335 Central Park Square
Los Alamos, NM 87544
office: 505-663-1526 / fax: 505-662-7340
cell: 513-375-3263
www.portageinc.com<http://www.portageinc.com>
[[alternative HTML version deleted]]
------------------------------
Message: 116
Date: Tue, 4 May 2010 12:00:44 -0700 (PDT)
From: zerdna <azege at yahoo.com>
To: r-help at r-project.org
Subject: [R] masking of objects between mtrace() and getYahooData()
Message-ID: <1272999644936-2126151.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I am using getYahooData from TTR to get daily data. When I run it standalone,
it is fine. It also works fine inside my code. However, when I run the code
under mtrace(), I always get the following error:
Error in xts(cbind(adj[[1]], adj[[2]]), index(obj)):
order.by requires an appropriate time-based object
After a lot of hand wringing, it looks to me that it happens because index
in xts masks index in package mvbutils. Is there any way to avoid this type
of masking of objects?
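A quick way to see which attached packages define the clashing name (a diagnostic sketch only, not a fix for what TTR does internally):

find("index")              # every position on the search path providing 'index'
conflicts(detail = TRUE)   # all masked names, grouped by package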
--
View this message in context:
http://r.789695.n4.nabble.com/masking-of-objects-between-mtrace-and-getYahooData-tp2126151p2126151.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 117
Date: Tue, 4 May 2010 12:50:14 -0700
From: Dennis Murphy <djmuser at gmail.com>
To: a a <aahmad31 at hotmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] How to make predictions with the predict() method on
an arimax object using arimax() from TSA library
Message-ID:
<q2h9a8a6c631005041250x6902f753j1a01de4dd3a05807 at mail.gmail.com>
Content-Type: text/plain
Hi:
If you look at the help page of the predict.Arima() function, you'll
discover the problem:
>forecast=predict(air.m1, n.ahead=15)
Error in predict.Arima(air.m1, n.ahead = 15) :
'xreg' and 'newxreg' have different numbers of columns
^^^^^^^^^^ ## what's this newxreg?
> class(air.m1)
[1] "Arimax" "Arima"
From the help page for predict.Arima:
Usage
## S3 method for class 'Arima':
predict(object, n.ahead = 1, newxreg = NULL,
se.fit = TRUE, ...)
Arguments:
  object    The result of an arima fit.
  n.ahead   The number of steps ahead for which prediction is required.
  newxreg   New values of xreg to be used for prediction.
            Must have at least n.ahead rows.
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Evidently, you need to supply as the newxreg argument a data frame with at
least 15 rows, containing the same covariates (with the same names) as in
your model, with the values you want at the forecast times.
Basically, you've asked to forecast the next 15 steps ahead without
providing the covariate values at those future times.
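A minimal sketch of what that means here (not from the thread; it assumes the intervention dummies in xreg are all zero over the 15 future months):

new.xreg <- data.frame(Dec96 = rep(0, 15),
                       Jan97 = rep(0, 15),
                       Dec02 = rep(0, 15))
forecast <- predict(air.m1, n.ahead = 15, newxreg = new.xreg)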
HTH,
Dennis
On Tue, May 4, 2010 at 9:18 AM, a a <aahmad31 at hotmail.com> wrote:
>
> Hi R Users,
>
>
>
> I'm fairly new to R (about 3 months of use thus far).
>
> I want to use the arimax function from the TSA library to incorporate
> some exogenous inputs into the basic underlying arima model. Then, with that
> newly fitted model of class arimax, I would like to make a prediction.
>
>
>
> To avoid being bogged down with issues specific to my own work, I would
> like to refer readers to the example given in the TSA documentation, which
> would also then clarify my own issues:
>
>
>
> >library(TSA)
>
> >
>
> >data(airmiles)
>
>
> air.ml = arimax(log(airmiles), order=c(0,1,1),
>                 seasonal=list(order=c(0,1,1), period=12),
>                 xtransf=data.frame(I911=1*(seq(airmiles)==69),
>                                    I911=1*(seq(airmiles)==69)),
>                 transfer=list(c(0,0), c(1,0)),
>                 xreg=data.frame(Dec96=1*(seq(airmiles)==12),
>                                 Jan97=1*(seq(airmiles)==13),
>                                 Dec02=1*(seq(airmiles)==84)),
>                 method='ML')
>
>
>
> Ok, so I've run the above code and an object called air.ml has now been
> created, of class arimax. According to the documentation this is the same
> type as arima. So now I make a prediction, say 15 time steps ahead:
>
>
>
> >forecast=predict(air.m1, n.ahead=15)
>
>
>
> The following error is produced:
>
>
>
> Error in predict.Arima(air.m1, n.ahead = 15) :
> 'xreg' and 'newxreg' have different numbers of columns
>
>
--------------------------------------------------------------------------------------------------------------------
>
> The question is how to get a prediction correctly using predict. (I've seen
> the DSE package but that seems overkill for making just a simple prediction.)
>
>
>
> Thank you in advance for any repsonses.
>
>
>
> A.A
>
> Part time student at BBK college,UofL.
>
>
>
>
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 118
Date: Tue, 4 May 2010 14:02:36 -0700 (PDT)
From: karena <dr.jzhou at gmail.com>
To: r-help at r-project.org
Subject: [R] question about 'write.table'
Message-ID: <1273006956689-2126309.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I have a question about "write.table".
I have 100 data.frames: loci1, loci2, loci3, ..., loci100.
Now I want to write these data.frames to 100 separate files, where the names
of the files are also loci1, loci2, loci3, ..., loci100.
How do I do this inside a "for" loop?
say,
for (i in 1:100) {
write.table(...., file='...', ........)
}
thank you,
karena
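One possible sketch (not from the thread; the file extension and write.table options are just examples): build each object name and file name with paste() and fetch the data frame by name with get():

for (i in 1:100) {
  obj.name <- paste("loci", i, sep = "")
  write.table(get(obj.name),
              file = paste(obj.name, ".txt", sep = ""),
              row.names = FALSE, quote = FALSE)
}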
--
View this message in context:
http://r.789695.n4.nabble.com/question-about-write-table-tp2126309p2126309.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 119
Date: Tue, 4 May 2010 19:52:24 +0000 (GMT)
From: Marc Carpentier <marc.carpentier at ymail.com>
To: r-help at r-project.org
Cc: ligges at statistik.tu-dortmund.de
Subject: [R] Re : Re : aregImpute (Hmisc package) : error in
matxv(X, xcof)...
Message-ID: <141089.20791.qm at web28214.mail.ukl.yahoo.com>
Content-Type: text/plain; charset="iso-8859-1"
With "txt" extensions for the server filter (hope it's not the
size "Message body is too big: 182595 bytes with a limit of 150 KB").
Sorry.
----- Original message -----
From: David Winsemius <dwinsemius at comcast.net>
To: Marc Carpentier <marc.carpentier at ymail.com>
Cc: Uwe Ligges <ligges at statistik.tu-dortmund.de>; r-help at r-project.org
Sent: Tue, 4 May 2010, 20:28:53
Subject: Re: [R] Re : aregImpute (Hmisc package) : error in matxv(X, xcof)...
On May 4, 2010, at 2:13 PM, Marc Carpentier wrote:
> Ok. I was afraid to refer to a known and obvious error.
> Here is a testing dataset (pb1.csv) and commented code (pb1.R) with the
problems.
> Thanks for any help.
Nothing attached. In all likelihood, had you given these files names with
extensions of .txt, they would have made it through the server filter.
>
> Marc
>
> ________________________________
> From: Uwe Ligges <ligges at statistik.tu-dortmund.de>
> To: Marc Carpentier <marc.carpentier at ymail.com>
> Cc: r-help at r-project.org
> Sent: Tue, 4 May 2010, 13:52:31
> Subject: Re: [R] aregImpute (Hmisc package) : error in matxv(X, xcof)...
>
> Having reproducible examples including data and the actual call that
> lead to the error would be really helpful to be able to help.
>
> Uwe Ligges
>
> On 04.05.2010 12:23, Marc Carpentier wrote:
>> Dear r-help list,
>> I'm trying to use multiple imputation for my MSc thesis.
>> Having good exemples using the Hmisc package, I tried the aregImpute
function. But with my own dataset, I have the following error :
>>
>> Error in matxv(X, xcof) : columns in a (51) must be <= length of b (50)
>> In addition: Warning message:
>> In f$xcoef[, 1] * f$xcenter :
>> longer object length is not a multiple of shorter object length
>>
>> I first tried to "I()" all the continuous variables but the
same error occurs with different numbers :
>> Error in matxv(X, xcof) : columns in a (37) must be <= length of b (36)...
>>
>> I'm a student and I'm not familiar with possible constraints on a dataset
>> for it to be effectively imputed. I just found this previous message, where
>> the author's auto-reply suggests that particular distributions might be an
>> explanation of the algorithm's failure:
>> http://www.mail-archive.com/r-help at r-project.org/msg53534.html
>>
>> Does anyone know if these messages reflect a specific problem in my
dataset ? And if the number mentioned might give me a hint on which column to
look at (and maybe transform or ignore for the imputation) ?
>> Thanks for any advice you might have.
>>
>> Marc
>>
>
David Winsemius, MD
West Hartford, CT
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pb1csv.txt
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20100504/3d8c2d29/attachment-0002.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pb1R.txt
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20100504/3d8c2d29/attachment-0003.txt>
------------------------------
Message: 120
Date: Tue, 4 May 2010 13:18:40 -0700 (PDT)
From: Tim Clark <mudiver1200 at yahoo.com>
To: Prof Brian Ripley <ripley at stats.ox.ac.uk>
Cc: r-help at r-project.org
Subject: Re: [R] Estimating theta for negative binomial model
Message-ID: <97940.15424.qm at web36107.mail.mud.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
Brian,
glm.convert() does exactly what I am wanting. Thanks for putting it in MASS.
Sorry about not giving credit. It is a great package and I keep finding
functions that make statistical programming easy!
Aloha,
Tim
Tim Clark
Department of Zoology
University of Hawaii
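For readers following the thread, a minimal sketch of the two MASS functions mentioned below (reusing the model from the original post; 'mydata' is the poster's data):

library(MASS)
mod.nb  <- glm.nb(mantas ~ site, data = mydata)
mod.glm <- glm.convert(mod.nb)                      # re-expresses the fit as a glm() object
theta   <- theta.ml(mydata$mantas, fitted(mod.nb))  # direct ML estimate of theta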
--- On Mon, 5/3/10, Prof Brian Ripley <ripley at stats.ox.ac.uk> wrote:
> From: Prof Brian Ripley <ripley at stats.ox.ac.uk>
> Subject: Re: [R] Estimating theta for negative binomial model
> To: "Tim Clark" <mudiver1200 at yahoo.com>
> Cc: r-help at r-project.org
> Date: Monday, May 3, 2010, 7:50 PM
> On Mon, 3 May 2010, Tim Clark wrote:
>
> > Dear List,
> >
> > I am trying to do model averaging for a negative
> > binomial model using the package AICcmodavg. I need to
> > use glm() since the package does not accept glm.nb()
> > models. I can get glm() to work if I first run glm.nb
> > and take theta from that model, but is there a simpler way
> > to estimate theta for the glm model? The two models
> > are:
> >
> > mod.nb<-glm.nb(mantas~site,data=mydata)
> > mod.glm<-glm(mantas~site,data=mydata,
> family=negative.binomial(mod.nb$theta))
> >
> > How else can I get theta for the
> family=negative.binomial(theta=???)
>
> library(MASS) is missing here -- do give credit where
> credit is due.
> It contains functions
>
> glm.convert
> theta.ml
>
> and the first does what you seem to want and the second
> answers the actual question you asked.
>
> > Thanks!
> >
> > Tim
> >
> > Tim Clark
> > Department of Zoology
> > University of Hawaii
>
> --
> Brian D. Ripley,                  ripley at stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,             Tel:  +44 1865 272861 (self)
> 1 South Parks Road,                     +44 1865 272866 (PA)
> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
>
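A minimal sketch of the two approaches mentioned above, assuming the mydata,
mantas and site objects from the original post (this is not from the thread
itself, and has not been tested against AICcmodavg):
library(MASS)
## the negative binomial fit from the original post
mod.nb <- glm.nb(mantas ~ site, data = mydata)
## Option 1: glm.convert() turns the glm.nb fit into an ordinary glm
## object (with a negative.binomial family and fixed theta), which
## packages that only accept glm() fits can then work with.
mod.glm <- glm.convert(mod.nb)
## Option 2: theta.ml() estimates theta from the response and fitted
## means, e.g. starting from a Poisson fit.
mod.pois  <- glm(mantas ~ site, family = poisson, data = mydata)
theta.hat <- theta.ml(y = mydata$mantas, mu = fitted(mod.pois))
mod.glm2  <- glm(mantas ~ site, data = mydata,
                 family = negative.binomial(theta = theta.hat))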
------------------------------
Message: 121
Date: Tue, 04 May 2010 22:12:53 +0100
From: Paul <paul at paulhurley.co.uk>
To: Max Kuhn <mxkuhn at gmail.com>
Cc: r-help at r-project.org, Dieter Menne <dieter.menne at
menne-biomed.de>
Subject: Re: [R] Errors when trying to open odfWeave documents
Message-ID: <4BE08DD5.4030305 at paulhurley.co.uk>
Content-Type: text/plain
Max Kuhn wrote:
> I tried with this:
>
> > sessionInfo()
> R version 2.10.1 (2009-12-14)
> i386-pc-mingw32
> <snip>
> I didn't have any issues with the testcases.odt and examples.odt files.
>
> :-/
>
>
> On Sun, May 2, 2010 at 11:39 AM, Dieter Menne
> <dieter.menne at menne-biomed.de> wrote:
>
>> Paul Hurley wrote:
>>
>>> Is there a known problem with odfWeave on Windows? I've seen some
>>> messages on R-help from a year or two back where people couldn't
>>> install the XML package, but XML installed fine on my Windows box
>>> (it was more tricky on Kubuntu, but worked in the end).
>>>
>>>
>> There was a problem with XML and odfWeave on Windows; probably you read
>> one of my posts. I normally use LaTeX, but today I tried odfWeave and
>> the examples that failed at that time with odfWeave_0.7.11 and XML_2.8-1,
>> and it worked.
>>
>> It would be helpful if you could prune your text to the smallest size
>> that shows the effect, and post it again; best put it on some web site
>> for download, since most attachments are not permitted on this list.
>>
I've tried again, and still get the error. I've posted the console
output and my input and output odt files on my website, along with where
OpenOffice reports an error.
Console Output:
www.paulhurley.co.uk/images/stories/R/simplerverbose_console_output.txt
formatting Example File Output (Error at 1293,124)
www.paulhurley.co.uk/images/stories/R/formattingOut.odt
Simpler Input (odt file with one sexpr only)
www.paulhurley.co.uk/images/stories/R/simpler.odt
Simpler Output (error at 68,124)
www.paulhurley.co.uk/images/stories/R/simplerOut.odt
Simplest Input (odt file with no R at all)
www.paulhurley.co.uk/images/stories/R/simplest.odt
Simplest Output (error at 68,124)
www.paulhurley.co.uk/images/stories/R/simplestOut.odt
In each file the line that seems to be a problem looks like this:
<text:list-level-style-bullet text:level="1"
text:style-name="Bullet_20_Symbols" style:num-suffix="."
text:bullet-char="?<U+009C><U+0094>" >
I assume the question mark is the problem, but I don't know what causes it.
I have also tried setting my R installation to a US locale instead of UK
(as Max reports his works, and the only difference is the locale) but
the US locale still produces files with this error.
I also checked that odfWeave seems to be using the zip.exe from the
RTools set. This is v2.32 by Info-Zip. I tried telling odfWeave to use
WinZip but I get the same issues. (and have no reason to blame zip).
Sorry about the rambling post, but I don't have a clue what to try next.
Regards
Paul.
[[alternative HTML version deleted]]
------------------------------
Message: 122
Date: Tue, 4 May 2010 16:57:32 -0700 (PDT)
From: merrittr <rob at merrittnet.org>
To: r-help at r-project.org
Subject: [R] Creating Crosstabs using a sparse table
Message-ID: <1273017452397-2130247.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi all
I am trying to read in a table from a survey
and get the error below
gr12 <- read.table("gr12.csv", header=TRUE)
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
  line 1 did not have 249 elements
How would I go about reading in a table of this data? I am trying to
generate a series of crosstabs after I get it in, e.g.
Gender vs q1 q2 q3 q4...
Religion vs q1 q2 q3...
Any help would be appreciated.
--
View this message in context:
http://r.789695.n4.nabble.com/Creating-Crosstabs-using-a-sparse-table-tp2130247p2130247.html
Sent from the R help mailing list archive at Nabble.com.
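A sketch of the usual fix, assuming gr12.csv is comma-separated (the .csv
extension suggests it, but the post does not confirm it; the column names
Gender and q1 are hypothetical). read.table() defaults to whitespace as the
separator, so a CSV file needs sep = "," or read.csv(), and fill = TRUE pads
rows that genuinely have fewer fields:
## read.csv() presets sep = "," and header = TRUE
gr12 <- read.csv("gr12.csv")
## equivalent read.table() call; fill = TRUE pads short rows with NA
gr12 <- read.table("gr12.csv", header = TRUE, sep = ",", fill = TRUE)
## one crosstab per question, e.g.
table(gr12$Gender, gr12$q1)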
------------------------------
Message: 123
Date: Wed, 05 May 2010 14:49:10 +1000
From: Nevil Amos <nevil.amos at gmail.com>
To: Muhammad Rahiz <muhammad.rahiz at ouce.ox.ac.uk>
Cc: "r-help at stat.math.ethz.ch" <r-help at stat.math.ethz.ch>
Subject: Re: [R] How to replace all <NA> values in a data.frame with
another ( not 0) value
Message-ID: <4BE0F8C6.302 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Thanks, this works, replacing the NA values with the "000/000" string.
On 4/05/2010 11:20 PM, Muhammad Rahiz wrote:
> Hi Nevil,
>
> You can try a method like this
>
> x <- c(rnorm(5),rep(NA,3),rnorm(5))   # sample data
> dat <- data.frame(x,x)                # make sample dataframe
> dat2 <- as.matrix(dat)                # convert to matrix
> y <- which(is.na(dat))                # get index of NA values
> dat2[y] <- "000/000"                  # replace all NA values with "000/000"
>
> Muhammad
>
>
>
>
>
>
> Nevil Amos wrote:
>> I need to replace <NA> occurrences in multiple columns in a
>> data.frame with "000/000"
>>
>> how do I achieve this?
>>
>> Thanks
>>
>> Nevil Amos
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
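A shorter variant of the same idea (a sketch, not from the thread): is.na()
on a data frame returns a logical matrix, so all NA cells can be overwritten
in one step, provided the affected columns are character rather than factor:
dat <- data.frame(a = c("x", NA, "z"), b = c(NA, "p", "q"),
                  stringsAsFactors = FALSE)
dat[is.na(dat)] <- "000/000"   # overwrite every NA cell at once
dat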
------------------------------
Message: 124
Date: Wed, 05 May 2010 16:56:45 +1200
From: "James M. Curran" <j.curran at auckland.ac.nz>
To: r-help at r-project.org
Subject: [R] Help with dummy.coef
Message-ID: <4BE0FA8D.9030803 at auckland.ac.nz>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi all,
I've been revising some lecture notes and I have something that used to
work with dummy.coef and now doesn't. A search of R help doesn't reveal
any recent discussion on this function, so I'd appreciate your thoughts.
The (compact/relevant) code is below, and complains at the line
rep.int(xl[[i]][1L], nl), levels = xl[[i]]) that the 'times' value is
invalid. This seems to relate to the polynomial terms, but I'm not
sure how to take it further than that.
battery.df = data.frame(
voltage = c(130,155,74,180,34,40,80,75,20,70,82,
58,150,188,159,126,136,122,106,115,25,70,58,45,
138,110,168,160,174,130,150,139,96,104,82,60),
material = factor(rep(1:3,rep(12,3))),
temp = rep(rep(c(50,65,80),rep(4,3)),3))
battery.fit=lm(voltage~material*(temp+I(temp^2)),data = battery.df)
dummy.coef(battery.fit)
Error in rep.int(xl[[i]][1L], nl) : invalid 'times' value
Thanks,
James
--
James M. Curran, Ph.D.
Associate Professor
Director Bioinformatics Institute
Department of Statistics
Faculty of Science
Private Bag 92019
Auckland
New Zealand
------------------------------
Message: 125
Date: Tue, 4 May 2010 22:04:32 -0700 (PDT)
From: Seth <sjmyers at syr.edu>
To: r-help at r-project.org
Subject: Re: [R] readLines with space-delimiter?
Message-ID: <1273035872030-2130434.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks. I wasn't aware that scan or read.table allowed you to read in a
single line, process it, output results, and then read in the next line.
This is what I need to do because the data set is too large to hold in RAM.
I did manage to do this with readLines and overcome the space-delimiter
issue.
--
View this message in context:
http://r.789695.n4.nabble.com/readLines-with-space-delimiter-tp2130255p2130434.html
Sent from the R help mailing list archive at Nabble.com.
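A sketch of that line-at-a-time pattern using a file connection; the file
names and the per-row calculation are placeholders:
infile  <- file("bigdata.txt", open = "r")    # hypothetical input file
outfile <- file("results.txt", open = "w")    # hypothetical output file
repeat {
  line <- readLines(infile, n = 1)
  if (length(line) == 0) break                # end of file
  fields <- as.numeric(strsplit(line, " +")[[1]])   # split on runs of spaces
  writeLines(format(mean(fields, na.rm = TRUE)), outfile)  # per-row result
}
close(infile); close(outfile)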
------------------------------
Message: 126
Date: Tue, 4 May 2010 22:49:25 -0700 (PDT)
From: Seth <sjmyers at syr.edu>
To: r-help at r-project.org
Subject: [R] better way to trick data frame structure?
Message-ID: <1273038565570-2130470.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I have a data frame where 1 variable is a factor with only 1 level. I need
the data frame structure to reflect that there are 2 levels for this factor,
even though this is not the case. I am currently adding extra 'fake'
rows
to the data frame to ensure that 2 levels are present, but this is slowing
processing time in a loop quite a bit. Can I manually specify that this
factor variable has two levels (even though this is lying to R)?
Thanks,Seth
--
View this message in context:
http://r.789695.n4.nabble.com/better-way-to-trick-data-frame-structure-tp2130470p2130470.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 127
Date: Wed, 5 May 2010 08:02:27 +0200
From: Petr PIKAL <petr.pikal at precheza.cz>
To: Seth <sjmyers at syr.edu>
Cc: r-help at r-project.org
Subject: [R] Odp: better way to trick data frame structure?
Message-ID:
<OFDADEF153.7E1E7B87-ONC125771A.002110AE-C125771A.00213A28 at
precheza.cz>
Content-Type: text/plain; charset="US-ASCII"
Hi
r-help-bounces at r-project.org wrote on 05.05.2010 07:49:25:
>
> Hi,
>
> I have a data frame where 1 variable is a factor with only 1 level. I
> need the data frame structure to reflect that there are 2 levels for this
> factor, even though this is not the case. I am currently adding extra
> 'fake' rows to the data frame to ensure that 2 levels are present, but
> this is slowing processing time in a loop quite a bit. Can I manually
> specify that this factor variable has two levels (even though this is
> lying to R)?
Not lying at all. That is one reason why the (often hated) factors are
useful. Just add all the desired levels.
> ff<-factor("M")
> ff
[1] M
Levels: M
> levels(ff)<-c("M","F")
> ff
[1] M
Levels: M F
Regards
Petr
> Thanks,Seth
> --
> View this message in context:
> http://r.789695.n4.nabble.com/better-way-to-trick-data-frame-structure-tp2130470p2130470.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
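A variant of the same idea (a sketch): declare both levels when the factor,
or the data frame column, is created, so nothing needs relevelling later.
The column name sex is hypothetical.
ff <- factor("M", levels = c("M", "F"))
ff
## [1] M
## Levels: M F
## for an existing data frame column:
## dat$sex <- factor(dat$sex, levels = c("M", "F"))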
------------------------------
Message: 128
Date: Tue, 4 May 2010 23:12:28 -0700 (PDT)
From: Seth <sjmyers at syr.edu>
To: r-help at r-project.org
Subject: Re: [R] Odp: better way to trick data frame structure?
Message-ID: <1273039948932-2130486.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks, works beautifully and saved hours of run time. -seth
--
View this message in context:
http://r.789695.n4.nabble.com/better-way-to-trick-data-frame-structure-tp2130470p2130486.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 129
Date: Wed, 5 May 2010 17:16:37 +1000
From: "Wang, Kevin (SYD)" <kevinwang at kpmg.com.au>
To: <r-help at r-project.org>
Subject: [R] Converting dollar value (factors) to numeric
Message-ID:
<378EB90FD48A9E47A49222E07DF06BAA95A324 at AUSYDEXC37.au.kworld.kpmg.com>
Content-Type: text/plain
Hi,
I'm trying to read in a bunch of CSV files into R where many columns are
coded like $111.11. When reading them in they are treated as factors.
I'm wondering if there is an easy way to convert them into numeric in R
(as I don't want to modify the source data)? I've done some searches
and can't seem to find an easy way to do this.
I apologise if this is a trivial question, I haven't been using R for a
while.
Many thanks in advance!
Cheers
Kev
Kevin Wang
> Senior Advisor, Health and Human Services Practice
> Government Advisory Services
>
> KPMG
> 10 Shelley Street
> Sydney NSW 2000 Australia
>
> Tel +61 2 9335 8282
> Fax +61 2 9335 7001
>
kevinwang at kpmg.com.au
> Protect the environment: think before you print
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 130
Date: Wed, 5 May 2010 06:09:16 +0000 (UTC)
From: Thorn <thorn.thaler at rdls.nestle.com>
To: r-help at stat.math.ethz.ch
Subject: Re: [R] Lazy evaluation in function call
Message-ID: <loom.20100505T074734-183 at post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Bert Gunter <gunter.berton <at> gene.com> writes:
>
> Inline below.
>
> -- No.
> f <- function(x, y = x)x+y ## lazy evaluation enables this
> > f(2)
> [1] 4
> > f(2,3)
> [1] 5
Ok, obviously I was not very precise about what I'd like to do. Sorry. I'm
aware of this functionality. But I'd like to use the same idea for an
existing function. Take biplot as an example. There you have an argument
called 'choices' controlling which components you'd like to plot. The
standard axis labels are the names of the two chosen components, PC1 and
PC2 say. If one wants to add the explained variance to the axis
annotation, one could provide the xlab argument:
biplot(pca.object, xlab=paste("PC1", explained.variance.by.pc1))
This works out well and serves its purpose sufficiently. If it happens that
you'd like to choose different components, you have to provide the choices
argument. As a consequence you have to change your manually adjusted axis
annotation as well. Hence, I was wondering if one could do it in a somewhat
more automatic fashion:
biplot(pca.object, xlab=paste("PC", choices[1],
       explained.variance.by(choices[1])))
I have to admit that this is an academic question, because at the time you
call biplot, you know which components you want to plot. Thus, you could
use the right values for choices and the labels. But I'm interested in
whether it can be done by telling R that it should evaluate choices[1] not
at the prompt level, but at the moment it enters the function biplot, where
choices will be known in any case (since it has default values).
Just to make it clear once again, I'm aware of the possibilities to get
this specific task done. But I'm particularly interested in whether it
could be done using R's lazy evaluation mechanism.
BR, Thorn
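For what it is worth, a small sketch of the two sides of this: a default
argument is evaluated lazily inside the function's own frame, so it can
refer to other arguments, but an argument supplied by the caller is
evaluated in the caller's frame, so for an existing function like biplot()
this only works through a wrapper. The wrapper name mybiplot and the label
construction are made up.
f <- function(choices = 1:2,
              xlab = paste("PC", choices[1], sep = "")) {
  xlab                      # the default sees 'choices' inside f's frame
}
f()                         # "PC1"
f(choices = 3:4)            # "PC3"
## hypothetical wrapper around biplot(); supplied arguments are evaluated
## in the caller, so the default has to live in the wrapper itself
mybiplot <- function(x, choices = 1:2,
                     xlab = paste("PC", choices[1], sep = ""), ...) {
  biplot(x, choices = choices, xlab = xlab, ...)
}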
------------------------------
Message: 131
Date: Wed, 05 May 2010 08:32:55 +0200
From: Ruihong Huang <ruihong.huang at wiwi.hu-berlin.de>
To: Steve Lianoglou <mailinglist.honeypot at gmail.com>
Cc: r-help at r-project.org
Subject: Re: [R] Two Questions on R (call by reference and
pre-compilation)
Message-ID: <4BE11117.6030302 at wiwi.hu-berlin.de>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
Thank all of you!
On 05/05/2010 12:08 AM, Steve Lianoglou wrote:
> Hi,
>
> On Tue, May 4, 2010 at 5:05 PM, Ruihong Huang
> <ruihong.huang at wiwi.hu-berlin.de> wrote:
>
>> Hi All,
>>
>> I have two questions on R. Could you please explain them to me? Thank you!
>>
>> 1) When calling a function, R typically copies the values to the formal
>> arguments (call by value).
>>
> This is technically incorrect.
>
> As far as I know, R has "copy-on-write" semantics. It will only make a
> copy of the passed-in object if you modify it within your function.
>
>
>> This is very costly if I want to pass a huge data set to a function.
>> Are there any situations in which R doesn't copy the data, besides
>> passing the data in an environment object?
>>
> This question comes up quite often; you could try searching the
> archives to get more info about that (using gmane might be helpful).
>
> Check out this SO thread as well:
> http://stackoverflow.com/questions/2603184/r-pass-by-reference
>
>
>> 2) Does R pre-compile the objective function to binary when running
>> "optim"? I have found that R's "optim" is much slower than the MATLAB
>> "fmincon" function. I don't know whether MATLAB does any pre-compilation
>> of the script for the objective function or not. But perhaps we can
>> increase R's performance by some sort of pre-compilation at run time.
>>
> If I had to guess, I'd guess that it doesn't, but let's see what the
> gurus say ...
>
>
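A sketch that makes the copy-on-write behaviour visible with tracemem()
(available when R is compiled with memory profiling, as the CRAN binaries
are), plus the environment-based way to let a function modify data without
returning it:
x <- runif(1e6)
tracemem(x)                         # prints a message whenever x is copied
f <- function(v) sum(v)             # only reads its argument
f(x)                                # no copy reported
g <- function(v) { v[1] <- 0; v }   # modifies its argument
y <- g(x)                           # a copy is reported here
## passing an environment gives reference-like semantics: the caller
## sees the modification without any reassignment
e <- new.env()
e$data <- x
h <- function(env) { env$data[1] <- 0; invisible(NULL) }
h(e)
e$data[1]                           # 0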
------------------------------
Message: 132
Date: Wed, 05 May 2010 08:53:29 +0200
From: Ruihong Huang <ruihong.business at googlemail.com>
To: Tengfei Yin <yintengfei at gmail.com>
Cc: r-help at r-project.org, Fahim Md <fahim.md at gmail.com>
Subject: Re: [R] installing a package in linux
Message-ID: <4BE115E9.1030608 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 05/05/2010 02:44 AM, Tengfei Yin wrote:
> Hi
>
> R basic packages always work fine on my laptop (also ubuntu); you don't
> need to reinstall anything once you have installed the package. Did you
> do that in your terminal, like
> $R (enter R session)
>
>> install.packages('package name')
>> q()
>>
> then every time you enter the R session, you just library('package name');
> that should work... I don't know if it is something about user privileges.
> Do you use R on your own computer or on other servers?
> Regards
>
> Tengfei
>
>
> On Tue, May 4, 2010 at 2:25 PM, Fahim Md<fahim.md at gmail.com>
wrote:
>
>
>> I recently started using ubuntu 9.10 and I am using gedit editor and R
>> plugin for writing R code. To install any package I need to do:
>> $ install.packages()
>> //window pop-up for mirror selection
>> //then another window pop up for package selection.
>> After this as long as I am not exiting, the function of the newly
installed
>> packages are available.
>>
>> After I exit (i use to put 'no' in 'save workspace'
option) from R, if I
>> want to again work in R, I have to repeat the process of package
install.
>> This reintallation problem was not there in windows(I was using Tinn-R
as
>> editor, I just need to put require('package-name') to use its
function).
>>
>>
This has nothing to do with the editor. I guess you should run "sudo R"
in Ubuntu (don't use this for a normal R session), which gives you the
right to write into the R library directory, typically in /usr/lib.
Then run "install.packages('package.name')" inside that R session. To
load a library at the beginning of each R session, you might consider a
.First function like
.First <- function(){
    library('package.name')   # load the package automatically at startup
    invisible()
}
and then quit the R session with the workspace saved.
Best,
Ruihong
>> Is there anyway so that reinstallation of the package is avoided???
>> thanks
>> --Fahim
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>
>
>
------------------------------
Message: 133
Date: Wed, 05 May 2010 09:39:05 +0200
From: Ruihong Huang <ruihong.lang.r at googlemail.com>
To: "Wang, Kevin (SYD)" <kevinwang at kpmg.com.au>
Cc: r-help at r-project.org
Subject: Re: [R] Converting dollar value (factors) to numeric
Message-ID: <4BE12099.606 at googlemail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
If you use Linux, you can simply use "sed" (in the Linux terminal, NOT R)
to delete all '$' characters from the file "test.dat":
$ sed -e 's/\$//g' test.dat > newdata.dat
R will then read these dollar values from newdata.dat as numeric, leaving
the source file untouched.
Bests,
Ruihong
On 05/05/2010 09:16 AM, Wang, Kevin (SYD) wrote:
> Hi,
>
> I'm trying to read in a bunch of CSV files into R where many columns
are
> coded like $111.11. When reading them in they are treated as factors.
>
> I'm wondering if there is an easy way to convert them into numeric in R
> (as I don't want to modify the source data)? I've done some
searches
> and can't seem to find an easy way to do this.
>
> I apologise if this is a trivial question, I haven't been using R for a
> while.
>
> Many thanks in advance!
>
> Cheers
>
> Kev
>
> Kevin Wang
>
>> Senior Advisor, Health and Human Services Practice
>> Government Advisory Services
>>
>> KPMG
>> 10 Shelley Street
>> Sydney NSW 2000 Australia
>>
>> Tel +61 2 9335 8282
>> Fax +61 2 9335 7001
>>
>>
> kevinwang at kpmg.com.au
>
>
>> Protect the environment: think before you print
>>
>>
>>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
------------------------------
Message: 134
Date: Wed, 5 May 2010 09:54:55 +0200
From: Fredrik Karlsson <dargosch at gmail.com>
To: "Wang, Kevin (SYD)" <kevinwang at kpmg.com.au>
Cc: r-help at r-project.org
Subject: Re: [R] Converting dollar value (factors) to numeric
Message-ID:
<q2g376e97ec1005050054ja711d6f6nb9510c11b3775713 at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hi,
Something similar to this maybe?
> test <- as.factor("$111.11")
> test
[1] $111.11
Levels: $111.11
> as.numeric(substring(as.character(test),2))
[1] 111.11
To be applied to your data.frame columns.
/Fredrik
On Wed, May 5, 2010 at 9:16 AM, Wang, Kevin (SYD)
<kevinwang at kpmg.com.au> wrote:
> Hi,
>
> I'm trying to read in a bunch of CSV files into R where many columns
are
> coded like $111.11. When reading them in they are treated as factors.
>
> I'm wondering if there is an easy way to convert them into numeric in R
> (as I don't want to modify the source data)? I've done some searches
> and can't seem to find an easy way to do this.
>
> I apologise if this is a trivial question, I haven't been using R for a
> while.
>
> Many thanks in advance!
>
> Cheers
>
> Kev
>
> Kevin Wang
>> Senior Advisor, Health and Human Services Practice
>> Government Advisory Services
>>
>> KPMG
>> 10 Shelley Street
>> Sydney NSW 2000 Australia
>>
>> Tel +61 2 9335 8282
>> Fax +61 2 9335 7001
>>
> kevinwang at kpmg.com.au
>
>> Protect the environment: think before you print
>>
>>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
"Life is like a trumpet - if you don't put anything into it, you
don't
get anything out of it."
------------------------------
Message: 135
Date: Wed, 5 May 2010 18:07:08 +1000
From: Scott MacDonald <scott.p.macdonald at gmail.com>
To: r-help at r-project.org
Subject: [R] A question regarding the loess function
Message-ID:
<y2uc4adf6e91005050107qf79d0a0ek135f6b182c82cdaa at mail.gmail.com>
Content-Type: text/plain
Hello,
I was hoping that someone familiar with the implementation details of the
loess algorithm might be able to help me resolve some difficulties I am
having. I am attempting to reproduce some of the functionality of the
loess() function in C++. My primary motivation is that I would like to
understand the algorithm in detail.
So far I have managed to create a working port in C++ for the linear
(degree=1) case. I use the GNU Scientific Library (GSL) to perform the
least squares fitting. The program I wrote can be viewed here:
http://pastebin.com/0mhMiZee
The test data I am using comes from here:
http://www.itl.nist.gov/div898/handbook/pmd/section1/dep/dep144.htm
After reading Cleveland's 1992 paper, A Package of C and Fortran Routines
for Fitting Local Regression Models, I was under the impression that in
principle the quadratic (degree=2) case would be straightforward: simply
fit y ~ x + I(x^2) instead of y ~ x. I realize that Cleveland goes to some
effort to optimize the computation by using kd-trees and interpolation to
reduce the calculations, but I believe that the proper options to the R
loess.control() function can disable that.
When I attempt to fit the quadratic in the simple way I describe above, I
get results that are close to R's but differ by well more than floating
point rounding error, which signals something more serious.
The test program I have written in C++ for the quadratic case can be viewed
here: http://pastebin.com/8LZSFSyg
Here is some working R code that performs the quadratic loess fit:
x <- c(0.5578196, 2.0217271, 2.5773252, 3.4140288, 4.3014084, 4.7448394,
5.1073781, 6.5411662, 6.7216176, 7.2600583, 8.1335874, 9.1224379,
11.9296663, 12.3797674, 13.2728619, 14.2767453, 15.3731026, 15.6476637,
18.5605355, 18.5866354, 18.7572812)
y <- c(18.63654, 103.49646, 150.35391, 190.51031, 208.70115, 213.71135,
228.49353, 233.55387, 234.55054, 223.89225, 227.68339, 223.91982, 168.01999,
164.95750, 152.61107, 160.78742, 168.55567, 152.42658, 221.70702, 222.69040,
243.18828)
df <- data.frame(x=x, y=y)
y.loess <- loess(y ~ x, span=0.33, degree=2,
control=loess.control(surface="direct", statistics="exact",
trace.hat="exact"), data=df)
y.predict <- predict(y.loess)
And the output:
> y.predict
[1] 17.7138 111.3904 147.5650 190.6561 209.4832 217.5014 225.5488 233.8617
231.7305 226.7886 224.7809 224.1512
[13] 167.6779 162.0389 154.9118 159.9907 161.1376 157.1839 221.0499 223.8055
242.7856
My C++ program for the quadratic case mentioned above produces the following
output:
X Y SMOOTHED
0.557820 18.636540 17.128316
2.021727 103.496460 113.404389
2.577325 150.353910 145.509245
3.414029 190.510310 188.070299
4.301408 208.701150 209.587060
4.744839 213.711350 217.841596
5.107378 228.493530 223.961597
6.541166 233.553870 232.331387
6.721618 234.550540 231.725151
7.260058 223.892250 229.100556
8.133587 227.683390 225.721553
9.122438 223.919820 222.121049
11.929666 168.019990 167.448082
12.379767 164.957500 162.050603
13.272862 152.611070 158.478596
14.276745 160.787420 157.644046
15.373103 168.555670 161.076443
15.647664 152.426580 161.123333
18.560536 221.707020 225.721022
18.586635 222.690400 226.986782
18.757281 243.188280 235.647504
If you are still reading, thanks!
I have tried poking around in the R source code for the R_loess_raw
(loessc.c) method, but I find it quite dense and hard to follow... if anyone
has some debugging ideas, they would be greatly appreciated.
Thank you for your attention,
Scott
[[alternative HTML version deleted]]
------------------------------
Message: 136
Date: Wed, 5 May 2010 01:14:01 -0700 (PDT)
From: Phil Spector <spector at stat.berkeley.edu>
To: "Wang, Kevin (SYD)" <kevinwang at kpmg.com.au>
Cc: r-help at r-project.org
Subject: Re: [R] Converting dollar value (factors) to numeric
Message-ID:
<alpine.DEB.2.00.1005050109190.13514 at springer.Berkeley.EDU>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Kev-
The most reliable way to do the conversion is as follows:
> x = factor(c('$112.11','$119.15','$121.32'))
> as.numeric(sub('\\$','',as.character(x)))
[1] 112.11 119.15 121.32
This way negative quantities and numbers without
dollar signs are handled correctly. There's certainly
no need to create a new input file.
It may be easier to understand as
as.numeric(sub('$','',as.character(x),fixed=TRUE))
which gives the same result.
- Phil Spector
Statistical Computing Facility
Department of Statistics
UC Berkeley
spector at stat.berkeley.edu
On Wed, 5 May 2010, Wang, Kevin (SYD) wrote:
> Hi,
>
> I'm trying to read in a bunch of CSV files into R where many columns
are
> coded like $111.11. When reading them in they are treated as factors.
>
> I'm wondering if there is an easy way to convert them into numeric in R
> (as I don't want to modify the source data)? I've done some
searches
> and can't seem to find an easy way to do this.
>
> I apologise if this is a trivial question, I haven't been using R for a
> while.
>
> Many thanks in advance!
>
> Cheers
>
> Kev
>
> Kevin Wang
>> Senior Advisor, Health and Human Services Practice
>> Government Advisory Services
>>
>> KPMG
>> 10 Shelley Street
>> Sydney NSW 2000 Australia
>>
>> Tel +61 2 9335 8282
>> Fax +61 2 9335 7001
>>
> kevinwang at kpmg.com.au
>
>> Protect the environment: think before you print
>>
>>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
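Since the original question was about many such columns, the same
substitution can be applied column-wise; a sketch with made-up data and
column names, assuming you know which columns hold dollar amounts:
d <- data.frame(price = c('$112.11', '$119.15'),
                cost  = c('-$5.25', '$10.00'),
                site  = c('A', 'B'))
money.cols <- c("price", "cost")
d[money.cols] <- lapply(d[money.cols], function(x)
                        as.numeric(sub('\\$', '', as.character(x))))
str(d)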
------------------------------
Message: 137
Date: Wed, 5 May 2010 16:34:30 +0800
From: Wincent <ronggui.huang at gmail.com>
To: r help <r-help at stat.math.ethz.ch>
Subject: [R] help with restart
Message-ID:
<t2g38b9f0351005050134t459d2b89g5745029aabb9e64c at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear all, I want to download webpage from a large number of webpage.
For example,
########
link <- c("http://gzbbs.soufun.com/board/2811006802/",
"http://gzbbs.soufun.com/board/2811328226/",
"http://gzbbs.soufun.com/board/2811720258/",
"http://gzbbs.soufun.com/board/2811495702/",
"http://gzbbs.soufun.com/board/2811176022/",
"http://gzbbs.soufun.com/board/2811866676/"
)
# the actual vector will be much longer.
ans <- vector("list",length(link))
for (i in seq_along(link)){
ans[[i]] <- readLines(url(link[i]))
Sys.sleep(8)
}
#######
The problem is that the server will not respond if the retrievals happen
too often, and I don't know the optimal time span between two retrievals.
When the server does not respond to readLines, it returns an error
and stops. What I want to do is: when an error occurs, put R to sleep
for, say, 60 seconds, and then redo the readLines on the same link.
I did some searching and guess that withCallingHandlers and withRestarts
will do the trick. Yet I didn't find many examples of their usage.
Can you give me some suggestions? Thanks.
--
Wincent Rong-gui HUANG
Doctoral Candidate
Dept of Public and Social Administration
City University of Hong Kong
http://asrr.r-forge.r-project.org/rghuang.html
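A sketch of one way to do this with a plain tryCatch() retry loop rather
than the withCallingHandlers/withRestarts machinery asked about; the
waiting times and retry limit are arbitrary, and link is the vector from
the code above:
get.with.retry <- function(u, wait = 60, max.tries = 10) {
  for (i in seq_len(max.tries)) {
    res <- tryCatch(readLines(url(u)), error = function(e) NULL)
    if (!is.null(res)) return(res)
    Sys.sleep(wait)                 # server refused: back off and try again
  }
  stop("giving up on ", u)
}
ans <- vector("list", length(link))
for (i in seq_along(link)) {
  ans[[i]] <- get.with.retry(link[i])
  Sys.sleep(8)
}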
------------------------------
Message: 138
Date: Wed, 5 May 2010 01:47:13 -0700 (PDT)
From: Seth <sjmyers at syr.edu>
To: r-help at r-project.org
Subject: Re: [R] Two Questions on R (call by reference and
pre-compilation)
Message-ID: <1273049233602-2130631.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
As far as large data sets go, I've just discovered the readLines and
writeLines functions. I'm using them now to read in single rows, calculate
things on them, and then write a single row to a file.
--
View this message in context:
http://r.789695.n4.nabble.com/Two-Questions-on-R-call-by-reference-and-pre-compilation-tp2126314p2130631.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 139
Date: Wed, 05 May 2010 18:53:32 +1000
From: Jim Lemon <jim at bitwrit.com.au>
To: Anthony Fristachi <afristachi at portageinc.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] converting an objects list
Message-ID: <4BE1320C.5040207 at bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 05/05/2010 08:45 AM, Anthony Fristachi wrote:
> Hello,
>
> I would like to convert an objects list such as objects() or ls() that
> outputs "a101" "a102" "a104" "a107" "a109"
>
> to read within a list statement as follows:
> list(a101,a102,a104,a107,a109)
>
Hi Tony,
Try this:
x<-1:3
y<-letters[4:7]
z<-factor(c("fl","go","tw"))
result<-sapply(objects(),"get")
Jim
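An alternative sketch using mget(), which always returns a list and so
avoids any simplification by sapply(); the object names are just the ones
from the example above:
x <- 1:3
y <- letters[4:7]
z <- factor(c("fl", "go", "tw"))
result <- mget(c("x", "y", "z"), envir = globalenv())
## or, for everything currently in the workspace:
result <- mget(ls(), envir = globalenv())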
------------------------------
Message: 140
Date: Wed, 05 May 2010 19:05:46 +1000
From: Jim Lemon <jim at bitwrit.com.au>
To: Abiel X Reinhart <abiel.x.reinhart at jpmchase.com>
Cc: "r-help at r-project.org" <r-help at r-project.org>
Subject: Re: [R] fit printed output onto a single page
Message-ID: <4BE134EA.2030607 at bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 05/05/2010 12:12 AM, Abiel X Reinhart wrote:
> Is there a way to force a certain block of captured output to fit onto a
> single printed page, where one can specify the properties of the page
> (dimensions, margins, etc.)? For example, I might want to generate 10
> different cuts of a data table and then capture all the output into a
> PDF, ensuring that each run of the data table fits onto a single page
> (i.e. the font size of the output may have to be dynamically adjusted).
>
Hi Abiel,
One way that comes to mind is to display the block of text on a graphics
device like postscript. First open the postscript device and create an
empty plot:
# notice how culturally sensitive I am, using "letter" size paper!
postscript("mytext.ps",width=8.5,height=11)
# set the margins to zero or something narrow
par(mar=c(0,0,0,0))
plot(0,xlab="",ylab="",xlim=c(0,1),ylim=c(0,1),type="n",axes=FALSE)
Then work out whether your block of text will fit into the plot:
blockwidth<-strwidth(mytextblock)
blockheight<-strheight(mytextblock)
# this should get the reduction (or expansion) to make it fit
cexmult<-1/max(c(blockwidth,blockheight))
text(0.5,0.5,mytextblock,cex=cexmult)
This is off the top of my head (which is a bit dented at the moment) so
you might want to try it out first.
Jim
------------------------------
Message: 141
Date: Wed, 5 May 2010 11:46:33 +0200 (CEST)
From: "n.vialma at libero.it" <n.vialma at libero.it>
To: <r-help at r-project.org>
Subject: [R] concatenate values of two columns
Message-ID: <24065531.4180191273052793297.JavaMail.root at wmail46>
Content-Type: text/plain
Dear list,
I'm trying to concatenate the values of two columns but I'm not able to
do it. I have a dataframe with the following two columns:
X VAR1 VAR2
1   NA    2
2    1   NA
3    2   NA
4   NA    3
5   NA    4
6    4   NA
What I would like to obtain is:
X VAR3
1 2
2 1
3 2
4 3
5 4
6 4
I tried with paste but what I obtain is:
X VAR3
1 NA2
2 1NA
3 2NA
4 NA3
5 NA4
6 4NA
Thanks a lot!!
[[alternative HTML version deleted]]
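Judging from the paste() output, each row seems to have a value in exactly
one of VAR1 and VAR2, so the goal looks like coalescing rather than literal
concatenation; a sketch under that assumption:
df <- data.frame(X = 1:6,
                 VAR1 = c(NA, 1, 2, NA, NA, 4),
                 VAR2 = c(2, NA, NA, 3, 4, NA))
df$VAR3 <- ifelse(is.na(df$VAR1), df$VAR2, df$VAR1)   # take whichever is present
df[c("X", "VAR3")]
## if the columns really should be pasted together, drop the NAs first:
## df$VAR3 <- paste(ifelse(is.na(df$VAR1), "", df$VAR1),
##                  ifelse(is.na(df$VAR2), "", df$VAR2), sep = "")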
------------------------------
Message: 142
Date: Wed, 05 May 2010 11:47:32 +0200
From: Alex van der Spek <doorz at xs4all.nl>
To: r-help at r-project.org
Subject: [R] Memory issue
Message-ID: <4BE13EB4.8030402 at xs4all.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
I am reading a 138 MByte flat text file into R with a combination of
scan (to get the header) and read.table. After converting the text time
stamps to POSIXct and the integer codes to factors, I combine everything
into one data frame and release the old structures containing the data
by using rm().
Strangely, the rm() does not appear to reduce the used memory. I checked
using memory.size(). Worse still, the amount of memory required grows.
When I save an image the .RData image file is only 23 Mbyte, yet at some
point into the program, after having done nothing particularly
difficult (two and three way frequency tables and some lattice graphs),
the amount of memory in use is over 1 Gbyte.
Not yet a problem, but it will become one. This is using R 2.10.0
on Windows Vista.
Does anybody know how to release memory, as rm(dat) does not appear to do
this properly?
Regards,
Alex van der Spek
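For what it is worth: rm() only removes the binding; the space is reclaimed
at the next garbage collection, which gc() forces, and gc() also reports how
much memory R is holding. A small sketch (memory.size() is Windows-only):
dat <- matrix(rnorm(5e6), ncol = 10)   # roughly 40 MB of doubles
memory.size()                          # memory currently used by R (Windows)
rm(dat)
gc()                                   # force a collection and print usage
memory.size()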
------------------------------
Message: 143
Date: Wed, 5 May 2010 02:47:47 -0700 (PDT)
From: "David.Epstein" <David.Epstein at warwick.ac.uk>
To: r-help at r-project.org
Subject: [R] puzzles with assign()
Message-ID: <1273052867103-2130691.post at n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I'm trying to get code along the following lines to work:
temp.name <- paste(TimePt,'df',sep='.')  # invent a relevant name/symbol
                                         # as a character string
assign(temp.name, IGF.df[IGF.df$TPt==TimePt,])  # this works; the relevant
                                                # variable is now a data frame
lm(b ~ Strain+BWt+PWt+PanPix, data=temp.name)   # this gives an error, namely
Error in eval(predvars, data, env) : invalid 'envir' argument
I think it's obvious what I want to achieve, but how is it done? I tried
data=as.name(temp.name)
but that also didn't work. I can't find anything relevant in
"An Introduction to R".
Here is a secondary question:
While trying to understand what assign() does, I looked up help(assign) and
found the example
a <- 1:4
assign("a[1]", 2)
a[1] == 2 #FALSE
get("a[1]") == 2 #TRUE
Could someone explain this puzzling example, or point me to an explanation
of environments and how to operate with them?
Thanks
--
View this message in context:
http://r.789695.n4.nabble.com/puzzles-with-assign-tp2130691p2130691.html
Sent from the R help mailing list archive at Nabble.com.
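A sketch of two ways around the lm() error, reusing the IGF.df, TPt and
TimePt objects and the model formula from the post above (so the formula
itself is taken on trust): pass the data frame itself via get(), or skip
assign() entirely and keep the per-time-point frames in a list.
## data= wants the data frame, not its name, so look it up with get()
fit <- lm(b ~ Strain + BWt + PWt + PanPix, data = get(temp.name))
## often simpler: a named list of data frames, one per time point
by.time <- split(IGF.df, IGF.df$TPt)
fit <- lm(b ~ Strain + BWt + PWt + PanPix,
          data = by.time[[as.character(TimePt)]])
## the help-page example creates a variable whose *name* is the string
## "a[1]"; it never touches element 1 of a, hence a[1] == 2 is FALSE
## while get("a[1]") == 2 is TRUE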
------------------------------
_______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 87, Issue 5
*************************************