Hello everyone,
My name is Matthew and I'm new to this list. Greetings to everyone.
Sorry if I'm asking an old question, but I have an m x n matrix where the
rows are value profiles and the columns are conditions.
What I want to do is calculate the correlation between all possible pairs of
rows.
That is, if there are 10 rows in my matrix, then I want to calculate 10 x 10 =
100 correlation values (all against all).
R is slow when I use two nested for loops.
What other function or tool can I use to get the job done more speedily?
I've heard of tapply, lapply, etc., as well as by and aggregate.
Any kind of help is gladly appreciated.
Thanks,
Matthew
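A vectorized sketch of the usual approach, for reference: cor() computes
correlations between the columns of a matrix, so transposing the input gives
all pairwise row correlations in a single call.

set.seed(1)
m <- matrix(rnorm(10 * 6), nrow = 10)   # 10 value profiles x 6 conditions

rowcor <- cor(t(m))    # 10 x 10 all-against-all correlation matrix
dim(rowcor)            # [1] 10 10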
________________________________
From: "r-help-request@r-project.org"
<r-help-request@r-project.org>
To: r-help@r-project.org
Sent: Friday, April 6, 2012 5:00 AM
Subject: R-help Digest, Vol 110, Issue 7
Send R-help mailing list submissions to
r-help@r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request@r-project.org
You can reach the person managing the list at
r-help-owner@r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. Re: identify with mfcol=c(1,2) (Sarah Goslee)
2. xyplot with different pch and col in each panel and
additional line (Camarda, Carlo Giovanni)
3. Re: xyplot with different pch and col in each panel and
additional line (Jim Lemon)
4. Re: STAR Spatio Temporal AutoRegressive models (Roger Bivand)
5. Apply function to every 'nth' element of a vector (Michael Bach)
6. Re: using metafor for meta-analysis of before-after studies
(Michael Dewey)
7. Re: Apply function to every 'nth' element of a vector (Ista Zahn)
8. Re: using metafor for meta-analysis of before-after studies
(Viechtbauer Wolfgang (STAT))
9. Re: Apply function to every 'nth' element of a vector
(ken knoblauch)
10. Re: Apply function to every 'nth' element of a vector
(ken knoblauch)
11. Re: meta-analysis, outcome = OR associated with a continuous
independent variable (Viechtbauer Wolfgang (STAT))
12. Re: table: output: all variables in rows (Marion Wenty)
13. Re: meta-analysis, outcome = OR associated with a continuous
independent variable (ya)
14. Extended Nelson Siegel model SMC with R-POMP (arvanitis.christos)
15. GMM package error (arvanitis.christos)
16. Re: Bloomberg API functions BAddPeriods Binterpol
Bcountperiods in RBloomberg (John Laing)
17. need explaination on Rpart (santoshdvn)
18. help in paste command (arunkumar1111)
19. constrained optimization with vectors using the package
alabama (Bjoern Guse)
20. is parallel computing possible for 'rollapplyr' job? (Pam)
21. Re: using metafor for meta-analysis of before-after studies
(MP.Sylvestre@gmail.com)
22. Re: meta-analysis, outcome = OR associated with a continuous
independent variable (MP.Sylvestre@gmail.com)
23. reclaiming lost memory in R (Ramiro Barrantes)
24. Re: help in paste command (mlell08)
25. Re: Rgui maintains open file handles after Sweave error
(Alexander Shenkin)
26. Re: meta-analysis, outcome = OR associated with a continuous
independent variable (Viechtbauer Wolfgang (STAT))
27. Re: Apply function to every 'nth' element of a vector
(David Winsemius)
28. Re: help in paste command (David Winsemius)
29. Re: is parallel computing possible for 'rollapplyr' job?
(Gabor Grothendieck)
30. Re: help in paste command (arunkumar1111)
31. data normalize issue (uday)
32. Re: recover lost global function (Sam Steingold)
33. Re: recover lost global function (MacQueen, Don)
34. Re: Apply function to every 'nth' element of a vector
(Michael Bach)
35. Sum of sd between matrix cols vs spearman correlation between
them (ali_protocol)
36. Re: Subscript Error (z2.0)
37. help in match.fun (arunkumar1111)
38. Re: recover lost global function (luke-tierney@uiowa.edu)
39. Re: help in match.fun (Sarah Goslee)
40. Re: recover lost global function (Sam Steingold)
41. Best way to search r- functions and mailing list?
(Jonathan Greenberg)
42. Re: Imputing missing values using "LSmeans" (i.e., population
marginal means) - advice in R? (Liaw, Andy)
43. Re: unable to move temporary installation (Uwe Ligges)
44. ggplot2 error: arguments imply differing number of rows
(Debbie Smith)
45. Re: reclaiming lost memory in R (Drew Tyre)
46. Re: unable to move temporary installation (Drew Tyre)
47. Re: Best way to search r- functions and mailing list?
(R. Michael Weylandt)
48. Re: Fisher's LSD multiple comparisons in a two-way ANOVA
(Mendiburu, Felipe (CIP))
49. Re: API Baddperiods in RBloomberg (Prof Brian Ripley)
50. Re: Rgui maintains open file handles after Sweave error
(Duncan Murdoch)
51. Re: Rgui maintains open file handles after Sweave error
(Alexander Shenkin)
52. Re: Best way to search r- functions and mailing list?
(Kevin Wright)
53. Re: Best way to search r- functions and mailing list?
(Sarah Goslee)
54. Re: Sum of sd between matrix cols vs spearman correlation
between them (Petr Savicky)
55. how to do piecewise linear regression in R? (MANI)
56. Re: How to find best parameter values using deSolve n optim()
? (mhimanshu)
57. Re: random sample from list (Rui Barradas)
58. Re: random sample from list (David Winsemius)
59. Re: Best way to search r- functions and mailing list?
(Spencer Graves)
60. Re: ggplot2 error: arguments imply differing number of rows
(ux.seo)
61. Re: Best way to search r- functions and mailing list?
(Sarah Goslee)
62. Re: Rgui maintains open file handles after Sweave error
(Yihui Xie)
63. Multi part problem...array manipulation and sampling (Bcampbell99)
64. Re: Rgui maintains open file handles after Sweave error
(Alexander Shenkin)
65. Re: Best way to search r- functions and mailing list?
(Spencer Graves)
66. Re: Rgui maintains open file handles after Sweave error
(Yihui Xie)
67. Re: Rgui maintains open file handles after Sweave error
(Alexander Shenkin)
68. "too large for hashing" (Adam D. I. Kramer)
69. Re: "too large for hashing" (Duncan Murdoch)
70. Re: Rgui maintains open file handles after Sweave error
(Yihui Xie)
71. Re: API Baddperiods in RBloomberg (John Laing)
72. Re: Problem with NA data when computing standard error (apcoble)
73. Normalizing linear regression slope to intercept (Adam Harding)
74. Re: "too large for hashing" (Adam D. I. Kramer)
75. A kind of set operation in R (Julio Sergio)
76. Re: A kind of set operation in R (Richard M. Heiberger)
77. indexing data.frame columns (Peter Meilstrup)
78. Re: Difference in Kaplan-Meier estimates plus CI (array chip)
79. Re: A kind of set operation in R (Julio Sergio)
80. Re: A kind of set operation in R (Berend Hasselman)
81. Re: Histogram classwise (Greg Snow)
82. Appropriate method for sharing data across functions (John C Nash)
83. Re: Appropriate method for sharing data across functions
(Hadley Wickham)
84. producing vignettes (Erin Hodgess)
85. Re: identify with mfcol=c(1,2) (Greg Snow)
86. Re: indexing data.frame columns (ilai)
87. Re: meaning of sigma from LM, is it the same as RMSE (Greg Snow)
88. Re: indexing data.frame columns (Milan Bouchet-Valat)
89. Re: producing vignettes (R. Michael Weylandt)
90. Re: How does predict.loess work? (Greg Snow)
91. Re: A kind of set operation in R (Julio Sergio)
92. Re: A kind of set operation in R (Dirk Eddelbuettel)
93. Re: A kind of set operation in R (Bert Gunter)
94. R PMML Standard Support: BetteR than EveR! (MZ)
95. Re: how to compute a vector of min values ? (Rui Barradas)
96. Help Using Spreadsheets (Pedro Henrique)
97. count() function (Christopher R. Dolanc)
98. Inputing Excel data into R to make a map (Metametadata)
99. Warning message: Gamlss - Need help (Hanin Farah)
100. Function - simple question (flacerdams)
101. Re: Function - simple question (chuck.01)
102. how to compute a vector of min values ? (ikuzar)
103. integrate function - error -integration not occurring with
last few rows (Navin Goyal)
104. Re: rgl package broke with R 2.14.2 (Grimes Mark)
105. Re: [Q] Bayeisan Network with the "deal" package (kritikool)
106. Re: Bayesian Belief Networks (kritikool)
107. Re: Inputing Excel data into R to make a map (Michael Sumner)
108. Re: count() function (William Dunlap)
109. Re: Appropriate method for sharing data across functions
(William Dunlap)
110. Re: reclaiming lost memory in R (William Dunlap)
111. Re: identify with mfcol=c(1,2) (John Sorkin)
112. Re: meaning of sigma from LM, is it the same as RMSE
(John Sorkin)
113. Re: count() function (David Winsemius)
114. Help with gsub function or a similar function (ieatnapalm)
115. simulation (Christopher Kelvin)
116. Re: Help with gsub function or a similar function
(David Winsemius)
117. Re: simulation (David Winsemius)
118. Re: integrate function - error -integration not occurring
with last few rows (Berend Hasselman)
119. Re: how to compute a vector of min values ? (peter dalgaard)
120. Legend based on levels of a variable (Kumar Mainali)
121. Time series - year on year growth rate (jpm miao)
122. Re: Fisher's LSD multiple comparisons in a two-way ANOVA
(Jinsong Zhao)
123. Re: Legend based on levels of a variable (Petr PIKAL)
124. Hi: Help Using Spreadsheets (Petr PIKAL)
125. Re: Legend based on levels of a variable (mlell08)
126. Re: how to do piecewise linear regression in R? (Petr PIKAL)
127. Re: Time series - year on year growth rate (Berend Hasselman)
----------------------------------------------------------------------
Message: 1
Date: Thu, 5 Apr 2012 06:17:58 -0400
From: Sarah Goslee <sarah.goslee@gmail.com>
To: John Sorkin <JSorkin@grecc.umaryland.edu>
Cc: "<r-help@r-project.org>" <r-help@r-project.org>
Subject: Re: [R] identify with mfcol=c(1,2)
Message-ID: <25338AD6-6305-428F-A6CC-E25E245E6597@gmail.com>
Content-Type: text/plain; charset=us-ascii
Hi,
Some additional information from you would make it more likely that the list can
help you.
What's your sessionInfo?
Does the same thing occur if you don't wrap both plots in a single function?
Can you provide a small reproducible example so we can try it out?
Sarah
On Apr 4, 2012, at 7:48 PM, "John Sorkin"
<JSorkin@grecc.umaryland.edu> wrote:
> Please forgive my re-sending this question. I did not see any replies from
my prior post. My apologies if I missed something.
>
> I would like to have a figure with two graphs. This is easily accomplished
using mfcol:
>
> oldpar <- par(mfcol=c(1,2))
> plot(x,y)
> plot(z,x)
> par(oldpar)
>
> I run into trouble if I try to use identify with the two plots. If, after
identifying points on my first graph, I hit the ESC key or the Stop item on
the menu bar of my R session, the system stops the identification process but
fails to give me my second graph. Is there a way to allow for the
identification of points when one is plotting two graphs in a single graph
window? My code follows.
>
> plotter <- function(first,second) {
> # Allow for two plots in one graph window.
> oldpar<-par(mfcol=c(1,2))
>
> #Bland-Altman plot.
> plot((second+first)/2,second-first)
> abline(0,0)
> # Allow for identification of extreme values.
> BAzap<-identify((second+first)/2,second-first,labels =
seq_along(data$Line))
> print(BAzap)
>
> # Plot second as a function of first value.
> plot(first,second,main="Limin vs. Limin",xlab="First
(cm^2)",ylab="Second (cm^3)")
> # Add identity line.
> abline(0,1,lty=2,col="red")
> # Allow for identification of extreme values.
> zap<-identify(first,second,labels = seq_along(data$Line))
> print(zap)
> # Add regression line.
> fit1<-lm(second~first)  # y ~ x, so abline(fit1) matches plot(first,second)
> print(summary(fit1))
> abline(fit1)
> print(summary(fit1)$sigma)
>
> # reset par to default values.
> par(oldpar)
>
> }
> plotter(first,second)
>
>
> Thanks,
> John
>
>
>
>
>
>
> John David Sorkin M.D., Ph.D.
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for th...{{dropped:6}}
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 2
Date: Thu, 5 Apr 2012 12:41:36 +0200
From: "Camarda, Carlo Giovanni" <Camarda@demogr.mpg.de>
To: <r-help@stat.math.ethz.ch>
Subject: [R] xyplot with different pch and col in each panel and
additional line
Message-ID:
<5B2F2CD24AD2764D898AF5A7B9A2ABBF02C3688B@hermes.demogr.mpg.de>
Content-Type: text/plain
Dear R-users,
I'd like to use an xyplot(lattice) in which in each panel I have points with
different point-character and color, and additional lines with the same color.
Please find below a toy example in which I did not manage to change such
parameters, and the associated basic plot() in which I show what I aim to within
lattice.
Thanks for any help you could provide,
Giancarlo Camarda
library(lattice)
y <- seq(0,1, length=15)
yy <- y + rnorm(15,sd=0.05)
dataset <- data.frame(time = rep(1:5,times=3),
y = y,
yy = yy,
type = rep(LETTERS[1:3], each=5))
xyplot(yy+y ~ time | type, dataset,
layout=c(3,1),
type=c("p", "l"),
col=c(1,2),
panel = function(...) {
panel.superpose.2(...)
})
par(mfrow=c(1,3))
plot(1:5, yy[1:5], pch=1, col=3, ylim=c(0,1))
lines(1:5, y[1:5], col=2)
plot(1:5, yy[1:5+5], pch=2, col=4, ylim=c(0,1))
lines(1:5, y[1:5+5], col=2)
plot(1:5, yy[1:5+10], pch=3, col=5, ylim=c(0,1))
lines(1:5, y[1:5+10], col=2)
----------
This mail has been sent through the MPI for Demographic ...{{dropped:10}}
------------------------------
Message: 3
Date: Thu, 5 Apr 2012 20:50:25 +1000
From: Jim Lemon <jim@bitwrit.com.au>
To: "Camarda, Carlo Giovanni" <Camarda@demogr.mpg.de>
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] xyplot with different pch and col in each panel and
additional line
Message-ID: <4F7D78F1.3040107@bitwrit.com.au>
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
On 04/05/2012 08:41 PM, Camarda, Carlo Giovanni wrote:
> Dear R-users,
>
> I'd like to use an xyplot(lattice) in which in each panel I have points
with different point-character and color, and additional lines with the same
color. Please find below a toy example in which I did not manage to change such
parameters, and the associated basic plot() in which I show what I aim to within
lattice.
>
> Thanks for any help you could provide,
> Giancarlo Camarda
>
> library(lattice)
> y<- seq(0,1, length=15)
> yy<- y + rnorm(15,sd=0.05)
> dataset<- data.frame(time = rep(1:5,times=3),
> y = y,
> yy = yy,
> type = rep(LETTERS[1:3], each=5))
>
> xyplot(yy+y ~ time | type, dataset,
> layout=c(3,1),
> type=c("p", "l"),
> col=c(1,2),
> panel = function(...) {
> panel.superpose.2(...)
> })
>
> par(mfrow=c(1,3))
> plot(1:5, yy[1:5], pch=1, col=3, ylim=c(0,1))
> lines(1:5, y[1:5], col=2)
> plot(1:5, yy[1:5+5], pch=2, col=4, ylim=c(0,1))
> lines(1:5, y[1:5+5], col=2)
> plot(1:5, yy[1:5+10], pch=3, col=5, ylim=c(0,1))
> lines(1:5, y[1:5+10], col=2)
>
>
Hi Carlo,
If all you want is the "look" of the panel plot, you might find that the
"panes" function (plotrix) will give you that. See the examples.
Jim
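For the lattice route itself, a possible sketch: a custom panel.groups
function together with panel.number() can vary pch and col per panel
(untested against the original data, but runnable with the toy example
from the post):

library(lattice)
set.seed(1)
y  <- seq(0, 1, length = 15)
yy <- y + rnorm(15, sd = 0.05)
dataset <- data.frame(time = rep(1:5, times = 3), y = y, yy = yy,
                      type = rep(LETTERS[1:3], each = 5))

xyplot(yy + y ~ time | type, dataset, layout = c(3, 1),
       panel = panel.superpose,
       panel.groups = function(x, y, group.number, ...) {
         i <- panel.number()
         if (group.number == 1)           # yy: points, panel-specific pch/col
           panel.points(x, y, pch = i, col = i + 2)
         else                             # y: line, same colour in all panels
           panel.lines(x, y, col = 2)
       })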
------------------------------
Message: 4
Date: Thu, 5 Apr 2012 10:57:28 +0000
From: Roger Bivand <Roger.bivand@nhh.no>
To: <r-help@stat.math.ethz.ch>
Subject: Re: [R] STAR Spatio Temporal AutoRegressive models
Message-ID: <loom.20120405T125312-815@post.gmane.org>
Content-Type: text/plain; charset="us-ascii"
vasilis <vasilis.dakos <at> gmail.com> writes:
>
> Hi there,
> do you know if there is a package that fits spatio temporal autoregressive
> models in R?
If by spatio temporal autoregressive models, you mean spatial panel models,
please see the splm package on R-Forge:
https://r-forge.r-project.org/projects/splm/
> thanks
> vasilis
>
------------------------------
Message: 5
Date: Thu, 5 Apr 2012 13:01:51 +0200
From: Michael Bach <phaebz@gmail.com>
To: r-help@r-project.org
Subject: [R] Apply function to every 'nth' element of a vector
Message-ID:
<CAFbCY6gmx2J+_=Zg-TJUrAyNCgbao-unEugxSoBBR+ZHMPeQHw@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear R users,
how do I e.g. square each second element of a vector with an even
number of elements? Or more generally to apply a function to every
'nth' element of a vector. I looked into the apply functions, but
found no hint.
For example:
v <- c(1, 2, 3, 4)
mysquare <- function (x) { return (x*x) }
w <- applyfun(v, mysquare, 2)
then w should be c(1, 4, 3, 16)
Thanks for your time,
Michael Bach
------------------------------
Message: 6
Date: Thu, 05 Apr 2012 12:04:01 +0100
From: Michael Dewey <info@aghmed.fsnet.co.uk>
To: MP.Sylvestre@gmail.com, r-help@r-project.org
Subject: Re: [R] using metafor for meta-analysis of before-after
studies
Message-ID: <Zen-1SFkU0-0007xf-FW@smarthost04.mail.zen.net.uk>
Content-Type: text/plain; charset="us-ascii"; format=flowed
At 18:39 04/04/2012, MP.Sylvestre@gmail.com wrote:
>Greetings,
>I wish to conduct a meta-analysis for which the outcome is a continuous
>variable measured on the same individuals before and after an intervention.
>Hence, the comparison is not made between two groups, but within groups, at
>different times.
>
>Each study reports the mean outcome and SD before the intervention and the
>mean outcome and SD after the intervention. While p-values for paired
>t-test (or similar methods for paired data) are reported in the studies, no
>estimate of the variability of the individual differences are available.
If you know the p-value you can generate the t-value.
If you know the t-value and the mean difference you can back-calculate
the standard errors of the differences.
Having said that, I am not absolutely sure what the design of the
primary studies you are analysing is, so my answer may not apply
directly to your problem.
>Can metafor deal with this sort of meta-analysis? I know that I can
>technically run metafor on these data, assuming that the groups are
>independent but my inference is likely to be wrong. On the other hand, I
>have no idea of the correlation within individuals.
>
>Thanks in advance,
>MP
>
> [[alternative HTML version deleted]]
Michael Dewey
info@aghmed.fsnet.co.uk
http://www.aghmed.fsnet.co.uk/home.html
------------------------------
Message: 7
Date: Thu, 5 Apr 2012 07:15:40 -0400
From: Ista Zahn <istazahn@gmail.com>
To: Michael Bach <phaebz@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Apply function to every 'nth' element of a vector
Message-ID:
<CA+vqiLEXdX2AxvGY+MZf7tuvVTf3QtDiSQJTHKc8if9EFGv+Zw@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Good morning Michael,
On Thu, Apr 5, 2012 at 7:01 AM, Michael Bach <phaebz@gmail.com>
wrote:
> Dear R users,
>
> how do I e.g. square each second element of a vector with an even
> number of elements? Or more generally to apply a function to every
> 'nth' element of a vector. I looked into the apply functions, but
> found no hint.
Use indexing, or ifelse.
>
> For example:
>
> v <- c(1, 2, 3, 4)
> mysquare <- function (x) { return (x*x) }
> w <- applyfun(v, mysquare, 2)
>
> then w should be c(1, 4, 3, 16)
Here is one way (indexing by position rather than by value):
w <- ifelse(seq_along(v) %in% seq(1, length(v), 2), v, v^2)
Best,
Ista
>
> Thanks for your time,
> Michael Bach
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 8
Date: Thu, 5 Apr 2012 13:44:54 +0200
From: "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl>
To: "r-help@r-project.org" <r-help@r-project.org>,
"MP.Sylvestre@gmail.com" <MP.Sylvestre@gmail.com>,
Michael Dewey
<info@aghmed.fsnet.co.uk>
Subject: Re: [R] using metafor for meta-analysis of before-after
studies
Message-ID:
<077E31A57DA26E46AB0D493C9966AC730CAEABBAF3@UM-MAIL4112.unimaas.nl>
Content-Type: text/plain; charset="us-ascii"
To add to Michael's response:
There are several things you can do:
1) If the dependent variable is the same in each study, then you could conduct
the meta-analysis with the (raw) mean changes, i.e., yi = m1i - m2i, where m1i
and m2i are the means at time 1 and 2, respectively. The sampling variance of yi
is vi = sdi^2 / ni, where sdi = sqrt(sd1i^2 + sd2i^2 - 2*ri*sd1i*sd2i), sd1i and
sd2i are the standard deviations of the outcomes at time 1 and 2, respectively,
ri is the correlation between the outcomes at time 1 and time 2, and ni is the
sample size. So, sdi is the standard deviation of the change scores.
When sdi is not reported, you will have to back-calculate sdi based on what you
have. You say that the p-value for the paired samples t-test is reported.
Typically, this will be a two-sided p-value, so ti = qt(pval/2, df=ni-1,
lower.tail=FALSE) will give you the value of the test statistic.
And since ti = (m1i - m2i) * sqrt(ni) / sdi, you can back-calculate what sdi is
with sdi = (m1i - m2i) * sqrt(ni) / ti (you just have to make sure that the sign
of m1i - m2i and the sign of ti are matched up). And now, you can even
back-calculate what ri was by rearranging the equation for sdi.
2) Often, the dependent variable is not the same in each study. Then you will
have to resort to a standardized outcome measure. There are two options:
a) standardization based on the change score standard deviation
Then yi = (m1i - m2i) / sdi with sampling variance vi = 1/ni + yi^2 / (2*ni).
b) standardization based on the raw score standard deviation
Then yi = (m1i - m2i) / sd1i with sampling variance vi = 2*(1-ri)/ni + yi^2 /
(2*ni).
Note that we standardize based on sd1i (i.e., the SD at time 1). So, we do not
pool sd1i and sd2i. Also, since ri is typically not reported, you will have to
use the method described above to back-calculate what ri was.
Regardless of which approach you use, you can then proceed with the
meta-analysis using the yi and vi values. For example, with the metafor package,
if those values are in a data frame called dat,
rma(yi, vi, data=dat)
will fit a random-effects model.
Those three outcome measures described above will actually be implemented in an
upcoming version of the metafor package. For now, you will have to do the
computations of yi and vi yourself.
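For concreteness, a minimal sketch of approach 1) in R (toy summary values
for illustration only; ri taken as known):

## toy summary data for two studies -- illustration only
dat <- data.frame(m1i = c(10.2, 12.1), m2i = c(8.6, 9.4),
                  sd1i = c(2.0, 3.1), sd2i = c(2.4, 3.3),
                  ri   = c(0.6, 0.5), ni = c(30, 45))

dat$sdi <- with(dat, sqrt(sd1i^2 + sd2i^2 - 2*ri*sd1i*sd2i)) # SD of changes
dat$yi  <- with(dat, m1i - m2i)    # raw mean change
dat$vi  <- with(dat, sdi^2 / ni)   # its sampling variance

library(metafor)
rma(yi, vi, data = dat)            # random-effects model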
Best,
Wolfgang
--
Wolfgang Viechtbauer, Ph.D., Statistician
Department of Psychiatry and Psychology
School for Mental Health and Neuroscience
Faculty of Health, Medicine, and Life Sciences
Maastricht University, P.O. Box 616 (VIJV1)
6200 MD Maastricht, The Netherlands
+31 (43) 388-4170 | http://www.wvbauer.com
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org]
> On Behalf Of Michael Dewey
> Sent: Thursday, April 05, 2012 13:04
> To: MP.Sylvestre@gmail.com; r-help@r-project.org
> Subject: Re: [R] using metafor for meta-analysis of before-after studies
>
> At 18:39 04/04/2012, MP.Sylvestre@gmail.com wrote:
> >Greetings,
> >I wish to conduct a meta-analysis for which the outcome is a continuous
> >variable measured on the same individuals before and after an
> intervention.
> >Hence, the comparison is not made between two groups, but within
> >groups, at diffrent times.
> >
> >Each study reports the mean outcome and SD before the intervention and
> >the mean outcome and SD after the intervention. While p-values for
> >paired t-test (or similar methods for paired data) are reported in the
> >studies, no estimate of the variability of the individual differences
are
> available.
>
> If you know the p-value you can generate the t-value If you know the t-
> value and the mean difference you can back calculate the standard errors
> of the differences.
>
> Having said that I am not absolutely sure what the design of the primary
> studies you are analysing is so my answer may not apply directly to your
> problem.
>
>
> >Can metafor deal with this sort of meta-analysis? I know that I can
> >technically run metafor on these data, assuming that the groups are
> >independent but my inference is likely to be wrong. On the other hand,
> >I have no idea of the correlation within individuals.
> >
> >Thanks in advance,
> >MP
> >
> > [[alternative HTML version deleted]]
>
> Michael Dewey
> info@aghmed.fsnet.co.uk
> http://www.aghmed.fsnet.co.uk/home.html
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 9
Date: Thu, 5 Apr 2012 11:52:02 +0000
From: ken knoblauch <ken.knoblauch@inserm.fr>
To: <r-help@stat.math.ethz.ch>
Subject: Re: [R] Apply function to every 'nth' element of a vector
Message-ID: <loom.20120405T134958-940@post.gmane.org>
Content-Type: text/plain; charset="utf-8"
Michael Bach <phaebz <at> gmail.com> writes:
> how do I e.g. square each second element of a vector with an even
> number of elements? Or more generally to apply a function to every
> 'nth' element of a vector. I looked into the apply functions, but
> found no hint.
> For example:
> v <- c(1, 2, 3, 4)
> mysquare <- function (x) { return (x*x) }
> w <- applyfun(v, mysquare, 2)
> then w should be c(1, 4, 3, 16)
> Michael Bach
Hi Michael,
v^(2 - seq_along(v) %% 2)
[1] 1 4 3 16
Ken
--
Ken Knoblauch
Inserm U846
Stem-cell and Brain Research Institute
Department of Integrative Neurosciences
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.sbri.fr/members/kenneth-knoblauch.html
------------------------------
Message: 10
Date: Thu, 5 Apr 2012 12:02:16 +0000
From: ken knoblauch <ken.knoblauch@inserm.fr>
To: <r-help@stat.math.ethz.ch>
Subject: Re: [R] Apply function to every 'nth' element of a vector
Message-ID: <loom.20120405T135939-271@post.gmane.org>
Content-Type: text/plain; charset="utf-8"
ken knoblauch <ken.knoblauch <at> inserm.fr> writes:
>
> Michael Bach <phaebz <at> gmail.com> writes:
> > how do I e.g. square each second element of a
> vector with an even
> > number of elements? Or more generally to
> apply a function to every
> > 'nth' element of a vector. I looked into the
> apply functions, but
> > found no hint.
> > For example:
> > v <- c(1, 2, 3, 4)
> > mysquare <- function (x) { return (x*x) }
> > w <- applyfun(v, mysquare, 2)
> > then w should be c(1, 4, 3, 16)
> > Michael Bach
>
> Hi Michael,
>
> v^(2 - seq_along(v) %% 2)
>
> [1] 1 4 3 16
>
> Ken
>
Actually, combining Ista's and my responses, a more general version could be
something like

idx <- seq_along(v) %% n == 0   # every nth position
v[idx] <- f(v[idx])

where you have set n and defined some function f.
HTH,
Ken
--
Ken Knoblauch
Inserm U846
Stem-cell and Brain Research Institute
Department of Integrative Neurosciences
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.sbri.fr/members/kenneth-knoblauch.html
------------------------------
Message: 11
Date: Thu, 5 Apr 2012 14:03:53 +0200
From: "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl>
To: "r-help@r-project.org" <r-help@r-project.org>, Marie-Pierre
Sylvestre <mp.sylvestre@gmail.com>, Thomas Lumley
<tlumley@uw.edu>
Subject: Re: [R] meta-analysis, outcome = OR associated with a
continuous independent variable
Message-ID:
<077E31A57DA26E46AB0D493C9966AC730CAEABBB16@UM-MAIL4112.unimaas.nl>
Content-Type: text/plain; charset="iso-8859-1"
I do not see any major difficulties with this case either. Suppose you have OR =
1.5 (with 95% CI: 1.19 to 1.90) indicating that the odds of a particular outcome
(e.g., disease) is 1.5 times greater when the (continuous) exposure variable
increases by one unit. Then you can back-calculate the SE of log(OR) = .41 with
sei = (ln(ci.ub) - ln(ci.lb)) / (2*1.96),
which in this case is approximately 0.12. The sampling variance of log(OR) is
then vi = sei^2.
Now you have everything for the meta-analysis, using any of the packages
mentioned.
What Thomas already mentioned is that the 'one unit increase' must mean
the same thing in each study. Therefore, if the exposure variable is measured in
months in one study and in years in another study, then the odds ratios are
obviously not directly comparable. If the units are just multiples of each
other, then you can easily calculate what the OR would be when putting the
exposure variable on the same scale. For example, an OR of 1.5 for a one month
increase in exposure is the same as an OR of 1.5^12 = 129.75 for a one year
increase in exposure.
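In R, with the numbers from this example:

or    <- 1.5
ci.lb <- 1.19
ci.ub <- 1.90

yi  <- log(or)                                 # log odds ratio, ~0.41
sei <- (log(ci.ub) - log(ci.lb)) / (2 * 1.96)  # back-calculated SE, ~0.12
vi  <- sei^2                                   # sampling variance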
Best,
Wolfgang
--
Wolfgang Viechtbauer, Ph.D., Statistician
Department of Psychiatry and Psychology
School for Mental Health and Neuroscience
Faculty of Health, Medicine, and Life Sciences
Maastricht University, P.O. Box 616 (VIJV1)
6200 MD Maastricht, The Netherlands
+31 (43) 388-4170 | http://www.wvbauer.com
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org]
> On Behalf Of Thomas Lumley
> Sent: Wednesday, April 04, 2012 23:42
> To: Marie-Pierre Sylvestre
> Cc: r-help@r-project.org
> Subject: Re: [R] meta-analysis, outcome = OR associated with a continuous
> independent variable
>
> On Thu, Apr 5, 2012 at 8:24 AM, Marie-Pierre Sylvestre
> <mp.sylvestre@gmail.com> wrote:
> > Hello everyone,
> > I want to do a meta-analysis of case-control studies on which an OR
> > was computed based on a continuous exposure. I have found several
> > several packages (metafor, rmeta, meta) but unless I misunderstood
> > their main functions, ?it seems to me that they focus on two-group
> > comparisons (binary independent variable), and do not have the option
> > of using a continuous independent variable.
>
>
> There's no problem in using continuous exposures in meta.summaries() in
> the rmeta package. For each study, compute your log odds ratio and its
> standard error, and feed them in.
>
> You just need to make sure that the odds ratio is in the same units in
> each study, of course.
>
> -thomas
>
> --
> Thomas Lumley
> Professor of Biostatistics
> University of Auckland
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 12
Date: Thu, 5 Apr 2012 14:10:23 +0200
From: Marion Wenty <marion.wenty@gmail.com>
To: David Winsemius <dwinsemius@comcast.net>
Cc: r-help@r-project.org
Subject: Re: [R] table: output: all variables in rows
Message-ID:
<CA+=LhL3ddL06vC+BWi4vNy=q16Q7mAK_Opo7Nqd+6sFAO0Y85Q@mail.gmail.com>
Content-Type: text/plain
Hello David,
yes! this is what I was looking for!
Thank you very much for your help.
Marion
2012/4/2 David Winsemius <dwinsemius@comcast.net>
>
> On Apr 2, 2012, at 7:11 AM, Marion Wenty wrote:
>
> Dear people,
>>
>> I would like to create a table out of a data.frame.
>>
>> How can I determine, which variables are put in the rows and which in
the
>> columns?
>> I would like to get all the variables in the ROWS:
>>
>> I am including a simple example:
>>
>>
D<-data.frame(age=c(8,9,10), county=c("B","W","W"))
>>
>> the output should have the following structure:
>>
>> 8 B 1
>>
>> 8 W 0
>>
>> 9 B 0
>>
>> 9 W 1
>>
>> 10 B 0
>>
>> 10 W 1
>>
>
> You can use the order() function with "[" to rearrange this to suit your
> needs:
>
> > as.data.frame(xtabs(~age+county, data=D))
> age county Freq
> 1 8 B 1
> 2 9 B 0
> 3 10 B 0
> 4 8 W 0
> 5 9 W 1
> 6 10 W 1
>
>>
>>
>> [[alternative HTML version deleted]]
>>
>
> Posting in HTML is considered impolite on Rhelp.
>
> --
>
> David Winsemius, MD
> West Hartford, CT
>
>
[[alternative HTML version deleted]]
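For the reordering step mentioned above, a short sketch building on the same
xtabs() call:

tab <- as.data.frame(xtabs(~ age + county, data = D))
tab[order(tab$age, tab$county), ]   # rows ordered by age, then county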
------------------------------
Message: 13
Date: Thu, 05 Apr 2012 15:13:33 +0300
From: ya <xinxi813@163.com>
To: "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl>
Cc: "r-help@r-project.org" <r-help@r-project.org>, Thomas
Lumley
<tlumley@uw.edu>, Marie-Pierre Sylvestre
<mp.sylvestre@gmail.com>
Subject: Re: [R] meta-analysis, outcome = OR associated with a
continuous independent variable
Message-ID: <4F7D8C6D.6090200@163.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
LIKE:)
On 2012-4-5 15:03, Viechtbauer Wolfgang (STAT) wrote:
> I do not see any major difficulties with this case either. Suppose you have
OR = 1.5 (with 95% CI: 1.19 to 1.90) indicating that the odds of a particular
outcome (e.g., disease) is 1.5 times greater when the (continuous) exposure
variable increases by one unit. Then you can back-calculate the SE of log(OR) =
.41 with
>
> sei = (ln(ci.ub) - ln(ci.lb)) / (2*1.96),
>
> which in this case is approximately 0.12. The sampling variance of log(OR)
is then vi = sei^2.
>
> Now you have everything for the meta-analysis, using any of the packages
mentioned.
>
> What Thomas already mentioned is that the 'one unit increase' must
mean the same thing in each study. Therefore, if the exposure variable is
measured in months in one study and in years in another study, then the odds
ratios are obviously not directly comparable. If the units are just multiples of
each other, then you can easily calculate what the OR would be when putting the
exposure variable on the same scale. For example, an OR of 1.5 for a one month
increase in exposure is the same as an OR of 1.5^12 = 129.75 for a one year
increase in exposure.
>
> Best,
>
> Wolfgang
>
> --
> Wolfgang Viechtbauer, Ph.D., Statistician
> Department of Psychiatry and Psychology
> School for Mental Health and Neuroscience
> Faculty of Health, Medicine, and Life Sciences
> Maastricht University, P.O. Box 616 (VIJV1)
> 6200 MD Maastricht, The Netherlands
> +31 (43) 388-4170 | http://www.wvbauer.com
>
>
>> -----Original Message-----
>> From: r-help-bounces@r-project.org
[mailto:r-help-bounces@r-project.org]
>> On Behalf Of Thomas Lumley
>> Sent: Wednesday, April 04, 2012 23:42
>> To: Marie-Pierre Sylvestre
>> Cc: r-help@r-project.org
>> Subject: Re: [R] meta-analysis, outcome = OR associated with a
continuous
>> independent variable
>>
>> On Thu, Apr 5, 2012 at 8:24 AM, Marie-Pierre Sylvestre
>> <mp.sylvestre@gmail.com> wrote:
>>> Hello everyone,
>>> I want to do a meta-analysis of case-control studies on which an OR
>>> was computed based on a continuous exposure. I have found several
>>> several packages (metafor, rmeta, meta) but unless I misunderstood
>>> their main functions, it seems to me that they focus on two-group
>>> comparisons (binary independent variable), and do not have the
option
>>> of using a continuous independent variable.
>>
>> There's no problem in using continuous exposures in
meta.summaries() in
>> the rmeta package. For each study, compute your log odds ratio and its
>> standard error, and feed them in.
>>
>> You just need to make sure that the odds ratio is in the same units in
>> each study, of course.
>>
>> -thomas
>>
>> --
>> Thomas Lumley
>> Professor of Biostatistics
>> University of Auckland
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-
>> guide.html
>> and provide commented, minimal, self-contained, reproducible code.
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 14
Date: Thu, 5 Apr 2012 03:30:22 -0700 (PDT)
From: "arvanitis.christos" <arvanitisch@piraeusbank.gr>
To: r-help@r-project.org
Subject: [R] Extended Nelson Siegel model SMC with R-POMP
Message-ID: <1333621822661-4534457.post@n4.nabble.com>
Content-Type: text/plain; charset=UTF-8
Hi to everyone,
Recently I became aware of an R package for Sequential Monte Carlo
particle filtering called pomp [check the CRAN site].
I have found the package hard to learn and understand on a first pass,
and I was wondering if someone would like to share her/his experience
of a pomp implementation of a system of SDEs.
In particular, I am trying to implement in pomp an extended Nelson-Siegel
interest rate model. The system of discrete-time equations is as follows:
y_t(T)  = b_{0,t} + b_{1,t}*f_1(l_t) + b_{2,t}*f_2(l_t) + e_t*exp(h_t)
h_t     = (1 - Q^h)*mu^h + Q^h*h_{t-1} + v_t
ln(l_t) = (1 - Q^l)*mu^l + Q^l*ln(l_{t-1}) + z_t
B       = diag(b1, b2, b3)
B_t     = (1 - Q^B)*mu^B + Q^B*B_{t-1} + W_t*exp(D_t)
D_t     = (1 - Q^D)*mu^D + Q^D*D_{t-1} + u_t
I am attaching a Word document with the above system of equations in a more
readable form; please check it out.
Is there anyone who could provide some guidance on implementing this system
in pomp?
Can you provide some R code to guide the implementation of the above system?
Thanks in advance for your help,
christos
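As a starting point, a minimal sketch for just the h_t volatility equation,
written against the R-function interface of recent pomp versions (2.x);
parameter names here are illustrative, the other state equations would be
added analogously, and the pomp vignettes remain the authoritative reference
for the exact signatures:

library(pomp)

po <- pomp(
  data  = data.frame(time = 1:100, y = NA_real_),
  times = "time", t0 = 0,
  rinit = function (mu_h, ...) c(h = mu_h),
  rprocess = discrete_time(
    function (h, Q_h, mu_h, sigma_v, ...) {
      ## h_t = (1 - Q^h)*mu^h + Q^h*h_{t-1} + v_t
      c(h = (1 - Q_h) * mu_h + Q_h * h + rnorm(1, sd = sigma_v))
    },
    delta.t = 1),
  rmeasure = function (h, sigma_e, ...) c(y = rnorm(1, sd = sigma_e * exp(h))),
  params = c(Q_h = 0.9, mu_h = 0, sigma_v = 0.1, sigma_e = 1)
)

sim <- simulate(po)   # draw one path from the model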
--
View this message in context:
http://r.789695.n4.nabble.com/Extended-Nelson-Siegel-model-SMC-with-R-POMP-tp4534457p4534457.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 15
Date: Thu, 5 Apr 2012 04:22:07 -0700 (PDT)
From: "arvanitis.christos" <arvanitisch@piraeusbank.gr>
To: r-help@r-project.org
Subject: [R] GMM package error
Message-ID: <1333624927338-4534556.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi to all
I am using GEL function of GMM package to fit a ts
I use gmm to feed initial values to GEL as:
temp3M <- gmm(fgmm, rf3M[[i]], tet03M[[i]], wmatrix = "ident")$coef

gel3M[[i]] <- gel(g      = fgmm,        # moment function
                  x      = rf3M[[i]],   # data (time series)
                  tet0   = temp3M,
                  gradv  = NULL,
                  smooth = TRUE,
                  type   = c("EL", "ET", "CUE", "ETEL")[4],
                  kernel = c("Truncated", "Bartlett")[1],
                  approx = c("AR(1)", "ARMA(1,1)")[2],
                  prewhite = 1,
                  optfct = c("optim", "optimize", "nlminb")[3],
                  optlam = c("iter", "numeric")[1])
and I am getting the error

Error in gmm(P$g, x, P$tet0, wmatrix = "ident") : object 'x' not found

Can you help out?
Thanks in advance for your help
--
View this message in context:
http://r.789695.n4.nabble.com/GMM-package-error-tp4534556p4534556.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 16
Date: Thu, 5 Apr 2012 07:43:14 -0400
From: John Laing <john.laing@gmail.com>
To: "arvanitis.christos" <arvanitisch@piraeusbank.gr>
Cc: r-help@r-project.org
Subject: Re: [R] Bloomberg API functions BAddPeriods Binterpol
Bcountperiods in RBloomberg
Message-ID:
<CAA3Wa=v-E+T8T5knNkpJ3KCQBVOXwwUSX9iiG5GY5Nuv=3GVqA@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hello,
These functions are not available in RBloomberg. As far as I can tell,
Bloomberg does not expose these functions in the general API; they are
specific to the Excel version.
R has other mechanisms for performing date arithmetic, both in the
Date class and the POSIXct/POSIXlt classes. This won't help with
calendar conventions, but it's a start. Also, R's approx function can
be used for interpolation.
HTH,
John
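For example, a few base-R equivalents (a sketch; these ignore market
calendar conventions):

## add periods / count periods with the Date class
d <- as.Date("2012-04-05")
seq(d, by = "3 months", length.out = 4)   # d plus 0, 3, 6, 9 months
as.numeric(as.Date("2012-06-30") - d)     # number of days between two dates

## interpolation with approx()
known <- data.frame(x = c(1, 30, 90), y = c(0.2, 0.5, 1.1))
approx(known$x, known$y, xout = 45)$y     # linearly interpolated value at 45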
On Thu, Apr 5, 2012 at 3:42 AM, arvanitis.christos
<arvanitisch@piraeusbank.gr> wrote:
> Hi to all,
>
> Is there a way to use the API bloomberg functions BAddPeriods Binterpol
> Bcountperiods in RBloomberg?
> tnks
>
>
> --
> View this message in context:
http://r.789695.n4.nabble.com/Bloomberg-API-functions-BAddPeriods-Binterpol-Bcountperiods-in-RBloomberg-tp4534163p4534163.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 17
Date: Thu, 5 Apr 2012 04:47:36 -0700 (PDT)
From: santoshdvn <santoshdvn@gmail.com>
To: r-help@r-project.org
Subject: [R] need explaination on Rpart
Message-ID: <1333626456668-4534612.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I am learning decision trees using R and used rpart to create them. When I
looked at the summary, I got

CP nsplit rel error xerror xstd

Could anyone help me understand the terms CP, rel error, xerror and xstd,
or point me to a detailed blog where I can learn more?
Thanks,
Santosh
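Briefly: those columns come from rpart's built-in cross-validation. rel error
and xerror are the resubstitution and cross-validated errors, scaled so the
root node has error 1; xstd is the standard error of xerror; CP is the
complexity parameter at each pruning step. A quick way to inspect them on
rpart's own example data:

library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)
printcp(fit)   # the CP / nsplit / rel error / xerror / xstd table
plotcp(fit)    # xerror (with xstd bars) plotted against cp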
--
View this message in context:
http://r.789695.n4.nabble.com/need-explaination-on-Rpart-tp4534612p4534612.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 18
Date: Thu, 5 Apr 2012 05:55:52 -0700 (PDT)
From: arunkumar1111 <akpbond007@gmail.com>
To: r-help@r-project.org
Subject: [R] help in paste command
Message-ID: <1333630552375-4534756.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I have a character variable:

tablename <- "DressMaterials"
var1 <- c("red","blue","green","white")

My output should be like

select * from DressMaterials where colors in ("red","blue","green","white")

I'm not able to get the where part. My code:

paste("select * from ", tablename, " where colors in ",
      paste(var1, collapse=","))

But I'm not able to get the required result.
-----
Thanks in Advance
Arun
--
View this message in context:
http://r.789695.n4.nabble.com/help-in-paste-command-tp4534756p4534756.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 19
Date: Thu, 05 Apr 2012 15:06:13 +0200
From: Bjoern Guse <bguse@hydrology.uni-kiel.de>
To: r-help@r-project.org
Subject: [R] constrained optimization with vectors using the package
alabama
Message-ID: <4F7D98C5.9060101@hydrology.uni-kiel.de>
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Hello everyone,
I want to estimate a function for the relationship between the distance
and the cross-correlation coefficients. This means that I have several
pairs of distance and cross-correlation coefficients.
Hence, I have a function which includes two vectors with the same length
(for distance and the cross-correlation coefficients) and the parameters
x[1] and x[2] which I want to optimize.
I can easily apply auglag or constrOptim.nl from the alabama package for
scalar values, but I did not find a way to include vectors.
Does anyone have an idea?
thank you and best regards
Björn
--
Björn Guse
Dr. rer. nat.
Institut für Natur- und Ressourcenschutz
Abteilung Hydrologie und Wasserwirtschaft
Christian-Albrechts-Universität zu Kiel
Olshausenstr. 75
D-24118 Kiel
(+49) 0431-880-1224
www.hydrology.uni-kiel.de
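One common pattern, sketched below with a hypothetical exponential-decay
model and toy data: keep only the two parameters in par and pass the data
vectors through auglag's "..." argument, so the objective function works on
vectors internally.

library(alabama)

dist <- c(1, 5, 10, 20, 50)          # toy distances, illustration only
cc   <- c(0.9, 0.7, 0.5, 0.3, 0.1)   # toy cross-correlation coefficients

## sum of squared errors for the model cc ~ x[1] * exp(-d / x[2])
sse <- function(x, d, r) sum((r - x[1] * exp(-d / x[2]))^2)

## constraints x[1] > 0 and x[2] > 0, expressed as hin(x) >= 0
fit <- auglag(par = c(1, 10), fn = sse, hin = function(x, ...) x,
              d = dist, r = cc)
fit$par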
------------------------------
Message: 20
Date: Thu, 5 Apr 2012 06:18:31 -0700 (PDT)
From: Pam <fkiraz11@yahoo.com>
To: "r-help@r-project.org" <r-help@r-project.org>
Subject: [R] is parallel computing possible for 'rollapplyr' job?
Message-ID:
<1333631911.29734.YahooMailNeo@web161501.mail.bf1.yahoo.com>
Content-Type: text/plain
Hi,
The code below does exactly what I want in sequential mode, but it is slow and
I want to run it in parallel. I examined some Windows-compatible packages
(parallel, snow, snowfall, ...) but could not solve my specific problem. As
far as I understand, either I have to write a new function like sfRollapplyr,
or I have to change my code so that it uses lapply or sapply instead of
'rollapplyr' and then use sfInit, sfExport, sfLapply, etc. for parallel
computing. I could not manage either, so please help me :)
##
nc  <- 313
rs  <- 500000
ema <- 10
h   <- 4

gomin1sd <- function(x) {
  getOutliers(as.vector(x), rho = c(1, 1))$limit[1]
}

dim(dt_l1_inp)
# [1] 500000    312

dt_l1_min1 <- matrix(nrow = rs, ncol = nc - 1 - (ema*h))
for (i in 1:rs) {
  dt_l1_min1[i, ] <- rollapplyr(dt_l1_inp[i, ], FUN = gomin1sd,
                                width = ema*h + 1)
}
##
[[alternative HTML version deleted]]
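A sketch of one way to parallelise the row loop with the parallel package
(assuming gomin1sd's getOutliers comes from the extremevalues package; not
tested at this data size):

library(parallel)

cl <- makeCluster(detectCores() - 1)
clusterEvalQ(cl, { library(zoo); library(extremevalues) })  # workers need
                                                            # rollapplyr and
                                                            # getOutliers
clusterExport(cl, c("gomin1sd", "ema", "h"))

res <- parApply(cl, dt_l1_inp, 1, function(row)
  rollapplyr(row, FUN = gomin1sd, width = ema*h + 1))
dt_l1_min1 <- t(res)   # apply-style results come back one column per row

stopCluster(cl)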
------------------------------
Message: 21
Date: Thu, 05 Apr 2012 13:19:13 +0000
From: MP.Sylvestre@gmail.com
To: "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl>,
"MP.Sylvestre@gmail.com" <MP.Sylvestre@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>, Michael
Dewey
<info@aghmed.fsnet.co.uk>
Subject: Re: [R] using metafor for meta-analysis of before-after
studies
Message-ID: <20cf307810a0d8865604bcee625d@google.com>
Content-Type: text/plain
Many thanks to both of you for the helpful responses to my post. The
outcomes are all measured with the same units and I can indeed calculate
the sampling variance from the 2 SDs I get from each study.
MP
Le , "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl> a écrit
:> To add to Michael's response:
> There are several things you can do:
> 1) If the dependent variable is the same in each study, then you could
> conduct the meta-analysis with the (raw) mean changes, ie, yi = m1i -
> m2i, where m1i and m2i are the means at time 1 and 2, respectively. The
> sampling variance of yi is vi = sdi^2 / ni, where sdi = sqrt(sd1i^2 +
> sd2i^2 - 2*ri*sd1i*sd2i), sd1i and sd2i are the standard deviations of
> the outcomes at time 1 and 2, respectively, ri is the correlation between
> the outcomes at time 1 and time 2, and ni is the sample size. So, sdi is
> the standard deviation of the change scores.
> When sdi is not reported, you will have to back-calculate sdi based on
> what you have. You say that the p-value for the paired samples t-test is
> reported. Typically, this will be a two-sided p-value, so ti = qt(pval/2,
> df=ni-1, lower.tail=FALSE) will give you the value of the test statistic.
> And since ti = (m1i - m2i) * sqrt(ni) / sdi, you can back-calculate what
> sdi is with sdi = (m1i - m2i) * sqrt(ni) / ti (you just have to make sure
> that the sign of m1i - m2i and the sign of ti are matched up). And now,
> you can even back-calculate what ri was by rearranging the equation for
> sdi.
> 2) Often, the dependent variable is not the same in each study. Then you
> will have to resort to a standardized outcome measure. There are two
> options:
> a) standardization based on the change score standard deviation
> Then yi = (m1i - m2i) / sdi with sampling variance vi = 1/ni + yi^2 /
> (2*ni).
> b) standardization based on the raw score standard deviation
> Then yi = (m1i - m2i) / sd1i with sampling variance vi = 2*(1-ri)/ni +
> yi^2 / (2*ni).
> Note that we standardize based on sd1i (ie, the SD at time 1). So, we do
> not pool sd1i and sd2i. Also, since ri is typically not reported, you
> will have to use the method described above to back-calculate what ri was.
> Regardless of which approach you use, you can then proceed with the
> meta-analysis using the yi and vi values. For example, with the metafor
> package, if those values are in a data frame called dat,
> rma(yi, vi, data=dat)
> will fit a random-effects model.
> Those three outcome measures described above will actually be implemented
> in an upcoming version of the metafor package. For now, you will have to
> do the computations of yi and vi yourself.
> Best,
> Wolfgang
> --
> Wolfgang Viechtbauer, Ph.D., Statistician
> Department of Psychiatry and Psychology
> School for Mental Health and Neuroscience
> Faculty of Health, Medicine, and Life Sciences
> Maastricht University, PO Box 616 (VIJV1)
> 6200 MD Maastricht, The Netherlands
> +31 (43) 388-4170 | http://www.wvbauer.com
> > -----Original Message-----
> > From: r-help-bounces@r-project.org
[mailto:r-help-bounces@r-project.org]
> > On Behalf Of Michael Dewey
> > Sent: Thursday, April 05, 2012 13:04
> > To: MP.Sylvestre@gmail.com; r-help@r-project.org
> > Subject: Re: [R] using metafor for meta-analysis of before-after
studies
> >
> > At 18:39 04/04/2012, MP.Sylvestre@gmail.com wrote:
> > >Greetings,
> > >I wish to conduct a meta-analysis for which the outcome is a
continuous
> > >variable measured on the same individuals before and after an
> > intervention.
> > >Hence, the comparison is not made between two groups, but within
> > >groups, at diffrent times.
> > >
> > >Each study reports the mean outcome and SD before the intervention
and
> > >the mean outcome and SD after the intervention. While p-values for
> > >paired t-test (or similar methods for paired data) are reported in
the
> > >studies, no estimate of the variability of the individual
differences
> are
> > available.
> >
> > If you know the p-value you can generate the t-value If you know the
t-
> > value and the mean difference you can back calculate the standard
errors
> > of the differences.
> >
> > Having said that I am not absolutely sure what the design of the
primary
> > studies you are analysing is so my answer may not apply directly to
your
> > problem.
> >
> >
> > >Can metafor deal with this sort of meta-analysis? I know that I
can
> > >technically run metafor on these data, assuming that the groups
are
> > >independent but my inference is likely to be wrong. On the other
hand,
> > >I have no idea of the correlation within individuals.
> > >
> > >Thanks in advance,
> > >MP
> > >
> > > [[alternative HTML version deleted]]
> >
> > Michael Dewey
> > info@aghmed.fsnet.co.uk
> > http://www.aghmed.fsnet.co.uk/home.html
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-
> > guide.html
> > and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]
------------------------------
Message: 22
Date: Thu, 05 Apr 2012 13:22:31 +0000
From: MP.Sylvestre@gmail.com
To: "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl>, Marie-Pierre
Sylvestre <mp.sylvestre@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>, Thomas
Lumley
<tlumley@uw.edu>
Subject: Re: [R] meta-analysis, outcome = OR associated with a
continuous independent variable
Message-ID: <20cf307d038ea3d97f04bcee6eca@google.com>
Content-Type: text/plain
For some reason I was under the false impression that these packages were
made for meta-analyses of RCT-like studies in which two groups are
compared. I am glad to see that I was wrong and that I can use one of these
packages.
All studies reported using the same units for the exposure so the OR are
comparable.
Thanks for your responses,
MP
Le , "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl> a écrit
:> I do not see any major difficulties with this case either. Suppose you
> have OR = 1.5 (with 95% CI: 1.19 to 1.90) indicating that the odds of a
> particular outcome (eg, disease) is 1.5 times greater when the
> (continuous) exposure variable increases by one unit. Then you can
> back-calculate the SE of log(OR) = .41 with
> sei = (ln(ci.ub) - ln(ci.lb)) / (2*1.96),
> which in this case is approximately 0.12. The sampling variance of
> log(OR) is then vi = sei^2.
> Now you have everything for the meta-analysis, using any of the packages
> mentioned.
> What Thomas already mentioned is that the 'one unit increase' must
mean
> the same thing in each study. Therefore, if the exposure variable is
> measured in months in one study and in years in another study, then the
> odds ratios are obviously not directly comparable. If the units are just
> multiples of each other, then you can easily calculate what the OR would
> be when putting the exposure variable on the same scale. For example, an
> OR of 1.5 for a one month increase in exposure is the same as an OR of
> 1.5^12 = 129.75 for a one year increase in exposure.
> Best,
> Wolfgang
> --
> Wolfgang Viechtbauer, Ph.D., Statistician
> Department of Psychiatry and Psychology
> School for Mental Health and Neuroscience
> Faculty of Health, Medicine, and Life Sciences
> Maastricht University, PO Box 616 (VIJV1)
> 6200 MD Maastricht, The Netherlands
> +31 (43) 388-4170 | http://www.wvbauer.com
> > -----Original Message-----
> > From: r-help-bounces@r-project.org
[mailto:r-help-bounces@r-project.org]
> > On Behalf Of Thomas Lumley
> > Sent: Wednesday, April 04, 2012 23:42
> > To: Marie-Pierre Sylvestre
> > Cc: r-help@r-project.org
> > Subject: Re: [R] meta-analysis, outcome = OR associated with a
> continuous
> > independent variable
> >
> > On Thu, Apr 5, 2012 at 8:24 AM, Marie-Pierre Sylvestre
> > mp.sylvestre@gmail.com> wrote:
> > > Hello everyone,
> > > I want to do a meta-analysis of case-control studies on which an
OR
> > > was computed based on a continuous exposure. I have found several
> > > several packages (metafor, rmeta, meta) but unless I
misunderstood
> > > their main functions, it seems to me that they focus on two-group
> > > comparisons (binary independent variable), and do not have the
option
> > > of using a continuous independent variable.
> >
> >
> > There's no problem in using continuous exposures in
meta.summaries() in
> > the rmeta package. For each study, compute your log odds ratio and its
> > standard error, and feed them in.
> >
> > You just need to make sure that the odds ratio is in the same units in
> > each study, of course.
> >
> > -thomas
> >
> > --
> > Thomas Lumley
> > Professor of Biostatistics
> > University of Auckland
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-
> > guide.html
> > and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]
------------------------------
Message: 23
Date: Thu, 5 Apr 2012 13:35:17 +0000
From: Ramiro Barrantes <ramiro@precisionbioassay.com>
To: "r-help@r-project.org" <r-help@r-project.org>
Subject: [R] reclaiming lost memory in R
Message-ID:
<C7338A7EFF31BB4D831BB06C008877890A4648@mbx023-w1-ca-7.exch023.domain.local>
Content-Type: text/plain
Dear list,
I am trying to reclaim what I think is lost memory in R, I have been using gc(),
rm() and also using Rprof to figure out where all the memory is going but I
might be missing something.
I have the following situation
basic loop which calls memoryHogFunction:
for (i in 1:N) {
  dataset <- generateDataset(i)
  fit <- try(memoryHogFunction(dataset, otherParameters))
}
and within
memoryHogFunction <- function(dataset, params){
  fit <- try(nlme(someInitialValues))
  ...
  fit <- try(update(fit, otherInitialValues))
  ...
  fit <- try(update(fit, otherInitialValues))
  ...
  ret <- fit (and other things)
  return a result "ret"
}
The problem is that, memoryHogFunction uses a lot of memory, and at the end
returns a result (which is not big) but the memory used by the computation seems
to be still occupied. The original loop continues, but the memory used by the
program grows and grows after each call to memoryHogFunction.
I have been trying to do gc() after each run in the loop, and have even done:
in memoryHogFunction()
...
ret <- fit ( and other things)
rm(list=ls()[-match("ret",ls())])
return a result "ret"
}
A typical results from gc() after each loop iteration says:
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 326953 17.5 597831 32.0 597831 32.0
Vcells 1645892 12.6 3048985 23.3 3048985 23.3
Which doesn't reflect the 340 MB (and 400+ MB of virtual memory) that are
being used right now.
Even when I do:
print(sapply(ls(all.names=TRUE), function(x) object.size(get(x))))
the largest object is 8179808, which is what it should be.
The only thing that looked suspicious was the following from Rprof (with
the memory="stats" option); might tot.duplications be a problem?
index: "with":"with.default"
vsize.small max.vsize.small vsize.large max.vsize.large
30841 63378 20642 660787
nodes max.nodes duplications tot.duplications
3446132 8115016 12395 61431787
samples
4956
Any suggestions? Is it something about the use of loops in R? Is it maybe
the try() calls?
Thanks in advance for any help,
Ramiro
[[alternative HTML version deleted]]
------------------------------
Message: 24
Date: Thu, 05 Apr 2012 15:46:05 +0200
From: mlell08 <mlell08@googlemail.com>
To: arunkumar1111 <akpbond007@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] help in paste command
Message-ID: <4F7DA21D.6020004@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hello Arun,
> paste("select * from ", tablename , " where colors in
(",paste(var1,collapse=","),")")
[1] "select * from DressMaterials where colors in (
red,blue,green,white )"
Regards!
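To also get the quotes around the colour names, as in the desired output,
one possible sketch using sprintf():

tablename <- "DressMaterials"
var1 <- c("red", "blue", "green", "white")

quoted <- paste0('"', var1, '"')   # wrap each value in double quotes
query  <- sprintf("select * from %s where colors in (%s)",
                  tablename, paste(quoted, collapse = ","))
cat(query, "\n")
# select * from DressMaterials where colors in ("red","blue","green","white")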
------------------------------
Message: 25
Date: Thu, 05 Apr 2012 08:53:01 -0500
From: Alexander Shenkin <ashenkin@ufl.edu>
To: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID: <4F7DA3BD.3030205@ufl.edu>
Content-Type: text/plain; charset=ISO-8859-1
Here is Dr. Leisch's advice for dealing with open handles (and it works):
> On 4/5/2012 4:22 AM, Friedrich Leisch wrote:
> ...
> You need to close the pdf device, not an open connection:
>
> R> Sweave("test.Rnw")
> Writing to file test.tex
> Processing code chunks with options ...
> 1 : keep.source term verbatim pdf
>
> Error: chunk 1
> Error in plot.xy(xy, type, ...) : object 'foo' not found
> R> ls()
> [1] "col.yellowbg" "df"
> R> dev.list()
> pdf
> 2
> R> dev.off()
> null device
> 1
> Best,
> Fritz
On 4/4/2012 2:52 PM, Alexander Shenkin wrote:
> Thanks for the reply, Henrik. Process Explorer still shows the file
> handle as being open, but R only shows the following:
>
>> showConnections(all=TRUE)
> description class mode text isopen can read can write
> 0 "stdin" "terminal" "r"
"text" "opened" "yes" "no"
> 1 "stdout" "terminal" "w"
"text" "opened" "no" "yes"
> 2 "stderr" "terminal" "w"
"text" "opened" "no" "yes"
>>
>
> On 4/4/2012 2:45 PM, Henrik Bengtsson wrote:
>> See ?closeAllConnections
>>
>> Suggestion to the maintainer of Sweave: "atomify" the figure
>> generation, e.g. use { pdf(); on.exit(dev.off()); {...}; } or similar,
>> instead of { pdf(); {...}; dev.off(); } possibly by leaving a copy of
>> the faulty figure file for troubleshooting.
>>
>> /Henrik
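A sketch of the "atomified" pattern Henrik describes (a hypothetical wrapper,
not code from Sweave itself; untested):

plot_to_pdf <- function(file, expr) {
  pdf(file)
  on.exit(dev.off())  # runs even if expr fails, so no file handle is left open
  eval.parent(substitute(expr))
}
# plot_to_pdf("sweave-001.pdf", plot(df$a, df$y))  # device closed either way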
>>
>> On Wed, Apr 4, 2012 at 12:25 PM, Alexander Shenkin
<ashenkin@ufl.edu> wrote:
>>> Hello Folks,
>>>
>>> When I run the document below through sweave, rgui.exe/rsession.exe
>>> leaves a file handle open to the sweave-001.pdf graphic (as
verified by
>>> process explorer). Pdflatex.exe then crashes (with a Permission
Denied
>>> error) because the graphic file is locked.
>>>
>>> This only seems to happen when there is an error in the sweave
document.
>>> When there are no errors, no file handles are left open. However,
once
>>> a file handle is stuck open, I can find no other way of closing it
save
>>> for quitting out of R.
>>>
>>> Any help would be greatly appreciated! It would be nice to be able
to
>>> write flawless sweave every time, but flawed as I am, I am having
to
>>> restart R continuously.
>>>
>>> Thanks,
>>> Allie
>>>
>>>
>>> OS: Windows 7 Pro x64 SP1
>>>
>>>
>>>> sessionInfo()
>>> R version 2.14.2 (2012-02-29)
>>> Platform: i386-pc-mingw32/i386 (32-bit)
>>>
>>>
>>> test.Rnw:
>>>
>>> \documentclass{article}
>>> \title {file handle test}
>>> \author{test author}
>>> \usepackage{Sweave}
>>> \begin {document}
>>> \maketitle
>>>
>>> \SweaveOpts{prefix.string=sweave}
>>>
>>> \begin{figure}
>>> \begin{center}
>>>
>>> <<fig=TRUE, echo=FALSE>>=
>>> df = data.frame(a=rnorm(100), b=rnorm(100), group = c("g1",
>>> "g2", "g3", "g4"))
>>> plot(df$a, df$y, foo)
>>> @
>>>
>>> \caption{test figure one}
>>> \label{fig:one}
>>> \end{center}
>>> \end{figure}
>>> \end{document}
>>>
>>>
>>>
>>> Sweave command run:
>>>
>>> Sweave("test.Rnw",
syntax="SweaveSyntaxNoweb")
>>>
>>>
>>>
>>> Sweave.sty:
>>>
>>> \NeedsTeXFormat{LaTeX2e}
>>> \ProvidesPackage{Sweave}{}
>>>
>>> \RequirePackage{ifthen}
>>> \newboolean{Sweave@gin}
>>> \setboolean{Sweave@gin}{true}
>>> \newboolean{Sweave@ae}
>>> \setboolean{Sweave@ae}{true}
>>>
>>> \DeclareOption{nogin}{\setboolean{Sweave@gin}{false}}
>>> \DeclareOption{noae}{\setboolean{Sweave@ae}{false}}
>>> \ProcessOptions
>>>
>>> \RequirePackage{graphicx,fancyvrb}
>>> \IfFileExists{upquote.sty}{\RequirePackage{upquote}}{}
>>>
>>>
\ifthenelse{\boolean{Sweave@gin}}{\setkeys{Gin}{width=0.8\textwidth}}{}%
>>> \ifthenelse{\boolean{Sweave@ae}}{%
>>> \RequirePackage[T1]{fontenc}
>>> \RequirePackage{ae}
>>> }{}%
>>>
>>> \DefineVerbatimEnvironment{Sinput}{Verbatim}{fontshape=sl}
>>> \DefineVerbatimEnvironment{Soutput}{Verbatim}{}
>>> \DefineVerbatimEnvironment{Scode}{Verbatim}{fontshape=sl}
>>>
>>> \newenvironment{Schunk}{}{}
>>>
>>> \newcommand{\Sconcordance}[1]{%
>>> \ifx\pdfoutput\undefined%
>>> \csname newcount\endcsname\pdfoutput\fi%
>>> \ifcase\pdfoutput\special{#1}%
>>> \else\immediate\pdfobj{#1}\fi}
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 26
Date: Thu, 5 Apr 2012 15:53:50 +0200
From: "Viechtbauer Wolfgang (STAT)"
<wolfgang.viechtbauer@maastrichtuniversity.nl>
To: "MP.Sylvestre@gmail.com" <MP.Sylvestre@gmail.com>,
"r-help@r-project.org" <r-help@r-project.org>, Thomas
Lumley
<tlumley@uw.edu>
Subject: Re: [R] meta-analysis, outcome = OR associated with a
continuous independent variable
Message-ID:
<077E31A57DA26E46AB0D493C9966AC730CAEABBBF7@UM-MAIL4112.unimaas.nl>
Content-Type: text/plain; charset="us-ascii"
You can get an OR from a 2x2 table (which is equivalent to doing logistic
regression with a single dummy variable that indicates the group) or from some
continuous exposure (where the logistic regression model will then include that
continuous variable). The various packages are set up to accept counts for 2x2
tables and then will compute the OR (or rather, log(OR)) for you. If you have
ORs from the second case, you simply calculate the log(ORs) yourself and supply
them to the appropriate function from the packages.
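For example, with the metafor package (the numbers are purely illustrative;
yi is the log odds ratio per unit of exposure and sei its standard error,
both derived from the published reports):

library(metafor)
dat <- data.frame(yi  = log(c(1.42, 1.80, 1.15)),  # illustrative ORs
                  sei = c(0.21, 0.33, 0.18))       # illustrative SEs of log(OR)
res <- rma(yi = yi, sei = sei, data = dat)         # random-effects meta-analysis
summary(res)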
Best,
Wolfgang
> -----Original Message-----
> From: MP.Sylvestre@gmail.com [mailto:MP.Sylvestre@gmail.com]
> Sent: Thursday, April 05, 2012 15:23
> To: Viechtbauer Wolfgang (STAT); Marie-Pierre Sylvestre
> Cc: r-help@r-project.org; Thomas Lumley
> Subject: Re: RE: [R] meta-analysis, outcome = OR associated with a
> continuous independent variable
>
> For some reason I was under the false impression that these packages were
> made for meta-analyses of RCT-like studies in which two groups are
> compared. I am glad to see that I was wrong and that I can use one of
> these packages.
>
> All studies reported using the same units for the exposure so the OR are
> comparable.
>
> Thanks for your responses,
> MP
------------------------------
Message: 27
Date: Thu, 5 Apr 2012 09:53:55 -0400
From: David Winsemius <dwinsemius@comcast.net>
To: Michael Bach <phaebz@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Apply function to every 'nth' element of a vector
Message-ID: <D64E31E1-42AF-4506-A093-2330130D2469@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Apr 5, 2012, at 7:01 AM, Michael Bach wrote:
> Dear R users,
>
> how do I e.g. square each second element of a vector with an even
> number of elements? Or more generally to apply a function to every
> 'nth' element of a vector. I looked into the apply functions, but
> found no hint.
>
> For example:
>
> v <- c(1, 2, 3, 4)
> mysquare <- function (x) { return (x*x) }
> w <- applyfun(v, mysquare, 2)
>
> then w should be c(1, 4, 3, 16)
An alternate approach to the ifelse() solutions you have been offered
is to use identical indexing on both sides of an assignment function.
> v <- c(1, 2, 3, 4)
> w <- v
> w[seq(2, length(w), by = 2)] <- w[seq(2, length(w), by = 2)]^2
> w
[1] 1 4 3 16
If you still wanted to create a function that would square each
element "in place" you could do this with an indexing strategy:
v <- c(1, 2, 3, 4)
w <- v
mysqr <- function(x) { eval.parent(substitute(x <- x^2)) }
mysqr(w[seq(2, length(w), by = 2)])
w
#[1] 1 4 3 16
(Credit: http://www.statisticsblog.com/2011/10/waiting-in-line-waiting-on-r/
and search on 'inc')
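To package this as the applyfun() sketched in the question (an untested
generalization, not a function from any package):

applyfun <- function(v, f, n) {
  idx <- seq(n, length(v), by = n)  # every nth position
  v[idx] <- f(v[idx])
  v
}
applyfun(c(1, 2, 3, 4), function(x) x * x, 2)
# [1] 1 4 3 16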
>
> Thanks for your time,
> Michael Bach
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 28
Date: Thu, 5 Apr 2012 10:00:36 -0400
From: David Winsemius <dwinsemius@comcast.net>
To: arunkumar1111 <akpbond007@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] help in paste command
Message-ID: <9B6CB010-8A31-4E62-9278-AE05DAA3D141@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Apr 5, 2012, at 8:55 AM, arunkumar1111 wrote:
> i have a character variable
> tablename="DressMaterials"
> var1=c("red","blue","green","white")
>
> My output should be like
>
> select * from DressMaterials where colors in
> ("red","blue","green","white")
>
> i'm not able to get the where part.
?match
>
> my code
>
> paste("select * from ", tablename , " where colors in
> ", paste(var1, collapse=","))
>
> But i'm not able to get required result
>
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 29
Date: Thu, 5 Apr 2012 10:00:26 -0400
From: Gabor Grothendieck <ggrothendieck@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] is parallel computing possible for 'rollapplyr' job?
Message-ID:
<CAP01uRnvgp7N76Czs=Q-6YBXKw+8cT0umCXGkqQbh1Nscg9YHQ@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
>
> Hi,
>
> The code below does exactly what I want in sequential mode. But, it is slow
and I want to run it in parallel mode. I examined some Windows-version packages
(parallel, snow, snowfall, ...) but could not solve my specific problem. As far as
I understood, either I have to write a new function like sfRollapplyr, or I have
to change my code in a way that it utilizes lapply or sapply instead of
'rollapplyr' first, and then use sfInit, sfExport, and sfLapply for
parallel computing. I could not manage either, so please help me :)
>
> ##
> nc<-313
> rs<-500000
> ema<-10
> h<-4
> gomin1sd<-function (x,rho)
> {
> getOutliers(as.vector(x),rho=c(1,1))$limit[1]
> }
> dim(dt_l1_inp)
> [1] 500000 312
> dt_l1_min1<-matrix(nrow=rs, ncol=nc-1-(ema*h))
> for (i in 1:rs)
> {
> dt_l1_min1[i,]<-rollapplyr(dt_l1_inp[i,], FUN=gomin1sd, width=ema*h+1)
> }
Since rollapply, by default, applies the rolling calculation to each
column we can remove the loop like this (untested):
m <- t(dt_l1_inp)
w <- ema*h+1
rollapplyr(m, w, gomin1sd)
and that might also give you a small speedup.
To take advantage of multiple processors we can run
rollapplyr(m[, seq(k)], w, gomin1sd) on the first processor,
rollapplyr(m[, k+seq(k)], w, gomin1sd) on the second processor
and so on
for suitably chosen k.
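One way to realize that split with the parallel package (a sketch; the worker
count and chunking are assumptions, and the full matrix is copied to every
worker):

library(parallel)
cl <- makeCluster(4)                               # e.g. four workers
clusterEvalQ(cl, { library(zoo); library(extremevalues) })
clusterExport(cl, c("m", "w", "gomin1sd"))
chunks <- split(seq_len(ncol(m)), cut(seq_len(ncol(m)), 4))
res <- parLapply(cl, chunks,
                 function(ix) rollapplyr(m[, ix], w, gomin1sd))
stopCluster(cl)
out <- do.call(cbind, res)                         # columns back in order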
--
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
------------------------------
Message: 30
Date: Thu, 5 Apr 2012 07:00:58 -0700 (PDT)
From: arunkumar1111 <akpbond007@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] help in paste command
Message-ID: <1333634458776-4534925.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks, Lell.
It worked well.
-----
Thanks in Advance
Arun
--
View this message in context:
http://r.789695.n4.nabble.com/help-in-paste-command-tp4534756p4534925.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 31
Date: Thu, 5 Apr 2012 06:57:51 -0700 (PDT)
From: uday <uday_143_4u@hotmail.com>
To: r-help@r-project.org
Subject: [R] data normalize issue
Message-ID: <1333634271100-4534914.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
http://r.789695.n4.nabble.com/file/n4534914/Rplot01.png
I have some dataset
ak[1:3,]
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[,8] [,9]
[1,] 0.3211745 0.4132568 0.5649930 0.6920562 0.7760113 0.8118568 0.8609301
0.9088819 0.9326736
[2,] 0.3159234 0.4071270 0.5579212 0.6844584 0.7684690 0.8243702 0.8677043
0.8931288 0.9261926
[3,] 0.3075260 0.3993699 0.5493242 0.6765600 0.7614591 0.8127050 0.8537816
0.8884786 0.9343690
[,10] [,11] [,12]
[1,] 0.9605178 1 1.003940
[2,] 0.9647617 1 1.012930
[3,] 0.9618874 1 1.007103
dim(ak[1:3,])
[1] 3 12
pre[1:3,]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[,9] [,10]
[1,] 10.34615 52.02116 146.1736 243.2864 347.4150 431.6711 521.4271 629.0045
729.9594 827.8628
[2,] 10.34615 52.02539 146.3670 244.3871 350.1785 454.6706 546.5499 638.3344
741.9849 842.5700
[3,] 10.34615 52.02754 146.4656 244.9480 351.5865 457.1768 550.1341 643.0880
748.1114 850.0670
[,11] [,12]
[1,] 921.5508 956.4445
[2,] 953.9648 995.8201
[3,] 951.6384 987.9105
dim(pre)
[1] 3 12
These are only sample values, but when I plot them the length differs for
every single line (please see the attached figure), so I would like to
normalize the values in the example above.
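If the goal is to make the curves comparable, one common option is to rescale
each row to [0, 1] (a guess at what is wanted here; untested):

range01  <- function(r) (r - min(r)) / (max(r) - min(r))
pre_norm <- t(apply(pre, 1, range01))    # every row now runs from 0 to 1
matplot(t(pre_norm), t(ak), type = "l")  # profiles share a common x range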
--
View this message in context:
http://r.789695.n4.nabble.com/data-normalize-issue-tp4534914p4534914.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 32
Date: Thu, 05 Apr 2012 10:43:47 -0400
From: Sam Steingold <sds@gnu.org>
To: r-help@r-project.org, Duncan Murdoch <murdoch.duncan@gmail.com>
Subject: Re: [R] recover lost global function
Message-ID: <87k41ufdcs.fsf@gnu.org>
Content-Type: text/plain
> * Duncan Murdoch <zheqbpu.qhapna@tznvy.pbz> [2012-04-04 21:46:57
-0400]:
>
> On 12-04-04 5:15 PM, Sam Steingold wrote:
>>> * Duncan Murdoch<zheqbpu.qhapna@tznvy.pbz> [2012-04-04
17:00:32 -0400]:
>>>
>>> There's no warning when you mask a function with a non-function
at top
>>> level, and little need for one, because R does the right search
based on
>>> the fact that you're making a function call:
>>>
>>>> c
>>> [1] 1
>>>> c(1,2)
>>> [1] 1 2
>>
>> why then am I getting these warnings from cmpfile?
>
> You would have to tell me what you did before I could attempt to answer
> that.
all <- 1
cmpfile("foo.R")
where foo.R contains functions which call all()
>>
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>> Note: no visible global function definition for 'all'
>>
>> I did overwrite all to a data frame, but it only appears in a function
>> position all(...) in the file being compiled.
>>
>
--
Sam Steingold (http://sds.podval.org/) on Ubuntu 11.10 (oneiric) X 11.0.11004000
http://www.childpsy.net/ http://pmw.org.il http://camera.org
http://palestinefacts.org http://truepeace.org http://ffii.org
A language that does not change the way you think is not worth learning.
------------------------------
Message: 33
Date: Thu, 5 Apr 2012 07:48:16 -0700
From: "MacQueen, Don" <macqueen1@llnl.gov>
To: "sds@gnu.org" <sds@gnu.org>,
"r-help@r-project.org"
<r-help@r-project.org>
Subject: Re: [R] recover lost global function
Message-ID: <CBA2FDA6.91D0F%macqueen1@llnl.gov>
Content-Type: text/plain; charset="us-ascii"
To expand on Duncan's answer, you haven't replaced it. The following
should make that clear:
## starting in a fresh session
> c
function (..., recursive = FALSE)  .Primitive("c")
> find('c')
[1] "package:base"
> c <- 1
> find('c')
[1] ".GlobalEnv"   "package:base"
> c
[1] 1
> rm(c)
> find('c')
[1] "package:base"
> c
function (..., recursive = FALSE)  .Primitive("c")
The one provided by R, and the one you created, are not in the same
namespace.
To "recover" R's version, get rid of the one you created.Also,
take a look
at the search() and conflicts() functions.
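For instance (a short illustration of those two functions, assuming the
masking value from the earlier post):

c <- 1
conflicts(detail = TRUE)$.GlobalEnv  # lists "c" as masking package:base
base::c(1, 2)                        # the base version stays reachable
rm(c)                                # removes the mask and "recovers" c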
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062
On 4/4/12 1:52 PM, "Sam Steingold" <sds@gnu.org> wrote:
>Since R has the same namespace for functions and variables,
>> c <- 1
>kills the global function, which can be restored by
>> c <- get("c",mode="function")
>
>Is there a way to prevent R from overriding globals
>or at least warning when I do that
>or at least warning when I replace a functional value with non-functional?
>
>thanks.
>
>--
>Sam Steingold (http://sds.podval.org/) on Ubuntu 11.10 (oneiric) X
>11.0.11004000
>http://www.childpsy.net/ http://iris.org.il http://camera.org
>http://ffii.org
>http://dhimmi.com http://mideasttruth.com http://pmw.org.il
>Garbage In, Gospel Out
>
>______________________________________________
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 34
Date: Thu, 5 Apr 2012 16:49:43 +0200
From: Michael Bach <phaebz@gmail.com>
To: David Winsemius <dwinsemius@comcast.net>, istazahn@gmail.com
Cc: r-help@r-project.org
Subject: Re: [R] Apply function to every 'nth' element of a vector
Message-ID:
<CAFbCY6jfm6z+HpxSBghS9axtSBdAa6hNy6Bn0OZJKkPwjanpZA@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Thank you very much for your comments Ista and David! I will
experiment and see which one serves my needs best.
------------------------------
Message: 35
Date: Thu, 5 Apr 2012 07:46:52 -0700 (PDT)
From: ali_protocol <mohammadianalimohammadian@gmail.com>
To: r-help@r-project.org
Subject: [R] Sum of sd between matrix cols vs spearman correlation
between them
Message-ID: <1333637212101-4535057.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi all,
I have a matrix (n*2), and I want to compare 2 operators (2 normalizations
for array results) on this matrix.
The 2 columns should ideally become the same after the operations
(normalization). So to compare the operations,
I do this for each normalization:
s <- sum(apply(normalized.matrix, 2, sd))
c <- cor(normalized[,1], normalized[,2], method='pearson')
I expect that if normalization 1 is superior, s should be less and c greater
than for normalization 2, but both s and c change in one direction. Is this
possible, or am I doing something wrong?
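For what it is worth, both can indeed move in one direction; a small simulated
check (entirely made-up data):

set.seed(1)
x  <- rnorm(100)
m1 <- cbind(x   + rnorm(100, sd = 0.50), x   + rnorm(100, sd = 0.50))
m2 <- cbind(2*x + rnorm(100, sd = 0.25), 2*x + rnorm(100, sd = 0.25))
sum(apply(m1, 2, sd)); cor(m1[,1], m1[,2])  # smaller s, smaller c
sum(apply(m2, 2, sd)); cor(m2[,1], m2[,2])  # larger s and larger c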
Thank you in advance.
--
View this message in context:
http://r.789695.n4.nabble.com/Sum-of-sd-between-matrix-cols-vs-spearman-correlation-between-them-tp4535057p4535057.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 36
Date: Thu, 5 Apr 2012 07:45:05 -0700 (PDT)
From: "z2.0" <zack.abrahamson@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] Subscript Error
Message-ID: <1333637105152-4535054.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks to you both. Calling recover (an option hitherto unknown to me) helped
me identify the problem.
For the record, the error occurred in the geom_path() line, not the list
concatenation, as I had previously thought. It was a logic problem: when
typeof returned NULL the function jumped, but i kept incrementing, forcing
geom_path to index a list element that didn't exist.
Thanks again.
--
View this message in context:
http://r.789695.n4.nabble.com/Subscript-Error-tp4533219p4535054.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 37
Date: Thu, 5 Apr 2012 07:52:48 -0700 (PDT)
From: arunkumar1111 <akpbond007@gmail.com>
To: r-help@r-project.org
Subject: [R] help in match.fun
Message-ID: <1333637568092-4535071.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
hi
I have a dataframe and a parameter
the parameter can have any one of the values min, max, mean, sum, or hist.
I'm using the function match.fun:
fun <- match.fun(input)
fun(dataset)
but if input is hist, the plot pops up. Is there any method to avoid it, or
should I use an if condition only for the histogram?
-----
Thanks in Advance
Arun
--
View this message in context:
http://r.789695.n4.nabble.com/help-in-match-fun-tp4535071p4535071.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 38
Date: Thu, 5 Apr 2012 09:58:24 -0500
From: <luke-tierney@uiowa.edu>
To: Sam Steingold <sds@gnu.org>
Cc: r-help@r-project.org
Subject: Re: [R] recover lost global function
Message-ID:
<alpine.LFD.2.02.1204050956070.2018@nokomis.stat.uiowa.edu>
Content-Type: text/plain; charset="US-ASCII"; format=flowed
The compiler doesn't currently look beyond the first definition found
(the generated code does the right thing, but the compiler won't
optimize calls to functions masked by non-functions). I'll look into
whether the checking can be made to take this into account; it may be
more trouble than it is worth, though.
luke
On Thu, 5 Apr 2012, Sam Steingold wrote:
>> * Duncan Murdoch <zheqbpu.qhapna@tznvy.pbz> [2012-04-04 21:46:57
-0400]:
>>
>> On 12-04-04 5:15 PM, Sam Steingold wrote:
>>>> * Duncan Murdoch<zheqbpu.qhapna@tznvy.pbz> [2012-04-04
17:00:32 -0400]:
>>>>
>>>> There's no warning when you mask a function with a
non-function at top
>>>> level, and little need for one, because R does the right search
based on
>>>> the fact that you're making a function call:
>>>>
>>>>> c
>>>> [1] 1
>>>>> c(1,2)
>>>> [1] 1 2
>>>
>>> why then am I getting these warnings from cmpfile?
>>
>> You would have to tell me what you did before I could attempt to answer
>> that.
>
> all <- 1
> cmpfile("foo.R")
>
> where foo.R contains functions which call all()
>
>
>
>>>
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>> Note: no visible global function definition for 'all'
>>>
>>> I did overwrite all to a data frame, but it only appears in a
function
>>> position all(...) in the file being compiled.
>>>
>>
>
>
--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa Phone: 319-335-3386
Department of Statistics and Fax: 319-335-3017
Actuarial Science
241 Schaeffer Hall email: luke-tierney@uiowa.edu
Iowa City, IA 52242 WWW: http://www.stat.uiowa.edu
------------------------------
Message: 39
Date: Thu, 5 Apr 2012 11:01:51 -0400
From: Sarah Goslee <sarah.goslee@gmail.com>
To: arunkumar1111 <akpbond007@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] help in match.fun
Message-ID:
<CAM_vjuk6pr6HWrfvu4ADkSpVNtE_TGawaKcAHsvX8gk8R_3o3Q@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Adding plot=FALSE to the hist() call will prevent it from being plotted.
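A sketch of the dispatch (input and dataset are the names from the question;
untested):

fun <- match.fun(input)
res <- if (identical(input, "hist")) fun(dataset, plot = FALSE) else fun(dataset)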
On Thu, Apr 5, 2012 at 10:52 AM, arunkumar1111 <akpbond007@gmail.com> wrote:
> hi
>
> I have a dataframe and a parameter
>
> the parameter can have any one of the values min, max, mean, sum, or hist.
>
> I'm using the function match.fun:
>
> fun <- match.fun(input)
> fun(dataset)
>
> but if input is hist, the plot pops up. Is there any method to avoid it, or
> should I use an if condition only for the histogram?
>
> -----
> Thanks in Advance
> Arun
> --
--
Sarah Goslee
http://www.functionaldiversity.org
------------------------------
Message: 40
Date: Thu, 05 Apr 2012 11:19:39 -0400
From: Sam Steingold <sds@gnu.org>
To: <luke-tierney@uiowa.edu>
Cc: r-help@r-project.org
Subject: Re: [R] recover lost global function
Message-ID: <877gxufbp0.fsf@gnu.org>
Content-Type: text/plain
> * <yhxr-gvrearl@hvbjn.rqh> [2012-04-05 09:58:24 -0500]:
>
> I'll look into whether the checking can be made to take this into
> account; it may be more trouble than it is worth though.
Just to clarify: it would be nice if R noticed "stupid mistakes" like
overriding functions in packages from the top-level and either prevented
that or, at least, issued warnings.
--
Sam Steingold (http://sds.podval.org/) on Ubuntu 11.10 (oneiric) X 11.0.11004000
http://www.childpsy.net/ http://www.memritv.org
http://americancensorship.org http://memri.org
Just because you're paranoid doesn't mean they AREN'T after you.
------------------------------
Message: 41
Date: Thu, 5 Apr 2012 10:36:00 -0500
From: Jonathan Greenberg <jgrn@illinois.edu>
To: r-help <r-help@r-project.org>
Subject: [R] Best way to search r- functions and mailing list?
Message-ID:
<CABG0rfuyv3OXjvMRAATtnD7tFPELPEoom_t73XaFpFVoXat7Cw@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
R-helpers:
It looks like http://finzi.psych.upenn.edu/search.html has stopped
spidering the mailing lists -- this used to be my go-to site for
searching for R solutions. Are there any good replacements for this?
I want to be able to search both functions and mailing lists at the same
time.
--j
--
Jonathan A. Greenberg, PhD
Assistant Professor
Department of Geography
University of Illinois at Urbana-Champaign
607 South Mathews Avenue, MC 150
Urbana, IL 61801
Phone: 415-763-5476
AIM: jgrn307, MSN: jgrn307@hotmail.com, Gchat: jgrn307, Skype: jgrn3007
http://www.geog.illinois.edu/people/JonathanGreenberg.html
------------------------------
Message: 42
Date: Thu, 5 Apr 2012 11:40:04 -0400
From: "Liaw, Andy" <andy_liaw@merck.com>
To: "'Jenn Barrett'" <jsbarret@sfu.ca>,
"r-help@r-project.org"
<r-help@r-project.org>
Subject: Re: [R] Imputing missing values using "LSmeans" (i.e.,
population marginal means) - advice in R?
Message-ID:
<D5FA03935F7418419332B61CA255F65F677103DEE6@USCTMXP51012.merck.com>
Content-Type: text/plain; charset="us-ascii"
Don't know how you searched, but perhaps this might help:
https://stat.ethz.ch/pipermail/r-help/2007-March/128064.html
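One base-R sketch of the idea (column names from the post; whether an additive
SITE + YEAR model is appropriate is a modelling decision, and this is untested):

fit  <- lm(COUNT ~ factor(SITE) + factor(YEAR), data = newdat)
grid <- expand.grid(SITE = unique(newdat$SITE), YEAR = unique(newdat$YEAR))
miss <- grid[!paste(grid$SITE, grid$YEAR) %in%
              paste(newdat$SITE, newdat$YEAR), ]
miss$COUNT <- predict(fit, newdata = miss)            # imputed cell means
totals <- aggregate(COUNT ~ YEAR,
                    data = rbind(newdat[, c("SITE", "YEAR", "COUNT")], miss),
                    FUN = sum)                        # population size per year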
> -----Original Message-----
> From: r-help-bounces@r-project.org
> [mailto:r-help-bounces@r-project.org] On Behalf Of Jenn Barrett
> Sent: Tuesday, April 03, 2012 1:23 AM
> To: r-help@r-project.org
> Subject: [R] Imputing missing values using "LSmeans" (i.e.,
> population marginal means) - advice in R?
>
> Hi folks,
>
> I have a dataset that consists of counts over a ~30 year
> period at multiple (>200) sites. Only one count is conducted
> at each site in each year; however, not all sites are
> surveyed in all years. I need to impute the missing values
> because I need an estimate of the total population size
> (i.e., sum of counts across all sites) in each year as input
> to another model.
>
> > head(newdat,40)
> SITE YEAR COUNT
> 1 1 1975 12620
> 2 1 1976 13499
> 3 1 1977 45575
> 4 1 1978 21919
> 5 1 1979 33423
> ...
> 37 2 1975 40000
> 38 2 1978 40322
> 39 2 1979 70000
> 40 2 1980 16244
>
>
> It was suggested to me by a statistician to use LSmeans to do
> this; however, I do not have SAS, nor do I know anything much
> about SAS. I have spent DAYS reading about these "LSmeans"
> and while (I think) I understand what they are, I have
> absolutely no idea how to a) calculate them in R and b) how
> to use them to impute my missing values in R. Again, I've
> searched the mail lists, internet and literature and have not
> found any documentation to advise on how to do this - I'm lost.
>
> I've looked at popMeans, but have no clue how to use this
> with predict() - if this is even the route to go. Any advice
> would be much appreciated. Note that YEAR will be treated as
> a factor and not a linear variable (i.e., the relationship
> between COUNT and YEAR is not linear - rather there are highs
> and lows about every 10 or so years).
>
> One thought I did have was to just set up a loop to calculate
> the least-squares estimates as:
>
> Yij = (IYi + JYj - Y)/[(I-1)(J-1)]
> where I = number of treatments and J = number of blocks (so
> I = sites and J = years). I found this formula in some stats
> lecture handouts by UC Davis on unbalanced data and
> LSMeans...but does it yield the same thing as using the
> LSmeans estimates? Does it make any sense? Thoughts?
>
> Many thanks in advance.
>
> Jenn
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 43
Date: Thu, 05 Apr 2012 17:46:25 +0200
From: Uwe Ligges <ligges@statistik.tu-dortmund.de>
To: Drew Tyre <atyre2@unl.edu>
Cc: r-help@r-project.org
Subject: Re: [R] unable to move temporary installation
Message-ID: <4F7DBE51.9010008@statistik.tu-dortmund.de>
Content-Type: text/plain; charset=windows-1252; format=flowed
On 05.04.2012 17:40, Drew Tyre wrote:
> A final, final followup. Uwe, your suggestion is spot on - disabling the
> virus scanner fixes the problem. UNL recently changed virus scanning
> software, so this issue arose with Windows XP and Symantec Endpoint
> Protection. It can be readily disabled and reenabled from the system tray.
OK, problem is probably that the virus scanner locks the file and hence
R cannot move them.
Best,
Uwe Ligges
> thank you all for your various suggestions and assistance.
>
> 2012/4/4 Uwe Ligges <ligges@statistik.tu-dortmund.de>
>
>>
>>
>> On 03.04.2012 19:43, Drew Tyre wrote:
>>
>>> A final followup. I have identified a rather extreme workaround.
The
>>> problem arises when the function utils:::unpackPkgZip uses
>>> file.rename(...)
>>> to move the unzipped binary package from the temporary directory
that it
>>> was unpacked into into the proper directory in the library tree. If
one
>>> does
>>> debug(utils:::unpackPkgZip)
>>> and then steps through the function line by line, it works.
>>>
>>
>> Then check your virus scanner ot the speed of IO if this is a remote
file
>> system.
>>
>> Uwe Ligges
>>
>>
>>
>> Thank you.
>>>
>>> On Mon, Apr 2, 2012 at 12:06 PM, Drew Tyre <atyre2@unl.edu> wrote:
>>>
>>> OK - so I followed the following steps, which I think rule out
those
>>>> causes
>>>>
>>>> 1) I uninstalled all remaining versions of R, and then deleted
all the
>>>> directories in c:\progra~1\R
>>>> 2) I restarted the computer
>>>> 3) I installed 2.14.2, and attempted to install the Rcmdr
package. Same
>>>> error message for both the cars package and the Rcmdr package.
>>>> 4) I then exited and confirmed that I have write permission to
>>>> C:\progra~1\R\R-2.14.2\libraries both by looking at the
permissions,
>>>> and by
>>>> creating a directory in there. I appear to have full control,
and I could
>>>> create the directory. note that R is able to create the
temporary
>>>> directory
>>>> to install the package, but not the correct, final directory.
>>>> 5) I then uninstalled 2.14.2, and installed 2.15.0, hoping for
a fix. No
>>>> luck. Same error message.
>>>> 6) I then tried installing the packages to a different
directory, one
>>>> that
>>>> I created, c:\test, using
>>>> install.packages("Rcmdr","c:\\test")
>>>> This time, the car package installed correctly, but Rcmdr still
had the
>>>> same warning message
>>>>
>>>> Warning: unable to move temporary installation
>>>> ‘c:\test\file136c67c337b3\Rcmdr’ to ‘c:\test\Rcmdr’
>>>>
>>>> There is clearly something messed up on this computer, but
I'm at a loss
>>>> for how to get around it. Thanks for the suggestions, and I
guess I have
>>>> to [...]
>>>> 2012/3/31 Uwe Ligges <ligges@statistik.tu-dortmund.de>
>>>>
>>>> On 31.03.2012 16:15, Drew Tyre wrote:
>>>>>
>>>>> Hi all,
>>>>>>
>>>>>> I'm having a strange error that prevents me from
installing new
>>>>>>
>>>>> packages,
>>>>
>>>>> or updating packages after reinstalling. The error message
is
>>>>>> Warning: unable to move temporary installation
?C:\Program
>>>>>> Files\R\R-2.14.2\library\****file15045004ac2\sandwich?
to ?C:\Program
>>>>>> Files\R\R-2.14.2\library\****sandwich?
>>>>>> for one of the packages that is failing to
install/update. This error
>>>>>> started happening after I attempted installing
lme4Eigen from the
>>>>>>
>>>>> R-Forge
>>>>
>>>>> repositories - that installation failed too.
>>>>>>
>>>>>> Any suggestions for fixes welcome. I don't want to
upgrade to 2.15 just
>>>>>> yet
>>>>>> because I'm in the middle of a project (although if
that's the solution
>>>>>>
>>>>> I
>>>>
>>>>> guess I'll have to do it).
>>>>>>
>>>>>>
>>>>> Probably the package is in use by another instance of R.
Otherwise,
>>>>> check
>>>>> permissions.
>>>>>
>>>>> Best,
>>>>> Uwe Ligges
>>>>>
>>>>>
>>>>> R version 2.14.2 (2012-02-29)
>>>>>
>>>>>> Platform: i386-pc-mingw32/i386 (32-bit)
>>>>>>
>>>>>> locale:
>>>>>> [1] LC_COLLATE=English_United States.1252
>>>>>> [2] LC_CTYPE=English_United States.1252
>>>>>> [3] LC_MONETARY=English_United States.1252
>>>>>> [4] LC_NUMERIC=C
>>>>>> [5] LC_TIME=English_United States.1252
>>>>>>
>>>>>> attached base packages:
>>>>>> [1] stats graphics grDevices utils datasets
methods base
>>>>>>
>>>>>> loaded via a namespace (and not attached):
>>>>>> [1] tools_2.14.2
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> ______________________________________________
>>>>>> R-help@r-project.org mailing list
>>>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>>>>> and provide commented, minimal, self-contained, reproducible code.
>>>>>>
>>>>>>
>>>>>
>>>>
>>>> --
>>>> Drew Tyre
>>>>
>>>> School of Natural Resources
>>>> University of Nebraska-Lincoln
>>>> 416 Hardin Hall, East Campus
>>>> 3310 Holdrege Street
>>>> Lincoln, NE 68583-0974
>>>>
>>>> phone: +1 402 472 4054
>>>> fax: +1 402 472 2946
>>>> email: atyre2@unl.edu
>>>> http://snr.unl.edu/tyre
>>>>
>>>> http://aminpractice.blogspot.com
>>>> http://www.flickr.com/photos/atiretoo
>>>>
>>>> [[alternative HTML version deleted]]
>>>>
>>>>
>>>> ______________________________________________
>>>> R-help@r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained, reproducible code.
>>>>
>>>>
>>>>
>>>
>>>
>
>
------------------------------
Message: 44
Date: Thu, 5 Apr 2012 11:08:22 -0400
From: Debbie Smith <dsmithuser@gmail.com>
To: r-help@r-project.org
Subject: [R] ggplot2 error: arguments imply differing number of rows
Message-ID:
<CAHrgPOO=HaEfNpPGD8T5nOc2SW4UxTUJRJv7-Emb7LS0avkJmA@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
This example is from "The R Book" by Michael J. Crawley.
d=read.table(
"http://www.bio.ic.ac.uk/research/mjcraw/therbook/data/diminish.txt"
,header=TRUE)
p=qplot(xv,yv,data=d); p
m1=lm(yv~xv,data=d)
p1=p + geom_abline(intercept=coefficients(m1)[1],
slope=coefficients(m1)[2] ); p1
m2=lm(yv~xv + I(xv^2),data=d)
x=seq(min(d$xv),max(d$xv),length=100)
p1 + geom_line(aes(x=x,y=predict(m2,list(xv=x))), color="red")
I can run the above code without any problems in an older version of
R (R 2.10.1). But if I run it in a newer version of R (R 2.15, R
2.14), I get the error:
> p1 + geom_line(aes(x=x,y=predict(m2,list(xv=x))), color="red")
Error in data.frame(evaled, PANEL = data$PANEL) :
arguments imply differing number of rows: 100, 18
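The usual fix (a sketch, untested here) is to give the predictions their own
data frame, because later ggplot2 versions require mapped vectors to match the
length of the layer's data:

newd <- data.frame(xv = x, yv = predict(m2, list(xv = x)))
p1 + geom_line(data = newd, color = "red")  # inherits aes(xv, yv) from qplot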
Debbie
------------------------------
Message: 45
Date: Thu, 5 Apr 2012 10:34:32 -0500
From: Drew Tyre <atyre2@unl.edu>
To: Ramiro Barrantes <ramiro@precisionbioassay.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] reclaiming lost memory in R
Message-ID:
<CAE-6tvyEmU+DZ9HE7z_rdG6N9aBBh_=C8hiXAEvzHBaFfshxKA@mail.gmail.com>
Content-Type: text/plain
Ramiro
I think the problem is the loop - R doesn't release memory allocated inside
an expression until the expression completes. A for loop is an expression,
so it duplicates fit and dataset on every iteration. An alternative
approach that I have found successful in similar circumstances is to use
lapply(), which collects each iteration's result directly, like this:
fits <- lapply(1:N, function(i) {
  dataset <- generateDataset(i)
  try(memoryHogFunction(dataset, otherParameters))
})
I'm assuming above that you want to save the result of memoryHogFunction
from each iteration.
hth
Drew
On Thu, Apr 5, 2012 at 8:35 AM, Ramiro Barrantes <
ramiro@precisionbioassay.com> wrote:
> Dear list,
>
> I am trying to reclaim what I think is lost memory in R, I have been using
> gc(), rm() and also using Rprof to figure out where all the memory is going
> but I might be missing something.
>
> I have the following situation
>
> basic loop which calls memoryHogFunction:
>
> for (i in 1:N) {
> dataset <- generateDataset(i)
> fit <- try( memoryHogFunction(dataset, otherParameters))
> }
>
> and within
>
> memoryHogFunction <- function(dataset, params){
>
> fit <- try(nlme(someinitialValues))
> ...
> fit <- try(updatenlme(otherInitialValues))
> ...
> fit <- try(updatenlme(otherInitialValues))
> ...
> ret <- fit ( and other things)
> return a result "ret"
> }
>
> The problem is that, memoryHogFunction uses a lot of memory, and at the
> end returns a result (which is not big) but the memory used by the
> computation seems to be still occupied. The original loop continues, but
> the memory used by the program grows and grows after each call to
> memoryHogFunction.
>
> I have been trying to do gc() after each run in the loop, and have even
> done:
>
> in memoryHogFunction()
> ...
> ret <- fit ( and other things)
> rm(list=ls()[-match("ret",ls())])
> return a result "ret"
> }
>
> ???
>
> A typical result from gc() after each loop iteration says:
> used (Mb) gc trigger (Mb) max used (Mb)
> Ncells 326953 17.5 597831 32.0 597831 32.0
> Vcells 1645892 12.6 3048985 23.3 3048985 23.3
>
> Which doesn't reflect the 340 MB (and 400+ MB of virtual memory) that are
> being used right now.
>
> Even when I do:
>
> print(sapply(ls(all.names=TRUE), function(x) object.size(get(x))))
>
> the largest object is 8179808, which is what it should be.
>
> The only thing that looked suspicious was the following within Rprof (with
> the memory=stats option); might the tot.duplications be a problem?
>
> index: "with":"with.default"
> vsize.small max.vsize.small vsize.large max.vsize.large
> 30841 63378 20642 660787
> nodes max.nodes duplications tot.duplications
> 3446132 8115016 12395 61431787
> samples
> 4956
>
> Any suggestions? Is it something about the use of loops in R? Is it
> maybe the try() calls?
>
> Thanks in advance for any help,
>
> Ramiro
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Drew Tyre
School of Natural Resources
University of Nebraska-Lincoln
416 Hardin Hall, East Campus
3310 Holdrege Street
Lincoln, NE 68583-0974
phone: +1 402 472 4054
fax: +1 402 472 2946
email: atyre2@unl.edu
http://snr.unl.edu/tyre
http://aminpractice.blogspot.com
[[alternative HTML version deleted]]
------------------------------
Message: 46
Date: Thu, 5 Apr 2012 10:40:08 -0500
From: Drew Tyre <atyre2@unl.edu>
To: Uwe Ligges <ligges@statistik.tu-dortmund.de>
Cc: r-help@r-project.org
Subject: Re: [R] unable to move temporary installation
Message-ID:
<CAE-6tvzRKan=Ugm_mi+5y0QFDXzYA0bQb5W8fsBTZqMEj7h-Mw@mail.gmail.com>
Content-Type: text/plain
A final, final followup. Uwe, your suggestion is spot on - disabling the
virus scanner fixes the problem. UNL recently changed virus scanning
software, so this issue arose with Windows XP and Symantec Endpoint
Protection. It can be readily disabled and reenabled from the system tray.
thank you all for your various suggestions and assistance.
2012/4/4 Uwe Ligges <ligges@statistik.tu-dortmund.de>
>
>
> On 03.04.2012 19:43, Drew Tyre wrote:
>
>> A final followup. I have identified a rather extreme workaround. The
>> problem arises when the function utils:::unpackPkgZip uses
>> file.rename(...)
>> to move the unzipped binary package from the temporary directory that
it
>> was unpacked into into the proper directory in the library tree. If one
>> does
>> debug(utils:::unpackPkgZip)
>> and then steps through the function line by line, it works.
>>
>
> Then check your virus scanner ot the speed of IO if this is a remote file
> system.
>
> Uwe Ligges
>
>
>
> Thank you.
>>
>> On Mon, Apr 2, 2012 at 12:06 PM, Drew Tyre <atyre2@unl.edu> wrote:
>>
>> OK - so I followed the following steps, which I think rule out those
>>> causes
>>>
>>> 1) I uninstalled all remaining versions of R, and then deleted all
the
>>> directories in c:\progra~1\R
>>> 2) I restarted the computer
>>> 3) I installed 2.14.2, and attempted to install the Rcmdr package.
Same
>>> error message for both the cars package and the Rcmdr package.
>>> 4) I then exited and confirmed that I have write permission to
>>> C:\progra~1\R\R-2.14.2\libraries both by looking at the
permissions,
>>> and by
>>> creating a directory in there. I appear to have full control, and I
could
>>> create the directory. note that R is able to create the temporary
>>> directory
>>> to install the package, but not the correct, final directory.
>>> 5) I then uninstalled 2.14.2, and installed 2.15.0, hoping for a
fix. No
>>> luck. Same error message.
>>> 6) I then tried installing the packages to a different directory,
one
>>> that
>>> I created, c:\test, using
>>> install.packages("Rcmdr","c:\\test")
>>> This time, the car package installed correctly, but Rcmdr still had
the
>>> same warning message
>>>
>>> Warning: unable to move temporary installation
>>> ‘c:\test\file136c67c337b3\Rcmdr’ to ‘c:\test\Rcmdr’
>>>
>>> There is clearly something messed up on this computer, but I'm
at a loss
>>> for how to get around it. Thanks for the suggestions, and I guess I
have
>>> to [...]
>>> 2012/3/31 Uwe Ligges <ligges@statistik.tu-dortmund.de>
>>>
>>> On 31.03.2012 16:15, Drew Tyre wrote:
>>>>
>>>> Hi all,
>>>>>
>>>>> I'm having a strange error that prevents me from
installing new
>>>>>
>>>> packages,
>>>
>>>> or updating packages after reinstalling. The error message is
>>>>> Warning: unable to move temporary installation ‘C:\Program
>>>>> Files\R\R-2.14.2\library\file15045004ac2\sandwich’ to ‘C:\Program
>>>>> Files\R\R-2.14.2\library\sandwich’
>>>>> for one of the packages that is failing to install/update.
This error
>>>>> started happening after I attempted installing lme4Eigen
from the
>>>>>
>>>> R-Forge
>>>
>>>> repositories - that installation failed too.
>>>>>
>>>>> Any suggestions for fixes welcome. I don't want to
upgrade to 2.15 just
>>>>> yet
>>>>> because I'm in the middle of a project (although if
that's the solution
>>>>>
>>>> I
>>>
>>>> guess I'll have to do it).
>>>>>
>>>>>
>>>> Probably the package is in use by another instance of R.
Otherwise,
>>>> check
>>>> permissions.
>>>>
>>>> Best,
>>>> Uwe Ligges
>>>>
>>>>
>>>> R version 2.14.2 (2012-02-29)
>>>>
>>>>> Platform: i386-pc-mingw32/i386 (32-bit)
>>>>>
>>>>> locale:
>>>>> [1] LC_COLLATE=English_United States.1252
>>>>> [2] LC_CTYPE=English_United States.1252
>>>>> [3] LC_MONETARY=English_United States.1252
>>>>> [4] LC_NUMERIC=C
>>>>> [5] LC_TIME=English_United States.1252
>>>>>
>>>>> attached base packages:
>>>>> [1] stats graphics grDevices utils datasets
methods base
>>>>>
>>>>> loaded via a namespace (and not attached):
>>>>> [1] tools_2.14.2
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> ______________________________________________
>>>>> R-help@r-project.org mailing list
>>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>>>> and provide commented, minimal, self-contained, reproducible code.
>>>>>
>>>>>
>>>>
>>>
>>> --
>>> Drew Tyre
>>>
>>> School of Natural Resources
>>> University of Nebraska-Lincoln
>>> 416 Hardin Hall, East Campus
>>> 3310 Holdrege Street
>>> Lincoln, NE 68583-0974
>>>
>>> phone: +1 402 472 4054
>>> fax: +1 402 472 2946
>>> email: atyre2@unl.edu
>>> http://snr.unl.edu/tyre
>>> http://aminpractice.blogspot.com
>>> http://www.flickr.com/photos/atiretoo
>>>
>>> [[alternative HTML version deleted]]
>>>
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>>
>>>
>>
>>
--
Drew Tyre
School of Natural Resources
University of Nebraska-Lincoln
416 Hardin Hall, East Campus
3310 Holdrege Street
Lincoln, NE 68583-0974
phone: +1 402 472 4054
fax: +1 402 472 2946
email: atyre2@unl.edu
http://snr.unl.edu/tyre
http://aminpractice.blogspot.com
[[alternative HTML version deleted]]
------------------------------
Message: 47
Date: Thu, 5 Apr 2012 11:57:04 -0400
From: "R. Michael Weylandt" <michael.weylandt@gmail.com>
To: Jonathan Greenberg <jgrn@illinois.edu>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] Best way to search r- functions and mailing list?
Message-ID:
<CAAmySGM9_b=S9AisuRB3NexO6HnBEDY5cmaJB3m81+Bk20g7tg@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
http://www.rseek.org/ perhaps. [Take a look at the tabs on the RHS
after you do a search]
Michael
On Thu, Apr 5, 2012 at 11:36 AM, Jonathan Greenberg <jgrn@illinois.edu>
wrote:
> R-helpers:
>
> It looks like http://finzi.psych.upenn.edu/search.html has stopped
> spidering the mailing lists -- this used to be my go-to site for
> searching for R solutions. ?Are there any good replacements for this?
> I want to be able to search both functions and mailing lists at the same
> time.
> --j
>
> --
> Jonathan A. Greenberg, PhD
> Assistant Professor
> Department of Geography
> University of Illinois at Urbana-Champaign
> 607 South Mathews Avenue, MC 150
> Urbana, IL 61801
> Phone: 415-763-5476
> AIM: jgrn307, MSN: jgrn307@hotmail.com, Gchat: jgrn307, Skype: jgrn3007
> http://www.geog.illinois.edu/people/JonathanGreenberg.html
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 48
Date: Thu, 5 Apr 2012 10:55:30 -0500
From: "Mendiburu, Felipe (CIP)" <F.MENDIBURU@CGIAR.ORG>
To: "Richard M. Heiberger" <rmh@temple.edu>, "Jinsong
Zhao"
<jszhao@yeah.net>
Cc: "r-help@R-project.org" <r-help@r-project.org>
Subject: Re: [R] Fisher's LSD multiple comparisons in a two-way ANOVA
Message-ID:
<05E392101B4E5240A32F792F6F2C82322EB165@webmail.cip.cgiar.org>
Content-Type: text/plain
Dear Richard and Jinsong,
Other outputs are available with the agricolae library; see its manual.
##
library(agricolae)
comp1 <- LSD.test(x.aov,"a", group=FALSE)
comp2 <- LSD.test(x.aov,"b", group=TRUE)
# interaction ab
# Tukey's test
comp3 <- HSD.test(xi.aov,"ab")
# graphics
par(mfrow=c(2,2))
bar.err(comp1,ylim=c(0,100), col="yellow")
bar.group(comp2,ylim=c(0,100),density=4,col="blue")
bar.group(comp3,ylim=c(0,100), col="brown",las=2)
bar.err(comp3,ylim=c(0,100),col=0,las=2)
Regards,
Felipe de Mendiburu
Statistician
-----Original Message-----
From: r-help-bounces@r-project.org on behalf of Richard M. Heiberger
Sent: Wed 4/4/2012 9:49 PM
To: Jinsong Zhao
Cc: r-help@R-project.org
Subject: Re: [R] Fisher's LSD multiple comparisons in a two-way ANOVA
Here is your example. The table you displayed in gigawiz ignored the
two-way factor structure
and interpreted the data as a single factor with 6 levels. I created the
interaction of
a and b to get that behavior.
## your example, with data stored in a data.frame
tmp <- data.frame(x=c(76, 84, 78, 80, 82, 70, 62, 72,
71, 69, 72, 74, 66, 74, 68, 66,
69, 72, 72, 78, 74, 71, 73, 67,
86, 67, 72, 85, 87, 74, 83, 86,
66, 68, 70, 76, 78, 76, 69, 74,
72, 72, 76, 69, 69, 82, 79, 81),
a=factor(rep(c("A1", "A2"), each = 24)),
b=factor(rep(c("B1", "B2",
"B3"), each=8, times=2)))
x.aov <- aov(x ~ a*b, data=tmp)
summary(x.aov)
## your request
require(multcomp)
tmp$ab <- with(tmp, interaction(a, b))
xi.aov <- aov(x ~ ab, data=tmp)
summary(xi.aov)
xi.glht <- glht(xi.aov, linfct=mcp(ab="Tukey"))
confint(xi.glht)
## graphs
## boxplots
require(lattice)
bwplot(x ~ ab, data=tmp)
## interaction plot
## install.packages("HH") ## if you don't have HH yet
require(HH)
interaction2wt(x ~ a*b, data=tmp)
On Wed, Apr 4, 2012 at 1:10 PM, Jinsong Zhao <jszhao@yeah.net> wrote:
> On 2012-04-03 20:03, Rmh wrote:
>
>> yes. See ?glht in the multcomp package, and the examples using glht in
>> ?MMC in the HH package.
>>
>> Sent from my iPhone
>>
>>
> Thank you very much for the clues. However, I can't figure out how to
> construct the linfct in glht.
>
> I also tried to inverse the computation based on:
>
http://www.gigawiz.com/images12/twowayrmposthoc.jpg
> However, I can't catch the MSE used in the above figure.
>
> Regards,
> Jinsong
>
>
[[alternative HTML version deleted]]
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]
------------------------------
Message: 49
Date: Thu, 05 Apr 2012 17:18:07 +0100
From: Prof Brian Ripley <ripley@stats.ox.ac.uk>
To: r-help@r-project.org
Subject: Re: [R] API Baddperiods in RBloomberg
Message-ID: <4F7DC5BF.9050109@stats.ox.ac.uk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 05/04/2012 08:54, arvanitis.christos wrote:
> Hi to all,
>
> Do you know how I can use Baddperiods from RBloomberg
Most of us cannot even use 'RBloomberg': it has been removed at the
request of Bloomberg's lawyers.
--
Brian D. Ripley, ripley@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 50
Date: Thu, 05 Apr 2012 12:22:40 -0400
From: Duncan Murdoch <murdoch.duncan@gmail.com>
To: Alexander Shenkin <ashenkin@ufl.edu>
Cc: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID: <4F7DC6D0.5050906@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 04/04/2012 3:25 PM, Alexander Shenkin wrote:
> Hello Folks,
>
> When I run the document below through sweave, rgui.exe/rsession.exe
> leaves a file handle open to the sweave-001.pdf graphic (as verified by
> process explorer). Pdflatex.exe then crashes (with a Permission Denied
> error) because the graphic file is locked.
>
> This only seems to happen when there is an error in the sweave document.
> When there are no errors, no file handles are left open. However, once
> a file handle is stuck open, I can find no other way of closing it save
> for quitting out of R.
>
> Any help would be greatly appreciated! It would be nice to be able to
> write flawless sweave every time, but flawed as I am, I am having to
> restart R continuously.
I'd suggest a different workflow, in which you run a new copy of R every
time you want to Sweave a document. The files will be closed when that
copy dies, and the results are less likely to be affected by the current
state of your workspace (assuming you don't load an old workspace in the
new copy).
For example, when I'm working on a Sweave document, I spend my time in
my text editor, and get it to run R to process the file whenever I want
to see what the output looks like.
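For instance, from a shell or an editor hook (one common incantation; the
exact setup depends on the editor):

R CMD Sweave test.Rnw
pdflatex test.tex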
The only real disadvantages to this approach that I can think of are
that you need to figure out how to tell your text editor to run R (and
that might be hard if you're using a poor editor like Windows Notepad,
but is usually easy), and it will run a tiny bit slower because you need
to start up R every time.
Duncan Murdoch
> Thanks,
> Allie
>
>
> OS: Windows 7 Pro x64 SP1
>
>
> > sessionInfo()
> R version 2.14.2 (2012-02-29)
> Platform: i386-pc-mingw32/i386 (32-bit)
>
>
> test.Rnw:
>
> \documentclass{article}
> \title {file handle test}
> \author{test author}
> \usepackage{Sweave}
> \begin {document}
> \maketitle
>
> \SweaveOpts{prefix.string=sweave}
>
> \begin{figure}
> \begin{center}
>
> <<fig=TRUE, echo=FALSE>>=
> df = data.frame(a=rnorm(100), b=rnorm(100), group = c("g1",
> "g2", "g3", "g4"))
> plot(df$a, df$y, foo)
> @
>
> \caption{test figure one}
> \label{fig:one}
> \end{center}
> \end{figure}
> \end{document}
>
>
>
> Sweave command run:
>
> Sweave("test.Rnw", syntax="SweaveSyntaxNoweb")
>
>
>
> Sweave.sty:
>
> \NeedsTeXFormat{LaTeX2e}
> \ProvidesPackage{Sweave}{}
>
> \RequirePackage{ifthen}
> \newboolean{Sweave@gin}
> \setboolean{Sweave@gin}{true}
> \newboolean{Sweave@ae}
> \setboolean{Sweave@ae}{true}
>
> \DeclareOption{nogin}{\setboolean{Sweave@gin}{false}}
> \DeclareOption{noae}{\setboolean{Sweave@ae}{false}}
> \ProcessOptions
>
> \RequirePackage{graphicx,fancyvrb}
> \IfFileExists{upquote.sty}{\RequirePackage{upquote}}{}
>
>
\ifthenelse{\boolean{Sweave@gin}}{\setkeys{Gin}{width=0.8\textwidth}}{}%
> \ifthenelse{\boolean{Sweave@ae}}{%
> \RequirePackage[T1]{fontenc}
> \RequirePackage{ae}
> }{}%
>
> \DefineVerbatimEnvironment{Sinput}{Verbatim}{fontshape=sl}
> \DefineVerbatimEnvironment{Soutput}{Verbatim}{}
> \DefineVerbatimEnvironment{Scode}{Verbatim}{fontshape=sl}
>
> \newenvironment{Schunk}{}{}
>
> \newcommand{\Sconcordance}[1]{%
> \ifx\pdfoutput\undefined%
> \csname newcount\endcsname\pdfoutput\fi%
> \ifcase\pdfoutput\special{#1}%
> \else\immediate\pdfobj{#1}\fi}
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 51
Date: Thu, 05 Apr 2012 11:31:53 -0500
From: Alexander Shenkin <ashenkin@ufl.edu>
To: Duncan Murdoch <murdoch.duncan@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID: <4F7DC8F9.2000409@ufl.edu>
Content-Type: text/plain; charset=ISO-8859-1
Thanks for the nice ideas, Duncan. I think that would work nicely in
most cases. The major issue with that workflow in my case is that the
scripts to set up my workspace take around a half-hour to run (I really
wish CUDA was working with my setup!), so running R each time in that
case is time-consuming.
Perhaps I should be working more with intermediate files, or perhaps
writing the workspace out to an .Rdata file and reading that in the
sweave document instead of running the entire data-prep script.
Thanks,
Allie
On 4/5/2012 11:22 AM, Duncan Murdoch wrote:
> On 04/04/2012 3:25 PM, Alexander Shenkin wrote:
>> Hello Folks,
>>
>> When I run the document below through sweave, rgui.exe/rsession.exe
>> leaves a file handle open to the sweave-001.pdf graphic (as verified by
>> process explorer). Pdflatex.exe then crashes (with a Permission Denied
>> error) because the graphic file is locked.
>>
>> This only seems to happen when there is an error in the sweave
document.
>> When there are no errors, no file handles are left open. However,
once
>> a file handle is stuck open, I can find no other way of closing it save
>> for quitting out of R.
>>
>> Any help would be greatly appreciated! It would be nice to be able to
>> write flawless sweave every time, but flawed as I am, I am having to
>> restart R continuously.
>
> I'd suggest a different workflow, in which you run a new copy of R
every
> time you want to Sweave a document. The files will be closed when that
> copy dies, and the results are less likely to be affected by the current
> state of your workspace (assuming you don't load an old workspace in
the
> new copy).
>
> For example, when I'm working on a Sweave document, I spend my time in
> my text editor, and get it to run R to process the file whenever I want
> to see what the output looks like.
>
> The only real disadvantages to this approach that I can think of are
> that you need to figure out how to tell your text editor to run R (and
> that might be hard if you're using a poor editor like Windows Notepad,
> but is usually easy), and it will run a tiny bit slower because you need
> to start up R every time.
>
> Duncan Murdoch
>
>> Thanks,
>> Allie
>>
>>
>> OS: Windows 7 Pro x64 SP1
>>
>>
>> > sessionInfo()
>> R version 2.14.2 (2012-02-29)
>> Platform: i386-pc-mingw32/i386 (32-bit)
>>
>>
>> test.Rnw:
>>
>> \documentclass{article}
>> \title {file handle test}
>> \author{test author}
>> \usepackage{Sweave}
>> \begin {document}
>> \maketitle
>>
>> \SweaveOpts{prefix.string=sweave}
>>
>> \begin{figure}
>> \begin{center}
>>
>> <<fig=TRUE, echo=FALSE>>=
>> df = data.frame(a=rnorm(100), b=rnorm(100), group = c("g1",
>> "g2", "g3", "g4"))
>> plot(df$a, df$y, foo)
>> @
>>
>> \caption{test figure one}
>> \label{fig:one}
>> \end{center}
>> \end{figure}
>> \end{document}
>>
>>
>>
>> Sweave command run:
>>
>> Sweave("test.Rnw", syntax="SweaveSyntaxNoweb")
>>
>>
>>
>> Sweave.sty:
>>
>> \NeedsTeXFormat{LaTeX2e}
>> \ProvidesPackage{Sweave}{}
>>
>> \RequirePackage{ifthen}
>> \newboolean{Sweave@gin}
>> \setboolean{Sweave@gin}{true}
>> \newboolean{Sweave@ae}
>> \setboolean{Sweave@ae}{true}
>>
>> \DeclareOption{nogin}{\setboolean{Sweave@gin}{false}}
>> \DeclareOption{noae}{\setboolean{Sweave@ae}{false}}
>> \ProcessOptions
>>
>> \RequirePackage{graphicx,fancyvrb}
>> \IfFileExists{upquote.sty}{\RequirePackage{upquote}}{}
>>
>>
>>
\ifthenelse{\boolean{Sweave@gin}}{\setkeys{Gin}{width=0.8\textwidth}}{}%
>> \ifthenelse{\boolean{Sweave@ae}}{%
>> \RequirePackage[T1]{fontenc}
>> \RequirePackage{ae}
>> }{}%
>>
>> \DefineVerbatimEnvironment{Sinput}{Verbatim}{fontshape=sl}
>> \DefineVerbatimEnvironment{Soutput}{Verbatim}{}
>> \DefineVerbatimEnvironment{Scode}{Verbatim}{fontshape=sl}
>>
>> \newenvironment{Schunk}{}{}
>>
>> \newcommand{\Sconcordance}[1]{%
>> \ifx\pdfoutput\undefined%
>> \csname newcount\endcsname\pdfoutput\fi%
>> \ifcase\pdfoutput\special{#1}%
>> \else\immediate\pdfobj{#1}\fi}
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 52
Date: Thu, 5 Apr 2012 11:39:09 -0500
From: Kevin Wright <kw.stat@gmail.com>
To: Jonathan Greenberg <jgrn@illinois.edu>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] Best way to search r- functions and mailing list?
Message-ID:
<CAKFxdiTdtXot9gAY-AmupCpJCXnZJgGb88BGUJurKkqeowJyMQ@mail.gmail.com>
Content-Type: text/plain
Use rseek.org
On Thu, Apr 5, 2012 at 10:36 AM, Jonathan Greenberg
<jgrn@illinois.edu> wrote:
> R-helpers:
>
> It looks like http://finzi.psych.upenn.edu/search.html has stopped
> spidering the mailing lists -- this used to be my go-to site for
> searching for R solutions. Are there any good replacements for this?
> I want to be able to search both functions and mailing lists at the
[[elided Yahoo spam]]
>
> --j
>
> --
> Jonathan A. Greenberg, PhD
> Assistant Professor
> Department of Geography
> University of Illinois at Urbana-Champaign
> 607 South Mathews Avenue, MC 150
> Urbana, IL 61801
> Phone: 415-763-5476
> AIM: jgrn307, MSN: jgrn307@hotmail.com, Gchat: jgrn307, Skype: jgrn3007
> http://www.geog.illinois.edu/people/JonathanGreenberg.html
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Kevin Wright
[[alternative HTML version deleted]]
------------------------------
Message: 53
Date: Thu, 5 Apr 2012 12:40:23 -0400
From: Sarah Goslee <sarah.goslee@gmail.com>
To: Jonathan Greenberg <jgrn@illinois.edu>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] Best way to search r- functions and mailing list?
Message-ID:
<CAM_vjumfa4pbZu1C26VmA2F9NAkhb1c_An0ASsYnO-1UQMqC+g@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
I usually use http://www.rseek.org
On Thu, Apr 5, 2012 at 11:36 AM, Jonathan Greenberg <jgrn@illinois.edu>
wrote:
> R-helpers:
>
> It looks like http://finzi.psych.upenn.edu/search.html has stopped
> spidering the mailing lists -- this used to be my go-to site for
> searching for R solutions. Are there any good replacements for this?
> I want to be able to search both functions and mailing lists at the
[[elided Yahoo spam]]
>
> --j
>
> --
--
Sarah Goslee
http://www.functionaldiversity.org
------------------------------
Message: 54
Date: Thu, 5 Apr 2012 18:44:31 +0200
From: Petr Savicky <savicky@cs.cas.cz>
To: r-help@r-project.org
Subject: Re: [R] Sum of sd between matrix cols vs spearman correlation
between them
Message-ID: <20120405164431.GA19676@cs.cas.cz>
Content-Type: text/plain; charset=utf-8
On Thu, Apr 05, 2012 at 07:46:52AM -0700, ali_protocol wrote:
> Hi all,
>
> I have a matrix (n*2), I want to compare 2 operators (2 normalization for
> array results) on these matrix.
> The 2 columns should ideally become the same after operations
> (normalization). So to compare operations,
> I do this for each normalization:
>
> s= sum (apply (normalized.matrix, 2,sd))
> c= cor (normalized[,1],normalized [,2], method='pearson')
>
>
> I expect that if normalization 1 is superior, s should be less and c
greater
> than normalization2, but both s and c change in one direction. Is this
> possible, or am I doing something wrong?
Hi.
Is "normalized.matrix" and "normalized" the same matrix?
Can you specify, which operators you use for normalization?
Without having this information, I guess that comparing the
correlations alone can be used, since correlation does not depend
on the scaling of the numbers. With an appropriate scaling factor,
the sd may be changed to any value, but this says little about the
amount of information in the data. The correlation, on the other
hand, does not change under scaling, so it may be a more reliable
measure.
Note that
apply(normalized.matrix, 2, sd)
is the same as
sd(normalized.matrix)
Hope this helps.
Petr Savicky.
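A minimal sketch of the scaling point, with made-up data:

    set.seed(1)
    m  <- matrix(rnorm(200), ncol = 2)
    m2 <- 10 * m                       # rescale both columns by 10
    sum(apply(m,  2, sd))              # about 2 on the original scale
    sum(apply(m2, 2, sd))              # ten times larger after rescaling
    cor(m[, 1],  m[, 2])               # unchanged by the rescaling
    cor(m2[, 1], m2[, 2])              # identical to the line above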
------------------------------
Message: 55
Date: Thu, 5 Apr 2012 16:27:57 +0000
From: MANI <mani_hku@hotmail.com>
To: <r-help@r-project.org>
Subject: [R] how to do piecewise linear regression in R?
Message-ID: <COL124-W539BD220AC76A25D44BAAEED330@phx.gbl>
Content-Type: text/plain
Dear all,
I want to do piecewise CAPM linear regression in R:
RRiskArb−Rf = (1−δ)[αMktLow+βMktLow(RMkt−Rf)] + δ[αMkt High +βMkt
High(RMkt −Rf )]
where δ is a dummy variable if the excess return on the value-weighted CRSP
index is above a threshold level and zero otherwise. and at the same time add
the restriction:
αMkt Low + βMkt Low · Threshold = αMkt High + βMkt High · Threshold
to ensure continuity.
But I do not know how to add this restriction in R, could you help me on this?
Thanks a lot!
Eunice
[[alternative HTML version deleted]]
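One generic way to build the continuity restriction into the fit is to
reparameterize with a hinge term, so the constraint holds by construction.
A rough sketch with made-up data and a hypothetical threshold c0 (this is
the general piecewise-linear device, not the exact CAPM notation above):

    set.seed(1)
    c0 <- 0                                    # assumed threshold
    x  <- rnorm(200)                           # excess market return
    y  <- 0.1 + 0.8 * x + 0.5 * pmax(x - c0, 0) + rnorm(200, sd = 0.1)
    fit <- lm(y ~ x + pmax(x - c0, 0))
    coef(fit)   # slope below c0 is the x term; above c0, the sum of both slopes

Continuity at the threshold is automatic, because pmax(x - c0, 0) is zero
there, so no extra constraint needs to be imposed on lm().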
------------------------------
Message: 56
Date: Thu, 5 Apr 2012 09:32:32 -0700 (PDT)
From: mhimanshu <bioinfo.himanshu@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] How to find best parameter values using deSolve n
optim() ?
Message-ID: <1333643552040-4535368.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi Thomas,
Thank you so much for your suggestion.
I tried your code and it is working fine. Now when I change the values of Y
in yobs, I get many warnings.
Say,
yobs <- data.frame(
time = 0:7,
Y = c(0.00, 3.40, 4.60, 5.80, 5.80, 6.00, 6.00),
Z = c(0.1, 0.11, 0.119, 0.128, 0.136, 0.145, 0.153, 0.16)
)
So when I fit the model with the same code that you have written, I got the
following warnings:
DLSODA- Warning..Internal T (=R1) and H (=R2) are
such that in the machine, T + H = T on the next step
(H = step size). Solver will continue anyway.
In above, R1 = 0.1484502806322D+01 R2 = 0.2264549048113D-16
and I got many such warnings.
Can you explain why this is happening? And secondly, I don't understand
why I am getting negative parameter values after fitting. Can you
please help me out with this... :)
Thanks
Himanshu
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-find-best-parameter-values-using-deSolve-n-optim-tp4506042p4535368.html
Sent from the R help mailing list archive at Nabble.com.
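On the negative parameter values: a generic sketch of one way to keep
fitted parameters positive is to optimize on the log scale (the objective
sse() below is a made-up stand-in, not the model from this thread):

    sse <- function(p) sum((p[1] * exp(-p[2] * (0:7)) - 1)^2)  # stand-in objective
    start <- c(2, 0.5)
    fit <- optim(log(start), function(lp) sse(exp(lp)))        # search in log space
    exp(fit$par)                                               # back-transform: always positive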
------------------------------
Message: 57
Date: Thu, 5 Apr 2012 09:42:16 -0700 (PDT)
From: Rui Barradas <rui1174@sapo.pt>
To: r-help@r-project.org
Subject: Re: [R] random sample from list
Message-ID: <1333644136475-4535397.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hello,
>
> #Here is how I have tried to sample but it is not sampling from the right
> part of the list
>
> bg<- z_nonna[sample(1:length(z_nonna), 5000, replace=FALSE)]
>
You are sampling from the length of z_nonna, with no guarantee that they are
indices to unique list elements.
Try this.
# First, create some fake data.
n <- 1000
z <- list()
set.seed(1234)
for(i in 1:n) z[[i]] <- sample(letters, 2)
# Now sample some unique elements from it.
iz <- which(!duplicated(z))
iz <- sample(iz, 100) # sample from the non-duplicate indices.
z[iz]
Hope this helps,
Rui Barradas
--
View this message in context:
http://r.789695.n4.nabble.com/random-sample-from-list-tp4533936p4535397.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 58
Date: Thu, 5 Apr 2012 12:57:23 -0400
From: David Winsemius <dwinsemius@comcast.net>
To: Daisy Englert Duursma <daisy.duursma@gmail.com>
Cc: "r-help@R-project.org" <r-help@r-project.org>
Subject: Re: [R] random sample from list
Message-ID: <94E38464-8BF3-4B24-97BA-5606235680D2@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Apr 5, 2012, at 12:00 AM, Daisy Englert Duursma wrote:
> random selection of cells in raster based on distance from xy
> locations
>
> Hi,
>
> I am trying to sample a raster for random cells that occur within a
> specific distance of point locations. I have successfully found
> multiple
> ways of doing this but have memory issues with very large datasets. To
> overcome this problem I am working with lists. I am now stuck on how
> to
> randomly sample the correct elements of a list. Here is an example
> of my
> code and an example dataset.
>
> rm(list = ls())
>
> #load libraries
>
> library(dismo)
> library(raster)
>
> ##example data
> #load map of land
>
files<-list.files(path=paste(system.file(package="dismo"),"/
> ex",sep=""),pattern="grd",full.names=TRUE)
> mask <- raster(files[[9]])
> #make point data
> pts<-randomPoints(mask,100)
> #extract the unique cell numbers within a 800km buffer of the points,
> remove NA cells
> z <- extract(mask, pts, buffer=800000,cellnumbers=T)
> z_nonna <- lapply(z, na.omit)
>
> ###########PROBLEM AREA##########
> ##If I convert this to a dataframe and find the unique "cells"
values
> #z2<-as.data.frame(do.call(rbind,z_nonna))
> #z_unique<-unique(z2[,1])
> ##I can tell there there are 9763 unique "cells" values
> #How do I randomly sample the **LIST** NOT THE DATAFRAME for 5000
> unique
> values from "cells". I am working with huge datasets and the
data
> needs to
> stay as a list due to memory issues
>
> #Here is how I have tried to sample but it is not sampling from the
> right
> part of the list
>
> bg<- z_nonna[sample(1:length(z_nonna), 5000, replace=FALSE)]
You should have gotten an error from sample() because z_nonna is only
of length 100.
--
David Winsemius, MD
West Hartford, CT
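A minimal sketch of one way to do this directly on the list, assuming each
list element is a matrix whose first column holds the cell numbers (as in
the extract() output above):

    cells <- unique(unlist(lapply(z_nonna, function(m) m[, 1])))  # pool unique cells
    bg <- sample(cells, 5000)                                     # 5000 distinct cells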
------------------------------
Message: 59
Date: Thu, 05 Apr 2012 09:57:22 -0700
From: Spencer Graves <spencer.graves@structuremonitoring.com>
To: r-help <r-help@r-project.org>
Subject: Re: [R] Best way to search r- functions and mailing list?
Message-ID: <4F7DCEF2.1010202@structuremonitoring.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
The "sos" package is designed to search help pages only and sort the
results by package. It includes a vignette describing how to get the
results as an Excel file giving an efficient summary of which packages
contain help pages of interest including the latest date updated, etc.
I designed the package to be the quickest lit search for anything
statistical, and I don't know of anything better. I may want to look
elsewhere later, but I always start a lit search there. I suspect that
others have not found it that useful or someone else would have
mentioned it earlier on this thread ;-) Spencer
On 4/5/2012 9:40 AM, Sarah Goslee wrote:
> I usually use http://www.rseek.org
>
> On Thu, Apr 5, 2012 at 11:36 AM, Jonathan
Greenberg<jgrn@illinois.edu> wrote:
>> R-helpers:
>>
>> It looks like http://finzi.psych.upenn.edu/search.html has stopped
>> spidering the mailing lists -- this used to be my go-to site for
>> searching for R solutions. Are there any good replacements for this?
>> I want to be able to search both functions and mailing lists at the
[[elided Yahoo spam]]
>>
>> --j
>>
>> --
--
Spencer Graves, PE, PhD
President and Chief Technology Officer
Structure Inspection and Monitoring, Inc.
751 Emerson Ct.
San José, CA 95126
ph: 408-655-4567
web: www.structuremonitoring.com
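For reference, a one-line sketch of the sos search described above:

    library(sos)                           # install.packages("sos") if needed
    findFn("piecewise linear regression")  # matching help pages, sorted by package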
------------------------------
Message: 60
Date: Thu, 5 Apr 2012 09:56:35 -0700 (PDT)
From: "ux.seo" <oscar815@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] ggplot2 error: arguments imply differing number of
rows
Message-ID: <1333644995610-4535432.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
As the error message says, to use the ggplot function you had better put
your data "d" into a data.frame.
For example, with d [n x p], n: observations, p: variables:
n = dim(d)
dd = data.frame(x = d[, 2:n[2]], y = d[, 1])
Then you may get a better result after applying "dd" to the ggplot
function.
> p1 + geom_line(aes(x=x,y=predict(m2,list(xv=x))), color="red")
Error in data.frame(evaled, PANEL = data$PANEL) :
arguments imply differing number of rows: 100, 18
This example is from "The R Book" by Michael J. Crawley.
--
View this message in context:
http://r.789695.n4.nabble.com/ggplot2-error-arguments-imply-differing-number-of-rows-tp4535261p4535432.html
Sent from the R help mailing list archive at Nabble.com.
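A minimal sketch of the usual fix for that geom_line() error: put the
predictions in their own data.frame and pass it via the data argument, so
its length need not match the main plot data (names m2 and xv follow the
example quoted above; x stands for the original predictor values):

    newdat <- data.frame(xv = seq(min(x), max(x), length.out = 100))
    newdat$pred <- predict(m2, newdat)
    p1 + geom_line(data = newdat, aes(x = xv, y = pred), color = "red")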
------------------------------
Message: 61
Date: Thu, 5 Apr 2012 13:11:21 -0400
From: Sarah Goslee <sarah.goslee@gmail.com>
To: Spencer Graves <spencer.graves@prodsyse.com>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] Best way to search r- functions and mailing list?
Message-ID:
<CAM_vju=8qk39trjcQHtKQ-bXvW446FD=suSgdyLL_8X7Ni0bgg@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
sos is a great way to search help pages, agreed. But the question is
about functions AND mailing list archives, which requires an online
solution. (See subject line.)
Sarah
On Thu, Apr 5, 2012 at 12:56 PM, Spencer Graves
<spencer.graves@prodsyse.com> wrote:
> The "sos" package is designed to search help pages only and sort
the results
> by package. It includes a vignette describing how to get the results as an
> Excel file giving an efficient summary of which packages contain help pages
> of interest including the latest date updated, etc. I designed the package
> to be the quickest lit search for anything statistical, and I don't
know of
> anything better. I may want to look elsewhere later, but I always start a
> lit search there. I suspect that others have not found it that useful or
> someone else would have mentioned it earlier on this thread ;-) Spencer
>
>
> On 4/5/2012 9:40 AM, Sarah Goslee wrote:
>>
>> I usually use http://www.rseek.org
>>
>> On Thu, Apr 5, 2012 at 11:36 AM, Jonathan
Greenberg<jgrn@illinois.edu>
>> wrote:
>>
>>> R-helpers:
>>>
>>> It looks like http://finzi.psych.upenn.edu/search.html has stopped
>>> spidering the mailing lists -- this used to be my go-to site for
>>> searching for R solutions. Are there any good replacements for
this?
>>> I want to be able to search both functions and mailing lists at the
[[elided Yahoo spam]]
>>>
>>> --j
>
--
Sarah Goslee
http://www.functionaldiversity.org
------------------------------
Message: 62
Date: Thu, 5 Apr 2012 12:12:25 -0500
From: Yihui Xie <xie@yihui.name>
To: Duncan Murdoch <murdoch.duncan@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID:
<CANROs4ff4zxEJ7WmdybxtUsUVovjpTegK+wS0yd+tvMzy3GAaw@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
In terms of editors, I think RStudio is pretty good
(http://www.rstudio.org/download/preview). Or LyX
(http://yihui.name/knitr/demo/lyx/), or TeXmaker, WinEdit
(http://yihui.name/knitr/demo/editors/)... All of them start a new R
session when weaving the document, and all support one-click
compilation. In all, anything but Windows Notepad.
Regards,
Yihui
--
Yihui Xie <xieyihui@gmail.com>
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA
On Thu, Apr 5, 2012 at 11:22 AM, Duncan Murdoch
<murdoch.duncan@gmail.com> wrote:
> On 04/04/2012 3:25 PM, Alexander Shenkin wrote:
>>
>> Hello Folks,
>>
>> When I run the document below through sweave, rgui.exe/rsession.exe
>> leaves a file handle open to the sweave-001.pdf graphic (as verified by
>> process explorer). Pdflatex.exe then crashes (with a Permission Denied
>> error) because the graphic file is locked.
>>
>> This only seems to happen when there is an error in the sweave
document.
>> When there are no errors, no file handles are left open. However,
once
>> a file handle is stuck open, I can find no other way of closing it save
>> for quitting out of R.
>>
>> Any help would be greatly appreciated! It would be nice to be able to
>> write flawless sweave every time, but flawed as I am, I am having to
>> restart R continuously.
>
>
> I'd suggest a different workflow, in which you run a new copy of R
every
> time you want to Sweave a document. The files will be closed when that
copy
> dies, and the results are less likely to be affected by the current state
of
> your workspace (assuming you don't load an old workspace in the new
copy).
>
> For example, when I'm working on a Sweave document, I spend my time in
my
> text editor, and get it to run R to process the file whenever I want to see
> what the output looks like.
>
> The only real disadvantages to this approach that I can think of are that
> you need to figure out how to tell your text editor to run R (and that
might
> be hard if you're using a poor editor like Windows Notepad, but is
usually
> easy), and it will run a tiny bit slower because you need to start up R
> every time.
>
> Duncan Murdoch
>
>
>> Thanks,
>> Allie
>>
>>
>> OS: Windows 7 Pro x64 SP1
>>
>>
>> > sessionInfo()
>> R version 2.14.2 (2012-02-29)
>> Platform: i386-pc-mingw32/i386 (32-bit)
>>
>>
>> test.Rnw:
>>
>>     \documentclass{article}
>>     \title {file handle test}
>>     \author{test author}
>>     \usepackage{Sweave}
>>     \begin {document}
>>     \maketitle
>>
>>     \SweaveOpts{prefix.string=sweave}
>>
>>     \begin{figure}
>>     \begin{center}
>>
>>     <<fig=TRUE, echo=FALSE>>=
>>         df = data.frame(a=rnorm(100), b=rnorm(100), group = c("g1",
>> "g2", "g3", "g4"))
>>         plot(df$a, df$y, foo)
>>     @
>>
>>     \caption{test figure one}
>>     \label{fig:one}
>>     \end{center}
>>     \end{figure}
>>     \end{document}
>>
>>
>>
>> Sweave command run:
>>
>>     Sweave("test.Rnw", syntax="SweaveSyntaxNoweb")
>>
>>
>>
>> Sweave.sty:
>>
>>     \NeedsTeXFormat{LaTeX2e}
>>     \ProvidesPackage{Sweave}{}
>>
>>     \RequirePackage{ifthen}
>>     \newboolean{Sweave@gin}
>>     \setboolean{Sweave@gin}{true}
>>     \newboolean{Sweave@ae}
>>     \setboolean{Sweave@ae}{true}
>>
>>     \DeclareOption{nogin}{\setboolean{Sweave@gin}{false}}
>>     \DeclareOption{noae}{\setboolean{Sweave@ae}{false}}
>>     \ProcessOptions
>>
>>     \RequirePackage{graphicx,fancyvrb}
>>     \IfFileExists{upquote.sty}{\RequirePackage{upquote}}{}
>>
>>
>>
>>     \ifthenelse{\boolean{Sweave@gin}}{\setkeys{Gin}{width=0.8\textwidth}}{}%
>>     \ifthenelse{\boolean{Sweave@ae}}{%
>>       \RequirePackage[T1]{fontenc}
>>       \RequirePackage{ae}
>>     }{}%
>>
>>     \DefineVerbatimEnvironment{Sinput}{Verbatim}{fontshape=sl}
>>     \DefineVerbatimEnvironment{Soutput}{Verbatim}{}
>>     \DefineVerbatimEnvironment{Scode}{Verbatim}{fontshape=sl}
>>
>>     \newenvironment{Schunk}{}{}
>>
>>     \newcommand{\Sconcordance}[1]{%
>>       \ifx\pdfoutput\undefined%
>>       \csname newcount\endcsname\pdfoutput\fi%
>>       \ifcase\pdfoutput\special{#1}%
>>       \else\immediate\pdfobj{#1}\fi}
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 63
Date: Thu, 5 Apr 2012 10:13:15 -0700 (PDT)
From: Bcampbell99 <BrianD.Campbell@ec.gc.ca>
To: r-help@r-project.org
Subject: [R] Multi part problem...array manipulation and sampling
Message-ID: <1333645995921-4535476.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Ok, I have a new, multipart problem that I need help figuring out.
Part 1. I have a three dimensional array (species, sites, repeat counts
within sites). Sampling effort per site varies so the array should be
ragged.
Maximum number of visits at any site = 22
Number of species = 161
Number of sites = 56
I generated the array first by:
mydata<-tapply(spdata1$NTOTAL,list(spdata1$COUNT,spdata1$SPECIES,spdata1$SITESURVEY),sum)
where spdata1$NTOTAL = number of detections
spdata1$COUNT = repeat visit per site (max = 22)
spdata1$SITESURVEY = site and survey period (=56)
This gives me an array with dim (22,161,56), which is populated by either a
number (NTOTAL), or an NA.
I then converted the NAs to zeroes;
mydata[is.na(mydata)] <- 0
To get the total number of records per sitesurvey, I used:
array.dat1<-array(0,dim=c(22,1,56))
for (i in 1:56){
array.dat1[,,i]<-rowSums(mydata[,,i])
}
array.dat1[which(array.dat1==0)] = NA
So where array.dat1 is now NA, no survey was conducted.
To combine this information with the original matrix, I used:
new.mydata<-array(0,dim=c(22,162,56))
new.mydata[1:22,1:161,1:56]<-mydata
new.mydata[1:22,162,1:56]<-array.dat1
where [,162,] is the index that represents whether a survey had been
conducted or not...sitesurvey total
OK, here is where I need help;
Step 1) I need to extract from new.mydata the rows where [,162,] is not
NA. This will create a ragged array where the number of counts per
sitesurvey varies:
outdata (dim1 varies from 1 to 22...ragged, 162, 56).
Step 2) from the extracted list, I would like to randomly select n rows (say
for example, 3)
outdata (dim1=2 or 3 counts(samples), dim2=162, dim3=56)
Step 3) I need to sum the values from the columns of dimension 1 from step 2
s
outdata (dim1=1 (column sums), dim2=162, dim3=56)
Not sure how clearly I've explained this, but any suggestions would be really
helpful.
Most appreciatively:
Brian Campbell
--
View this message in context:
http://r.789695.n4.nabble.com/Multi-part-problem-array-manipulation-and-sampling-tp4535476p4535476.html
Sent from the R help mailing list archive at Nabble.com.
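A rough sketch of steps 1-3 under the setup above, handling the raggedness
per site rather than building a ragged array (here up to n = 3 visits are
drawn where available):

    out <- array(NA, dim = c(162, 56))
    for (i in 1:56) {
      ok   <- which(!is.na(new.mydata[, 162, i]))  # step 1: visits actually conducted
      take <- sample(ok, min(3, length(ok)))       # step 2: up to 3 random visits
      out[, i] <- colSums(new.mydata[take, , i, drop = FALSE])  # step 3: column sums
    }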
------------------------------
Message: 64
Date: Thu, 05 Apr 2012 12:26:38 -0500
From: Alexander Shenkin <ashenkin@ufl.edu>
To: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID: <4F7DD5CE.1090502@ufl.edu>
Content-Type: text/plain; charset=ISO-8859-1
Yep, I'm using RStudio, and have used Tinn-R in the past. RStudio does
start a new R session when processing a sweave document via the RStudio
GUI. In my case, this presented a problem for the reasons I stated
before (i.e. that I need to run sweave in the main environment, not a
new one). Hence, I'm running Sweave from the command line, and
processing the tex doc via TeXworks (from MiKTeX). I sent the RStudio
team a feature request to add an option to be able to run Sweave in the
main environment (perhaps while maintaining the default of running it in
a separate environment).
Allie
On 4/5/2012 12:12 PM, Yihui Xie wrote:
> In terms of editors, I think RStudio is pretty good
> (http://www.rstudio.org/download/preview). Or LyX
> (http://yihui.name/knitr/demo/lyx/), or TeXmaker, WinEdit
> (http://yihui.name/knitr/demo/editors/)... All of them start a new R
> session when weaving the document, and all support one-click
> compilation. In all, anything but Windows Notepad.
>
> Regards,
> Yihui
> --
> Yihui Xie <xieyihui@gmail.com>
> Phone: 515-294-2465 Web: http://yihui.name
> Department of Statistics, Iowa State University
> 2215 Snedecor Hall, Ames, IA
>
>
>
> On Thu, Apr 5, 2012 at 11:22 AM, Duncan Murdoch
> <murdoch.duncan@gmail.com> wrote:
>> On 04/04/2012 3:25 PM, Alexander Shenkin wrote:
>>>
>>> Hello Folks,
>>>
>>> When I run the document below through sweave, rgui.exe/rsession.exe
>>> leaves a file handle open to the sweave-001.pdf graphic (as
verified by
>>> process explorer). Pdflatex.exe then crashes (with a Permission
Denied
>>> error) because the graphic file is locked.
>>>
>>> This only seems to happen when there is an error in the sweave
document.
>>> When there are no errors, no file handles are left open. However,
once
>>> a file handle is stuck open, I can find no other way of closing it
save
>>> for quitting out of R.
>>>
>>> Any help would be greatly appreciated! It would be nice to be able
to
>>> write flawless sweave every time, but flawed as I am, I am having
to
>>> restart R continuously.
>>
>>
>> I'd suggest a different workflow, in which you run a new copy of R
every
>> time you want to Sweave a document. The files will be closed when that
copy
>> dies, and the results are less likely to be affected by the current
state of
>> your workspace (assuming you don't load an old workspace in the new
copy).
>>
>> For example, when I'm working on a Sweave document, I spend my time
in my
>> text editor, and get it to run R to process the file whenever I want to
see
>> what the output looks like.
>>
>> The only real disadvantages to this approach that I can think of are
that
>> you need to figure out how to tell your text editor to run R (and that
might
>> be hard if you're using a poor editor like Windows Notepad, but is
usually
>> easy), and it will run a tiny bit slower because you need to start up R
>> every time.
>>
>> Duncan Murdoch
>>
>>
>>> Thanks,
>>> Allie
>>>
>>>
>>> OS: Windows 7 Pro x64 SP1
>>>
>>>
>>> > sessionInfo()
>>> R version 2.14.2 (2012-02-29)
>>> Platform: i386-pc-mingw32/i386 (32-bit)
>>>
>>>
>>> test.Rnw:
>>>
>>> \documentclass{article}
>>> \title {file handle test}
>>> \author{test author}
>>> \usepackage{Sweave}
>>> \begin {document}
>>> \maketitle
>>>
>>> \SweaveOpts{prefix.string=sweave}
>>>
>>> \begin{figure}
>>> \begin{center}
>>>
>>> <<fig=TRUE, echo=FALSE>>=
>>> df = data.frame(a=rnorm(100), b=rnorm(100), group = c("g1",
>>> "g2", "g3", "g4"))
>>> plot(df$a, df$y, foo)
>>> @
>>>
>>> \caption{test figure one}
>>> \label{fig:one}
>>> \end{center}
>>> \end{figure}
>>> \end{document}
>>>
>>>
>>>
>>> Sweave command run:
>>>
>>> Sweave("test.Rnw",
syntax="SweaveSyntaxNoweb")
>>>
>>>
>>>
>>> Sweave.sty:
>>>
>>> \NeedsTeXFormat{LaTeX2e}
>>> \ProvidesPackage{Sweave}{}
>>>
>>> \RequirePackage{ifthen}
>>> \newboolean{Sweave@gin}
>>> \setboolean{Sweave@gin}{true}
>>> \newboolean{Sweave@ae}
>>> \setboolean{Sweave@ae}{true}
>>>
>>> \DeclareOption{nogin}{\setboolean{Sweave@gin}{false}}
>>> \DeclareOption{noae}{\setboolean{Sweave@ae}{false}}
>>> \ProcessOptions
>>>
>>> \RequirePackage{graphicx,fancyvrb}
>>> \IfFileExists{upquote.sty}{\RequirePackage{upquote}}{}
>>>
>>>
>>>
\ifthenelse{\boolean{Sweave@gin}}{\setkeys{Gin}{width=0.8\textwidth}}{}%
>>> \ifthenelse{\boolean{Sweave@ae}}{%
>>> \RequirePackage[T1]{fontenc}
>>> \RequirePackage{ae}
>>> }{}%
>>>
>>> \DefineVerbatimEnvironment{Sinput}{Verbatim}{fontshape=sl}
>>> \DefineVerbatimEnvironment{Soutput}{Verbatim}{}
>>> \DefineVerbatimEnvironment{Scode}{Verbatim}{fontshape=sl}
>>>
>>> \newenvironment{Schunk}{}{}
>>>
>>> \newcommand{\Sconcordance}[1]{%
>>> \ifx\pdfoutput\undefined%
>>> \csname newcount\endcsname\pdfoutput\fi%
>>> \ifcase\pdfoutput\special{#1}%
>>> \else\immediate\pdfobj{#1}\fi}
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 65
Date: Thu, 05 Apr 2012 10:28:42 -0700
From: Spencer Graves <spencer.graves@structuremonitoring.com>
To: Sarah Goslee <sarah.goslee@gmail.com>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] Best way to search r- functions and mailing list?
Message-ID: <4F7DD64A.1070005@structuremonitoring.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi, Sarah: You were correct: I failed to read the question with
sufficient care. Thanks for your original reply and for the
correction. Spencer
On 4/5/2012 10:11 AM, Sarah Goslee wrote:
> sos is a great way to search help pages, agreed. But the question is
> about functions AND mailing list archives, which requires an online
> solution. (See subject line.)
>
> Sarah
>
> On Thu, Apr 5, 2012 at 12:56 PM, Spencer Graves
> <spencer.graves@prodsyse.com> wrote:
>> The "sos" package is designed to search help pages only and
sort the results
>> by package. It includes a vignette describing how to get the results
as an
>> Excel file giving an efficient summary of which packages contain help
pages
>> of interest including the latest date updated, etc. I designed the
package
>> to be the quickest lit search for anything statistical, and I don't
know of
>> anything better. I may want to look elsewhere later, but I always
start a
>> lit search there. I suspect that others have not found it that useful
or
>> someone else would have mentioned it earlier on this thread ;-)
Spencer
>>
>>
>> On 4/5/2012 9:40 AM, Sarah Goslee wrote:
>>> I usually use http://www.rseek.org
>>>
>>> On Thu, Apr 5, 2012 at 11:36 AM, Jonathan
Greenberg<jgrn@illinois.edu>
>>> wrote:
>>>
>>>> R-helpers:
>>>>
>>>> It looks like http://finzi.psych.upenn.edu/search.html has
stopped
>>>> spidering the mailing lists -- this used to be my go-to site
for
>>>> searching for R solutions. Are there any good replacements for
this?
>>>> I want to be able to search both functions and mailing lists at
the
[[elided Yahoo spam]]
>>>>
>>>> --j
>>>>
------------------------------
Message: 66
Date: Thu, 5 Apr 2012 12:34:40 -0500
From: Yihui Xie <xie@yihui.name>
To: Alexander Shenkin <ashenkin@ufl.edu>
Cc: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID:
<CANROs4ctVvMt175in2Oy3Ax4L8h-ozNVDv6M9Ep_AXc1-6qeKQ@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Well, I do not think it is a good practice (in terms of reproducible
research) to keep on running Sweave in the same R session, because
your previous run and your current workspace could "pollute" your next
run. To make sure a document compiles on its own, it is better always
to start a new clean R session. If you run into speed issues (e.g. you
have chunks which require intensive computing), you can consider using
caching, for which the knitr package has reasonably good support.
Regards,
Yihui
--
Yihui Xie <xieyihui@gmail.com>
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA
On Thu, Apr 5, 2012 at 12:26 PM, Alexander Shenkin <ashenkin@ufl.edu>
wrote:
> Yep, I'm using RStudio, and have used Tinn-R in the past. RStudio does
> start a new R session when processing a sweave document via the RStudio
> GUI. In my case, this presented a problem for the reasons I stated
> before (i.e. that I need to run sweave in the main environment, not a
> new one). Hence, I'm running Sweave from the command line, and
> processing the tex doc via TeXworks (from MiKTeX). I sent the RStudio
> team a feature request to add an option to be able to run Sweave in the
> main environment (perhaps while maintaining the default of running it in
> a separate environment).
>
> Allie
>
------------------------------
Message: 67
Date: Thu, 05 Apr 2012 12:47:57 -0500
From: Alexander Shenkin <ashenkin@ufl.edu>
To: Yihui Xie <xie@yihui.name>
Cc: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID: <4F7DDACD.7050307@ufl.edu>
Content-Type: text/plain; charset=ISO-8859-1
Reproducibility is important, and as I mentioned in a previous email,
there are probably ways I could avoid running the entire script over and
over again with each sweave compilation. Still, relying on saved
workspaces, temporary files, or caches has some of the same issues as
working in the main environment. Specifically, you're not working
with the base data all the way through the final analyses each time you
run the sweave doc. Producing those workspaces, files, or caches
requires a run of the scripts. If those scripts have changed, or if your
data have changed, then your workspace, files, and/or cache is just as
out of date. Saving workspaces, files, and/or caches also requires care
that they're not saved after having been modified from the command line,
etc. I think that working with saved workspaces, files, and/or caches is
probably less prone to "pollution" than working in the main environment,
but it's far from failsafe.
As long as the final sweave doc is run with scripts from beginning to
end, getting the sweave doc up to snuff by working more quickly in the
main environment is acceptable IMHO. So is working with the other
methods above.
Best,
Allie
On 4/5/2012 12:34 PM, Yihui Xie wrote:
> Well, I do not think it is a good practice (in terms of reproducible
> research) to keep on running Sweave in the same R session, because
> your previous run and your current workspace could "pollute" your
next
> run. To make sure a document compiles on its own, it is better always
> to start a new clean R session. If you run into speed issues (e.g. you
> have chunks which require intensive computing), you can consider using
> cache, which the knitr package has reasonably good support.
>
> Regards,
> Yihui
> --
> Yihui Xie <xieyihui@gmail.com>
> Phone: 515-294-2465 Web: http://yihui.name
> Department of Statistics, Iowa State University
> 2215 Snedecor Hall, Ames, IA
>
>
>
> On Thu, Apr 5, 2012 at 12:26 PM, Alexander Shenkin <ashenkin@ufl.edu>
wrote:
>> Yep, I'm using RStudio, and have used Tinn-R in the past. RStudio
does
>> start a new R session when processing a sweave document via the RStudio
>> GUI. In my case, this presented a problem for the reasons I stated
>> before (i.e. that I need to run sweave in the main environment, not a
>> new one). Hence, I'm running Sweave from the command line, and
>> processing the tex doc via TeXworks (from MiKTeX). I sent the RStudio
>> team a feature request to add an option to be able to run Sweave in the
>> main environment (perhaps while maintaining the default of running it
in
>> a separate environment).
>>
>> Allie
>>
------------------------------
Message: 68
Date: Thu, 5 Apr 2012 11:03:00 -0700 (PDT)
From: "Adam D. I. Kramer" <adik-rhelp@ilovebacon.org>
To: r-help@r-project.org
Subject: [R] "too large for hashing"
Message-ID:
<alpine.DEB.2.00.1204051059400.25203@parser.ilovebacon.org>
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
Hello,
I'm doing some analysis on a rather large data set. In this case,
some simple commands are failing. For example, this one:
> x$eventtype <- factor(x$eventtype)
Error in unique.default(x) : length 1093574297 is too large for hashing
...I think this is a bug, because "hashing" should not be required for
the
"factor" function. Am I right? The whole column does not need to be
hashed,
only the unique keys. Sure, there is the potential to overflow the key
register, but this error should be thrown only if that occurs, no?
Cordially,
Adam D. I. Kramer, Ph.D.
Data Scientist, Facebook, Inc.
akramer@fb.com
------------------------------
Message: 69
Date: Thu, 05 Apr 2012 14:22:25 -0400
From: Duncan Murdoch <murdoch.duncan@gmail.com>
To: adik@ilovebacon.org
Cc: r-help@r-project.org
Subject: Re: [R] "too large for hashing"
Message-ID: <4F7DE2E1.1090000@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 05/04/2012 2:03 PM, Adam D. I. Kramer wrote:
> Hello,
>
> I'm doing some analysis on a rather large data set. In this case,
> some simple commands are failing. For example, this one:
>
> > x$eventtype<- factor(x$eventtype)
> Error in unique.default(x) : length 1093574297 is too large for hashing
>
> ...I think this is a bug, because "hashing" should not be
required for the
> "factor" function. Am I right? The whole column does not need to
be hashed,
> only the unique keys. Sure, there is the potential to overflow the key
> register, but this error should be thrown only if that occurs, no?
It looks as though the error is coming when unique() tries to determine
the unique levels in the argument, but really there's no way to answer
your question without more information. What type of object is
x$eventtype? Is it really 1093574297 elements long? How many unique
values does it have?
Duncan Murdoch
------------------------------
Message: 70
Date: Thu, 5 Apr 2012 13:22:48 -0500
From: Yihui Xie <xie@yihui.name>
To: Alexander Shenkin <ashenkin@ufl.edu>
Cc: r-help@r-project.org
Subject: Re: [R] Rgui maintains open file handles after Sweave error
Message-ID:
<CANROs4eZFw1hjsvQw20ozuZfscBdO3fK5pbff3cpX-dyNjF5Fw@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Things are not that gory with knitr. You only need to use the option
cache=TRUE and it will take care of most of the things you mentioned.
For example, objects in a chunk are automatically saved and lazy
loaded; when code is modified, old cache will be automatically removed
and new cache will be built.
You can take a look at the Cache section in the manual:
https://github.com/downloads/yihui/knitr/knitr-manual.pdf And more
explanations for using cache safely here:
http://yihui.name/knitr/demo/cache/
Regards,
Yihui
--
Yihui Xie <xieyihui@gmail.com>
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA
On Thu, Apr 5, 2012 at 12:47 PM, Alexander Shenkin <ashenkin@ufl.edu>
wrote:
> Reproducibility is important, and as I mentioned in a previous email,
> there are probably ways I could avoid running the entire script over and
> over again with each sweave compilation. Still, relying on saved
> workspaces, temporary files or caches still has some of the issues that
> working in the main environment does. Specifically, you're not working
> with the base data all the way through the final analyses each time you
> run the sweave doc. To produce those workspace, files or caches
> requires a run of scripts. If those scripts have changed, or if your
> data have changed, then your workspace, files and/or cache is then just
> as out of date as your workspace. Saving workspaces, files and/or
> caches still requires care that they're not saved after having been
> modified by the command line, etc. I think that working with saved
> workspaces, files and/or caches is probably less prone to "pollution"
> than working in the main environment, but it's far from failsafe.
>
> As long as the final sweave doc is run with scripts from beginning to
> end, getting the sweave doc up to snuff by working more quickly in the
> main environment is acceptable IMHO. So is working with the other
> methods above.
>
> Best,
> Allie
>
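For reference, a cached chunk in knitr's Rnw syntax looks like this (chunk
label hypothetical):

    <<prep-data, cache=TRUE>>=
    # re-run only when the code in this chunk changes
    big <- replicate(100, rnorm(1e4))
    @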
------------------------------
Message: 71
Date: Thu, 5 Apr 2012 14:34:51 -0400
From: John Laing <john.laing@gmail.com>
To: Prof Brian Ripley <ripley@stats.ox.ac.uk>
Cc: r-help@r-project.org
Subject: Re: [R] API Baddperiods in RBloomberg
Message-ID:
<CAA3Wa=tpf7e5YnCBJadJtzND+YdpM=iTB+zYGGdrXgciy-hPAA@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
The binaries for RBloomberg are hosted on findata.org. The package is
only useful in combination with a Bloomberg terminal, but users who
have access to one should not be deterred by its absence from CRAN.
John
On Thu, Apr 5, 2012 at 12:18 PM, Prof Brian Ripley
<ripley@stats.ox.ac.uk> wrote:
> On 05/04/2012 08:54, arvanitis.christos wrote:
>>
>> Hi to all,
>>
>> Do you know how I can use Baddperiods from RBloomberg
>
>
> Most of us cannot even use 'RBloomberg': it has been removed at the
request
> of Bloomberg's lawyers.
>
>
> --
> Brian D. Ripley,                  ripley@stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,             Tel:  +44 1865 272861 (self)
> 1 South Parks Road,                     +44 1865 272866 (PA)
> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
>
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 72
Date: Thu, 5 Apr 2012 11:32:28 -0700 (PDT)
From: apcoble <coble.adam@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] Problem with NA data when computing standard error
Message-ID: <1333650748471-4535672.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I found a rather easy solution that circumvents this problem by:
1) creating your own length function using na.omit function
2) calculating variance using tapply
3) calculating length using new length function
4) calculating square root of variance by length
*Code from LeCzar:*
object1<-as.data.frame.table(tapply(Data[Year=="1999"],na.rm=T,list(Group[Year=="1999"],Season[Year=="1999"]),mean))
object2<-as.data.frame.table(tapply(Data[Year=="1999"],na.rm=T,list(Group[Year=="1999"],Season[Year=="1999"]),se))
*Recommended code and round-about way to calculate se:*
1)
length.NA<-function(x)length(na.omit(x))
2)
object2.var<-as.data.frame.table(tapply(Data[Year=="1999"],na.rm=T,list(Group[Year=="1999"],Season[Year=="1999"]),var,na.rm=T))
3)
object2.length<-as.data.frame.table(tapply(Data[Year=="1999"],na.rm=T,list(Group[Year=="1999"],Season[Year=="1999"]),length.NA))
4)
sqrt(object2.var/object2.length)
This should give you the SE for your parameters. For some reason, the revised
length function doesn't work inside an SE function I created, but it works
well when length is calculated separately.
--
View this message in context:
http://r.789695.n4.nabble.com/Problem-with-NA-data-when-computing-standard-error-tp855227p4535672.html
Sent from the R help mailing list archive at Nabble.com.
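A compact alternative sketch, folding the whole calculation into one
NA-aware function (variable names follow the thread):

    se.na <- function(x) {
      x <- na.omit(x)
      sqrt(var(x) / length(x))    # standard error from the non-missing values
    }
    tapply(Data[Year == "1999"],
           list(Group[Year == "1999"], Season[Year == "1999"]),
           se.na)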
------------------------------
Message: 73
Date: Thu, 5 Apr 2012 14:41:41 -0300
From: Adam Harding <harding@dal.ca>
To: r-help@r-project.org
Subject: [R] Normalizing linear regression slope to intercept
Message-ID: <B01F19A7-2093-4BBB-9B63-8501F5CD576A@dal.ca>
Content-Type: text/plain
I am wondering if it is possible to normalize the slope of a linear regression to
its intercept to allow for valid between-group comparisons.
Here is the scenario:
I need to compare the slopes of biomass increase among NAFO divisions of
Northwest Atlantic cod. However, the initial division biomass is a confounding
factor that may influence the slope of the regression model. How can I normalize
the slope to the initial biomass of a given division (e.g. 2J3KL)? Normalizing
will allow me to compare the slope of one division to another (e.g. 3NO) with
respect to distance between the divisions without the confounding effect of
division size.
Thanks
Adam Harding
--
Adam Harding
Marine Biology & Oceanography
Dalhousie University | Halifax NS
[[alternative HTML version deleted]]
------------------------------
Message: 74
Date: Thu, 5 Apr 2012 12:07:15 -0700 (PDT)
From: "Adam D. I. Kramer" <adik-rhelp@ilovebacon.org>
To: Duncan Murdoch <murdoch.duncan@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] "too large for hashing"
Message-ID:
<alpine.DEB.2.00.1204051203220.25203@parser.ilovebacon.org>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Thanks for your response, Duncan.
x$eventtype is a "character" vector (because the same hashing error
occurred when I tried to read.table() in the first place specifying
colClasses = c(..., "factor", ...)).
x really is that long:
> dim(x)
[1] 1093574297 12
...the x$eventtype field has three unique values.
(I'm currently using a workaround of making a numeric column based on a
string of ifelse() and then setting class() <- factor and then setting the
labels manually.)
--Adam
On Thu, 5 Apr 2012, Duncan Murdoch wrote:
> On 05/04/2012 2:03 PM, Adam D. I. Kramer wrote:
>> Hello,
>>
>> I'm doing some analysis on a rather large data set. In this
case,
>> some simple commands are failing. For example, this one:
>>
>> > x$eventtype<- factor(x$eventtype)
>> Error in unique.default(x) : length 1093574297 is too large for hashing
>>
>> ...I think this is a bug, because "hashing" should not be
required for the
>> "factor" function. Am I right? The whole column does not need
to be hashed,
>> only the unique keys. Sure, there is the potential to overflow the key
>> register, but this error should be thrown only if that occurs, no?
>
> It looks as though the error is coming when unique() tries to determine the
> unique levels in the argument, but really there's no way to answer your
> question without more information. What type of object is x$eventtype? It
> is really 1093574297 elements long? How many unique values does it have?
>
> Duncan Murdoch
>
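For reference, a sketch of the workaround Adam describes, with hypothetical
level names; it relies on a factor being an integer vector carrying levels
and class attributes:

    # ev: the long character column (e.g. x$eventtype); level names made up here
    codes <- ifelse(ev == "start", 1L, ifelse(ev == "stop", 2L, 3L))
    f <- structure(codes, levels = c("start", "stop", "other"), class = "factor")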
------------------------------
Message: 75
Date: Thu, 5 Apr 2012 19:32:20 +0000
From: Julio Sergio <juliosergio@gmail.com>
To: <r-help@stat.math.ethz.ch>
Subject: [R] A kind of set operation in R
Message-ID: <loom.20120405T211829-990@post.gmane.org>
Content-Type: text/plain; charset="us-ascii"
I have an ordered "set" of numbers, represented by a vector, say
> X <- c(10:13, 17,18)
> X
[1] 10 11 12 13 17 18
then I have a "sub-set" of X, say
> Y <- c(11,12,17,18)
Is there a simple way in R to have a logical vector (parallel to X) indicating
what elements of X are in Y, i.e.,
> Inclusion
[1] FALSE TRUE TRUE FALSE TRUE TRUE
I'm trying to avoid looping over both vectors to produce the tests. Do you
have any comments on this?
Thanks,
--Sergio.
------------------------------
Message: 76
Date: Thu, 5 Apr 2012 15:37:05 -0400
From: "Richard M. Heiberger" <rmh@temple.edu>
To: Julio Sergio <juliosergio@gmail.com>
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] A kind of set operation in R
Message-ID:
<CAGx1TMBGEqQRiaPqXPD3fPCLEVzaXqyOFi_4RojZTiLcB0-25w@mail.gmail.com>
Content-Type: text/plain
At least two ways
> (!is.na(match(X, Y)))
[1] FALSE  TRUE  TRUE FALSE  TRUE  TRUE
> X %in% Y
[1] FALSE  TRUE  TRUE FALSE  TRUE  TRUE
On Thu, Apr 5, 2012 at 3:32 PM, Julio Sergio <juliosergio@gmail.com>
wrote:
> I have an ordered "set" of numbers, represented by a vector, say
>
> > X <- c(10:13, 17,18)
> > X
> [1] 10 11 12 13 17 18
>
> then I have a "sub-set" of X, say
>
> > Y <- c(11,12,17,18)
>
> Is there a simple way in R to have a logical vector (parallel to X)
> indicating
> what elements of X are in Y, i.e.,
>
> > Inclusion
> [1] FALSE TRUE TRUE FALSE TRUE TRUE
>
> I'm trying to avoid looping over both vectors to produce the tests. Do
you
> have
> any comments on this?
>
> Thanks,
>
> --Sergio.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html<http://www.r-project.org/posting-guide.html>
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 77
Date: Thu, 5 Apr 2012 12:40:52 -0700
From: Peter Meilstrup <peter.meilstrup@gmail.com>
To: r-help@r-project.org
Subject: [R] indexing data.frame columns
Message-ID:
<CAJoaRhYh9buVC+mV8VkGygY0Z=4YDNDv-ga_pmNoQd8+9DxKDw@mail.gmail.com>
Content-Type: text/plain
Consider the data.frame:
df <- data.frame(A = c(1,4,2,6,7,3,6), B = c(3,7,2,7,3,5,4),
                 C = c(2,7,5,2,7,4,5),
                 index = c("A","B","A","C","B","B","C"))
I want to select the column specified in 'index' for every row of
'df', to
get
goal <- c(1, 7, 2, 2, 3, 5, 5)
This sounds a lot like the indexing-by-a-matrix you can do with arrays;
df[cbind(1:nrow(df), df$index)]
but this returns values that are all characters where I want numbers
(it seems that indexing by a matrix isn't well supported for data.frames).
What is a better way to perform this selection operation?
Peter
[[alternative HTML version deleted]]
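A minimal sketch: convert the numeric columns to a matrix first; matrix
indexing then keeps the numeric type:

    m <- as.matrix(df[, c("A", "B", "C")])
    goal <- m[cbind(seq_len(nrow(df)), match(df$index, colnames(m)))]
    goal
    # [1] 1 7 2 2 3 5 5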
------------------------------
Message: 78
Date: Thu, 5 Apr 2012 12:41:19 -0700 (PDT)
To: Thomas Lumley <tlumley@uw.edu>, Jason Connor
<jconnor@alumni.cmu.edu>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] Difference in Kaplan-Meier estimates plus CI
Message-ID:
<1333654879.27292.YahooMailNeo@web125806.mail.ne1.yahoo.com>
Content-Type: text/plain
Hi, I have the same question as Jason on how to estimate the standard error and
construct CI around S_1(t) - S_2(t). From summary.survfit(obj), how can I
combine the 2 survival estimates and the associated standard errors, to get an
estimate of standard error for the difference / then calculate CI?
For example, my summary(obj) gave me:
treatment=0
    time  n.risk n.event survival std.err lower 95% CI upper 95% CI
 10.0000 69.0000 28.0000   0.7313  0.0438       0.6504       0.8223

treatment=1
    time  n.risk n.event survival std.err lower 95% CI upper 95% CI
 10.0000 86.0000 10.0000   0.9055  0.0285       0.8514       0.9631
The difference S_1(t=10) - S_2(t=10) = 0.9055 - 0.7313 = 0.1742. How do I
calculate the standard error of this difference based on SE_1 = 0.0285 and
SE_2 = 0.0438, and then the 95% CI for the difference?
Thanks
John
________________________________
From: Thomas Lumley <tlumley@uw.edu>
To: Jason Connor <jconnor@alumni.cmu.edu>
Cc: r-help@r-project.org
Sent: Wednesday, March 7, 2012 10:58 AM
Subject: Re: [R] Difference in Kaplan-Meier estimates plus CI
On Thu, Mar 8, 2012 at 4:50 AM, Jason Connor <jconnor@alumni.cmu.edu>
wrote:
> I thought this would be trivial, but I can't find a package or function
> that does this.
>
> I'm hoping someone can guide me to one.
>
> Imagine a simple case with two survival curves (e.g. treatment &
control).
>
> I just want to calculate the difference in KM estimates at a specific time
> point (e.g. 1 year) plus the estimate's 95% CI. The former is
> straightforward, but the estimates not so much.
>
> I know methods exist such as Parzen, Wei, and Ying, but was surprised not
> to find a package that included this.
>
> Before I code it up, I thought I'd ask if I was just missing it
somewhere.
summary.survfit() in the survival package will give you the point
estimate and standard error, and then combining these into a
difference and confidence interval for the difference is easy.
-thomas
--
Thomas Lumley
Professor of Biostatistics
University of Auckland
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]
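Assuming the two treatment groups are independent, a sketch of the usual
combination (the variance of a difference of independent estimates is the
sum of the two variances):

    d  <- 0.9055 - 0.7313                 # difference in survival at t = 10
    se <- sqrt(0.0285^2 + 0.0438^2)       # about 0.0523
    d + c(-1, 1) * qnorm(0.975) * se      # 95% CI, roughly (0.072, 0.277)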
------------------------------
Message: 79
Date: Thu, 5 Apr 2012 19:42:34 +0000
From: Julio Sergio <juliosergio@gmail.com>
To: <r-help@stat.math.ethz.ch>
Subject: Re: [R] A kind of set operation in R
Message-ID: <loom.20120405T214151-335@post.gmane.org>
Content-Type: text/plain; charset="us-ascii"
Richard M. Heiberger <rmh <at> temple.edu> writes:
>
> At least two ways
>
> > (!is.na(match(X, Y)))
> [1] FALSE TRUE TRUE FALSE TRUE TRUE
> > X %in% Y
> [1] FALSE TRUE TRUE FALSE TRUE TRUE
Thanks Richard!
--Sergio.
------------------------------
Message: 80
Date: Thu, 5 Apr 2012 21:48:47 +0200
From: Berend Hasselman <bhh@xs4all.nl>
To: Julio Sergio <juliosergio@gmail.com>
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] A kind of set operation in R
Message-ID: <9054AE00-34ED-41C6-B050-F38F20A9CC3C@xs4all.nl>
Content-Type: text/plain; charset="us-ascii"
On 05-04-2012, at 21:32, Julio Sergio wrote:
> I have an ordered "set" of numbers, represented by a vector, say
>
>> X <- c(10:13, 17,18)
>> X
> [1] 10 11 12 13 17 18
>
> then I have a "sub-set" of X, say
>
>> Y <- c(11,12,17,18)
>
> Is there a simple way in R to have a logical vector (parallel to X)
indicating
> what elements of X are in Y, i.e.,
>
>> Inclusion
> [1] FALSE TRUE TRUE FALSE TRUE TRUE
>
> I'm trying to avoid looping over both vectors to produce the tests. Do
you have
> any comments on this?
Try
X %in% Y
You could also have a look at match
Berend
------------------------------
Message: 81
Date: Thu, 5 Apr 2012 14:00:30 -0600
From: Greg Snow <538280@gmail.com>
To: arunkumar1111 <akpbond007@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Histogram classwise
Message-ID:
<CAFEqCdxVpxMHe4RbGRs9djowucyzgqijW8vw7=whJLCqyogx_A@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
You might want to look at the lattice or ggplot2 packages, both of
which can create a graph for each of the classes.
On Tue, Apr 3, 2012 at 6:20 AM, arunkumar1111 <akpbond007@gmail.com>
wrote:
> Hi
> I have data organized class-wise. I want to create a histogram for each
> class without using a for loop, as that takes a long time.
> My data looks like this:
>
> x       class
> 27      1
> 93      3
> 65      5
> 1       2
> 69      5
> 2       1
> 92      4
> 49      5
> 55      4
> 46      1
> 51      3
> 100     4
>
>
>
>
> -----
> Thanks in Advance
>        Arun
> --
> View this message in context:
http://r.789695.n4.nabble.com/Histogram-classwise-tp4528624p4528624.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Gregory (Greg) L. Snow Ph.D.
538280@gmail.com
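A minimal sketch of the lattice approach with the data above:

    library(lattice)
    dat <- data.frame(x = c(27, 93, 65, 1, 69, 2, 92, 49, 55, 46, 51, 100),
                      class = c(1, 3, 5, 2, 5, 1, 4, 5, 4, 1, 3, 4))
    histogram(~ x | factor(class), data = dat)   # one panel per class
    # ggplot2 equivalent:
    # library(ggplot2); ggplot(dat, aes(x)) + geom_histogram() + facet_wrap(~ class)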
------------------------------
Message: 82
Date: Thu, 05 Apr 2012 16:20:08 -0400
From: John C Nash <nashjc@uottawa.ca>
To: r-help@R-project.org
Subject: [R] Appropriate method for sharing data across functions
Message-ID: <4F7DFE78.3090904@uottawa.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
In trying to streamline various optimization functions, I would like to have
a scratch pad of working data that is shared across a number of functions.
These can be called from different levels within some wrapper functions for
maximum likelihood and other such computations. I'm sure there are other
applications that could benefit from this.
Below are two approaches. One uses the <<- assignment to a structure I call
OPCON. The other attempts to create an environment with this name, but fails.
Though I have looked at a number of references, I have so far not found an
adequate description of how to specify where the OPCON environment is located.
(Both the green and blue books do not cover this topic, at least not under
"environment" in the index.)
Is there a recommended approach to this? I realize I could use argument lists,
but they get long and tedious with the number of items I may need to pass,
though passing the OPCON structure in and out might be the proper way. An
onAttach() approach was suggested by Paul Gilbert and tried, but it has so far
not succeeded and, unfortunately, does not seem to be usable from source(),
i.e., cannot be interpreted but must be built first.
JN
Example using <<-
rm(list=ls())
optstart<-function(npar){ # create structure for optimization computations
# npar is number of parameters ?? test??
OPCON<<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,npar), FNSCALE=1,
KFN=0, KGR=0, KHESS=0)
# may be other stuff
ls(OPCON)
}
add1<-function(){
OPCON$KFN<<-1+OPCON$KFN
test<-OPCON$KFN
}
OPCON<<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,4), FNSCALE=1,
KFN=0, KGR=0, KHESS=0)
ls(OPCON)
print(add1())
print(add1())
print(ls.str())
rm(OPCON) # Try to remove the scratchpad
print(ls())
tmp<-readline("Now try from within a function")
setup<-optstart(4) # Need to sort out how to set this up appropriately
cat("setup =")
print(setup)
print(add1())
print(add1())
rm(OPCON) # Try to remove the scratchpad
=====================
Example (failing) using new.env:
rm(list=ls())
optstart<-function(npar){ # create structure for optimization computations
# npar is number of parameters ?? test??
OPCON<-new.env(parent=globalenv())
OPCON<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,npar), FNSCALE=1,
KFN=0, KGR=0, KHESS=0)
# may be other stuff
ls(OPCON)
}
add1<-function(){
OPCON$KFN<-1+OPCON$KFN
test<-OPCON$KFN
}
OPCON<-new.env(parent=globalenv())
OPCON<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,4), FNSCALE=1,
KFN=0, KGR=0, KHESS=0)
ls(OPCON)
print(add1())
print(add1())
print(ls.str())
rm(OPCON) # Try to remove the scratchpad
print(ls())
tmp<-readline("Now try from within a function")
setup<-optstart(4) # Need to sort out how to set this up appropriately
cat("setup =")
print(setup)
print(add1())
print(add1())
rm(OPCON) # Try to remove the scratchpad
------------------------------
Message: 83
Date: Thu, 5 Apr 2012 15:32:16 -0500
From: Hadley Wickham <hadley@rice.edu>
To: nashjc@uottawa.ca
Cc: r-help@r-project.org
Subject: Re: [R] Appropriate method for sharing data across functions
Message-ID:
<CABdHhvHwYBrWSQAGkP58_VEMy+DK1zD1q0dbiOedYCLnrB4udQ@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Why not pass around a reference class?
Hadley
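As a rough sketch of that idea (class, field, and method names are invented
for illustration): reference class objects are mutated in place, so shared
state never has to be passed back out of the functions that update it.
OpCon <- setRefClass("OpCon",
    fields  = list(KFN = "numeric"),
    methods = list(
        bump = function() {
            KFN <<- KFN + 1    # fields are updated in place
            invisible(KFN)
        }))
oc <- OpCon$new(KFN = 0)
oc$bump(); oc$bump()
oc$KFN   # 2 -- the counter persisted across calls, no global <<- needed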
On Thu, Apr 5, 2012 at 3:20 PM, John C Nash <nashjc@uottawa.ca>
wrote:
> In trying to streamline various optimization functions, I would like to
> have
> a scratch pad of working data that is shared across a number of functions.
> These can be called from different levels within some wrapper functions for
> maximum likelihood and other such computations. I'm sure there are
> other
> applications that could benefit from this.
>
> Below are two approaches. One uses the <<- assignment to a structure
> I call
> OPCON. The other attempts to create an environment with this name, but
> fails. Though I have looked at a number of references, I have so far not
> found an adequate description of how to specify where the OPCON environment
> is located. (Both the green and blue books do not cover this topic, at
> least
> not under "environment" in the index.)
>
> Is there a recommended approach to this? I realize I could use argument
> lists, but they get long and tedious with the number of items I may need to
> pass, though passing the OPCON structure in and out might be the proper
> way.
> An onAttach() approach was suggested by Paul Gilbert and tried, but it has
> so far not succeeded and, unfortunately, does not seem to be usable from
> source() i.e., cannot be interpreted but must be built first.
>
> JN
>
> Example using <<-
>
> rm(list=ls())
> optstart<-function(npar){ # create structure for optimization
> computations
>   # npar is number of parameters ?? test??
>   OPCON<<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,npar), FNSCALE=1,
>       KFN=0, KGR=0, KHESS=0)
>   # may be other stuff
>   ls(OPCON)
> }
>
> add1<-function(){
>   OPCON$KFN<<-1+OPCON$KFN
>   test<-OPCON$KFN
> }
>
> OPCON<<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,4), FNSCALE=1,
>       KFN=0, KGR=0, KHESS=0)
> ls(OPCON)
> print(add1())
> print(add1())
> print(ls.str())
>
> rm(OPCON) # Try to remove the scratchpad
> print(ls())
>
> tmp<-readline("Now try from within a function")
> setup<-optstart(4) # Need to sort out how to set this up appropriately
> cat("setup =")
> print(setup)
>
> print(add1())
> print(add1())
>
> rm(OPCON) # Try to remove the scratchpad
>
> =====================
> Example (failing) using new.env:
>
> rm(list=ls())
> optstart<-function(npar){ # create structure for optimization
> computations
>   # npar is number of parameters ?? test??
>   OPCON<-new.env(parent=globalenv())
>   OPCON<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,npar), FNSCALE=1,
>       KFN=0, KGR=0, KHESS=0)
>   # may be other stuff
>   ls(OPCON)
> }
>
> add1<-function(){
>   OPCON$KFN<-1+OPCON$KFN
>   test<-OPCON$KFN
> }
>
> OPCON<-new.env(parent=globalenv())
> OPCON<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,4), FNSCALE=1,
>       KFN=0, KGR=0, KHESS=0)
> ls(OPCON)
> print(add1())
> print(add1())
> print(ls.str())
>
> rm(OPCON) # Try to remove the scratchpad
> print(ls())
>
> tmp<-readline("Now try from within a function")
> setup<-optstart(4) # Need to sort out how to set this up appropriately
> cat("setup =")
> print(setup)
>
> print(add1())
> print(add1())
>
> rm(OPCON) # Try to remove the scratchpad
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/
------------------------------
Message: 84
Date: Thu, 5 Apr 2012 15:33:17 -0500
From: Erin Hodgess <erinm.hodgess@gmail.com>
To: R help <r-help@stat.math.ethz.ch>
Subject: [R] producing vignettes
Message-ID:
<CACxE24nMGebe+YOgZc=MLHD=iX+FbF9J0BO=RkV2gVqDi=78sA@mail.gmail.com>
Content-Type: text/plain; charset="ISO-8859-1"
Hi R People:
What is the best way to learn how to produce vignettes, please?
Any help much appreciated.
Thanks,
Erin
--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: erinm.hodgess@gmail.com
------------------------------
Message: 85
Date: Thu, 5 Apr 2012 14:37:39 -0600
From: Greg Snow <538280@gmail.com>
To: John Sorkin <jsorkin@grecc.umaryland.edu>
Cc: r-help@r-project.org
Subject: Re: [R] identify with mfcol=c(1,2)
Message-ID:
<CAFEqCdybX=44_jr8uOrCzX5B0M0u4rVHbxe6rvqoWeXdhbHyCg@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
I tried your code, first I removed the reference to the global
variable data$Line, then it works if I finish identifying by either
right clicking (I am in windows) and choosing stop, or using the stop
menu. It does as you say if I press escape or use the stop sign
button (both stop the whole evaluation rather than just the
identifying).
On Tue, Apr 3, 2012 at 8:52 AM, John Sorkin <jsorkin@grecc.umaryland.edu>
wrote:
> I would like to have a figure with two graphs. This is easily accomplished
> using mfcol:
>
> oldpar <- par(mfcol=c(1,2))
> plot(x,y)
> plot(z,x)
> par(oldpar)
>
> I run into trouble if I try to use identify with the two plots. If, after
> identifying points on my first graph, I hit the ESC key or the stop menu bar
> of my R session, the system stops the identification process but fails to give
> me my second graph. Is there a way to allow for the identification of points
> when one is plotting two graphs in a single graph window? My code follows.
>
> plotter <- function(first,second) {
>  # Allow for two plots in one graph window.
>  oldpar<-par(mfcol=c(1,2))
>
>  # Bland-Altman plot.
>  plot((second+first)/2,second-first)
>  abline(0,0)
>  # Allow for identification of extreme values.
>  BAzap<-identify((second+first)/2,second-first,labels =
>    seq_along(data$Line))
>  print(BAzap)
>
>  # Plot second as a function of first value.
>  plot(first,second,main="Limin vs. Limin",xlab="First
>    (cm^2)",ylab="Second (cm^3)")
>  # Add identity line.
>  abline(0,1,lty=2,col="red")
>  # Allow for identification of extreme values.
>  zap<-identify(first,second,labels = seq_along(data$Line))
>  print(zap)
>  # Add regression line.
>  fit1<-lm(first~second)
>  print(summary(fit1))
>  abline(fit1)
>  print(summary(fit1)$sigma)
>
>  # reset par to default values.
>  par(oldpar)
>
> }
> plotter(first,second)
>
>
> Thanks,
> John
>
>
>
>
>
>
> John David Sorkin M.D., Ph.D.
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for ...{{dropped:15}}
------------------------------
Message: 86
Date: Thu, 5 Apr 2012 14:41:31 -0600
From: ilai <keren@math.montana.edu>
To: Peter Meilstrup <peter.meilstrup@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] indexing data.frame columns
Message-ID:
<CAK4FJ1B7TxNOLM=e==UAeju+jP_MwqzGhsR-0u-AznAtBpFNWQ@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Thu, Apr 5, 2012 at 1:40 PM, Peter Meilstrup
<peter.meilstrup@gmail.com> wrote:
> Consider the data.frame:
>
> df <- data.frame(A = c(1,4,2,6,7,3,6), B = c(3,7,2,7,3,5,4),
>                  C = c(2,7,5,2,7,4,5), index = c("A","B","A","C","B","B","C"))
>
> I want to select the column specified in 'index' for every row of
> 'df', to
> get
>
> goal <- c(1, 7, 2, 2, 3, 5, 5)
>
> This sounds a lot like the indexing-by-a-matrix you can do with arrays;
>
> df[cbind(1:nrow(df), df$index)]
>
> but this returns me values that are all characters where I want numbers.
str(df[,-4][cbind(1:nrow(df),df$index)])
num [1:7] 1 7 2 2 3 5 5
> (it seems that indexing by an array isn't well supported for
> data.frames.)
No, it's just that the index column in df is a factor, so as.matrix(df)
returns a matrix of characters.
>
> What is a better way to perform this selection operation?
>
Not that I know of
Cheers
> Peter
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 87
Date: Thu, 5 Apr 2012 14:42:29 -0600
From: Greg Snow <538280@gmail.com>
To: John Sorkin <jsorkin@grecc.umaryland.edu>
Cc: r-help@r-project.org
Subject: Re: [R] meaning of sigma from LM, is it the same as RMSE
Message-ID:
<CAFEqCdzESo032vipoJcu-1O9DE6r-9ngyTV=CoUrq0jPkuioxg@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
If you look at the code for summary.lm the line for the value of sigma is:
ans$sigma <- sqrt(resvar)
and above that we can see that resvar is defined as:
resvar <- rss/rdf
If that is not sufficient you can find how rss and rdf are computed in
the code as well.
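A quick numerical check of that relationship, as a sketch with made-up data:
set.seed(42)
x <- 1:20
y <- 2 * x + rnorm(20)
fit <- lm(y ~ x)
rss <- sum(residuals(fit)^2)   # residual sum of squares
rdf <- df.residual(fit)        # residual degrees of freedom, n - p
all.equal(summary(fit)$sigma, sqrt(rss / rdf))   # TRUE
So sigma is the root mean square error computed with n - p rather than n in
the denominator.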
On Tue, Apr 3, 2012 at 8:56 AM, John Sorkin <jsorkin@grecc.umaryland.edu>
wrote:
> Is the sigma from a lm, i.e.
>
> fit1 <- lm(y~x)
> summary(fit1)
> summary(fit1)$sigma
>
> the RMSE (root mean square error)
>
> Thanks,
> John
>
> John David Sorkin M.D., Ph.D.
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for ...{{dropped:14}}
------------------------------
Message: 88
Date: Thu, 05 Apr 2012 22:42:44 +0200
From: Milan Bouchet-Valat <nalimilan@club.fr>
To: Peter Meilstrup <peter.meilstrup@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] indexing data.frame columns
Message-ID: <1333658564.26072.72.camel@milan>
Content-Type: text/plain; charset="UTF-8"
On Thursday 05 April 2012 at 12:40 -0700, Peter Meilstrup wrote:
> Consider the data.frame:
>
> df <- data.frame(A = c(1,4,2,6,7,3,6), B = c(3,7,2,7,3,5,4),
>                  C = c(2,7,5,2,7,4,5), index = c("A","B","A","C","B","B","C"))
>
> I want to select the column specified in 'index' for every row of
> 'df', to
> get
>
> goal <- c(1, 7, 2, 2, 3, 5, 5)
>
> This sounds a lot like the indexing-by-a-matrix you can do with arrays;
>
> df[cbind(1:nrow(df), df$index)]
>
> but this returns me values that are all characters where I want numbers.
> (it seems that indexing by an array isn't well supported for
> data.frames.)
>
> What is a better way to perform this selection operation?
I think the problem is that the data frame is converted to a matrix
under the hood, so numeric values are converted to characters (since the
reverse is not possible). You can either do:
as.numeric(df[cbind(1:nrow(df), df$index)])
[1] 1 7 2 2 3 5 5
Or avoid the conversion by excluding the character column beforehand:
df[-ncol(df)][cbind(1:nrow(df), df$index)]
[1] 1 7 2 2 3 5 5
Regards
------------------------------
Message: 89
Date: Thu, 5 Apr 2012 16:44:33 -0400
From: "R. Michael Weylandt" <michael.weylandt@gmail.com>
To: Erin Hodgess <erinm.hodgess@gmail.com>
Cc: R help <r-help@stat.math.ethz.ch>
Subject: Re: [R] producing vignettes
Message-ID:
<CAAmySGOVHD5_W83vrb-9USOZ3VwX9eizw1vLrOJw1kn25LMApw@mail.gmail.com>
Content-Type: text/plain; charset="ISO-8859-1"
Probably to pull down the source of one and study it directly: if you
already know LaTeX and R, Sweave isn't much more to master. zoo does
its vignettes nicely, but any package with vignettes should be a pretty
good model.
Michael
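If it helps, a minimal Sweave skeleton of a vignette (file and package
names here are illustrative; the chunk between <<>>= and @ is R code that
runs when the vignette is built):
\documentclass{article}
%\VignetteIndexEntry{An introduction to somepkg}
\begin{document}
Some explanatory LaTeX text, then an R chunk:
<<example>>=
summary(rnorm(10))
@
\end{document}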
On Thu, Apr 5, 2012 at 4:33 PM, Erin Hodgess <erinm.hodgess@gmail.com>
wrote:
> Hi R People:
>
> What is the best way to learn how to produce vignettes, please?
>
> Any help much appreciated.
>
> Thanks,
> Erin
>
>
> --
> Erin Hodgess
> Associate Professor
> Department of Computer and Mathematical Sciences
> University of Houston - Downtown
> mailto: erinm.hodgess@gmail.com
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 90
Date: Thu, 5 Apr 2012 14:57:56 -0600
From: Greg Snow <538280@gmail.com>
To: Recher She <rrrecher.she@gmail.com>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] How does predict.loess work?
Message-ID:
<CAFEqCdxEVCnq82F16J84uFxo-pd-PqekB3fahFMG1Mqyu3RJnA@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Run the examples for the "loess.demo" function in the TeachingDemos
package to get a better understanding of what goes into the loess
predictions.
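For instance, assuming the package is installed:
install.packages("TeachingDemos")   # once, if not already present
library(TeachingDemos)
example(loess.demo)                 # runs the interactive demo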
On Tue, Apr 3, 2012 at 2:12 PM, Recher She <rrrecher.she@gmail.com>
wrote:
> Dear R community,
>
> I am trying to understand how the predict function, specifically, the
> predict.loess function works.
>
> I understand that the loess function calculates regression parameters at
> each data point in 'data'.
>
> lo <- loess ( y~x, data)
>
> p <- predict (lo, newdata)
>
> I understand that the predict function predicts values for
> 'newdata'
> according to the loess regression parameters. How does predict.loess do
> this in the case that 'newdata' is different from the original data
> x? How
> does the interpolation take place?
>
> Thank you.
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Gregory (Greg) L. Snow Ph.D.
538280@gmail.com
------------------------------
Message: 91
Date: Thu, 5 Apr 2012 21:33:54 +0000
From: Julio Sergio <juliosergio@gmail.com>
To: <r-help@stat.math.ethz.ch>
Subject: Re: [R] A kind of set operation in R
Message-ID: <loom.20120405T233311-987@post.gmane.org>
Content-Type: text/plain; charset="us-ascii"
Berend Hasselman <bhh <at> xs4all.nl> writes:
> Try
>
> X %in% Y
>
> You could also have a look at match
>
> Berend
>
>
Thanks Berend!
--Sergio.
------------------------------
Message: 92
Date: Thu, 5 Apr 2012 17:07:57 -0500
From: Dirk Eddelbuettel <edd@debian.org>
To: Julio Sergio <juliosergio@gmail.com>
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] A kind of set operation in R
Message-ID: <20350.6077.191565.917039@max.nulle.part>
Content-Type: text/plain; charset="us-ascii"
Julio,
Nobody mentioned the _set_ operations union(), intersect(), setdiff(),
which are described under 'help(union)' (and the other names, of course)
R> X <- c(10:13, 17,18)
R> Y <- c(11,12,17,18)
R> intersect(X, Y) # gives the actual values
[1] 11 12 17 18
R> X %in% intersect(X, Y) # use %in% to map to bool as desired
[1] FALSE TRUE TRUE FALSE TRUE TRUE
R>
Oh, and no need to reply on-list to thousands of list subscribers if you
found this helpful. These one-line 'thanks' messages are not that
universally useful, even though they are very polite :)
Dirk
--
R/Finance 2012 Conference on May 11 and 12, 2012 at UIC in Chicago, IL
See agenda, registration details and more at http://www.RinFinance.com
------------------------------
Message: 93
Date: Thu, 5 Apr 2012 15:42:32 -0700
From: Bert Gunter <gunter.berton@gene.com>
To: Dirk Eddelbuettel <edd@debian.org>
Cc: Julio Sergio <juliosergio@gmail.com>, r-help@stat.math.ethz.ch
Subject: Re: [R] A kind of set operation in R
Message-ID:
<CACk-te00ETCC1w-RVj3CQ9oMq_fFVh0e8DgiOPXk3DM8YvvAOA@mail.gmail.com>
Content-Type: text/plain; charset="ISO-8859-1"
On Thu, Apr 5, 2012 at 3:07 PM, Dirk Eddelbuettel <edd@debian.org>
wrote:
>
> Julio,
>
> Nobody mentioned the _set_ operations union(), intersect(), setdiff(),
> which are described under 'help(union)' (and the other names, of
> course)
... but which are basically wrappers for match().
-- Bert
>
>  R> X <- c(10:13, 17,18)
>  R> Y <- c(11,12,17,18)
>  R> intersect(X, Y)            # gives the actual values
>  [1] 11 12 17 18
>  R> X %in% intersect(X, Y)     # use %in% to map to bool as desired
>  [1] FALSE  TRUE  TRUE FALSE  TRUE  TRUE
>  R>
>
> Oh, and no need to reply on-list to thousands of list subscribers if you
> found this helpful.  These one-line 'thanks' messages are not that
> universally
> useful, even though they are very polite :)
>
> Dirk
>
> --
> R/Finance 2012 Conference on May 11 and 12, 2012 at UIC in Chicago, IL
> See agenda, registration details and more at http://www.RinFinance.com
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Bert Gunter
Genentech Nonclinical Biostatistics
Internal Contact Info:
Phone: 467-7374
Website:
http://pharmadevelopment.roche.com/index/pdb/pdb-functional-groups/pdb-biostatistics/pdb-ncb-home.htm
------------------------------
Message: 94
Date: Thu, 5 Apr 2012 15:46:29 -0700 (PDT)
From: MZ <zeller.michael@gmail.com>
To: r-help@r-project.org
[[elided Yahoo spam]]
Message-ID:
<89dba7bd-096f-4fd0-9ee6-faa9e2380279@v7g2000pbs.googlegroups.com>
Content-Type: text/plain; charset=windows-1252
As a vendor-neutral standard, the Predictive Model Markup Language
(PMML) enables the interchange of data mining models among different
tools and environments, open source and commercial, avoiding
proprietary issues and incompatibilities.
Please see the Zementis newsletter below for details about the updated
R PMML package. It also links to a recorded webinar on PMML, Rattle,
and ADAPA.
http://www.zementis.com/Newsletter/Deploy20.htm
------------------------------
Message: 95
Date: Thu, 5 Apr 2012 15:51:19 -0700 (PDT)
From: Rui Barradas <rui1174@sapo.pt>
To: r-help@r-project.org
Subject: Re: [R] how to compute a vector of min values ?
Message-ID: <1333666279231-4536275.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hello,
Try
apply(df, 2, min)
(By the way, 'df' is the name of an R function; avoid it, 'DF' is
better.)
Hope this helps,
Rui Barradas
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-compute-a-vector-of-min-values-tp4536224p4536275.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 96
Date: Thu, 5 Apr 2012 20:01:55 -0300
From: "Pedro Henrique" <lamarao@superig.com.br>
To: <r-help@r-project.org>
Subject: [R] Help Using Spreadsheets
Message-ID: <C4C598CBD2DE4112B930AC71A992954B@PedroLamaroPC>
Content-Type: text/plain
Hello,
I am a new user of R and I am trying to use the data I am reading from a
spreadsheet.
I installed the xlsReadWrite package and I am able to read data from these files,
but how can I assign the columns to variables?
E.g:
as I read a spreadsheet like this one:
A B
1 2
4 9
I manually assign the values:
A<-c(1,4)
B<-c(2,9)
to plot it on a graph:
plot(A,B)
or make histograms:
hist(A)
But actually I am using very large columns; is there any other way to do it
automatically?
Best Regards,
Lämarão
[[alternative HTML version deleted]]
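A sketch of the usual approach (file names here are made up; read.xls is the
xlsReadWrite reader, and read.csv works the same way for csv files). The
columns of the resulting data frame are available by name, so they never need
to be retyped:
dat <- read.xls("mydata.xls")   # or: dat <- read.csv("mydata.csv")
plot(dat$A, dat$B)              # columns by name, no manual c(...) needed
hist(dat$A)
with(dat, plot(A, B))           # the same plot without repeating dat$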
------------------------------
Message: 97
Date: Thu, 05 Apr 2012 12:15:48 -0700
From: "Christopher R. Dolanc" <crdolanc@ucdavis.edu>
To: r-help@r-project.org
Subject: [R] count() function
Message-ID: <4F7DEF64.5050001@ucdavis.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
I keep expecting R to have something analogous to the =count function in
Excel, but I can't find anything. I simply want to count the data for a
given category.
I've been using the ddply() function in the plyr package to summarize
means and st dev of my data, with this code:
ddply(NZ_Conifers,.(ElevCat, DataSource, SizeClass), summarise,
avgDensity=mean(Density), sdDensity=sd(Density), n=sum(Density))
and that gives me results that look like this:
ElevCat DataSource SizeClass avgDensity sdDensity n
1 Elev1 FIA Class1 38.67768 46.6673478 734.87598
2 Elev1 FIA Class2 27.34096 23.3232470 820.22879
3 Elev1 FIA Class3 15.38758 0.7088432 76.93790
4 Elev1 VTM Class1 66.37897 70.2050817 24958.49284
5 Elev1 VTM Class2 39.40786 34.9343269 11782.95152
6 Elev1 VTM Class3 21.17839 12.3487600 1461.30895
But, instead of "sum(Density)", I'd really like counts of "Density", so
that I know the sample size of each group. Any suggestions?
--
Christopher R. Dolanc
Post-doctoral Researcher
University of California, Davis
------------------------------
Message: 98
Date: Thu, 5 Apr 2012 13:24:10 -0700 (PDT)
From: Metametadata <Jeffrey064@gmail.com>
To: r-help@r-project.org
Subject: [R] Inputing Excel data into R to make a map
Message-ID: <1333657450200-4535941.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi everyone,
I'm trying to input an excel datasheet with city names and lat+longs, that
has already been converted to a .csv file and make a map using R with my
data. My datasheet is 30 cities, with their lat+long, temp, elevation. So
far all I'm able to do is load the datasheet into R, I installed the map+
maptools packages so I can see a map of the US in R, but I don't know how I
can make my data show up on the map. I've read around that I need to
scale something, but my data is so different from most people's online, so I
don't know if it's just the way I set up my data in Excel. I want to post my
datasheet on the forum, but I don't know how to. I've tried using
"ggplot2"
library to plot my points and to put the code in the map that we used, which
is this
library(maps)
map("state", interior = FALSE)
map("state", boundary = FALSE, col="gray", add = TRUE)
So ultimately all I need is to figure out how to scale my points so that they
fit with my map, and also, once they are scaled, how do I combine them? If
[[elided Yahoo spam]]
Cheers!
--
View this message in context:
http://r.789695.n4.nabble.com/Inputing-Excel-data-into-R-to-make-a-map-tp4535941p4535941.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 99
Date: Thu, 5 Apr 2012 21:25:51 +0000
From: Hanin Farah <hanin.farah@abe.kth.se>
To: "r-help@r-project.org" <r-help@r-project.org>
Subject: [R] Warning message: Gamlss - Need help
Message-ID: <98DAE2A5D79B434BADFB3AEBBAABC1A93C19157A@EXDB4.ug.kth.se>
Content-Type: text/plain
Hi,
I am running a negative binomial model using Gamlss and when I try to include
a random effect, I get the following message:
Warning messages:
1: In vcov.gamlss(object, "all") :
addive terms exists in the mu formula standard errors for the linear terms
maybe are not appropriate
2: In vcov.gamlss(object, "all") :
addive terms exists in the sigma formula standard errors for the linear
terms maybe are not appropriate
3: In summary.gamlss(model1) :
summary: vcov has failed, option qr is used instead
I still get the output for the fixed effects but not for the random effects. Can
you recommend a solution for this problem?
Below is the code that I wrote:
install.packages("gamlss")
library(utils)
library(MASS)
library (gamlss)
Data_Reduced = read.table(file.choose(), header=TRUE, sep="\t")
model1<-gamlss(Events~ factor(Solo_Time) + factor(Fleet)+ Father_SS +
Mother_SS + random(factor(Driver_ID)) + offset(logDuration),
sigma.fo=~random(factor(Driver_ID)), data = na.omit(Data_Reduced), family=NBI,
na.action = "na.omit")
summary(model1)
Thanks in advance.
Best regards,
Haneen
--------------------------------------------------------------------------------------
Haneen Farah, Ph.D.
Department of Transport Sciences
School of Architecture & Built Environment
KTH- Royal Institute of Technology
Teknikringen 72, SE-100 44 Stockholm
Tel: +46-8-7907977
fhanin@kth.se<mailto:fhanin@kth.se>
http://home.abe.kth.se/~fhanin/
[[alternative HTML version deleted]]
------------------------------
Message: 100
Date: Thu, 5 Apr 2012 14:42:29 -0700 (PDT)
From: flacerdams <flacerdams@gmail.com>
To: r-help@r-project.org
Subject: [R] Function - simple question
Message-ID: <1333662149944-4536162.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Dear all,
Suppose I have a dataset with two variables:
X = c(0, 1, 2)
Y = c(1, 1, 1)
DS = data.frame(X, Y)
Now, I want to create a new variable Z with 3 observations, but I want its
values to be the result of a function. I want to create a function that
compares X and Y, and if X = Y, then Z value = 3. If X value differs from Y
value, Z value = 4. So, I'd have the following values for Z: 4, 3, 4.
How can I create a function like that? (Sorry, I know it's a dumb question,
I began to use R two days ago)
Thank you very much,
Lacerda
--
View this message in context:
http://r.789695.n4.nabble.com/Function-simple-question-tp4536162p4536162.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 101
Date: Thu, 5 Apr 2012 14:59:20 -0700 (PDT)
From: "chuck.01" <CharlieTheBrown77@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] Function - simple question
Message-ID: <1333663160024-4536189.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
This is one way:
f <- function(x, y){
Z <- ifelse(x==y, 3, 4)
return(Z)
}
DS[3] <- with(DS, f(X,Y))
colnames(DS)[3] <- "Z"
But you don't really need a function to do that.
DS[3] <- with(DS, ifelse(X==Y, 3, 4)) # this works just fine
I'm glad you've decided to use R; eventually you will need to read some
intro R manuals.
Cheers.
flacerdams wrote>
> Dear all,
>
> Suppose I have a dataset with two variables:
>
> X = c(0, 1, 2)
> Y = c(1, 1, 1)
> DS = data.frame(X, Y)
>
> Now, I want to create a new variable Z with 3 observations, but I want its
> values to be the result of a function. I want to create a function that
> compares X and Y, and if X = Y, then Z value = 3. If X value differs from
> Y value, Z value = 4. So, I'd have the following values for Z: 4, 3, 4.
>
> How can I create a function like that? (Sorry, I know it's a dumb
> question, I began to use R two days ago)
>
> Thank you very much,
>
> Lacerda
>
--
View this message in context:
http://r.789695.n4.nabble.com/Function-simple-question-tp4536162p4536189.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 102
Date: Thu, 5 Apr 2012 15:25:45 -0700 (PDT)
From: ikuzar <razuki@hotmail.fr>
To: r-help@r-project.org
Subject: [R] how to compute a vector of min values ?
Message-ID: <1333664745571-4536224.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I'd like to know how to get a vector of min values from many vectors without
writing a loop. For example:
>v1 = c( 1, 2, 3)
> v2 = c( 2, 3, 4)
> v3 = c(3, 4, 5)
> df = data.frame(v1, v2, v3)
> df
v1 v2 v3
1 1 2 3
2 2 3 4
3 3 4 5
> min_vect = min(df)
> min_vect
[1] 1
I'd like to get min_vect = (1, 2, 3), where 1 is the min of v1, 2 is the
min of v2 and 3 is the min of v3.
The example above is very easy but, in reality, I have v1, v2, ... v1440.
Thanks for your help,
ikuzar
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-compute-a-vector-of-min-values-tp4536224p4536224.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 103
Date: Thu, 5 Apr 2012 18:55:10 -0400
From: Navin Goyal <navingoyal@gmail.com>
To: r-help@R-project.org
Subject: [R] integrate function - error -integration not occurring
with last few rows
Message-ID:
<CAEoEPfx_Yx2M1fUqPPFUc6hwU8X_fsShmf9xybJpMe5Q-Aho+Q@mail.gmail.com>
Content-Type: text/plain
Hi,
I am using the integrate function in some simulations in R (tried ver 2.12
and 2.15). The problem I have is that the last few rows do not integrate
correctly. I have pasted the code I used.
The column named "integral" shows the output from the integrate
function.
The last few rows have no integration results. I tried increasing the
doses, number of subjects, etc.; this error occurs with the last few rows
only.
I am not sure why this is happening. Could someone please help me with this
issue?
Thank you for your time
dose<-c(0)
time<-(0:6)
id<-1:25
data1<-expand.grid(id,time,dose)
names(data1)<-c("ID","TIME", "DOSE")
data1<-data1[order(data1$DOSE,data1$ID,data1$TIME),]
################
basescore=95
basescore_sd=0.12
fall=0.15
fall_sd=0.5
slope=5
dose_slope1=0.045
dose_slope2=0.045
dose_slope3=0.002
rise_sd=0.5
ed<-data1[!duplicated(data1$ID) , c(1,3)]
ed$base=1
ed$drop=1
ed$bshz<-1
ed$up<-1
ed
set.seed(5234123)
k<-0
for (i in 1:length(ed$ID))
{
k<-k+1
ed$base[k]<-basescore*exp(rnorm(1,0,basescore_sd))
ed$drop[k]<-fall*exp(rnorm(1,0,fall_sd))
ed$up[k]<-slope*exp(rnorm(1,0,rise_sd))
ed$bshz<-beta0
}
comb1<-merge(data1[, c("ID","TIME")], ed)
comb1$disprog<-1
comb1$beta1<-0.035
comb1$beta21<-0.02
comb1$beta22<-0.45
comb1$beta23<-0085
comb1$beta31<-0.7
comb1$beta32<-0.05
comb1$exphz<-1
comb2<-comb1
p<-0
for(l in 1:length(comb2$ID))
{
p<-p+1
comb2$disprog[p]<-comb2$base[p]*exp(-comb2$drop[p]*comb2$TIME[p]) +
comb2$up[p]*comb2$TIME[p]
comb2$frac[p]<-ifelse ( comb2$DOSE[p]==3,
comb2$beta31[p]*comb2$TIME[p]^comb2$beta32[p],
exp(-comb2$beta21[p]*comb2$DOSE[p])*comb2$TIME[p]^comb2$beta22[p] )
}
hz.func1<-function(t,bshz,beta1, change,other)
{
ifelse(t==0,bshz, bshz*exp(beta1*change+other))
}
comb3<-comb2
comb3$integral=0
q<-0
for (m in i:length(comb3$ID))
{
q<-q+1
comb3$integral[q]<-integrate(hz.func1, lower=0, upper=comb3$TIME[q],
bshz=comb3$bshz[q],beta1=comb3$beta1[q],
change=comb3$disprog[q], other=comb3$frac[q])$value
}
#comb3[comb3$TIME==3, ] #
#tail(comb3)
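A hedged guess at the cause: the last loop reads "for (m in
i:length(comb3$ID))", and i still holds its final value from the earlier loop
over ed$ID, so fewer iterations run than there are rows and the trailing rows
of 'integral' are never filled in. (beta0 is also undefined when
"ed$bshz<-beta0" runs, and 0085 may be intended as 0.085.) A sketch of the
loop as presumably intended:
for (m in 1:length(comb3$ID))
{
comb3$integral[m]<-integrate(hz.func1, lower=0, upper=comb3$TIME[m],
bshz=comb3$bshz[m],beta1=comb3$beta1[m],
change=comb3$disprog[m], other=comb3$frac[m])$value
}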
--
Navin Goyal
[[alternative HTML version deleted]]
------------------------------
Message: 104
Date: Thu, 5 Apr 2012 17:06:18 -0600
From: Grimes Mark <mark.grimes@mso.umt.edu>
To: David Winsemius <dwinsemius@comcast.net>
Cc: r-help@r-project.org, R-Sig-Mac List <r-sig-mac@stat.math.ethz.ch>
Subject: Re: [R] rgl package broke with R 2.14.2
Message-ID: <0109DDDE-8D25-421F-BE25-003298E16DA3@mso.umt.edu>
Content-Type: text/plain
Dear David, Duncan, and Jochen, and everyone
I am happy to report that with R version 2.15.0, rgl_0.92.861 now
[[elided Yahoo spam]]
Mark
> Below some more observations that might help you locate the problem.
>
> Also sorry for ignoring the posting rules in my last post. I was
> still on 2.14.1, and installing from source from the GUI version of
> R64. Installing from source from R64 GUI still works after the
> upgrade to R 2.14.2, .
>
> The 32 bit versions of R don't work for me anymore, probably because
> I have only installed the 64 bit versions of some libraries under
> Lion. I have never noticed before, since after the problem discussed
> in this thread
> https://stat.ethz.ch/pipermail/r-sig-mac/2010-July/007609.html
> I have been using R64 exclusively.
>
> Also note that installing from the command line (Terminal) version
> of either R32 or R64 gives the error
>
>> checking for glEnd in -lGL... no
>> configure: error: missing required library GL
>
> Presumably, something is wrong with my bash environment and the
> configure script is picking up the wrong gl libraries.
> Installing from source does work after downloading and unzipping the
> rgl tarball with
>
> ./configure --with-gl-libs=/usr/X11/lib --with-gl-includes=/usr/X11/include
>
>
> Jochen
>
> On Mar 28, 2012, at 10:15 AM, Duncan Murdoch wrote:
>
>> On 12-03-27 6:31 PM, Grimes Mark wrote:
>>> Dear People
>>>
>>> I can't figure out how to fix this problem: rgl won't run
>>> under R
>>> 2.14.2 (it was working for me before under 2.14.0). The error
>>> message
>>> is:
>>
>> rgl is currently changing fairly rapidly. I'd suggest trying to
>> install again (the current version, as of yesterday, is 0.92.861).
>> If that's not enough, I'd install a binary build (which may be
>> a
>> little old, but at least it built successfully...).
>>
>> Duncan Murdoch
>
> Thanks for your continued efforts Duncan. I had tried compiling that
> version from source and gotten an error but when I attempted to
> install from the current binary image which is now rgl_0.92.861.tgz,
> I got apparent success, it loads without error, and the example in ?
> rgl runs.
>
> --
>
> David Winsemius, MD
> West Hartford, CT
>
[[alternative HTML version deleted]]
------------------------------
Message: 105
Date: Thu, 5 Apr 2012 13:46:23 -0700 (PDT)
From: kritikool <krithika.gu@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] [Q] Bayeisan Network with the "deal" package
Message-ID: <1333658783697-4536014.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
I am also looking for a help in using the "deal" package.
When I try this, I get an error saying "Error in array(1, Dim) : 'dim'
specifies too large an array"
ksl.prior <- jointprior(ksl.nw)
Does anyone know what the error indicates ?
My data is gene expression values with about 58 rows (samples) and 20
columns (features)
--
View this message in context:
http://r.789695.n4.nabble.com/Q-Bayeisan-Network-with-the-deal-package-tp797813p4536014.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 106
Date: Thu, 5 Apr 2012 13:54:37 -0700 (PDT)
From: kritikool <krithika.gu@gmail.com>
To: r-help@r-project.org
Subject: Re: [R] Bayesian Belief Networks
Message-ID: <1333659277962-4536033.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi Marco,
I saw this post and was wondering if you would be able to help me.
I have a gene expression data file that I would like to build a Bayesian n/w
on.
I input a file with samples as rows and columns as features into the bnlearn
package.
I read through the pdf file that talks about the bnlearn package. I
understood the many different approaches for discrete data, but did not
really understand what to do with continuous data. The example for
continuous data on your web site, built an empty network with only nodes
when I implemented it. (code shown below:
data(gaussian.test)
res = empty.graph(names(gaussian.test))
modelstring(res) = "[A][B][E][G][C|A:B][D|B][F|A:D:E:G]"
plot(res)
Do you have your own example which describes what functions to use step by
step and how to plot the n/w?
Any help appreciated.
thanks
--
View this message in context:
http://r.789695.n4.nabble.com/Bayesian-Belief-Networks-tp3162133p4536033.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 107
Date: Fri, 6 Apr 2012 10:35:57 +1000
From: Michael Sumner <mdsumner@gmail.com>
To: Metametadata <Jeffrey064@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Inputing Excel data into R to make a map
Message-ID:
<CAAcGz9-VU+s9dWOHZ_tfSTibufPvpQ+s-pySd=xD7=16sh1kmA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Please provide a sample of the data and the code you have tried. If
"pts" is your data.frame read in from CSV and the long/lats describe
points within the mainland U.S. then this should show you sensible
points on the map:
require(maps)
map('state')
points(pts$long, pts$lat)
If not, investigate what you should expect with the actual numbers
themselves, add axes to the plot:
axis(1)
axis(2)
Do the values in long and lat make sense for that extent?
No one can tell if you need to "scale" (or transform . . .) the data
without any kind of reference to what they are. You aren't just mixing
up the order of longitude and latitude are you? R's spatial functions
generally use the "x, y" convention to match "long, lat"
order.
Also, R-Sig-Geo is a more targeted mailing list for questions like this.
Cheers, Mike.
On Fri, Apr 6, 2012 at 6:24 AM, Metametadata <Jeffrey064@gmail.com>
wrote:
> Hi everyone,
> I'm trying to input an excel datasheet with city names and lat+longs,
> that
> has already been converted to a .csv file and make a map using R with my
> data.  My datasheet is 30 cities, with their lat+long, temp, elevation. So
> far all I'm able to do is load the datasheet into R, I installed the
> map+
> maptools packages so I can see a map of the US in R, but I don't know
> how I
> can make my data show up on the map.  I've read around that I need
> to
> scale something, but my data is so different from most people's online, so I
> don't know if it's just the way I set up my data in Excel. I want to
> post my
> datasheet on the forum, but I don't know how to.  I've tried using
> "ggplot2"
> library to plot my points and to put the code in the map that we used,
> which
> is this
>
> library(maps)
>
> map("state", interior = FALSE)
>
> map("state", boundary = FALSE, col="gray", add = TRUE)
>
> So ultimately all I need is to figure out how to scale my points so that they
> fit with my map, and also, once they are scaled, how do I combine them? If
[[elided Yahoo spam]]
>
> Cheers!
>
>
> --
> View this message in context:
http://r.789695.n4.nabble.com/Inputing-Excel-data-into-R-to-make-a-map-tp4535941p4535941.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Michael Sumner
Institute for Marine and Antarctic Studies, University of Tasmania
Hobart, Australia
e-mail: mdsumner@gmail.com
------------------------------
Message: 108
Date: Fri, 6 Apr 2012 01:01:18 +0000
From: William Dunlap <wdunlap@tibco.com>
To: "Christopher R. Dolanc" <crdolanc@ucdavis.edu>,
"r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] count() function
Message-ID:
<E66794E69CFDE04D9A70842786030B932911E4@PA-MBX04.na.tibco.com>
Content-Type: text/plain; charset="us-ascii"
I think you are looking for the function called length(). I cannot recreate
your output, since I don't know what is in NZ_Conifers, but with the
built-in dataset mtcars I get:
> ddply(mtcars, .(cyl,gear,carb), summarize, MeanWt=mean(wt), N=length(wt))
cyl gear carb MeanWt N
1 4 3 1 2.46500 1
2 4 4 1 2.07250 4
3 4 4 2 2.68375 4
4 4 5 2 1.82650 2
5 6 3 1 3.33750 2
6 6 4 4 3.09375 4
7 6 5 6 2.77000 1
8 8 3 2 3.56000 4
9 8 3 3 3.86000 3
10 8 3 4 4.68580 5
11 8 5 4 3.17000 1
12 8 5 8 3.57000 1
> with(mtcars, sum(cyl==8 & gear==3 & carb==4)) # output line 10
[1] 5
If all you want is the count of things in various categories, you can use table
instead of ddply and length:
> with(mtcars, table(cyl, gear, carb))
, , carb = 1
gear
cyl 3 4 5
4 1 4 0
6 2 0 0
8 0 0 0
, , carb = 2
gear
cyl 3 4 5
4 0 4 2
6 0 0 0
8 4 0 0
...
Using ftable on table's output gives a nicer looking printout, but
table's output is easier
to use in a program.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
> Behalf
> Of Christopher R. Dolanc
> Sent: Thursday, April 05, 2012 12:16 PM
> To: r-help@r-project.org
> Subject: [R] count() function
>
> I keep expecting R to have something analogous to the =count function in
> Excel, but I can't find anything. I simply want to count the data for a
> given category.
>
> I've been using the ddply() function in the plyr package to summarize
> means and st dev of my data, with this code:
>
> ddply(NZ_Conifers,.(ElevCat, DataSource, SizeClass), summarise,
> avgDensity=mean(Density), sdDensity=sd(Density), n=sum(Density))
>
> and that gives me results that look like this:
>
> ElevCat DataSource SizeClass avgDensity sdDensity n
> 1 Elev1 FIA Class1 38.67768 46.6673478 734.87598
> 2 Elev1 FIA Class2 27.34096 23.3232470 820.22879
> 3 Elev1 FIA Class3 15.38758 0.7088432 76.93790
> 4 Elev1 VTM Class1 66.37897 70.2050817 24958.49284
> 5 Elev1 VTM Class2 39.40786 34.9343269 11782.95152
> 6 Elev1 VTM Class3 21.17839 12.3487600 1461.30895
>
> But, instead of "sum(Density)", I'd really like counts of
> "Density", so
> that I know the sample size of each group. Any suggestions?
>
> --
> Christopher R. Dolanc
> Post-doctoral Researcher
> University of California, Davis
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 109
Date: Fri, 6 Apr 2012 01:17:33 +0000
From: William Dunlap <wdunlap@tibco.com>
To: "nashjc@uottawa.ca" <nashjc@uottawa.ca>,
"r-help@r-project.org"
<r-help@r-project.org>
Subject: Re: [R] Appropriate method for sharing data across functions
Message-ID:
<E66794E69CFDE04D9A70842786030B93291231@PA-MBX04.na.tibco.com>
Content-Type: text/plain; charset="us-ascii"
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
> Behalf
> Of John C Nash
> Sent: Thursday, April 05, 2012 1:20 PM
> To: r-help@r-project.org
> Subject: [R] Appropriate method for sharing data across functions
>
> In trying to streamline various optimization functions, I would like to
> have a scratch pad of working data that is shared across a number of
> functions. These can be called from different levels within some wrapper
> functions for maximum likelihood and other such computations. I'm sure
> there are other applications that could benefit from this.
>
> Below are two approaches. One uses the <<- assignment to a structure
> I call OPCON. The other attempts to create an environment with this name,
> but fails. Though I have looked at a number of references, I have so far
> not found an adequate description of how to specify where the OPCON
> environment is located. (Both the green and blue books do not cover this
> topic, at least not under "environment" in the index.)
>
> Is there a recommended approach to this? I realize I could use argument
> lists, but they get long and tedious with the number of items I may need
> to pass, though passing the OPCON structure in and out might be the
> proper way.
Make OPCON an environment and pass it into the functions that may read it or
alter it. There is no real need to pass it out, since environments are
changed in-place (unlike lists). E.g.,
> x <- list2env(list(one=1, two="ii", three=3))
> x
<environment: 0x0000000003110890>
> objects(x)
[1] "one" "three" "two"
> x[["two"]]
[1] "ii"
> with(x, three+one)
[1] 4
> f <- function(z, env) { env[["newZ"]] <- z ; sqrt(z) }
> f(10, x)
[1] 3.162278
> x[["newZ"]] # put there by f()
[1] 10
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> An onAttach() approach was suggested by
> Paul
> Gilbert and tried, but it has so far not succeeded and, unfortunately, does
> not seem to be
> usable from source() i.e., cannot be interpreted but must be built first.
>
> JN
>
> Example using <<-
>
> rm(list=ls())
> optstart<-function(npar){ # create structure for optimization
> computations
> # npar is number of parameters ?? test??
> OPCON<<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,npar), FNSCALE=1,
> KFN=0, KGR=0, KHESS=0)
> # may be other stuff
> ls(OPCON)
> }
>
> add1<-function(){
> OPCON$KFN<<-1+OPCON$KFN
> test<-OPCON$KFN
> }
>
> OPCON<<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,4), FNSCALE=1,
> KFN=0, KGR=0, KHESS=0)
> ls(OPCON)
> print(add1())
> print(add1())
> print(ls.str())
>
> rm(OPCON) # Try to remove the scratchpad
> print(ls())
>
> tmp<-readline("Now try from within a function")
> setup<-optstart(4) # Need to sort out how to set this up appropriately
> cat("setup =")
> print(setup)
>
> print(add1())
> print(add1())
>
> rm(OPCON) # Try to remove the scratchpad
>
> =====================
> Example (failing) using new.env:
>
> rm(list=ls())
> optstart<-function(npar){ # create structure for optimization
> computations
> # npar is number of parameters ?? test??
> OPCON<-new.env(parent=globalenv())
> OPCON<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,npar), FNSCALE=1,
> KFN=0, KGR=0, KHESS=0)
> # may be other stuff
> ls(OPCON)
> }
>
> add1<-function(){
> OPCON$KFN<-1+OPCON$KFN
> test<-OPCON$KFN
> }
>
> OPCON<-new.env(parent=globalenv())
> OPCON<-list(MAXIMIZE=TRUE, PARSCALE=rep(1,4), FNSCALE=1,
> KFN=0, KGR=0, KHESS=0)
> ls(OPCON)
> print(add1())
> print(add1())
> print(ls.str())
>
> rm(OPCON) # Try to remove the scratchpad
> print(ls())
>
> tmp<-readline("Now try from within a function")
> setup<-optstart(4) # Need to sort out how to set this up appropriately
> cat("setup =")
> print(setup)
>
> print(add1())
> print(add1())
>
> rm(OPCON) # Try to remove the scratchpad
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 110
Date: Fri, 6 Apr 2012 01:21:39 +0000
From: William Dunlap <wdunlap@tibco.com>
To: Drew Tyre <atyre2@unl.edu>, Ramiro Barrantes
<ramiro@precisionbioassay.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] reclaiming lost memory in R
Message-ID:
<E66794E69CFDE04D9A70842786030B93291283@PA-MBX04.na.tibco.com>
Content-Type: text/plain; charset="us-ascii"
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
> Behalf
> Of Drew Tyre
> Sent: Thursday, April 05, 2012 8:35 AM
> To: Ramiro Barrantes
> Cc: r-help@r-project.org
> Subject: Re: [R] reclaiming lost memory in R
>
> Ramiro
>
> I think the problem is the loop - R doesn't release memory allocated
> inside
> an expression until the expression completes. A for loop is an expression,
> so it duplicates fit and dataset on every iteration.
The above explanation is not true.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> An alternative
> approach that I have found successful in similar circumstances is to use
> sapply(), like this
>
> fits <- list()
> sapply(1:N,function(i){
> dataset <- generateDataset(i)
> fit[[i]] <- try( memoryHogFunction(dataset, otherParameters))
> })
>
> I'm assuming above that you want to save the result of
> memoryHogFunction
> from each iteration.
>
> hth
> Drew
> On Thu, Apr 5, 2012 at 8:35 AM, Ramiro Barrantes <
> ramiro@precisionbioassay.com> wrote:
>
> > Dear list,
> >
> > I am trying to reclaim what I think is lost memory in R, I have been
> > using
> > gc(), rm() and also using Rprof to figure out where all the memory is
> > going
> > but I might be missing something.
> >
> > I have the following situation
> >
> > basic loop which calls memoryHogFunction:
> >
> > for i in (1:N) {
> > dataset <- generateDataset(i)
> > fit <- try( memoryHogFunction(dataset, otherParameters))
> > }
> >
> > and within
> >
> > memoryHogFunction <- function(dataset, params){
> >
> > fit <- try(nlme(someinitialValues)
> > ...
> > fit <- try(updatenlme(otherInitialValues)
> > ...
> > fit <- try(updatenlme(otherInitialValues)
> > ...
> > ret <- fit ( and other things)
> > return a result "ret"
> > }
> >
> > The problem is that memoryHogFunction uses a lot of memory, and at
> > the
> > end returns a result (which is not big) but the memory used by the
> > computation seems to be still occupied. The original loop continues,
> > but
> > the memory used by the program grows and grows after each call to
> > memoryHogFunction.
> >
> > I have been trying to do gc() after each run in the loop, and have
> > even
> > done:
> >
> > in memoryHogFunction()
> > ...
> > ret <- fit ( and other things)
> > rm(list=ls()[-match("ret",ls())])
> > return a result "ret"
> > }
> >
> > ???
> >
> > A typical results from gc() after each loop iteration says:
> > used (Mb) gc trigger (Mb) max used (Mb)
> > Ncells 326953 17.5 597831 32.0 597831 32.0
> > Vcells 1645892 12.6 3048985 23.3 3048985 23.3
> >
> > Which doesn't reflect that 340mb (and 400+mb in virtual memory)
> > that are
> > being used right now.
> >
> > Even when I do:
> >
> > print(sapply(ls(all.names=TRUE), function(x) object.size(get(x))))
> >
> > the largest object is 8179808, which is what it should be.
> >
> > The only thing that looked suspicious was the following within Rprof
> > (with
> > memory=stats option); the tot.duplications might be a problem???:
> >
> > index: "with":"with.default"
> > vsize.small max.vsize.small vsize.large max.vsize.large
> > 30841 63378 20642 660787
> > nodes max.nodes duplications tot.duplications
> > 3446132 8115016 12395 61431787
> > samples
> > 4956
> >
> > Any suggestions? Is it something about the use of loops in R? Is it
> > maybe the try's???
> >
> > Thanks in advance for any help,
> >
> > Ramiro
> >
> > [[alternative HTML version deleted]]
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>
>
> --
> Drew Tyre
>
> School of Natural Resources
> University of Nebraska-Lincoln
> 416 Hardin Hall, East Campus
> 3310 Holdrege Street
> Lincoln, NE 68583-0974
>
> phone: +1 402 472 4054
> fax: +1 402 472 2946
> email: atyre2@unl.edu
> http://snr.unl.edu/tyre
> http://aminpractice.blogspot.com
> http://www.flickr.com/photos/atiretoo
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 111
Date: Thu, 05 Apr 2012 21:44:43 -0400
From: "John Sorkin" <JSorkin@grecc.umaryland.edu>
To: "Greg Snow" <538280@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] identify with mfcol=c(1,2)
Message-ID:
<4F7E124B020000CB000B792D@med-webappgwia1.medicine.umaryland.edu>
Content-Type: text/plain; charset=US-ASCII
Thanks!
John
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)
>>> Greg Snow <538280@gmail.com> 4/5/2012 4:37 PM >>>
I tried your code, first I removed the reference to the global
variable data$Line, then it works if I finish identifying by either
right clicking (I am in windows) and choosing stop, or using the stop
menu. It does as you say if I press escape or use the stop sign
button (both stop the whole evaluation rather than just the
identifying).
On Tue, Apr 3, 2012 at 8:52 AM, John Sorkin <jsorkin@grecc.umaryland.edu>
wrote:
> I would like to have a figure with two graphs. This is easily accomplished
> using mfcol:
>
> oldpar <- par(mfcol=c(1,2))
> plot(x,y)
> plot(z,x)
> par(oldpar)
>
> I run into trouble if I try to use identify with the two plots. If, after
> identifying points on my first graph, I hit the ESC key or the stop menu bar
> of my R session, the system stops the identification process but fails to give
> me my second graph. Is there a way to allow for the identification of points
> when one is plotting two graphs in a single graph window? My code follows.
>
> plotter <- function(first,second) {
> # Allow for two plots in one graph window.
> oldpar<-par(mfcol=c(1,2))
>
> #Bland-Altman plot.
> plot((second+first)/2,second-first)
> abline(0,0)
> # Allow for identification of extreme values.
> BAzap<-identify((second+first)/2,second-first,labels =
> seq_along(data$Line))
> print(BAzap)
>
> # Plot second as a function of first value.
> plot(first,second,main="Limin vs. Limin",xlab="First
> (cm^2)",ylab="Second (cm^3)")
> # Add identity line.
> abline(0,1,lty=2,col="red")
> # Allow for identification of extreme values.
> zap<-identify(first,second,labels = seq_along(data$Line))
> print(zap)
> # Add regression line.
> fit1<-lm(first~second)
> print(summary(fit1))
> abline(fit1)
> print(summary(fit1)$sigma)
>
> # reset par to default values.
> par(oldpar)
>
> }
> plotter(first,second)
>
>
> Thanks,
> John
>
>
>
>
>
>
> John David Sorkin M.D., Ph.D.
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for ...{{dropped:23}}
------------------------------
Message: 112
Date: Thu, 05 Apr 2012 21:52:51 -0400
From: "John Sorkin" <JSorkin@grecc.umaryland.edu>
To: "Greg Snow" <538280@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] meaning of sigma from LM, is it the same as RMSE
Message-ID:
<4F7E1433020000CB000B7937@med-webappgwia1.medicine.umaryland.edu>
Content-Type: text/plain; charset=US-ASCII
Again my thanks!
John
John David Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
University of Maryland School of Medicine Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524
(Phone) 410-605-7119
(Fax) 410-605-7913 (Please call phone number above prior to faxing)
>>> Greg Snow <538280@gmail.com> 4/5/2012 4:42 PM >>>
If you look at the code for summary.lm the line for the value of sigma is:
ans$sigma <- sqrt(resvar)
and above that we can see that resvar is defined as:
resvar <- rss/rdf
If that is not sufficient you can find how rss and rdf are computed in
the code as well.
On Tue, Apr 3, 2012 at 8:56 AM, John Sorkin <jsorkin@grecc.umaryland.edu>
wrote:
> Is the sigma from a lm, i.e.
>
> fit1 <- lm(y~x)
> summary(fit1)
> summary(fit1)$sigma
>
> the RMSE (root mean square error)
>
> Thanks,
> John
>
> John David Sorkin M.D., Ph.D.
> Chief, Biostatistics and Informatics
> University of Maryland School of Medicine Division of Gerontology
> Baltimore VA Medical Center
> 10 North Greene Street
> GRECC (BT/18/GR)
> Baltimore, MD 21201-1524
> (Phone) 410-605-7119
> (Fax) 410-605-7913 (Please call phone number above prior to faxing)
>
> Confidentiality Statement:
> This email message, including any attachments, is for ...{{dropped:23}}
------------------------------
Message: 113
Date: Thu, 5 Apr 2012 23:25:25 -0400
From: David Winsemius <dwinsemius@comcast.net>
To: "Christopher R. Dolanc" <crdolanc@ucdavis.edu>
Cc: r-help@r-project.org
Subject: Re: [R] count() function
Message-ID: <0FBABD9B-5FA9-419B-AC40-A80301BA4666@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Apr 5, 2012, at 3:15 PM, Christopher R. Dolanc wrote:
> I keep expecting R to have something analogous to the =count
> function in Excel, but I can't find anything. I simply want to count
> the data for a given category.
>
> I've been using the ddply() function in the plyr package to
> summarize means and st dev of my data, with this code:
Color me puzzled. The plyr package _has_ a count function.
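Untested, since no data were posted, but either of the following should
give the group sizes (adding a length() term to your existing ddply() call
is the smallest change):
count(NZ_Conifers, vars = c("ElevCat", "DataSource", "SizeClass"))
## or, keeping the ddply() call, add a count with length():
ddply(NZ_Conifers, .(ElevCat, DataSource, SizeClass), summarise,
      avgDensity = mean(Density), sdDensity = sd(Density),
      n = length(Density))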
>
> ddply(NZ_Conifers,.(ElevCat, DataSource, SizeClass), summarise,
> avgDensity=mean(Density), sdDensity=sd(Density), n=sum(Density))
>
> and that gives me results that look like this:
>
> ElevCat DataSource SizeClass avgDensity sdDensity n
> 1 Elev1 FIA Class1 38.67768 46.6673478 734.87598
> 2 Elev1 FIA Class2 27.34096 23.3232470 820.22879
> 3 Elev1 FIA Class3 15.38758 0.7088432 76.93790
> 4 Elev1 VTM Class1 66.37897 70.2050817 24958.49284
> 5 Elev1 VTM Class2 39.40786 34.9343269 11782.95152
> 6 Elev1 VTM Class3 21.17839 12.3487600 1461.30895
>
> But, instead of "sum(Density)", I'd really like counts of "Density",
> so that I know the sample size of each group. Any suggestions?
>
> --
> Christopher R. Dolanc
> Post-doctoral Researcher
> University of California, Davis
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 114
Date: Thu, 5 Apr 2012 19:38:18 -0700 (PDT)
From: ieatnapalm <erawls@tulane.edu>
To: r-help@r-project.org
Subject: [R] Help with gsub function or a similar function
Message-ID: <1333679898232-4536584.post@n4.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hey, sorry if this has been addressed before, but I'm really new to R and
having trouble with the gsub function. I need a way to make this function
exclude certain values from being substituted:
i.e. my data looks something like (15:.0234,10:.0157) and I'm trying to
replace the leading 15 with something else - but of course it replaces the
second 15 with something else too. If there's a way to exclude anything with
a leading decimal point or something like that, that would do the trick.
Thanks yall.
--
View this message in context:
http://r.789695.n4.nabble.com/Help-with-gsub-function-or-a-similar-function-tp4536584p4536584.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 115
Date: Thu, 5 Apr 2012 19:57:08 -0700 (PDT)
To: "r-help@r-project.org" <r-help@r-project.org>
Subject: [R] simulation
Message-ID:
<1333681028.38436.YahooMailNeo@web65407.mail.ac4.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
Hello,
I need to simulate 100 times, n=40,
the distribution has 90% from X~N(0,1) + 10% from X~N(20,10)
Is my loop below correct?
Thank you
n=40
for(i in 1:100){
x<-rnorm(40,0,1)  # 90% of n
z<-rnorm(40,20,10)  # 10% of n
}
x+z
------------------------------
Message: 116
Date: Thu, 5 Apr 2012 23:46:54 -0400
From: David Winsemius <dwinsemius@comcast.net>
To: ieatnapalm <erawls@tulane.edu>
Cc: r-help@r-project.org
Subject: Re: [R] Help with gsub function or a similar function
Message-ID: <FF7863E8-CDC2-44A0-A8F0-F9AB8A0EB2D1@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Apr 5, 2012, at 10:38 PM, ieatnapalm wrote:
> Hey, sorry if this has been addressed before, but I'm really new to
> R and
> having trouble with the gsub function. I need a way to make this
> function
> exclude certain values from being substituted:
> i.e. my data looks something like (15:.0234,10:.0157) and I'm trying to
> replace the leading 15 with something else - but of course it
> replaces the
> second 15 with something else too. If there's a way to exclude
> anything with
> a leading decimal point or something like that, that would do the
> trick.
> Thanks yall.
>
Cross-posting to SO and rhelp in quick succession is deprecated. If
you hadn't gotten an answer on SO after a day or so, then fine, go
ahead and cross-post, but in point of fact you have already gotten three
answers on SO, so you are just wasting rhelp brain-bandwidth.
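That said, for the archives: sub() replaces only the first match, and a
perl-style lookbehind can skip any "15" preceded by a decimal point or a
digit (a sketch; the replacement value "99" is made up):
x <- "(15:.0234,10:.0157)"
sub("\\(15:", "(99:", x)                     # replaces only the leading 15
gsub("(?<![.0-9])15", "99", x, perl = TRUE)  # skips 15 preceded by . or digit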
> --
> View this message in context:
http://r.789695.n4.nabble.com/Help-with-gsub-function-or-a-similar-function-tp4536584p4536584.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 117
Date: Thu, 5 Apr 2012 23:51:27 -0400
From: David Winsemius <dwinsemius@comcast.net>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] simulation
Message-ID: <9355C68E-E5F2-4620-842D-25E2BF5FBFD6@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On Apr 5, 2012, at 10:57 PM, Christopher Kelvin wrote:
> Hello,
> I need to simulate 100 times, n=40,
> the distribution has 90% from X~N(0,1) + 10% from X~N(20,10)
> Is my loop below correct?
> Thank you
>
> n=40
> for(i in 1:100){
> x<-rnorm(40,0,1) # 90% of n
>
You are overwriting x and z, and at the end of that loop you will only
have two vectors of length 40 each. If you wanted a 90/10 weighting,
then why not lengths of 36 and 4?
To do repeated simulations you will find this help page useful:
?replicate
> z<-rnorm(40,20,10) # 10% of n
> }
> x+z
At this point you should not be using "+" but rather the c() function
if you are trying to join those two vectors. I think you need to spend
more time working through "Introduction to R".
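A sketch of what that might look like, assuming the intent is 36 draws
from N(0,1) and 4 draws from N(20,10) in each of the 100 samples:
set.seed(42)
sims <- replicate(100, c(rnorm(36, 0, 1), rnorm(4, 20, 10)))
dim(sims)  # 40 x 100: one sample of n = 40 per column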
>
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
------------------------------
Message: 118
Date: Fri, 6 Apr 2012 07:47:41 +0200
From: Berend Hasselman <bhh@xs4all.nl>
To: Navin Goyal <navingoyal@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] integrate function - error -integration not occurring
with last few rows
Message-ID: <70A9BC0A-E432-480A-B765-82966309E96B@xs4all.nl>
Content-Type: text/plain; charset=us-ascii
On 06-04-2012, at 00:55, Navin Goyal wrote:
> Hi,
> I am using the integrate function in some simulations in R (tried ver 2.12
> and 2.15). The problem I have is that the last few rows do not integrate
> correctly. I have pasted the code I used.
> The column named "integral" shows the output from the integrate
> function.
> The last few rows have no integration results. I tried increasing the
> doses, number of subjects, etc.... this error occurs with the last few rows
> only
>
> I am not sure why this is happening. Could someone please help me with this
> issue?
> Thank you for your time
>
> dose<-c(0)
> time<-(0:6)
> id<-1:25
>
> data1<-expand.grid(id,time,dose)
> names(data1)<-c("ID","TIME", "DOSE")
> data1<-data1[order(data1$DOSE,data1$ID,data1$TIME),]
>
> ################
> basescore=95
> basescore_sd=0.12
> fall=0.15
> fall_sd=0.5
> slope=5
> dose_slope1=0.045
> dose_slope2=0.045
> dose_slope3=0.002
> rise_sd=0.5
>
> ed<-data1[!duplicated(data1$ID) , c(1,3)]
> ed$base=1
> ed$drop=1
> ed$bshz<-1
> ed$up<-1
> ed
>
> set.seed(5234123)
> k<-0
>
> for (i in 1:length(ed$ID))
> {
> k<-k+1
> ed$base[k]<-basescore*exp(rnorm(1,0,basescore_sd))
> ed$drop[k]<-fall*exp(rnorm(1,0,fall_sd))
> ed$up[k]<-slope*exp(rnorm(1,0,rise_sd))
> ed$bshz<-beta0
> }
>
> comb1<-merge(data1[, c("ID","TIME")], ed)
> comb1$disprog<-1
> comb1$beta1<-0.035
> comb1$beta21<-0.02
> comb1$beta22<-0.45
> comb1$beta23<-0085
> comb1$beta31<-0.7
> comb1$beta32<-0.05
> comb1$exphz<-1
>
> comb2<-comb1
>
> p<-0
> for(l in 1:length(comb2$ID))
> {
> p<-p+1
> comb2$disprog[p]<-comb2$base[p]*exp(-comb2$drop[p]*comb2$TIME[p]) +
> comb2$up[p]*comb2$TIME[p]
> comb2$frac[p]<-ifelse ( comb2$DOSE[p]==3,
> comb2$beta31[p]*comb2$TIME[p]^comb2$beta32[p],
> exp(-comb2$beta21[p]*comb2$DOSE[p])*comb2$TIME[p]^comb2$beta22[p] )
> }
>
> hz.func1<-function(t,bshz,beta1, change,other)
> {
> ifelse(t==0,bshz, bshz*exp(beta1*change+other))
> }
>
> comb3<-comb2
> comb3$integral=0
>
> q<-0
> for (m in i:length(comb3$ID))
> {
> q<-q+1
> comb3$integral[q]<-integrate(hz.func1, lower=0, upper=comb3$TIME[q],
> bshz=comb3$bshz[q],beta1=comb3$beta1[q],
> change=comb3$disprog[q], other=comb3$frac[q])$value
> }
Where is beta0 defined? It is used in the line ed$bshz<-beta0.
In the last for loop, for (m in i:length(comb3$ID)), should the i be 1?
When the i is changed to 1, the integrate results are nonzero, which might
be what you expect.
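In other words, a sketch of the two fixes (the value of beta0 is a guess,
since it is never defined in the posted code):
beta0 <- 1  # hypothetical value; define before the first loop runs
## index the last loop from 1 and use the loop variable directly,
## which also makes the q counter unnecessary:
for (m in 1:length(comb3$ID)) {
  comb3$integral[m] <- integrate(hz.func1, lower = 0, upper = comb3$TIME[m],
                                 bshz = comb3$bshz[m], beta1 = comb3$beta1[m],
                                 change = comb3$disprog[m],
                                 other = comb3$frac[m])$value
}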
Berend
------------------------------
Message: 119
Date: Fri, 6 Apr 2012 08:29:09 +0200
From: peter dalgaard <pdalgd@gmail.com>
To: ikuzar <razuki@hotmail.fr>
Cc: r-help@r-project.org
Subject: Re: [R] how to compute a vector of min values ?
Message-ID: <AD8888D8-48BB-4742-BF8D-CD97CA8EF827@gmail.com>
Content-Type: text/plain; charset=us-ascii
On Apr 6, 2012, at 00:25 , ikuzar wrote:
> Hi,
>
> I'd like to know how to get a vector of min values from many vectors
> without making a loop. For example:
>
>> v1 = c( 1, 2, 3)
>> v2 = c( 2, 3, 4)
>> v3 = c(3, 4, 5)
>> df = data.frame(v1, v2, v3)
>> df
> v1 v2 v3
> 1 1 2 3
> 2 2 3 4
> 3 3 4 5
>> min_vect = min(df)
>> min_vect
> [1] 1
>
> I'd like to get min_vect = (1, 2, 3), where 1 is the min of v1, 2 is the
> min of v2, and 3 is the min of v3.
>
> The example above is very easy but, in reality, I have v1, v2, ..., v1440.
sapply(df, min)
(possibly sapply(df, min, na.rm=TRUE) )
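For the example above this returns a named vector:
sapply(df, min)
## v1 v2 v3
##  1  2  3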
>
> Thanks for your help,
>
> ikuzar
>
> --
> View this message in context:
http://r.789695.n4.nabble.com/how-to-compute-a-vector-of-min-values-tp4536224p4536224.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd.mes@cbs.dk Priv: PDalgd@gmail.com
------------------------------
Message: 120
Date: Fri, 6 Apr 2012 01:18:35 -0500
From: Kumar Mainali <kpmainali@gmail.com>
To: "r-help@r-project.org" <r-help@r-project.org>
Subject: [R] Legend based on levels of a variable
Message-ID:
<CABK368hyQixSX0NbX6t+od2c-JHTso0nxG3Zi5nPpvhBmRZiEw@mail.gmail.com>
Content-Type: text/plain
I have a bivariate plot of axis2 against axis1 (data below). I would like
to use a different size, type and color for the points coming from each
region. For some reason, I cannot get it done. Below is my code.
col <- rep(c("blue", "red", "darkgreen"), c(16,
16, 16))
## Choose different size of points
cex <- rep(c(1, 1.2, 1), c(16, 16, 16))
## Choose the form of the points (square, circle, triangle, diamond-shaped)
pch <- rep(c(15, 16, 17), c(16, 16, 16))
plot(axis1, axis2, main="My plot", xlab="Axis 1", ylab="Axis 2",
     col=c(Category, col), pch=pch, cex=cex)
legend(4, 12.5, c("NorthAmerica", "SouthAmerica", "Asia"), col = col,
       pch = pch, pt.cex = cex, title = "Region")
I would also like control over which kind of point is used for different
levels of Region. Something like this:
legend(4, 12.5, col(levels(Category), Asia="red", NorthAmerica="blue",
       SouthAmerica="green"))
Thanks,
Kumar
Region axis1 axis2
NorthAmerica 5 14
NorthAmerica 8 13
NorthAmerica 8 11
NorthAmerica 6 11
NorthAmerica 5 13
SouthAmerica 8 17
SouthAmerica 7 16
SouthAmerica 7 13
SouthAmerica 8 14
SouthAmerica 6 17
Asia 7 13
Asia 6 15
Asia 7 14
Asia 5 13
Asia 4 16
[[alternative HTML version deleted]]
------------------------------
Message: 121
Date: Fri, 6 Apr 2012 16:27:51 +0800
From: jpm miao <miaojpm@gmail.com>
To: r-help@r-project.org
Subject: [R] Time series - year on year growth rate
Message-ID:
<CABcx46De0SXsFOxrVKh8WMFfrU+aaf=rxUN4XNWWewdMxC7SWw@mail.gmail.com>
Content-Type: text/plain
Hello,
Is there a function in R that calculates the year-on-year growth rate of
some time series?
In EView the function is @pchy.
Thanks,
miao
[[alternative HTML version deleted]]
------------------------------
Message: 122
Date: Fri, 06 Apr 2012 17:01:13 +0800
From: Jinsong Zhao <jszhao@yeah.net>
To: "Richard M. Heiberger" <rmh@temple.edu>
Cc: "r-help@R-project.org" <r-help@r-project.org>
Subject: Re: [R] Fisher's LSD multiple comparisons in a two-way ANOVA
Message-ID: <4F7EB0D9.5060208@yeah.net>
Content-Type: text/plain; charset=windows-1252; format=flowed
On 2012-04-05 10:49, Richard M. Heiberger wrote:
> Here is your example. The table you displayed in gigawiz ignored the
> two-way factor structure
> and interpreted the data as a single factor with 6 levels. I created
> the interaction of
> a and b to get that behavior.
> ## your example, with data stored in a data.frame
> tmp <- data.frame(x=c(76, 84, 78, 80, 82, 70, 62, 72,
> 71, 69, 72, 74, 66, 74, 68, 66,
> 69, 72, 72, 78, 74, 71, 73, 67,
> 86, 67, 72, 85, 87, 74, 83, 86,
> 66, 68, 70, 76, 78, 76, 69, 74,
> 72, 72, 76, 69, 69, 82, 79, 81),
> a=factor(rep(c("A1", "A2"), each =
24)),
> b=factor(rep(c("B1", "B2",
"B3"), each=8, times=2)))
> x.aov <- aov(x ~ a*b, data=tmp)
> summary(x.aov)
> ## your request
> require(multcomp)
> tmp$ab <- with(tmp, interaction(a, b))
> xi.aov <- aov(x ~ ab, data=tmp)
> summary(xi.aov)
> xi.glht <- glht(xi.aov, linfct=mcp(ab="Tukey"))
> confint(xi.glht)
>
> ## graphs
> ## boxplots
> require(lattice)
> bwplot(x ~ ab, data=tmp)
> ## interaction plot
> ## install.packages("HH") ## if you don't have HH yet
> require(HH)
> interaction2wt(x ~ a*b, data=tmp)
>
Thank you very much for the demonstration.
There is still a small difference between the results of glht() and the
table displayed in gigawiz. I have tried my best to figure it out, but failed...
By the way, I have a similar question. I built an ANOVA model:
activity ~ pH * I * f
Df Sum Sq Mean Sq F value Pr(>F)
pH 1 1330 1330 59.752 2.15e-10 ***
I 1 137 137 6.131 0.0163 *
f 6 23054 3842 172.585 < 2e-16 ***
pH:I 1 152 152 6.809 0.0116 *
pH:f 6 274 46 2.049 0.0741 .
I:f 6 5015 836 37.544 < 2e-16 ***
pH:I:f 6 849 142 6.356 3.82e-05 ***
Residuals 56 1247 22
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Now, how can I do a multi-comparison on `pH:I'?
Do I need to do a separate ANOVA for each level of `pH' or `I', as in
demo("MMC.WoodEnergy", "HH"), and then do a multiple comparison on `I' or
`pH' within each separate ANOVA?
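Following the interaction() construction above, I suppose something like
this would give pairwise comparisons of the pH:I cell means (a sketch,
assuming the data are in a data frame `dat'):
dat$pHI <- with(dat, interaction(pH, I))
fitI <- aov(activity ~ pHI, data = dat)
confint(glht(fitI, linfct = mcp(pHI = "Tukey")))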
Thanks again.
Regards,
Jinsong
------------------------------
Message: 123
Date: Fri, 6 Apr 2012 11:08:06 +0200
From: Petr PIKAL <petr.pikal@precheza.cz>
To: Kumar Mainali <kpmainali@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] Legend based on levels of a variable
Message-ID:
<OF578B6EA6.6872255A-ONC12579D8.00319316-C12579D8.00325016@precheza.cz>
Content-Type: text/plain; charset="US-ASCII"
Hi
>
> I have a bivariate plot of axis2 against axis1 (data below). I would like
> to use a different size, type and color for the points coming from each
> region. For some reason, I cannot get it done. Below is my code.
>
> col <- rep(c("blue", "red", "darkgreen"),
c(16, 16, 16))
> ## Choose different size of points
> cex <- rep(c(1, 1.2, 1), c(16, 16, 16))
> ## Choose the form of the points (square, circle, triangle, diamond-shaped)
> pch <- rep(c(15, 16, 17), c(16, 16, 16))
>
> plot(axis1, axis2, main="My plot", xlab="Axis 1", ylab="Axis 2",
>      col=c(Category, col), pch=pch, cex=cex)
> legend(4, 12.5, c("NorthAmerica", "SouthAmerica", "Asia"), col = col,
>        pch = pch, pt.cex = cex, title = "Region")
>
> I would also like control over which kind of point is used for different
> levels of Region. Something like this:
> legend(4, 12.5, col(levels(Category), Asia="red", NorthAmerica="blue",
>        SouthAmerica="green"))
So why not use Region and/or Category directly for automatic point
colouring/size/type?
Without your data I can only use a built-in data set:
with(iris, plot(Sepal.Length, Sepal.Width, col= as.numeric(Species)))
legend("topright", legend=levels(iris$Species), pch=19, col=1:3)
Regards
Petr
>
> Thanks,
> Kumar
>
> Region axis1 axis2
> NorthAmerica 5 14
> NorthAmerica 8 13
> NorthAmerica 8 11
> NorthAmerica 6 11
> NorthAmerica 5 13
> SouthAmerica 8 17
> SouthAmerica 7 16
> SouthAmerica 7 13
> SouthAmerica 8 14
> SouthAmerica 6 17
> Asia 7 13
> Asia 6 15
> Asia 7 14
> Asia 5 13
> Asia 4 16
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 124
Date: Fri, 6 Apr 2012 11:27:19 +0200
From: Petr PIKAL <petr.pikal@precheza.cz>
To: "Pedro Henrique" <lamarao@superig.com.br>
Cc: r-help@r-project.org
Subject: [R] Hi: Help Using Spreadsheets
Message-ID:
<OFE132A9B9.322E7F78-ONC12579D8.00325443-C12579D8.00341257@precheza.cz>
Content-Type: text/plain; charset="ISO-8859-2"
Hi
> Hello,
>
> I am a new user of R and I am trying to use the data I am reading from a
> spreadsheet.
> I installed the xlsReadWrite package and I am able to read data from
> these files, but how can I assign the columns to variables?
> E.g.:
> as I read a spreadsheet like this one:
Maybe with read.xls? Did you read it into an object?
> A B
> 1 2
> 4 9
>
> I manually assign the values:
> A<-c(1,4)
> B<-c(2,9)
Why? If you read it into an object (e.g. mydata):
>
> to plot it on a graph:
> plot(A,B)
plot(mydata$A, mydata$B)
>
> or make histograms:
> hist(A)
hist(mydata$A)
>
> But actually I am using very large columns; does there exist any other
> way to do it automatically?
Yes. But before that you shall automatically read some introductory
documentation (like R-intro).
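For example (a sketch; the file name is made up):
library(xlsReadWrite)
mydata <- read.xls("mydata.xls")  # hypothetical file with columns A and B
plot(mydata$A, mydata$B)
hist(mydata$A)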
Regards
Petr
>
> Best Regards,
>
> Lamarão
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 125
Date: Fri, 06 Apr 2012 11:29:40 +0200
From: mlell08 <mlell08@googlemail.com>
To: Kumar Mainali <kpmainali@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Subject: Re: [R] Legend based on levels of a variable
Message-ID: <4F7EB784.5040500@gmail.com>
Content-Type: text/plain
He provided data, yet in an inconvenient way at the bottom of his post.
Kumar, please use dput() to provide data to the list, because it's much
easier to import:
dput(data) ## name data is made up by me
structure(list(Region = structure(c(2L, 2L, 2L, 2L, 2L, 3L, 3L,
3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L), .Label = c("Asia",
"NorthAmerica",
"SouthAmerica"), class = "factor"), axis1 = c(5L, 8L, 8L,
6L,
5L, 8L, 7L, 7L, 8L, 6L, 7L, 6L, 7L, 5L, 4L), axis2 = c(14L, 13L,
11L, 11L, 13L, 17L, 16L, 13L, 14L, 17L, 13L, 15L, 14L, 13L, 16L
)), .Names = c("Region", "axis1", "axis2"), class
= "data.frame", row.names = c(NA,
-15L))
[[alternative HTML version deleted]]
------------------------------
Message: 126
Date: Fri, 6 Apr 2012 11:29:06 +0200
From: Petr PIKAL <petr.pikal@precheza.cz>
To: MANI <mani_hku@hotmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] how to do piecewise linear regression in R?
Message-ID:
<OF9CC751BF.B7D4A31A-ONC12579D8.0033FFB8-C12579D8.00343C12@precheza.cz>
Content-Type: text/plain; charset="UTF-8"
Hi
Your post is rather screwed.
>
> [R] how to do piecewise linear regression in R?
Maybe segmented?
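If the threshold is known, the continuity restriction can also be built in
directly with lm() by reparameterizing with a hinge term: pmax(x - c, 0)
is zero below the threshold, so the two segments meet at c by construction
(a sketch with made-up data):
c0 <- 0.5                 # hypothetical known threshold
set.seed(1)
x <- runif(100)
y <- 1 + 2 * x + 3 * pmax(x - c0, 0) + rnorm(100, sd = 0.1)
fit <- lm(y ~ x + pmax(x - c0, 0))
coef(fit)  # intercept, low-regime slope, and slope change above c0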
Regards
Petr
>
>
> Dear all,
> I want to do piecewise CAPM linear regression in R:
> R_RiskArb - R_f = (1 - δ)[α_MktLow + β_MktLow(R_Mkt - R_f)]
>                  + δ[α_MktHigh + β_MktHigh(R_Mkt - R_f)]
>
> where δ is a dummy variable that is one if the excess return on the
> value-weighted CRSP index is above a threshold level and zero otherwise,
> and at the same time add the restriction:
>
> α_MktLow + β_MktLow × Threshold = α_MktHigh + β_MktHigh × Threshold
>
> to ensure continuity.
> But I do not know how to add this restriction in R; could you help me
> with this?
> Thanks a lot!
> Eunice
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 127
Date: Fri, 6 Apr 2012 11:59:37 +0200
From: Berend Hasselman <bhh@xs4all.nl>
To: jpm miao <miaojpm@gmail.com>
Cc: r-help@r-project.org
Subject: Re: [R] Time series - year on year growth rate
Message-ID: <01F9F23B-630D-49B2-9CFA-9C70D90F4422@xs4all.nl>
Content-Type: text/plain; charset=us-ascii
On 06-04-2012, at 10:27, jpm miao wrote:
> Hello,
>
> Is there a function in R that calculates the year-on-year growth rate of
> some time series?
>
> In EView the function is @pchy.
This might do what you need
pchy <- function(x) {
if(!is.ts(x)) stop("x is not a timeseries")
x.freq <- tsp(x)[3]
if(!(x.freq %in% c(1,2,4,12))) stop("Invalid frequency of timeseries x (must be 1, 2, 4, 12)")
y <- diff(x,lag=x.freq)/lag(x,-x.freq)
return(y)
}
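For example, on a made-up quarterly series:
set.seed(1)
x <- ts(100 * cumprod(1 + rnorm(12, 0.01, 0.005)),
        start = c(2009, 1), frequency = 4)
pchy(x)  # eight year-on-year growth rates, the first at 2010 Q1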
Berend
------------------------------
_______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 110, Issue 7
**************************************
[[alternative HTML version deleted]]