Hi all,
does anyone know how to automatically update starting values in R?
I'm fitting multiple nonlinear models and would like to know how I can
update starting values without having to type them in.
Thanks, all
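One common approach (sketched here with an invented model and simulated data, so the formula and variable names are illustrative only) is to seed each nls() fit with the coefficients of the previous fit via coef(), so the starting values update themselves:

```r
## Fit the same nonlinear model to several datasets, reusing each fit's
## coefficients as the next fit's starting values (invented example data).
set.seed(1)
datasets <- lapply(1:3, function(i) {
  x <- runif(30, 0, 5)
  data.frame(x = x, y = 2 * exp(0.3 * x) + rnorm(30, sd = 0.2))
})

start <- list(a = 1, b = 0.1)          # only the first fit needs typed values
fits <- vector("list", length(datasets))
for (i in seq_along(datasets)) {
  fits[[i]] <- nls(y ~ a * exp(b * x), data = datasets[[i]], start = start)
  start <- as.list(coef(fits[[i]]))    # carry estimates forward
}
```

If the datasets are similar, the carried-forward values are usually close enough for nls() to converge quickly; for heterogeneous data a self-starting model (e.g. SSlogis) may be more robust.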
--- On Fri, 12/26/08, r-help-request@r-project.org
<r-help-request@r-project.org> wrote:
From: r-help-request@r-project.org <r-help-request@r-project.org>
Subject: R-help Digest, Vol 70, Issue 26
To: r-help@r-project.org
Date: Friday, December 26, 2008, 6:00 AM
Send R-help mailing list submissions to
r-help@r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request@r-project.org
You can reach the person managing the list at
r-help-owner@r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. Re: Implementing a linear restriction in lm() (Ravi Varadhan)
2. p(H0|data) for lm/lmer-objects R (Leo Gürtler)
3. Re: beginner data.frame question (Oliver Bandel)
4. Re: 4 questions regarding hypothesis testing, survey package,
ts on samples, plotting (Thomas Lumley)
5. Re: How can I avoid nested 'for' loops or quicken the
process? (Oliver Bandel)
6. Re: 4 questions regarding hypothesis testing, survey package,
ts on samples, plotting (Peter Dalgaard)
7. Re: 4 questions regarding hypothesis testing, survey package,
ts on samples, plotting (Ben Bolker)
8. Re: Class and object problem (Ben Bolker)
9. Re: 4 questions regarding hypothesis testing, survey package,
ts on samples, plotting (Peter Dalgaard)
10. Re: p(H0|data) for lm/lmer-objects R (Daniel Malter)
11. Re: Implementing a linear restriction in lm() (Daniel Malter)
12. Re: p(H0|data) for lm/lmer-objects R (Andrew Robinson)
13. Percent damage distribution (diegol)
14. Re: ggplot2 Xlim (Wayne F)
15. Re: creating standard curves for ELISA analysis (1Rnwb)
16. Re: Percent damage distribution (Ben Bolker)
17. Re: How can I avoid nested 'for' loops or quicken the
process? (Prof Brian Ripley)
18. Re: Percent damage distribution (Prof Brian Ripley)
19. Upgrading R causes Tinn-R to freeze. (rkevinburton@charter.net)
----------------------------------------------------------------------
Message: 1
Date: Thu, 25 Dec 2008 11:39:33 -0500
From: Ravi Varadhan <rvaradhan@jhmi.edu>
Subject: Re: [R] Implementing a linear restriction in lm()
To: Serguei Kaniovski <Serguei.Kaniovski@wifo.ac.at>
Cc: r-help@stat.math.ethz.ch
Message-ID: <f5bef5b03d6.495370f5@johnshopkins.edu>
Content-Type: text/plain; charset=iso-8859-1
Hi,
You could use the "offset" argument in lm(). Here is an example:
set.seed(123)
x <- runif(50)
beta <- 1
y <- 2 + beta*x + rnorm(50)
model1 <- lm (y ~ x)
model2 <- lm (y ~ 1, offset=x)
anova(model2, model1)
Best,
Ravi.
____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan@jhmi.edu
----- Original Message -----
From: Serguei Kaniovski <Serguei.Kaniovski@wifo.ac.at>
Date: Wednesday, December 24, 2008 9:39 pm
Subject: [R] Implementing a linear restriction in lm()
To: r-help@stat.math.ethz.ch
>
> Dear All!
>
> I want to test a coefficient restriction beta=1 in a univariate model
> lm(y~x). Entering
> lm((y-x)~1) does not help since the anova test requires the same dependent
> variable. What is the right way to proceed?
>
> Thank you for your help and merry xmas,
> Serguei Kaniovski
> ________________________________________
> Austrian Institute of Economic Research (WIFO)
>
> P.O.Box 91                          Tel.: +43-1-7982601-231
> 1103 Vienna, Austria                Fax: +43-1-7989386
>
> Mail: Serguei.Kaniovski@wifo.ac.at
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
>
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 2
Date: Thu, 25 Dec 2008 19:51:36 +0100
From: Leo Gürtler <leog@anicca-vijja.de>
Subject: [R] p(H0|data) for lm/lmer-objects R
To: r-help@stat.math.ethz.ch
Message-ID: <4953D638.4090507@anicca-vijja.de>
Content-Type: text/plain; charset=ISO-8859-15
Dear R-List,
I am interested in the Bayesian view on parameter estimation for
multilevel models and ordinary regression models. AFAIU, traditional
frequentist p-values give information about p(data_or_extreme|H0).
AFAIU further, p-values in the Fisherian sense are also not alpha/type
I errors and therefore give no information about future replications.
However, p(data_or_extreme|H0) is not really interesting for social
science research questions (psychology). Much more interesting is
p(H0|data). Is there a way or formula to calculate these probabilities
of the H0 (or another hypothesis) from lm-/lmer objects in R?
Yes, I know that multi-level modeling as well as regression can be done
in a purely Bayesian way. However, I am not capable of Bayesian
statistics, therefore I ask that question. I am starting to learn it a
little bit.
The frequentist literature - of course - does not cover that topic.
Thanks a lot,
best,
leo gürtler
------------------------------
Message: 3
Date: Thu, 25 Dec 2008 19:49:58 +0000 (UTC)
From: Oliver Bandel <oliver@first.in-berlin.de>
Subject: Re: [R] beginner data.frame question
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20081225T193443-238@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
John Fox <jfox <at> mcmaster.ca> writes:
>
> Dear Kirk,
>
> Actually, co2 isn't a data frame but rather a "ts"
> (time series) object. A nice thing about R is that you can query and
> examine objects:
>
> > class(co2)
> [1] "ts"
[...]
Yes.
And with
> frequency(co2)
[1] 12
one gets "the number of observations per unit of time".
When one sets the parameters "start" and "frequency",
"start" and "deltat", or
"start" and "end" of a time series, one sets the values
used, which means that functions relying on those values
will also be controlled by this.
> start(co2)
[1] 1959 1
> end(co2)
[1] 1997 12
Rearranging by creating a new ts-object with different
time parameters:
> new_co2 <- ts( co2, frequency=1, start=1959 )
> start(new_co2)
[1] 1959 1
> end(new_co2)
[1] 2426 1
.... and the way back:
> old_co2 <- ts( new_co2, frequency=12, start=1959 )
> start(old_co2)
[1] 1959 1
> end(old_co2)
[1] 1997 12
Using plot on those values will result in different plots.
Why the mentioned test data shows up differently with summary,
[[elided Yahoo spam]]
Having the test data (or the way it was constructed)
would help in helping.
Ciao,
Oliver
------------------------------
Message: 4
Date: Thu, 25 Dec 2008 12:00:21 -0800 (PST)
From: Thomas Lumley <tlumley@u.washington.edu>
Subject: Re: [R] 4 questions regarding hypothesis testing, survey
package, ts on samples, plotting
To: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Cc: r-help@r-project.org, Ben Bolker <bolker@ufl.edu>
Message-ID:
<Pine.LNX.4.43.0812251200210.24756@hymn11.u.washington.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Wed, 24 Dec 2008, Peter Dalgaard wrote:
> Ben Bolker wrote:
>>
>>
>> Khawaja, Aman wrote:
>>> I need to answer one of the questions in my open source test: What are
>>> the four questions asked about the parameters in hypothesis testing?
>>>
>>>
>>
>> Please check the posting guide.
>> * We don't answer homework questions ("open source" doesn't mean
>> that other people answer the questions for you, it means you can find
>> the answers outside your own head -- and in any case, we don't have
>> any way of knowing that the test is really open).
>> * this is not an R question but a statistics question
>> * please don't post the same question multiple times
>
>
> Besides, this is really unanswerable without access to your teaching
> material, which probably has a list of four questions somewhere...
Starting with 'Why is this parameter different from all other
parameters?', perhaps.
> It is a bit like the History question: "Who was what in what of whom?"
>
>
A traditional British equivalent is "Who dragged whom how many times
around the walls of where?", which does have just about enough context.
The R answer to the original post would probably be
1. Why aren't there any p-values in lmer()?
2. How do I extract p-values from lm()?
3. Can R do post-hoc tests?
4. Can R do tests of normality?
and in statistical consulting the questions might be
1. Doesn't that assume a Normal distribution?
2. Do you have a reference for that?
3. What was the power for that test?
4. Can you redo the test just in the left-handed avocado farmers[*]?
-thomas
[*] this particular subset (c) joel on software.
Thomas Lumley Assoc. Professor, Biostatistics
tlumley@u.washington.edu University of Washington, Seattle
------------------------------
Message: 5
Date: Thu, 25 Dec 2008 20:20:48 +0000 (UTC)
From: Oliver Bandel <oliver@first.in-berlin.de>
Subject: Re: [R] How can I avoid nested 'for' loops or quicken the
process?
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20081225T201648-168@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Bert Gunter <gunter.berton <at> gene.com> writes:
>
> FWIW:
>
> Good advice below! -- after all, the first rule of optimizing code is:
> Don't!
>
> For the record (yet again), the apply() family of functions (and their
> packaged derivatives, of course) are "merely" very carefully written for()
> loops: their main advantage is in code readability, not in efficiency gains,
> which may well be small or nonexistent. True efficiency gains require
> "vectorization", which essentially moves the for() loops from interpreted
> code to (underlying) C code (on the underlying data structures): e.g.
> compare rowMeans() [vectorized] with ave() or apply(..,1,mean).
[...]
The apply functions do bring speed advantages.
This is not only something I have read about:
I have used the apply functions and really got
results faster.
The reason is simple: an apply function does
in C what would otherwise be done at the level of R
with for loops.
Ciao,
Oliver
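For what it's worth, the disagreement is easy to put to the test: timing the same row-means task three ways shows where the real gains come from (a rough sketch; absolute timings vary by machine):

```r
## Compare an explicit for-loop, apply(), and the vectorized rowMeans()
## on the same task. apply() is itself an R-level loop; the big win
## comes from rowMeans(), whose loop runs in C.
m <- matrix(rnorm(1e6), nrow = 1e4)

system.time({                       # explicit for-loop
  out <- numeric(nrow(m))
  for (i in seq_len(nrow(m))) out[i] <- mean(m[i, ])
})

system.time(apply(m, 1, mean))      # apply(): a carefully written loop
system.time(rowMeans(m))            # vectorized: typically fastest
```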
------------------------------
Message: 6
Date: Thu, 25 Dec 2008 21:25:58 +0100
From: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Subject: Re: [R] 4 questions regarding hypothesis testing, survey
package, ts on samples, plotting
To: Thomas Lumley <tlumley@u.washington.edu>
Cc: r-help@r-project.org, Ben Bolker <bolker@ufl.edu>
Message-ID: <4953EC56.9010808@biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Thomas Lumley wrote:
> On Wed, 24 Dec 2008, Peter Dalgaard wrote:
>> It is a bit like the History question: "Who was what in what of whom?"
>>
>>
>
> A traditional British equivalent is "Who dragged whom how many times
> around the walls of where?", which does have just about enough context.
Yes. "Joshua, Israelites, seven, Jericho" is wrong by a hair....
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 7
Date: Thu, 25 Dec 2008 15:30:10 -0500
From: Ben Bolker <bolker@ufl.edu>
Subject: Re: [R] 4 questions regarding hypothesis testing, survey
package, ts on samples, plotting
To: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Cc: r-help@r-project.org, Thomas Lumley <tlumley@u.washington.edu>
Message-ID: <4953ED52.30801@ufl.edu>
Content-Type: text/plain; charset=ISO-8859-1
Peter Dalgaard wrote:
> Thomas Lumley wrote:
>> On Wed, 24 Dec 2008, Peter Dalgaard wrote:
>
>>> It is a bit like the History question: "Who was what in what of whom?"
>>>
>>>
>>
>> A traditional British equivalent is "Who dragged whom how many times
>> around the walls of where?", which does have just about enough context.
>
> Yes. "Joshua, Israelites, seven, Jericho" is wrong by a hair....
>
Hmmm. Achilles, Hector, ?, Troy.
http://en.wikipedia.org/wiki/Achilles:
Achilles chased Hector around the wall of Troy three times before
Athena, in the form of Hector's favorite and dearest brother, Deiphobus,
persuaded Hector to stop running and fight Achilles face to face. After
Hector realized the trick, he knew his death was inevitable and accepted
his fate. Hector, wanting to go down fighting, charged at Achilles with
his only weapon, his sword. Achilles got his vengeance, killing Hector
with a single blow to the neck. He then tied Hector's body to his
chariot and dragged it around the battlefield for nine days.
--
Ben Bolker
Associate professor, Biology Dep't, Univ. of Florida
bolker@ufl.edu / www.zoology.ufl.edu/bolker
GPG key: www.zoology.ufl.edu/bolker/benbolker-publickey.asc
------------------------------
Message: 8
Date: Thu, 25 Dec 2008 21:17:56 +0000 (UTC)
From: Ben Bolker <bolker@ufl.edu>
Subject: Re: [R] Class and object problem
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20081225T211712-883@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Odette Gaston <odette.gaston <at> gmail.com> writes:
>
> Dear all,
>
> I have a problem with accessing class attributes.
> I was unable to solve this
> yet, but someone may know how to solve it.
My best guess at your immediate problem (doing
things by hand) is that you're not using the
whole vector. From your example:
Delta <- c(m1 = 0, m2 = 1.8, m3 = 4.2, m4 = 6.2)
exp(-0.5*Delta)/sum(exp(-0.5*Delta))
m1 m2 m3 m4
0.63529363 0.25829111 0.07779579 0.02861947
In general the dRedging package at
http://www.zbs.bialowieza.pl/users/kamil/r/ can do these
problems (I hate to recommend this package because it
offers the danger of thoughtless convenience,
but if you really know that you want to enumerate
models and do IC-based model averaging it can save a
lot of time). At the moment, though, it doesn't work
with glmmML-based objects (you could ask the author
to extend it).
When I tried stepAIC it didn't really enumerate
all the models for me (that's not its purpose),
so I went through and enumerated them by hand. For example:
library(glmmML)
set.seed(1001)
a <- runif(100)
b <- runif(100)
c <- runif(100)
x <- runif(100)
n <- rep(20,100)
cluster <- factor(rep(1:5,20))
linpred <- a+b+c+x-2
y <- rbinom(100,prob=plogis(linpred),size=n)
data <- data.frame(y,a,b,c,x,n)
m <- list()
## full model
m[[1]] <- glmmML(cbind (y, n-y)~ x+a+b+c,
family = binomial, data, cluster)
## 3-term models
m[[2]] <- update(m[[1]],.~.-a) ## xbc
m[[3]] <- update(m[[1]],.~.-b) ## xac
m[[4]] <- update(m[[1]],.~.-c) ## xab
m[[5]] <- update(m[[1]],.~.-x) ## abc
## 2-term models
m[[6]] <- update(m[[2]],.~.-x) ## bc
m[[7]] <- update(m[[2]],.~.-b) ## xc
m[[8]] <- update(m[[2]],.~.-c) ## xb
m[[9]] <- update(m[[3]],.~.-x) ## ac
m[[10]] <- update(m[[3]],.~.-c) ## xa
m[[11]] <- update(m[[4]],.~.-x) ## ab
## 0-term models (intercept)
m[[12]] <- glmmML(cbind (y, n-y)~ 1, family = binomial, data, cluster)
m[[13]] <- update(m[[12]],.~.+a)
m[[14]] <- update(m[[12]],.~.+b)
m[[15]] <- update(m[[12]],.~.+c)
m[[16]] <- update(m[[12]],.~.+x)
## have to define logLik and AIC for glmmML objects
logLik.glmmML <- function(x) {
loglik <- (-x$deviance)/2
attr(loglik,"df") <- length(coef(x))
loglik
}
AIC.glmmML <- function(x) x$aic
library(bbmle)
## now it works (the answers are pretty trivial
## in this made-up case)
AICtab(m,sort=TRUE,weights=TRUE,delta=TRUE)
------------------------------
Message: 9
Date: Thu, 25 Dec 2008 22:20:38 +0100
From: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Subject: Re: [R] 4 questions regarding hypothesis testing, survey
package, ts on samples, plotting
To: Ben Bolker <bolker@ufl.edu>
Cc: r-help@r-project.org, Thomas Lumley <tlumley@u.washington.edu>
Message-ID: <4953F926.5040603@biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Ben Bolker wrote:
> Peter Dalgaard wrote:
>> Thomas Lumley wrote:
>>> On Wed, 24 Dec 2008, Peter Dalgaard wrote:
>>>> It is a bit like the History question: "Who was what in what of whom?"
>>>>
>>>>
>>> A traditional British equivalent is "Who dragged whom how many times
>>> around the walls of where?", which does have just about enough context.
>> Yes. "Joshua, Israelites, seven, Jericho" is wrong by a hair....
>>
>
> Hmmm. Achilles, Hector, ?, Troy.
>
> http://en.wikipedia.org/wiki/Achilles:
>
> Achilles chased Hector around the wall of Troy three times before
> Athena, in the form of Hector's favorite and dearest brother, Deiphobus,
> persuaded Hector to stop running and fight Achilles face to face. After
> Hector realized the trick, he knew his death was inevitable and accepted
> his fate. Hector, wanting to go down fighting, charged at Achilles with
> his only weapon, his sword. Achilles got his vengeance, killing Hector
> with a single blow to the neck. He then tied Hector's body to his
> chariot and dragged it around the battlefield for nine days.
>
I have
http://thanasis.com/achilles.htm
Achilles ignored Hector's dying wish to have his body returned to his
father Priam for ransom. Instead he fastened leather straps to the body
of Hector, secured them on his chariot and whipping up his immortal
horses Balius, Xanthus and Pedasus, dragged the corpse three times
around the walls of Troy, much to the dismay of the devastated Trojans.
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 10
Date: Thu, 25 Dec 2008 16:35:35 -0500
From: "Daniel Malter" <daniel@umd.edu>
Subject: Re: [R] p(H0|data) for lm/lmer-objects R
To: "'Leo Gürtler'" <leog@anicca-vijja.de>,
<r-help@stat.math.ethz.ch>
Message-ID: <200812252135.AHJ65697@md4.mail.umd.edu>
Content-Type: text/plain; charset="iso-8859-1"
This is very opaque to me. But if H0 is a null hypothesis (i.e., a hypothesis
about one or several coefficients in your model), then you can test linear
or nonlinear restrictions on the coefficients. Because your coefficients are
derived from your data, it appears to me you get something like
p(H0|data).
-------------------------
cuncta stricte discussurus
-------------------------
-----Original Message-----
From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
Behalf Of Leo Gürtler
Sent: Thursday, December 25, 2008 1:52 PM
To: r-help@stat.math.ethz.ch
Subject: [R] p(H0|data) for lm/lmer-objects R
Dear R-List,
I am interested in the Bayesian view on parameter estimation for multilevel
models and ordinary regression models. AFAIU, traditional frequentist
p-values give information about p(data_or_extreme|H0).
AFAIU further, p-values in the Fisherian sense are also not alpha/type I
errors and therefore give no information about future replications.
However, p(data_or_extreme|H0) is not really interesting for social science
research questions (psychology). Much more interesting is p(H0|data). Is
there a way or formula to calculate these probabilities of the H0 (or
another hypothesis) from lm-/lmer objects in R?
Yes, I know that multi-level modeling as well as regression can be done in a
purely Bayesian way. However, I am not capable of Bayesian statistics,
therefore I ask that question. I am starting to learn it a little bit.
The frequentist literature - of course - does not cover that topic.
Thanks a lot,
best,
leo gürtler
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 11
Date: Thu, 25 Dec 2008 16:42:27 -0500
From: "Daniel Malter" <daniel@umd.edu>
Subject: Re: [R] Implementing a linear restriction in lm()
To: "'Serguei Kaniovski'"
<Serguei.Kaniovski@wifo.ac.at>
Cc: r-help@stat.math.ethz.ch
Message-ID: <200812252142.DMB84001@md0.mail.umd.edu>
Content-Type: text/plain; charset="iso-8859-1"
If it is only for a single coefficient, you can just subtract your test value
from the coefficient and divide by the coefficient's standard error, which
gives you a t-value for the test (see Greene 2006).
Otherwise, look up "linear.hypothesis" in the "car" package.
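In code, the single-coefficient version might look like this (a minimal sketch with simulated data, testing H0: slope = 1; the car call is shown as a comment since that package must be installed separately):

```r
## t-test of H0: beta = 1 for the slope of a simple regression
set.seed(123)
x <- runif(50)
y <- 2 + x + rnorm(50)               # true slope is 1
fit  <- lm(y ~ x)
ct   <- coef(summary(fit))["x", ]    # Estimate, Std. Error, t value, p
tval <- (ct["Estimate"] - 1) / ct["Std. Error"]
pval <- 2 * pt(-abs(tval), df = df.residual(fit))

## equivalently, with the car package:
## library(car); linearHypothesis(fit, "x = 1")
```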
Cheers,
Daniel
-------------------------
cuncta stricte discussurus
-------------------------
-----Original Message-----
From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
Behalf Of Ravi Varadhan
Sent: Thursday, December 25, 2008 11:40 AM
To: Serguei Kaniovski
Cc: r-help@stat.math.ethz.ch
Subject: Re: [R] Implementing a linear restriction in lm()
Hi,
You could use the "offset" argument in lm(). Here is an example:
set.seed(123)
x <- runif(50)
beta <- 1
y <- 2 + beta*x + rnorm(50)
model1 <- lm (y ~ x)
model2 <- lm (y ~ 1, offset=x)
anova(model2, model1)
Best,
Ravi.
____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology School of Medicine Johns
Hopkins University
Ph. (410) 502-2619
email: rvaradhan@jhmi.edu
----- Original Message -----
From: Serguei Kaniovski <Serguei.Kaniovski@wifo.ac.at>
Date: Wednesday, December 24, 2008 9:39 pm
Subject: [R] Implementing a linear restriction in lm()
To: r-help@stat.math.ethz.ch
>
> Dear All!
>
> I want to test a coefficient restriction beta=1 in a univariate model
> lm (y~x). Entering
> lm((y-x)~1) does not help since the anova test requires the same
> dependent variable. What is the right way to proceed?
>
> Thank you for your help and merry xmas, Serguei Kaniovski
> ________________________________________
> Austrian Institute of Economic Research (WIFO)
>
> P.O.Box 91                          Tel.: +43-1-7982601-231
> 1103 Vienna, Austria                Fax: +43-1-7989386
>
> Mail: Serguei.Kaniovski@wifo.ac.at
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
>
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 12
Date: Fri, 26 Dec 2008 09:03:54 +1100
From: Andrew Robinson <A.Robinson@ms.unimelb.edu.au>
Subject: Re: [R] p(H0|data) for lm/lmer-objects R
To: "'Leo Gürtler'" <leog@anicca-vijja.de>
Cc: r-help@stat.math.ethz.ch
Message-ID: <20081225220354.GH1294@ms.unimelb.edu.au>
Content-Type: text/plain; charset=us-ascii
Dear Leo,
> Dear R-List,
>
> I am interested in the Bayesian view on parameter estimation for
multilevel> models and ordinary regression models.
You might find Gelman & Hill's recent book to be good reading, and
there is a book in the Use R! series that focuses on using R to perform
Bayesian analyses.
> AFAIU, traditional frequentist p-values give information about
> p(data_or_extreme|H0). AFAIU further, p-values in the Fisherian
> sense are also not alpha/type I errors and therefore give no
> information about future replications.
I don't think that the last comment is necessarily relevant nor is it
necessarily true.
> However, p(data_or_extreme|H0) is not really interesting for social
> science research questions (psychology). Much more interesting is
> p(H0|data).
That's fine, but first you have to believe that the statement has
meaning.
> Is there a way or formula to calculate these probabilities of the H0
> (or another hypothesis) from lm-/lmer objects in R?
See the books above. Note that in order to do so, you will need to
nominate a prior distribution of some kind.
> Yes I know that multi-level modeling as well as regression can be done
> in a purely Bayesian way. However, I am not capable of Bayesian
> statistics, therefore I ask that question. I am starting to learn it a
> little bit.
No offense, but it sounds to me like you want to have the Bayesian
omelette without breaking the Bayesian eggs ;). Certain kinds of
multi-level models are mathematically identical to certain kinds of
Empirical Bayes models, but that does not make them Bayesian (despite
what some people say). I caution against your implied goal of
obtaining Bayesian statistics without performing a Bayesian analysis.
Good luck,
Andrew
--
Andrew Robinson
Department of Mathematics and Statistics Tel: +61-3-8344-6410
University of Melbourne, VIC 3010 Australia Fax: +61-3-8344-4599
http://www.ms.unimelb.edu.au/~andrewpr
http://blogs.mbs.edu/fishing-in-the-bay/
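For a rough numerical illustration of the quantity Leo asks about (this is only a crude shortcut under strong assumptions, not a substitute for the full Bayesian analysis Andrew describes): with equal prior model odds, the BIC difference between two nested lm() fits yields an approximate Bayes factor, and from it an approximate posterior probability of the simpler model:

```r
## Approximate p(H0|data) for nested lm() fits via the BIC approximation
## to the Bayes factor (assumes equal prior model odds; a crude sketch)
set.seed(1)
x <- rnorm(100)
y <- 0.1 * x + rnorm(100)
m0 <- lm(y ~ 1)                        # H0: no effect of x
m1 <- lm(y ~ x)                        # H1: linear effect of x
bf01 <- exp((BIC(m1) - BIC(m0)) / 2)   # approximate Bayes factor for H0
p_h0 <- bf01 / (1 + bf01)              # posterior probability of H0
p_h0
```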
------------------------------
Message: 13
Date: Thu, 25 Dec 2008 14:20:29 -0800 (PST)
From: diegol <diegol81@gmail.com>
Subject: [R] Percent damage distribution
To: r-help@r-project.org
Message-ID: <21170344.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
R version: 2.7.0
Running on: WinXP
I am trying to model damage from fire losses (given that a loss occurred).
Since I have the individual insured amounts, rather than sampling dollar
damage from a continuous distribution ranging from 0 to infinity, I want to
sample from a percent damage distribution on 0-100%. One obvious solution
is runif(n, min=0, max=1), but this does not seem to be a good idea,
since I would not expect damage to be uniform.
I have not seen such a distribution in actuarial applications, and rather
than inventing one from scratch I thought I'd ask whether you know of one,
maybe from other disciplines, readily available in R.
Thank you in advance.
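One candidate from other fields is the beta distribution, which lives on (0,1) and is built into R; the shape parameters below are invented purely for illustration (fit them to real loss data, e.g. with MASS::fitdistr):

```r
## Simulate percent damage from a right-skewed beta distribution
## (shape parameters are illustrative, not fitted to real losses)
set.seed(42)
pct_damage <- rbeta(1000, shape1 = 0.5, shape2 = 2)
summary(pct_damage)
hist(pct_damage, breaks = 40,
     main = "Simulated percent damage", xlab = "fraction of insured amount")
```

Note that empirical damage-ratio data often also have point masses at 0% and 100% (total losses), which a plain beta cannot capture; a mixture would be needed for that.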
-----
~~~~~~~~~~~~~~~~~~~~~~~~~~
Diego Mazzeo
Actuarial Science Student
Facultad de Ciencias Económicas
Universidad de Buenos Aires
Buenos Aires, Argentina
--
View this message in context:
http://www.nabble.com/Percent-damage-distribution-tp21170344p21170344.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 14
Date: Thu, 25 Dec 2008 14:43:45 -0800 (PST)
From: Wayne F <wdf61@mac.com>
Subject: Re: [R] ggplot2 Xlim
To: r-help@r-project.org
Message-ID: <21170453.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
I'm just a ggplot2 beginner, but...
It seems to me that you're mixing continuous and factor variables/concepts.
It looks to me as if ForkLength and Number are continuous values. But
you'll need to convert ForkLength into a factor before using geom="bar".
I do that and the graph "works" but the bars are extremely busy, which I
assume is what you mean by "crowded".
As I try several different things, I'm seeing error messages. Are you not
seeing error messages?
Is the bottom line that you simply want to display some continuous data
in a histogram-ish style, and you don't like the default "binning" of
Number for each of many ForkLengths?
If you simply use geom="line", things look clear and simple, no need to
bin or simplify or...
If you do end up using geom="bar", I believe the mistake you're making --
and I see an error message when I try -- is that you are using
scale_x_continuous whereas the X axis is discrete, so you should be using
scale_x_discrete. But then it will take some R magic to combine your
"bins" into wider bins so you get a "less crowded" look.
Or perhaps I'm misunderstanding?
Wayne
Felipe Carrillo wrote:
>
> Hi: I need some help.
> I am plotting a bar graph but I can't adjust my x axis scale.
> I use this code:
> i <- qplot(ForkLength,Number,data=FL,geom="bar")
> i + geom_bar(colour="blue",fill="grey65") # too crowded
>
> FL_dat <- ggplot(FL,aes(x=ForkLength,y=Number)) +
> geom_bar(colour="green",fill="grey65")
> FL_dat + scale_x_continuous(limits=c(20,170)) # Can't see anything
>
> FL Number
> 29 22.9
> 30 63.4
> 31 199.3
> 32 629.6
> 33 2250.1
> ...
>
--
View this message in context:
http://www.nabble.com/ggplot2-Xlim-tp21161660p21170453.html
Sent from the R help mailing list archive at Nabble.com.
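The "R magic" for wider bins can be sketched as follows (current ggplot2 syntax; the FL data frame here is cut down to the five rows quoted in the post, and the 10-unit bin width is an arbitrary choice):

```r
library(ggplot2)
FL <- data.frame(ForkLength = c(29, 30, 31, 32, 33),
                 Number     = c(22.9, 63.4, 199.3, 629.6, 2250.1))

## bars at each ForkLength; x stays continuous, so xlim() works
ggplot(FL, aes(x = ForkLength, y = Number)) +
  geom_col(fill = "grey65", colour = "blue") +
  xlim(20, 170)

## or aggregate into wider bins first to reduce crowding
FL$bin <- cut(FL$ForkLength, breaks = seq(20, 170, by = 10))
agg <- aggregate(Number ~ bin, data = FL, sum)
ggplot(agg, aes(x = bin, y = Number)) + geom_col(fill = "grey65")
```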
------------------------------
Message: 15
Date: Thu, 25 Dec 2008 07:26:46 -0800 (PST)
From: 1Rnwb <sbpurohit@gmail.com>
Subject: Re: [R] creating standard curves for ELISA analysis
To: r-help@r-project.org
Message-ID: <21168216.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thank you for your suggestions. I am sorry that I forgot to include the
concentrations of the standards:
http://www.nabble.com/file/p21168216/ds2_panelA_p8_B3_dil4x.csv
ds2_panelA_p8_B3_dil4x.csv
The first standard (A1, A2) is 67000, and the dilution series is created by
diluting it 1/3 at each step. I am reposting the full absorbance data once
again to give a full idea of the output file created by the ELISA software. I
am also uploading the sample file for a better understanding; serum samples
in this case are diluted 4 times.
Location Sample ENA-78(37) FgF(54) G-CSF(58) GM-CSF(71) IFNg(75) IL-10(50)
IL-17(20) IL-1b(6) IL-1ra(16) IL-2(17) IL-4(21) IL-5(09) IL-6(32) IL-8(36)
MCP1(78) MIP-1a(59) MIP-1b(74) TNFa(77) VEGF(52)
A1 s1 6923 12667.5 4644 3247 11878 11648 4142.5 6536.5 5409 7057.5 5146
10921 5437 7968 6590.5 11497.5 8358 8088.5 7721
B1 s2 4093 9680 2535.5 2000.5 8392 7017 2135 3913 3162 4226.5 2757 8132.5
2907 6416 6216 8332 5625.5 4364 4225
C1 s3 1689.5 4586 929 1020 4313 3549.5 961.5 2220.5 992 1585 1289.5 4309
1294 3748 4315 2780.5 2504 2043 1268
D1 s4 642 1440 352 418 1769 1327.5 318.5 824 409 528 399 1384 664 1693 1461
651 803 576.5 720
E1 s5 228.5 319.5 141 143 741 590.5 170 385 155 230 114 503.5 218 701 493
237 245 236 320
F1 s6 94 42 65 67 301 271 60 147 78 99.5 28 156.5 106 397 128 74 65 108 154
G1 s7 43 11 36 36 109 94 30.5 60.5 39 52 8 48 28 163 39 46 28 37 73.5
H1 b 15 2 12 15 7 4 3 4 19 6 2.5 0 0.5 78 23 12 16.5 3 30
A2 s1 6317 12543.5 4743 3757 10073.5 11016 4432 6990 5019 6687 4578 9856
5589.5 7265 6533 11368.5 7486 7503 7823
B2 s2 4487 9343.5 2114 2029 8300 6541 2027.5 4099 2986.5 3826 2857 7192.5
3197 5786 6386 7741 5086.5 4560.5 4409
C2 s3 1577 4024.5 942 1041 4035 3093.5 1098 1943 1133 1672 1263 3706 1421
3223 3729 2681 2065 1717 1453.5
D2 s4 609 1371.5 366.5 421.5 1884 1397 361 944 422 631 496 1442 535 1646
1523 766 791 718 723
E2 s5 234 304.5 143 165 760 541.5 160 358.5 163.5 249.5 111 459 222.5 765
416 188 215 235 281
F2 s6 90 44 64 68 268.5 218 55 135 73.5 102 25 140 101 304 104 72.5 57 87
120.5
G2 s7 39 9 34 35 90 90 31.5 57 38 47.5 9 42.5 25 133 38 42 29 33 70.5
H2 b 12 1.5 14 12 8 5 3 5 15 5 1.5 0 0.5 79.5 23.5 8 15 1 33
A3 1 683 5 23 23 16 10 16 9 66 10 6 4 12 653 641.5 23 22 14 182.5
B3 2 523 7 23 18 19.5 11 11 16 59.5 15.5 6 2 8.5 1789 369.5 26 28 16 140
C3 3 686.5 4 26 18 12 6 15.5 18.5 118 17 4 2 10 2040.5 714.5 20 28 17.5
123.5
D3 4 1640.5 5 71 17 17 9 13 16 564.5 13 5 4 18 1258 957 31 164 24.5 291
E3 5 158.5 5 34.5 20 15.5 8 13 13.5 75 13 4.5 4 50.5 1075 330 20.5 23 11 87
F3 6 862 5 21 22 18 10 14.5 12 58 18 4 2 7 2207 555.5 23 27 19 124
G3 7 710 6 30 16.5 19 6 13 14.5 105 17 6 2 12 1755.5 557 23 32 13 135
H3 8 1047 4 41 20 16 10.5 14 12 111 11 4.5 4 10.5 1690.5 404.5 23 50.5 23
212
A4 9 512.5 6.5 22 22 18 11 11 30 167 13 4 3 5 2729 1420 37 48 18 333
B4 10 979.5 5 20 27 19 2 18 15 122 14 4 2 7 1581 496 18 35 20 94
C4 11 270 6 23 20.5 19.5 9 14.5 68 656 15 3 4 79 5995 2964 50 40.5 17 198.5
D4 12 207 4 27 21 19 10 14 11 39 16 5 3 6.5 1622.5 311 25 25 15 181
E4 13 367 7.5 25 21 19 12 10 13 50 12 7 1 8.5 1395.5 718.5 24 30 19 219
F4 14 462 5 23 20 19 7.5 14 10 107 14 4 0 10 1715.5 484.5 23 22 19 265
G4 15 441.5 5 19 20 18 7 16 29 271 12 5 1 10 6917 6325 24.5 32 15.5 156
H4 16 521 7 38 22 18 10 14 11 164 16 4 3 23 1967 744 25 34.5 15 202
A5 17 759 5 18 21 16.5 10 10 10 55 11 4 1.5 16 1731.5 752 19 30 12 288
B5 18 624 6 22.5 20 21 12.5 11 12 52 12.5 4 2 8 2329.5 533 23 30.5 15 125
C5 19 735 5 21 19 14 10 17 11.5 291.5 9 5 3 10 773 1682.5 26 33 17.5 67
D5 20 450 5 25 16 15.5 8 17 12 65 12 5 3.5 9 1970 335 20 23 14 110
E5 21 405 5 21 18 14 7 12 10 139 13.5 5 0 8.5 1318 433 25.5 33.5 14 89
F5 22 155 3.5 12.5 10 12 4 35 6 24 4.5 3 2 8.5 391 257 19 114.5 8 104
G5 23 472 6 23 17 18 6 12 38.5 348 11 3 2 7 2764 1612 39 967 20 197
H5 24 326 5.5 20 19 17 7.5 13.5 11 44 13 5 3 66 1579 272.5 24 24 13 152.5
A6 25 341 5 24 22 15 8 13.5 13 68.5 12 4 2 7 1591 483 22 34 11 84
B6 26 460.5 5 21 23 16 10 11 160 677.5 11 5 2 23 5326 1495 46.5 62 19 138
C6 27 454 4 32.5 16.5 15 9 10.5 14 104.5 10.5 3 1 50 1468 1459 25.5 38 17 142
D6 28 604 6 27 18 16.5 7 14.5 37 950.5 12.5 5 4 14 5643 5980 24 36 18 324.5
E6 29 491 7 22.5 18 19 8 13.5 23 240 17 4 1 11 3802.5 1902 30.5 47.5 20 297
F6 30 414 4 24.5 20 20.5 9 13 14 39 16 3 3 6 1384.5 585.5 23 32 13 95
G6 31 423 5.5 21 19.5 19 9 16 299 1428 15 5 2 49 6343 6018 160.5 335 11.5 211
H6 32 286 6 28 18 10 9 14 13 108 13.5 3 2 27.5 1369 808 20 32 237 70
A7 33 874.5 6 23.5 20 16 8 12.5 65 588 6 5.5 3 17 5915 5098 36 81 23 229
B7 34 1211 3 23 20 16 33 12 9.5 78 9 4.5 2 8 2097 693.5 16.5 28 13 274.5
C7 35 257 4.5 25.5 17 16.5 9 12 10 52 10 5 3 7 1456 750 24 23 11 70.5
Thanks for the help
1Rnwb wrote:
> Hello R gurus,
>
> I am a newbie to R. In my research work I usually generate a lot of ELISA
> data in the form of absorbance values. I usually use Excel to calculate
> the concentrations of unknowns, but it is too tedious and manual,
> especially when I have hundreds of files to process. I would appreciate
> some help in creating an R script to do this with minimal manual input.
> A1-G1 and A2-G2 are serially diluted standards; H1 and H2 are blanks.
> A3 to H12 are serum samples. I am pasting the structure of my data below:
>
>
>
> A1 14821
> B1 11577
> C1 5781
> D1 2580
> E1 902
> F1 264
> G1 98
> H1 4
> A2 14569.5
> B2 11060
> C2 5612
> D2 2535
> E2 872
> F2 285
> G2 85
> H2 3
> A3 1016
> B3 2951.5
> C3 547
> D3 1145
> E3 4393
> F3 4694
> G3 1126
> H3 1278
> A4 974.5
> B4 3112.5
> C4 696.5
> D4 2664.5
> E4 184.5
> F4 1908
> G4 108.5
> H4 1511
> A5 463.5
> B5 1365
> C5 816
> D5 806
> E5 1341
> F5 1157
> G5 542.5
> H5 749
>
>
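For what it's worth, here is a minimal sketch of one way to automate this in R, using the self-starting four-parameter logistic model stats::SSfpl so that nls() supplies its own starting values. The concentration column below is a hypothetical dilution series invented for illustration (the original post gives only absorbances), so substitute your real standard concentrations:

```r
## Standards: absorbance at known concentrations. The conc values here
## are an assumed serial dilution, NOT from the original post.
std <- data.frame(
  conc = c(1000, 500, 250, 125, 62.5, 31.25, 15.625),
  abs  = c(14821, 11577, 5781, 2580, 902, 264, 98)   # A1-G1 from the post
)

# SSfpl models response vs log-concentration; no start= argument needed
fit <- nls(abs ~ SSfpl(log(conc), A, B, xmid, scal), data = std)

# Invert the fitted 4PL curve to map an unknown absorbance back to a
# concentration: x = xmid + scal * log((y - A) / (B - y)), conc = exp(x)
invert4pl <- function(y, fit) {
  p <- coef(fit)
  exp(p["xmid"] + p["scal"] * log((y - p["A"]) / (p["B"] - y)))
}

unknown_abs <- c(1016, 2951.5, 547)   # e.g. A3, B3, C3
invert4pl(unknown_abs, fit)
```

Because SSfpl computes its own initial estimates, the same call can be looped over hundreds of files without typing starting values for each one.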
--
View this message in context:
http://www.nabble.com/creating-standard-curves-for-ELISA-analysis-tp20917182p21168216.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 16
Date: Fri, 26 Dec 2008 00:29:17 +0000 (UTC)
From: Ben Bolker <bolker@ufl.edu>
Subject: Re: [R] Percent damage distribution
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20081226T002735-750@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
diegol <diegol81 <at> gmail.com> writes:
>
>
> R version: 2.7.0
> Running on: WinXP
>
> I am trying to model damage from fire losses (given that the loss
> occurred). Since I have the individual insured amounts, rather than
> sampling dollar damage from a continuous distribution ranging from 0 to
> infinity, I want to sample from a percent damage distribution from
> 0-100%. One obvious solution is to use runif(n, min=0, max=1), but this
> does not seem to be a good idea, since I would not expect damage to be
> uniform.
>
Beta distribution (rbeta(...)) or logit-normal distribution
(plogis(rnorm(...))). See e.g.
Smithson, Michael, and Jay Verkuilen. 2006. A better lemon squeezer?
Maximum-likelihood regression with beta-distributed dependent variables.
Psychological Methods 11, no. 1 (March): 54-71. doi:2006-03820-004.
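Both suggestions take only a couple of lines to try; the shape and location parameters below are arbitrary choices for illustration, not estimates from any data:

```r
set.seed(1)
n <- 10000

# Beta distribution: support (0, 1); shape parameters chosen arbitrarily
x_beta <- rbeta(n, shape1 = 2, shape2 = 5)

# Logit-normal: push normal draws through the inverse logit, which also
# yields values strictly inside (0, 1)
x_logit <- plogis(rnorm(n, mean = -1, sd = 1))

range(x_beta)    # within (0, 1)
range(x_logit)   # within (0, 1)
mean(x_beta)     # close to shape1/(shape1+shape2) = 2/7
```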
------------------------------
Message: 17
Date: Fri, 26 Dec 2008 08:44:19 +0000 (GMT)
From: Prof Brian Ripley <ripley@stats.ox.ac.uk>
Subject: Re: [R] How can I avoid nested 'for' loops or quicken the
process?
To: Oliver Bandel <oliver@first.in-berlin.de>
Cc: r-help@stat.math.ethz.ch
Message-ID: <alpine.LFD.2.00.0812260833330.4072@gannet.stats.ox.ac.uk>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Thu, 25 Dec 2008, Oliver Bandel wrote:
> Bert Gunter <gunter.berton <at> gene.com> writes:
>
>>
>> FWIW:
>>
>> Good advice below! -- after all, the first rule of optimizing code is:
>> Don't!
>>
>> For the record (yet again), the apply() family of functions (and their
>> packaged derivatives, of course) are "merely" very carefully written
>> for() loops: their main advantage is in code readability, not in
>> efficiency gains, which may well be small or nonexistent. True
>> efficiency gains require "vectorization", which essentially moves the
>> for() loops from interpreted code to (underlying) C code (on the
>> underlying data structures): e.g. compare rowMeans() [vectorized] with
>> ave() or apply(..,1,mean).
> [...]
>
> The apply-functions do bring speed advantages.
>
> This is not only something I have read:
> I have used the apply-functions and really did get
> results faster.
>
> The reason is simple: an apply-function does
> in C what would otherwise be done at the R level
> with for-loops.
Not true of apply(): true of lapply() and hence sapply(). I'll leave you
to check eapply, mapply, rapply, tapply.
So the issue is what is meant by 'the apply() family of functions': people
often mean *apply(), of which apply() is an unusual member, if one at all.
[Historical note: a decade ago lapply was internally a for() loop. I
rewrote it in C in 2000: I also moved apply to C at the same time but it
proved too little an advantage and was reverted. The speed of lapply
comes mainly from reduced memory allocation: for() is also written in C.]
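A quick sketch of the vectorization point (the matrix size is arbitrary, and timings will vary by machine):

```r
m <- matrix(rnorm(1e6), nrow = 1000)

# Same answer two ways: vectorized C code vs an R-level function call
# applied row by row
t_vec  <- system.time(r1 <- rowMeans(m))
t_loop <- system.time(r2 <- apply(m, 1, mean))

all.equal(r1, r2)   # TRUE: identical results
t_vec                # rowMeans is typically much faster
t_loop
```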
--
Brian D. Ripley, ripley@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 18
Date: Fri, 26 Dec 2008 08:55:33 +0000 (GMT)
From: Prof Brian Ripley <ripley@stats.ox.ac.uk>
Subject: Re: [R] Percent damage distribution
To: diegol <diegol81@gmail.com>
Cc: r-help@r-project.org
Message-ID: <alpine.LFD.2.00.0812260849000.4072@gannet.stats.ox.ac.uk>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Not an R question as yet .....
In my limited experience (we have some insurance projects), 100% can occur,
but otherwise a beta distribution may suit, which suggests a mixture
distribution. But start with an empirical examination (histogram, ecdf,
density plot) of the distribution, since it may reveal other features.
The next question is 'why model'? For such a simple problem (a
univariate distribution) a plot may be a sufficient analysis, and for e.g.
simulation you could just re-sample the data.
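A sketch of one such mixture: a point mass at 100% plus a beta distribution for partial losses. The mixing weight and beta parameters here are invented for illustration and would in practice be estimated from the data:

```r
set.seed(1)

rdamage <- function(n, p_total = 0.05, shape1 = 2, shape2 = 5) {
  # With probability p_total the loss is total (exactly 100% damage),
  # otherwise draw a partial damage fraction from a beta distribution
  total <- rbinom(n, size = 1, prob = p_total) == 1
  ifelse(total, 1, rbeta(n, shape1, shape2))
}

x <- rdamage(10000)
mean(x == 1)                                        # roughly p_total
hist(x, breaks = 50, main = "Simulated percent damage")
```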
On Thu, 25 Dec 2008, diegol wrote:
>
> R version: 2.7.0
> Running on: WinXP
>
> I am trying to model damage from fire losses (given that the loss
> occurred). Since I have the individual insured amounts, rather than
> sampling dollar damage from a continuous distribution ranging from 0 to
> infinity, I want to sample from a percent damage distribution from
> 0-100%. One obvious solution is to use runif(n, min=0, max=1), but this
> does not seem to be a good idea, since I would not expect damage to be
> uniform.
>
> I have not seen such a distribution in actuarial applications, and rather
> than inventing one from scratch I thought I'd ask you if you know one,
> maybe from other disciplines, readily available in R.
>
> Thank you in advance.
>
> -----
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
> Diego Mazzeo
> Actuarial Science Student
> Facultad de Ciencias Económicas
> Universidad de Buenos Aires
> Buenos Aires, Argentina
> --
> View this message in context:
> http://www.nabble.com/Percent-damage-distribution-tp21170344p21170344.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Brian D. Ripley, ripley@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 19
Date: Fri, 26 Dec 2008 1:08:40 -0800
From: <rkevinburton@charter.net>
Subject: [R] Upgrading R causes Tinn-R to freeze.
To: r-help@stat.math.ethz.ch
Message-ID: <20081226040840.0CUYA.1997811.root@mp18>
Content-Type: text/plain; charset=utf-8
I recently upgraded from 2.8.0 to 2.8.1 by first installing the 2.8.1 version
and then copying the binaries and the library to the 2.8.0 folder. Now Tinn-R
will not start up. I just see the start-up splash screen for a long period of
time. It seems frozen to me. Any guesses on what I did wrong in the upgrade?
Thank you.
Kevin
------------------------------
_______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 70, Issue 26
**************************************