Dear all,

I'm trying to store/extract the mean & standard error of the fixed-effects parameters and the variance of the random-effects parameters from the "lmer" procedure in the mlmre4 package developed by Bates and Pinheiro. While storing the fixed-effects parameters is straightforward, the same is not true for storing the variance parameters of the random effects. Kindly help me.

~prabhu

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

You need the VarCorr function. I think you mean that lmer is in the Matrix package.

> -----Original Message-----
> From: r-help-bounces at stat.math.ethz.ch
> [mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of prabhu bhaga
> Sent: Tuesday, July 11, 2006 11:19 AM
> To: r-help at stat.math.ethz.ch
> Subject: [R] storing the estimates from lmer

On 7/11/06, Doran, Harold <HDoran at air.org> wrote:
> You need the VarCorr function. I think you mean that lmer is in the
> Matrix package.

Currently lmer is in the Matrix package. The plan is to move it back to the lme4 package when inter-package linking has been added to R - perhaps as early as the release of R-2.4.0. Because the lme4 package automatically loads the Matrix package, it is safest to act as if lmer actually resided in the lme4 package and use

library(lme4)
fm1 <- lmer(...

The structure of 'lmer' objects and helper functions is outlined in the 'lmer' and 'lmer-class' help pages. The latter mentions "'vcov' signature(object = "mer"): Calculate variance-covariance matrix of the _fixed_ effect terms, see also 'vcov'." Thus, sqrt(diag(vcov(lmer.object))) should give you standard errors of the fixed effects.

The parameter estimates can be obtained using 'VarCorr'. However, characterizing their random variability is harder, because their distribution is not as simply summarized. The 'lmer-class' help page says that an 'lmer' object includes a slot, "'Omega': A list of positive-definite matrices stored as '"dpoMatrix"' objects that are the relative precision matrices of the random effects associated with each of the grouping factors." However, I don't know how to use this. For problems like this, the 'lme4' and 'Matrix' packages include a function 'simulate', which is what I might use, at least until I figured out how to get essentially the same answer from the Omega slot.

Hope this helps,
Spencer Graves

p.s. I intended to include the following extension to an example from the 'lmer' help page:

fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
(fm1.fix <- fixef(fm1))
(fm1.fix.se <- sqrt(diag(vcov(fm1))))
fm1.fix/fm1.fix.se

fm1.ran <- VarCorr(fm1)
diag(fm1.ran$Subject)

Hi Spencer,

On 7/15/06, Spencer Graves <spencer.graves at pdf.com> wrote:
> p.s. I intended to include the following extension to an example from
> the 'lmer' help page:
>
> fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> (fm1.fix <- fixef(fm1))
> (fm1.fix.se <- sqrt(diag(vcov(fm1))))
> fm1.fix/fm1.fix.se
>
> fm1.ran <- VarCorr(fm1)
> diag(fm1.ran$Subject)

I'm confident that you are aware that sqrt(diag(vcov(fm1))) and sqrt(diag(fm1.ran$Subject)) refer to different parameters in the model. However, some readers of your reply may not see the distinction. Allow me to try to clarify.

The summary for this fitted model is

> (fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy))
Linear mixed-effects model fit by REML
Formula: Reaction ~ Days + (Days | Subject)
   Data: sleepstudy
      AIC      BIC    logLik MLdeviance REMLdeviance
 1753.628 1769.593 -871.8141   1751.986     1743.628
Random effects:
 Groups   Name        Variance Std.Dev. Corr
 Subject  (Intercept) 612.090  24.7405
          Days         35.072   5.9221  0.066
 Residual             654.941  25.5918
number of obs: 180, groups: Subject, 18

Fixed effects:
            Estimate Std. Error t value
(Intercept) 251.4051     6.8246  36.838
Days         10.4673     1.5458   6.771

Correlation of Fixed Effects:
     (Intr)
Days -0.138

The fixed effects described in the lower part of this summary are a familiar type of parameter in a statistical model. They are coefficients in a linear predictor. We produce estimates of these parameters and also provide a measure of the precision of these estimates - their standard errors. The vcov generic function returns an estimate of the precision of the estimated parameters (typically parameters that are coefficients in a linear predictor). Thus

> sqrt(diag(vcov(fm1)))
[1] 6.824558 1.545789

provides the standard errors of the fixed-effects estimates.

The random effects, although they also appear in the linear predictor, are not formally parameters in the model. They are unobserved random variables whose variance-covariance matrix has a known form but unknown value. It is the values that determine the variance-covariance matrix that are parameters in the model. These parameters are returned by VarCorr

> VarCorr(fm1)$Subject
2 x 2 Matrix of class "dpoMatrix"
            (Intercept)     Days
(Intercept)   612.09032  9.60428
Days            9.60428 35.07165
attr(,"sc")
[1] 25.59182

In other words, the 2 x 2 matrix shown above is the estimate of the variance-covariance matrix of the random effects associated with the grouping factor "Subject". Thus

> sqrt(diag(VarCorr(fm1)$Subject))
[1] 24.740459  5.922132

gives the estimated standard deviations of the random effects. These are estimates of parameters in the model. They are *not* standard errors of parameter estimates.

The lmer function and related software does not return standard errors of the estimates of variance components. This is intentional. Below I give my familiar rant on why I think returning such standard errors is a bad practice. I encourage users of lmer who wish to determine the precision of the estimates of the variance components to create a Markov chain Monte Carlo sample of the parameters and evaluate the HPD intervals.

> sm1 <- mcmcsamp(fm1, 50000)
> library(coda)
Warning message:
use of NULL environment is deprecated
> HPDinterval(sm1)
                         lower        upper
(Intercept)        236.6518363  266.5465536
Days                 7.0136243   13.8947993
log(sigma^2)         6.2550082    6.7295329
log(Sbjc.(In))       5.4928205    7.5751372
log(Sbjc.Days)       2.8197523    4.6337518
atanh(Sbj.(I).Dys)  -0.6988632    0.9836688
deviance          1752.5158501 1766.6461469
attr(,"Probability")
[1] 0.95

<rant>
Some software, notably SAS PROC MIXED, does produce standard errors for the estimates of variances and covariances of random effects. In my opinion this is more harmful than helpful. The only use I can imagine for such standard errors is to form confidence intervals or to evaluate a z-statistic or something like that to be used in a hypothesis test. However, those uses require that the distribution of the parameter estimate be symmetric, or at least approximately symmetric, and we know that the distribution of the estimate of a variance component is more like a scaled chi-squared distribution, which is anything but symmetric. It is misleading to attempt to summarize our information about a variance component by giving only the estimate and a standard error of the estimate.
</rant>
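[Editor's sketch, not part of the original thread: the MCMC sample above can also be used for quantities derived from the fixed effects, such as an arbitrary contrast. The code below assumes the fm1 and sm1 objects from the message above, and that the sample's fixed-effects columns are named "(Intercept)" and "Days" as in the HPDinterval output; the mcmcsamp interface has changed in later lme4 versions, so treat this as illustrative only.]

```r
## Sketch: an approximate 95% interval for a fixed-effects contrast,
## here the predicted mean reaction after 5 days of sleep deprivation,
## computed from the MCMC sample sm1 produced above.
## (Assumes sm1's columns include "(Intercept)" and "Days", as in the
## HPDinterval output; hypothetical against the 2006 lme4 API.)
samp <- as.matrix(sm1)[, c("(Intercept)", "Days")]
ctr  <- c(1, 5)                     # contrast: intercept + 5 * Days
cval <- samp %*% ctr                # one contrast value per MCMC draw
quantile(cval, c(0.025, 0.975))     # equal-tail 95% interval
```

The same idea works for any linear combination of the sampled parameters: evaluate it draw by draw, then summarize the resulting sample with quantile or HPDinterval.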

Douglas Bates <bates <at> stat.wisc.edu> writes:
.....
> I encourage users of lmer who wish to determine the precision of the
> estimates of the variance components to create a Markov chain Monte
> Carlo sample of the parameters and evaluate the HPD intervals.
>
> > sm1 <- mcmcsamp(fm1, 50000)
> > library(coda)
> > HPDinterval(sm1)

And DB wrote in http://finzi.psych.upenn.edu/R/Rhelp02a/archive/76742.html

"Evaluating entire terms is more difficult but you can always calculate the F ratio and put a lower bound on the denominator degrees of freedom."

As an MCMC-challenged subject, it would be nice if someone could provide an example for this, or show how to get CI estimates for an arbitrary contrast with mcmcsamp.

Dieter

On 7/15/06, Douglas Bates <bates at stat.wisc.edu> wrote:
> Hi Spencer,
[....]
> <rant>
> Some software, notably SAS PROC MIXED, does produce standard errors
> for the estimates of variances and covariances of random effects. In
> my opinion this is more harmful than helpful. The only use I can
> imagine for such standard errors is to form confidence intervals or to
> evaluate a z-statistic or something like that to be used in a
> hypothesis test. However, those uses require that the distribution of
> the parameter estimate be symmetric, or at least approximately
> symmetric, and we know that the distribution of the estimate of a
> variance component is more like a scaled chi-squared distribution
> which is anything but symmetric.

You should add ... "when the true value of the variance is (close to) zero", I guess. Or does standard asymptotic ML theory not apply to these models? BTW, what is a "scaled chi-squared distribution"?

Göran

On 7/17/06, Göran Broström <goran.brostrom at gmail.com> wrote:
> On 7/15/06, Douglas Bates <bates at stat.wisc.edu> wrote:
> [....]
>
> You should add ..."when the true value of the variance is (close to)
> zero", I guess. Or does standard asymptotic ML theory not apply to
> these models? BTW, what is a "scaled chi-squared distribution"?

Consider a simple case of an iid sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. In that case $(n-1)s^2/\sigma^2$ has a $\chi^2$ distribution with $n-1$ degrees of freedom, so the sample variance $s^2$ is a scaled chi-squared variable. (Either that or I have been seriously misinforming my intro statistics classes for several years now.) That's all I meant by a "scaled chi-squared distribution".

All I am claiming here is that estimates of other variance components in more complicated models have a similar behavior, not exactly this behavior. The point is that they would not be expected to have nice, symmetric distributions that can be characterized by the estimate and a standard error of the estimate.

If you create a Markov chain Monte Carlo sample from a fitted lmer object you generally find that the logarithm of a variance component has a posterior distribution that is close to symmetric. Depending on how precisely the variance component is estimated, the distribution of the variance component itself can be far from symmetric.

If it still seems that I am stating things too loosely then perhaps we could correspond off-list and I could try to explain more clearly what I am claiming.
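[Editor's sketch, not part of the original thread: the skewness of a variance estimate, and its near-symmetry on the log scale, are easy to see by simulation in the simple iid-normal case discussed above. Only base R is used.]

```r
## For iid N(mu, sigma^2) data, (n-1) s^2 / sigma^2 ~ chi-squared(n-1),
## so s^2 itself has a right-skewed, scaled chi-squared distribution.
set.seed(123)
n      <- 10
sigma2 <- 4
s2 <- replicate(10000, var(rnorm(n, mean = 0, sd = sqrt(sigma2))))
mean(s2)           # unbiased: close to sigma2 = 4
mean(s2 > sigma2)  # noticeably below 1/2: the distribution is right-skewed
hist(log(s2))      # much closer to symmetric on the log scale
```

This is the same qualitative behavior seen in the MCMC output earlier in the thread, where log(sigma^2) and the log variance components, rather than the variances themselves, have roughly symmetric posterior distributions.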