I suggest this discussion be moved to the R-SIG-mixed-models mailing
list which I am cc:ing on this reply. Please delete the R-help
mailing list from replies to this message.
On Jan 16, 2008 11:44 AM, Feldman, Tracy <tsfeldman at noble.org> wrote:
> Dear All,
>
>
>
> I used lmer for data with non-normally distributed error and both fixed
> and random effects. I tried to calculate a "Type III" sums of squares
> result by conducting likelihood ratio tests of the full model against
> a model reduced by one variable at a time (for each variable
> separately). These tests gave appropriate degrees of freedom for each of
> the two fixed effects, but when I left out one of two random effects
> (each random effect is a categorical variable with 5 and 8 levels,
> respectively) and tested that reduced model against the full model, the
> output showed that the test degrees of freedom = 1, which was incorrect.
Why is that incorrect? The degrees of freedom for a likelihood ratio
test are usually defined as the difference in the number of parameters,
and random effects are not parameters. They are an unobserved level
of random variation. The parameter associated with the random effects
is, in simple cases, the variance of the random effects.
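
To make this concrete, here is a minimal sketch of the comparison, using
hypothetical data and variable names ('mydata', 'y', 'trt',
'spatialBlock', 'temporalBlock'). Dropping one random intercept term
removes exactly one parameter, its variance, no matter how many levels
the grouping factor has:

library(lme4)

## Fit by maximum likelihood (REML = FALSE) when comparing models with
## likelihood ratio tests.  For a non-normal response the same logic
## applies to the corresponding generalized linear mixed model.
full    <- lmer(y ~ trt + (1 | spatialBlock) + (1 | temporalBlock),
                data = mydata, REML = FALSE)
reduced <- lmer(y ~ trt + (1 | spatialBlock),
                data = mydata, REML = FALSE)

## The Df column of the comparison shows 1: the only parameter removed
## from the full model is the variance of the temporalBlock random
## effect, not its individual levels.
anova(reduced, full)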
> Since I used an experimental design with spatial and temporal "blocks",
> where I repeated the same experiment several times, with different
> treatments in each spatial block each time (and with different
> combinations of individuals in each treatment), I am now thinking that I
> should leave the random effects in the model no matter what (and only
> test for fixed effects). This leaves me with three related questions:
> 1. Why do Likelihood Ratio Tests of a full model against a model
> with one less random effect report the incorrect degrees of freedom?
You are more likely to get helpful responses if you avoid value
judgements in your questions.
> Are such tests treating each random variable as one combined entity? I
> can provide code and data if this would help.
>
>
>
> 2. In a publication, is it reasonable to report that I only tested
> models that included random effects? Do I need to report results of a
> test of significance of these random effects (i.e., I am not sure how or
> if I should include any information about the random effects in my
> "ANOVA-type" tables)?
>
>
>
> 3. If I should test for the significance of random effects, per se
> (and report these), is it more appropriate to simply fit models with and
> without random effects to see if the pattern of fixed effects is
> different? I can look at random effects using "ranef(model_name)", but
> this function does not assess their significance.
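
For what it is worth, ranef() extracts the conditional modes (BLUPs) of
the random effects; these are useful for inspection even though, as you
note, they come with no significance test. A brief illustration,
continuing the hypothetical 'full' fit sketched above:

re <- ranef(full)   # a list with one data frame per grouping factor
str(re)

## A caterpillar plot of the conditional modes with approximate
## prediction intervals (requires the lattice package):
library(lattice)
dotplot(ranef(full, condVar = TRUE))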
>
>
>
> I am not subscribed to this list, so if possible, please reply to me
> directly at tsfeldman at noble.org. Thank you for your time and help.
>
>
>
> Sincerely,
>
>
>
> Tracy Feldman