laurie bayet <lauriebayet <at> gmail.com> writes:
>
> Hi,
>
> I want to set up a mixed model ANCOVA but cannot find a way to do it.
>
> There is:
>
> * 1 subject factor (random, between subjects) called Subject
> * 3 categorical within subjects factors called Emotion, Sex, Race
> * 1 continuous covariate (**WITHIN subjects**) called Score
> and
> * a continuous dependent variable called logRT
>
> I need a nice and clean table with p-values and effect sizes for each
> factor and relevant interaction.
>
> Which function should I use?
>
> I am guessing lmer from lme4 but could not find any example on the forums
> or in my manual from Gaël Millot.
>
> Here is a wild guess :
>
> ModelRT <- lmer(logRT ~ Race + Sex + Emotion + Score + Race*Sex +
> Race*Emotion + Sex*Emotion + Race*Sex*Emotion + (1 | Subject))
>
> Would that be correct ?
>
> Thank you,
>
> laurie
>
* This might be better on r-sig-mixed-models at r-project.org
* In R, '*' means "main effects plus all interactions" (':' denotes
an interaction only), so you can abbreviate your formula to
ModelRT <- lmer(logRT ~ Race*Sex*Emotion + Score + (1 | Subject))
or, using lme from the nlme package:
ModelRT <- lme(logRT ~ Race*Sex*Emotion + Score, random = ~1|Subject)
(your continuous covariate Score still enters as a separate main
effect; see the first sketch after this list)
* You should strongly consider passing an explicit 'data' argument
rather than picking up the variables from the workspace (the
sketches after this list all do this)
* See ?pvalues in lme4 for an overview of your options for getting
tables of p-values and effect sizes (e.g. with auxiliary functions
from the car, lmerTest, or pbkrtest packages; see the second sketch
after this list). Beware that lme will give you denominator degrees
of freedom, but they may well be miscalculated for your
within-subject continuous covariate
* You should strongly consider whether you need to include
among-subject variance in the within-subject factors in your model
[see the two refs below and the third sketch after this list]
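
A minimal sketch of the lmer call with an explicit 'data' argument,
assuming (hypothetically) that your variables live in a data frame
called 'dat' with columns logRT, Race, Sex, Emotion, Score, and
Subject:

library(lme4)

## full factorial in Race, Sex, Emotion plus the continuous
## covariate Score; random intercept for each subject
ModelRT <- lmer(logRT ~ Race*Sex*Emotion + Score + (1 | Subject),
                data = dat)
summary(ModelRT)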
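A hedged sketch of two common routes to a table of F tests and
p-values, both assuming the same (hypothetical) 'dat' as above; see
?pvalues for the trade-offs between them:

## Satterthwaite denominator df: lmerTest masks lme4's lmer(), so
## refit the model after loading it
library(lmerTest)
m <- lmer(logRT ~ Race*Sex*Emotion + Score + (1 | Subject), data = dat)
anova(m)                        ## F table with Satterthwaite ddf

## Kenward-Roger F tests via car (calls pbkrtest under the hood)
library(car)
Anova(m, test.statistic = "F")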
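And a sketch of the random-effects question raised in the last
bullet, in the spirit of Barr et al. (2013): the maximal model lets
all within-subject terms vary by subject, but it may well fail to
converge on a typical data set, in which case something reduced
(here, keeping only a random Score slope, as one hypothetical
compromise) is a common fallback:

## maximal: among-subject variation in all within-subject terms
m_max <- lmer(logRT ~ Race*Sex*Emotion + Score +
                  (1 + Race*Sex*Emotion + Score | Subject),
              data = dat)

## reduced fallback: random intercept plus a random Score slope
m_red <- lmer(logRT ~ Race*Sex*Emotion + Score +
                  (1 + Score | Subject),
              data = dat)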
@article{barr_random_2013,
title = {Random effects structure for confirmatory hypothesis testing: Keep
it maximal},
volume = {68},
issn = {0749-596X},
shorttitle = {Random effects structure for confirmatory hypothesis testing},
url = {http://www.sciencedirect.com/science/article/pii/S0749596X12001180},
doi = {10.1016/j.jml.2012.11.001},
abstract = {Linear mixed-effects models ({LMEMs}) have become increasingly
prominent in psycholinguistics and related areas. However, many researchers
do not seem to appreciate how random effects structures affect the
generalizability of an analysis. Here, we argue that researchers using
{LMEMs} for confirmatory hypothesis testing should minimally adhere to the
standards that have been in place for many decades. Through theoretical
arguments and Monte Carlo simulation, we show that {LMEMs} generalize best
when they include the maximal random effects structure justified by the
design. The generalization performance of {LMEMs} including data-driven
random effects structures strongly depends upon modeling criteria and sample
size, yielding reasonable results on moderately-sized samples when
conservative criteria are used, but with little or no power advantage over
maximal models. Finally, random-intercepts-only {LMEMs} used on
within-subjects and/or within-items data from populations where subjects
and/or items vary in their sensitivity to experimental manipulations always
generalize worse than separate F1 and F2 tests, and in many cases, even
worse than F1 alone. Maximal {LMEMs} should be the "gold standard" for
confirmatory hypothesis testing in psycholinguistics and beyond.},
number = {3},
urldate = {2013-09-26},
journal = {Journal of Memory and Language},
author = {Barr, Dale J. and Levy, Roger and Scheepers, Christoph and Tily,
Harry J.},
month = apr,
year = {2013},
keywords = {Generalization, Linear mixed-effects models, Monte Carlo
simulation, statistics},
pages = {255--278}
}
@article{schielzeth_conclusions_2009,
title = {Conclusions beyond support: overconfident estimates in mixed models},
volume = {20},
issn = {1045-2249, 1465-7279},
shorttitle = {Conclusions beyond support},
url = {http://beheco.oxfordjournals.org/content/20/2/416},
doi = {10.1093/beheco/arn145},
abstract = {Mixed-effect models are frequently used to control for the
nonindependence of data points, for example, when repeated measures from the
same individuals are available. The aim of these models is often to estimate
fixed effects and to test their significance. This is usually done by
including random intercepts, that is, intercepts that are allowed to vary
between individuals. The widespread belief is that this controls for all
types of pseudoreplication within individuals. Here we show that this is not
the case, if the aim is to estimate effects that vary within individuals and
individuals differ in their response to these effects. In these cases,
random intercept models give overconfident estimates leading to conclusions
that are not supported by the data. By allowing individuals to differ in the
slopes of their responses, it is possible to account for the nonindependence
of data points that pseudoreplicate slope information. Such random slope
models give appropriate standard errors and are easily implemented in
standard statistical software. Because random slope models are not always
used where they are essential, we suspect that many published findings have
too narrow confidence intervals and a substantially inflated type I error
rate. Besides reducing type I errors, random slope models have the potential
to reduce residual variance by accounting for between-individual variation
in slopes, which makes it easier to detect treatment effects that are
applied between individuals, hence reducing type {II} errors as well.},
language = {en},
number = {2},
urldate = {2012-07-27},
journal = {Behavioral Ecology},
author = {Schielzeth, Holger and Forstmeier, Wolfgang},
month = mar,
year = {2009},
keywords = {experimental design, maternal effects, mixed-effect models,
random regression, repeated measures, type I error},
pages = {416--420}
}