Grathwohl, Dominik, LAUSANNE, NRC-BAS
2006-Jul-25 10:48 UTC
[R] Multiple tests on repeated measurements
Dear R-helpers:
My question is how to correct efficiently and validly for multiple tests in a
repeated-measures design:
Suppose we measure a treatment difference on the same variable at two distinct
visits on the same subjects.
The treatment differences are assessed with a mixed model and adjusted for
multiple testing by two methods:
# Method 1: adjustment with the multcomp package
library(nlme)
library(multcomp)
n <- 30 # number of subjects
sd1 <- 0.5 # Standard deviation of the random intercept
sd2 <- 0.8 # Standard deviation of the residuals
id  <- rep(1:n, times = 2)                              # subject id, two visits each
v   <- rep(0:1, each = n)                               # visit indicator
trt <- rep(sample(rep(0:1, each = n/2), n), times = 2)  # randomised treatment group
df <- data.frame(id, v, trt,
                 y = 2 + rep(rnorm(n, 0, sd1), times = 2) +   # one random intercept per subject
                     0.5*v + 0.7*trt + 0.2*v*trt + rnorm(2*n, 0, sd2))
m1 <- lme(y ~ v * trt, data = df, random = ~ 1 | id)
summary(m1)
par4 <- m1$coef$fixed   # estimated fixed effects
cov4 <- vcov(m1)        # their covariance matrix
cm4 <- matrix(c(0, 0, 1, 0,
                0, 0, 1, 1), nrow = 2, ncol = 4, byrow = TRUE,
              dimnames = list(c("diff/v=0", "diff/v=1"),
                              c("C.1", "C.2", "C.3", "C.4")))
v4 <- csimint(estpar = par4, df = n - 6,  # I'm not sure whether I found
                                          # the correct degrees of freedom
              covm = cov4,
              cmatrix = cm4, conf.level = 0.95)
sv4 <- summary(v4)
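# A quick way to check the degrees-of-freedom question: lme() itself reports
# the denominator df it uses for the fixed-effect t-tests, which can be
# compared with the df = n - 6 guess above.
summary(m1)$tTable   # has a "DF" column
m1$fixDF$X           # denominator df per fixed-effect coefficient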
# Method 2: Bonferroni for correlated outcomes, which I found in
# Handbook of Statistics, Vol. 13, p. 616; the same formula is described at
# http://home.clara.net/sisa/bonhlp.htm
raw.p <- sv4$p.value.raw
co4   <- cor(df$y[df$v == 0], df$y[df$v == 1])  # correlation between the two visits
rho   <- mean(c(1, co4, co4, 1))                # mean of the 2 x 2 correlation matrix
pai   <- 1 - (1 - raw.p)^(2^(1 - rho))          # 2^(1 - rho) acts as the effective number of tests
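# For intuition, a small worked example with made-up numbers: with rho = 0.5
# the effective number of tests 2^(1 - rho) is about 1.41, so the correction
# is milder than plain Bonferroni; e.g. for a raw p-value of 0.03:
1 - (1 - 0.03)^(2^(1 - 0.5))   # about 0.042, versus 2 * 0.03 = 0.06 for plain Bonferroni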
# The results of the two methods, side by side:
out <- cbind(raw.p, sv4$p.value.bon, sv4$p.value.adj, pai)
colnames(out) <- c("raw.p", "bon.p", "multcomp.p", "bon.cor.p")
out
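# A possible cross-check (only a sketch: it assumes a multcomp version that
# provides the newer glht() interface, which accepts the lme fit and the
# contrast matrix directly and adjusts using the joint distribution of the
# two contrast estimates):
g1 <- glht(m1, linfct = cm4)
summary(g1)   # single-step adjusted p-values
confint(g1)   # simultaneous 95% confidence intervals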
As you can see, there are quite big differences between the two ways of
adjusting for multiple tests on repeated measurements.
I suspect that the multcomp package is not appropriate for this kind of
hypothesis, but I could not find an explanation in the help files.
Maybe one of the experts can point me in the right direction?
Kind regards,
Dominik
platform i386-pc-mingw32
arch i386
os mingw32
system i386, mingw32
status
major 2
minor 2.1
year 2005
month 12
day 20
svn rev 36812
language R
Spencer Graves
2006-Aug-02 08:25 UTC
[R] Correlation adjusted Bonferroni? (was: Multiple tests on repeated measurements)
I'm not familiar with the correlation adjustment to Bonferroni you
mention below, though it sounds interesting. However, I think there is
something not right about it or about how you have interpreted it. Your
code produced the following for me:
               raw.p       bon.p  multcomp.p   bon.cor.p
diff/v=0 0.028572509 0.057145019 0.054951102 0.034934913
diff/v=1 0.001727993 0.003455987 0.003415545 0.002119276
(raw.p = p.value.raw, bon.p = p.value.bon, multcomp.p = p.value.adj, bon.cor.p = pai)
In the absence of other information, I'd be inclined to believe
csimint(..)$p.value.adj or ..$p.value.bon over your "bon.cor.p".
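As far as I understand it, the csimint() adjustment is based on the joint
(multivariate t) distribution of the two contrast estimates, so it uses the
correlation of the estimates rather than the correlation of the raw outcomes.
A rough sketch of that calculation, reusing your contrast matrix and the same
(debatable) df, with the mvtnorm package:
library(mvtnorm)
Vc    <- cm4 %*% cov4 %*% t(cm4)              # covariance of the two contrasts
Rc    <- cov2cor(Vc)                          # correlation of the contrast estimates
tstat <- abs(cm4 %*% par4) / sqrt(diag(Vc))   # t statistics for the contrasts
p.ss  <- sapply(tstat, function(t0)           # single-step adjusted p-values
           1 - pmvt(lower = rep(-t0, 2), upper = rep(t0, 2),
                    df = n - 6, corr = Rc))
p.ss   # should, if I read csimint() correctly, be close to multcomp.p above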
Hope this helps.
Spencer Graves
Grathwohl, Dominik, LAUSANNE, NRC-BAS wrote:
> [original message quoted in full; snipped -- see above]