Ulrich Keller
2006-Jul-18 18:15 UTC
[R] Test for equality of coefficients in multivariate multiple regression
Hello,

suppose I have a multivariate multiple regression model such as the following:

> DF <- data.frame(x1=rep(c(0,1), each=50), x2=rep(c(0,1), 50))
> tmp <- rnorm(100)
> DF$y1 <- tmp + DF$x1*.5 + DF$x2*.3 + rnorm(100, 0, .5)
> DF$y2 <- tmp + DF$x1*.5 + DF$x2*.7 + rnorm(100, 0, .5)
> x.mlm <- lm(cbind(y1, y2) ~ x1 + x2, data=DF)
> coef(x.mlm)
                    y1        y2
(Intercept) 0.07800993 0.2303557
x1          0.52936947 0.3728513
x2          0.13853332 0.4604842

How can I test whether x1 and x2 respectively have the same effect on y1 and
y2? In other words, how can I test whether coef(x.mlm)[2,1] is statistically
equal to coef(x.mlm)[2,2] and coef(x.mlm)[3,1] to coef(x.mlm)[3,2]? I looked
at linear.hypothesis {car} and glh.test {gmodels}, but these do not seem to
apply to multivariate models.

Thank you in advance,

Uli Keller
John Fox
2006-Jul-18 19:33 UTC
[R] Test for equality of coefficients in multivariate multiple regression
Dear Ulrich,

I'll look into generalizing linear.hypothesis() so that it handles
multivariate linear models. Meanwhile, vcov(x.mlm) will give you the
covariance matrix of the coefficients, so you could construct your own test
by ravelling coef(x.mlm) into a vector.

I hope that this helps,
 John

--------------------------------
John Fox
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox/
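One way to carry out the "construct your own test" step is a Wald-type
general linear hypothesis. The following is a minimal sketch, assuming that
vcov(x.mlm) returns the full 6 x 6 covariance matrix of the ravelled
coefficients in the same order as c(coef(x.mlm)) (all y1 coefficients first,
then all y2 coefficients); check dimnames(vcov(x.mlm)) before relying on
that. The contrast matrix L encodes the two equality hypotheses from the
original question.

## ravelled coefficients: (Int.y1, x1.y1, x2.y1, Int.y2, x1.y2, x2.y2)
b <- c(coef(x.mlm))
## full covariance matrix of b (assumed to share the ordering of b)
V <- vcov(x.mlm)
## row 1: x1 effect on y1 minus x1 effect on y2
## row 2: x2 effect on y1 minus x2 effect on y2
L <- rbind(c(0, 1, 0, 0, -1, 0),
           c(0, 0, 1, 0, 0, -1))
est <- L %*% b                                          # estimated differences
W   <- drop(t(est) %*% solve(L %*% V %*% t(L)) %*% est) # Wald statistic
pchisq(W, df = nrow(L), lower.tail = FALSE)             # chi-square p-value

The statistic is referred to a chi-square distribution with as many degrees
of freedom as L has rows; this is an asymptotic approximation rather than an
exact small-sample test.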
Andrew Robinson
2006-Jul-19 01:51 UTC
[R] Test for equality of coefficients in multivariate multiple regression
Hi Uli,

I suggest that you try to rewrite the model system into a single
mixed-effects model, which would allow direct parameterization of the tests
that you're interested in.

A useful article, which may be overkill for your needs, is:

Hall, D.B. and Clutter, M. (2004). Multivariate multilevel nonlinear mixed
effects models for timber yield predictions. Biometrics, 60: 16-24.

See the publications link on Daniel Hall's website:
http://www.stat.uga.edu/~dhall/

Cheers

Andrew

--
Andrew Robinson
Department of Mathematics and Statistics      Tel: +61-3-8344-9763
University of Melbourne, VIC 3010 Australia   Fax: +61-3-8344-4599
Email: a.robinson at ms.unimelb.edu.au         http://www.ms.unimelb.edu.au
Ulrich Keller
2006-Jul-19 10:09 UTC
[R] Test for equality of coefficients in multivariate multiple regression
Hello and thank you for your answers, Andrew and Berwin. If I'm not mistaken,
the mixed-model version of Berwin's approach would be:

#My stuff:
DF <- data.frame(x1=rep(c(0,1), each=50), x2=rep(c(0,1), 50))
tmp <- rnorm(100)
DF$y1 <- tmp + DF$x1*.5 + DF$x2*.3 + rnorm(100, 0, .5)
DF$y2 <- tmp + DF$x1*.5 + DF$x2*.7 + rnorm(100, 0, .5)
x.mlm <- lm(cbind(y1, y2) ~ x1 + x2, data=DF)

#1st part of Andrew's suggestion:
DF2 <- with(DF, data.frame(y=c(y1, y2)))
DF2$x11 <- with(DF, c(x1, rep(0, 100)))
DF2$x21 <- with(DF, c(x2, rep(0, 100)))
DF2$x12 <- with(DF, c(rep(0, 100), x1))
DF2$x22 <- with(DF, c(rep(0, 100), x2))
DF2$x1  <- with(DF, c(x1, x1))
DF2$wh  <- rep(c(0, 1), each=100)

#Mixed version of models:
> DF2$unit <- rep(c(1:100), 2)
> library(nlme)
> mm1 <- lme(y ~ wh + x11 + x21 + x12 + x22, random = ~1 | unit, DF2, method="ML")
> fixef(mm1)
(Intercept)          wh         x11         x21         x12         x22
 0.07800993  0.15234579  0.52936947  0.13853332  0.37285132  0.46048418
> coef(x.mlm)
                    y1        y2
(Intercept) 0.07800993 0.2303557
x1          0.52936947 0.3728513
x2          0.13853332 0.4604842
> mm2 <- update(mm1, y ~ wh + x1 + x12 + x22)
> anova(mm1, mm2)
    Model df      AIC      BIC    logLik   Test  L.Ratio p-value
mm1     1  8 523.6284 550.0149 -253.8142
mm2     2  7 522.0173 545.1055 -254.0086 1 vs 2 0.388908  0.5329

This seems to be correct. What anova() tells me is that the effect of x1 is
the same for y1 and y2. What I don't understand then is why the coefficients
for x12 and x22 differ so much between mm1 and mm2:

> fixef(mm2)
(Intercept)          wh          x1         x12         x22
  0.1472766   0.1384474   0.5293695  -0.1565182   0.3497476
> fixef(mm1)
(Intercept)          wh         x11         x21         x12         x22
 0.07800993  0.15234579  0.52936947  0.13853332  0.37285132  0.46048418

Sorry for being a bit slow here, I'm (obviously) not a statistician.

Thanks again,

Uli

Andrew Robinson wrote:
> G'day Berwin,
>
> my major reason for preferring the mixed-effect approach is that, as
> you can see below, the residual df for your models are 195 and 194,
> respectively. The 100 units are each contributing two degrees of
> freedom to your inference, and presumably to your estimate of the
> variance. I feel a little squeamish about that, because it's not
> clear to me that they can be assumed to be independent. I worry about
> the effects on the size of the test. With a mixed-effects model each
> unit could be a cluster of two observations, and I would guess the
> size would be closer to nominal, if not nominal.
>
> Cheers
>
> Andrew