I missed the earlier thread, but if I had the data and not just
the table of coefficients and standard errors, then I'd try combining
the data sets as follows:
> set.seed(1)
> df1 <- data.frame(x=1:3, y=1:3+rnorm(3))
> df2 <- data.frame(x=1:3, y=1:3+rnorm(3))
> DF. <- data.frame(source=rep(c(-1,1), each=3), rbind(df1, df2))
>
> anova(lm(y~x+I(source*x), DF.))
Analysis of Variance Table
Response: y
              Df  Sum Sq Mean Sq F value Pr(>F)
x              1 0.47271 0.47271  0.5696 0.5053
I(source * x)  1 0.23386 0.23386  0.2818 0.6323
Residuals      3 2.48964 0.82988
>
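The I(source * x) row is the test that the two slopes are equal (note that this model keeps a common intercept for both data sets). If it is easier to read as a nested-model comparison, something along these lines should give the same F test (fit0 and fit1 are just names I am using for the sketch, not from the thread):

fit0 <- lm(y ~ x, DF.)                 # one common slope
fit1 <- lm(y ~ x + I(source*x), DF.)   # slope allowed to differ by source
anova(fit0, fit1)                      # should reproduce the I(source * x) row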
hope this helps. spencer graves
Joerg Schaber wrote:
> I would suggest the method of Sokal and Rohlf (1995), p. 498, using the
> F test.
> Below I repeat the analysis by Spencer Graves:
>
> Spencer:
> > df1 <- data.frame(x=1:3, y=1:3+rnorm(3))
> > df2 <- data.frame(x=1:3, y=1:3+rnorm(3))
> > fit1 <- lm(y~x, df1)
> > s1 <- summary(fit1)$coefficients
> > fit2 <- lm(y~x, df2)
> > s2 <- summary(fit2)$coefficients
> > db <- (s2[2,1]-s1[2,1])
> > sd <- sqrt(s2[2,2]^2+s1[2,2]^2)
> > df <- (fit1$df.residual+fit2$df.residual)
> > td <- db/sd
> > 2*pt(-abs(td), df)
> [1] 0.8757552
>
> Sokal & Rohlf:
> > n <- length(df1$x)
> > ssx1 <- var(df1$x)*(n-1) # sums of squares
> > ssx2 <- var(df2$x)*(n-1)
> > ssy1 <- var(df1$y)*(n-1)
> > ssy2 <- var(df2$y)*(n-1)
> > sxy1 <- cov(df1$x,df1$y)*(n-1) # sums of cross-products
> > sxy2 <- cov(df2$x,df2$y)*(n-1)
> > d2xy1 <- ssy1 - sxy1^2/ssx1 # unexplained
> > d2xy2 <- ssy2 - sxy2^2/ssx2
> > Fs <- db^2/((ssx1+ssx2)/(ssx1*ssx2)*(d2xy1+d2xy2)/(2*n-4)) # F statistic
> > 1-pf(Fs,1,2*n-4)
> [1] 0.8827102
>
> Slight differences here, but they can be MUCH larger with more data!
>
> greetings,
>
> joerg
>
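For what it's worth, the Sokal & Rohlf calculation quoted above can be collected into one small function. This is only a sketch (compareSlopes is my own name for it, and it uses cov() for the corrected sums of cross-products); note that this F test pools the two residual variances, whereas the t test quoted earlier combines each fit's own standard error, which I believe is where the small discrepancy between the two p-values comes from:

compareSlopes <- function(d1, d2) {
  n1 <- nrow(d1); n2 <- nrow(d2)
  ssx1 <- var(d1$x)*(n1-1);  ssx2 <- var(d2$x)*(n2-1)  # corrected SS of x
  ssy1 <- var(d1$y)*(n1-1);  ssy2 <- var(d2$y)*(n2-1)  # corrected SS of y
  sxy1 <- cov(d1$x, d1$y)*(n1-1)                       # corrected sums of
  sxy2 <- cov(d2$x, d2$y)*(n2-1)                       #   cross-products
  b1 <- sxy1/ssx1;  b2 <- sxy2/ssx2                    # the two slopes
  d2xy1 <- ssy1 - sxy1^2/ssx1                          # unexplained SS
  d2xy2 <- ssy2 - sxy2^2/ssx2
  df <- n1 + n2 - 4
  Fs <- (b1 - b2)^2 /
        ((ssx1 + ssx2)/(ssx1*ssx2) * (d2xy1 + d2xy2)/df)
  c(F = Fs, df1 = 1, df2 = df, p.value = 1 - pf(Fs, 1, df))
}
compareSlopes(df1, df2)   # e.g., with df1 and df2 as defined above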