Dear Wolfgang Viechtbauer and list members:

I have discovered your "MiMa" function for fitting meta-analytic mixed-effects models through an earlier discussion on this list. I think it is extremely useful and fills an important gap. In particular, since it is programmed so transparently, it is easy to adapt to one's own needs. (For example, I found it easy to identify and adapt the few lines I had to change to make the function fit models without an intercept - something that is impossible with one of the commercial packages for meta-analysis.) I agree with Emmanuel Charpentier's suggestion that your function would be even more useful if it were more like lm or glm (some time in the future, perhaps).

For now, one question: how do I calculate the correct R-squared for models fitted with MiMa?

Thanks

Christian Gold
University of Bergen
www.uib.no/people/cgo022
Viechtbauer Wolfgang (STAT)
2007-Mar-12 11:23 UTC
[R] meta-regression, MiMa function, and R-squared
Dear All,

I am actually in the process of turning the mima function (with additional functions for predict, resid, and so on) into a full package. Making the syntax of the function more like that of lm would indeed be useful. However, for that I would have to familiarize myself more with the internals of R to understand how exactly I can make use of the formula syntax.

As for calculating (something like) R^2, there are essentially two approaches I can suggest. I assume you have a vector of effect size estimates "y", the corresponding vector of estimated sampling variances "v", and one or more moderator variables "x1" through "xp".

1) Fit the model containing x1 through xp with the mima function and let tau2 denote the estimate of residual heterogeneity from that model. Create a new variable "w <- 1/(v + tau2)". Note that the mima function does nothing else but fit the model with weighted least squares using those weights. So you could actually use "lm(y ~ x1 + ... + xp, weights=w)" and you should get exactly the same parameter estimates. Therefore, "summary(lm(y ~ x1 + ... + xp, weights=w))" will give you R^2. Note that this is the coefficient of determination for transformed data, whose meaning may not be entirely intuitive. See Willett, J. B., & Singer, J. D. (1988). Another cautionary note about R^2: Its use in weighted least-squares regression analysis. The American Statistician, 42(3), 236-238, for a nice discussion of this.

2) Another approach used in the meta-analytic context is this. First estimate the total amount of heterogeneity using a model without moderators (i.e., a random-effects model). Let that estimate be denoted by "tau2.tot". Next, fit the model with moderators and let the estimate of residual heterogeneity be denoted by "tau2.res". Then "(tau2.tot - tau2.res)/tau2.tot" is an estimate of the proportion of the total amount of heterogeneity that is accounted for by the moderators included in the model. This is an intuitive measure with an R^2 flavor to it, but I would not directly call it R^2.

Hope this helps,

--
Wolfgang Viechtbauer
Department of Methodology and Statistics
University of Maastricht, The Netherlands
http://www.wvbauer.com/
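[For concreteness, a minimal R sketch of the two approaches. It assumes the vectors y and v and two moderators x1 and x2 as described above, and it treats the heterogeneity estimates tau2 (from the model with moderators) and tau2.tot (from the random-effects model without moderators) as values read off the mima() output; how exactly mima() returns them is not shown here and is only an assumption.]

    ## Assumed inputs: y (effect sizes), v (sampling variances), x1, x2 (moderators),
    ## plus two heterogeneity estimates taken from the mima() output:
    ##   tau2     - residual heterogeneity from the model with x1 and x2
    ##   tau2.tot - total heterogeneity from the random-effects model (no moderators)

    ## Approach 1: R^2 from weighted least squares with the meta-analytic weights
    w   <- 1/(v + tau2)
    fit <- lm(y ~ x1 + x2, weights = w)
    summary(fit)$r.squared            # coefficient of determination for the transformed data

    ## Approach 2: proportion of total heterogeneity accounted for by the moderators
    tau2.res <- tau2
    (tau2.tot - tau2.res) / tau2.tot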
Dear Wolfgang

Thanks for your prompt and clear response concerning the R^2. You write:

> Note that the mima function does nothing else but fit the model with
> weighted least squares using those weights. So you could actually use
> "lm(y ~ x1 + ... + xp, weights=w)" and you should get exactly the same
> parameter estimates. Therefore, "summary(lm(y ~ x1 + ... + xp, weights=w))"
> will give you R^2.

Is this really true? I thought that "in weighted regression the /relative/ weights are assumed known whereas in meta-regression the /actual/ weights are assumed known" (Higgins & Thompson, 2004, "Controlling the risk of spurious findings from meta-regression", Statistics in Medicine, 23, p. 1665).

Also, I calculated my regression problem with lm using inverse variance weights before I discovered your function, and have now compared the results. The regression coefficient was the same, but the confidence interval was wider with mima. Furthermore, the CI from mima depended on the absolute size of the weights (as I assume it should), whereas with lm it did not. Can you explain?

Thanks

Christian
Viechtbauer Wolfgang (STAT)
2007-Mar-12 14:36 UTC
[R] meta-regression, MiMa function, and R-squared
Yes, there is indeed a slight difference. The models fitted by lm() using the weights option (and this is the same in essentially all other software) assume that the weights are known only up to a proportionality constant. The parameter estimates will be exactly the same, but the standard errors of the estimates will differ by exactly that constant. If you divide the standard errors that you get from lm() with the weights option by the residual standard error, then you get exactly the same standard errors as those given by the mima() function.

Fortunately, that multiplicative constant has no bearing on the value of R^2. You can see this by using "lm(y ~ x1 + ... + xp, weights=w*10)": the value of R^2 is unchanged.

Best,

--
Wolfgang Viechtbauer
Department of Methodology and Statistics
University of Maastricht, The Netherlands
http://www.wvbauer.com/
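[A small numerical check of both points, continuing the sketch above; w, x1, x2 and y are the names from that illustration, not objects produced by mima() itself.]

    ## The standard errors from lm() with weights are only correct up to the
    ## residual standard error; dividing by summary(fit)$sigma recovers the
    ## meta-analytic standard errors that mima() reports.
    fit <- lm(y ~ x1 + x2, weights = w)
    coef(summary(fit))[, "Std. Error"] / summary(fit)$sigma

    ## Rescaling the weights by any positive constant leaves R^2 unchanged.
    summary(lm(y ~ x1 + x2, weights = w))$r.squared
    summary(lm(y ~ x1 + x2, weights = w * 10))$r.squared   # same value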