Dear Michael,
Thank you for your reply. In the example at that link, the estimates are not
exactly identical, but they are very close. In my case, however, there is a
substantial difference between the estimates of the overall effect.
For example, I want to estimate the effect size of different mental health
disorders, starting with substance use disorder (SUD), using a three-level
random-effects model.

In the first approach, I fitted an intercept-only model on a subset of the
data containing only the SUD effect sizes. The overall effect size of SUD is
d = 0.185 (SE = 0.085), p = .033.

In the second approach, I fitted a model on the full data set with the
categorical moderator 'disorder' (SUD, DBD, ADHD). SUD is the reference group,
and I added the dummy predictors DBD (coded '1' for DBD, '0' for SUD and ADHD)
and ADHD (coded '1' for ADHD, '0' for SUD and DBD). The mean effect size of SUD
(the intercept) is d = 0.300 (SE = 0.104), p = .005.

To conclude, there is a substantial difference between the two estimates of the
SUD effect size (d = 0.185 versus d = 0.300). The fitting calls are sketched
below, followed by the output of both models.
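In outline, the two models were fitted roughly as follows (a sketch only:
'd' and 'var_d' stand in for the actual effect-size and sampling-variance
columns in my data, 'dat' for the data frame, and 'DBD'/'ADHD' are the 0/1
dummies described above):

library(metafor)

# Approach 1: intercept-only three-level model, SUD studies only
# ('y' = effect-size identifier, 'ID' = study identifier, as in the output below)
sud <- rma.mv(d, var_d,
              random = list(~ 1 | y, ~ 1 | ID),
              data = subset(dat, disorder == "SUD"),
              method = "REML", test = "t")

# Approach 2: full data set, 'disorder' as a categorical moderator;
# SUD is the reference category, so the intercept is the SUD mean
external <- rma.mv(d, var_d,
                   mods = ~ DBD + ADHD,
                   random = list(~ 1 | y, ~ 1 | ID),
                   data = dat,
                   method = "REML", test = "t")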
Approach 1:

> summary(sud, digits=3)

Multivariate Meta-Analysis Model (k = 49; method: REML)

   logLik  Deviance       AIC       BIC      AICc
  -11.808    23.616    29.616    35.230    30.161

Variance Components:

            estim   sqrt  nlvls  fixed  factor
sigma^2.1   0.042  0.206     49     no       y
sigma^2.2   0.050  0.223     13     no      ID

Test for Heterogeneity:
Q(df = 48) = 409.874, p-val < .001

Model Results:

estimate     se   tval   pval  ci.lb  ci.ub
   0.185  0.085  2.189  0.033  0.015  0.356  *
Approach 2:

> summary(external, digits=3)

Multivariate Meta-Analysis Model (k = 123; method: REML)

   logLik  Deviance       AIC       BIC      AICc
  -58.470   116.940   126.940   140.878   127.467

Variance Components:

            estim   sqrt  nlvls  fixed  factor
sigma^2.1   0.065  0.256    123     no       y
sigma^2.2   0.113  0.336     17     no      ID

Test for Residual Heterogeneity:
QE(df = 120) = 846.602, p-val < .001

Test of Moderators (coefficient(s) 2,3):
QM(df = 2) = 0.956, p-val = 0.387

Model Results:

         estimate     se   tval   pval   ci.lb  ci.ub
intrcpt     0.300  0.104  2.874  0.005   0.093  0.506  **
DBD         0.114  0.084  1.354  0.178  -0.053  0.281
ADHD        0.087  0.104  0.836  0.405  -0.119  0.293
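(Side note: since SUD is the reference category, the SUD mean in Approach 2 is
simply the model intercept; it can also be obtained from the fitted moderator
model with both dummies set to 0, e.g.

predict(external, newmods = c(0, 0), digits = 3)

which reproduces the d = 0.300 estimate above.)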
> Subject: Re: [R] Mean effect size in meta-analysis using Metafor
> To: wibbeltjec at hotmail.com; r-help at r-project.org
> From: lists at dewey.myzen.co.uk
> Date: Sat, 12 Dec 2015 16:12:58 +0000
>
> Dear Carlijn
>
> I wonder whether
>
> http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
>
> answers your question? If you had given us an example of your fitting
> procedure we might know for sure.
>
> On 12/12/2015 15:35, Carlijn . wrote:
> >
> > Hi all,
> >
> > I have a question about doing a meta-analysis, in particular a
> > three-level meta-analysis using Metafor.
> >
> > I have estimated the mean overall effect size of males in two different
> > ways:
> >
> > 1. moderator analysis (male = 0, female = 1) using the whole data set
> >
> > 2. intercept-only model with a subset of the data (only males)
> >
> > The mean effect size estimated with the categorical moderator analysis (1)
> > differs considerably from the overall mean effect size estimated with an
> > intercept-only model using a subset of the data (2).
> >
> > Can someone explain this? Which method gives the better estimate of the
> > effect?
> >
> > Thank you in advance!
>
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html