Wolfgang Viechtbauer wrote:
> The dependent variable to be used with the mima function can be any
> measure for which we have a known sampling variance (or approximately
> so) and that is (approximately) normally distributed. So, the
> dependent variable could be log odds ratios, log risk ratios,
> standardized mean differences, and so on. Are you looking for the
> option to input the results from each study arm individually?
Yes: this might help when there is more than one effect to assess
(e.g. comparison of three or more treatment options in a medical
situation: you might have to assess medical treatment vs interventional
treatment vs surgery, or surgery vs radiotherapy vs chemotherapy vs
watchful waiting; all those are real-life examples...). In those cases,
you are interested in more than one main effect: A vs B, A vs C, B vs
C, etc. In fact, you are in an ANOVA-like situation.
> (e.g.,
> the log odds for a control and a treatment group). You could also use
> mima then (with an appropriately coded moderator).
Agreed.
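To fix ideas, here is a minimal sketch of what arm-level input with a
dummy-coded moderator could look like (the data are invented, and the
calling convention of mima() is assumed here, for illustration only):

  ## Hypothetical arm-level data: yi = log odds of one arm, vi = its
  ## (known) sampling variance
  dat <- data.frame(study = rep(1:3, each = 2),
                    arm   = rep(c("control", "treatment"), 3),
                    yi    = c(-0.5, 0.1, -0.7, -0.2, -0.4, 0.3),
                    vi    = c(0.05, 0.06, 0.04, 0.05, 0.07, 0.06))
  ## Assumed argument order (estimates, variances, moderators):
  mima(dat$yi, dat$vi, mods = as.numeric(dat$arm == "treatment"))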
> However, it would
> then make more sense to assume a common (but random) intercept for all
> the arms from a single study.
Yes. And that's exactly why I thought lmer() might be helpful.
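A minimal sketch of that idea in lmer() notation, using the invented
arm-level data above (with the caveat that lmer() estimates a residual
scale instead of taking the sampling variances as known, so the 1/vi
weights are only a rough stand-in):

  library(lme4)
  ## Common (but random) intercept per study, fixed arm effect:
  fit <- lmer(yi ~ arm + (1 | study), data = dat, weights = 1 / vi)
  summary(fit)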
> At this point, the function isn't set up
> that way, but I think I could rewrite it to do that.
That could be an option, or the job of a wrapper function (more on this
later...).
>> - as far as I can tell, mima allows one to assess the effect of
>> variables *nesting* studies, but not of variables *crossed* in each
>> study; therefore, you cannot directly test the effect of such
>> variables;
> I am not sure if I understand this point. I think this may relate to
> the fact that (if I understand it correctly), you want to input the
> results from each arm separately.
Not exactly. What I have in mind is that nesting and crossed variables
have different status and that their effects should be assessed
differently.
Let's take a real-life example: a meta-analysis of clinical trials where
the intervention is applied to a disease with two slightly different
clinical presentations, all (or at least most of) the trials accruing
patients with those two forms and reporting results according to the
subgroups so defined. Furthermore, some of these trials might use a
difficult blinded assessment of the results, while the rest might use
unblinded assessment.
There, you have two different "moderator" variables:
- The clinical presentation is known, not controlled, but present in
every trial: its (fixed) effect can be assessed in each of the trials;
its impact on the outcome is assessable in each trial (that's what I
mean when I say it is crossed with treatment); its interaction with the
"treatment" factor is of great medical interest and is also assessable;
- The "design" factor, on the other hand, is partially confounded with
inter-study variations (the "design" variable nests the "study"
variable); its (conceptually fixed) effect on the outcome is
confounded with the (random) inter-study variability of said outcome
(that's the bias of outcome assessment due to non-blinding); its
interaction with the treatment effect is also of interest (bias in
treatment effect estimation due to non-blinding), but much more
difficult to assess.
I don't see how to specify this with mima().
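In lmer() notation, what I have in mind would look something like this
sketch (on a hypothetical subgroup-level data frame 'trials', with
'presentation' varying within studies and 'design' constant within each
study):

  ## presentation is crossed with treatment within each study, so its
  ## effect and its interaction with treatment are estimable; design
  ## nests study, so its effect competes with the random study term:
  fit <- lmer(yi ~ treatment * presentation + design + (1 | study),
              data = trials, weights = 1 / vi)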
> You can also test for blocks of moderators with the mima function.
> Let's say you have two dummy variables that are coded to indicate
> differences between three groups (e.g., low, medium, and high quality
> studies). Now you want to test whether quality makes a difference at all
> (as opposed to testing the two dummy variables individually). Use the
> out="yes" option and then do the following:
>
> 1. from $b, take the (2x1) subset of the parameter estimates
> corresponding to the two dummy variables; denote this vector with
> b.sub
> 2. from $vb, take the (2x2) subset from the variance-covariance
> matrix corresponding to the two dummy variables (i.e., their variances
> and the covariance); denote this matrix with vb.sub
> 3. then t(b.sub) %*% solve(vb.sub) %*% b.sub is approximately
> chi-square distributed under H0 with 2 degrees of freedom.
This I know, but I'd like to see it automated in a more standard way
(see below...).
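For the record, the three steps quoted above amount to the following
(assuming the object returned with out="yes" carries the estimates in
$b and their variance-covariance matrix in $vb, with the two dummy
coefficients sitting, say, in rows 2 and 3; yi, vi, dummy1 and dummy2
are the obvious hypothetical vectors):

  res    <- mima(yi, vi, mods = cbind(dummy1, dummy2), out = "yes")
  b.sub  <- res$b[2:3, , drop = FALSE]    # (2x1) dummy estimates
  vb.sub <- res$vb[2:3, 2:3]              # (2x2) their var-cov matrix
  Q      <- drop(t(b.sub) %*% solve(vb.sub) %*% b.sub)
  pchisq(Q, df = 2, lower.tail = FALSE)   # approximate p-value under H0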
> I am also going to add to the function the option to output the log
> likelihood value. Then likelihood ratio tests are a snap to do with
> full versus reduced models. But for now, the above should work.
That would be *really* useful. IMHO, model comparison (and therefore
selection) *is* the point of statistical testing. Furthermore, it
smooths out the technicalities entailed by the nature of the tested
variable (boolean, nominal, ordered or numeric).
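Once the log likelihood is available, such a comparison would reduce to
something like this sketch ($ll is a hypothetical name for the returned
log likelihood; both fits would of course have to use full maximum
likelihood, not REML, to be comparable on their fixed effects):

  lrt <- -2 * (fit.reduced$ll - fit.full$ll)
  pchisq(lrt, df = 2, lower.tail = FALSE)  # df = number of dropped terms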
However, I think that the "right" way to do this would be to have a
function accepting the "quasi-standard" formula notation used in lm,
glm, lme, lmer, etc., and returning an object inheriting from "lm" or
"glm". That would allow one:
- to use mechanisms such as anova(model1, model2, ...);
- therefore to use step() for semi-automated model exploration (or even
stepwise meta-regression, for the lazy, the dumb and the adventurous
among R users...);
- to use already-written software such as multiple comparisons (quite
useful when your variable of interest is a factor or an ordered
factor...), prediction (also extremely useful), etc.;
- to use diagnostic plots;
- etc.;
all of this for (almost) free thanks to inheritance.
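To make the wish concrete, a purely hypothetical front end (metareg()
and its arguments are invented here, just to show the intended usage):

  fit1 <- metareg(yi ~ presentation * treatment, vi = vi, data = trials)
  fit2 <- metareg(yi ~ presentation + treatment, vi = vi, data = trials)
  anova(fit2, fit1)  # the standard model-comparison mechanism
  summary(fit1)      # and the usual methods, via inheritance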
I am currently running a couple of tests on mima() and will contact you
privately about the results...
Sincerely,
Emmanuel Charpentier