Hi, I initially posted this to the general R mailing list, but Bert Gunter
thought this might be a mixed-model issue and suggested that I post it here.
I have a dataset with 2 groups of subjects. For each subject in each group,
the response measured is the number of successes (no.success) obtained out of
the number of trials (no.trials), so a proportion of success (prop.success)
can be computed as no.success/no.trials for each subject. The data may
look like:
for group 1:
subject 1: 5 success, 10 trials
subject 2: 3 success, 8 trials
:
:
for group 2:
subject a: 7 success, 9 trials
subject b: 6 success, 7 trials
:
:
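To make the layout concrete, here is a minimal sketch of how such a data frame
might be built in R (the subject counts, trial counts, and success probability
below are made up purely for illustration):

# hypothetical data in the layout described above
set.seed(1)
n1 <- 20; n2 <- 30                                  # subjects per group
data <- data.frame(
  subject   = factor(1:(n1 + n2)),                  # unique subject id
  group     = factor(rep(c(1, 2), times = c(n1, n2))),
  no.trials = sample(5:15, n1 + n2, replace = TRUE)
)
data$no.success   <- rbinom(n1 + n2, size = data$no.trials, prob = 0.5)
data$prop.success <- data$no.success / data$no.trials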
The objective is to test whether there is a statistically significant
difference in the proportion of success between the 2 groups of subjects (say
n1=20, n2=30).
Initially, I can think of 3 ways to do the test:
1. a regular t test on the variable prop.success
2. a Mann-Whitney test on the variable prop.success (a sketch of 1 and 2
follows this list)
3. a binomial regression:
fit <- glm(cbind(no.success, no.trials - no.success) ~ group,
           data = data, family = binomial)
anova(fit, test = "Chisq")
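For reference, approaches 1 and 2 are one-liners on the data frame sketched
above; wilcox.test() is R's implementation of the Mann-Whitney test:

t.test(prop.success ~ group, data = data)       # 1. Welch t test on proportions
wilcox.test(prop.success ~ group, data = data)  # 2. Mann-Whitney rank-sum test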
Bert Gunter instead thought this might be modeled with a mixed model, because
there is random subject-to-subject variability in the probability of success
within a group. So I specified a mixed model for these data:
4. a mixed model via glmer() from lme4:
library(lme4)
fit4 <- glmer(prop.success ~ group + (1 | group), weights = no.trials,
              data = data, family = binomial)
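To inspect that fit (standard lme4 output, shown here because question 3 below
refers to the variance component):

summary(fit4)  # fixed-effect estimate and Wald test for group
VarCorr(fit4)  # the (near-zero) variance component for group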
My questions are:
1. Is the t test appropriate for comparing 2 groups of proportions?
2. How about the Mann-Whitney non-parametric test?
3. Actually, models 3 (binomial regression) and 4 (mixed model) gave me
exactly the same test for the fixed effects, and the variance component for
group in model 4 is vanishingly small (on the order of 1e-133), so is a mixed
model really necessary here?
4. Among the 4, which technique is most appropriate?
5. Is there any other technique you can suggest?
Thank you,
John