I have data consisting of binary responses from a large number of subjects on seven similar items. I have been using lmer with (crossed) random effects for subject and item. These effects are almost always (in the case of subject, always) significant additions to the model when tested with anova(), and including them also increases the Somers' Dxy value substantially.

Even without those reasons, I feel I'd have to include these random effects to account for the correlation among the seven items from each subject. Otherwise the fixed between-subject effects such as race, gender, etc. would appear more significant than they should.

But how should I interpret the fact that without a Subject effect included, the "estimated scale" parameter is usually very close to 1, while when I include the Subject effect the scale parameter drops, usually to around 0.85? Can I at least conclude something interesting from this? Is it the same as saying that the subject effect itself (meaning the 'observed' subject BLUPs) is underdispersed with respect to its theoretical normal distribution?

To summarize:

a <- lmer(Response ~ FixedEffects + (1 | Subject) + (1 | Item), data, family = binomial)
b <- lmer(Response ~ FixedEffects + (1 | Item), data, family = binomial)

Model a has a much better fit by any measure, and an estimated scale of around 0.85. Model b has a worse fit, but an estimated scale of around 1.

Obvious? Interesting? Worrisome?

Thanks,
Dan
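P.S. In case it helps, here is the general shape of what I'm fitting. The fixed-effect names (Race, Gender) and the data frame name dat are just placeholders; with current lme4 the binomial fits go through glmer(), which is what the lmer(..., family = binomial) calls above correspond to.

library(lme4)

## Model with crossed random intercepts for Subject and Item
## (Race, Gender, and dat are illustrative stand-ins)
a <- glmer(Response ~ Race + Gender + (1 | Subject) + (1 | Item),
           data = dat, family = binomial)

## Same fixed effects, but with the Item random intercept only
b <- glmer(Response ~ Race + Gender + (1 | Item),
           data = dat, family = binomial)

## Likelihood-ratio comparison for adding the Subject effect
anova(a, b)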