Greetings. I'm running some models under R using glmmPQL from MASS. These are three-level models (two grouping levels plus the individual level) with dichotomous outcomes. There are several statistics of interest; for the moment, I have two specific questions:

1.) This question refers to the following model (I give the call first, then the output of summary()):

morality.restr.1.pql <- glmmPQL(random = ~ 1 | groupid/participantid,
                                fixed = r.logic.morality ~ 1,
                                data = fgdata.df[coded.logic, ],
                                na.action = na.omit, niter = 50,
                                family = binomial)

Linear mixed-effects model fit by maximum likelihood
 Data: fgdata.df[coded.logic, ]
       AIC      BIC    logLik
  4427.735 4447.531 -2209.868

Random effects:
 Formula: ~1 | groupid
        (Intercept)
StdDev:   0.3312237

 Formula: ~1 | participantid %in% groupid
        (Intercept)  Residual
StdDev:   0.3651775 0.9765288

Variance function:
 Structure: fixed weights
 Formula: ~invwt

Fixed effects: r.logic.morality ~ 1
                 Value Std.Error  DF   t-value p-value
(Intercept) -0.1699931 0.1039887 905 -1.634727  0.1025

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max
-1.2951648 -0.8865510 -0.7183326  1.0428044  1.6135857

Number of Observations: 1042
Number of Groups:
   groupid   participantid %in% groupid
        20                           137

Raudenbush & Bryk (1992; 2002) suggest that the intraclass correlation is a useful statistic for a hierarchical linear model. My understanding is that this statistic is the proportion of the model's total variance that is "explained" by each level of the model. I have calculated it for level 2 as

  0.3312237^2 / (0.3312237^2 + 0.3651775^2 + 0.9765288^2)

and for level 3 as

  0.3651775^2 / (0.3312237^2 + 0.3651775^2 + 0.9765288^2).

However, Guo and Zhao imply that the total variance for a dichotomous-outcome (logistic) model should be a constant, specifically pi^2/3. Clearly pi^2/3 is a very different number from (0.3312237^2 + 0.3651775^2 + 0.9765288^2). Can anyone shed light on this? Does this calculation make sense at all? (I've spelled out both versions of the arithmetic in a P.S. below my signature.)

2.) There is the possibility in these models of using some cross-classification. The lowest unit of analysis here is the utterance: one statement made in a group discussion. Each statement is (currently) nested within a speaker, who is in turn nested within a group. The complication is that each statement is *also* nested within one of four scenarios, and the scenarios are repeated across the 20 groups. Using the scenario as a fixed covariate in the model seems to result in erroneously assuming too many degrees of freedom, since utterances are clustered within scenarios. But cross-classifying scenario * group into 80 clusters seems like it would seriously impede interpretation. Any advice? Ultimately it may not be terribly important to include the scenario as a covariate, but I would like to be able to do so if necessary.

Thanks for any advice.

----------------------------------------------------------------------
Andrew J Perrin - http://www.unc.edu/~aperrin
Assistant Professor of Sociology, U of North Carolina, Chapel Hill
clists at perrin.socsci.unc.edu * andrew_perrin (at) unc.edu
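P.S. For concreteness, here is the ICC arithmetic spelled out in R. The standard deviations are copied from the summary() output above. Version (a) is the calculation I described; version (b) fixes the level-1 variance at pi^2/3, which is one possible reading of the Guo and Zhao suggestion (I'm not certain it's the right reading, which is part of my question).

sigma.group  <- 0.3312237   # SD of the groupid intercept (from summary() above)
sigma.person <- 0.3651775   # SD of the participantid %in% groupid intercept
sigma.resid  <- 0.9765288   # residual SD reported by glmmPQL

## (a) Using the reported residual SD as the level-1 variance
total.a <- sigma.group^2 + sigma.person^2 + sigma.resid^2
sigma.group^2  / total.a    # "level 2" ICC as I computed it
sigma.person^2 / total.a    # "level 3" ICC as I computed it

## (b) Fixing the level-1 variance at pi^2/3 instead
total.b <- sigma.group^2 + sigma.person^2 + pi^2/3
sigma.group^2  / total.b
sigma.person^2 / total.b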