This is really a stats question, but as I am hoping this will be useful in R, here goes.

I have been using the concordance coefficient for nominal variables described in Siegel & Castellan (sometimes called Cohen's kappa) to assess inter-rater agreement. In cases where one or more raters categorize an event and one or more raters do not recognize that event, I add a "non-classified" category so that all row sums (the number of raters) are equal. As far as I can see, this does not abuse the logic of the analysis - all categories are disjoint sets.

Does anyone know of any objections to this workaround? I'm planning to add it as a standard feature of the kappa.nom() function.

Thanks

Jim
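
P.S. For concreteness, here is a minimal sketch of what I mean by the padding step. The fleiss.kappa() helper and the toy counts below are illustrative only (they are not the kappa.nom() code); the statistic is the m-rater kappa from Siegel & Castellan, computed from an events-by-categories counts matrix whose cells give the number of raters choosing each category for each event.

## Sketch of the workaround (not the kappa.nom() code): pad the
## events-by-categories counts matrix with a "non-classified" column so
## every row sums to the number of raters, then compute the m-rater
## kappa of Siegel & Castellan from the padded matrix.

fleiss.kappa <- function(counts) {
    m <- unique(rowSums(counts))      # raters per event; must be constant
    stopifnot(length(m) == 1)
    n <- nrow(counts)
    P.i   <- (rowSums(counts^2) - m) / (m * (m - 1))  # per-event agreement
    p.j   <- colSums(counts) / (n * m)                # category proportions
    P.bar <- mean(P.i)                                # observed agreement
    P.e   <- sum(p.j^2)                               # chance agreement
    (P.bar - P.e) / (1 - P.e)
}

## Toy example: 3 raters, but event 3 was classified by only 2 of them,
## so the raw row sums are unequal until the padding column is added.
raw <- matrix(c(2, 1,
                3, 0,
                1, 1),
              ncol = 2, byrow = TRUE,
              dimnames = list(NULL, c("A", "B")))
n.raters <- 3
padded <- cbind(raw, "non-classified" = n.raters - rowSums(raw))
fleiss.kappa(padded)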