FD
2008-Oct-31 03:38 UTC
[R] stratified kappa (measure agreement or interrater reliability)?
Hi All:

Could anyone point me to a package that can calculate a stratified kappa? My design is like this: 4 raters, 30 types of diagnosis scores, 20 patients. Each rater rates each patient on each type of diagnosis score. The raters' values are nominal.

I know I can measure the agreement between raters for each type of diagnosis score separately, i.e., calculate 30 kappa values. My problem is that I want an overall agreement measure (a single value and its significance over chance). Could anyone help me with this?

Thanks in advance,
FD
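For concreteness, a minimal sketch of the per-score calculation described above, using lkappa() from the psy package (Light's kappa, the mean of all pairwise Cohen's kappas, suitable for more than two raters with nominal ratings). The data frame 'ratings' and its columns patient, score_type, and rater1..rater4 are hypothetical names, not from the original post:

library(psy)

# Assumed layout: one row per (patient, score type) combination,
# nominal codes in columns rater1..rater4 (hypothetical names).
rater_cols <- c("rater1", "rater2", "rater3", "rater4")

# One Light's kappa per diagnosis score: 30 values in all.
per_score_kappa <- sapply(
  split(ratings, ratings$score_type),
  function(d) lkappa(d[, rater_cols], type = "Cohen")
)
per_score_kappa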
David Winsemius
2008-Oct-31 14:30 UTC
[R] stratified kappa (measure agreement or interrater reliability)?
On Oct 30, 2008, at 11:38 PM, FD wrote:

> Hi All:
> Could anyone point me to a package that can calculate stratified kappa?
> My design is like this, 4 raters, 30 types of diagnosis scores, 20
> patients. Each rater will rate each patient for each type of diagnosis
> score. The rater's value is nominal.
>
> I know I can measure the agreement between raters for each type of
> diagnosis score, e.g., calculate out 30 kappa values. My problem is I
> want to have an overall agreement measure (a single value and its
> significance over chance). Could anyone help me with this?

I am not a statistician or a psychometrician, so I have no experience with any of these packages. A Google search produced this link:

http://www.mail-archive.com/r-help@stat.math.ethz.ch/msg89858.html

Looking in the package psy on CRAN, I see lkappa(), Light's kappa for n raters, which seems to meet your specifications. The concord package may also have the facilities, but I am not able to tell from the documentation. Perhaps Jim Lemon can be queried.

--
David Winsemius
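Following up on the lkappa() pointer: a minimal sketch of one pragmatic route to a single overall value, pooling all 30 scores by treating each (patient, score type) pair as a subject, with a bootstrap confidence interval from the boot package for the significance-over-chance question. Whether this pooling is appropriate for a stratified design is a statistical judgment call; the 'ratings' data frame and its column names are hypothetical, as above:

library(psy)
library(boot)

# All 4 raters' nominal codes; 20 patients x 30 scores = 600 rows.
raters <- ratings[, c("rater1", "rater2", "rater3", "rater4")]

# Single overall Light's kappa across every patient-score pair.
overall_kappa <- lkappa(raters, type = "Cohen")
overall_kappa

# Bootstrap the overall kappa; a confidence interval excluding 0
# indicates agreement beyond chance at the chosen level.
lkappa_boot <- function(data, idx) lkappa(data[idx, ], type = "Cohen")
res <- boot(raters, lkappa_boot, R = 1000)
boot.ci(res, type = "bca")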