Hi, I just started learning R, and one of the most frequent things I need to calculate in my psychology lab is Cohen's kappa, so I figured that working out inter-rater reliability would be a good way to explore R. There are two different scenarios I need help with. (By the way, I have installed irr and concord.)

1) In the lab I work in, we go through transcripts and find certain words to code, and we usually compare the codes between two raters. In this case, the two raters can agree (both raters coded "apple" with "A"), have a mismatch (one rater coded "apple" with "A" but the other coded it as "B"), or have a miss (one rater coded "apple" with "A" but the other did not code it at all). There are several codes with similar procedures, and as the codes are tallied together a chart is constructed, similar to the attached file ( r.789695.n4.nabble.com/file/n3732320/example1.xls example1.xls ) titled "example1.xls". The maroon cells hold the data, and the lavender cells are just the row/column totals.

My question in this case is: how would I calculate Cohen's kappa for the cells shaded in maroon?

2) I helped run participants over the summer for a psych summer internship, and after coding them I will enter the data as shown in the attachment titled "example2.xls" ( r.789695.n4.nabble.com/file/n3732320/example2.xls example2.xls ). Another research assistant also entered the data, and I want a way to check whether we are reliable, i.e. to calculate reliability for the following variables: TimeI, TimeA, TriesI, and TriesA.

Once again, I would need to convert the Excel file into CSV, but aside from that I am lost as to what I need to do. Any help is appreciated! Thank you!
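[Editorial aside on the "I am lost after converting to CSV" step: getting the exported file into R is one line. This is only a sketch; the file name "example2.csv" and the assumption that it sits in the working directory are made up for illustration.]

# Read the exported CSV into a data frame (hypothetical file name).
my.data <- read.csv("example2.csv", header = TRUE)

# Quick checks that the columns came through as expected.
str(my.data)    # column names and types
head(my.data)   # first few rows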
The kappa2() function in the irr package takes an n x 2 matrix as input, where the two columns are the ratings by the two raters. Let x and y below be the ratings of the two raters:

library(irr)

x <- sample(c(0, 1, 2), 100, replace = TRUE)    # rater 1's ratings
o <- sample(c(0, 0, 0, 1), 100, replace = TRUE) # occasional offset to create some disagreement
y <- x + o                                      # rater 2's ratings

# Then kappa is computed as:
kappa2(cbind(x, y))

Otherwise, your post suggests that you should start with the basics and pick up an R manual to get acquainted with R.

HTH,
Daniel
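[For the second scenario (checking whether the two research assistants entered the same values for TimeI, TimeA, TriesI, and TriesA), the same kappa2() call can be applied column by column once both sets of entries are in R. The sketch below is only illustrative: the file names "entry_ra1.csv" and "entry_ra2.csv" are made up, and it assumes both files have one row per participant in the same order with exactly those four column names. Note that kappa2() treats the values as nominal categories (exact match versus mismatch); for a genuinely continuous measure such as a time, the icc() function in irr may be a more appropriate agreement index.]

library(irr)

# Hypothetical file names; each file holds one research assistant's entries,
# one row per participant, with columns TimeI, TimeA, TriesI, TriesA.
ra1 <- read.csv("entry_ra1.csv")
ra2 <- read.csv("entry_ra2.csv")

vars <- c("TimeI", "TimeA", "TriesI", "TriesA")

# Unweighted Cohen's kappa for each variable, treating entries as nominal codes.
sapply(vars, function(v) kappa2(cbind(ra1[[v]], ra2[[v]]))$value)

# For continuous measures, an intraclass correlation may be more suitable:
icc(cbind(ra1$TimeI, ra2$TimeI), model = "twoway", type = "agreement")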
On 08/10/2011 06:38 PM, gavfung wrote:
> My question in this case is: how would I calculate Cohen's kappa for the cells shaded in maroon?

I would advise not using the cohen.kappa function in the concord package, even though it will handle "not coded" responses. That package has not been maintained since most of its functions were merged into the irr package.

If the misses were symmetric between the raters, I would simply ignore them. It is only when the raters differ on misses that you can include them without inflating the test statistic. If this example is relevant, it might help you:

library(e1071)

# One row per coded word: column 1 holds rater 1's code, column 2 rater 2's,
# with "M" marking a miss (the word was not coded by that rater).
gf.scores <- matrix(c("A","A","A","A","A","A","A","M","A","M",
                      "B","B","B","B","B","B","B","B","M","B","M","B","B","M","B","M",
                      "C","C","C","C","C","C","M","C","M","C","M","C","C","M",
                      "C","M","C","M"), ncol = 2, byrow = TRUE)

# Cross-tabulate rater 1 against rater 2 and compute kappa from that table.
classAgreement(table(gf.scores[, 1], gf.scores[, 2]))

Jim
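[To tie this back to the chart in example1.xls: if the maroon cells already form a full code-by-code cross-table (rater 1's codes as rows, rater 2's codes as columns, with a row and column for misses), classAgreement() can be given that table of counts directly, with no need to rebuild the word-level codes. The layout of the attachment is not visible here, so the counts below are invented purely for illustration; only the structure matters.]

library(e1071)

# Hypothetical counts standing in for the maroon cells:
# rows = rater 1's codes, columns = rater 2's codes, "Miss" = not coded.
tab <- matrix(c(20,  2,  1,  3,
                 1, 15,  2,  2,
                 0,  1, 18,  4,
                 2,  3,  1,  0),
              nrow = 4, byrow = TRUE,
              dimnames = list(rater1 = c("A", "B", "C", "Miss"),
                              rater2 = c("A", "B", "C", "Miss")))

# classAgreement() returns the raw agreement (diag) and Cohen's kappa.
classAgreement(tab)

# The same kappa by hand, for reference:
n  <- sum(tab)
p0 <- sum(diag(tab)) / n                      # observed agreement
pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
(p0 - pe) / (1 - pe)

[If, on the other hand, the chart only tallies total agreements, mismatches, and misses per code rather than the full cross-table, kappa generally cannot be recovered from those totals alone; you would need to go back to the word-level codes, as in the gf.scores example above.]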