similar to: stratified kappa (measure agreement or interrater reliability)?

Displaying 20 results from an estimated 1000 matches similar to: "stratified kappa (measure agreement or interrater reliability)?"

2006 May 16
2
Interrater and intrarater variability (intraclass correlation coefficients)
It sounds as though you are interested in Hoyt's ANOVA, which is a form of generalizability theory. This is usually estimated by getting the variance components from an ANOVA.
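A minimal sketch of that variance-components route, assuming a long-format data frame d with columns subject, rater, and score (all names hypothetical); Hoyt's reliability is the subjects mean square minus the error mean square, over the subjects mean square:

    # Hoyt's reliability from two-way ANOVA mean squares (sketch)
    fit  <- aov(score ~ factor(subject) + factor(rater), data = d)
    ms   <- summary(fit)[[1]][["Mean Sq"]]   # subjects, raters, residual
    hoyt <- (ms[1] - ms[3]) / ms[1]          # (MS_subjects - MS_error) / MS_subjects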
2006 May 16
5
Interrater and intrarater variability (intraclass correlation coefficients)
Hello! I want to calculate the intra- and interrater reliability of my study. The design is very simple: 5 raters rated a diagnostic score 3 times for 19 patients. Are there methods/functions in R? I only found packages to calculate interrater variability and intraclass correlation coefficients for n*m matrices (n subjects, m raters) - I have n subjects, m raters, and r repetitions. Can
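For the plain subjects-by-raters part of this design, irr::icc applies directly; the extra repetition factor would instead need a variance-components model. A sketch, assuming ratings is a 19 x 5 matrix (patients in rows, raters in columns, one repetition or the mean over the three):

    # Two-way agreement ICC on a subjects x raters matrix (sketch)
    library(irr)
    icc(ratings, model = "twoway", type = "agreement", unit = "single")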
2013 Jan 11
0
Weighted Kappa for m Raters
Hello, I have 50 raters and 180 cases, which are rated as malignant, probably malignant, probably benign, or benign. I want to compare all the raters, but I want a weighted kappa to penalize differences between malignant and benign more than differences between malignant and probably malignant. I only found the weighted option in the 2-rater functions (kappa2, wkappa, cohen.kappa). The
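irr itself does not appear to offer a weighted multi-rater kappa; one common workaround is to average the weighted two-rater kappa over all rater pairs, a weighted analogue of Light's kappa. A sketch, assuming ratings is a 180 x 50 matrix of ordered category codes 1-4:

    # Mean weighted kappa over all rater pairs (sketch)
    library(irr)
    prs    <- combn(ncol(ratings), 2)
    kappas <- apply(prs, 2, function(p)
      kappa2(ratings[, p], weight = "squared")$value)
    mean(kappas)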
2009 Jul 13
3
Help With Fleiss Kappa
Hi All, I am using Fleiss kappa for inter-rater agreement. Are there any known issues with the Fleiss kappa calculation in R? Even when I supply mock data with total agreement among the raters I do not get a kappa value of 1; instead I am getting negative values. I am using the irr package, version 0.70. Any help is much appreciated. Thanks and Regards, M
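A quick sanity check: irr::kappam.fleiss expects subjects in rows and raters in columns, and mock data where every subject gets the identical category makes the expected agreement 1, so kappa degenerates; total agreement with categories varying across subjects should give 1:

    # Perfect agreement across raters, categories varying across subjects
    library(irr)
    ratings <- cbind(r1 = c(1, 2, 3, 1, 2),
                     r2 = c(1, 2, 3, 1, 2),
                     r3 = c(1, 2, 3, 1, 2))
    kappam.fleiss(ratings)   # kappa = 1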
2007 Jun 26
3
inter-rater agreement index kappa
Is there a function that calculates the inter-rater agreement index (kappa) in R? Thanks ../Murli
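For two raters, Cohen's kappa is available in several packages, e.g. irr::kappa2 or psych::cohen.kappa. A sketch with hypothetical rating vectors r1 and r2:

    # Two-rater Cohen's kappa (sketch)
    library(irr)
    kappa2(cbind(r1, r2))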
2012 Feb 01
1
Function to compute multi-response, multi-rater kappa?
I'm looking for a function in R that extends kappa to multiple raters when there is more than one response per subject. For example, say a group of doctors have to assign diseases to patients. Each patient will be assigned one to many diseases, and the number of doctors assigning diseases to any one patient will be two to many. Here's an extremely simple example of the type of data I
2009 Mar 26
1
ICC question: Interrater and intrarater variability (intraclass correlation coefficients)
Hello dear R help group. I encountered this old thread (http://tinyurl.com/dklgsk) containing a question similar to the one I have, but it was left without an answer. I am hoping one of you might help. A simplified situation: I have a factorial design (with 2^3 experiment combinations), for 167 subjects, each one has answered the same question twice (out of a bunch of "types" of
2003 Mar 11
0
Interrater and intrarater reliability
Dear R users, The following function is R code for the main computations in the article: M. Eliasziw, S. Lorraine Young, M. Gail Woodbury and Karen Fryday-Field (1994): Statistical Methodology for the Concurrent Assessment of Interrater and Intrarater Reliability: Using Goniometric Measurements as an Example. Physical Therapy 74(8): 777-788. The function gives the estimated inter- and intrarater
2006 Feb 09
2
latent class model for rater agreement
Hello there, I would like to test the agreement amongst 6 raters for nominal data on a scale from 1-4, and conduct a latent class analysis in R. How should the data be formatted and what code should I use? Thank you very much. Lisa Wang, Princess Margaret Hospital, Biostatistics, tel: 416 946 4501
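A minimal latent class sketch with the poLCA package, assuming a data frame d whose columns r1..r6 hold each rater's category as an integer 1-4 (names hypothetical):

    # Latent class model for agreement among 6 raters (sketch)
    library(poLCA)
    f  <- cbind(r1, r2, r3, r4, r5, r6) ~ 1
    lc <- poLCA(f, data = d, nclass = 2)   # compare nclass = 2, 3, ... by BIC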
2006 May 17
1
Response to query re: calculating intraclass correlations
Karl, If you use one of the specialized packages to calculate your ICC, make sure that you know what you're getting. (I haven't checked the packages out myself, so I don't know either.) You might want to read David Futrell's article in the May 1995 issue of Quality Progress where he describes six different ways to calculate ICCs from the same data set, all with different
2012 Feb 01
0
Multi-response, multi-rater kappa?
I'm looking for an extension of kappa to measure agreement among multiple raters when there can be more than one response per subject. For example, say a group of doctors assign diseases to patients. Each patient will be assigned one to many diseases, and the number of doctors assigning diseases to any one patient will be two to many. Here's an extremely simple example of the type of
2006 Apr 28
1
Where do I find Cohen's kappa???
Hello, I'm looking for a way to measure the goodness of fit of my model with Cohen's kappa (scaled between 0 and 1). The kappa function does not give the results I'm looking for. Here's the code: z <- glm(x ~ y, binomial); kappa(z, exact = TRUE). Does anyone know more? Many thanks, Christian
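The likely surprise here is that base R's kappa() computes the condition number of a matrix, not Cohen's kappa. A sketch of what the poster probably wants, comparing the observed outcome with thresholded model predictions (the 0.5 cutoff is an assumption):

    # Cohen's kappa between observed classes and glm predictions (sketch)
    library(irr)
    pred <- as.numeric(fitted(z) > 0.5)   # z is the glm from the post
    kappa2(cbind(x, pred))                # x is the observed 0/1 outcome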
2002 Feb 05
2
Measures of agreement
Greetings. I've been experimenting with some algorithms for document classification (specifically, a Naive Bayes classifier and a kNN classifier) and I would now like to calculate some inter-rater reliability scores. I have the data in a PostgreSQL database, such that for each document, each measure (there are 9) has three variables: ap_(measure), nb_(measure), and knn_(measure). ap is me
2009 Jul 13
0
Fleiss Kappa
Hello everyone, I am calculating Fleiss kappa; I have 28 raters, 5 subjects, and 5 ratings. The problem is that there are 2 missing values in the data. Would it be better to replace those with "0", or should those be omitted? By omission I will be left with only 3 subjects. And my second problem is that overall agreement comes to zero, whereas the data is not showing agreement to be close
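Recoding the missing values as "0" would invent a rating category that no rater ever used and bias the result; dropping incomplete subjects (or using a method that models missingness) seems safer, even at the cost of subjects. A sketch, assuming ratings is the subjects x raters matrix:

    # Drop subjects with any missing rating instead of recoding NA as 0
    library(irr)
    kappam.fleiss(na.omit(ratings))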
2008 Aug 22
3
simple generation of artificial data with defined features
Dear R colleagues, I am quite a newbie to R, fighting my stupidity to solve a probably quite simple problem of generating artificial data with defined features. I am conducting a study of inter-observer agreement in child bronchoscopy. One of the most important measures is kappa according to Fleiss, which is very comfortably available in R through the irr package. Unfortunately medical doctors
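One simple way to simulate ratings with a tunable level of agreement: let every rater copy a base rating with probability p and otherwise pick a category at random; higher p gives higher kappa. A sketch (all names hypothetical):

    # Artificial ratings with adjustable inter-observer agreement (sketch)
    simulate_ratings <- function(n = 30, raters = 4, cats = 3, p = 0.8) {
      base <- sample(cats, n, replace = TRUE)
      sapply(seq_len(raters), function(i) {
        keep <- runif(n) < p
        ifelse(keep, base, sample(cats, n, replace = TRUE))
      })
    }
    library(irr)
    kappam.fleiss(simulate_ratings(p = 0.9))   # raise p to raise kappa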
2009 Dec 22
1
Cohen's kappa, unequal score ranges
Hi, I am having problems getting Cohen's kappa to work. I have been using the function ckappa(x, y) from the psy package. I am trying to test for inter-observer reliability; I have 2 observers and 26 categories, however the two observers might not necessarily have the same range of categories (I have unequal score ranges). However, I thought R could cope with this. Each time I
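One common fix is to force both raters onto the same set of factor levels before computing kappa, so the contingency table stays square even when one observer never used some categories. A sketch with the x and y from the post:

    # Align category levels across the two observers (sketch)
    library(psy)
    lev <- sort(union(unique(x), unique(y)))
    d   <- data.frame(r1 = factor(x, levels = lev),
                      r2 = factor(y, levels = lev))
    ckappa(d)   # psy::ckappa takes one two-column data set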
2003 Sep 25
0
mixing nested and crossed factors using lme
Hi all, I have an experiment where 5 raters assessed the quality of 24 web sites (each rater rated each site once). I want to come up with a measure of reliability of the ratings for the web sites, i.e., to what extent does each rater give the same (or similar) rating to each web site. My idea was to fit a random effects model using lme and, from that, calculate the intraclass correlation as a
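Raters and sites are crossed here, which is easier in lme4 than in nlme. A sketch, assuming long-format data d with columns rating, site, and rater (names hypothetical); the ICC for sites is the site variance as a share of the total:

    # ICC from crossed random effects (sketch)
    library(lme4)
    m  <- lmer(rating ~ 1 + (1 | site) + (1 | rater), data = d)
    vc <- as.data.frame(VarCorr(m))
    vc$vcov[vc$grp == "site"] / sum(vc$vcov)   # site ICC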
2003 May 28
1
Bradley Terry model and glmmPQL
Dear R-ers, I am having trouble understanding why I am getting an error using glmmPQL (library MASS). I am getting the following error: iteration 1 Error in MEEM(object, conLin, control$niterEM) : Singularity in backsolve at level 0, block 1 The long story: I have data from an experiment on pairwise comparisons between 3 treatments (a, b, c). So a typical run of an experiment
2006 Mar 07
10
Star Rating Component?
Hi, I'm looking for a star rating component for RoR, a bit like Votio (http://redalt.com/downloads/ - find the Votio heading) or the star rating used on Amazon. I don't really need the AJAX capabilities, just the ability to bind the results to a hidden drop-down or radio inputs. Multiple raters per page is also an issue. Any recommendations? -- Posted via
2010 Aug 03
2
How to extract ICC value from irr package?
Hi all, There are 62 samples in my data and I tested each one 3 times, so I want to use the ICC (intraclass correlation) from the irr package to test the consistency among the tests. combatexpdata_p[1:62] is the first test's results, combatexpdata_p[63:124] is the second one, and combatexpdata_p[125:186] is the third. Here is the result:
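Given that slicing, the vector can be reshaped into a 62 x 3 matrix (matrix() fills column-wise) and the estimate pulled out of the list that irr::icc returns:

    # Extract the ICC point estimate (sketch)
    library(irr)
    m   <- matrix(combatexpdata_p, nrow = 62, ncol = 3)   # one test per column
    res <- icc(m, model = "twoway", type = "consistency")
    res$value   # the ICC estimate; res$lbound and res$ubound give the CI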