I want to evaluate the overall accuracy of some clustering methods. To that end, I have created 10 different data sets (by simulation) and measured the accuracy (events correctly classified divided by the total number of events) of each method on each data set, so in the end I have 10 accuracy measures per method.

If I calculate the standard deviation of those 10 accuracy measures, what does that quantity tell me? Is the method with the lowest standard deviation of accuracy the one with the highest reproducibility/repeatability, or something else? What would be the right term?

Thank you
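For concreteness, here is a minimal R sketch of what I am computing for one method; the accuracy values below are made up purely for illustration:

# 10 hypothetical accuracy values, one per simulated data set
acc <- c(0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94, 0.90, 0.91)

mean(acc)  # average accuracy across the 10 data sets
sd(acc)    # spread of accuracy across data sets; this is the quantity I am asking about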