search for: twoclasssummary

Displaying 8 results from an estimated 8 matches for "twoclasssummary".

2011 May 12
2
Can ROC be used as a metric for optimal model selection for randomForest?
...ortance = TRUE, do.trace = 100, keep.inbag = TRUE, tuneGrid = grid, trControl=bootControl, scale = TRUE, metric = "ROC") I wanted to use ROC as the metric for variable selection. I know that this works with the logit model by making sure that classProbs = TRUE and summaryFunction = twoClassSummary in the trainControl function. However if I do the same with randomForest, I get a warning saying that "In train.default(x = trainPred, y = trainDep, method = "rf", : The metric "ROC" was not in the result set. Accuracy will be used instead." I wonder if ROC met...
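The usual resolution for the warning quoted above is to give `train()` a `trainControl` object that both turns on class probabilities and uses `twoClassSummary`, exactly as with the logit model; the ROC metric is then available for any method that can produce class probabilities, including random forests. A minimal sketch under that assumption (uses `caret::twoClassSim` for illustrative two-class data; variable names are made up here, not taken from the poster's session):

```r
library(caret)
library(randomForest)

set.seed(1)
dat      <- twoClassSim(200)   # simulated two-class data shipped with caret
trainDep <- dat$Class
trainPred <- dat[, names(dat) != "Class"]

bootControl <- trainControl(method = "boot",
                            classProbs = TRUE,              # needed for ROC
                            summaryFunction = twoClassSummary)

rfFit <- train(trainPred, trainDep,
               method = "rf",
               metric = "ROC",   # now present in the result set, no warning
               trControl = bootControl)
```

Without `summaryFunction = twoClassSummary`, the resampling results only contain Accuracy and Kappa, which is why caret falls back to Accuracy.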
2013 Mar 06
1
CARET and NNET fail to train a model when the input is high dimensional
The following code fails to train a nnet model in a random dataset using caret: nR <- 700 nCol <- 2000 myCtrl <- trainControl(method="cv", number=3, preProcOptions=NULL, classProbs = TRUE, summaryFunction = twoClassSummary) trX <- data.frame(replicate(nR, rnorm(nCol))) trY <- runif(1)*trX[,1]*trX[,2]^2+runif(1)*trX[,3]/trX[,4] trY <- as.factor(ifelse(sign(trY)>0,'X1','X0')) my.grid <- createGrid(method.name, grid.len, data=trX) my.model <- train(trX,trY,method=method.name,t...
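One plausible cause (an assumption here, not confirmed in the thread): with thousands of input columns, the number of network weights exceeds `nnet`'s default `MaxNWts` limit of 1000, so every fit aborts. `train()` passes extra arguments through to `nnet::nnet`, so the limit can be raised; a sketch reusing the poster's `trX`/`trY` (note that `createGrid()` was later replaced by `tuneGrid`/`tuneLength` in caret):

```r
library(caret)

myCtrl <- trainControl(method = "cv", number = 3,
                       classProbs = TRUE,
                       summaryFunction = twoClassSummary)

my.model <- train(trX, trY, method = "nnet",
                  metric = "ROC", trControl = myCtrl,
                  MaxNWts = 100000,   # passed through to nnet::nnet
                  trace = FALSE)      # silence per-iteration output
```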
2010 Sep 29
0
caret package version 4.63
...the list: - wrappers for a number of new models were added, notably gam models (from both the gam and mgcv packages) and logic trees - when resampling with train(), class probabilities can now be used to calculate performance (such as the AUC of an ROC curve). A basic summary function, twoClassSummary(), can be used to calculate sensitivity, specificity and the ROC AUC. - repeated k-fold CV and the bootstrap 632 technique are available in train() - pre-processing can now be used within each resampling iteration within train(). - a function for independent component regression (icr...
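`twoClassSummary()` can also be called directly on a data frame of held-out results, which makes the convention clear: it expects columns named `obs` and `pred` (factors with the same levels) plus one probability column per class level, with the first level treated as the event. A hand-built sketch (the values are invented for illustration; requires the pROC package for the AUC):

```r
library(caret)

d <- data.frame(obs  = factor(c("Yes", "Yes", "No", "No"),
                              levels = c("Yes", "No")),
                pred = factor(c("Yes", "No", "No", "No"),
                              levels = c("Yes", "No")),
                Yes  = c(0.9, 0.4, 0.2, 0.1),   # P(class == "Yes")
                No   = c(0.1, 0.6, 0.8, 0.9))

# Returns a named vector with ROC, Sens and Spec
twoClassSummary(d, lev = levels(d$obs))
```

This is the same function `train()` calls on each resample when it is supplied via `summaryFunction` in `trainControl()`.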
2011 Dec 22
0
randomforest and AUC using 10 fold CV - Plotting results
...legend=c(paste("Random Forests (AUC=",formatC(auc1,digits=4,format="f"),")",sep="")), col=c("red"), lty=1) #Cross validation using 10 fold CV: ctrl <- trainControl(method = "cv", classProbs = TRUE, summaryFunction = twoClassSummary) set.seed(1) rfEstimate <- train(factor(Species) ~ .,data = iris, method = "rf", metric = "ROC", tuneGrid = data.frame(.mtry = 2), trControl = ctrl) rfEstimate How can I plot the results from the cross-validation on the previous ROC plot? Thanks, David
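One way to answer the question above is to keep the held-out predictions with `savePredictions = TRUE`, pool them across folds, and draw the resulting curve onto the existing plot. A sketch under two assumptions not in the original post: the outcome has been collapsed to two classes (`twoClassSummary` is two-class only, and iris has three species), and the pROC package is used for the curve; `dat` and its `Class` column are placeholder names:

```r
library(caret)
library(pROC)

ctrl <- trainControl(method = "cv", classProbs = TRUE,
                     summaryFunction = twoClassSummary,
                     savePredictions = TRUE)   # keep held-out predictions

set.seed(1)
rfEstimate <- train(Class ~ ., data = dat, method = "rf",
                    metric = "ROC",
                    tuneGrid = data.frame(mtry = 2),  # older caret spells this .mtry
                    trControl = ctrl)

# Pool the fold-wise held-out predictions and overlay the ROC curve
# on the plot already on the device:
cvROC <- roc(rfEstimate$pred$obs,
             rfEstimate$pred[, levels(dat$Class)[1]])
plot(cvROC, add = TRUE, col = "blue")
```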
2013 Feb 10
1
Training with very few positives
...ives? I currently have the following setup: ======================================== library(caret) tmp <- createDataPartition(Y, p = 9/10, times = 3, list = TRUE) myCtrl <- trainControl(method = "boot", index = tmp, timingSamps = 2, classProbs = TRUE, summaryFunction = twoClassSummary) RFmodel <- train(X,Y,method='rf',trControl=myCtrl,tuneLength=1, metric="ROC") SVMmodel <- train(X,Y,method='svmRadial',trControl=myCtrl,tuneLength=3, metric="ROC") KNNmodel <- train(X,Y,method='knn',trControl=myCtrl,tuneLength=10, m...
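With very few positives, resampling the minority class inside each bootstrap can stabilise the ROC estimates. Later caret versions (an assumption; this option did not exist at the time of the post) accept a `sampling` argument in `trainControl`; a sketch reusing the poster's `X`/`Y` and control object:

```r
library(caret)

myCtrl <- trainControl(method = "boot",
                       classProbs = TRUE,
                       summaryFunction = twoClassSummary,
                       sampling = "up")   # up-sample positives in each resample

RFmodel <- train(X, Y, method = "rf",
                 trControl = myCtrl, tuneLength = 1, metric = "ROC")
```

`"down"` and (with the themis/DMwR packages) `"smote"` are the other built-in choices; applying the sampling inside resampling, rather than once up front, avoids optimistic performance estimates.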
2010 Oct 22
2
Random Forest AUC
Guys, I used Random Forest with a couple of data sets I had to predict for binary response. In all the cases, the AUC of the training set is coming to be 1. Is this always the case with random forests? Can someone please clarify this? I have given a simple example, first using logistic regression and then using random forests to explain the problem. AUC of the random forest is coming out to be
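A training-set AUC of 1 is expected behaviour rather than a bug: predicting on the same rows the forest was grown on returns near-perfect in-bag fits. The out-of-bag (OOB) predictions, which `predict()` returns when no `newdata` is given, are the honest internal estimate. A sketch contrasting the two (assumes a two-class factor `y`, a predictor data frame `x`, and the pROC package; none of these names come from the original post):

```r
library(randomForest)
library(pROC)

rfFit <- randomForest(x, y)

# In-bag probabilities: optimistic, AUC typically at or near 1
inbag <- predict(rfFit, x, type = "prob")[, 2]
auc(roc(y, inbag))

# OOB probabilities: predict() with no newdata (equivalently rfFit$votes)
oob <- predict(rfFit, type = "prob")[, 2]
auc(roc(y, oob))
```

The OOB AUC is the number comparable to a cross-validated logistic-regression AUC.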
2013 Nov 15
1
Inconsistent results between caret+kernlab versions
I'm using caret to assess classifier performance (and it's great!). However, I've found that my results differ between R2.* and R3.* - reported accuracies are reduced dramatically. I suspect that a code change to kernlab ksvm may be responsible (see version 5.16-24 here: http://cran.r-project.org/web/packages/caret/news.html). I get very different results between caret_5.15-61 +