Displaying 9 results from an estimated 9 matches for "summaryfunct".

2011 May 12 (2 replies)
Can ROC be used as a metric for optimal model selection for randomForest?
..."rf", importance = TRUE, do.trace = 100, keep.inbag = TRUE, tuneGrid = grid, trControl = bootControl, scale = TRUE, metric = "ROC") I wanted to use ROC as the metric for variable selection. I know that this works with the logit model by making sure that classProbs = TRUE and summaryFunction = twoClassSummary in the trainControl function. However, if I do the same with randomForest, I get a warning saying "In train.default(x = trainPred, y = trainDep, method = "rf", : The metric "ROC" was not in the result set. Accuracy will be used instead."...
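A minimal sketch of the usual fix, assuming a two-class outcome whose factor levels are valid R names (mlbench's Sonar stands in for the poster's data): train() only finds the "ROC" metric when its trainControl sets classProbs = TRUE and summaryFunction = twoClassSummary.

library(caret)
library(mlbench)                 # Sonar is an assumed stand-in dataset
data(Sonar)

ctrl <- trainControl(method = "boot", number = 25,
                     classProbs = TRUE,                  # required for ROC
                     summaryFunction = twoClassSummary)  # adds ROC/Sens/Spec

set.seed(1)
rfFit <- train(Class ~ ., data = Sonar, method = "rf",
               metric = "ROC",   # now present in the result set
               trControl = ctrl)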
2012 Apr 13 (1 reply)
caret package: custom summary function in trainControl doesn't work with oob?
...y redundant for such models. Since they take a while to build in the first place, it really slows things down when estimating performance using bootstrap. I can successfully run either using the oob 'resampling method' with the default RMSE optimisation, or run using bootstrap and my custom summaryFunction as the thing to optimise, but they don't work together. If I try to use oob and supply a summaryFunction, caret throws an error saying it can't find the relevant metric. Now, if caret is simply polling the randomForest object for the stored oob error I can understand this limitation, bu...
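For reference, a sketch of the two configurations the poster says work separately; medianAE is a hypothetical custom summary and MASS's Boston data is an assumed example.

library(caret)

# Hypothetical custom summary: median absolute error, reported as "MedAE".
medianAE <- function(data, lev = NULL, model = NULL) {
  c(MedAE = median(abs(data$obs - data$pred)))
}

# Works: out-of-bag error with the default RMSE metric.
oobFit <- train(medv ~ ., data = MASS::Boston, method = "rf",
                trControl = trainControl(method = "oob"))

# Works: bootstrap resampling optimising the custom summary.
bootFit <- train(medv ~ ., data = MASS::Boston, method = "rf",
                 metric = "MedAE", maximize = FALSE,
                 trControl = trainControl(method = "boot",
                                          summaryFunction = medianAE))

# The combination fails because with method = "oob" caret reads the error
# stored on the randomForest object instead of summarising held-out
# predictions, so a custom summary has nothing to work on.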
2012 Feb 10 (1 reply)
Custom caret metric based on prob-predictions/rankings
...classification problems, and I'm trying to specify a custom scoring metric (recall at p, ROC, etc.) that depends not just on the class output but on the probability estimates, so that caret::train can choose the optimal tuning parameters based on this metric. However, when I supply a trainControl summaryFunction, the data given to it contains only class predictions, so the only metrics possible are things like accuracy, kappa, etc. Is there a way to do this that I'm overlooking? If not, could I put this in as a feature request? Thanks! -- Yang Zhang http://yz.mit.edu/
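Later caret releases do pass probabilities to the summary function: once trainControl(classProbs = TRUE) is set, the data frame handed to summaryFunction carries one probability column per class level. A sketch with a made-up aucSummary, assuming pROC for the AUC itself:

library(caret)
library(pROC)

# Hypothetical summary computing AUC from the per-class probability
# columns that caret adds when classProbs = TRUE.
aucSummary <- function(data, lev = NULL, model = NULL) {
  rocObj <- roc(data$obs, data[[lev[1]]], levels = rev(lev))
  c(AUC = as.numeric(auc(rocObj)))
}

ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE,            # prob columns reach data
                     summaryFunction = aucSummary)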
2013 Mar 06 (1 reply)
CARET and NNET fail to train a model when the input is high dimensional
The following code fails to train an nnet model on a random dataset using caret: nR <- 700 nCol <- 2000 myCtrl <- trainControl(method="cv", number=3, preProcOptions=NULL, classProbs = TRUE, summaryFunction = twoClassSummary) trX <- data.frame(replicate(nR, rnorm(nCol))) trY <- runif(1)*trX[,1]*trX[,2]^2+runif(1)*trX[,3]/trX[,4] trY <- as.factor(ifelse(sign(trY)>0,'X1','X0')) my.grid <- createGrid(method.name, grid.len, data=trX) my.model <- train(trX,trY...
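Two likely culprits, sketched below under stated assumptions: replicate(nR, rnorm(nCol)) builds a data frame of nCol rows by nR columns, so the intended dimensions are swapped; and nnet() refuses any network with more than MaxNWts = 1000 weights by default, a cap that 2000 inputs blow past immediately but that can be raised through train()'s ... argument.

library(caret)
library(nnet)

nR <- 700; nCol <- 2000
trX <- data.frame(replicate(nCol, rnorm(nR)))   # nR rows, nCol columns
trY <- runif(1) * trX[, 1] * trX[, 2]^2 + runif(1) * trX[, 3] / trX[, 4]
trY <- factor(ifelse(sign(trY) > 0, "X1", "X0"))

myCtrl <- trainControl(method = "cv", number = 3, classProbs = TRUE,
                       summaryFunction = twoClassSummary)

fit <- train(trX, trY, method = "nnet", metric = "ROC",
             trControl = myCtrl, trace = FALSE,
             MaxNWts = 50000)   # raise nnet's default 1000-weight cap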
2011 May 28 (0 replies)
how to train ksvm with spectral kernel (kernlab) in caret?
...vm with a spectral kernel from the kernlab package. Sadly, an SVM with a spectral kernel is not among the many methods in caret... using caret to train svmRadial: ------------------ library(caret) library(kernlab) data(iris) TrainData<- iris[,1:4] TrainClasses<- iris[,5] set.seed(2) fitControl$summaryFunction<- Rand svmNew<- train(TrainData, TrainClasses, method = "svmRadial", preProcess = c("center", "scale"), metric = "cRand", tuneLength = 4) svmNew ------------------- here is an example of how to train the ksvm with spectral kernel ---------...
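One detail stands out in the excerpt: fitControl is modified but never passed to train(), so the custom summary is silently ignored. A sketch of the wiring, reusing TrainData and TrainClasses from above; the Rand function here is only a guess at the poster's intent, built on e1071::classAgreement().

library(caret)
library(e1071)   # classAgreement() provides the corrected Rand index

# Hypothetical summary returning the corrected Rand index as "cRand".
Rand <- function(data, lev = NULL, model = NULL) {
  c(cRand = classAgreement(table(data$obs, data$pred))$crand)
}

fitControl <- trainControl(method = "boot", summaryFunction = Rand)

svmNew <- train(TrainData, TrainClasses, method = "svmRadial",
                preProcess = c("center", "scale"),
                metric = "cRand", tuneLength = 4,
                trControl = fitControl)   # without this, Rand is never used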
2011 Dec 22 (0 replies)
randomforest and AUC using 10 fold CV - Plotting results
..."bottomright", legend=c(paste("Random Forests (AUC=",formatC(auc1,digits=4,format="f"),")",sep="")), col=c("red"), lty=1) #Cross validation using 10 fold CV: ctrl <- trainControl(method = "cv", classProbs = TRUE, summaryFunction = twoClassSummary) set.seed(1) rfEstimate <- train(factor(Species) ~ ., data = iris, method = "rf", metric = "ROC", tuneGrid = data.frame(.mtry = 2), trControl = ctrl) rfEstimate How can I plot the results from the cross validation on the previous ROC plot? thanks...
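A hedged sketch of one way to do it, assuming pROC and a two-class subset of iris (called iris2 below, since twoClassSummary needs exactly two levels): keep the held-out predictions with savePredictions = TRUE, then overlay a curve built from them.

library(caret)
library(pROC)

iris2 <- droplevels(subset(iris, Species != "setosa"))  # two-class subset

ctrl <- trainControl(method = "cv", classProbs = TRUE,
                     summaryFunction = twoClassSummary,
                     savePredictions = TRUE)   # keep hold-out predictions

set.seed(1)
rfEstimate <- train(Species ~ ., data = iris2, method = "rf",
                    metric = "ROC",
                    tuneGrid = data.frame(mtry = 2),  # older caret: .mtry
                    trControl = ctrl)

# Pool the CV hold-out predictions and add their ROC curve to the
# existing plot.
cvRoc <- roc(rfEstimate$pred$obs, rfEstimate$pred$versicolor)
plot(cvRoc, add = TRUE, col = "blue")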
2013 Feb 10 (1 reply)
Training with very few positives
...ith very few positives? I currently have the following setup: ======================================== library(caret) tmp <- createDataPartition(Y, p = 9/10, times = 3, list = TRUE) myCtrl <- trainControl(method = "boot", index = tmp, timingSamps = 2, classProbs = TRUE, summaryFunction = twoClassSummary) RFmodel <- train(X,Y,method='rf',trControl=myCtrl,tuneLength=1, metric="ROC") SVMmodel <- train(X,Y,method='svmRadial',trControl=myCtrl,tuneLength=3, metric="ROC") KNNmodel <- train(X,Y,method='knn',trControl=my...
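One knob worth knowing about, sketched with the poster's X and Y: later caret releases accept a sampling argument in trainControl ("down", "up", "smote", ...) that rebalances the classes inside each resample, which often helps when positives are rare.

library(caret)

# sampling = "down" down-samples the majority class within each resample;
# the argument is an assumption about caret version (later 6.0.x releases).
myCtrl <- trainControl(method = "boot", classProbs = TRUE,
                       summaryFunction = twoClassSummary,
                       sampling = "down")

RFmodel <- train(X, Y, method = "rf", trControl = myCtrl,
                 tuneLength = 1, metric = "ROC")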
2013 Nov 15 (1 reply)
Inconsistent results between caret+kernlab versions
I'm using caret to assess classifier performance (and it's great!). However, I've found that my results differ between R2.* and R3.* - reported accuracies are reduced dramatically. I suspect that a code change to kernlab ksvm may be responsible (see version 5.16-24 here: http://cran.r-project.org/web/packages/caret/news.html). I get very different results between caret_5.15-61 +
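Version drift like this is easier to diagnose when the resampling itself is deterministic. A sketch; this only pins run-to-run randomness and cannot undo an algorithm change inside kernlab.

library(caret)

# Record exactly which package versions produced the numbers.
sessionInfo()

# Fix the seed used inside every resample: one integer vector per resample
# (length = number of tuning candidates, assumed to be 4 here) plus a
# single final seed for the last model fit.
set.seed(1)
seeds <- vector("list", 26)                  # 25 bootstrap resamples + 1
for (i in 1:25) seeds[[i]] <- sample.int(10000, 4)
seeds[[26]] <- sample.int(10000, 1)

ctrl <- trainControl(method = "boot", number = 25, seeds = seeds)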
2010 Oct 22 (2 replies)
Random Forest AUC
Guys, I used Random Forest with a couple of data sets I had, to predict a binary response. In all the cases, the AUC on the training set comes out to be 1. Is this always the case with random forests? Can someone please clarify this? I have given a simple example, first using logistic regression and then using random forests, to explain the problem. The AUC of the random forest is coming out to be
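A sketch of why this happens, on simulated data with pROC assumed: scoring a random forest on the rows it was grown on is near-perfect by construction, while the out-of-bag predictions (what predict() returns when newdata is omitted) give an honest estimate.

library(randomForest)
library(pROC)

set.seed(1)
n <- 200
x <- data.frame(a = rnorm(n), b = rnorm(n))
y <- factor(ifelse(x$a + rnorm(n) > 0, "yes", "no"))

rf <- randomForest(x, y)

# Resubstitution AUC: the forest has effectively memorised these rows.
trainProb <- predict(rf, newdata = x, type = "prob")[, "yes"]
auc(roc(y, trainProb))

# Honest AUC from the out-of-bag votes: each row is scored only by trees
# that never saw it during fitting.
oobProb <- predict(rf, type = "prob")[, "yes"]
auc(roc(y, oobProb))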