similar to: caret: Error when using rpart and CV != LOOCV

Displaying 20 results from an estimated 500 matches similar to: "caret: Error when using rpart and CV != LOOCV"

2012 Jul 12
1
Caret: Use timingSamps leads to error
I want to use the caret package and found out about the timingSamps option to obtain the time needed to generate predictions. But as soon as I set a value for this option, the whole model generation fails. Check this example: ------------------------- library(caret) tc=trainControl(method='LGOCV', timingSamps=10) tcWithout=trainControl(method='LGOCV')
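For reference, a minimal sketch of the setup being described, assuming the rpart method and the iris data purely for illustration; in a working caret installation the prediction timing ends up in the fitted object's times component:
library(caret)
tc <- trainControl(method = "LGOCV", timingSamps = 10)   # also time prediction on 10 samples
fit <- train(Species ~ ., data = iris, method = "rpart", trControl = tc)
fit$times$prediction   # timing of the extra prediction step requested via timingSamps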
2012 Jun 09
1
caret: compare linear models of different degree
I want to use the caret package to train linear models. I want to compare these models when using different degrees (aka degrees of interaction). This is possible for the 'earth' method (using the '.degree' parameter), but I found no way to customize the degree for the 'lm' method. This might be due to the fact that the basic 'lm' function does not support
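One way to approach this outside of caret, sketched here with mtcars purely as an illustration, is to encode the degree of interaction directly in the lm formula; within caret the degree is only a tuning parameter for methods such as 'earth' (recent caret versions name the grid columns degree and nprune; older versions used .degree/.nprune):
fit1 <- lm(mpg ~ wt + hp + disp, data = mtcars)        # degree 1: main effects only
fit2 <- lm(mpg ~ (wt + hp + disp)^2, data = mtcars)    # degree 2: add pairwise interactions
anova(fit1, fit2)                                      # compare the two nested fits
library(caret)
mars <- train(mpg ~ wt + hp + disp, data = mtcars, method = "earth",
              tuneGrid = expand.grid(degree = 1:2, nprune = 2:8),
              trControl = trainControl(method = "cv", number = 5))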
2012 May 21
3
Need help for R install
Dear R committee: I am Renzhi, a Ph.D. student in computer science at the University of Missouri. I have one question for you. I am trying to install R on a Linux server, but I do not have root permission; is there any way to install R locally? Thank you very much for helping me. Renzhi Cao Graduate Research Assistant Department of Computer Science University of
2013 Mar 06
1
CARET and NNET fail to train a model when the input is high dimensional
The following code fails to train a nnet model on a random dataset using caret: nR <- 700 nCol <- 2000 myCtrl <- trainControl(method="cv", number=3, preProcOptions=NULL, classProbs = TRUE, summaryFunction = twoClassSummary) trX <- data.frame(replicate(nR, rnorm(nCol))) trY <- runif(1)*trX[,1]*trX[,2]^2+runif(1)*trX[,3]/trX[,4] trY <-
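A smaller, hedged sketch of a setup that is more likely to train: the outcome is recoded as a two-level factor with valid level names (required by classProbs), PCA preprocessing shrinks the input space before nnet, and MaxNWts is raised so nnet does not refuse the network size. The dimensions here are reduced purely for illustration:
library(caret)
set.seed(1)
n <- 200; p <- 50
trX <- data.frame(replicate(p, rnorm(n)))
trY <- factor(ifelse(trX[, 1] * trX[, 2]^2 + trX[, 3] > 0, "yes", "no"))
ctrl <- trainControl(method = "cv", number = 3, classProbs = TRUE,
                     summaryFunction = twoClassSummary)
fit <- train(trX, trY, method = "nnet", metric = "ROC",
             preProcess = "pca",                      # reduce dimensionality before nnet
             tuneGrid = expand.grid(size = c(1, 3), decay = c(0, 0.1)),
             MaxNWts = 5000, trace = FALSE, trControl = ctrl)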
2005 Jul 25
5
passing formula arguments cv.glm
I am trying to write a wrapper for the last example in help(cv.glm) that deals with leave-one-out cross-validation (LOOCV) for a logistic model. This wrapper will be used as part of a bigger program. Here is my wrapper function: logistic.LOOCV.err <- function( formu=NULL, data=NULL ){ cost.fn <- function(cl, pred) mean( abs(cl-pred) > 0.5 ) glmfit <- glm(
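For comparison, a hedged sketch of such a wrapper as I understand boot::cv.glm (K defaults to n, i.e. LOOCV); the cost function follows the binary-response example in ?cv.glm, and the nodal call at the end is only a usage illustration:
library(boot)
logistic.LOOCV.err <- function(formu, data) {
  cost.fn <- function(cl, pred) mean(abs(cl - pred) > 0.5)   # misclassification at 0.5 cutoff
  glmfit  <- glm(formu, family = binomial, data = data)
  cv.glm(data = data, glmfit = glmfit, cost = cost.fn)$delta[1]
}
# logistic.LOOCV.err(r ~ stage + xray + acid, data = nodal)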
2013 Mar 23
1
LOOCV over SVM,KNN
Good afternoon. I would like to know if there is any function in R to do LOOCV with these classifiers: 1) SVM, 2) Neural Networks, 3) C4.5 (J48), 4) KNN. Thanks a lot!
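caret can drive LOOCV for most of these learners through trainControl(method = "LOOCV"); a hedged sketch on iris (SVM via kernlab and k-NN shown; nnet and RWeka's J48 are available as method names in the same way):
library(caret)
ctrl <- trainControl(method = "LOOCV")
svm_fit <- train(Species ~ ., data = iris, method = "svmRadial", trControl = ctrl)
knn_fit <- train(Species ~ ., data = iris, method = "knn",       trControl = ctrl)
svm_fit$results; knn_fit$results   # leave-one-out accuracy for each candidate tuning value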
2013 Nov 15
1
Inconsistent results between caret+kernlab versions
I'm using caret to assess classifier performance (and it's great!). However, I've found that my results differ between R2.* and R3.* - reported accuracies are reduced dramatically. I suspect that a code change to kernlab ksvm may be responsible (see version 5.16-24 here: http://cran.r-project.org/web/packages/caret/news.html). I get very different results between caret_5.15-61 +
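When chasing this kind of version-dependent discrepancy it helps to record the exact versions alongside each run; a minimal sketch:
packageVersion("caret"); packageVersion("kernlab")
sessionInfo()   # capture R and package versions for the report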
2010 Apr 25
1
function pointer question
Hello, I have the following function that receives a "function pointer" formal parameter named "fnc": loocv <- function(data, fnc) { n <- length(data.x) score <- 0 for (i in 1:n) { x_i <- data.x[-i] y_i <- data.y[-i] yhat <- fnc(x=x_i,y=y_i) score <- score + (y_i - yhat)^2 } score <- score/n
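A hedged, self-contained variant of the idea, assuming data is a list with numeric components x and y and that fnc returns a prediction function fitted on the training points (both assumptions are mine, not the poster's):
loocv <- function(data, fnc) {
  n <- length(data$x)
  score <- 0
  for (i in seq_len(n)) {
    fit  <- fnc(x = data$x[-i], y = data$y[-i])   # train without observation i
    yhat <- fit(data$x[i])                        # predict the held-out point
    score <- score + (data$y[i] - yhat)^2
  }
  score / n
}
# example "function pointer": a constant (mean) predictor
mean.fnc <- function(x, y) function(newx) rep(mean(y), length(newx))
loocv(list(x = rnorm(20), y = rnorm(20)), mean.fnc)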
2004 Nov 24
2
LDA with previous PCA for dimensionality reduction
Dear all, not really an R question but: If I want to check the classification accuracy of an LDA with a previous PCA for dimensionality reduction by means of the LOOCV method: Is it ok to do the PCA on the WHOLE dataset ONCE and then run the LDA with the CV option set to TRUE (runs LOOCV) -- OR -- do I need to compute for each 'test-bag' (the n-1 observations) a PCA
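The usual advice is the second option: redo the PCA inside every leave-one-out fold so the held-out case never influences the rotation. A hedged sketch (the function name and the iris example are mine):
library(MASS)
loo_pca_lda <- function(X, y, ncomp = 2) {
  n <- nrow(X)
  pred <- character(n)
  for (i in seq_len(n)) {
    pca <- prcomp(X[-i, , drop = FALSE], center = TRUE, scale. = TRUE)
    tr_scores <- pca$x[, 1:ncomp, drop = FALSE]
    te_scores <- predict(pca, X[i, , drop = FALSE])[, 1:ncomp, drop = FALSE]
    fit <- lda(tr_scores, grouping = y[-i])
    pred[i] <- as.character(predict(fit, te_scores)$class)
  }
  mean(pred == as.character(y))   # LOOCV accuracy
}
# loo_pca_lda(as.matrix(iris[, 1:4]), iris$Species)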
2012 Nov 23
1
caret train and trainControl
I am used to packages like e1071 where you have a tune step and then pass your tunings to train. It seems with caret, tuning and training are both handled by train. I am using train and trainControl to find my hyperparameters like so: MyTrainControl=trainControl( method = "cv", number=5, returnResamp = "all", classProbs = TRUE ) rbfSVM <- train(label~., data =
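For the e1071-style workflow, the candidate tunings can be handed to train() directly via tuneGrid; a hedged sketch on iris using caret's svmRadial parameter names (sigma, C):
library(caret)
MyTrainControl <- trainControl(method = "cv", number = 5,
                               returnResamp = "all", classProbs = TRUE)
rbfSVM <- train(Species ~ ., data = iris, method = "svmRadial",
                trControl = MyTrainControl,
                tuneGrid = expand.grid(sigma = c(0.01, 0.1), C = c(1, 10)))
rbfSVM$bestTune   # the hyperparameters train() selected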
2009 Jun 08
3
caret package
Hi all, I am using the caret package and having difficulty obtaining results using regression. I used glmnet to build the model and am trying to get the coefficients and the model parameters. I am trying to use extractPrediction to obtain a confusion matrix and it seems to be giving me errors. x<-read.csv("x.csv", header=TRUE); y<-read.csv("y.csv", header=TRUE);
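For a regression fit, the glmnet coefficients at the chosen penalty can be pulled from the final model; a hedged sketch using mtcars purely for illustration (extractPrediction and confusion matrices only apply to classification fits):
library(caret)
fit <- train(x = as.matrix(mtcars[, -1]), y = mtcars$mpg, method = "glmnet",
             trControl = trainControl(method = "cv", number = 5))
coef(fit$finalModel, s = fit$bestTune$lambda)   # coefficients at the selected lambda
fit$results                                     # cross-validated RMSE and Rsquared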
2017 Dec 02
0
How can you find the optimal number of values to randomly sample to optimize random forest classification without trial and error?
I have data set up like the following: control1 <- sample(1:75, 3947398, replace=TRUE) control2 <- sample(1:75, 28793, replace=TRUE) control3 <- sample(1:100, 392733, replace=TRUE) control4 <- sample(1:75, 858383, replace=TRUE) patient1 <- sample(1:100, 28048, replace=TRUE) patient2 <- sample(1:50, 80400, replace=TRUE) patient3 <- sample(1:100, 48239, replace=TRUE) control
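One low-cost alternative to trial and error is to compare the out-of-bag error across a handful of candidate sampsize values; a hedged sketch on iris (the values are arbitrary):
library(randomForest)
set.seed(1)
for (s in c(30, 60, 120)) {
  fit <- randomForest(Species ~ ., data = iris, ntree = 500, sampsize = s)
  cat("sampsize =", s, "  OOB error =", fit$err.rate[fit$ntree, "OOB"], "\n")
}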
2011 Jan 24
5
Train error:: subscript out of bounds
Hi, I am trying to construct a svmpoly model using the "caret" package (please see code below). Using the same data, without changing any settings, I am just changing the seed value. Sometimes it constructs the model successfully, and sometimes I get an "Error in indexes[[j]] : subscript out of bounds". For example, when I set the seed to 357, the following code produced results only for 8
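For what it's worth, a hedged sketch of a reproducible polynomial-SVM setup (the method name is "svmPoly" in current caret; the grid, seed, and iris data here are illustrative only):
library(caret)
set.seed(357)
ctrl <- trainControl(method = "cv", number = 10)
fit <- train(Species ~ ., data = iris, method = "svmPoly", trControl = ctrl,
             tuneGrid = expand.grid(degree = 1:2, scale = 0.1, C = c(0.5, 1)))
fit$results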
2007 Oct 30
1
NAIVE BAYES with 10-fold cross validation
Hi there! I am trying to implement the code from the e1071 package for naive Bayes, but it doesn't really work. Any ideas? I would be very glad about any help! I need naive Bayes with 10-fold cross validation: code: library(e1071) model <- naiveBayes(code ~ ., mydata) tune.control <- tune.control(random = FALSE, nrepeat = 1, repeat.aggregate = min, sampling = c("cross"),
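A hedged sketch of one way this can be wired up with e1071 (note the argument to tune() is tunecontrol, and the cross-validated error is then reported by summary()); iris stands in for mydata:
library(e1071)
tuned <- tune(naiveBayes, Species ~ ., data = iris,
              tunecontrol = tune.control(sampling = "cross", cross = 10))
summary(tuned)   # 10-fold cross-validation error estimate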
2013 Mar 02
2
caret pls model statistics
Greetings, I have been exploring the use of the caret package to conduct some plsda modeling. Previously, I have come across methods that report an R2 and a Q2 for the model. Using the 'iris' data set, I wanted to see if I could accomplish this with the caret package. I use the following code: library(caret) data(iris) #needed to convert to numeric in order to do regression #I
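A hedged sketch of a pls regression in caret on iris (predicting Sepal.Length from the other numeric columns); the cross-validated Rsquared in the results table is, as far as I can tell, the closest built-in analogue to a Q2:
library(caret)
data(iris)
X <- iris[, 2:4]; y <- iris[, 1]          # numeric predictors and response
fit <- train(X, y, method = "pls", tuneLength = 3,
             trControl = trainControl(method = "cv", number = 10))
fit$results                               # cross-validated RMSE and Rsquared per ncomp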
2008 Sep 18
1
caret package: arguments passed to the classification or regression routine
Hi, I am having problems passing arguments to method="gbm" using the train() function. I would like to train gbm using the laplace distribution or the quantile distribution. Here is the code I used and the error: gbm.test <- train(x.enet, y.matrix[,7], method="gbm", distribution=list(name="quantile",alpha=0.5), verbose=FALSE,
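The distribution specification itself is plain gbm usage, shown here in a hedged sketch on simulated data; in recent caret versions, extra train() arguments are, to my understanding, forwarded to gbm::gbm, so the same list can be supplied through train() as in the post:
library(gbm)
set.seed(1)
d <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
d$y <- d$x1 + 2 * d$x2 + rnorm(500)
fit <- gbm(y ~ x1 + x2, data = d,
           distribution = list(name = "quantile", alpha = 0.5),   # median regression
           n.trees = 500, interaction.depth = 2, shrinkage = 0.05)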
2011 Aug 28
1
Trying to extract probabilities in CARET (caret) package with a glmStepAIC model
Dear developers, I have just started working with caret and all the nice features it offers. But I just encountered a problem: I am working with a dataset that includes 4 predictor variables in Descr and a two-category outcome in Categ (codified as a factor). Everything was working fine; I got the results, confusion matrix, etc. BUT to obtain the AUC and predicted probabilities I had to add
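A hedged sketch of the classProbs/twoClassSummary setup on simulated data shaped like the post (Descr with 4 predictors, Categ a two-level factor); held-out probabilities land in fit$pred when savePredictions is on, and predict(..., type = "prob") gives probabilities for new data:
library(caret)
set.seed(1)
Descr <- data.frame(replicate(4, rnorm(200)))
Categ <- factor(ifelse(Descr[, 1] - Descr[, 2] + rnorm(200) > 0, "yes", "no"))
ctrl <- trainControl(method = "cv", number = 5, classProbs = TRUE,
                     summaryFunction = twoClassSummary, savePredictions = TRUE)
fit <- train(Descr, Categ, method = "glmStepAIC", metric = "ROC", trControl = ctrl)
head(fit$pred)                                     # held-out class probabilities per resample
head(predict(fit, newdata = Descr, type = "prob"))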
2009 Jun 30
2
NaiveBayes fails with one input variable (caret and klaR packages)
Hello, We have a system which creates thousands of regression/classification models and in cases where we have only one input variable NaiveBayes throws an error. Maybe I am mistaken and I shouldn't expect to have a model with only one input variable. We use R version 2.6.0 (2007-10-03). We use caret (v4.1.19), but have tested similar code with klaR (v.0.5.8), because caret relies on
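For reference, a hedged sketch showing that e1071's naiveBayes accepts a single predictor as long as it stays a one-column data frame (drop = FALSE); the behaviour of the klaR/caret combination in those older versions may of course differ:
library(e1071)
x <- iris[, "Sepal.Length", drop = FALSE]   # keep the single predictor as a data frame
y <- iris$Species
fit <- naiveBayes(x = x, y = y)
table(predicted = predict(fit, x), observed = y)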
2008 Feb 27
7
Cross Validation
Hello, How can I do a cross validation in R? Thank You!
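A minimal base-R sketch of k-fold cross-validation for a linear model (mtcars and the formula are arbitrary); packages such as boot (cv.glm) and caret wrap the same idea:
set.seed(1)
k <- 10
folds <- sample(rep(1:k, length.out = nrow(mtcars)))   # random fold assignment
errs <- sapply(1:k, function(i) {
  fit <- lm(mpg ~ wt + hp, data = mtcars[folds != i, ])
  mean((mtcars$mpg[folds == i] - predict(fit, mtcars[folds == i, ]))^2)
})
mean(errs)   # cross-validated mean squared error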
2013 Nov 06
1
R help-classification accuracy of DFA and RF using caret
Hi, I am a graduate student applying published R scripts to compare the classification accuracy of 2 predictive models, one built using discriminant function analysis and one using random forests (webpage link for these scripts is provided below). The purpose of these models is to predict the biotic integrity of streams. Specifically, I am trying to compare the classification accuracy (i.e.,
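caret's resamples() is one way to compare two classifiers on matched resamples; a hedged sketch on iris with lda standing in for the discriminant-function model and rf for the random forest:
library(caret)
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)
set.seed(10)
dfa_fit <- train(Species ~ ., data = iris, method = "lda", trControl = ctrl)
set.seed(10)                      # same seed so both models see the same folds
rf_fit  <- train(Species ~ ., data = iris, method = "rf",  trControl = ctrl)
summary(resamples(list(DFA = dfa_fit, RF = rf_fit)))   # accuracy/kappa across resamples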