search for: tunegrid

Displaying 19 results from an estimated 23 matches for "tunegrid".

2023 May 09
1
RandomForest tuning the parameters
...customRF$levels <- function(x) x$classes # Set grid search parameters control <- trainControl(method="repeatedcv", number=10, repeats=3, search='grid') # Outline the grid of parameters tunegrid <- expand.grid(.maxnodes=c(10,20,30,50), .ntree=c(100, 200, 300)) set.seed(seed) # Train the model rf_gridsearch <- train(x=X_train_, y=y_train_, method=customRF, metric=metric, tuneGrid=tunegrid, trControl=control) plot(rf...
2010 Nov 22
1
Sporadic errors when training models using CARET
...5772 16 8.3776 5.0882 17 8.6567 7.2640 18 20.9386 20.1107 19 12.2903 4.7864 20 10.5920 7.5204 21 10.2679 9.5493 22 6.2023 11.2333 23 -5.0720 -4.8701 24 6.6417 11.5139 > svmLinearGrid <- expand.grid(.C=0.1) > svmLinearFit <- train(train.x, train.y, method="svmLinear", tuneGrid=svmLinearGrid) Fitting: C=0.1 Error in indexes[[j]] : subscript out of bounds > svmLinearFit <- train(train.x, train.y, method="svmLinear", tuneGrid=svmLinearGrid) Fitting: C=0.1 maximum number of iterations reached 0.0005031579 0.0005026807maximum number of iterations reached 0.00...
2012 Nov 23
1
caret train and trainControl
...trainControl to find my hyperparameters like so: MyTrainControl=trainControl( method = "cv", number=5, returnResamp = "all", classProbs = TRUE ) rbfSVM <- train(label~., data = trainset, method="svmRadial", tuneGrid = expand.grid(.sigma=c(0.0118),.C=c(8,16,32,64,128)), trControl=MyTrainControl, fit = FALSE ) Once this returns my ideal parameters, in this case a cost of 64, do I simply re-run the whole process, passing a grid containing only the specific parameters? lik...
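A note on the question above: once the resampled search has picked sigma = 0.0118 and C = 64, the refit does not need another search. A minimal sketch (object names as in the post; trainControl(method = "none"), available in later caret versions, fits a single model and requires a one-row grid):

    # Refit only the chosen combination -- no resampling loop
    finalSVM <- train(label ~ ., data = trainset, method = "svmRadial",
                      tuneGrid = expand.grid(.sigma = 0.0118, .C = 64),
                      trControl = trainControl(method = "none"))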
2009 Jan 15
2
problems with extractPrediction in package caret
...anual and the vignettes but unfortunately I'm getting an error message I can't figure out. Here is my code: rfControl <- trainControl(method = "oob", returnResamp = "all", returnData=TRUE, verboseIter = TRUE) rftrain <- train(x=train_x, y=trainclass, method="rf", tuneGrid=tuneGrid, tr.control=rfControl) pred <- predict(rftrain) pred # this works fine expred <- extractPrediction(rftrain) Error in models[[1]]$trainingData : $ operator is invalid for atomic vectors My predictors are 28 numeric attributes and one factor. I'm working with the latest version...
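A likely culprit in the post above: caret's extractPrediction() expects a list of train objects rather than a bare model, so models[[1]] ends up indexing into the train object itself. A hedged sketch of the fix (note also that the train() argument is trControl, not tr.control):

    rftrain <- train(x = train_x, y = trainclass, method = "rf",
                     tuneGrid = tuneGrid, trControl = rfControl)
    expred <- extractPrediction(list(rftrain))  # wrap the model in a list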
2008 Sep 18
1
caret package: arguments passed to the classification or regression routine
...laplace distribution or the quantile distribution. Here is the code I used and the error: gbm.test <- train(x.enet, y.matrix[,7], method="gbm", distribution=list(name="quantile",alpha=0.5), verbose=FALSE, trControl=trainControl(method="cv",number=5), tuneGrid=gbmGrid ) Model 1: interaction.depth=1, shrinkage=0.1, n.trees=300 collapsing over other values of n.trees Error in gbm.fit(trainX, modY, interaction.depth = tuneValue$.interaction.depth, : formal argument "distribution" matched by multiple actual arguments The same error occurred wit...
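The clash arises because train(method = "gbm") supplies distribution to gbm.fit() itself. One workaround (a sketch, not from the thread; the data frame name is illustrative) is to fit the quantile model with gbm() directly:

    library(gbm)
    gbm.q <- gbm(y ~ ., data = train.df,  # hypothetical data frame
                 distribution = list(name = "quantile", alpha = 0.5),
                 n.trees = 300, interaction.depth = 1, shrinkage = 0.1,
                 cv.folds = 5)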
2023 May 08
1
RandomForest tuning the parameters
...newdata, type = "prob") customRF$sort <- function(x) x[order(x[,1]),] customRF$levels <- function(x) x$classes ? # Set grid search parameters control <- trainControl(method="repeatedcv", number=10, repeats=3, search='grid') ? # Outline the grid of parameters tunegrid <- expand.grid(.maxnodes=c(10,20,30,50), .ntree=c(100, 200, 300)) set.seed(seed) ? # Train the model rf_gridsearch <- train(x=X_train_, y=y_train_, method=customRF, metric=metric, tuneGrid=tunegrid, trControl=control) ? plot(rf_gridsearch) rf_gridsearch$bestTune ############################...
2012 Oct 17
0
Response surface with rsm and nnet
...somewhat high and that the lack of fit is significant, I decided to try an artificial neural network, using the "caret" and "nnet" packages. I have tried the following options: NN.1 <- train(R ~ V1 + V2, data = DATOS.Codificados, method = "nnet", maxit = 10000, tuneGrid = SIZE.DECAY, trace = F, linout = TRUE) where R² = 0.9936 and AAD = 4.87 %, but the shape of the response surface is very irregular, nothing like the model obtained with "rsm". NN.2 <- train(R ~ V1 + V2 + V1:V2, data = DATOS.Codificados, method = "nnet", maxit = 1...
2017 Nov 24
0
Using bartMachine with the caret package
Dave Langer in this video https://www.youtube.com/watch?v=z8PRU46I3NY uses the titanic data as an example of using caret to create xgbTree models. The caret train() function has a tuneGrid parameter which takes a tuning grid (a data frame, typically built with expand.grid()) set up like so: tune.grid <- expand.grid(eta = c(0.05, 0.075, 0.1), nrounds = c(50, 75, 100), max_depth = 6:8, min_child_weight = c(2, 2.25, 2.5), colsample_bytree =...
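For the bartMachine case being asked about, caret does ship a "bartMachine" method; a hedged sketch, assuming the tuning parameters reported by modelLookup("bartMachine") (num_trees, k, alpha, beta, nu) and illustrative data objects:

    library(caret)
    bart.grid <- expand.grid(num_trees = c(50, 100), k = 2,
                             alpha = 0.95, beta = 2, nu = 3)
    bart.fit <- train(x = train.x, y = train.y, method = "bartMachine",
                      tuneGrid = bart.grid,
                      trControl = trainControl(method = "cv", number = 5))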
2011 May 12
2
Can ROC be used as a metric for optimal model selection for randomForest?
Dear all, I am using the "caret" Package for predictors selection with a randomForest model. The following is the train function: rfFit<- train(x=trainRatios, y=trainClass, method="rf", importance = TRUE, do.trace = 100, keep.inbag = TRUE, tuneGrid = grid, trControl=bootControl, scale = TRUE, metric = "ROC") I wanted to use ROC as the metric for variable selection. I know that this works with the logit model by making sure that classProbs = TRUE and summaryFunction = twoClassSummary in the trainControl function. However if I do th...
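The same recipe should carry over to "rf" (a sketch using the object names from the post): put classProbs and twoClassSummary in trainControl, and make sure the class labels are valid R names so probabilities can be returned:

    bootControl <- trainControl(method = "boot", classProbs = TRUE,
                                summaryFunction = twoClassSummary)
    rfFit <- train(x = trainRatios, y = trainClass, method = "rf",
                   importance = TRUE, tuneGrid = grid,
                   metric = "ROC", trControl = bootControl)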
2010 Mar 23
1
caret package: how can I deal with an RFE+SVM error message?
...with it? > rfGrid<-expand.grid(.mtry=c(1:2)) > rfectrl<-rfeControl(functions=caretFuncs,method="cv",verbose=F,returnResamp="final",number=10) > subsets<-c(3,4) > set.seed(2) > rf.RFE<-rfe(trx,try,sizes=subsets,rfeControl=rfectrl,method="rf",tuneGrid=rfGrid) Loading required package: class Attaching package: 'class' The following object(s) are masked from package:reshape : condense Fitting: mtry=1 Fitting: mtry=2 Error in varImp.randomForest(object$finalModel, ...) : subscript out of bounds In addition: Warni...
2013 Mar 06
1
CARET and NNET fail to train a model when the input is high dimensional
...licate(nR, rnorm(nCol))) trY <- runif(1)*trX[,1]*trX[,2]^2+runif(1)*trX[,3]/trX[,4] trY <- as.factor(ifelse(sign(trY)>0,'X1','X0')) my.grid <- createGrid(method.name, grid.len, data=trX) my.model <- train(trX,trY,method=method.name,trace=FALSE,trControl=myCtrl,tuneGrid=my.grid, metric="ROC") print("Done") The error I get is: task 2 failed - "arguments imply differing number of rows: 1334, 666" However, everything works if I reduce nR to, say 20. Any thoughts on what may be causing this? Is there a place where I could report this...
2010 Oct 22
2
Random Forest AUC
Guys, I used Random Forest with a couple of data sets I had to predict a binary response. In all the cases, the AUC of the training set is coming out to be 1. Is this always the case with random forests? Can someone please clarify this? I have given a simple example, first using logistic regression and then using random forests, to explain the problem. AUC of the random forest is coming out to be
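A short illustration of why this happens (a sketch with hypothetical two-class data): resubstitution predictions from a random forest are nearly perfect, while its out-of-bag predictions give an honest AUC:

    library(randomForest)
    rf <- randomForest(Class ~ ., data = dat)
    p.resub <- predict(rf, dat, type = "prob")[, 2]  # optimistic: AUC near 1
    p.oob   <- predict(rf, type = "prob")[, 2]       # OOB: realistic estimate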
2012 May 30
1
caret() train based on cross validation - split dataset to keep sites together?
...8994 1980-04-06 8.3 12.6 90342 1980-07-13 18.9 22.3 90342 1980-07-14 19.3 28.4 EXAMPLE SCRIPT FOR MODEL FITTING fitControl <- trainControl(method = "repeatedcv", number=10, repeats=3) tuning <- read.table("temptunegrid.txt",head=T,sep=",") tuning # # Model with 100 iterations registerDoMC(4) tempmod100its <- train(watmntemp~tempa + tempb + tempc + tempd + tempe + netarea + netbuffor + strmslope + netsoilprm + netslope + gwndx + mnaspect + urb + ag + forest + buffor + tempa7day + tempb7day +...
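One way to keep all records from a site in the same fold (a sketch: groupKFold() exists in recent caret versions, and older ones accept hand-built folds through trainControl's index argument; `site` is the assumed station-ID column):

    library(caret)
    folds <- groupKFold(tempdata$site, k = 10)
    fitControl <- trainControl(method = "cv", index = folds)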
2010 Sep 29
0
caret package version 4.63
...aggregating and visualizing resampling results (resamples) has been enhanced with more visualization methods. The class can also work with caret's feature selection routines (rfe() and sbf()) - the print method for train() has been improved - functions can now be passed to the tuneGrid argument in train() - a function that catalogs the models available within train(), called modelLookup(), is now available to users - when parallel processing, more computations are being executed in the worker processes than previously (e.g. performance calcs...
2011 Feb 16
1
caret::train() and ctree()
Just as earth can be tuned simultaneously for degree and nprune, is there a way to tune ctree simultaneously for mincriterion and maxdepth? Also, I notice there are separate methods ctree and ctree2, and if one tries to tune both options with a single method, the summary averages over the option that method doesn't support. The full log is attached; notice these lines below for
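A sketch of what is being asked for: in recent caret versions, method = "ctree2" exposes both maxdepth and mincriterion, so the two can be tuned in one grid:

    library(caret)
    ctreeGrid <- expand.grid(maxdepth = 2:4, mincriterion = c(0.90, 0.95, 0.99))
    ctreeFit <- train(Species ~ ., data = iris, method = "ctree2",
                      tuneGrid = ctreeGrid,
                      trControl = trainControl(method = "cv", number = 5))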
2011 Dec 22
0
randomforest and AUC using 10 fold CV - Plotting results
...col=c("red"), lty=1) #Cross validation using 10 fold CV: ctrl <- trainControl(method = "cv", classProbs = TRUE, summaryFunction = twoClassSummary) set.seed(1) rfEstimate <- train(factor(Species) ~ .,data = iris, method = "rf", metric = "ROC", tuneGrid = data.frame(.mtry = 2), trControl = ctrl) rfEstimate How can i plot the results from the cross validation on the previous ROC plot ? thanks, david
2012 Feb 10
1
Choosing glmnet lambda values via caret
Usually when using raw glmnet I let the implementation choose the lambdas. However, when training via caret::train the lambda values are predetermined. Is there any way to have caret::train defer the lambda choices to glmnet and thus choose the optimal lambda dynamically? -- Yang Zhang http://yz.mit.edu/
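Not fully dynamic, but a common compromise (a sketch; `x` and `y` are illustrative, and alpha = 1 assumes the lasso): let glmnet compute its default lambda path once, then hand that sequence to caret as the grid:

    library(glmnet)
    library(caret)
    lam <- glmnet(as.matrix(x), y)$lambda         # glmnet-chosen sequence
    glmnetGrid <- expand.grid(alpha = 1, lambda = lam)
    glmnetFit <- train(x, y, method = "glmnet", tuneGrid = glmnetGrid,
                       trControl = trainControl(method = "cv", number = 10))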
2012 Apr 06
0
resampling syntax for caret package
...utcome) predictions <- rep(-1,length(cv_index)) pamGrid <- seq(0.1,5,by=0.2) pamGrid <- data.frame(.threshold=pamGrid) # manual leave-one-out for (holdout in cv_index) { pamFit1 <- train(colon.x[-holdout,], outcome[-holdout], method = "pam", tuneGrid= pamGrid, trControl = trainControl(method = "cv")) predictions[holdout] = predict(pamFit1,newdata = colon.x[holdout,,drop=FALSE]) } # end example > sessionInfo() R version 2.14.2 (2012-02-29) Platform: x86_64-unknown-linux-gnu (64-bit) locale: [1] LC_CTYPE...
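A possible simplification of the manual loop above (a sketch; later caret versions dropped the leading dot, so there the column would be named threshold): caret supports leave-one-out directly via trainControl(method = "LOOCV"):

    pamFit <- train(colon.x, outcome, method = "pam",
                    tuneGrid = data.frame(.threshold = seq(0.1, 5, by = 0.2)),
                    trControl = trainControl(method = "LOOCV"))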
2009 Jul 12
1
Splitting dataset for Tuning Parameter with Cross Validation
Hi, My question might be a little general. I have a number of values to select for the complexity parameters in some classifier, e.g. the C and gamma in SVM with RBF kernel. The selection is based on which values give the smallest cross validation error. I wonder if the randomized splitting of the available dataset into folds is done only once for all those choices for the parameter values, or
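For reference, caret's behaviour: train() evaluates every candidate parameter value on the same resampling splits, so the comparison across values is paired. The splits can also be pinned down explicitly (a sketch; `y` is the outcome vector):

    library(caret)
    set.seed(1)
    folds <- createFolds(y, k = 10, returnTrain = TRUE)  # fixed training indices
    ctrl <- trainControl(method = "cv", index = folds)   # reused for every value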