Displaying 20 results from an estimated 20000 matches similar to: "Regression evaluation"
2023 May 08
1
RandomForest tuning the parameters
Dear R-experts,
Below is a toy example that produces some error messages, especially at the end of the code (tuning the parameters). Your help correcting my R code would be highly appreciated.
#######################################
#libraries
library(lattice)
library(ggplot2)
library(caret)
library(randomForest)
#Data
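[Editor's note] The snippet above is cut off before the tuning step. A minimal sketch of tuning mtry for a randomForest regression with caret::train follows; the data frame and its columns are made up for illustration, since the original toy data are not shown.

## Minimal sketch, not the poster's code: tune mtry for a randomForest
## regression with caret::train on an invented data set.
library(caret)
library(randomForest)

set.seed(1)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
df$y <- 2 * df$x1 - df$x2 + rnorm(100)

ctrl <- trainControl(method = "cv", number = 5)      # 5-fold cross-validation
grid <- expand.grid(mtry = 1:2)                      # caret's "rf" method tunes mtry only
fit  <- train(y ~ x1 + x2, data = df, method = "rf",
              trControl = ctrl, tuneGrid = grid, ntree = 200)
fit$bestTune                                         # mtry value chosen by resampling
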
2023 May 09
1
RandomForest tuning the parameters
Hi Sacha,
On second thought, perhaps this is more the direction that you want ...
X2 = cbind(X_train,y_train)
colnames(X2)[3] = "y"
regr2<-randomForest(y~x1+x2, data=X2,maxnodes=10, ntree=10)
regr
regr2
#Make prediction
predictions= predict(regr, X_test)
predictions2= predict(regr2, X_test)
HTH,
Eric
On Tue, May 9, 2023 at 6:40 AM Eric Berger <ericjberger at gmail.com>
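[Editor's note] A short sketch of how the two sets of predictions above could be scored, assuming a y_test vector exists alongside X_test (it is not shown in the thread).

## Sketch only: compare the two fits on held-out data with caret::postResample.
library(caret)
postResample(pred = predictions,  obs = y_test)   # RMSE, Rsquared, MAE for regr
postResample(pred = predictions2, obs = y_test)   # same metrics for regr2
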
2009 Jan 25
0
caret version 4.06 released
Version 4.06 of the caret package was sent to CRAN.
caret can be used to tune the parameters of predictive models using
resampling, estimate variable importance and visualize the results.
There are also various modeling and "helper" functions that can be
useful for training models. caret has wrappers to over 50 different
models for classification and regression. See the package
2012 Feb 10
1
Custom caret metric based on prob-predictions/rankings
I'm dealing with classification problems, and I'm trying to specify a
custom scoring metric (recall at p, ROC, etc.) that depends on not just
the class output but the probability estimates, so that caret::train
can choose the optimal tuning parameters based on this metric.
However, when I supply a trainControl summaryFunction, the data given
to it contains only class predictions, so the
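[Editor's note] A sketch of one way to get probability columns into a custom metric: with classProbs = TRUE in trainControl, the data frame passed to the summaryFunction contains one probability column per class level in addition to obs and pred. The AUC metric and the pROC dependency below are illustrative choices, not from the thread.

## Sketch, assuming a two-class problem and the pROC package for AUC.
library(caret)
library(pROC)

aucSummary <- function(data, lev = NULL, model = NULL) {
  # data$obs = observed classes, data[[lev[1]]] = probability of the first level
  rocObj <- pROC::roc(response = data$obs, predictor = data[[lev[1]]],
                      levels = rev(lev), quiet = TRUE)
  c(ROC = as.numeric(pROC::auc(rocObj)))
}

ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE,              # required for probability columns
                     summaryFunction = aucSummary)
# fit <- train(Class ~ ., data = myData, method = "glmnet",
#              metric = "ROC", trControl = ctrl)     # 'myData' is a hypothetical data set
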
2016 Feb 08
3
Rolling window size (time series)
Hello!!
I am trying to evaluate my time series model (I use auto.arima).
To do this I have implemented the "rolling window" method, which consists of
progressively adding data to the training set in order to test the
model. For example:
- Train: 1 year, test: day 1 (24 observations, one per hour) --> I evaluate
that day (RMSE, for example)
- Train: 1 year + 1 day, test: day 2 -->
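[Editor's note] A rough sketch of the expanding rolling-window evaluation described above, assuming an hourly numeric series y with at least a year of history; caret::createTimeSlices builds the train/test indices and forecast::auto.arima fits each window.

## Expanding-window evaluation, one day (24 hours) tested at a time.
library(caret)      # createTimeSlices()
library(forecast)   # auto.arima(), forecast()

h <- 24
slices <- createTimeSlices(seq_along(y),
                           initialWindow = 365 * 24,   # first training window: 1 year
                           horizon = h,
                           fixedWindow = FALSE,        # FALSE = expanding window
                           skip = h - 1)               # step forward one day at a time

rmse <- sapply(seq_along(slices$train), function(i) {
  fit <- auto.arima(y[slices$train[[i]]])
  fc  <- forecast(fit, h = h)
  sqrt(mean((y[slices$test[[i]]] - fc$mean)^2))        # RMSE for that day
})
mean(rmse)
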
2010 Sep 29
0
caret package version 4.63
Version 4.63 of the caret package is now on CRAN.
caret can be used to tune the parameters of predictive models using
resampling, estimate variable importance and visualize the results.
There are also various modeling and "helper" functions that can be
useful for training models.
caret has wrappers to over 99 different models for classification
and regression. See the package vignettes
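[Editor's note] A brief illustration (not from the announcement) of the variable-importance and resampling features mentioned above, using a built-in data set.

library(caret)
data(mtcars)

set.seed(1)
fit <- train(mpg ~ ., data = mtcars, method = "rf",
             trControl = trainControl(method = "cv", number = 5))
varImp(fit)          # variable importance estimates
plot(varImp(fit))    # lattice plot of the importances
plot(fit)            # resampling profile across the tuning parameter
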
2007 Dec 11
1
postResample R² and lm() R²
Hello,
I have a conceptual doubt regarding the R-squared from both lm() and
postResample() (caret library).
I've got a multiple linear regression model (let's say mlr) with an R² value
of 67.52%.
Then I use this model to make predictions with the predict() function using the
same data as input, that is, I use the generated model to predict the values
associated with the data that I used as input.
Next, if
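[Editor's note] A sketch of the two computations side by side, on made-up data. postResample()'s Rsquared is the squared correlation between observed and predicted values; for an ordinary lm with an intercept scored on its own training data this typically coincides with summary(lm)$r.squared, while the two can diverge on new data or for models not fit by least squares.

library(caret)

set.seed(1)
d   <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + d$x1 + 0.5 * d$x2 + rnorm(100)

mlr  <- lm(y ~ x1 + x2, data = d)
pred <- predict(mlr, newdata = d)       # same data as input, as in the post

summary(mlr)$r.squared                  # lm's R-squared
postResample(pred = pred, obs = d$y)    # RMSE, Rsquared (= cor(obs, pred)^2), MAE
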
2012 Feb 10
1
Choosing glmnet lambda values via caret
Usually when using raw glmnet I let the implementation choose the
lambdas. However when training via caret::train the lambda values are
predetermined. Is there any way to have caret::train defer the lambda
choices to glmnet and thus choose the optimal lambda
dynamically?
--
Yang Zhang
http://yz.mit.edu/
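[Editor's note] One possible workaround (a sketch, not an official caret feature): let a preliminary glmnet fit generate its own lambda path, then hand that sequence to caret::train through tuneGrid. X and y are assumed to be a numeric predictor matrix and response vector.

library(caret)
library(glmnet)

init <- glmnet(X, y, alpha = 1)                       # glmnet chooses its own lambda path
grid <- expand.grid(alpha = 1, lambda = init$lambda)  # reuse that path in caret

ctrl <- trainControl(method = "cv", number = 10)
fit  <- train(X, y, method = "glmnet", metric = "RMSE",
              tuneGrid = grid, trControl = ctrl)
fit$bestTune$lambda                                   # lambda selected by resampling
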
2013 Nov 06
1
R help-classification accuracy of DFA and RF using caret
Hi,
I am a graduate student applying published R scripts to compare the classification accuracy of 2 predictive models, one built using discriminant function analysis and one using random forests (webpage link for these scripts is provided below). The purpose of these models is to predict the biotic integrity of streams. Specifically, I am trying to compare the classification accuracy (i.e.,
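[Editor's note] A sketch of one way to compare the two model types on identical resamples with caret, so that accuracy differences are not driven by different fold assignments. The data frame 'streams' and outcome 'integrity' are hypothetical stand-ins for the poster's data.

library(caret)

set.seed(1)
folds <- createFolds(streams$integrity, k = 10, returnTrain = TRUE)
ctrl  <- trainControl(method = "cv", index = folds)    # same folds for both models

dfaFit <- train(integrity ~ ., data = streams, method = "lda", trControl = ctrl)
rfFit  <- train(integrity ~ ., data = streams, method = "rf",  trControl = ctrl)

res <- resamples(list(DFA = dfaFit, RF = rfFit))
summary(res)          # accuracy and kappa, fold by fold, for both models
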
2012 Apr 13
1
caret package: custom summary function in trainControl doesn't work with oob?
Hi all,
I've been using a custom summary function to optimise regression model
methods using the caret package. This has worked smoothly. I've been using
the default bootstrapping resampling method. For bagging models
(specifically randomForest in this case) caret can, in theory, use the
out-of-bag (oob) error estimate from the model instead of resampling, which
(in theory) is largely
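[Editor's note] A minimal sketch of OOB-based tuning for a random forest. With method = "oob", caret reads the model's internal out-of-bag error instead of resampling, which is consistent with the behaviour described above: a custom summaryFunction is bypassed and only the built-in OOB metrics are reported. The data below are invented.

library(caret)

set.seed(1)
d   <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- d$x1^2 + d$x2 + rnorm(200)

ctrl <- trainControl(method = "oob")                  # no resampling, OOB estimate only
fit  <- train(y ~ ., data = d, method = "rf",
              tuneGrid = expand.grid(mtry = 1:2),
              trControl = ctrl, ntree = 300)
fit$results                                           # OOB RMSE / Rsquared per mtry value
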
2017 Nov 24
0
Using bartMachine with the caret package
Dave Langer in this video https://www.youtube.com/watch?v=z8PRU46I3NY
uses the titanic data as an example of using caret to create xgbTree
models. The caret train() function has a tuneGrid parameter which
takes a list set up like so:
tune.grid <- expand.grid(eta = c(0.05, 0.075, 0.1),
                         nrounds = c(50, 75, 100),
                         max_depth = 6:8,
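[Editor's note] The snippet above is cut off. For reference, a grid covering all the tuning parameters that recent caret versions expect for method = "xgbTree"; the specific values are placeholders, not recommendations, and the commented train() call uses assumed object names from the video.

tune.grid <- expand.grid(nrounds = c(50, 75, 100),
                         max_depth = 6:8,
                         eta = c(0.05, 0.075, 0.1),
                         gamma = 0,
                         colsample_bytree = 1,
                         min_child_weight = 1,
                         subsample = 1)
# fit <- train(Survived ~ ., data = titanic.train, method = "xgbTree",
#              tuneGrid = tune.grid,
#              trControl = trainControl(method = "cv", number = 5))
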
2012 May 30
1
caret() train based on cross validation - split dataset to keep sites together?
Hello all,
I have searched and have not yet identified a solution so now I am sending
this message. In short, I need to split my data into training, validation,
and testing subsets that keep all observations from the same sites together
-- preferably as part of a cross validation procedure. Now for the longer
version. And I must confess that although my R skills are improving, they
are not so
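[Editor's note] A sketch using groupKFold(), available in newer caret versions: folds are built so that all rows sharing a site ID stay in the same fold, and the resulting index list is passed to trainControl. 'dat$site' and 'response' are assumed names standing in for the poster's site identifier and outcome.

library(caret)

set.seed(1)
site_folds <- groupKFold(dat$site, k = 5)             # list of training-row indices per fold
ctrl <- trainControl(method = "cv", index = site_folds)

# fit <- train(response ~ ., data = dat, method = "rf", trControl = ctrl)
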
2010 Apr 06
1
Caret package and lasso
Dear all,
I have used the following code, but every time I encounter the problem of not having
coefficients for all the variables in the predictor set.
# code
rm(list=ls())
library(caret)
# generating response and design matrix
X<-matrix(rnorm(50*100),nrow=50)
y<-rnorm(50*1)
# Applying caret package
con<-trainControl(method="cv",number=10)
data<-NULL
data<- train(X,y,
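[Editor's note] With 100 predictors and only 50 observations, a lasso fit can retain at most as many non-zero coefficients as there are observations, so zeroed-out variables are expected here. Below is a hedged completion of the truncated train() call using caret's "lasso" method (elasticnet); the original post does not show which method was actually intended.

data <- train(X, y, method = "lasso", metric = "RMSE",
              tuneLength = 10, trControl = con)
data$bestTune                                   # selected fraction of the full fit
predict(data$finalModel, type = "coefficients",
        s = data$bestTune$fraction, mode = "fraction")   # coefficients at that fraction
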
2017 Jul 06
0
svm.formula versus svm.default - different results
Dear community,
I'm performing svm-regression with svm at library e1071.
As I wrote in another post, "svm e1071 call - different results", I get different results if I use svm.default rather than svm.formula, with the svm.formula results being better.
I've debugged both options.
While debugging the svm.formula, I've seen that when I reach the call:
ret <-
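[Editor's note] The thread is truncated before the cause of the discrepancy is shown. A sketch of the two e1071 calling styles on the same invented data; when the predictor matrix handed to svm.default matches what the formula interface builds (same columns, same scaling), the two fits should generally agree.

library(e1071)

set.seed(1)
d   <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- d$x1 + sin(d$x2) + rnorm(100, sd = 0.1)

fit_formula <- svm(y ~ x1 + x2, data = d, type = "eps-regression")
fit_default <- svm(x = as.matrix(d[, c("x1", "x2")]), y = d$y,
                   type = "eps-regression")

c(formula = sqrt(mean((d$y - predict(fit_formula, d))^2)),
  default = sqrt(mean((d$y - predict(fit_default, as.matrix(d[, c("x1", "x2")])))^2)))
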
2010 Jan 25
0
glmnet in caret packge
Dear all,
I want to train my model with LASSO using the caret package
(glmnet). So, in glmnet, there are two parameters, alpha and lambda. How can
I fix my alpha=1 to get a lasso model?
con<-trainControl(method="cv",number=10)
model <- train(X, y, "glmnet", metric="RMSE",tuneLength = 10, trControl =
con)
Thanks
Alex Roy
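[Editor's note] One way to hold alpha at 1 (pure lasso) so that only lambda is tuned: pass an explicit tuneGrid instead of tuneLength. The lambda values below are illustrative placeholders, not a recommended range.

con  <- trainControl(method = "cv", number = 10)
grid <- expand.grid(alpha = 1,
                    lambda = 10^seq(-4, 0, length.out = 10))
model <- train(X, y, method = "glmnet", metric = "RMSE",
               tuneGrid = grid, trControl = con)
model$bestTune
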
2011 Nov 16
0
problem to tunning RandomForest, an unexpected result
Dear Researches,
I am using RF (in regression mode) to analyze several metrics extracted from
images. I am tuning RF with a loop over different ranges of mtry, ntree
and nodesize, selecting the lowest OOB MSE:
mtry from 1 to 5
nodesize from 1 to 10
ntree from 1 to 500
using this paper as a reference:
Palmer, D. S., O'Boyle, N. M., Glen, R. C., & Mitchell, J. B. O. (2007).
Random Forest Models
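[Editor's note] A rough sketch of the grid search described above, keeping the combination with the lowest out-of-bag MSE. 'X' (predictor data frame) and 'y' (numeric response) are assumed names, and the full 1:500 ntree sweep is trimmed to a few values here.

library(randomForest)

grid <- expand.grid(mtry = 1:5, nodesize = 1:10, ntree = c(100, 250, 500))

grid$oob_mse <- apply(grid, 1, function(p) {
  rf <- randomForest(x = X, y = y,
                     mtry = p["mtry"], nodesize = p["nodesize"], ntree = p["ntree"])
  tail(rf$mse, 1)                    # OOB MSE after the last tree
})

grid[which.min(grid$oob_mse), ]      # best mtry / nodesize / ntree combination
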
2012 Apr 06
0
resampling syntax for caret package
Max and List,
Could you advise me if I am using the proper caret syntax to carry out
leave-one-out cross validation. In the example below, I use example
data from the rda package. I use caret to tune over a grid and select
an optimal value. I think I am then using the optimal selection for
prediction. So there are two rounds of resampling with the first one
taken care of by caret's train
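[Editor's note] A sketch of leave-one-out cross-validation with caret's built-in method. The rda example data from the original post are not reproduced; 'X' and 'y' stand in for a predictor matrix and a factor class vector, and method = "rda" is used only for illustration.

library(caret)

ctrl <- trainControl(method = "LOOCV")                # one held-out row per resample
fit  <- train(X, y, method = "rda",
              tuneLength = 5, trControl = ctrl)
fit$results                                           # LOOCV performance per tuning value
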
2007 Oct 05
0
new packages: caret, caretLSF and caretNWS
Three more packages will be showing up on your mirror soon.
The caret package (short for "Classification And REgression Training")
aims to simplify the model building process. The package has functions
for
- data splitting: balanced train/test splits, cross-validation and
bootstrapping sampling functions. There is also a function for maximum
dissimilarity sampling.
-
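[Editor's note] A brief illustration of the data-splitting helpers listed above; the iris data set is used only as a convenient stand-in.

library(caret)
data(iris)

set.seed(1)
inTrain  <- createDataPartition(iris$Species, p = 0.8, list = FALSE)  # balanced split
training <- iris[inTrain, ]
testing  <- iris[-inTrain, ]

folds   <- createFolds(iris$Species, k = 10)         # stratified CV folds
boots   <- createResample(iris$Species, times = 25)  # bootstrap samples
newRows <- maxDissim(training[, 1:4], testing[, 1:4], n = 5)  # maximum dissimilarity sampling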