Displaying 11 results from an estimated 11 matches for "minobsinnode".
2014 Jul 02
0
How do I call a C++ function (for k-means) within R?
...var.type = as.integer(var.type),
   var.monotone = as.integer(var.monotone),
   distribution = as.character(distribution.call.name),
   n.trees = as.integer(n.trees),
   interaction.depth = as.integer(interaction.depth),
   n.minobsinnode = as.integer(n.minobsinnode),
   n.classes = as.integer(nClass),
   shrinkage = as.double(shrinkage),
   bag.fraction = as.double(bag.fraction),
   nTrain = as.integer(nTrain),
   fit.old = as.double(NA), ...
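For the subject-line question of calling C++ from R: the coercion pattern in
this excerpt (as.integer, as.double, as.character) is what the older .C/.Call
interfaces require, since each argument must arrive as the exact C type the
routine expects. A minimal sketch, assuming the Rcpp package and a working
compiler; the sumsq helper is a hypothetical stand-in for a k-means routine:

library(Rcpp)
# Compile a toy C++ function inline; Rcpp handles the R-to-C++ type
# conversion that .C/.Call would need as.integer()/as.double() for.
cppFunction('
double sumsq(NumericVector x) {
  double s = 0.0;
  for (int i = 0; i < x.size(); ++i) s += x[i] * x[i];
  return s;
}
')
sumsq(c(1, 2, 3))  # returns 14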
2009 Jun 17
1
gbm for cost-sensitive binary classification?
...miss something here. Does anyone have similar experience and can advise me how to implement cost-sensitive classification with gbm?
model.gbm <- gbm.fit(tr[, 1:DIM], tr.y, offset = NULL, misc = NULL,
                     distribution = "bernoulli", w = tr.w, var.monotone = NULL,
                     n.trees = NTREE, interaction.depth = TREEDEPTH,
                     n.minobsinnode = 10, shrinkage = 0.05,
                     bag.fraction = BAGRATIO, train.fraction = 1.0,
                     keep.data = TRUE, verbose = TRUE,
                     var.names = NULL, response.name = NULL)
or
model.gbm <- gbm(tr.y ~ ., distribution = "bernoulli",
                 data = data.frame(cbind(tr[, 1:DIM], tr.y)), weights = tr.w,
                 var.monotone = NULL, n.trees = NTREE, ...
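For cost-sensitive fitting, the usual gbm mechanism is the one the post is
already touching: per-observation weights (w in gbm.fit, weights in gbm). A
minimal sketch, assuming a 5:1 cost for misclassifying the positive class
(toy data; all names are illustrative):

library(gbm)
set.seed(1)
d <- data.frame(x1 = runif(400), x2 = runif(400))
d$y <- as.integer(d$x1 + d$x2 + rnorm(400, 0, 0.3) > 1.3)  # 0/1 outcome

w <- ifelse(d$y == 1, 5, 1)  # errors on the positive class cost 5x as much

fit <- gbm(y ~ x1 + x2, data = d, weights = w,
           distribution = "bernoulli", n.trees = 500,
           interaction.depth = 3, n.minobsinnode = 10,
           shrinkage = 0.05, bag.fraction = 0.5)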
2008 Mar 05
0
Using tune with gbm --grid search for best hyperparameters
...get this to work. I note that there is no
wrapper for gbm but that it is possible to use non-wrapped functions (like
lm) without problem. Here's a snippet of code to illustrate.
> data(mtcars)
> obj <- gbm(mpg ~ disp + wt + carb, data = mtcars,
+            distribution = "gaussian", n.trees = 1000, n.minobsinnode = 5)
> summary(obj)  # just to demonstrate that gbm worked
   var   rel.inf
1 disp 55.185469
2   wt 40.198605
3 carb  4.615926
# now let's find the best value for n.minobsinnode using tune
> tune.obj <- tune(gbm, mpg ~ disp + wt + carb, data = mtcars,
+                  distribution = "gaussian", n.trees = ...
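One way around the missing wrapper is a grid search with caret instead of
e1071's tune. A minimal sketch, assuming a recent caret whose gbm grid
includes n.minobsinnode (caret supplies the predict step that tune lacks
for gbm):

library(caret)
data(mtcars)
grid <- expand.grid(n.trees = 1000,
                    interaction.depth = 1,
                    shrinkage = 0.01,
                    n.minobsinnode = c(2, 5, 10))  # values to try
fit <- train(mpg ~ disp + wt + carb, data = mtcars,
             method = "gbm", distribution = "gaussian",
             tuneGrid = grid,
             trControl = trainControl(method = "cv", number = 5),
             verbose = FALSE)
fit$bestTune  # best n.minobsinnode by cross-validated RMSE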
2009 Jul 10
1
help! Error in using Boosting...
...shrinkage = 0.001, n.trees = 20000, bag.fraction = 1,
   distribution = "bernoulli")
Here is the error:
Error in gbm.fit(y = mytraindata[, 1], x = mytraindata[, -1],
    interaction.depth = 4, :
  The dataset size is too small or subsampling rate is too large:
  cRows*train.fraction*bag.fraction <= n.minobsinnode
What might be the problem?
Thanks a lot!
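The message states the condition directly: gbm refuses to start unless the
subsampled training set can populate at least one terminal node. A minimal
sketch of the arithmetic, assuming gbm's default n.minobsinnode = 10 and,
say, only 8 training rows:

cRows          <- 8     # rows in the training data (illustrative)
train.fraction <- 1.0   # gbm.fit default
bag.fraction   <- 1.0   # as in the call above
n.minobsinnode <- 10    # gbm default minimum node size
# gbm.fit errors out when this is FALSE:
cRows * train.fraction * bag.fraction > n.minobsinnode  # FALSE -> the error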
2013 Mar 24
3
Parallelizing GBM
...have a need for parallelization.
I normally rely on gbm.fit for speed reasons, and I usually call it this
way
gbm_model <- gbm.fit(trainRF, prices_train,
                     offset = NULL,
                     misc = NULL,
                     distribution = "multinomial",
                     w = NULL,
                     var.monotone = NULL,
                     n.trees = 50,
                     interaction.depth = 5,
                     n.minobsinnode = 10,
                     shrinkage = 0.001,
                     bag.fraction = 0.5,
                     nTrain = (n_train/2),
                     keep.data = FALSE,
                     verbose = TRUE,
                     var.names = NULL,
                     response.name = NULL)
Does anybody know an easy way to parallelize the model (in this case it
means simply having 4 cores on the same machine working on the problem)?
Any sugg...
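gbm grows trees sequentially (each tree fits the previous trees' residuals),
so a single gbm.fit call is hard to spread over 4 cores. What parallelizes
naturally is cross-validation. A minimal sketch, assuming gbm >= 2.1, whose
gbm() runs the CV folds on multiple cores via n.cores (toy data stands in
for trainRF/prices_train):

library(gbm)
set.seed(1)
d <- data.frame(x1 = runif(500), x2 = runif(500))
d$y <- d$x1 + 2 * d$x2 + rnorm(500, 0, 0.1)

gbm_model <- gbm(y ~ x1 + x2, data = d,
                 distribution = "gaussian",
                 n.trees = 50, interaction.depth = 5,
                 n.minobsinnode = 10, shrinkage = 0.001,
                 bag.fraction = 0.5,
                 cv.folds = 4, n.cores = 4,  # folds fitted on 4 cores
                 verbose = FALSE)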
2006 May 27
2
boosting - second posting
...additive model, 2: two-way interactions, etc.
+   bag.fraction = 0.5,     # subsampling fraction, 0.5 is probably best
+   train.fraction = 0.5,   # fraction of data for training;
+                           # first train.fraction*N used for training
+   n.minobsinnode = 10,    # minimum total weight needed in each node
+   cv.folds = 5,           # do 5-fold cross-validation
+   keep.data = TRUE,       # keep a copy of the dataset with the object
+   verbose = FALSE)        # print out progress
>
> best.iter = gbm...
2013 Jun 23
1
Which is the final model for a Boosted Regression Trees (GBM)?
...[3] "train.error" "valid.error"
[5] "oobag.improve" "trees"
[7] "c.splits" "bag.fraction"
[9] "distribution" "interaction.depth"
[11] "n.minobsinnode" "n.trees"
[13] "nTrain" "response.name"
[15] "shrinkage" "train.fraction"
[17] "var.levels" "var.monotone"
[19] "var.names" &qu...
2010 Feb 28
1
Gradient Boosting Trees with correlated predictors in gbm
...sigma <- sqrt(var(Y)/SNR)
Y <- Y + rnorm(n, 0, sigma)
mydata <- data.frame(X, Y)
# Fit model (should take less than 20 seconds on an average modern computer)
gbm1 <- gbm(formula = Y ~ X1 + X2 + X3 + X4 + X5,
            data = mydata,
            distribution = "gaussian",
            n.trees = 500,
            interaction.depth = 2,
            n.minobsinnode = 10,
            shrinkage = 0.1,
            bag.fraction = 0.5,
            train.fraction = 1,
            cv.folds = 5,
            keep.data = TRUE,
            verbose = TRUE)
## Plot variable influence
best.iter <- gbm.perf(gbm1, plot.it = TRUE, method = "cv")
print(best.iter)
summary(gbm1, n.trees = best.iter)  # based on the estimated best number of trees...
2010 Apr 26
3
R.GBM package
Hi Greg,
I am new to the gbm package. Can boosted decision trees be implemented in the
'gbm' package, or can 'gbm' only be used for regression?
If they can, do I need to combine the rpart and gbm commands?
Thanks so much!
--
Sincerely,
Changbin
--
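gbm boosts its own regression trees internally, so no rpart step is needed,
and it does handle classification. A minimal sketch, assuming a 0/1 outcome
with distribution = "bernoulli" (toy data; names are illustrative):

library(gbm)
set.seed(1)
d <- data.frame(x1 = runif(300), x2 = runif(300))
d$y <- as.integer(d$x1 + d$x2 + rnorm(300, 0, 0.3) > 1)  # 0/1 class label

clf <- gbm(y ~ x1 + x2, data = d, distribution = "bernoulli",
           n.trees = 200, interaction.depth = 2, shrinkage = 0.05)
p <- predict(clf, newdata = d, n.trees = 200, type = "response")  # P(y = 1)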
2006 May 25
0
boosting
...additive model, 2: two-way interactions, etc.
+   bag.fraction = 0.5,     # subsampling fraction, 0.5 is probably best
+   train.fraction = 0.5,   # fraction of data for training;
+                           # first train.fraction*N used for training
+   n.minobsinnode = 10,    # minimum total weight needed in each node
+   cv.folds = 5,           # do 5-fold cross-validation
+   keep.data = TRUE,       # keep a copy of the dataset with the object
+   verbose = FALSE)        # print out progress
>
> best.iter = gbm...
2017 Dec 14
0
Distributions for gbm models
...caret recognises?
> getModelInfo("gbm")[["gbm"]]$parameters
parameter class label
1 n.trees numeric # Boosting Iterations
2 interaction.depth numeric Max Tree Depth
3 shrinkage numeric Shrinkage
4 n.minobsinnode numeric Min. Terminal Node Size
Is that a limitation of the caret package? Or is there something I'm
not getting?
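Those four rows are only caret's tuning grid; other gbm arguments can still
be passed through train()'s "..." straight to the fitting routine. A minimal
sketch, assuming caret's gbm wrapper honours a user-supplied distribution
(it otherwise picks gaussian/bernoulli/multinomial from the outcome type):

library(caret)
set.seed(1)
d <- data.frame(x = runif(200))
d$y <- rpois(200, lambda = exp(1 + d$x))  # count outcome for a Poisson fit

fit <- train(y ~ x, data = d, method = "gbm",
             distribution = "poisson",  # forwarded to gbm, not tuned
             trControl = trainControl(method = "cv", number = 3),
             verbose = FALSE)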
--
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.
___ Patrick Connolly
{~._.~} Great minds discuss ideas
_( Y )_...