similar to: Reproducibility issue in gbm (32 vs 64 bit)

Displaying 20 results from an estimated 3000 matches similar to: "Reproducibility issue in gbm (32 vs 64 bit)"

2010 Feb 28
1
Gradient Boosting Trees with correlated predictors in gbm
Dear R users, I’m trying to understand how correlated predictors impact the Relative Importance measure in Stochastic Boosting Trees (J. Friedman). As Friedman described: “…with single decision trees (referring to Breiman’s CART algorithm), the relative importance measure is augmented by a strategy involving surrogate splits intended to uncover the masking of influential variables by others
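A minimal sketch of how gbm reports relative influence, which is one way to see how correlated predictors end up sharing credit; the data frame train_df, response y, and tuning values are placeholders, not from the original post:

    library(gbm)
    # fit a small boosted model (illustrative data and settings only)
    fit <- gbm(y ~ ., data = train_df, distribution = "gaussian",
               n.trees = 1000, interaction.depth = 3, shrinkage = 0.01)
    # default: reduction-of-squared-error attribution summed over all splits
    summary(fit, n.trees = 1000, method = relative.influence)
    # alternative: permutation-based importance, which often spreads credit
    # differently when predictors are correlated
    summary(fit, n.trees = 1000, method = permutation.test.gbm)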
2009 Oct 30
1
possible memory leak in predict.gbm(), package gbm ?
Dear gbm users, when running predict.gbm() on a "large" dataset (150,000 rows, 300 columns, 500 trees), I notice that the memory used by R grows beyond reasonable limits. My 14GB of RAM are often not sufficient. I am interpreting this as a memory leak, since there should be no reason to expand memory needs once the data are loaded and passed to predict.gbm()? Running R version 2.9.2 on
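One hedged workaround, assuming fit and big_df stand in for the poster's model object and data frame: score the data in chunks so predict.gbm() never holds the whole prediction workspace at once:

    library(gbm)
    chunk_size <- 10000
    n <- nrow(big_df)
    preds <- numeric(n)
    for (start in seq(1, n, by = chunk_size)) {
      idx <- start:min(start + chunk_size - 1, n)
      preds[idx] <- predict(fit, newdata = big_df[idx, ], n.trees = 500)
      gc()  # encourage R to release memory between chunks
    }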
2010 Sep 21
1
package gbm, predict.gbm with offset
Dear all, the help file for predict.gbm states that "The predictions from gbm do not include the offset term. The user may add the value of the offset to the predicted value if desired." I am just not sure how exactly, especially for a Poisson model, where I believe the offset is multiplicative? For example: library(MASS) fit1 <- glm(Claims ~ District + Group + Age +
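A minimal sketch of how the offset could be added back for a Poisson gbm, assuming exposure is the offset variable and fit / newdat / best_trees are placeholders: the offset enters on the log (link) scale, so it is additive there and multiplicative on the response scale:

    library(gbm)
    # predictions on the link (log) scale exclude the offset
    eta <- predict(fit, newdata = newdat, n.trees = best_trees, type = "link")
    # add the log-exposure back on the link scale, then invert the link
    expected_counts <- exp(eta + log(newdat$exposure))
    # equivalently, multiply on the response scale: exp(eta) * newdat$exposure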
2008 Sep 22
1
gbm error
Good afternoon Has anyone tried using Dr. Elith's BRT script? I cannot seem to run gbm.step from the installed gbm package. Is it something external to gbm? When I run the script itself <- gbm.step(data=model.data, gbm.x = colx:coly, gbm.y = colz, family = "bernoulli", tree.complexity = 5, learning.rate = 0.01, bag.fraction = 0.5) ... I
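gbm.step() is not part of the gbm package itself; it comes from Elith & Leathwick's BRT code and ships with the dismo package, which calls gbm underneath. A hedged sketch of the usual setup, keeping the poster's placeholder column indices:

    library(gbm)
    library(dismo)   # provides gbm.step()
    brt_fit <- gbm.step(data = model.data,
                        gbm.x = colx:coly,      # predictor columns
                        gbm.y = colz,           # response column
                        family = "bernoulli",
                        tree.complexity = 5,
                        learning.rate = 0.01,
                        bag.fraction = 0.5)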
2005 Jan 12
4
gbm
Hi, there: I am wondering where I can find a detailed explanation of gbm, or explanations of worked examples using gbm. Thanks, Ed
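For a first look, a minimal worked example along the lines of the package documentation; the simulated data and settings are illustrative only:

    library(gbm)
    set.seed(1)
    n  <- 1000
    df <- data.frame(x1 = runif(n), x2 = runif(n))
    df$y <- rnorm(n, mean = 2 * df$x1 + sin(6 * df$x2))
    fit <- gbm(y ~ x1 + x2, data = df, distribution = "gaussian",
               n.trees = 2000, shrinkage = 0.01, interaction.depth = 2,
               cv.folds = 5)
    best <- gbm.perf(fit, method = "cv")   # CV-selected number of trees
    summary(fit, n.trees = best)           # relative influence of predictors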
2010 Apr 26
3
R.GBM package
Hi, Dear Greg, I am new to the GBM package. Can boosted classification trees be implemented in the 'gbm' package, or can 'gbm' only be used for regression? If they can, do I need to combine the rpart and gbm commands? Thanks so much! -- Sincerely, Changbin
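gbm does handle boosted classification trees directly, with no rpart call needed. A hedged sketch for a 0/1 response; train_df, test_df and label are placeholders:

    library(gbm)
    fit <- gbm(label ~ ., data = train_df,
               distribution = "bernoulli",     # 0/1 classification
               n.trees = 1000, interaction.depth = 3, shrinkage = 0.01)
    # predicted class-1 probabilities
    p <- predict(fit, newdata = test_df, n.trees = 1000, type = "response")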
2009 Jun 17
1
gbm for cost-sensitive binary classification?
I recently used gbm for a binary classification problem. As expected, it gets very good results, based on area under the ROC curve with 7-fold cross-validation. However, the application (malware detection) is cost-sensitive: getting a FP (classifying a clean sample as a dirty one) is much worse than getting a FN (missing a dirty sample). I would like to tune the gbm model to be biased toward a very low FP rate. For this
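Two common, hedged ways to push a bernoulli gbm toward a low FP rate (object names are placeholders): up-weight the clean class during fitting, and/or choose a high probability threshold from the ROC curve:

    library(gbm)
    # give clean samples (y == 0) a larger weight so false positives cost more
    w <- ifelse(train_df$y == 0, 5, 1)
    fit <- gbm(y ~ ., data = train_df, distribution = "bernoulli",
               weights = w, n.trees = 1000, interaction.depth = 3,
               shrinkage = 0.01)
    p <- predict(fit, newdata = test_df, n.trees = 1000, type = "response")
    # pick a high threshold to keep the false-positive rate low
    pred_dirty <- p > 0.9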
2005 Feb 18
2
gbm
Hi, there: I keep running up against the scalability limits of some R packages. This time, I am trying gbm to do AdaBoost-style boosting on my project. Initially I tried to grow trees using rpart on a dataset with 200 variables and 30,000 observations. Now I am wondering whether I can apply adaboosting to it. I am wondering whether anyone here has done a similar thing before and can provide some sample code. Also any
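A hedged sketch: gbm exposes an AdaBoost-style exponential loss via distribution = "adaboost", with the response coded 0/1; train_df and test_df are placeholders:

    library(gbm)
    fit <- gbm(y ~ ., data = train_df,
               distribution = "adaboost",   # exponential loss, 0/1 response
               n.trees = 3000, interaction.depth = 4, shrinkage = 0.01,
               bag.fraction = 0.5, cv.folds = 5)
    best  <- gbm.perf(fit, method = "cv")
    score <- predict(fit, newdata = test_df, n.trees = best)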
2013 Mar 24
3
Parallelizing GBM
Dear All, I am far from being a guru about parallel programming. Most of the time I rely on randomForest for data mining large datasets. I would also like to try the gradient boosted methods in GBM, but I need parallelization. I normally rely on gbm.fit for speed reasons, and I usually call it this way: gbm_model <- gbm.fit(trainRF,prices_train, offset = NULL, misc =
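gbm.fit() itself is sequential (each tree is grown on the residuals of the previous ones), so a hedged workaround is to parallelize around it, e.g. fitting models for different tuning settings in parallel with the parallel package; trainRF and prices_train are taken from the post, the rest of the settings are placeholders:

    library(gbm)
    library(parallel)
    shrinkages <- c(0.1, 0.05, 0.01)
    fits <- mclapply(shrinkages, function(s) {
      gbm.fit(x = trainRF, y = prices_train,
              distribution = "gaussian",
              n.trees = 2000, shrinkage = s,
              interaction.depth = 3, verbose = FALSE)
    }, mc.cores = 3)   # mclapply forks; on Windows use parLapply instead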
2012 Apr 25
1
Question about NV18 and GBM library.
Hi, I have a GeForce4 MX 440 AGP 8x, and I'm trying to use the GBM library (as jbarnes does in http://virtuousgeek.org/blog/index.php/jbarnes/2011/10/ and David Hermann in KMSCON: https://github.com/dvdhrm/kmscon), without success. When I try to create a gbm_device, I get the errors below (code follows): nouveau_drm_screen_create: unknown chipset nv18 dri_init_screen_helper: failed to create pipe_screen
2013 Jun 23
1
Which is the final model for a Boosted Regression Trees (GBM)?
Hi R users, I was trying to find the final model in the following example using boosted regression trees (GBM). The program gives the fitted values, but I wanted to calculate a fitted value by hand to understand it in depth. Would you give me some hints on what the final model is for this example? Thanks, KG ------- The following script I used #----------------------- library(dismo)
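A hedged sketch of what the "final model" amounts to: an intercept plus the shrunken contributions of all retained trees, accumulated on the link scale; for a bernoulli BRT the fitted probability is the inverse-logit of that sum. Object names (brt_fit, newdat, n_trees) are placeholders:

    library(gbm)
    # link-scale score: initial value + sum of (shrunken) tree predictions
    f_link <- predict(brt_fit, newdata = newdat, n.trees = n_trees, type = "link")
    # fitted probability for a bernoulli model
    p_hat  <- plogis(f_link)            # same as 1 / (1 + exp(-f_link))
    # the individual trees can be inspected one at a time with, e.g.:
    pretty.gbm.tree(brt_fit, i.tree = 1)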
2010 Jun 23
1
gbm function
Hello, I have questions about the gbm package. It seems we have to divide the data into two parts (a training set and a test set) first: 1- the training set for running the gbm function, 2- the test set for gbm.perf. Is that right? I have 123 samples, which I divided into 100 for training and 23 for testing. So, what is the cv.folds parameter in the gbm function for? Thanks a lot, Azam
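A hedged sketch of how cv.folds relates to the train/test split: with cv.folds set, gbm runs its own internal cross-validation on the training data and gbm.perf(method = "cv") picks the number of trees, so a separate hold-out set becomes optional rather than required; train_df and test_df are placeholders:

    library(gbm)
    fit <- gbm(y ~ ., data = train_df, distribution = "bernoulli",
               n.trees = 3000, shrinkage = 0.01, interaction.depth = 3,
               cv.folds = 5)                  # internal 5-fold CV
    best <- gbm.perf(fit, method = "cv")      # CV-selected number of trees
    # an external test set is then only needed for a final, unbiased estimate
    p_test <- predict(fit, newdata = test_df, n.trees = best, type = "response")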
2012 Apr 16
1
Can't install package gbm, because packageVersion is not an exported object from namespace::Utils
I'm running R 2.11.1 on 64 bit Debian. I've had no problem installing any other CRAN packages, but installing package "gbm" fails due to: *** installing help indices ** building package indices ... ** testing if installed package can be loaded Error : .onAttach failed in attachNamespace() for 'gbm', details: call: NULL error: 'packageVersion' is not an
2018 Feb 19
2
gbm.step for non-binary classification
Hi R folks, do you know whether gbm.step can be used for non-binary classification? Thanks -- Dr Manuel Mendoza Department of Biogeography and Global Change National Museum of Natural History (MNCN) Spanish Scientific Council (CSIC) C/ Serrano 115bis, 28006 MADRID Spain
2010 May 21
1
Question regarding GBM package
Dear R expert I have come across the GBM package for R and it seemed appropriate for my research. I am trying to predict the number of FPGA resources required by a Software Function if it were mapped onto hardware. As input I use software metrics (a lot of them). I already use several regression techniques, and the graphs I produce with GBM look promising. Now my question... I see that the
2018 Feb 19
3
gbm.step for non-binary classification
Thanks, Carlos. As far as I understand, there are restrictions: the family argument can be "gaussian" (for minimizing squared error), so the response has to be numeric; "bernoulli" (logistic regression for 0-1 outcomes), necessarily binary; or "poisson" (count outcomes; requires the response to be a positive integer), so numeric as well. The only one that could be
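As far as I know, plain gbm (rather than gbm.step) has historically accepted distribution = "multinomial" for multi-class responses, although support has been unreliable in some package versions; a heavily hedged sketch, with all object names as placeholders:

    library(gbm)
    fit <- gbm(species ~ ., data = train_df,
               distribution = "multinomial",   # multi-class factor response
               n.trees = 2000, interaction.depth = 3, shrinkage = 0.01,
               cv.folds = 5)
    best <- gbm.perf(fit, method = "cv")
    # predictions come back as an n x n.classes x 1 array of probabilities
    p <- predict(fit, newdata = test_df, n.trees = best, type = "response")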
2010 May 01
1
bag.fraction in gbm package
Hi, Dear Greg, sorry to bother you again. I have several questions about the 'gbm' package. If train.fraction is less than 1 (e.g. 0.5), then the *first* 50% of the data will be used to fit the model and the other 50% can be used to estimate the performance. If bag.fraction is 0.5, then gbm uses a *random* 50% of the data to fit the model, and the other 50% of the data is used to estimate the
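A hedged sketch of the distinction: train.fraction deterministically takes the first fraction of rows for fitting and leaves the rest as a test set (used by gbm.perf(method = "test")), while bag.fraction re-samples a random fraction of the training rows at every iteration, and those per-iteration out-of-bag rows drive gbm.perf(method = "OOB"); data and settings are placeholders:

    library(gbm)
    fit <- gbm(y ~ ., data = df, distribution = "gaussian",
               n.trees = 2000, shrinkage = 0.01,
               train.fraction = 0.5,   # first 50% of rows fit, last 50% held out
               bag.fraction   = 0.5)   # random 50% of the training rows per tree
    best_test <- gbm.perf(fit, method = "test")  # uses the held-out 50%
    best_oob  <- gbm.perf(fit, method = "OOB")   # uses the per-iteration OOB rows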
2012 Dec 12
1
extracting splitting rules from GBM
I am extracting splitting rules from Greg Ridgeway's GBM 1.6-3.2 in R 2.15.2, so I can run classification in a production system outside of R. I have it working and verified for a dummy data set with all variable types (numeric, factor, ordered) and missing values, but in the Titanic survivors data set the splitting rule for factors does not make sense. The attached code and log below explain
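A hedged sketch of the usual extraction route, which is where factor splits often look confusing: in pretty.gbm.tree() output, a factor split's SplitCodePred refers to an entry of the model's c.splits list, whose -1/1 values say which factor levels go left or right; fit is a placeholder for the poster's model, and the exact indexing (0- vs 1-based) may differ by gbm version:

    library(gbm)
    tree1 <- pretty.gbm.tree(fit, i.tree = 1)   # one tree as a data frame
    tree1
    # for factor splits, SplitCodePred refers to an entry of fit$c.splits;
    # each c.splits vector holds one -1/1 value per factor level (left/right)
    fit$c.splits[[1]]
    # SplitVar is 0-based, so the matching level labels are:
    fit$var.levels[[tree1$SplitVar[1] + 1]]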
2010 Jun 15
1
output from the gbm package
Hi, Dear Greg and R community, I have one question about the output of the gbm package. The output of boosting should be f(x); from it, how do I calculate the probability for each observation in the data set? Since it is stochastic, how can I guarantee that each observation in the training data is selected at least once? If some obs are not selected, how do I calculate the training error? Thanks --
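A hedged sketch for the first question: for a bernoulli model the boosted score f(x) is on the logit scale, so probabilities come from the inverse-logit, or directly from predict with type = "response"; fit, train_df and n_trees are placeholders:

    library(gbm)
    f_x <- predict(fit, newdata = train_df, n.trees = n_trees, type = "link")
    p   <- plogis(f_x)            # 1 / (1 + exp(-f_x))
    # or equivalently, let gbm apply the inverse link itself:
    p2  <- predict(fit, newdata = train_df, n.trees = n_trees, type = "response")

On the sampling question, as far as I recall there is no guarantee that every observation enters every tree when bag.fraction < 1; the training error gbm reports is computed over all training rows at each iteration regardless of which rows were in that iteration's bag.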
2018 Feb 19
3
gbm.step for non-binary classification
Hello again. I forgot the main reason for using gbm.step from the dismo package. As you know, boosted models do overfit (unlike random forests or any other bootstrap method), but gbm.step does cross-validation to determine the optimal number of trees and avoid that. It is essential. The option I have left, Carlos, is to do it with gbm, but many times, and use the