
Displaying 20 results from an estimated 500 matches similar to: "Regarding randomForest regression"

2010 Mar 30
1
predict.kohonen for SOM returns NA?
All, the kohonen predict function is returning NA for SOM predictions regardless of the data used... even the package example for a SOM on the wine data returns NAs. Does anyone have a working SOM example? Also, what is the purpose of trainY: what would be the dependent data for an unsupervised SOM? As may be apparent from my questions, I am very new to Kohonen maps and am very grateful
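For what it's worth, a minimal sketch against the kohonen 2.x interface the poster appears to be using (wines and vintages ship with the package; the predict() arguments changed in later kohonen versions, so treat the trainX/trainY usage here as an assumption about that older API):

    library(kohonen)
    data(wines)                        # wines matrix and vintages factor
    set.seed(7)
    training <- sample(nrow(wines), 120)
    Xtr <- scale(wines[training, ])
    Xte <- scale(wines[-training, ],
                 center = attr(Xtr, "scaled:center"),
                 scale  = attr(Xtr, "scaled:scale"))
    m <- som(Xtr, grid = somgrid(5, 5, "hexagonal"))   # unsupervised map
    # for an unsupervised map, trainY is the dependent data: predict() maps
    # trainX onto the units and uses trainY to give each unit a prediction
    p <- predict(m, newdata = Xte, trainX = Xtr, trainY = vintages[training])
    table(vintages[-training], p$prediction)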
2012 Oct 22
1
random forest
Hi all, can someone tell me the difference between the following two formulas?

    1. epiG.rf <- randomForest(gamma ~ ., data = data, na.action = na.fail, ntree = 300,
                               xtest = NULL, ytest = NULL, replace = T, proximity = F)

    2. epiG.rf <- randomForest(gamma ~ ., data = data, na.action = na.fail, ntree = 300,
                               xtest = NULL, ytest = NULL, replace = T, proximity = F)
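The two calls are in fact character-for-character identical, so any difference between their results can only come from the random number stream. A quick check reusing the poster's formula (a sketch, not a diagnosis):

    library(randomForest)
    set.seed(1); fit1 <- randomForest(gamma ~ ., data = data, ntree = 300)
    set.seed(1); fit2 <- randomForest(gamma ~ ., data = data, ntree = 300)
    all.equal(fit1$predicted, fit2$predicted)   # TRUE under the same seed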
2008 Jun 15
1
randomForest, 'No forest component...' error while calling Predict()
Dear R-users, While making a prediction using the randomForest function (package randomForest) I'm getting the following error message: "Error in predict.randomForest(model, newdata = CV) : No forest component in the object" Here's my complete code. For reproducing this task, please find my 2 data sets attached (http://www.nabble.com/file/p17855119/data.rar).
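The usual cause of this message: when xtest/ytest are supplied to randomForest(), keep.forest defaults to FALSE, so the returned object contains no forest to predict from. A minimal sketch of the likely fix, assuming hypothetical train/testX/testY objects alongside the poster's CV data:

    library(randomForest)
    # keep.forest defaults to FALSE whenever xtest is given; force it on
    model <- randomForest(y ~ ., data = train,
                          xtest = testX, ytest = testY,
                          keep.forest = TRUE)
    pred <- predict(model, newdata = CV)   # forest component now present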
2004 Apr 15
7
all(logical(0)) and any(logical(0))
Dear R-help, I was bitten by the behavior of all() when given logical(0): it is TRUE! (And any(logical(0)) is FALSE.) Wouldn't it be better to return logical(0) in both cases? The problem surfaced because some unnamed individual called randomForest(x, y, xtest, ytest, ...), and gave y as a two-level factor but ytest as just a numeric vector. I thought I checked for that in my code by testing
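A small illustration of the behavior, and of why a level check passes silently when ytest is numeric (vacuous truth over a length-0 logical):

    all(logical(0))   # TRUE:  no element is FALSE
    any(logical(0))   # FALSE: no element is TRUE
    # levels() of a plain numeric vector is NULL, so a guard like this
    # compares against NULL, yields logical(0), and all() waves it through:
    levels(c(1, 2))                              # NULL
    all(levels(factor(1:2)) == levels(c(1, 2)))  # TRUE, length-0 comparison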
2004 Jan 20
1
random forest question
Hi, here are three results of random forest (version 4.0-1). The results seem to be more or less the same, which is strange because I changed the classwt. I had hoped that, for example, classwt=c(0.45, 0.1, 0.45) would result in fewer cases classified as class 2. Did I misunderstand something? Christian x1rf <- randomForest(x=as.data.frame(mfilters[cvtrain,]),
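For context: classwt was known not to behave like Breiman's class weights in early randomForest 4.x builds, and stratified per-tree sampling is the usual workaround. A sketch with made-up per-class counts and a hypothetical class vector cl:

    library(randomForest)
    # draw unequal per-class sample sizes for each tree instead of classwt
    rf <- randomForest(x = as.data.frame(mfilters[cvtrain, ]), y = cl,
                       strata = cl, sampsize = c(45, 10, 45))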
2009 Dec 10
2
different randomForest performance for same data
Hello, I came across a problem when building a randomForest model. Maybe someone can help me. I have a training and a test dataset with a discrete response and ten predictors (numeric and factor variables). The two datasets are similar in terms of number of predictors, variable names and variable types (factor, numeric), except that only one predictor has got 20 levels in the training
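Mismatched factor levels between training and test data are a classic source of this; the usual fix is to re-level every test factor against the training data before predicting. A sketch with hypothetical traindata/testdata frames:

    # give every test factor exactly the training levels
    for (v in names(traindata)) {
      if (is.factor(traindata[[v]]) && v %in% names(testdata))
        testdata[[v]] <- factor(testdata[[v]], levels = levels(traindata[[v]]))
    }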
2006 Jul 26
3
memory problems when combining randomForests
Dear all, I am trying to train a randomForest using all my control data (12,000 cases, ~ 20 explanatory variables, 2 classes). Because of memory constraints, I have split my data into 7 subsets and trained a randomForest for each, hoping that using combine() afterwards would solve the memory issue. Unfortunately, combine() still runs out of memory. Is there anything else I can do? (I am not using
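A sketch of the chunked approach with smaller per-chunk forests and an incremental combine(), so at most two forests sit in memory at once (chunks is a hypothetical list of row-index vectors):

    library(randomForest)
    big <- NULL
    for (idx in chunks) {
      rf <- randomForest(x[idx, ], y[idx], ntree = 50)
      big <- if (is.null(big)) rf else combine(big, rf)
      rm(rf); gc()   # free the chunk forest before growing the next one
    }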
2009 Apr 04
1
error in trmesh (alphahull package)
Hello R community, I have cross-posted with r-sig-geo as this issue could fall under either interest group, I believe. I just came across the alphahull package and am very pleased I may not need to use CGAL anymore for this purpose. However, I am having a problem computing alpha shapes with my point data, and it seems to have to do with the spatial configuration of my points (which form
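trmesh (inherited from the tripack triangulation code) is known to choke on duplicated or exactly collinear points; a hedged preprocessing sketch, where xy stands for a hypothetical two-column coordinate matrix:

    library(alphahull)
    xy <- unique(xy)                                  # drop exact duplicates
    # break exact collinearity with negligible jitter
    xy <- xy + matrix(rnorm(length(xy), sd = 1e-9), ncol = 2)
    ah <- ashape(xy, alpha = 0.5)                     # alpha chosen arbitrarily
    plot(ah)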
2011 Jun 02
1
aucRoc in caret package [SEC=UNCLASSIFIED]
Hi all, I used the following code and data to get AUC values for two sets of predictions:

    library(caret)
    > table(predicted1, trainy)
              trainy
    predicted1 hard soft
             1   27    0
             2   11   99
    > aucRoc(roc(predicted1, trainy))
    [1] 0.5
    > table(predicted2, trainy)
              trainy
    predicted2 hard soft
             1   27    2
             2   11   97
    > aucRoc(roc(predicted2, trainy))
    [1] 0.8451621

predicted1: 1 1 2
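One likely culprit: the old caret::roc expected a numeric vector of class probabilities as its first argument, and predicted1/predicted2 look like hard class labels, which makes the resulting areas hard to interpret (roc/aucRoc were removed from later caret versions). A hedged sketch with the pROC package, where prob_soft is a hypothetical vector of predicted probabilities for the "soft" class:

    library(pROC)
    # roc(response, predictor): the predictor must be a score, not a label
    auc(roc(trainy, prob_soft))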
2012 Oct 10
2
lm on matrix data
Hi, I have a question about using lm on a matrix; I have to admit it is very trivial, but I just couldn't find the answer after searching the mailing list and other online tutorials. It would be great if you could help. I have a matrix "trainx" of 492 (rows) by 220 (columns) that is my x, and trainy is 492 by 1. Also, I have the new data testx, which is 240 (rows) by 220 (columns). Here is
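A minimal sketch of the usual pattern: lm() works on data frames, so wrap the matrices and make sure the test columns carry the same names as the training columns (object names follow the poster's):

    train <- data.frame(y = trainy, trainx)
    fit <- lm(y ~ ., data = train)
    newd <- data.frame(testx)
    names(newd) <- names(train)[-1]   # predictor names must match exactly
    pred <- predict(fit, newdata = newd)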
2012 Dec 03
2
Different results from random.Forest with test option and using predict function
Hello R Gurus, I am perplexed by the different results I obtained when I ran code like this:

    set.seed(100)
    test1 <- randomForest(BinaryY ~ ., data = Xvars, trees = 51, mtry = 5, seed = 200)
    predict(test1, newdata = cbind(NewBinaryY, NewXs), type = "response")

and this code:

    set.seed(100)
    test2 <- randomForest(BinaryY ~ ., data = Xvars, trees = 51, mtry = 5, seed = 200,
                          xtest = NewXs, ytest = NewBinarY)

The
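Worth noting: randomForest() has no 'trees' or 'seed' arguments; both are silently absorbed by '...', so the forests above are grown with the default ntree = 500 and differ only through RNG state and how the test predictions are extracted. A hedged re-run that should line the two up:

    library(randomForest)
    set.seed(100)
    test1 <- randomForest(BinaryY ~ ., data = Xvars, ntree = 51, mtry = 5)
    p1 <- predict(test1, newdata = NewXs, type = "response")
    set.seed(100)
    test2 <- randomForest(BinaryY ~ ., data = Xvars, ntree = 51, mtry = 5,
                          xtest = NewXs, ytest = NewBinaryY)
    table(p1, test2$test$predicted)   # should concentrate on the diagonal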
2017 Aug 23
1
cross validation in random forest using rfcv function
Hi all, I would like to do cross validation in random forest using the rfcv function. As the documentation for this package says: rfcv(trainx, trainy, cv.fold=5, scale="log", step=0.5, mtry=function(p) max(1, floor(sqrt(p))), recursive=FALSE, ...) However, I don't know how to build trainx and trainy for my data set, and I could not understand the way trainx is built in the package
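A minimal sketch of what rfcv expects: trainx holds the predictor columns only and trainy the response vector (mydata and its class column are hypothetical names):

    library(randomForest)
    trainx <- mydata[, setdiff(names(mydata), "class")]  # predictors only
    trainy <- mydata$class                               # the response
    cv <- rfcv(trainx, trainy, cv.fold = 5)
    # cross-validated error along the variable-elimination ladder
    with(cv, plot(n.var, error.cv, type = "b", log = "x"))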
2017 Aug 23
2
cross validation in random forest rfcv function
Hi all, I would like to do cross validation in random forest using the rfcv function. As the documentation for this package says: rfcv(trainx, trainy, cv.fold=5, scale="log", step=0.5, mtry=function(p) max(1, floor(sqrt(p))), recursive=FALSE, ...) However, I don't know how to build trainx and trainy for my data set, and I could not understand the way trainx is built in the package
2010 Jan 02
1
Please help me!!!! Error in `[.data.frame`(x, , retained, drop = FALSE) : undefined columns selected
I am learning the package "caret". After I run the "rfe" function, I get the following error: Error in `[.data.frame`(x, , retained, drop = FALSE) : undefined columns selected In addition: Warning message: In predict.lm(object, x) : prediction from a rank-deficient fit may be misleading I tried the manual example and that works fine, so something in my data must be wrong. I do not know what
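The rank-deficiency warning usually points at constant or perfectly collinear predictors entering the lm fits inside rfe; a hedged screening sketch using caret's own helpers (x and y are hypothetical numeric predictors and response):

    library(caret)
    nzv <- nearZeroVar(x)
    if (length(nzv)) x <- x[, -nzv]                 # drop near-constant columns
    hc <- findCorrelation(cor(x), cutoff = 0.95)
    if (length(hc)) x <- x[, -hc]                   # drop near-duplicates
    ctrl <- rfeControl(functions = lmFuncs, method = "cv", number = 5)
    fit <- rfe(x, y, sizes = c(5, 10, 20), rfeControl = ctrl)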
2017 Aug 23
0
cross validation in random forest using rfcv function
Any responses? On Wednesday, August 23, 2017 5:50 AM, Elahe chalabi via R-help <r-help at r-project.org> wrote: Hi all, I would like to do cross validation in random forest using the rfcv function. As the documentation for this package says: rfcv(trainx, trainy, cv.fold=5, scale="log", step=0.5, mtry=function(p) max(1, floor(sqrt(p))), recursive=FALSE, ...) However, I
2005 Oct 11
1
a problem in random forest
Hi, there: I spent some time on this but I think I really cannot figure it out; maybe I missed something here. My data look like this:

    > dim(trn3)
    [1] 7361  209
    > dim(val3)
    [1] 7427  209
    > mg.rf2 <- randomForest(x = trn3[, 1:208], y = trn3[, 209], data = trn3,
                             xtest = val3[, 1:208], ytest = val3[, 209], importance = T)

my test data has 7427 observations, but after prediction,

    > dim(mg.rf2$votes)
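What the poster is most likely seeing: $votes holds the out-of-bag votes for the 7361 training rows, while everything computed on xtest/ytest lives under the $test component. A sketch:

    dim(mg.rf2$votes)            # 7361 x n.classes: OOB votes, training data
    dim(mg.rf2$test$votes)       # 7427 x n.classes: votes for xtest
    head(mg.rf2$test$predicted)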
2009 Sep 15
1
Boost in R
Hello, does anyone know how to interpret this output in R?

    > # classification with logitboost
    > fit <- logitboost(xlearn, ylearn, xtest, presel = 50, mfinal = 20)
    > summarize(fit, ytest)
    Minimal mcr: 0 achieved after 6 boosting step(s)
    Fixed mcr: 0 achieved after 20 boosting step(s)

What does "mcr" mean? Thanks
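"mcr" here is almost certainly the misclassification rate on ytest, i.e. the fraction of test cases whose predicted label is wrong; computed by hand (pred being a hypothetical vector of predicted labels):

    mean(pred != ytest)   # proportion of test cases labelled incorrectly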
2006 Jul 24
2
RandomForest vs. bayes & svm classification performance
Hi, this is a question regarding classification performance using different methods. So far I've tried NaiveBayes (klaR package), svm (e1071 package) and randomForest (randomForest package). What has puzzled me is that randomForest seems to perform far better (32% classification error) than svm and NaiveBayes, which have similar classification errors (45% and 48%, respectively). A similar difference in
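One common explanation is that an svm is far more sensitive to its cost and gamma settings than randomForest is to mtry, so an untuned svm often trails badly. A hedged tuning sketch with e1071 (train and cls are hypothetical data and class column):

    library(e1071)
    tuned <- tune.svm(cls ~ ., data = train,
                      gamma = 10^(-3:1), cost = 10^(0:3))
    summary(tuned)            # cross-validated error over the grid
    best <- tuned$best.model  # refit with the winning settings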
2013 Apr 15
1
Imputation with SOM using kohonen package
I have a data set with 10 variables and about 8000 instances (or objects/rows/samples). In addition, I have one more ('class') variable that I have about 10 instances for, but for which I wish to impute values. I am a little confused about how to go about doing this, mostly as I'm not well-versed in it. Do I train the SOM with a data object that contains just the first 10 variables
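A hedged sketch of one way to do this with the older kohonen interface, mirroring the predict() pattern further up this page: train the map on the 10 complete variables, then let the ~10 labelled rows supply trainX/trainY (X10 and cls are hypothetical names for the complete variables and the sparse class column):

    library(kohonen)
    sX <- scale(X10)
    m <- som(sX, grid = somgrid(8, 8, "hexagonal"))   # unsupervised map
    lab <- !is.na(cls)
    imp <- predict(m, newdata = sX[!lab, , drop = FALSE],
                   trainX = sX[lab, , drop = FALSE], trainY = cls[lab])
    cls[!lab] <- imp$prediction                       # fill in the gaps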
2002 Oct 04
1
items in Rd file
Dear R-devel, I'm encountering a strange problem in an Rd file that I'm working on. In the "Value" section, I have something like:

    \value{
      An object of class \code{randomForest}, which is a list with the following components:
      \item{call}{the original call to \code{randomForest}}
      ...
      For classification problem, the following are also included: