Displaying 20 results from an estimated 1000 matches similar to: "different result from the same errorest() in library( ipred)"
2005 Jan 06
1
leave-one-out cross validation for randomForest
Dear all,
Can I get the leave-one-out cross validation error of randomForest in
R? I only found tune(), which gives the 10-fold cross-validation error.
Thanks for any information.
Xin LIU
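A minimal sketch of one answer, assuming a data frame mydata with a factor response y (hypothetical names): leave-one-out CV can be done with an explicit loop, or via ipred's errorest with the fold count set to the number of rows.
library(randomForest)
n <- nrow(mydata)
pred <- factor(rep(NA, n), levels = levels(mydata$y))
for (i in seq_len(n)) {
  fit <- randomForest(y ~ ., data = mydata[-i, ])   # fit without case i
  pred[i] <- predict(fit, mydata[i, , drop = FALSE])
}
mean(pred != mydata$y)   # leave-one-out error rate
## or, with ipred (default estimator is "cv"; k = n gives LOO):
## errorest(y ~ ., data = mydata, model = randomForest,
##          est.para = control.errorest(k = n))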
2005 Jun 23
1
errorest
Hi,
I am using errorest function from ipred package.
I am hoping to perform "bootstrap 0.632+" and "bootstrap leave one out".
According to the manual page for errorest, I used the following command:
ce632[i]<-errorest(ytrain ~., data=mydata, model=lda,
estimator=c("boot","632plus"), predict=mypredict.lda)$error
It didn't work. I then tried the
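A sketch of the likely fix: errorest accepts a single estimator per call, so "boot" and "632plus" need two separate calls, and the number of bootstrap replications goes in est.para. The names ytrain, mydata and the index i are the poster's.
library(MASS)    # lda
library(ipred)
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
ce632[i] <- errorest(ytrain ~ ., data = mydata, model = lda,
                     estimator = "632plus",
                     est.para = control.errorest(nboot = 25),
                     predict = mypredict.lda)$error
ceboot[i] <- errorest(ytrain ~ ., data = mydata, model = lda,
                      estimator = "boot",
                      est.para = control.errorest(nboot = 25),
                      predict = mypredict.lda)$error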
2009 Apr 25
1
Overlapping parameters "k" in different functions in "ipred"
Dear List,
I have a question regarding the "ipred" package. Under 10-fold CV, for different knn k (k = 1, 3, ..., 25), I am getting the same misclassification error:
#############################################
library(ipred)
data(iris)
cv.k = 10 ## 10-fold cross-validation
bwpredict.knn <- function(object, newdata) predict.ipredknn(object, newdata, type="class")
for (i in
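A hedged sketch of one workaround: a k passed through errorest's "..." can collide with the k that the internal cv() uses for the number of folds, so one option is to bind the neighbour count inside a wrapper model function instead.
library(ipred)
data(iris)
cv.k <- 10
bwpredict.knn <- function(object, newdata)
  predict.ipredknn(object, newdata, type = "class")
for (kk in c(1, 3, 5, 25)) {
  fit.knn <- function(formula, data)        # neighbour count fixed here,
    ipredknn(formula, data = data, k = kk)  # not passed through "..."
  err <- errorest(Species ~ ., data = iris, model = fit.knn,
                  predict = bwpredict.knn,
                  est.para = control.errorest(k = cv.k))$error
  cat("knn k =", kk, " CV error =", err, "\n")
}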
2009 Nov 02
1
modifying predict.nnet() to function with errorest()
Greetings,
I am having trouble calculating artificial neural network
misclassification errors using errorest() from the ipred package.
I have had no problems estimating the values with randomForest()
or svm(), but can't seem to get it to work with nnet(). I believe
this is due to the output of the predict.nnet() function within
cv.factor(). Below is a quick example of the problem I'm
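A sketch of the usual fix: predict.nnet(type = "class") returns a character vector, while errorest's cv.factor() expects a factor, so a wrapper can coerce the predictions back to the response levels stored in the fitted object. The extra nnet arguments are assumed to pass through errorest's "...".
library(nnet)
library(ipred)
mypredict.nnet <- function(object, newdata)
  factor(predict(object, newdata, type = "class"),
         levels = object$lev)   # object$lev holds the response levels
errorest(Species ~ ., data = iris, model = nnet,
         predict = mypredict.nnet, size = 4, trace = FALSE)$error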
2004 Jan 09
3
ipred and lda
Dear all,
can anybody help me with the program below? The function predict.lda
seems to be defined but cannot be used by errorest.
The R version is 1.7.1
Thanks in advance,
Stefan
----------------
library("MASS");
library("ipred");
data(iris3);
tr <- sample(1:50, 25);
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3]);
test <- rbind(iris3[-tr,,1],
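The excerpt is cut off; a minimal completed sketch, assuming the usual iris3 setup, is to pass an explicit predict wrapper so errorest never has to locate predict.lda itself.
library(MASS)
library(ipred)
data(iris3)
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
dat <- data.frame(train, Sp = factor(rep(c("s","c","v"), rep(25, 3))))
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
errorest(Sp ~ ., data = dat, model = lda, predict = mypredict.lda)$error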
2003 Jun 24
1
errorest: Error in cv.numeric()
Hi,
I am trying to get an error estimation
for a classification done using lda.
The examples work fine; however, I can't get
my own code to work.
The data is in object d
> d
class hydrophobicity charge geometry
1 2 6490.0400 1434.9700 610.99902
2 2 1602.0601 400.6030 -5824.00000
3 2 969.0060 260.1360 -415.00000
4 1
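The likely cause, sketched: errorest dispatches on the class of the response, so a numeric class column sends it to cv.numeric(), the regression branch. Recoding the response as a factor selects cv.factor() and a misclassification error instead.
library(MASS); library(ipred)
d$class <- factor(d$class)   # numeric 1/2 -> factor, so cv.factor() is used
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
errorest(class ~ ., data = d, model = lda, predict = mypredict.lda)$error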
2006 Oct 08
0
Problem in getting 632plus error using randomForest by ipred!
Hello!
I'm Taeho, a graduate student in South Korea.
In order to get the .632+ bootstrap error using random forest, I have tried to use the 'ipred' package; more specifically, the function 'errorest'.
Following the guidelines, I made a simple command line like below:
error<-errorest(class ~ ., data=data, model=randomForest, estimator = "632plus")$err
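A hedged sketch: for a classification forest the response must be a factor, and predict.randomForest then returns class labels directly, so no predict wrapper is needed; the error component of the result is $error.
library(ipred); library(randomForest)
data$class <- factor(data$class)   # classification needs a factor response
err632 <- errorest(class ~ ., data = data, model = randomForest,
                   estimator = "632plus",
                   est.para = control.errorest(nboot = 25))$error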
2005 Jan 10
0
Standard errors and boxplots with 632plus error estimator, "errorest"
Dear R-users,
I'd like to estimate standard errors (for lda) and make a boxplot with the
"632plus" and "boot" error estimators included in package ipred (method:
errorest). The "boot" estimator returns only a standard deviation but not
the whole error data.
Thank you in advance,
regards,
Antoine
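errorest returns only a point estimate for "632plus"; one generic, if crude, sketch is to repeat the whole estimate over independent resamplings and summarize the spread. mydata and the factor response y are hypothetical names.
library(MASS); library(ipred)
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
errs <- replicate(20,
  errorest(y ~ ., data = mydata, model = lda, estimator = "632plus",
           predict = mypredict.lda)$error)
boxplot(errs, ylab = "632plus error")
sd(errs)   # rough standard error of the estimate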
2004 Oct 13
1
random forest -optimising mtry
Dear R-helpers,
I'm working on mass spectra in randomForest/R, and following the
recommendations for the case of noisy variables, I don't want to use the
default mtry (sqrt of nvariables), but I'm not sure up to what ratio of
mtry to nvariables it makes sense to increase mtry without
"overtuning" RF.
Let me describe my example: I have 106 spectra belonging to 4 classes, the
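One hedged way to probe this empirically: tuneRF steps mtry up and down from a starting value and reports the OOB error at each step. Here x is assumed to be the spectra matrix and y the 4-class factor from the post.
library(randomForest)
set.seed(1)
res <- tuneRF(x, y, mtryStart = floor(sqrt(ncol(x))), ntreeTry = 500,
              stepFactor = 2, improve = 0.02)
res[which.min(res[, "OOBError"]), ]   # mtry with the lowest OOB error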
2011 Nov 17
1
tuning random forest. An unexpected result
Dear Researches,
I am using RF (in regression mode) to analyse several metrics extracted from
images. I am tuning RF with a loop over different ranges of mtry, ntree
and nodesize, selecting the lowest OOB MSE:
mtry from 1 to 5
nodesize from 1 to 10
ntree from 1 to 500
using this paper as a reference:
Palmer, D. S., O'Boyle, N. M., Glen, R. C., & Mitchell, J. B. O. (2007).
Random Forest Models
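A sketch of the grid described above, with one shortcut: for regression, rf$mse already holds the OOB MSE after each tree, so a single ntree = 500 fit covers the whole 1..500 range and only mtry and nodesize need explicit loops. df and y are hypothetical names.
library(randomForest)
best <- list(mse = Inf)
for (m in 1:5)
  for (ns in 1:10) {
    rf <- randomForest(y ~ ., data = df, mtry = m, nodesize = ns, ntree = 500)
    b  <- which.min(rf$mse)          # best ntree for this (mtry, nodesize)
    if (rf$mse[b] < best$mse)
      best <- list(mse = rf$mse[b], mtry = m, nodesize = ns, ntree = b)
  }
best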
2010 Dec 21
1
randomForest: tuneRF error
Just curious if anyone else has got this error before, and if so,
would know what I could do (if anything) to get past it:
> mtry <- tuneRF(training, trainingdata$class, ntreeTry = 500, stepFactor = 2, improve = 0.05, trace = TRUE, plot = TRUE, doBest = FALSE)
mtry = 13 OOB error = 0.62%
Searching left ...
mtry = 7 OOB error = 1.38%
-1.222222 0.05
Searching right ...
mtry = 26
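The message is cut off, but one common cause, sketched under that assumption: the x and y passed to tuneRF must come from the same object, and x must not contain the response column; in the call above, training and trainingdata look like two different objects.
library(randomForest)
x <- trainingdata[, setdiff(names(trainingdata), "class")]
y <- trainingdata$class
mtry <- tuneRF(x, y, ntreeTry = 500, stepFactor = 2,
               improve = 0.05, trace = TRUE, plot = TRUE, doBest = FALSE)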
2005 Mar 18
2
logistic model cross validation resolved
This post is NOT a question, but an answer. Readers, please disregard all my earlier posts about this question.
I'm posting for two reasons. First to say thanks, especially to Dimitris, for suggesting the use of errorest in the ipred library. Second, so that the solution to this problem is in the archives in case it gets asked again.
If one wants to run a k-fold cross-validation
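A hedged reconstruction of the solution being referred to: errorest runs k-fold CV for glm once the predict wrapper turns fitted probabilities into class labels. mydata and a factor response y with levels "0" and "1" are assumed.
library(ipred)
mypredict.glm <- function(object, newdata)
  factor(ifelse(predict(object, newdata, type = "response") > 0.5, 1, 0),
         levels = c(0, 1))   # same levels as the response
errorest(y ~ ., data = mydata, model = glm, family = binomial,
         predict = mypredict.glm,
         est.para = control.errorest(k = 10))$error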
2009 Aug 13
2
randomForest question--problem with ntree
Hi,
I would like to use a random forest model to get an idea about which variables from a dataset may have some prognostic significance in a smallish study. The default number of trees seems to be 500. I tried changing it to ntree=2000 or ntree=200 and the results appear identical. I have changed mtry from mtry=5 to mtry=6 successfully. I have seen the same problem on both a Windows
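A short sketch of why the results look identical: the OOB error is recorded after every tree, so a single large fit shows whether anything changes between 200 and 2000 trees; the curve is usually flat long before that. df and the factor response y are hypothetical names.
library(randomForest)
rf <- randomForest(y ~ ., data = df, ntree = 2000)
plot(rf)                 # OOB error as a function of the number of trees
tail(rf$err.rate[, 1])   # column 1 is the overall OOB error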
2007 Oct 11
1
random forest mtry and mse
I have been using random forest on a data set with 226 sites and 36
explanatory variables (continuous and categorical). When I use
"tune.randomforest" to determine the best value to use in "mtry" there
is a fairly consistent and steady decrease in MSE, with the optimum of
"mtry" usually equal to 1. Why would that occur, and what does it
signify? What I would
2005 Jul 21
4
RandomForest question
Hello,
I'm trying to find out the optimal number of splits (mtry parameter) for a randomForest classification. The classification is binary and there are 32 explanatory variables (mostly factors with each up to 4 levels but also some numeric variables) and 575 cases.
I've seen that although there are only 32 explanatory variables the best classification performance is reached when
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
Full_Name: Noel O'Boyle
Version: 2.1.0
OS: Debian GNU/Linux Sarge
Submission from: (NULL) (131.111.8.96)
(1) Description of error
The 10-fold CV option for the svm function in e1071 appears to give incorrect
results for the rmse.
The example code in (3) uses the example regression data in the svm
documentation. The rmse for internal prediction is 0.24. It is expected the
10-fold CV rmse
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
1. This is _not_ a bug in R itself. Please don't use R's bug reporting
system for contributed packages.
2. This is _not_ a bug in svm() in `e1071'. I believe you forgot to take
sqrt.
3. You really should use the `tot.MSE' component rather than the mean of
the `MSE' component, but this is only a very small difference.
So, instead of spread[i] <- mean(mysvm$MSE), you
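The advice above, as a sketch on a stand-in dataset:
library(e1071)
data(mtcars)
mysvm <- svm(mpg ~ ., data = mtcars, cross = 10)
rmse.cv <- sqrt(mysvm$tot.MSE)   # take the sqrt, and use tot.MSE,
rmse.cv                          # not mean(mysvm$MSE)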
2003 Feb 27
2
PRESS again
Sorry for the repeat.
The PRESS statistic is defined as
PRESS = sum_i (y_i - yhat_(i))^2, where yhat_(i) denotes the i-th predicted
value using all the data except the i-th case (as typically used in linear models).
Thanks again
Jacob
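For a linear model this needs no refitting: a sketch using the standard leverage identity e_(i) = e_i / (1 - h_ii), with a hypothetical lm fit on mydata.
fit <- lm(y ~ ., data = mydata)
press <- sum((residuals(fit) / (1 - hatvalues(fit)))^2)
press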
2005 Jun 24
1
mypredict.
Hi,
I am wondering what "mypredict.lda <- function(object,
newdata) predict(object, newdata = newdata)$class" actually does.
I ran a few errorest commands in the same function on the same dataset using
the same classifier, lda. The only difference is that some use "cv", others use
"boot" and "632plus". They all share one mypredict.lda.
Will it cause any
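What the wrapper does, in short: predict() on an lda fit returns a list, and errorest needs only the factor of predicted labels, so the wrapper extracts the $class component. It does not depend on the resampling scheme, so sharing one wrapper across "cv", "boot" and "632plus" is fine. A sketch:
library(MASS)
fit <- lda(Species ~ ., data = iris)
str(predict(fit, iris[1:3, ]))
## $class     : the predicted labels -- the only part errorest needs
## $posterior : class probabilities  (discarded by the wrapper)
## $x         : discriminant scores  (discarded by the wrapper)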
2010 Mar 23
1
caret package, how can I deal with RFE+SVM wrong message?
Hello,
I am learning the caret package, and I want to use RFE to reduce the
features. I want to use RFE coupled with random forest (RFE+RF) to complete this
task. As we know, there are a number of pre-defined sets of functions, like
random forest (rfFuncs); however, I want to tune the parameter (mtry) during
RFE, so I wrote the code below, but I get an error message. How
can I deal with it?
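The code and message are cut off; a hedged sketch of the usual shape of an RFE + random-forest run, with mtry set by customizing the fit function in a copy of rfFuncs. x and y are assumed to be the predictor data frame and response factor.
library(caret)
library(randomForest)
myFuncs <- rfFuncs
myFuncs$fit <- function(x, y, first, last, ...)
  randomForest(x, y, mtry = max(1, floor(sqrt(ncol(x)))), ...)  # tune mtry here
ctrl  <- rfeControl(functions = myFuncs, method = "cv", number = 10)
rfePr <- rfe(x, y, sizes = c(4, 8, 16), rfeControl = ctrl)
rfePr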