Displaying 20 results from an estimated 23 matches for "errorest".
2005 Jan 06
1
different result from the same errorest() in library( ipred)
Dear all,
Can anybody explain this: different results are obtained even though exactly the same parameters are used in errorest() from library ipred, as follows?
errorest(Species ~ ., data=iris, model=randomForest, estimator = "cv", est.para=control.errorest(k=3), mtry=2)$err
[1] 0.03333333
> errorest(Species ~ ., data=iris, model=randomForest, estimator = "cv", est.para=control.errorest(k=3),...
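A likely explanation (a guess on my part, not a confirmed answer): both the assignment of observations to cross-validation folds and randomForest itself draw from the random number generator, so two calls with identical arguments will generally differ unless the seed is fixed immediately before each call. A minimal sketch:
library(ipred)
library(randomForest)
data(iris)

## fixing the seed right before each call should make the folds and the
## forests identical, and hence the two error estimates as well
set.seed(290875)
errorest(Species ~ ., data = iris, model = randomForest, estimator = "cv",
         est.para = control.errorest(k = 3), mtry = 2)$error

set.seed(290875)
errorest(Species ~ ., data = iris, model = randomForest, estimator = "cv",
         est.para = control.errorest(k = 3), mtry = 2)$error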
2005 Jun 23
1
errorest
Hi,
I am using errorest function from ipred package.
I am hoping to perform "bootstrap 0.632+" and "bootstrap leave one out".
According to the manual page for errorest, I use the following command:
ce632[i]<-errorest(ytrain ~., data=mydata, model=lda,
estimator=c("boot","632plus"...
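A minimal sketch of how I would set this up, with one estimator per call (whether a vector such as c("boot","632plus") is accepted in a single call is something I have not verified); as far as I understand, ipred's "boot" estimator is already of the leave-one-out (out-of-bag) flavour:
library(ipred)
library(MASS)
data(iris)

## lda's predict() returns a list, so extract the factor of predicted classes
mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class

set.seed(1)
err.boot <- errorest(Species ~ ., data = iris, model = lda,
                     predict = mypredict.lda, estimator = "boot",
                     est.para = control.errorest(nboot = 25))$error

set.seed(1)
err.632plus <- errorest(Species ~ ., data = iris, model = lda,
                        predict = mypredict.lda, estimator = "632plus",
                        est.para = control.errorest(nboot = 25))$error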
2009 Nov 02
1
modifying predict.nnet() to function with errorest()
Greetings,
I am having trouble calculating artificial neural network
misclassification errors using errorest() from the ipred package.
I have had no problems estimating the values with randomForest()
or svm(), but can't seem to get it to work with nnet(). I believe
this is due to the output of the predict.nnet() function within
cv.factor(). Below is a quick example of the problem I'm
experien...
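The usual workaround, sketched below with my own choice of data and network size: predict() on an nnet classification fit returns a matrix of fitted probabilities by default, while errorest() expects a factor of predicted classes, so the wrapper asks for type = "class" and converts to a factor:
library(ipred)
library(nnet)
data(iris)

## return factor predictions with all class levels, as errorest() expects
mypredict.nnet <- function(object, newdata)
  factor(predict(object, newdata = newdata, type = "class"),
         levels = levels(iris$Species))

set.seed(1)
errorest(Species ~ ., data = iris, model = nnet, predict = mypredict.nnet,
         estimator = "cv", est.para = control.errorest(k = 10),
         size = 4, trace = FALSE)$error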
2009 Apr 25
1
Overlapping parameters "k" in different functions in "ipred"
...getting same misclassification errors:
#############################################
library(ipred)
data(iris)
cv.k = 10 ## 10-fold cross-validation
bwpredict.knn <- function(object, newdata) predict.ipredknn(object, newdata, type="class")
for (i in seq(1,25,2)){
set.seed(19)
a<-errorest(Species ~ ., data=iris, model=ipredknn, estimator="cv", est.para=control.errorest(k=cv.k), predict=bwpredict.knn, nk = i)$err
print(a)
}
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[1] 0.02666667
[...
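My best guess at the cause: ipredknn()'s neighbour count is called k, and nk is, as far as I can tell, simply ignored, so every iteration runs with the default of 5 neighbours and produces the same error. One workaround is a wrapper whose neighbour-count argument has a name that cannot clash with control.errorest()'s k (nn below is my own made-up name):
library(ipred)
data(iris)

## 'nn' sets the number of neighbours; 'k' remains free for the CV folds
myknn <- function(formula, data, nn = 5, ...)
  ipredknn(formula, data = data, k = nn, ...)
mypredict.knn <- function(object, newdata)
  predict.ipredknn(object, newdata, type = "class")

for (i in seq(1, 25, 2)) {
  set.seed(19)
  a <- errorest(Species ~ ., data = iris, model = myknn, nn = i,
                estimator = "cv", est.para = control.errorest(k = 10),
                predict = mypredict.knn)$error
  print(a)
}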
2003 Jun 24
1
errorest: Error in cv.numeric()
...53.00000
182 1 428.8740 130.8020 -328.00000
183 1 287.5540 98.0767 34.00000
Since predict.lda does not simply return the classification,
it is wrapped, as in the docs:
mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class
In trying errorest() I get the message
> errorest(class ~ hydrophobicity + charge, data=d, model=lda, predict=mypredict.lda)
Error in cv.numeric(y, formula, data, model = model, predict = predict, :
predict does not return numerical values
even though a "manual" lda seems to provide the co...
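My guess at the cause (the data are not shown in full above): if the response column class is numeric, e.g. coded 0/1, errorest() treats the task as regression and dispatches to cv.numeric(), which then rejects the factor returned by mypredict.lda. Converting the response to a factor should route the call to the classification code; a sketch with a stand-in data frame of my own:
library(ipred)
library(MASS)

set.seed(3)
d <- data.frame(class = rep(0:1, each = 50),
                hydrophobicity = rnorm(100),
                charge = rnorm(100))
d$class <- factor(d$class)   # the key step: make errorest() see a classification task

mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class
errorest(class ~ hydrophobicity + charge, data = d, model = lda,
         predict = mypredict.lda)$error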
2005 Mar 18
2
logistic model cross validation resolved
This post is NOT a question, but an answer. Readers, please disregard all my earlier posts about this question.
I'm posting for two reasons. First to say thanks, especially to Dimitris, for suggesting the use of errorest in the ipred library. Second, so that the solution to this problem is in the archives in case it gets asked again.
If one wants to run a k-fold cross-validation using specified folds, and get the misclassification error and root mean squared error, this is what you do.
Below is a script that will d...
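Since the poster's script is cut off above, here is a sketch of my own (not the original solution) of k-fold cross-validation over user-specified folds for a logistic model, collecting both the misclassification rate and the RMSE of the predicted probabilities:
set.seed(7)
d    <- data.frame(y = rbinom(200, 1, 0.4), x1 = rnorm(200), x2 = rnorm(200))
fold <- sample(rep(1:5, length.out = nrow(d)))   # the user-specified folds

miscls <- rmse <- numeric(5)
for (j in 1:5) {
  fit <- glm(y ~ x1 + x2, data = d[fold != j, ], family = binomial)
  p   <- predict(fit, newdata = d[fold == j, ], type = "response")
  obs <- d$y[fold == j]
  miscls[j] <- mean((p > 0.5) != obs)          # misclassification in fold j
  rmse[j]   <- sqrt(mean((p - obs)^2))         # RMSE in fold j
}
mean(miscls); mean(rmse)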
2005 Jan 10
0
Standard errors and boxplots with 632plus error estimator, "errorest"
Dear R-users,
I'd like to estimate standard errors (for lda) and make a boxplot with the
"632plus" and "boot" error estimators included in package ipred (method:
errorest). The "boot" estimator returns only a standard deviation but not
the whole error data.
Thank you in advance,
regards,
Antoine
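One way around this, sketched below with lda and my own choice of settings, is to run the bootstrap by hand so the per-replicate out-of-bag errors are available for a boxplot and a simple standard error:
library(MASS)
data(iris)

set.seed(2)
B <- 50
oob.err <- numeric(B)
for (b in 1:B) {
  idx  <- sample(nrow(iris), replace = TRUE)      # bootstrap sample
  oob  <- setdiff(seq_len(nrow(iris)), idx)       # out-of-bag observations
  fit  <- lda(Species ~ ., data = iris[idx, ])
  pred <- predict(fit, newdata = iris[oob, ])$class
  oob.err[b] <- mean(pred != iris$Species[oob])
}
boxplot(oob.err, ylab = "out-of-bag misclassification rate")
sd(oob.err)   # a spread measure to accompany the point estimate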
2004 Jan 09
3
ipred and lda
...is3[tr,,2], iris3[tr,,3]);
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3]);
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)));
z <- lda(train, cl);
predict(z, test)$class;
data.frame(class=cl, train);
flowers <- data.frame(class=cl, train);
errorest(class ~ ., data=flowers, model=lda, estimator="cv",
predict=predict.lda);
Error-Message is :
Error: Object "predict.lda" not found
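My guess at the cause: predict.lda is an S3 method inside MASS and may not be visible by name, so the usual fix is a small wrapper that calls the generic predict() and extracts $class. A self-contained sketch (the tr/train construction follows the standard MASS lda example):
library(ipred)
library(MASS)

set.seed(4)
tr      <- sample(1:50, 25)
train   <- rbind(iris3[tr, , 1], iris3[tr, , 2], iris3[tr, , 3])
cl      <- factor(c(rep("s", 25), rep("c", 25), rep("v", 25)))
flowers <- data.frame(class = cl, train)

## pass a wrapper instead of predict.lda by name
mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class
errorest(class ~ ., data = flowers, model = lda, estimator = "cv",
         predict = mypredict.lda)$error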
2006 Oct 08
0
Problem in getting 632plus error using randomForest by ipred!
Hello!
I'm Taeho, a graduate student in South Korea.
In order to get the .632+ bootstrap error using random forest, I have tried to use the 'ipred' package; more specifically, the function 'errorest' has been used.
Following the guidelines, I made a simple command line like below:
error<-errorest(class ~ ., data=data, model=randomForest, estimator = "632plus")$err
however, I got an error message saying that:
randomForest.default(m, y, ...) :
Can't have empty c...
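A guess at the cause (the data themselves are not shown): randomForest complains about empty classes when the response factor carries unused levels, which easily happens after subsetting. Dropping the unused levels before calling errorest() is the first thing I would try; a sketch with my own stand-in data:
library(ipred)
library(randomForest)

dat <- iris[iris$Species != "setosa", ]      # subsetting leaves an empty level behind
dat$Species <- droplevels(dat$Species)       # <- the fix

set.seed(5)
errorest(Species ~ ., data = dat, model = randomForest,
         estimator = "632plus")$error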
2005 Jun 24
1
mypredict.
Hi,
I am wondering what "mypredict.lda<-function(object,
newdata)predict(object, newdata=newdata)$class" actually does.
I run a few errorest commands in the same function on the same dataset using
the same classifier, lda. The only difference is that some use "cv", others use
"boot" and "632plus". They all share one mypredict.lda.
Will it cause any problem?
Regards, justin
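For context, a small sketch of what the wrapper does: predict() on an lda fit returns a list (class, posterior, x), and errorest() only needs the factor of predicted classes, so the wrapper pulls out $class. As far as I can tell the same wrapper can be reused for "cv", "boot" and "632plus" without any problem:
library(MASS)
data(iris)

fit <- lda(Species ~ ., data = iris)
str(predict(fit, newdata = iris[1:3, ]))   # a list with $class, $posterior, $x

mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class
mypredict.lda(fit, iris[1:3, ])            # just the factor that errorest() needs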
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
...le regression data in the svm
documentation. The rmse for internal prediction is 0.24. It is expected the
10-fold CV rmse should be bigger, but the result obtained using the "cross=10"
option is 0.07. When the 10-fold CV is conducted either 'by hand' (not shown
below) or using the errorest function in ipred (shown below) the answer is
closer to 0.27, a more reasonable value.
(2) Description of system
I'm using the Debian Sarge version of R:
R : Copyright 2005, The R Foundation for Statistical Computing
Version 2.1.0 (2005-04-18), ISBN 3-900051-07-0
svm is in the e1071 p...
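A sketch of my own of the comparison described above, on a toy regression problem in the spirit of the svm() help page (the exact numbers in the report are not reproduced, since they depend on the data and seed):
library(e1071)
library(ipred)

set.seed(1)
x <- seq(0.1, 5, by = 0.05)
y <- log(x) + rnorm(length(x), sd = 0.2)
d <- data.frame(x = x, y = y)

m <- svm(y ~ x, data = d, cross = 10)
summary(m)    # includes svm()'s own 10-fold cross-validation figure

## independent 10-fold CV; for a numeric response errorest() reports the RMSE
errorest(y ~ x, data = d, model = svm, estimator = "cv",
         est.para = control.errorest(k = 10))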
2003 Feb 27
2
PRESS again
Sorry for the repeat.
The PRESS statistic is defined as
sum(y-yhat(i))^2, where yhat(i) denotes the ith predicted value using
all the data except the ith case (as used typically in linear models).
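For a linear model this can be computed without an explicit refitting loop, using the standard leave-one-out identity e(i) = e_i / (1 - h_ii); a small sketch with an example model of my own choosing:
fit   <- lm(mpg ~ wt + hp, data = mtcars)                 # example model
press <- sum((residuals(fit) / (1 - hatvalues(fit)))^2)   # PRESS statistic
press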
Thanks again
Jacob
Jacob L van Wyk
Department of Mathematics and Statistics
Rand Afrikaans University
P O Box 524
Auckland Park 2006
South Africa
Tel: +27-11-489-3080
Fax: +27-11-489-2832
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
...umentation. The rmse for internal prediction is 0.24. It
> is expected the
> 10-fold CV rmse should be bigger, but the result obtained
> using the "cross=10"
> option is 0.07. When the 10-fold CV is conducted either 'by
> hand' (not shown
> below) or using the errorest function in ipred (shown below)
> the answer is
> closer to 0.27, a more reasonable value.
>
> (2) Description of system
>
> I'm using the Debian Sarge version of R:
> R : Copyright 2005, The R Foundation for Statistical Computing
> Version 2.1.0 (2005-04-18),...
2012 Nov 09
0
10-Fold Cross Validation AND Random Forest
...'s not something I can do from scratch on my own. So I have spent all this morning trying to find a good example script of 10-fold cross-validation and RandomForest where the ratio of classes is maintained. I have done multiple Google searches. The only thing I came across was the ipred package and errorest. But when I was going through the ?errorest examples, I did not see examples of actual classification output, only error rates. So I turn to you guys. Do you have a script I can refer to and learn from to accomplish what I need?
Dan
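A sketch of what I would try, assuming that control.errorest()'s strat argument does stratified (class-ratio-preserving) sampling and that predictions returns the per-observation classifications; both are assumptions worth checking against the ipred manual:
library(ipred)
library(randomForest)
data(iris)

set.seed(42)
out <- errorest(Species ~ ., data = iris, model = randomForest,
                estimator = "cv",
                est.para = control.errorest(k = 10, strat = TRUE,
                                            predictions = TRUE))
out$error              # 10-fold CV misclassification rate
head(out$predictions)  # the actual class predictions, if the argument works as assumed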
2004 Feb 25
2
LOOCV using R
Can someone help me with performing leave-one-out cross-validation using
R (model built is a Cox model)? Thanks.
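One plain way to do this, sketched below with an example data set and covariates of my own choosing, is a manual leave-one-out loop that refits the Cox model and collects the out-of-sample linear predictor for each left-out subject:
library(survival)

lung2 <- na.omit(survival::lung[, c("time", "status", "age", "sex", "ph.ecog")])

loo_lp <- numeric(nrow(lung2))
for (i in seq_len(nrow(lung2))) {
  fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung2[-i, ])
  loo_lp[i] <- predict(fit, newdata = lung2[i, , drop = FALSE], type = "lp")
}
## loo_lp can then be assessed against Surv(time, status), e.g. via a concordance measure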
---------------------------------------------
David Verbel, MPH
Senior Biostatistician
Aureon Biosciences
28 Wells Avenue
Yonkers, NY 10701
Phone: (914) 377-4021
Fax: (914) 377-4001
---------------------------------------------
2005 Jan 06
1
leave-one-out cross validation for randomForest
Dear all,
Can I get the leave-one-out cross-validation error of randomForest in
R? I only found tune(), which gives the 10-fold cross-validation error.
Thanks for any information.
Xin LIU
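Leave-one-out cross-validation is just k-fold cross-validation with k equal to the number of observations, so errorest() from ipred can do it; a minimal sketch (note that this refits the forest once per observation, so it is slow):
library(ipred)
library(randomForest)
data(iris)

set.seed(1)
errorest(Species ~ ., data = iris, model = randomForest,
         estimator = "cv",
         est.para = control.errorest(k = nrow(iris)))$error   # k = n gives leave-one-out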
2005 Jan 21
2
cross validation
How do I select the training data set and test data set from the original data for performing cross-validation?
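A sketch of the usual manual approach, with my own choice of data and classifier: assign each row to a fold at random, then loop, holding one fold out as the test set each time:
set.seed(123)
k    <- 5
fold <- sample(rep(1:k, length.out = nrow(iris)))   # random fold labels

err <- numeric(k)
for (j in 1:k) {
  train <- iris[fold != j, ]                         # training set for fold j
  test  <- iris[fold == j, ]                         # held-out test set
  fit   <- MASS::lda(Species ~ ., data = train)      # any classifier goes here
  err[j] <- mean(predict(fit, newdata = test)$class != test$Species)
}
mean(err)   # cross-validated misclassification rate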
2005 Jul 29
0
PLS component selection for GPLS question
...on of various
classifiers based solely on LOOCV classification errors may not be
reliable."
the authors use random splitting to determine the number of PLS
components in GPLS, but I'm still not sure how to
choose the right number of PLS components for my data set.
I used the function errorest() from package ipred to estimate the
error rates and gpls() with the Firth procedure switched on.
The attached PDF graphic illustrates the problem for my data set.
S_n is the model sensitivity and S_p the model specificity.
With 4 components I get the best cross-validation error rate of 17% and
with 5 co...
2009 Apr 11
2
leave-one-out in R
Hi Everyone,
I am new to using R and I was wondering if anybody knows how to do leave-one-out cross-validation in R.
Thanks
Charles
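A small sketch of the generic recipe, with lda() on iris as an example of my own choosing: fit on all observations but one, predict the left-out observation, repeat, and average:
library(MASS)
data(iris)

pred <- character(nrow(iris))
for (i in seq_len(nrow(iris))) {
  fit     <- lda(Species ~ ., data = iris[-i, ])
  pred[i] <- as.character(predict(fit, newdata = iris[i, ])$class)
}
mean(pred != iris$Species)   # leave-one-out misclassification rate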
2009 Dec 31
1
cross validation for species distribution
Dear,
I want to perform cross-validation on the species data for species distribution
models.
Please kindly suggest any package containing cross-validation suited to this
purpose.
Thank you.
Elaine