similar to: errorest

Displaying 20 results from an estimated 100 matches similar to: "errorest"

2003 Jun 24
1
errorest: Error in cv.numeric()
Hi, I am trying to get an error estimate for a classification done using lda. The examples work fine; however, I can't get my own code to work. The data are in object d:
> d
  class hydrophobicity    charge    geometry
1     2      6490.0400 1434.9700   610.99902
2     2      1602.0601  400.6030 -5824.00000
3     2       969.0060  260.1360  -415.00000
4     1
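A minimal sketch of how such a cross-validated lda error estimate is usually set up with errorest(), using iris in place of the poster's data d (which is only partially shown above); the wrapper mypredict.lda follows the ipred documentation:

library(MASS)    # lda()
library(ipred)   # errorest()
data(iris)

# errorest() needs a predict wrapper that returns class labels,
# because predict.lda() returns a list.
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class

# 10-fold cross-validated misclassification error for lda on iris
errorest(Species ~ ., data = iris, model = lda,
         predict = mypredict.lda,
         estimator = "cv", est.para = control.errorest(k = 10))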
2005 Jun 24
1
mypredict.
Hi, I am wondering what "mypredict.lda <- function(object, newdata) predict(object, newdata=newdata)$class" actually does. I run a few errorest commands in the same function, on the same dataset, using the same classifier, lda. The only difference is that some use "cv" and others use "boot" and "632plus". They all share one mypredict.lda. Will it cause any
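For context: predict.lda() returns a list (class, posterior, x), and the wrapper simply extracts the predicted class labels, which is the form errorest() expects. The same wrapper can be shared unchanged across the "cv", "boot" and "632plus" estimators. A minimal sketch, with iris standing in for the poster's data:

library(MASS)
library(ipred)
data(iris)

# Keep only the predicted class labels from predict.lda()'s list output
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class

# The same wrapper is reused for each estimator:
errorest(Species ~ ., data = iris, model = lda, predict = mypredict.lda,
         estimator = "cv")
errorest(Species ~ ., data = iris, model = lda, predict = mypredict.lda,
         estimator = "632plus")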
2005 Jan 06
1
different result from the same errorest() in library( ipred)
Dear all, can anybody explain this: different results are obtained even though exactly the same parameters are used in errorest() from library ipred, as in the following?
> errorest(Species ~ ., data=iris, model=randomForest, estimator = "cv", est.para=control.errorest(k=3), mtry=2)$err
[1] 0.03333333
> errorest(Species ~ ., data=iris, model=randomForest, estimator = "cv",
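The differences are most likely just randomness: both the CV fold assignment inside errorest() and randomForest() itself draw from the random number generator, so repeated calls differ unless the seed is fixed. A sketch showing how to make the two calls agree:

library(ipred)
library(randomForest)
data(iris)

# Fix the seed before each call so fold assignment and forest growing
# are reproducible; the two results are then identical.
set.seed(123)
errorest(Species ~ ., data = iris, model = randomForest,
         estimator = "cv", est.para = control.errorest(k = 3), mtry = 2)

set.seed(123)
errorest(Species ~ ., data = iris, model = randomForest,
         estimator = "cv", est.para = control.errorest(k = 3), mtry = 2)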
2004 Jan 09
3
ipred and lda
Dear all, can anybody help me with the program below? The function predict.lda seems to be defined but cannot be used by errorest. The R version is 1.7.1. Thanks in advance, Stefan
----------------
library("MASS")
library("ipred")
data(iris3)
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1],
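The poster's code is cut off above, so the following is only a sketch of the usual pattern: converting the iris3 array into a data frame with a factor response so that errorest()'s formula interface, together with an explicit predict wrapper, can be used with lda:

library(MASS)
library(ipred)
data(iris3)

# Turn the 50 x 4 x 3 array into a data frame with a factor response
train <- data.frame(rbind(iris3[, , 1], iris3[, , 2], iris3[, , 3]),
                    Species = factor(rep(c("s", "c", "v"), each = 50)))

mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class

errorest(Species ~ ., data = train, model = lda,
         predict = mypredict.lda, estimator = "cv")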
2009 Nov 02
1
modifying predict.nnet() to function with errorest()
Greetings, I am having trouble calculating artificial neural network misclassification errors using errorest() from the ipred package. I have had no problems estimating the values with randomForest() or svm(), but can't seem to get it to work with nnet(). I believe this is due to the output of the predict.nnet() function within cv.factor(). Below is a quick example of the problem I'm
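The usual fix is a predict wrapper: predict.nnet(type = "class") returns a character vector, while errorest()'s internal cv.factor() expects a factor with the same levels as the response. A minimal sketch, with iris as example data (size = 4 is an arbitrary choice):

library(nnet)
library(ipred)
data(iris)

# Coerce the character output of predict.nnet() back to a factor with the
# response's levels so errorest() can tabulate misclassifications.
mypredict.nnet <- function(object, newdata)
  factor(predict(object, newdata = newdata, type = "class"),
         levels = levels(iris$Species))

set.seed(1)
errorest(Species ~ ., data = iris, model = nnet,
         predict = mypredict.nnet, size = 4, trace = FALSE,
         estimator = "cv", est.para = control.errorest(k = 10))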
2005 Jan 10
0
Standard errors and boxplots with 632plus error estimator, "errorest"
Dear R-users, I'd like to estimate standard errors (for lda) and make a boxplot with the "632plus" and "boot" error estimators included in package ipred (method: errorest). The "boot" estimator returns only a standard deviation but not the whole error data. Thank you in advance, regards, Antoine
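errorest() only reports a summary estimate, so one way to get a whole distribution of errors for a boxplot is to run the bootstrap by hand and keep the per-replicate out-of-bag errors. A rough sketch for lda on iris, standing in for the poster's data:

library(MASS)
data(iris)

set.seed(1)
B <- 25
boot.err <- numeric(B)

# Manual bootstrap: fit lda on each bootstrap sample and record the
# misclassification error on the out-of-bag observations.
for (b in seq_len(B)) {
  idx  <- sample(nrow(iris), replace = TRUE)
  oob  <- setdiff(seq_len(nrow(iris)), idx)
  fit  <- lda(Species ~ ., data = iris[idx, ])
  pred <- predict(fit, iris[oob, ])$class
  boot.err[b] <- mean(pred != iris$Species[oob])
}

boxplot(boot.err, ylab = "out-of-bag misclassification error")
sd(boot.err)   # a rough spread for the bootstrap error estimate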
2009 Apr 25
1
Overlapping parameters "k" in different functions in "ipred"
Dear List, I have a question regarding the "ipred" package. Under 10-fold CV, for different knn values (k = 1, 3, ..., 25), I am getting the same misclassification errors:
#############################################
library(ipred)
data(iris)
cv.k = 10 ## 10-fold cross-validation
bwpredict.knn <- function(object, newdata) predict.ipredknn(object, newdata, type="class")
for (i in
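One workaround, assuming the identical errors come from the knn neighbour count clashing with other arguments named k (notably the k used for the CV folds in control.errorest()), is to wrap ipredknn() so the neighbour count gets its own argument name. A sketch; knn.model and kn are hypothetical names, not part of ipred:

library(ipred)
data(iris)

bwpredict.knn <- function(object, newdata)
  predict.ipredknn(object, newdata, type = "class")

# Give the number of neighbours its own name so it cannot collide with the
# k that control.errorest() uses for the number of CV folds.
knn.model <- function(formula, data, kn)
  ipredknn(formula, data = data, k = kn)

for (kn in c(1, 3, 5, 25)) {
  err <- errorest(Species ~ ., data = iris, model = knn.model, kn = kn,
                  predict = bwpredict.knn,
                  estimator = "cv", est.para = control.errorest(k = 10))
  cat("neighbours =", kn, "\n")
  print(err)
}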
2006 Oct 08
0
Problem getting 632plus error using randomForest with ipred!
Hello! I'm Taeho, a graduate student in South Korea. In order to get .632+ bootstrap error using random forest, I have tried to use 'ipred' package; more specifically the function 'errorest' has been used. Following the guidelines, I made a simple command line like below: error<-errorest(class ~ ., data=data, model=randomForest, estimator = "632plus")$err
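The poster's error message is cut off above, so the following only sketches the usual pattern for a 632plus estimate with randomForest, on iris and with a small nboot to keep it quick:

library(ipred)
library(randomForest)
data(iris)

# predict.randomForest already returns factor class labels; an explicit
# wrapper just makes the intent clear.
rf.predict <- function(object, newdata)
  predict(object, newdata = newdata)

set.seed(1)
errorest(Species ~ ., data = iris, model = randomForest,
         predict = rf.predict, estimator = "632plus",
         est.para = control.errorest(nboot = 25))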
2005 Mar 18
2
logistic model cross validation resolved
This post is NOT a question, but an answer. For readers please disregard all earlier posts by myself about this question. I'm posting for two reasons. First to say thanks, especially to Dimitris, for suggesting the use of errorest in the ipred library. Second, so that the solution to this problem is in the archives in case it gets asked again. If one wants to run a k-fold cross-validation
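The solution referred to amounts to giving errorest() a predict wrapper that turns glm's fitted probabilities back into class labels. A sketch with simulated data; df, mypredict.glm and the 0.5 cutoff are illustrative choices, not the original poster's code:

library(ipred)

# Simulated two-class data, purely for illustration
set.seed(1)
df <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
df$y <- factor(ifelse(df$x1 - df$x2 + rnorm(200) > 0, "yes", "no"))

# glm() predicts probabilities; convert them to class labels so that
# errorest() can count misclassifications.
mypredict.glm <- function(object, newdata)
  factor(ifelse(predict(object, newdata = newdata, type = "response") > 0.5,
                "yes", "no"), levels = levels(df$y))

errorest(y ~ x1 + x2, data = df, model = glm, family = binomial,
         predict = mypredict.glm,
         estimator = "cv", est.para = control.errorest(k = 10))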
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
Full_Name: Noel O'Boyle
Version: 2.1.0
OS: Debian GNU/Linux Sarge
Submission from: (NULL) (131.111.8.96)
(1) Description of error
The 10-fold CV option for the svm function in e1071 appears to give incorrect results for the rmse. The example code in (3) uses the example regression data in the svm documentation. The rmse for internal prediction is 0.24. It is expected the 10-fold CV rmse
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
1. This is _not_ a bug in R itself. Please don't use R's bug reporting system for contributed packages.
2. This is _not_ a bug in svm() in `e1071'. I believe you forgot to take sqrt.
3. You really should use the `tot.MSE' component rather than the mean of the `MSE' component, but this is only a very small difference. So, instead of spread[i] <- mean(mysvm$MSE), you
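In other words, the built-in cross-validation reports mean squared errors, so the fix is to take the square root of tot.MSE before comparing with an RMSE. A short sketch on an arbitrary regression example:

library(e1071)
data(iris)

# svm regression with its built-in 10-fold cross-validation
mysvm <- svm(Petal.Width ~ Petal.Length, data = iris, cross = 10)

mysvm$tot.MSE        # cross-validated mean squared error
sqrt(mysvm$tot.MSE)  # the corresponding RMSE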
2003 Feb 27
2
PRESS again
Sorry for the repeat. The PRESS statistic is defined as the sum over i of (y_i - yhat_(i))^2, where yhat_(i) denotes the ith predicted value computed using all the data except the ith case (as used typically in linear models). Thanks again, Jacob. Jacob L van Wyk, Department of Mathematics and Statistics, Rand Afrikaans University, P O Box 524, Auckland Park 2006, South Africa. Tel: +27-11-489-3080, Fax: +27-11-489-2832
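For linear models the leave-one-out predictions do not require refitting: the deleted residual equals the ordinary residual divided by one minus the leverage, so PRESS can be computed directly from a fitted lm. A sketch, with an explicit leave-one-out loop as a check:

# PRESS = sum over i of (y_i - yhat_(i))^2, computed via leverages
PRESS <- function(fit) sum((residuals(fit) / (1 - hatvalues(fit)))^2)

fit <- lm(dist ~ speed, data = cars)
PRESS(fit)

# The same number from an explicit leave-one-out loop, for checking:
loo <- sapply(seq_len(nrow(cars)), function(i) {
  f <- lm(dist ~ speed, data = cars[-i, ])
  cars$dist[i] - predict(f, cars[i, ])
})
sum(loo^2)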
2012 Nov 09
0
10-Fold Cross Validation AND Random Forest
Hi, I am using the randomForest package to classify observations into one of two classes. My data are unbalanced, with the minority class accounting for 7% of the total data set. I have heard that 10-fold cross-validation can help me improve classification, but being new at most of this, it's not something I can do from scratch on my own. So I have spent all this morning trying to find a
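A rough sketch of a stratified 10-fold cross-validation for randomForest, with simulated unbalanced data standing in for the real data set; stratifying the fold assignment by class keeps minority-class cases in every fold:

library(randomForest)

# Simulated unbalanced two-class data (minority class roughly 7-9%)
set.seed(1)
n <- 1000
df <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
df$class <- factor(ifelse(runif(n) < 0.07 + 0.1 * (df$x1 > 1), "rare", "common"))

# Assign each row to one of 10 folds, separately within each class
k <- 10
folds <- integer(n)
for (cl in levels(df$class)) {
  idx <- which(df$class == cl)
  folds[idx] <- sample(rep(1:k, length.out = length(idx)))
}

# Fit on 9 folds, predict the held-out fold, and average the error
cv.err <- sapply(1:k, function(i) {
  fit  <- randomForest(class ~ ., data = df[folds != i, ])
  pred <- predict(fit, df[folds == i, ])
  mean(pred != df$class[folds == i])
})
mean(cv.err)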
2011 Mar 23
2
predict.rpart help
Hi Everyone, Is there a way to get predict.rpart() to return the nodes reached by new examples in addition to the predicted probabilities it already returns? In other words, I would like to know the leaf node in the tree object that each new example drops down to. Thanks in advance for your help. Osei
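predict.rpart() itself does not return node numbers, but one way to get them, assuming the partykit package is available, is to convert the fitted tree with as.party() and predict the node id. A sketch:

library(rpart)
library(partykit)   # provides as.party() for rpart trees

fit <- rpart(Species ~ ., data = iris)
newdata <- iris[c(1, 51, 101), ]

# Predicted class probabilities from rpart itself
predict(fit, newdata)

# Terminal node reached by each new observation, via the partykit
# representation of the same tree
predict(as.party(fit), newdata, type = "node")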
2005 Jul 29
0
PLS component selection for GPLS question
How do I select the number of PLS components for GPLS for data sets with few samples? Concrete problem: my data set has 9 samples of class A and 37 of class B, with 254 descriptors. In the paper "Classification Using Generalized Partial Least Squares", Beiying Ding, Robert Gentleman, Bioconductor Project Working Papers, 2004, paper 5, Section 2.6 (Assessing Prediction): Cite:
2005 Jan 06
1
leave-one-out cross validation for randomForest
Dear all, Can I get the leave-one-out cross-validation error of randomForest in R? I only found tune(), which gives the 10-fold cross-validation error. Thanks for any information. Xin LIU
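errorest() can do this directly: leave-one-out is just k-fold cross-validation with k equal to the number of observations (note that randomForest also reports an out-of-bag error for free, which is often used instead). A sketch on iris; refitting the forest n times is slow for large data:

library(ipred)
library(randomForest)
data(iris)

n <- nrow(iris)

# Leave-one-out = n-fold cross-validation
errorest(Species ~ ., data = iris, model = randomForest,
         estimator = "cv", est.para = control.errorest(k = n))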
2009 Apr 11
2
leave-one-out in R
Hi Everyone, I am new to using R and I was wondering if anybody knows how to do leave-one-out cross-validation in R. Thanks, Charles
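A bare-bones leave-one-out loop; lda on iris is used purely as an example, and any fit/predict pair can be substituted:

library(MASS)
data(iris)

n <- nrow(iris)
pred <- character(n)

# Refit the model n times, each time predicting the single held-out row
for (i in 1:n) {
  fit     <- lda(Species ~ ., data = iris[-i, ])
  pred[i] <- as.character(predict(fit, iris[i, ])$class)
}

mean(pred != iris$Species)   # leave-one-out misclassification error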
2004 Feb 25
2
LOOCV using R
Can someone help me with performing leave-one-out cross-validation using R (the model built is a Cox model)? Thanks. David Verbel, MPH, Senior Biostatistician, Aureon Biosciences, 28 Wells Avenue, Yonkers, NY 10701. Phone: (914) 377-4021, Fax: (914) 377-4001
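A sketch of a leave-one-out loop for a Cox model, using the lung data shipped with the survival package as stand-in data and collecting the cross-validated risk scores; how those scores are then summarised (e.g. with a concordance index) is a separate choice:

library(survival)

# Keep complete cases of a few covariates from the lung example data
lung2 <- na.omit(lung[, c("time", "status", "age", "sex", "ph.ecog")])
n  <- nrow(lung2)
lp <- numeric(n)

# Refit the Cox model n times, each time predicting the linear predictor
# (risk score) of the single held-out subject.
for (i in 1:n) {
  fit   <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung2[-i, ])
  lp[i] <- predict(fit, newdata = lung2[i, ], type = "lp")
}

head(lp)   # cross-validated risk scores, one per subject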
2005 Jan 21
2
cross validation
How do I select a training data set and a test data set from the original data for performing cross-validation?
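A basic sketch: a random split for a single training/test division, plus a fold assignment for k-fold cross-validation:

data(iris)

set.seed(1)
n <- nrow(iris)

# Random 2/3 : 1/3 split into training and test sets
train.idx <- sample(n, size = round(2/3 * n))
train <- iris[train.idx, ]
test  <- iris[-train.idx, ]

# For k-fold cross-validation, instead assign every row to one of k folds;
# rows with folds == i form the test set of fold i, the rest are training data.
k <- 5
folds <- sample(rep(1:k, length.out = n))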
2009 Dec 31
1
cross validation for species distribution
Dear all, I want to perform cross-validation for the species data used in species distribution models. Please kindly suggest any package containing cross-validation suited to this purpose. Thank you. Elaine
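One commonly suggested option is the dismo package, which is aimed at species distribution modelling and provides a kfold() helper for partitioning occurrence records. A sketch, where occ is a made-up stand-in for real occurrence data:

library(dismo)   # assumed available; provides kfold()

# Hypothetical occurrence table standing in for real species records
occ <- data.frame(lon = runif(100, -10, 10), lat = runif(100, 40, 50))

group <- kfold(occ, k = 5)   # assigns each record to one of 5 folds
train <- occ[group != 1, ]   # folds 2-5 used for model fitting
test  <- occ[group == 1, ]   # fold 1 held out for evaluation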