similar to: leave-one-out in R

Displaying 20 results from an estimated 1000 matches similar to: "leave-one-out in R"

2009 Apr 25
1
Overlapping parameters "k" in different functions in "ipred"
Dear List, I have a question regarding the "ipred" package. Under 10-fold CV, for different knn values of k (k = 1, 3, ..., 25), I am getting the same misclassification error: ############################################# library(ipred) data(iris) cv.k = 10 ## 10-fold cross-validation bwpredict.knn <- function(object, newdata) predict.ipredknn(object, newdata, type="class") for (i in
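The loop above is cut off; a minimal sketch of one way to vary the number of neighbours under a fixed 10-fold CV, assuming the neighbour count is meant to be forwarded to ipredknn through errorest's "..." argument while the fold count is set separately via control.errorest(), so the two "k" parameters no longer collide:

library(ipred)
data(iris)
cv.k <- 10
bwpredict.knn <- function(object, newdata)
  predict.ipredknn(object, newdata, type = "class")
set.seed(1)
for (nn in seq(1, 25, by = 2)) {
  ## "k = nn" passes through errorest's "..." to ipredknn (number of neighbours);
  ## the CV fold count is the other "k", given to control.errorest()
  err <- errorest(Species ~ ., data = iris, model = ipredknn,
                  predict = bwpredict.knn, estimator = "cv",
                  est.para = control.errorest(k = cv.k), k = nn)$error
  cat("neighbours =", nn, " CV error =", err, "\n")
}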
2005 Jan 06
1
different result from the same errorest() in library( ipred)
Dear all, can anybody explain this: different results are obtained when exactly the same parameters are used in errorest() from library ipred, as follows? errorest(Species ~ ., data=iris, model=randomForest, estimator = "cv", est.para=control.errorest(k=3), mtry=2)$err [1] 0.03333333 > errorest(Species ~ ., data=iris, model=randomForest, estimator = "cv",
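A likely explanation, offered as an assumption: both the fold assignment inside errorest() and randomForest() itself draw from the random number generator, so two otherwise identical calls differ unless the seed is fixed first. A sketch:

library(ipred)
library(randomForest)
data(iris)
set.seed(42)
e1 <- errorest(Species ~ ., data = iris, model = randomForest,
               estimator = "cv", est.para = control.errorest(k = 3),
               mtry = 2)$error
set.seed(42)
e2 <- errorest(Species ~ ., data = iris, model = randomForest,
               estimator = "cv", est.para = control.errorest(k = 3),
               mtry = 2)$error
identical(e1, e2)   # TRUE: same seed, same folds, same forests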
2004 Jan 09
3
ipred and lda
Dear all, can anybody help me with the program below? The function predict.lda seems to be defined but cannot be used by errorest. The R version is 1.7.1. Thanks in advance, Stefan ---------------- library("MASS"); library("ipred"); data(iris3); tr <- sample(1:50, 25); train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3]); test <- rbind(iris3[-tr,,1],
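errorest() compares predictions with the observed classes, but predict() on an lda fit returns a list ($class, $posterior, $x) rather than a vector of labels, so it needs a small wrapper passed via the predict argument. A minimal sketch on iris (the formula interface is used here instead of the iris3 matrices for brevity):

library(MASS)
library(ipred)
data(iris)
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class   # extract just the class labels
errorest(Species ~ ., data = iris, model = lda,
         estimator = "cv", predict = mypredict.lda)$error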
2009 Nov 02
1
modifying predict.nnet() to function with errorest()
Greetings, I am having trouble calculating artificial neural network misclassification errors using errorest() from the ipred package. I have had no problems estimating the values with randomForest() or svm(), but can't seem to get it to work with nnet(). I believe this is due to the output of the predict.nnet() function within cv.factor(). Below is a quick example of the problem I'm
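The example is cut off above; one plausible fix, offered as an assumption: predict() on an nnet fit with type = "class" returns a character vector, so wrapping it to return a factor with the response's levels lets errorest() tabulate misclassifications. A sketch:

library(nnet)
library(ipred)
data(iris)
mypredict.nnet <- function(object, newdata)
  factor(predict(object, newdata = newdata, type = "class"),
         levels = levels(iris$Species))
set.seed(1)
errorest(Species ~ ., data = iris, model = nnet, predict = mypredict.nnet,
         estimator = "cv", est.para = control.errorest(k = 10),
         size = 4, trace = FALSE)$error   # size and trace are passed on to nnet()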
2005 Jun 23
1
errorest
Hi, I am using the errorest function from the ipred package. I am hoping to perform "bootstrap 0.632+" and "bootstrap leave one out". According to the manual page for errorest, I use the following command: ce632[i]<-errorest(ytrain ~., data=mydata, model=lda, estimator=c("boot","632plus"), predict=mypredict.lda)$error It didn't work. I then tried the
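errorest() takes a single estimator per call, so the two estimates have to come from two separate calls rather than estimator = c("boot", "632plus"). A sketch with lda and iris standing in for the poster's model and data (nboot kept small only to keep it quick):

library(MASS)
library(ipred)
data(iris)
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
set.seed(1)
# leave-one-out (out-of-bag) bootstrap estimate
errorest(Species ~ ., data = iris, model = lda, predict = mypredict.lda,
         estimator = "boot", est.para = control.errorest(nboot = 50))$error
set.seed(1)
# .632+ estimate
errorest(Species ~ ., data = iris, model = lda, predict = mypredict.lda,
         estimator = "632plus", est.para = control.errorest(nboot = 50))$error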
2005 Jan 06
1
leave-one-out cross validation for randomForest
Dear all, can I get the leave-one-out cross-validation error of randomForest in R? I only found tune(), which gives the 10-fold cross-validation error. Thanks for any information. Xin LIU
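One route, assuming leave-one-out may be treated as n-fold cross-validation: errorest() from ipred with the fold count set to the number of rows (this refits the forest n times, so it is slow on anything but small data). A sketch on iris:

library(ipred)
library(randomForest)
data(iris)
set.seed(1)
errorest(Species ~ ., data = iris, model = randomForest,
         estimator = "cv",
         est.para = control.errorest(k = nrow(iris)))$error   # k = n folds = LOO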
2003 Feb 27
2
PRESS again
Sorry for the repeat. The PRESS statistic is defined as sum_i (y_i - yhat_(i))^2, where yhat_(i) denotes the ith predicted value from the model fitted to all the data except the ith case (as used typically in linear models). Thanks again, Jacob Jacob L van Wyk Department of Mathematics and Statistics Rand Afrikaans University P O Box 524 Auckland Park 2006 South Africa Tel: +27-11-489-3080 Fax: +27-11-489-2832
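For an ordinary linear model the leave-one-out predictions need not be obtained by refitting: the identity y_i - yhat_(i) = e_i / (1 - h_ii) gives PRESS directly from the residuals and hat values. A sketch with an arbitrary mtcars model (the data set and formula are purely illustrative):

fit <- lm(mpg ~ wt + hp, data = mtcars)
# PRESS = sum of squared leave-one-out residuals
press <- sum((residuals(fit) / (1 - hatvalues(fit)))^2)
press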
2004 Feb 25
2
LOOCV using R
Can someone help me with performing leave-one-out cross-validation using R (the model built is a Cox model)? Thanks. --------------------------------------------- David Verbel, MPH Senior Biostatistician Aureon Biosciences 28 Wells Avenue Yonkers, NY 10701 Phone: (914) 377-4021 Fax: (914) 377-4001 ---------------------------------------------
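A hand-rolled sketch, assuming a plain leave-one-out loop is acceptable: refit the Cox model n times, each time predicting the linear predictor of the held-out case (survival's lung data and this particular formula are only stand-ins):

library(survival)
d <- na.omit(lung[, c("time", "status", "age", "sex", "ph.ecog")])
lp <- numeric(nrow(d))
for (i in seq_len(nrow(d))) {
  fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = d[-i, ])
  lp[i] <- predict(fit, newdata = d[i, , drop = FALSE], type = "lp")
}
# lp now holds out-of-sample linear predictors; summarise them with, e.g.,
# a concordance index or by comparing risk groups.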
2005 Jan 21
2
cross validation
How do I select a training data set and a test data set from the original data for performing cross-validation?
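A minimal, package-free sketch: assign each row a random fold label, then loop over the folds using one fold as the test set and the rest as training data (iris is just a placeholder):

set.seed(1)
d <- iris
k <- 5
fold <- sample(rep(1:k, length.out = nrow(d)))   # one random fold label per row
for (j in 1:k) {
  train <- d[fold != j, ]
  test  <- d[fold == j, ]
  # fit the model on 'train', predict on 'test', and store the fold-j error here
}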
2009 Dec 31
1
cross validation for species distribution
Dear all, I want to do cross-validation for the species data used in species distribution models. Please kindly suggest any package containing cross-validation suited to this purpose. Thank you. Elaine
2005 Jan 06
1
multiple trees
Hi there, I made a function to do k-fold cross-validation, as below. Whenever I call cv(test), for example, I get an error message like: 20Fold 1 Error in model.frame(formula, rownames, variables, varnames, extras, extranames, : variable lengths differ. Please help. My test data set has 142 variables; the last one is a categorical response variable. Also, I am not sure how to save
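The function itself is cut off above, so this is only a guess: "variable lengths differ" from model.frame() usually means the formula picks up vectors that are not part of the data= subset being fitted. A sketch of a k-fold CV function that avoids this by renaming the last column and always fitting with data= on the fold subset (rpart is an arbitrary choice of classifier):

cv <- function(d, k = 10) {
  stopifnot(is.factor(d[[ncol(d)]]))               # last column = factor response
  names(d)[ncol(d)] <- "y"
  fold <- sample(rep(1:k, length.out = nrow(d)))
  err <- numeric(k)
  for (j in 1:k) {
    fit  <- rpart::rpart(y ~ ., data = d[fold != j, ], method = "class")
    pred <- predict(fit, newdata = d[fold == j, ], type = "class")
    err[j] <- mean(pred != d$y[fold == j])
  }
  mean(err)                                        # average misclassification rate
}
cv(iris)   # example call; iris has its factor response in the last column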
2008 Feb 27
7
Cross Validation
Hello, how can I do cross-validation in R? Thank you!
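One convenient route among many, sketched here with a classification tree: errorest() in the ipred package wraps k-fold cross-validation around an arbitrary model/predict pair.

library(ipred)
library(rpart)
data(iris)
mypredict.rpart <- function(object, newdata)
  predict(object, newdata = newdata, type = "class")
set.seed(1)
errorest(Species ~ ., data = iris, model = rpart, predict = mypredict.rpart,
         estimator = "cv", est.para = control.errorest(k = 10))$error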
2003 Apr 16
2
Jackknife and rpart
Hi, First, thanks to those who helped me see my gross misunderstanding of randomForest. I worked through a bagging tutorial and now understand the "many trees" approach. However, it is not what I want to do! My bagged errors are acceptable, but I need to use the actual tree and need a single-tree application. I am using rpart for a classification tree but am interested in a more unbiased
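A hand-rolled jackknife (leave-one-out) error estimate for a single rpart tree, sketched on iris as a stand-in data set: drop one case, grow the tree, classify the dropped case, and average the misclassifications.

library(rpart)
data(iris)
pred <- character(nrow(iris))
for (i in seq_len(nrow(iris))) {
  fit <- rpart(Species ~ ., data = iris[-i, ], method = "class")
  pred[i] <- as.character(predict(fit, newdata = iris[i, ], type = "class"))
}
mean(pred != as.character(iris$Species))   # leave-one-out misclassification rate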
2003 Jun 24
1
errorest: Error in cv.numeric()
Hi, I am trying to get an error estimate for a classification done using lda. The examples work fine; however, I can't get my own code to work. The data is in object d:
> d
  class hydrophobicity    charge    geometry
1     2      6490.0400 1434.9700   610.99902
2     2      1602.0601  400.6030 -5824.00000
3     2       969.0060  260.1360  -415.00000
4     1
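A guess at the cause: errorest() dispatches on the type of the response, and a numeric class column is treated as a regression problem, which is what lands in cv.numeric(). Converting the class column to a factor routes it to the classification code. A sketch with simulated stand-in data of the same shape:

library(MASS)
library(ipred)
set.seed(1)
d <- data.frame(class = sample(1:2, 60, replace = TRUE),   # stand-in data
                hydrophobicity = rnorm(60),
                charge = rnorm(60),
                geometry = rnorm(60))
d$class <- factor(d$class)   # a numeric response would be treated as regression
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
errorest(class ~ ., data = d, model = lda,
         estimator = "cv", predict = mypredict.lda)$error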
2009 Jan 22
4
dimnames in pkg "ipred"
Hello List, I'm trying to make predictions using a bagged tree with the package ipred. I tried to follow the manual but I'm getting an error message, and browsing the list archive I didn't find any hint. Maybe someone can help me? selbag <- bagging(SOIL_UNIT ~., data=traindat.bin, coob=TRUE) Error in dimnames(X) <- list(dn[[1L]], unlist(collabs, use.names = FALSE)) :
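Without traindat.bin the error is hard to reproduce, so only a guess: this kind of dimnames failure often traces back to the predictor columns themselves (e.g. character columns that should be factors, or awkward column names). For reference, the same call pattern on a built-in data set, with character columns converted first:

library(ipred)
d <- iris
d[] <- lapply(d, function(x) if (is.character(x)) factor(x) else x)  # just in case
selbag <- bagging(Species ~ ., data = d, coob = TRUE)
selbag                                # prints the out-of-bag error estimate
predict(selbag, newdata = d[1:5, ])   # predictions for new cases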
2006 Nov 02
1
avoiding a loop: "cumsum-like"
Hello Rhelpers, I need to run the following loop over a large number of data-sets, and was wondering if it could somehow be vectorized. It's more or less a cumulative sum, but slightly more complex. Here's the code, and an example dataset (called tab in my code) follows. Thanks in advance for any suggestions! res<-0 for (i in min(tab$Date):max(tab$Date)) { if
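The loop body is cut off above, so this is only a generic pattern, with a stand-in data set and a guessed value column: aggregate per date first, then take the cumulative sum, padding dates that never occur so the result lines up with min(Date):max(Date).

set.seed(1)
tab <- data.frame(Date  = sample(3:9, 20, replace = TRUE),   # stand-in data
                  Value = rnorm(20))                          # guessed column name
lev   <- min(tab$Date):max(tab$Date)
daily <- tapply(tab$Value, factor(tab$Date, levels = lev), sum)
daily[is.na(daily)] <- 0                 # dates with no rows contribute nothing
res   <- cumsum(daily)                   # running total, one entry per date
res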
2005 Jun 24
1
mypredict.
Hi, I am wondering what "mypredict.lda <- function(object, newdata) predict(object, newdata=newdata)$class" actually does. I run a few errorest commands in the same function on the same dataset using the same classifier, lda. The only difference is that some use "cv", others use "boot" and "632plus". They all share one mypredict.lda. Will it cause any
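What the wrapper does: predict() on an lda fit returns a list, and errorest() needs only the vector of predicted classes, so the wrapper pulls out the $class component. It holds no state, so sharing one mypredict.lda across "cv", "boot" and "632plus" calls is harmless. A small illustration:

library(MASS)
data(iris)
fit <- lda(Species ~ ., data = iris)
str(predict(fit, newdata = iris[1:3, ]))     # a list: $class, $posterior, $x
mypredict.lda <- function(object, newdata)
  predict(object, newdata = newdata)$class
mypredict.lda(fit, iris[1:3, ])              # just the factor of predicted classes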
2006 Oct 08
0
Problem in getting 632plus error using randomForest by ipred!
Hello! I'm Taeho, a graduate student in South Korea. In order to get the .632+ bootstrap error using random forest, I have tried to use the 'ipred' package; more specifically, the function 'errorest'. Following the guidelines, I made a simple command line like the one below: error<-errorest(class ~ ., data=data, model=randomForest, estimator = "632plus")$err
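For comparison, a complete .632+ call of the same shape that runs on a built-in data set; randomForest's own predict method already returns class labels for classification, so no predict wrapper is needed (nboot kept small only to keep the example quick):

library(ipred)
library(randomForest)
data(iris)
set.seed(1)
errorest(Species ~ ., data = iris, model = randomForest,
         estimator = "632plus",
         est.para = control.errorest(nboot = 25))$error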
2008 Nov 15
2
Update to 2.8 and problem with liblapack
Hello, to update from R 2.6 to 2.8 (both on Ubuntu 8.04) I had to install new tcl and liblapack packages (apologies, the output was originally in French; translated here): > sudo apt-get install r-base-dev > Reading package lists... Done > Building dependency tree > Reading state information... Done > The following additional packages will be installed: >
2006 Jan 18
2
Loading of namespace on load of .Rdata (was strange behaviour of load)
Last week Giovanni Parrinello posted a message asking why various packages were loaded when he loaded an .Rdata file. Brian Ripley replied saying he thought it was because the saved workspace contained a reference to the namespace of ipred (correspondence copied below). This raises the question: how did the reference to the namespace of ipred come to be in the .Rdata file? Brian did say it is
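The general mechanism, illustrated with MASS rather than ipred to keep it short: any saved object that contains a function (or an environment) belonging to a package namespace carries a reference to that namespace, and load() restores the object by loading that namespace again.

library(MASS)
obj <- list(fun = lda)                      # lda's environment is namespace:MASS
environmentName(environment(obj$fun))       # "MASS"
save(obj, file = "obj.RData")
# In a fresh R session, load("obj.RData") will load the MASS namespace,
# just as the saved workspace discussed above pulled in ipred.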