similar to: [Q] How to extract cross validation results from e1071's svm model

Displaying 20 results from an estimated 20000 matches similar to: "[Q] How to extract cross validation results from e1071's svm model"

2012 Dec 02
1
e1071 SVM: Cross-validation error confusion matrix
Hi, I ran two svm models using the R e1071 package: the first without cross-validation and the second with 10-fold cross-validation. I used the following syntax: # Model 1: without cross-validation: svm.model <- svm(Response ~ ., data=data.df, type="C-classification", kernel="linear", cost=1); predict <- fitted(svm.model); cm <- table(predict,
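A minimal sketch of the two models described above, using the built-in iris data since data.df and Response are not shown in the thread:

library(e1071)

## Model 1: no cross-validation; confusion matrix from the in-sample fitted values
svm.model <- svm(Species ~ ., data = iris,
                 type = "C-classification", kernel = "linear", cost = 1)
cm <- table(predicted = fitted(svm.model), actual = iris$Species)
cm

## Model 2: 10-fold cross-validation; per-fold accuracies are stored on the
## fitted object (in percent), but no per-fold confusion matrix is kept
svm.cv <- svm(Species ~ ., data = iris,
              type = "C-classification", kernel = "linear", cost = 1, cross = 10)
svm.cv$accuracies    # one accuracy per fold, on a 0-100 scale
svm.cv$tot.accuracy  # overall cross-validated accuracy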
2012 Mar 02
1
e1071 SVM: Cross-validation error confusion matrix
Hi, I ran two svm models using the R e1071 package: the first without cross-validation and the second with 10-fold cross-validation. I used the following syntax: # Model 1: without cross-validation: svm.model <- svm(Response ~ ., data=data.df, type="C-classification", kernel="linear", cost=1); predict <- fitted(svm.model); cm <- table(predict,
2007 Oct 27
1
problems in cross-validation of SVM in package "e1071"
Hi: I am new to using R for data mining, and find the "e1071" package an excellent tool for data mining work! What frustrated me recently is that when I use the function "svm" with the "cross=10" parameter, I get all the "accuracies" of the model greater than 1. Shouldn't the accuracy be smaller than 1? So I wonder how, the
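The values above are expected: the cross-validation accuracies reported by e1071::svm are percentages (0-100), not proportions. A short illustration on iris:

library(e1071)
m <- svm(Species ~ ., data = iris, cross = 10)
m$accuracies        # per-fold percent correct, e.g. values near 95-100
m$accuracies / 100  # divide by 100 if proportions between 0 and 1 are wanted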
2013 Jan 15
0
e1071 SVM, cross-validation and overfitting
I am accustomed to the LIBSVM package, which provides cross-validation on training with the -v option: % svm-train -v 5 ... This does 5-fold cross-validation while building the model and avoids over-fitting. But I don't see how to accomplish that in the e1071 package. (I learned that svm(... cross=5 ...) only _tests_ using cross-validation -- it doesn't affect the training.) Can
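One way to get the LIBSVM "-v"-style behaviour in e1071 (a sketch on iris, not the poster's data) is to let tune()/tune.svm() drive the parameter choice by cross-validation and then reuse its best.model:

library(e1071)
set.seed(1)
tuned <- tune.svm(Species ~ ., data = iris,
                  gamma = 10^(-3:0), cost = 10^(0:2),
                  tunecontrol = tune.control(sampling = "cross", cross = 5))
summary(tuned)            # cross-validated error for every parameter combination
best <- tuned$best.model  # model refit on the full data with the best parameters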
2010 Nov 23
5
cross validation using e1071:SVM
Hi everyone, I am trying to do cross-validation (10-fold CV) using the e1071::svm method. I know that there is an option ('cross') for cross-validation, but I still wanted to make a function to generate cross-validation indices using the pls::cvsegments method. The code (at the end) is working fine, but sometimes caret::confusionMatrix
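A sketch of such a hand-rolled 10-fold CV loop built on pls::cvsegments (assuming the pls package is installed), with a plain table() as the confusion matrix and iris standing in for the poster's data:

library(e1071)
library(pls)

set.seed(1)
segs <- cvsegments(nrow(iris), k = 10)   # list of 10 held-out index vectors

pred <- factor(rep(NA, nrow(iris)), levels = levels(iris$Species))
for (seg in segs) {
  fit <- svm(Species ~ ., data = iris[-seg, ], kernel = "linear")
  pred[seg] <- predict(fit, newdata = iris[seg, ])
}
table(predicted = pred, actual = iris$Species)  # pooled CV confusion matrix
mean(pred == iris$Species)                      # pooled CV accuracy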
2006 Feb 02
0
cross-validation in svm regression in e1071 gives incorrect results (PR#8554)
1. This is _not_ a bug in R itself. Please don't use R's bug reporting system for contributed packages. 2. This is _not_ a bug in svm() in `e1071'. I believe you forgot to take sqrt. 3. You really should use the `tot.MSE' component rather than the mean of the `MSE' component, but this is only a very small difference. So, instead of spread[i] <- mean(mysvm$MSE), you
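A sketch of the corrected computation described in that reply, using the built-in trees data as a stand-in for the regression example in the svm documentation:

library(e1071)
set.seed(1)
mysvm <- svm(Volume ~ Girth + Height, data = trees, cross = 10)
sqrt(mysvm$tot.MSE)   # cross-validated RMSE, using the aggregate tot.MSE component
mean(sqrt(mysvm$MSE)) # per-fold RMSEs averaged; close to the above, but not identical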
2006 Feb 02
0
cross-validation in svm regression in e1071 gives incorrect results (PR#8554)
Full_Name: Noel O'Boyle Version: 2.1.0 OS: Debian GNU/Linux Sarge Submission from: (NULL) (131.111.8.96) (1) Description of error: The 10-fold CV option for the svm function in e1071 appears to give incorrect results for the RMSE. The example code in (3) uses the example regression data in the svm documentation. The RMSE for internal prediction is 0.24. It is expected that the 10-fold CV RMSE
2009 Jul 08
1
SVM cross validation in e1071
Hi list, could someone help me explain why the leave-one-out cross-validation results I got from svm using the internal option "cross" are different from those I got manually? It seems that when using "cross" to do cross-validation, the results are always better. Please see the code below. I also include lda as a comparison. I'm using WinXP, R 2.9.0, and e1071_1.5-19. Many
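For reference, a sketch of both routes on iris (one commonly cited source of small discrepancies, offered here only as a hypothesis, is that the internal option scales the full data set once before splitting, while a manual loop rescales within each fold):

library(e1071)
d <- iris

## internal leave-one-out: cross equal to the number of rows
m.int <- svm(Species ~ ., data = d, kernel = "linear", cross = nrow(d))
m.int$tot.accuracy                 # percent correct

## manual leave-one-out
pred <- character(nrow(d))
for (i in seq_len(nrow(d))) {
  fit <- svm(Species ~ ., data = d[-i, ], kernel = "linear")
  pred[i] <- as.character(predict(fit, newdata = d[i, , drop = FALSE]))
}
100 * mean(pred == d$Species)      # percent correct, comparable to tot.accuracy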
2007 Sep 25
1
10-fold cross validation for naive Bayes (e1071)
Hello! I need code for 10-fold cross-validation for the classifiers naive Bayes and svm from the (e1071) package. Has something like that already been done? I tried to do it myself by applying the tune function first: library(e1071); tune.control <- tune.control(random=FALSE, nrepeat=1, repeat.aggregate=min, sampling=c("cross"), sampling.aggregate=mean, cross=10, best.model=TRUE,
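A sketch of one way to get 10-fold CV error for both classifiers through the generic tune() function (shown on iris; whether tune() accepts naiveBayes this way in every e1071 version is an assumption worth checking):

library(e1071)
set.seed(1)
ctrl <- tune.control(sampling = "cross", cross = 10)

nb.cv  <- tune(naiveBayes, Species ~ ., data = iris, tunecontrol = ctrl)
svm.cv <- tune(svm,        Species ~ ., data = iris, tunecontrol = ctrl)

nb.cv$best.performance   # 10-fold CV misclassification rate, naive Bayes
svm.cv$best.performance  # 10-fold CV misclassification rate, svm with default parameters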
2011 Jul 24
0
repeated execution of svm (e1071) gives different results if probability = TRUE is set
Hello, connoisseurs! Please explain to a novice why the svm model gives different results in a loop with the same data. As a result, I cannot find the best gamma and cost parameters. Also, tune.svm yields results that cannot be repeated. How can I avoid this? My sessionInfo: R version 2.11.1 (2010-05-31), x86_64-pc-linux-gnu, locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
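A sketch of how to pin down the R-level randomness with set.seed() (tune.svm partitions its folds with R's RNG; whether the internal randomness of probability = TRUE also follows R's RNG depends on the e1071 version, so treat that part as an assumption):

library(e1071)

set.seed(123)
t1 <- tune.svm(Species ~ ., data = iris, gamma = 10^(-2:0), cost = 10^(0:2))
set.seed(123)
t2 <- tune.svm(Species ~ ., data = iris, gamma = 10^(-2:0), cost = 10^(0:2))
identical(t1$performances$error, t2$performances$error)  # should be TRUE: same folds, same errors

set.seed(123)
m <- svm(Species ~ ., data = iris, probability = TRUE)
head(attr(predict(m, iris, probability = TRUE), "probabilities"))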
2011 Sep 27
0
Workflow for binary classification problem using svm via e1071 package
Dear R-list! I am using the e1071 package in R to solve a binary classification problem in a dataset of around 180 predictor variables (blood metabolites) from two groups of subjects (patients and healthy controls). I am confused regarding the correct way to assess the classification accuracy of the trained svm. (A) The svm command allows one, via the 'cross=k' parameter, to specify a
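One defensible workflow, sketched on iris since the metabolite data are not available: hold out an outer test set, tune cost/gamma by cross-validation on the training part only, and report accuracy on the untouched test set:

library(e1071)
set.seed(1)
idx   <- sample(nrow(iris), round(0.7 * nrow(iris)))
train <- iris[idx, ]
test  <- iris[-idx, ]

tuned <- tune.svm(Species ~ ., data = train,
                  gamma = 10^(-3:0), cost = 10^(0:2),
                  tunecontrol = tune.control(sampling = "cross", cross = 10))
pred <- predict(tuned$best.model, newdata = test)
mean(pred == test$Species)   # hold-out accuracy: the honest performance estimate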
2010 Jun 15
1
cross validation of SVM
Hi, could you please tell me what kind of cross-validation the SVM of e1071 uses? Cheers, Amy
2008 Apr 09
0
How do I get the parameters out of e1071's svm?
Hi all, I'm trying to get a simple, linear decision surface from e1071's svm. I've run it like this: svm(as.factor(slow) ~ SLICE.3 + PSGR.7 + SOLUTIONS.6 + DR.10, y, kernel='linear', cost=1e6, class.weights=c('FALSE'=1, 'TRUE'=10)) According to the docs, kernel='linear' has a kernel u'v. Since I have 4 independent variables, I'd expect to
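For a linear kernel the primal weight vector and intercept can be recovered from the fitted object; a sketch on two iris classes, since the poster's variables (SLICE.3, PSGR.7, ...) are not available (note scale = FALSE, so the weights stay in the original units):

library(e1071)
d <- subset(iris, Species != "setosa")
d$Species <- factor(d$Species)
fit <- svm(Species ~ ., data = d, kernel = "linear", cost = 100, scale = FALSE)

w <- t(fit$coefs) %*% fit$SV   # one weight per predictor
b <- -fit$rho                  # intercept; the decision rule is sign(x %*% t(w) + b)
w
b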
2013 Apr 05
0
Cross Validation with SVM
Good morning. I am using package e1071 to develop an SVM model. My code is: x <- subset(dataset, select = -Score); y <- dataset$Score; model <- svm(x, y, cross=10); print(model); summary(model). As 10-fold CV produces 10 models, I need two things: 1) to have access to each model from the 10-fold CV; 2) to predict new instances with each model, to know which one gives the best performance.
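svm(..., cross = 10) reports per-fold performance but does not return the ten fitted models, so keeping each one requires a manual loop; a sketch with iris standing in for `dataset`:

library(e1071)
set.seed(1)
k <- 10
folds <- sample(rep(1:k, length.out = nrow(iris)))  # fold id per row

models <- vector("list", k)
acc    <- numeric(k)
for (i in 1:k) {
  models[[i]] <- svm(Species ~ ., data = iris[folds != i, ])
  held.out    <- iris[folds == i, ]
  acc[i]      <- mean(predict(models[[i]], newdata = held.out) == held.out$Species)
}
acc                                # per-fold accuracy for each stored model
best <- models[[which.max(acc)]]   # e.g. reuse the best-scoring fold's model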
2017 Jul 06
0
svm.formula versus svm.default - different results
Dear community, I'm performing SVM regression with svm from the e1071 library. As I wrote in another post, "svm e1071 call - different results", I get different results if I use svm.default rather than svm.formula, the svm.formula ones being better. I've debugged both options. While debugging svm.formula, I saw that when I reach the call: ret <-
2004 Dec 16
2
reading svm function in e1071
Hi, if I try to read the code of the functions in the e1071 package, I get the following error message: > library(e1071) > svm function (x, ...) UseMethod("svm") <environment: namespace:e1071> > predict.svm Error: Object "predict.svm" not found Can someone help me with how to read the code of the functions in the e1071 package? Thanks, Raj
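predict.svm exists but is an unexported S3 method, which is why it is not visible by name; it can be read with any of the following:

library(e1071)
getS3method("predict", "svm")   # source of the predict method for "svm" objects
e1071:::predict.svm             # the same, via the triple-colon operator
getAnywhere("predict.svm")      # locates the definition wherever it lives
methods(class = "svm")          # lists all S3 methods defined for class "svm"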
2017 Sep 02
0
problem in testing data with e1071 package (SVM Multiclass)
Hello all, this is the first time I'm using R, the e1071 package, and multiclass SVM (and I'm not a statistician), so I'm very confused. The goal is: if I have a sentence with "sunny", it will be classified as a "yes" sentence; if I have a sentence with "cloud", it will be classified as "maybe"; if I have a sentence with "rainy", it will be classified as "no". The
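A toy sketch of such a three-class setup (all sentences and features below are invented; e1071's svm handles more than two classes automatically via one-versus-one):

library(e1071)
sentences <- c("a sunny day", "sunny and warm", "a cloud is coming",
               "cloud cover tonight", "rainy all week", "very rainy morning")
labels <- factor(c("yes", "yes", "maybe", "maybe", "no", "no"))

## crude bag-of-words features: 1 if the sentence contains the keyword, else 0
feats <- data.frame(sunny = as.numeric(grepl("sunny", sentences)),
                    cloud = as.numeric(grepl("cloud", sentences)),
                    rainy = as.numeric(grepl("rainy", sentences)))

fit <- svm(x = feats, y = labels, kernel = "linear")
predict(fit, newdata = data.frame(sunny = 0, cloud = 1, rainy = 0))  # expected: "maybe"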
2005 Jun 29
2
Running SVM {e1071}
Dear David, dear friends, after every run of svm I receive different values for the error estimation of 'svm' using 10-fold cross-validation. What is the reason? Is it caused by the algorithm, libsvm, e1071, or something else? Which value can be taken as the optimal one? How many runs does it take to reach optimality? And finally, what is the difference between the error estimation of svm using 10-fold cross-validation
2009 May 11
1
Problems running SVM regression with e1071
Hi R users, I'm trying to run an SVM regression using the e1071 package, but the function svm() always applies a classification method rather than regression. svm.m1 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03) Parameters: SVM-Type: C-classification SVM-Kernel: radial cost: 1000 gamma: 0.001 Number of Support Vectors: 209
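svm() picks its type from the class of the response, so a factor or character `st` silently produces C-classification; a sketch with invented stand-in data, since the original `train` is not shown:

library(e1071)
set.seed(1)
train <- data.frame(st = rnorm(50), x1 = rnorm(50), x2 = rnorm(50))
str(train$st)   # must be numeric; if it is a factor, convert with as.numeric(as.character(st))

svm.m1 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03,
              type = "eps-regression")   # request regression explicitly
svm.m1          # printing now reports SVM-Type: eps-regression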
2009 Mar 12
0
e1071 SVM one-classification tune problem
Hello all, I am using the e1071 SVM with the tune options for classification, which works pretty well given the examples of using the tune.svm function for classification. But I have not found any example of tuning the SVM novelty-detection (one-classification) parameters (gamma, cost, nu); for example, these are some of the options I have tried with no success: obj <- tune(svm, x, y, type
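One workaround, sketched with invented data rather than an official e1071 recipe: grid-search nu and gamma by hand, fitting one-classification models on "normal" observations only and scoring each fit on a small labelled validation set:

library(e1071)
set.seed(1)
normal <- matrix(rnorm(200 * 2), ncol = 2)                   # training data: normals only
val.x  <- rbind(matrix(rnorm(50 * 2), ncol = 2),             # validation set: normals ...
                matrix(rnorm(20 * 2, mean = 4), ncol = 2))   # ... plus obvious outliers
val.y  <- c(rep(TRUE, 50), rep(FALSE, 20))                   # TRUE = normal

grid <- expand.grid(nu = c(0.01, 0.05, 0.1, 0.2), gamma = c(0.1, 0.5, 1))
grid$accuracy <- NA
for (i in seq_len(nrow(grid))) {
  fit <- svm(normal, y = NULL, type = "one-classification",
             nu = grid$nu[i], gamma = grid$gamma[i])
  grid$accuracy[i] <- mean(predict(fit, val.x) == val.y)     # predict() returns TRUE/FALSE
}
grid[which.max(grid$accuracy), ]   # best (nu, gamma) on the validation set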