similar to: e1071 SVM: Cross-validation error confusion matrix

Displaying 20 results from an estimated 2000 matches similar to: "e1071 SVM: Cross-validation error confusion matrix"

2012 Dec 02
1
e1071 SVM: Cross-validation error confusion matrix
Hi, I ran two svm models with the e1071 package in R: the first without cross-validation and the second with 10-fold cross-validation. I used the following syntax: #Model 1: Without cross-validation: > svm.model <- svm(Response ~ ., data=data.df, type="C-classification", > kernel="linear", cost=1) > predict <- fitted(svm.model) > cm <- table(predict,
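A minimal sketch of both calls, with iris standing in for the poster's data.df; note that cross = 10 only reports per-fold accuracies and does not return cross-validated predictions, so a CV confusion matrix would have to be assembled fold by fold:

library(e1071)
data(iris)
svm.model <- svm(Species ~ ., data = iris, type = "C-classification",
                 kernel = "linear", cost = 1)
table(fitted(svm.model), iris$Species)   # training confusion matrix
svm.cv <- svm(Species ~ ., data = iris, type = "C-classification",
              kernel = "linear", cost = 1, cross = 10)
svm.cv$accuracies                        # per-fold CV accuracies, in percent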
2010 Jul 14
1
question about SVM in e1071
Hi, I have a question about the parameter C (cost) in the svm function in e1071. I thought a larger C is more prone to overfitting than a smaller C, and hence leads to more support vectors. However, using the Wisconsin breast cancer example on the link: http://planatscher.net/svmtut/svmtut.html I found that the largest cost has the fewest support vectors, which is contrary to what I think. Please see the scripts
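The observation is in fact expected: a larger cost penalizes margin violations more heavily, so fewer points end up on or inside the margin even though the fit is more prone to overfitting. A small sketch on a two-class subset of iris (standing in for the Wisconsin data) that counts support vectors as cost grows:

library(e1071)
data(iris)
d <- subset(iris, Species != "setosa")
d$Species <- droplevels(d$Species)
sapply(c(0.01, 0.1, 1, 10, 100), function(C) {
  nrow(svm(Species ~ ., data = d, kernel = "linear", cost = C)$SV)
})   # the number of support vectors typically shrinks as cost increases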
2010 May 14
0
bootstrapping an svm
Hello, I am playing around trying to bootstrap an svm model using a training set and a test set. I've written another function, auc, which I call here and am bootstrapping. I did this successfully with logistic regression, but I am getting an error from the starred ** line, which I determined with print statements. How do I tune an svm in a bootstrap? I can't find sample code
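One way to approach it, sketched here on a two-class subset of iris with plain accuracy standing in for the poster's auc() helper (which is not shown): draw a bootstrap sample, tune on it, and evaluate the tuned model on the out-of-bag cases.

library(e1071)
data(iris)
d <- subset(iris, Species != "setosa")
d$Species <- droplevels(d$Species)
set.seed(1)
B <- 25
acc <- numeric(B)
for (b in seq_len(B)) {
  idx <- sample(nrow(d), replace = TRUE)           # bootstrap training sample
  oob <- setdiff(seq_len(nrow(d)), unique(idx))    # out-of-bag test cases
  tuned <- tune.svm(Species ~ ., data = d[idx, ],
                    cost = 10^(-1:2), gamma = 10^(-2:0))
  pred <- predict(tuned$best.model, d[oob, ])
  acc[b] <- mean(pred == d$Species[oob])           # replace with auc() as needed
}
mean(acc)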
2012 Mar 29
1
TR: [e1071] Load an SVM model exported with write.svm
An embedded text encoded in an unknown character set was scrubbed... Name: not available URL: <https://stat.ethz.ch/pipermail/r-help/attachments/20120329/cfdd2be3/attachment.pl>
2004 Dec 16
2
reading svm function in e1071
Hi, If I try to read the code of the functions in the e1071 package, I get the following error message. >library(e1071) > svm function (x, ...) UseMethod("svm") <environment: namespace:e1071> > predict.svm Error: Object "predict.svm" not found > Can someone tell me how to read the code of the functions in the e1071 package? Thanks. Raj
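predict.svm is an unexported S3 method, which is why it is not found by name at the prompt; the usual ways to display its source:

library(e1071)
svm                              # the generic only dispatches: UseMethod("svm")
getS3method("svm", "default")    # source of the default svm method
getS3method("predict", "svm")    # source of predict.svm
getAnywhere("predict.svm")       # searches all namespaces
e1071:::predict.svm              # ::: reaches unexported objects directly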
2009 Jul 07
2
Question in using e1071 svm routine
Hi all, I've got the following error message when using the e1071 svm routine... Could anybody please help me? Thank you! --------------------------------- model <- svm(y=factor(mytraindata[, 1]), x=mytraindata[, -1], probability=T) Error in if (any(co)) { : missing value where TRUE/FALSE needed In addition: Warning message: In FUN(newX[, i], ...) : NAs introduced by coercion
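A hedged guess, since the data is not shown: this warning usually means x contains non-numeric columns (or NAs) that get coerced when svm() scales the inputs. A self-contained sketch of the usual checks and a dummy-coding fix:

library(e1071)
set.seed(1)
d <- data.frame(y = factor(rep(c("p", "n"), 10)),
                a = rnorm(20),
                b = factor(sample(c("u", "v"), 20, replace = TRUE)))
sapply(d[, -1], class)                       # spot non-numeric predictors
x <- model.matrix(~ . - 1, data = d[, -1])   # numeric dummy coding
model <- svm(x = x, y = d$y, probability = TRUE)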
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
Full_Name: Noel O'Boyle Version: 2.1.0 OS: Debian GNU/Linux Sarge Submission from: (NULL) (131.111.8.96) (1) Description of error The 10-fold CV option for the svm function in e1071 appears to give incorrect results for the rmse. The example code in (3) uses the example regression data in the svm documentation. The rmse for internal prediction is 0.24. It is expected the 10-fold CV rmse
2011 Aug 05
1
e1071 ver 1.5-27 and older - SVM bug report
Dear All: I found a problem with the SVM internal cross-validation (CV) accuracy estimation in the e1071 package. File: Rsvm.c Line: 120 Today, it is: int j = rand()%(prob->l-i); Should be: int j = i + rand()%(prob->l-i); The erroneous code doesn't shuffle objects. Instead, it "randomly" moves objects from the beginning to the end. In hope for a prompt response from the
2004 Dec 18
1
Error in SVM (package "e1071")
Hello, I am using SVM from the e1071 package for nu-regression with 18 parameters. The variables are ordered factors, factors, date or numeric datatypes. I use the linear kernel. It gives the following error that I cannot solve. I tried debug, browser and all that stuff, but no luck. The error is: Error in get(ctr, mode = "function", envir = parent.frame())(levels(x), :
2005 Jun 29
2
Running SVM {e1071}
Dear David, dear friends, after every run of svm I receive a different value for the "Error estimation of 'svm' using 10-fold cross validation". What is the reason? Is it caused by the algorithm, libsvm, e1071 or something else? Which value is the optimal one? How many runs does it take to reach optimality? And finally, what is the difference between the error estimation of svm using 10-fold cross validation
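The spread is expected: each call with cross = 10 re-partitions the training data into random folds, so the reported CV error is itself a random quantity. Averaging over repeated runs (sketched on iris as a stand-in) gives a steadier figure:

library(e1071)
data(iris)
acc <- replicate(20, svm(Species ~ ., data = iris, cross = 10)$tot.accuracy)
summary(acc)   # run-to-run variation of the 10-fold CV accuracy
sd(acc)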
2010 Jul 09
1
interpretation of svm models with the e1071 package
Dear all, after having calibrated an svm model through the svm() command of the e1071 package, is there a way to i) represent the modeled relationships between the y and X variables (response variable vs. predictors)? ii) rank the influence of the predictors used in the model? Right now I am more interested in regression models, but I guess this would be useful for classification too. Thank
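For a linear kernel one common (if rough) approach is to reconstruct the weight vector of the decision function and rank predictors by the size of their weights; a sketch on a two-class iris subset, not a general answer for nonlinear kernels:

library(e1071)
data(iris)
d <- subset(iris, Species != "setosa")
d$Species <- droplevels(d$Species)
m <- svm(Species ~ ., data = d, kernel = "linear")
w <- drop(t(m$coefs) %*% m$SV)    # weights of the linear decision function
sort(abs(w), decreasing = TRUE)   # rough ranking of predictor influence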
2003 Dec 10
3
e1071:svm - default epsilon = 0.1 (NOT 0.5) (PR#5671)
In the e1071 package, svm's default epsilon value is set to 0.1 and not 0.5 as the documentation says. R
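One way to confirm which default the installed version actually uses (svm dispatches to an unexported default method):

library(e1071)
formals(getS3method("svm", "default"))$epsilon   # prints the shipped default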
2017 Sep 02
0
problem in testing data with e1071 package (SVM Multiclass)
Hello all, this is the first time I'm using R, the e1071 package and multiclass SVM (and I'm not a statistician), so I'm very confused. The goal is: I have a sentence with sunny; it will be classified as "yes"; I have a sentence with cloud, it will be classified as "maybe"; I have a sentence with rainy, it will be classified as "no". The
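A tiny made-up sketch (not the poster's data) with keyword indicators as predictors and a three-level factor as the outcome; e1071 handles the multiclass case automatically via one-against-one voting:

library(e1071)
train <- data.frame(sunny = c(1, 0, 0, 1, 0, 0),
                    cloud = c(0, 1, 0, 0, 1, 0),
                    rainy = c(0, 0, 1, 0, 0, 1),
                    label = factor(rep(c("yes", "maybe", "no"), 2)))
m <- svm(label ~ ., data = train, kernel = "linear")
predict(m, data.frame(sunny = 1, cloud = 0, rainy = 0))   # classify a new "sunny" sentence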
2006 Feb 02
0
crossvalidation in svm regression in e1071 gives incorrect results (PR#8554)
1. This is _not_ a bug in R itself. Please don't use R's bug reporting system for contributed packages. 2. This is _not_ a bug in svm() in `e1071'. I believe you forgot to take sqrt. 3. You really should use the `tot.MSE' component rather than the mean of the `MSE' component, but this is only a very small difference. So, instead of spread[i] <- mean(mysvm$MSE), you
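The corrections from this reply applied to the regression example from ?svm, for reference:

library(e1071)
x <- seq(0.1, 5, by = 0.05)
y <- log(x) + rnorm(x, sd = 0.2)
m <- svm(x, y, cross = 10)
sqrt(m$tot.MSE)     # cross-validated RMSE: take the square root of tot.MSE
sqrt(mean(m$MSE))   # averaging the per-fold MSEs first differs only slightly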
2011 May 25
1
help with tune.svm() e1071
Hi, I am trying to use tune.svm in the e1071 package. The command I use is tobj <- tune.svm(labels, data= data, cost = 10^(1:2)) Should the last column of 'data' contain the labels as well? I want to use the linear kernel. But it gives me the error "Error in model.frame.default(formula, data) : 'data' must be a data.frame, not a matrix or an array" Do you know why
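Two calls that avoid the error, with iris standing in for the poster's data: either pass the predictors and labels separately via x and y, or use the formula interface with a data.frame that contains the label column:

library(e1071)
data(iris)
t1 <- tune.svm(x = iris[, 1:4], y = iris$Species,   # x/y interface:
               cost = 10^(1:2), kernel = "linear")  # labels passed separately
t2 <- tune.svm(Species ~ ., data = iris,            # formula interface:
               cost = 10^(1:2), kernel = "linear")  # data.frame with label column
t2$best.parameters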
2013 Jan 15
0
e1071 SVM, cross-validation and overfitting
I am accustomed to the LIBSVM package, which provides cross-validation on training with the -v option % svm-train -v 5 ... This does 5-fold cross-validation while building the model and avoids over-fitting. But I don't see how to accomplish that in the e1071 package. (I learned that svm(... cross=5 ...) only _tests_ using cross-validation -- it doesn't affect the training.) Can
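A sketch of the usual workaround: let tune() do the 5-fold cross-validation over a parameter grid and keep the winning model, which tune refits on the complete training data:

library(e1071)
data(iris)
obj <- tune.svm(Species ~ ., data = iris,
                gamma = 2^(-4:0), cost = 2^(0:4),
                tunecontrol = tune.control(sampling = "cross", cross = 5))
summary(obj)
obj$best.model   # trained on the full data with the CV-selected parameters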
2009 Mar 12
0
e1071 SVM one-classification tune problem
Hello all, I am using the e1071 SVM with the tune options for classification, which works pretty well, given the examples of using the tune.svm function for classification. But I have not found any example of tuning the SVM novelty detection (one-classification) parameters (gamma, cost, nu). For example, these are some of the options I have tried with no success: obj<-tune(svm, x,y, type
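One workaround, sketched on iris (not the poster's data): a manual grid search in which one species plays the "normal" class and the others act as novelties for scoring, since tune() expects a labelled response:

library(e1071)
data(iris)
normal <- subset(iris, Species == "setosa", select = -Species)
other  <- subset(iris, Species != "setosa", select = -Species)
grid <- expand.grid(nu = c(0.01, 0.05, 0.1), gamma = c(0.01, 0.1, 1))
grid$score <- apply(grid, 1, function(p) {
  m <- svm(normal, type = "one-classification",
           nu = p["nu"], gamma = p["gamma"])
  mean(predict(m, normal)) + mean(!predict(m, other))   # accept normals, reject novelties
})
grid[which.max(grid$score), ]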
2009 May 11
1
Problems to run SVM regression with e1071
Hi R users, I'm trying to run an SVM regression using the e1071 package, but the function svm() always applies a classification method rather than regression. svm.m1 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03) Parameters: SVM-Type: C-classification SVM-Kernel: radial cost: 1000 gamma: 0.001 Number of Support Vectors: 209
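svm() infers the task from the class of the response: a factor gives C-classification, a numeric vector gives eps-regression. A toy illustration, assuming the poster's st column is accidentally a factor:

library(e1071)
set.seed(1)
d <- data.frame(st = rnorm(50), a = rnorm(50), b = rnorm(50))
svm(st ~ a + b, data = d, cost = 1000, gamma = 1e-3)   # SVM-Type: eps-regression
svm(factor(st > 0) ~ a + b, data = d)                  # SVM-Type: C-classification
# so check class(train$st): make it numeric, or pass type = "eps-regression"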
2006 Jul 07
1
Polynomial kernel in SVM in e1071 package
Dear list, In some places (for example, http://en.wikipedia.org/wiki/Support_vector_machine), the polynomial kernel in SVM is written as (u'*v + 1)^d, while in the documentation of svm() in the e1071 package, the polynomial kernel is written as (gamma*u'*v + coef0)^d. I am a little confused here: When doing parameter optimization (grid search or so) for the polynomial kernel, does one need to tune
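The two forms coincide when gamma = 1 and coef0 = 1, so e1071's kernel is just a reparameterisation of the textbook one; when tuning, degree, gamma and coef0 can all be searched, or coef0 can be fixed and only the others tuned. A small sketch:

library(e1071)
data(iris)
m <- svm(Species ~ ., data = iris, kernel = "polynomial",
         degree = 3, gamma = 1, coef0 = 1)   # equals the (u'v + 1)^3 kernel
obj <- tune.svm(Species ~ ., data = iris, kernel = "polynomial",
                degree = 2:3, gamma = c(0.1, 1), coef0 = c(0, 1), cost = c(1, 10))
obj$best.parameters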
2007 Oct 27
1
problems in cross validation of SVM in package "e1071"
Hi: I am new to using R for data mining, and find the "e1071" package an excellent tool for data mining work! What frustrated me recently is that when I use the function "svm" with the "cross=10" parameter, I get all the "accuracies" of the model greater than 1. Shouldn't the accuracy be smaller than 1? So I wonder how the
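The values reported with cross = 10 are percentages (0 to 100), not fractions, so accuracies greater than 1 are expected; a quick check on iris:

library(e1071)
data(iris)
m <- svm(Species ~ ., data = iris, cross = 10)
m$accuracies         # per-fold CV accuracies, in percent
m$tot.accuracy       # overall CV accuracy, also a percentage
m$accuracies / 100   # divide by 100 for the usual 0-1 scale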