Displaying 20 results from an estimated 20000 matches similar to: "Strange question/result about SVM"
2006 Jul 24
2
RandomForest vs. bayes & svm classification performance
Hi
This is a question regarding classification performance using different methods.
So far I've tried NaiveBayes (klaR package), svm (e1071 package) and
randomForest (randomForest). What has puzzled me is that randomForest seems to
perform far better (32% classification error) than svm and NaiveBayes, which
have similar classification errors (45%, 48% respectively). A similar
difference in
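For anyone wanting to reproduce this kind of comparison, a rough sketch on a toy dataset (iris here, not the poster's data, and e1071's naiveBayes in place of klaR's NaiveBayes for brevity) might look like this:

library(e1071); library(randomForest)
set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]; test <- iris[-idx, ]
err <- function(pred) mean(pred != test$Species)            # test-set error rate
err(predict(naiveBayes(Species ~ ., data = train), test))
err(predict(svm(Species ~ ., data = train), test))
err(predict(randomForest(Species ~ ., data = train), test))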
2011 Jan 07
2
Stepwise SVM Variable selection
I have a data set with about 30,000 training cases and 103 variables.
I've trained an SVM (using the e1071 package) for a binary classifier
{0,1}. The accuracy isn't great.
I used a grid search over the C and G parameters with an RBF kernel to
find the best settings.
I remember that for least squares, R has a nice stepwise function that
will try combining subsets of variables to find
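e1071 has no built-in stepwise routine for svm, but a rough greedy forward search over variables can be scripted around tune()'s cross-validated error. A sketch only, assuming the data frame is called dat and the response column y is a factor (placeholder names):

library(e1071)
vars   <- setdiff(names(dat), "y")
chosen <- character(0)
for (k in seq_along(vars)) {
  errs <- sapply(setdiff(vars, chosen), function(v) {
    f <- reformulate(c(chosen, v), response = "y")
    tune(svm, f, data = dat, kernel = "radial",
         tunecontrol = tune.control(cross = 5))$best.performance
  })
  cat(k, names(which.min(errs)), min(errs), "\n")   # watch where the CV error bottoms out
  chosen <- c(chosen, names(which.min(errs)))
}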
2011 Jul 13
1
Scaling in SVM
Dear Community!
I'm using the svm method of package e1071 for classifying my data. This
works fine; however, after creating the support vectors and the parameters
I have to work with unscaled data. The problem is that when I try
to train the classifier with the option "scale=F" the result is quite poor,
so training with scaled data is essential. The rescaling of the support
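When the model is trained with scale = TRUE (the default), the centring and scaling constants that svm() used should be stored in the fitted object, so new data can be put on the same scale by hand. A sketch, with model and newx as placeholder names:

ctr <- model$x.scale[["scaled:center"]]
scl <- model$x.scale[["scaled:scale"]]
newx.scaled <- scale(newx, center = ctr, scale = scl)   # same transformation as training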
2009 Jul 18
1
svm works but tune.svm gives an error
Hello,
I'm using the e1071 library for SVM functions.
I can quickly train an SVM with:
svm(formula = label ~ ., data = testdata)
That works well.
I want to tune the parameters, so I tried:
tune.svm(label ~ ., data=testdata[1:2000, ], gamma=10^(-6:3), cost=10^(1:2))
THIS FAILS WITH AN ERROR:
'names' attribute [199] must be the same length as the vector [184]
I don't
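The same call pattern runs cleanly on a built-in dataset, so two generic things worth ruling out on the real data (assumptions, not a confirmed diagnosis of this particular error) are missing values and a label column that is not a factor:

library(e1071)
tune.svm(Species ~ ., data = iris, gamma = 10^(-3:0), cost = 10^(0:2))   # works as expected
anyNA(testdata); is.factor(testdata$label)                               # checks on the real data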
2003 Apr 03
1
SVM module: scaling data applied to new test set without using SVM again
Hello!
We are new in using R. We use the SVM module from the library "e1071"
for training.
Problem formulation:
a classification has been performed using SVM module (linear kernel).
Later, a new data set (test set) comparable to the training data shall be
scaled in the same way as the training set (using the same scaling
parameter set, but without using the SVM again
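One common way to do this with base R only is to keep the centring and scaling constants from the training set and re-use them on the test set. A sketch with placeholder names train.x and test.x:

train.scaled <- scale(train.x)                              # records centre/scale as attributes
ctr <- attr(train.scaled, "scaled:center")
scl <- attr(train.scaled, "scaled:scale")
test.scaled  <- scale(test.x, center = ctr, scale = scl)    # same transformation, no SVM needed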
2009 Aug 12
5
Nominal variables in SVM?
Hi,
The answers to my previous question about nominal variables have led me
to a more important question.
What is the "best practice" way to feed nominal variable to an SVM.
For example:
color = ("red, "blue", "green")
I could translate that into an index so I wind up with
color= (1,2,3)
But my concern is that the SVM will now think that the values are
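One standard answer is to expand the nominal variable into 0/1 indicator (dummy) columns rather than an integer code, for example with model.matrix(); svm()'s formula interface should do this expansion automatically when the column is stored as a factor. A small illustration:

d <- data.frame(color = factor(c("red", "blue", "green", "red")))
model.matrix(~ color - 1, d)    # one 0/1 indicator column per level, no implied ordering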
2007 Feb 26
1
training svm
Hello. I'm new to R and I'm trying to solve a classification problem. I have
a training dataset of about 40,000 rows and 50 columns. When I try to train
support vector machine, it gives me this error after a few seconds:
Error in predict.svm(ret, xhold) : Model is empty!
This is the code I use:
ne_span_data <- as.matrix(read.table('ne_span.data.R.txt', header=TRUE,
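The excerpt is cut off before the svm() call itself, so the cause is not visible here; two generic checks (assumptions, not a confirmed diagnosis for "Model is empty!") are missing values and what as.matrix() produced:

anyNA(ne_span_data)     # svm()'s default na.action drops rows with NAs
str(ne_span_data)       # as.matrix() on mixed columns yields a character matrix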
2006 Mar 30
1
Predict function for 'newdata' of different dimension in svm
I am using the "predict" function on a support vector machine (svm)
object, and I don't understand why I can't predict on a dataset with more
observations than the training dataset.
I think this problem is a generic "predict" problem, but I'm not sure.
The original svm was fit on 50 observations.
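predict() itself does not care how many rows newdata has, only that it carries the same predictor columns (same names and types) as the training data, so a mismatch is usually in the columns rather than the number of observations. A toy illustration:

library(e1071)
fit  <- svm(Species ~ ., data = iris[seq(1, 150, by = 3), ])   # fit on 50 observations
pred <- predict(fit, newdata = iris)                           # predict on all 150 rows
length(pred)                                                   # 150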
2009 Sep 06
2
Regarding SVM using R
Hi Abbas,
Before I try to give you answers, I just want to mention that you
should send R-related requests to the R-help list, and not me
personally because (i) there's a greater likelihood that it will get
answered in a timely manner, and (ii) people who might have a similar
problem down the road might benefit from any answer via searching the
list archives ... anyway:
On Sep 5, 2009, at
2009 Aug 04
1
Save model and predictions from svm
Hello,
I'm using the e1071 package for training an SVM. It seems to be working
well.
This question has two parts:
1) Once I've trained an SVM model, I want to USE it within R at a later
date to predict various new data. I see the write.svm command, but
don't know how to LOAD the model back in so that I can use it tomorrow.
How can I do this?
2) I would like to add the
2006 Feb 16
2
getting probabilities from SVM
I am using SVM to classify categorical data and I would like the
probabilities instead of the classification. ?predict.svm says that it's
only enabled when you train the model with it enabled, so I did that, but it
didn't work. I can't even get it to work with iris. The help file shows
that probability = TRUE when training the model, but doesn't show an
example. Then I try to
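For the record, the iris case does work when probability = TRUE is set both at fitting time and at prediction time; a minimal example of the kind the help file omits:

library(e1071)
fit  <- svm(Species ~ ., data = iris, probability = TRUE)
pred <- predict(fit, iris, probability = TRUE)
head(attr(pred, "probabilities"))    # one column of class probabilities per level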
2008 Oct 12
1
svm models in a loop
I want to train svm models on increasingly large training data subsets
of some zrr as follows:
> m <- sapply(1:5,function(i)
svm(person_oid~.,data=zrr[1:100*i,])) # (*)
However, when I inspect m[1], it literally shows
> m[1]
[[1]]
svm(formula = person_oid ~ ., data = zrr[1:N, ])
-- as opposed to
> m1 <- svm(person_oid~.,data=zrr[1:100,])
> m1
Call:
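Two things appear to be going on in the quoted code: sapply() simplifies the list of fitted models into a structure of their components (so m[1] ends up being just the stored call), and 1:100*i parses as (1:100)*i rather than the first 100*i rows. A sketch that sidesteps both, keeping the poster's zrr / person_oid names:

models <- lapply(1:5, function(i)
  svm(person_oid ~ ., data = zrr[1:(100 * i), ]))   # first 100, 200, ..., 500 rows
models[[1]]                                         # prints the first fitted model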
2013 Jan 08
1
Levels in new data fed to SVM
Hi all,
I've encountered an issue using svm (e1071) in the specific case of
supplying new data which may not have the full range of levels that
were present in the training data.
I've constructed this really primitive example to illustrate the point:
> library(e1071)
> training.data <- data.frame(x = c("yellow","red","yellow","red"), a =
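The usual workaround is to force the factor columns of the new data to carry the full set of training levels before calling predict(). A sketch continuing the example, with new.data and fit as placeholder names for the new data frame and the trained model:

new.data$x <- factor(new.data$x, levels = levels(training.data$x))
pred <- predict(fit, new.data)          # fit = the svm trained on training.data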
2011 Apr 09
3
In svm(), how to connect quantitative prediction result to categorical result?
Hi,
I am studying the SVM functions of the e1071 package to do prediction, and I found that when the training data are of "factor" type, svm.predict() can predict the categories directly; but if the response variable is numerical, the predicted values from svm are continuous quantitative numbers. How can I connect these quantitative numbers to categories? (for
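In e1071 the type of the response decides the machine: a factor response gives C-classification (predictions are class labels), a numeric response gives eps-regression (predictions are continuous). So the cleanest route is to convert the response to a factor before fitting; a sketch with placeholder names x, y, newx:

fit  <- svm(x, factor(y))   # factor response -> C-classification; predict() returns labels
pred <- predict(fit, newx)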
2004 Dec 01
1
tuning SVMs
Hi
I am doing this sort of thing:
POLY:
> obj = best.tune(svm, similarity ~ ., data = training, kernel =
"polynomial")
> summary(obj)
Call:
best.tune(svm, similarity ~ ., data = training, kernel = "polynomial")
Parameters:
SVM-Type: eps-regression
SVM-Kernel: polynomial
cost: 1
degree: 3
gamma: 0.04545455
coef.0: 0
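With no ranges argument there is only one candidate parameter set, so best.tune() simply returns a model with the defaults (cost 1, degree 3, ...), which is what the output above shows. To actually search, pass a ranges list; a sketch for the polynomial kernel, re-using the poster's formula and data names:

obj <- best.tune(svm, similarity ~ ., data = training, kernel = "polynomial",
                 ranges = list(cost = 2^(-2:6), degree = 2:4, gamma = c(0.01, 0.05, 0.1)))
summary(obj)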
2005 Jan 20
2
Cross-validation accuracy in SVM
Hi all -
I am trying to tune an SVM model by optimizing the cross-validation
accuracy. Maximizing this value doesn't necessarily seem to minimize the
number of misclassifications. Can anyone tell me how the
cross-validation accuracy is defined? In the output below, for example,
cross-validation accuracy is 92.2%, while the number of correctly
classified samples is (1476+170)/(1476+170+4) =
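For reference, the quoted fraction works out to roughly 99.8%, well above the 92.2% cross-validation figure; that is not necessarily a contradiction, since the cross = k accuracy is measured on held-out folds while a confusion table like this one is typically computed on the full training data. The arithmetic:

(1476 + 170) / (1476 + 170 + 4)   # ~0.9976, i.e. about 99.8% correct on the training set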
2009 Oct 21
2
SVM probability output variation
Dear R:ers,
I'm using the svm from the e1071 package to train a model with the
option "probabilities = TRUE". I then use "predict" with "probabilities
= TRUE" and get the probabilities for the data point belonging to either
class. So far all is well.
My question is why I get different results each time I train the model,
although I use exactly the same data.
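A likely source of the variation (an assumption, not verified against the poster's data) is that with probability = TRUE the probability model is calibrated by an internal cross-validation on randomly shuffled folds in the underlying libsvm code, so some run-to-run jitter in the estimates is expected. A small illustration of the effect on iris:

library(e1071)
p <- replicate(3, {
  fit <- svm(Species ~ ., data = iris, probability = TRUE)
  attr(predict(fit, iris[1, ], probability = TRUE), "probabilities")
})
p   # the probability estimates differ slightly from run to run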
2010 Jun 29
2
Need help for SVM code for microarray classification
Hi, I am Aadhithya. I am trying to write code to classify microarray data
(AML and ALL) using SVM in R. My code goes like this:
library(e1071)
train<-read.table("Z:/Documents/train.txt",header=T);
test<-read.table("Z:/Documents/test.txt",header=T);
cl <- c(c(rep("ALL",10), rep("AML",10)));
model<- svm(train,cl);
pred <-
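Two points that usually matter with this pattern (stated as assumptions, since the files themselves are not shown): svm(x, y) expects one row per sample, so a genes-in-rows expression table needs transposing, and the labels are best passed as a factor. A sketch along those lines:

library(e1071)
train <- read.table("Z:/Documents/train.txt", header = TRUE)
test  <- read.table("Z:/Documents/test.txt",  header = TRUE)
# svm(x, y) expects one row per sample; if the tables have genes as rows, transpose:
# train <- t(train); test <- t(test)
cl    <- factor(c(rep("ALL", 10), rep("AML", 10)))     # class labels as a factor
model <- svm(as.matrix(train), cl)
pred  <- predict(model, as.matrix(test))
table(pred)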
2010 Oct 21
1
SVM classification based on pairwise distance matrix
Dear all,
I am exploring the possibilities for automated classification of my
data. I have successfully used KNN, but was thinking about looking at
SVM (which I did not use before).
I have a pairwise distance matrix of training observations which are
classified in set classes, and a distance matrix of new observations to
the training ones.
Is it possible to use distance matrices for SVM, and
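e1071's svm() only offers the built-in kernels, but kernlab's ksvm() accepts a precomputed kernel matrix, so one option is to turn the distance matrix into a similarity (kernel) matrix first. A sketch only, assuming D is the training distance matrix and y the class labels, with one of several possible distance-to-kernel transforms:

library(kernlab)
D <- as.matrix(D)
K <- as.kernelMatrix(exp(-D^2 / median(D)^2))   # one possible distance -> kernel transform
fit <- ksvm(K, y, type = "C-svc")
# for new observations, build the new-vs-training kernel matrix the same way and
# pass it to predict(); see the kernelMatrix example in ?ksvm for the details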
2010 May 05
2
probabilities in svm output in e1071 package
svm.fit<-svm(as.factor(out) ~ ., data=all_h, method="C-classification",
kernel="radial", cost=bestc, gamma=bestg, cross=10) # model fitting
svm.pred<-predict(svm.fit, hh, decision.values = TRUE, probability = TRUE) #
find the probability, but can not find.
attr(svm.pred, "probabilities")
> attr(svm.pred, "probabilities")
1 0
1 0 0
2 0
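One detail worth checking (an inference from the call shown, not from the data): predict() can only return the probabilities attribute if the model itself was trained with probability = TRUE, and the svm() call above does not set it; also, the argument for choosing the machine is type =, not method =, though with a factor response C-classification is the default anyway. A sketch of the adjusted calls:

svm.fit  <- svm(as.factor(out) ~ ., data = all_h, type = "C-classification",
                kernel = "radial", cost = bestc, gamma = bestg, cross = 10,
                probability = TRUE)
svm.pred <- predict(svm.fit, hh, decision.values = TRUE, probability = TRUE)
head(attr(svm.pred, "probabilities"))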