similar to: Help in SVM prediction

Displaying 20 results from an estimated 6000 matches similar to: "Help in SVM prediction"

2011 Oct 04
2
handling constant factors in prediction using svm
Hi users! I am fitting a model with several factor variables as independent variables using svm. Since there are lots of categorical variables, the training and test data sets have been created using the dummy.data.frame option from the dummies package. I have a factor A in the training data set with 2 levels (0,1). In the test set, this factor A has only 1 level (1), and hence when applying dummy.data.frame, the
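The excerpt cuts off before the error, but a hedged sketch of the usual workaround follows: give the test factor the same levels as the training factor before dummy encoding, so the missing level still gets a (zero) column. The data frames and the base-R model.matrix() call below are made-up stand-ins for the poster's data and the dummies package.

    # Sketch: align factor levels of a test set with the training set before dummy
    # encoding, so a level that is absent from the test data still gets a column.
    train <- data.frame(A = factor(c(0, 1, 1, 0)), x = rnorm(4))
    test  <- data.frame(A = factor(c(1, 1)),       x = rnorm(2))

    # Re-declare the test factor with the training levels (adds the missing level 0)
    test$A <- factor(test$A, levels = levels(train$A))

    # model.matrix() then produces identically named dummy columns for both sets
    model.matrix(~ A - 1, data = train)
    model.matrix(~ A - 1, data = test)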
2011 Sep 07
1
predictive modeling and extremely large data
Hi, I am new to R and here is what I am doing in it now. I am using a machine learning technique (svm) to do predictive modeling. The data that I am using is bound to grow perpetually. What I want to know is: say I fed a data set with 5000 data points to svm initially. The algorithm derives a certain intelligence (i.e., output) based on these 5000 data points. I have an additional
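Since svm() in e1071 has no incremental-update mode, a minimal sketch of the usual approach is simply to refit on the accumulated data when a new batch arrives; the data frames and object names below are made up for illustration.

    library(e1071)

    # Toy "initial" data set and a later batch of new points (made-up example)
    old_data <- data.frame(y = factor(rep(c("a", "b"), each = 50)),
                           x1 = rnorm(100), x2 = rnorm(100))
    new_data <- data.frame(y = factor(rep(c("a", "b"), each = 10)),
                           x1 = rnorm(20), x2 = rnorm(20))

    fit1 <- svm(y ~ ., data = old_data)

    # e1071's svm() has no incremental update, so the usual approach is to refit
    # on the combined data once new points arrive.
    all_data <- rbind(old_data, new_data)
    fit2 <- svm(y ~ ., data = all_data)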
2011 Oct 11
4
Problem executing function
Hello All, I have a series of steps that need to be run many times, hence I put them all into a function. There is no problem in function creation, but when I call the function, the steps are not executed, or only the first step gets executed. What could possibly be the reason? Sample function and the result: fun <- function () { # Package load into R; a <-
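The excerpt is truncated, but a frequent cause is that expressions inside a function body are not auto-printed the way they are at the prompt. A hedged sketch (the body shown is made up, not the poster's actual steps):

    # Inside a function, results are not auto-printed, so wrap intermediate
    # results in print() or return them explicitly.
    fun <- function() {
      library(e1071)                      # package load inside the function
      a <- summary(iris)                  # computed silently on its own
      print(a)                            # make the step visible when fun() runs
      model <- svm(Species ~ ., data = iris)
      invisible(model)                    # return the last result for later use
    }

    m <- fun()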
2010 May 14
4
Categorical Predictors for SVM (e1071)
Dear all, I have a question about using categorical predictors for SVM, using "svm" from library(e1071). If I have multiple categorical predictors, should they just be included as factors? Take a simple artificial data example: x1<-rnorm(500) x2<-rnorm(500) #Categorical Predictor 1, with 5 levels x3<-as.factor(rep(c(1,2,3,4,5),c(50,150,130,70,100))) #Categorical Predictor
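Continuing the poster's artificial example, a minimal sketch (the response y and the data frame dat are made up): factor predictors can be passed directly to the formula interface, which dummy-codes them internally.

    library(e1071)

    set.seed(1)
    x1 <- rnorm(500)
    x2 <- rnorm(500)
    # Categorical predictor with 5 levels, as in the post
    x3 <- as.factor(rep(c(1, 2, 3, 4, 5), c(50, 150, 130, 70, 100)))
    y  <- factor(ifelse(x1 + x2 + as.numeric(x3) / 5 + rnorm(500) > 1, "yes", "no"))

    dat <- data.frame(y, x1, x2, x3)

    # With the formula interface, factor predictors are expanded to dummy
    # variables internally, so they can simply be included as factors.
    fit <- svm(y ~ ., data = dat)
    table(predicted = predict(fit, dat), observed = dat$y)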
2011 Sep 24
0
Assessing prediction performance of SVM using e1071 package
Dear R-Users! I am using the svm function (e1071 package) to classify two groups using a set of 180 indicator variables. Now I am confused about the cross-validation procedure. (A) On one hand I use the setting cross=10 in the svm function to run 10 cross-validation iterations and to get an estimate of the svm's performance in prediction. (B) On the other hand most tutorials I found
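A minimal sketch of option (A), using iris as a stand-in for the 180-indicator data: with cross=10, the per-fold accuracies are stored on the fitted object.

    library(e1071)

    # 10-fold cross-validation requested directly in the fit; the per-fold
    # accuracies and their average are stored on the returned object.
    fit <- svm(Species ~ ., data = iris, cross = 10)
    fit$accuracies    # accuracy in each of the 10 folds
    fit$tot.accuracy  # averaged cross-validation accuracy
    summary(fit)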
2016 Sep 23
2
How to enable the svm cpu flag inside a vm?
Hello, I'm trying to get the Android Emulator to run inside a KVM VM on CentOS-6. Apparently the latest Android Emulators cannot run without hardware acceleration, so I am trying to get the VM to see the svm cpu flag. Host: $ grep model\ name /proc/cpuinfo | sort -u model name : AMD Phenom(tm) II X4 965 Processor $ grep svm /proc/cpuinfo | sort -u flags : fpu vme de pse tsc msr pae mce cx8
2011 May 26
0
R svm prediction kernlab
Hi All, I am using the ksvm method in the kernlab R package for support vector machines. I learned a multiclass one-against-one svm from training data and am using it to classify new data points. But I want to update/fine-tune the 'svm weights' based on some criteria and use the updated svm weights in the predict method framework. I don't know if it's possible or not, how do I classify new
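kernlab does expose the fitted pieces through accessor functions, although pushing modified weights back into predict() is not part of the documented interface as far as I know; a hedged sketch on a toy data set:

    library(kernlab)

    # Multiclass one-against-one SVM on a toy data set
    fit <- ksvm(Species ~ ., data = iris, kernel = "rbfdot")

    # Accessors for the fitted pieces: per-binary-problem coefficients,
    # their indices into the training data, and the offsets.
    alpha(fit)       # list of coefficient vectors, one per pairwise problem
    alphaindex(fit)  # which training rows act as support vectors
    b(fit)           # offsets of the pairwise decision functions

    # predict() uses these internally; modifying them and feeding the result
    # back into predict() is not documented, so it would need custom code.
    predict(fit, head(iris[, -5]))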
2011 Feb 02
2
SVM Prediction and Plot
Hi, I'm trying to predict using a model I fitted with SVM. I constructed the model (called Svm) using a training set, and now I want to use a test set (called BankTest) for prediction. The response variable is in the first column of BankTest. > SvmPred = predict(Svm, BankTest[,-1], probability=TRUE) > SvmPredRes = table(Pred = SvmPred, True = BankTest[,1]) I get this error: Error in
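The error message is cut off, but two usual suspects are a model not trained with probability=TRUE and test columns that do not match the training columns. A hedged sketch with a made-up data frame standing in for BankTest:

    library(e1071)

    # Made-up data standing in for the poster's training and BankTest sets,
    # with the response in the first column.
    set.seed(1)
    bank <- data.frame(y = factor(sample(c("yes", "no"), 200, replace = TRUE)),
                       balance = rnorm(200), age = rnorm(200))
    BankTrain <- bank[1:150, ]
    BankTest  <- bank[151:200, ]

    # probability = TRUE must already be set when the model is trained,
    # otherwise probability output is unavailable at prediction time.
    Svm <- svm(y ~ ., data = BankTrain, probability = TRUE)

    # The test predictors must carry the same column names as in training.
    SvmPred    <- predict(Svm, BankTest[, -1], probability = TRUE)
    SvmPredRes <- table(Pred = SvmPred, True = BankTest[, 1])
    SvmPredRes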
2005 Mar 16
0
decision values and probability in SVM
Hi, I am using SVM from the e1071 package. I can get decision values very easily, but whenever I try to get the probability measure, it returns NULL. I use the following code to generate decision.values and probability. Is there anything wrong with it? predictor<-svm(train[,c(x1, x2, x3)], train[,x4], probability=TRUE) pred<-predict(predictor, test[,c(x1, x2, x3)], probability=TRUE,
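The usual cause of the NULL is that probability=TRUE must be set when the model is fitted as well, and the estimates are then retrieved with attr(). A sketch using iris as a stand-in for the poster's train and test objects:

    library(e1071)

    # iris stands in for the poster's train/test split
    train <- iris[c(1:40, 51:90, 101:140), ]
    test  <- iris[c(41:50, 91:100, 141:150), ]

    # probability = TRUE has to be given when fitting, not only when predicting
    predictor <- svm(train[, 1:4], train[, 5], probability = TRUE)

    pred <- predict(predictor, test[, 1:4],
                    probability = TRUE, decision.values = TRUE)

    attr(pred, "decision.values")  # pairwise decision values
    attr(pred, "probabilities")    # class probability estimates (NULL only if
                                   # the model was fitted without probability=TRUE)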
2011 Apr 09
3
In svm(), how to connect quantitative prediction result to categorical result?
Hi, I am studying the SVM functions of the e1071 package to do prediction, and I found that if the training response is of "factor" type, then predict() on the svm model returns categories directly; but if the response variable is "numerical", the predicted values from svm are continuous quantitative numbers. How can I connect these quantitative numbers to categories? (for
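A minimal sketch of the usual fix (toy data, made up for illustration): code the response as a factor so svm() switches to C-classification and predict() returns categories directly.

    library(e1071)

    # Numeric 0/1 response: svm() defaults to regression and returns numbers
    dat <- data.frame(y = rep(c(0, 1), each = 50),
                      x1 = rnorm(100, mean = rep(c(0, 2), each = 50)))

    fit_reg <- svm(y ~ x1, data = dat)
    head(predict(fit_reg, dat))          # continuous values

    # Coding the response as a factor switches svm() to C-classification,
    # so predict() returns categories directly.
    dat$y <- factor(dat$y)
    fit_cls <- svm(y ~ x1, data = dat)
    head(predict(fit_cls, dat))          # factor levels "0"/"1"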
2017 Jul 06
0
svm.formula versus svm.default - different results
Dear community, I'm performing SVM regression with svm from the e1071 library. As I wrote in another post, "svm e1071 call - different results", I get different results if I use svm.default rather than svm.formula, the svm.formula results being the better ones. I've debugged both options. While debugging svm.formula, I've seen that when I reach the call: ret <-
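Without the poster's data it is hard to say more, but on purely numeric data the two interfaces should agree; differences usually come from factor or NA handling in the formula method. A hedged sketch with a made-up data frame:

    library(e1071)

    # On purely numeric data the two interfaces should give the same fit;
    # discrepancies typically come from factor or NA handling in svm.formula.
    dat <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))

    fit_formula <- svm(y ~ x1 + x2, data = dat)               # svm.formula
    fit_default <- svm(x = dat[, c("x1", "x2")], y = dat$y)   # svm.default

    all.equal(as.numeric(predict(fit_formula, dat)),
              as.numeric(predict(fit_default, dat[, c("x1", "x2")])))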
2020 Feb 07
0
[RFC PATCH v7 17/78] KVM: svm: pass struct kvm_vcpu to set_msr_interception()
From: Nicușor Cîțu <ncitu at bitdefender.com> This is needed in order to handle clients controlling the MSR related VM-exits. Signed-off-by: Nicușor Cîțu <ncitu at bitdefender.com> Signed-off-by: Adalbert Lazăr <alazar at bitdefender.com> --- arch/x86/kvm/svm.c | 27 +++++++++++++++------------ 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/svm.c
2009 Jul 18
1
svm works but tune.svm give error
Hello, I'm using the e1071 library for SVM functions. I can quickly train an SVM with: svm(formula = label ~ ., data = testdata) That works well. I want to tune the parameters, so I tried: tune.svm(label ~ ., data=testdata[1:2000, ], gamma=10^(-6:3), cost=10^(1:2)) This fails with an error: 'names' attribute [199] must be the same length as the vector [184] I don't
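That particular length-mismatch error often traces back to rows silently dropped for NAs or to a non-factor label. A hedged sketch, with iris renamed as a stand-in for testdata and a smaller gamma grid to keep it quick:

    library(e1071)

    # Stand-in for the poster's testdata
    testdata <- iris
    names(testdata)[5] <- "label"

    # Make sure the label is a factor and no rows carry NAs before tuning;
    # mismatched lengths after silent NA dropping are a common cause of the
    # "'names' attribute must be the same length as the vector" error.
    testdata$label <- factor(testdata$label)
    testdata <- testdata[complete.cases(testdata), ]

    tuned <- tune.svm(label ~ ., data = testdata,
                      gamma = 10^(-3:0), cost = 10^(1:2))
    summary(tuned)
    tuned$best.parameters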
2009 May 11
1
Problems to run SVM regression with e1071
Hi R users, I'm trying to run an SVM regression using the e1071 package, but the svm() function always applies a classification method rather than regression. svm.m1 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03) Parameters: SVM-Type: C-classification SVM-Kernel: radial cost: 1000 gamma: 0.001 Number of Support Vectors: 209
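svm() chooses its type from the class of the response, so a factor (or character) st gives C-classification. A sketch with a made-up train data frame: make sure the response is numeric, or request the regression type explicitly.

    library(e1071)

    # Toy stand-in for the poster's 'train' data, with a numeric response
    train <- data.frame(st = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))

    # svm() picks its type from the class of the response: a factor gives
    # C-classification, a numeric vector gives eps-regression.
    str(train$st)                       # make sure this is numeric, not a factor
    svm.m1 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03)
    summary(svm.m1)                     # SVM-Type should now read eps-regression

    # Being explicit also works, and guards against a factor response:
    svm.m2 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03,
                  type = "eps-regression")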
2020 Jul 21
0
[PATCH v9 13/84] KVM: svm: add support for descriptor-table exits
From: Nicușor Cîțu <ncitu at bitdefender.com> This function is needed for the KVMI_EVENT_DESCRIPTOR event. Signed-off-by: Nicușor Cîțu <ncitu at bitdefender.com> Signed-off-by: Adalbert Lazăr <alazar at bitdefender.com> --- arch/x86/kvm/svm/svm.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index
2020 Aug 24
0
[PATCH v6 02/76] KVM: SVM: Add GHCB definitions
From: Tom Lendacky <thomas.lendacky at amd.com> Extend the vmcb_save_area with SEV-ES fields and add a new 'struct ghcb' which will be used for guest-hypervisor communication. Signed-off-by: Tom Lendacky <thomas.lendacky at amd.com> Signed-off-by: Joerg Roedel <jroedel at suse.de> --- arch/x86/include/asm/svm.h | 45 +++++++++++++++++++++++++++++++++++++-
2011 Feb 18
1
segfault during example(svm)
If I do: > library("e1071") > example(svm) I get: svm> data(iris) svm> attach(iris) svm> ## classification mode svm> # default with factor response: svm> model <- svm(Species ~ ., data = iris) svm> # alternatively the traditional interface: svm> x <- subset(iris, select = -Species) svm> y <- Species svm> model <- svm(x, y) svm>
2020 Aug 24
0
[PATCH v6 01/76] KVM: SVM: nested: Don't allocate VMCB structures on stack
From: Joerg Roedel <jroedel at suse.de> Do not allocate a vmcb_control_area and a vmcb_save_area on the stack, as these structures will become larger with future extensions of SVM and thus the svm_set_nested_state() function will end up with too large a stack frame. Signed-off-by: Joerg Roedel <jroedel at suse.de> --- arch/x86/kvm/svm/nested.c | 47
2009 Mar 12
0
e1071 SVM one-classification tune problem
Hello all, I am using the e1071 SVM with the tune options for classification, which works pretty well, given the examples of using the tune.svm function for classification. But I have not found any example of tuning the SVM novelty-detection (one-classification) parameters (gamma, cost, nu). For example, these are some of the options I have tried with no success: obj<-tune(svm, x,y, type
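The failing call is cut off; as a hedged alternative (not from the thread), a manual grid search over nu and gamma sidesteps tune()'s error function for one-classification entirely. The iris-based split and parameter ranges below are made up.

    library(e1071)

    # Manual grid search for one-classification (novelty detection). The
    # setosa rows play the target class; the other species act as novelties.
    target  <- iris[iris$Species == "setosa", 1:4]
    novel   <- iris[iris$Species != "setosa", 1:4]
    holdout <- rbind(target[41:50, ], novel)             # mixed evaluation set
    truth   <- c(rep(TRUE, 10), rep(FALSE, nrow(novel))) # TRUE = target class

    grid <- expand.grid(nu = c(0.01, 0.05, 0.1, 0.2), gamma = c(0.05, 0.25, 1))
    grid$accuracy <- NA

    for (i in seq_len(nrow(grid))) {
      fit <- svm(target[1:40, ], type = "one-classification",
                 nu = grid$nu[i], gamma = grid$gamma[i])
      pred <- predict(fit, holdout)      # logical: TRUE = "belongs to the class"
      grid$accuracy[i] <- mean(pred == truth)
    }

    grid[which.max(grid$accuracy), ]     # best (nu, gamma) on the holdout set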
2003 Nov 03
1
svm in e1071 package: polynomial vs linear kernel
I am trying to understand the difference between the linear and polynomial kernels: linear: u'*v; polynomial: (gamma*u'*v + coef0)^degree. It would seem that a polynomial kernel with gamma = 1, coef0 = 0 and degree = 1 should be identical to the linear kernel; however, it gives me significantly different results for a very simple data set, with the linear kernel
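One plausible explanation (hedged, since the data set is not shown) is that svm()'s default gamma is 1/ncol(x) rather than 1, so the two kernels only coincide when gamma and coef0 are fixed explicitly. A sketch comparing decision values on toy data:

    library(e1071)

    # Two-class toy data
    set.seed(1)
    x <- matrix(rnorm(200), ncol = 2)
    y <- factor(x[, 1] + x[, 2] > 0)

    # Linear kernel vs. a degree-1 polynomial kernel with gamma and coef0 fixed
    # explicitly (the default gamma is 1/ncol(x), not 1, so it must be set).
    fit_lin  <- svm(x, y, kernel = "linear", scale = FALSE)
    fit_poly <- svm(x, y, kernel = "polynomial", degree = 1,
                    gamma = 1, coef0 = 0, scale = FALSE)

    # With these settings both kernels compute the same inner product,
    # so the decision values should essentially coincide.
    dv_lin  <- attr(predict(fit_lin,  x, decision.values = TRUE), "decision.values")
    dv_poly <- attr(predict(fit_poly, x, decision.values = TRUE), "decision.values")
    all.equal(dv_lin, dv_poly)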