similar to: Plot.svm error

Displaying 20 results from an estimated 1000 matches similar to: "Plot.svm error"

2008 Jan 04
3
Plot error
Hi all, I'm trying to plot an svm model and I'm getting the following error: > plot(model, data= dados[,-1], formula=formula(dados[,2]~dados[,3]),svSymbol = 1, dataSymbol = 2, symbolPalette = rainbow(4),color.palette = terrain.colors) Error in terms.default(x) : no terms component Does anyone know how to solve this? Best regards, Pedro Marques
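The "no terms component" error typically means the formula handed to plot.svm is built from index expressions (dados[,2] ~ dados[,3]) rather than from the column names of the data frame passed as 'data'. A minimal sketch of a working call, using the built-in iris data as a stand-in for dados (which isn't shown):

    library(e1071)

    model <- svm(Species ~ ., data = iris)

    # Choose the two plotting dimensions by column name; fix the remaining
    # predictors with 'slice' so the 2-D decision boundary can be drawn.
    plot(model, data = iris, formula = Petal.Width ~ Petal.Length,
         slice = list(Sepal.Width = 3, Sepal.Length = 4),
         svSymbol = 1, dataSymbol = 2,
         symbolPalette = rainbow(4), color.palette = terrain.colors)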
2008 Jan 03
0
Svm formula
Hi all, I don't know how to choose the formula to use when plotting an svm model. I think I'm using the wrong one, which is why I'm having trouble. I would be very grateful if someone could help me with this. > dados<-read.table("b.txt",sep="",nrows=30000) >
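If the data come from read.table() without a header, the columns get the default names V1, V2, ..., and the plot formula must use those same names. A sketch under the assumption (not verifiable from the excerpt) that b.txt exists, the response is in column 1, and the remaining columns are numeric predictors:

    library(e1071)

    dados <- read.table("b.txt", sep = "", nrows = 30000)
    dados$V1 <- factor(dados$V1)          # classification target

    fit <- svm(V1 ~ ., data = dados)

    # The plot formula only selects which two predictors to display, so it
    # must use the column names the model was fitted with; any further
    # predictors would have to be fixed via slice = list(V4 = 0, ...).
    plot(fit, data = dados, formula = V3 ~ V2)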
2006 Jan 18
2
Help with plot.svm from e1071
Hi. I'm trying to plot a pair of intertwined spirals and an svm that separates them. I'm having some trouble. Here's what I tried. > library(mlbench) > library(e1071) Loading required package: class > raw <- mlbench.spirals(200,2) > spiral <- data.frame(class=as.factor(raw$classes), x=raw$x[,1], y=raw$x[,2]) > m <- svm(class~., data=spiral) > plot(m,
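The second argument of plot.svm is simply the data frame the model was fitted on, and with only two predictors no formula or slice is needed, so the truncated call can plausibly be completed as below. A sketch (the larger gamma is an assumption; the default radial kernel tends to underfit the spirals):

    library(mlbench)
    library(e1071)

    set.seed(1)
    raw    <- mlbench.spirals(200, 2)
    spiral <- data.frame(class = as.factor(raw$classes),
                         x = raw$x[, 1], y = raw$x[, 2])

    # A larger gamma lets the radial kernel bend around the spirals.
    m <- svm(class ~ ., data = spiral, kernel = "radial", gamma = 10)

    plot(m, spiral)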
2006 Jan 19
0
Using svm.plot with mlbench.spirals.
Hi. I'm trying to plot a pair of intertwined spirals and an svm that separates them. I'm having some trouble. Here's what I tried. > library(mlbench) > library(e1071) Loading required package: class > raw <- mlbench.spirals(200,2) > spiral <- data.frame(class=as.factor(raw$classes), x=raw$x[,1], y=raw$x[,2]) > m <- svm(class~., data=spiral) > plot(m,
2006 Jul 07
1
Polynomial kernel in SVM in e1071 package
Dear list, In some places (for example, http://en.wikipedia.org/wiki/Support_vector_machine), the polynomial kernel in SVM is written as (u'*v + 1)^d, while in the documentation of svm() in the e1071 package, the polynomial kernel is written as (gamma*u'*v + coef0)^d. I am a little confused here: when doing parameter optimization (grid search or similar) for the polynomial kernel, does one need to tune
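For what it's worth, e1071's (gamma*u'*v + coef0)^degree reduces to the textbook (u'*v + 1)^d form when gamma = 1 and coef0 = 1; whether you fix them or tune them is a modelling choice. A sketch of including them in a grid search with tune.svm (iris is only a placeholder data set):

    library(e1071)

    # Fixing gamma = 1 and coef0 = 1 reproduces (u'*v + 1)^d; leaving them
    # in the grid tunes e1071's more general parameterisation instead.
    tuned <- tune.svm(Species ~ ., data = iris,
                      kernel = "polynomial",
                      degree = 2:3,
                      gamma  = c(0.5, 1, 2),
                      coef0  = c(0, 1),
                      cost   = 2^(0:4))

    summary(tuned)
    tuned$best.parameters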
2015 Dec 20
2
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
Jonathan Lennox wrote: > +opus_int32 silk_noise_shape_quantizer_short_prediction_neon(const opus_int32 *buf32, const opus_int32 *coef32) > +{ > + int32x4_t coef0 = vld1q_s32(coef32); > + int32x4_t coef1 = vld1q_s32(coef32 + 4); > + int32x4_t coef2 = vld1q_s32(coef32 + 8); > + int32x4_t coef3 = vld1q_s32(coef32 + 12); > + > + int32x4_t a0 = vld1q_s32(buf32 -
2003 Nov 03
1
svm in e1071 package: polynomial vs linear kernel
I am trying to understand the difference between the linear and polynomial kernels: linear: u'*v polynomial: (gamma*u'*v + coef0)^degree It would seem that the polynomial kernel with gamma = 1, coef0 = 0 and degree = 1 should be identical to the linear kernel; however, it gives me significantly different results on a very simple data set, with the linear kernel
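Algebraically the two kernels are the same function under those settings, so any discrepancy should come from scaling or numerics rather than the model itself. A quick sanity check one could run (sketch; iris stands in for the poster's data):

    library(e1071)

    d <- iris[iris$Species != "setosa", ]     # simple two-class subset
    d$Species <- droplevels(d$Species)

    m_lin  <- svm(Species ~ ., data = d, kernel = "linear", cost = 1)
    m_poly <- svm(Species ~ ., data = d, kernel = "polynomial",
                  degree = 1, gamma = 1, coef0 = 0, cost = 1)

    # With identical kernels the predicted labels should agree (almost)
    # everywhere; large disagreement points to scaling/numerical issues.
    table(linear = predict(m_lin, d), polynomial = predict(m_poly, d))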
2015 Nov 23
1
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
On Nov 23, 2015, at 12:04 PM, John Ridges <jridges at masque.com> wrote: Hi Jonathan. I really, really hate to bring this up this late in the game, but I just noticed that your NEON code doesn't use any of the "high" intrinsics for ARM64, e.g. instead of: int32x4_t coef1 = vmovl_s16(vget_high_s16(coef16)); you could use: int32x4_t coef1
2004 Nov 29
1
tune()
Hi, I am trying to tune an svm by doing the following: tune(svm, similarity ~., data = training, degree = 2^(1:2), gamma = 2^(-1:1), coef0 = 2^(-1:1), cost = 2^(2:4), type = "polynomial") but I am getting Error in svm.default(x, y, scale = scale, ...) : wrong type specification! I have to admit I am not sure what I am doing wrong. Could anyone tell me why the
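In svm(), 'type' selects the machine (C-classification, eps-regression, ...), while the kernel is chosen with 'kernel'; when calling the generic tune(), the parameter grid also belongs in 'ranges'. A sketch of the corrected call, with a synthetic stand-in for the poster's 'training' data:

    library(e1071)

    set.seed(42)
    training <- data.frame(similarity = factor(sample(c("a", "b"), 100, TRUE)),
                           f1 = rnorm(100), f2 = rnorm(100))

    tuned <- tune(svm, similarity ~ ., data = training,
                  kernel = "polynomial",               # not type = "polynomial"
                  ranges = list(degree = 2^(1:2),
                                gamma  = 2^(-1:1),
                                coef0  = 2^(-1:1),
                                cost   = 2^(2:4)))

    summary(tuned)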
2003 Dec 10
3
e1071:svm - default epsilon = 0.1 (NOT 0.5) (PR#5671)
In the e1071 package, svm's default epsilon value is set to 0.1, not 0.5 as the documentation says. R
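The reported default is easy to verify from a fitted regression model; a minimal sketch, assuming (as in recent e1071 versions) that the value is stored under $epsilon and that svm.default carries it as a formal argument:

    library(e1071)

    fit <- svm(dist ~ speed, data = cars, type = "eps-regression")

    fit$epsilon                            # 0.1, matching the bug report
    formals(e1071:::svm.default)$epsilon   # default in the function signature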
2015 Nov 20
2
[Aarch64 00/11] Patches to enable Aarch64
> On Nov 19, 2015, at 5:47 PM, John Ridges <jridges at masque.com> wrote: > > Any speedup from the intrinsics may just be swamped by the rest of the encode/decode process. But I think you really want SIG2WORD16 to be (vqmovns_s32(PSHR32((x), SIG_SHIFT))) Yes, you're right. I forgot to run the vectors under qemu with my previous version (oh, the embarrassment!) Fixed forthcoming
2015 Dec 21
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
> On Dec 19, 2015, at 10:07 PM, Timothy B. Terriberry <tterribe at xiph.org> wrote: > > Jonathan Lennox wrote: >> +opus_int32 silk_noise_shape_quantizer_short_prediction_neon(const opus_int32 *buf32, const opus_int32 *coef32) >> +{ >> + int32x4_t coef0 = vld1q_s32(coef32); >> + int32x4_t coef1 = vld1q_s32(coef32 + 4); >> + int32x4_t coef2 =
2015 Aug 05
0
[PATCH 7/8] Add Neon intrinsics for Silk noise shape feedback loop.
--- silk/NSQ.c | 18 ++------------- silk/NSQ.h | 27 ++++++++++++++++++++++ silk/arm/NSQ_neon.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++++ silk/arm/NSQ_neon.h | 10 ++++++++ 4 files changed, 105 insertions(+), 16 deletions(-) diff --git a/silk/NSQ.c b/silk/NSQ.c index d8513dc..ec81f3b 100644 --- a/silk/NSQ.c +++ b/silk/NSQ.c @@ -205,7 +205,7 @@ void
2015 Nov 21
0
[Aarch64 v2 06/18] Add Neon intrinsics for Silk noise shape feedback loop.
--- silk/NSQ.c | 18 ++------------- silk/NSQ.h | 27 ++++++++++++++++++++++ silk/arm/NSQ_neon.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++++ silk/arm/NSQ_neon.h | 10 ++++++++ 4 files changed, 105 insertions(+), 16 deletions(-) diff --git a/silk/NSQ.c b/silk/NSQ.c index d8513dc..ec81f3b 100644 --- a/silk/NSQ.c +++ b/silk/NSQ.c @@ -205,7 +205,7 @@ void
2015 Nov 23
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
Hi Jonathan. I really, really hate to bring this up this late in the game, but I just noticed that your NEON code doesn't use any of the "high" intrinsics for ARM64, e.g. instead of: int32x4_t coef1 = vmovl_s16(vget_high_s16(coef16)); you could use: int32x4_t coef1 = vmovl_high_s16(coef16); and instead of: int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0));
2015 Nov 21
12
[Aarch64 v2 00/18] Patches to enable Aarch64 (version 2)
As promised, here's a re-send of all my Aarch64 patches, following comments by John Ridges. Note that they actually affect more than just Aarch64 -- other than the ones specifically guarded by AARCH64_NEON defines, the Neon intrinsics all also apply on armv7; and the OPUS_FAST_INT64 patches apply on any 64-bit machine. The patches should largely be independent and independently useful, other
2009 May 11
1
Problems to run SVM regression with e1071
Hi R users, I'm trying to run an SVM regression using the e1071 package, but the function svm() always applies a classification method rather than regression. svm.m1 <- svm(st ~ ., data = train, cost = 1000, gamma = 1e-03) Parameters: SVM-Type: C-classification SVM-Kernel: radial cost: 1000 gamma: 0.001 Number of Support Vectors: 209
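svm() infers its type from the response: a factor gives C-classification, a numeric vector gives eps-regression. So the usual fix is to make sure 'st' is numeric, or to request regression explicitly. A sketch with a synthetic stand-in for the poster's 'train' data:

    library(e1071)

    set.seed(1)
    train <- data.frame(st = rnorm(200), x1 = rnorm(200), x2 = rnorm(200))
    train$st <- train$st + 0.5 * train$x1       # add a little signal

    str(train$st)                               # must be numeric, not factor

    svm.m1 <- svm(st ~ ., data = train,
                  type = "eps-regression",      # force regression explicitly
                  cost = 1000, gamma = 1e-03)

    print(svm.m1)   # SVM-Type should now read eps-regression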
2015 Aug 05
0
[PATCH 6/8] Add Neon intrinsics for Silk noise shape quantization.
--- Makefile.am | 8 +++-- silk/NSQ.c | 37 ++++++++-------------- silk/NSQ.h | 70 +++++++++++++++++++++++++++++++++++++++++ silk/arm/NSQ_neon.c | 64 +++++++++++++++++++++++++++++++++++++ silk/arm/NSQ_neon.h | 91 +++++++++++++++++++++++++++++++++++++++++++++++++++++ silk/x86/NSQ_sse.c | 2 +- silk/x86/main_sse.h | 3 +- silk_headers.mk | 2 ++ silk_sources.mk
2015 Nov 21
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
--- Makefile.am | 5 +-- silk/NSQ.c | 37 ++++++++-------------- silk/NSQ.h | 70 +++++++++++++++++++++++++++++++++++++++++ silk/arm/NSQ_neon.c | 64 +++++++++++++++++++++++++++++++++++++ silk/arm/NSQ_neon.h | 91 +++++++++++++++++++++++++++++++++++++++++++++++++++++ silk/x86/NSQ_sse.c | 2 +- silk/x86/main_sse.h | 3 +- silk_headers.mk | 2 ++ silk_sources.mk
2005 Apr 26
3
Error using e1071 svm: NA/NaN/Inf in foreign function call
Hello, As far as I saw in the mailing list archive, I am not the first person with this problem. However, I was not able to get past this error, since the information I got from the archive is not very conclusive for this case. I have used linear, radial and sigmoid kernels for the same data in the same conditions and everything is ok. This problem just happens with the polynomial kernel. I send the
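One common cause of NA/NaN/Inf with the polynomial kernel (a guess, since the data aren't shown) is numeric overflow when unscaled features are raised to the polynomial degree, or NaNs produced by scaling a constant column. A sketch of the usual checks, with iris standing in for the poster's data:

    library(e1071)

    d <- iris

    # 1. Any missing or non-finite values among the predictors?
    sapply(d[, 1:4], function(col) sum(!is.finite(col)))

    # 2. Any constant columns, which make the default scaling produce NaN?
    sapply(d[, 1:4], function(col) sd(col) == 0)

    # 3. Fit with explicit scaling and moderate kernel parameters.
    fit <- svm(Species ~ ., data = d, kernel = "polynomial",
               degree = 3, gamma = 0.1, coef0 = 1, scale = TRUE)
    print(fit)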