similar to: e1071 - tuning is not giving the best within the range

Displaying 20 results from an estimated 1000 matches similar to: "e1071 - tuning is not giving the best within the range"

2007 Apr 09
1
Could not fit correct values in discriminant analysis by bruto.
Shuji, I suspect that bruto blows up because your data are linearly separable. To see this (if you didn't already know), try library(lattice); splom(~x, groups = y) and look at the first row. If you are trying to do classification, there are a few methods that would choke on this (logistic regression) and a few that won't (trees, SVMs, etc.). I would guess that bruto is in the latter
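A minimal sketch of the check suggested above, assuming x is a data frame of numeric predictors and y is the class factor (the toy data below are placeholders, not the original poster's):

    # Pairwise scatterplots colored by class; clear separation in any panel
    # suggests the classes are linearly separable.
    library(lattice)

    set.seed(1)
    x <- data.frame(x1 = c(rnorm(50, 0), rnorm(50, 5)),
                    x2 = rnorm(100))
    y <- factor(rep(c("a", "b"), each = 50))

    splom(~x, groups = y, auto.key = TRUE)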
2006 Jan 27
1
Classifying Intertwined Spirals
I'm using an SVM as I've seen a paper that reported extremely good results. I'm not having such luck. I'm also interested in ideas for other approaches to the problem that can also be applied to general problems (not assuming that we're looking for spirals). Here is my code: library(mlbench) library(e1071) raw <- mlbench.spirals(194, 2) spiral <-
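A minimal sketch (not the poster's full code) of fitting an RBF-kernel SVM to the two-spirals data from mlbench; the gamma and cost values are illustrative assumptions:

    library(mlbench)
    library(e1071)

    raw <- mlbench.spirals(194, 2)             # 194 points, 2 cycles
    spiral <- data.frame(raw$x, class = raw$classes)

    fit <- svm(class ~ ., data = spiral, kernel = "radial",
               gamma = 10, cost = 100)         # spirals usually need a large gamma
    table(predicted = predict(fit, spiral), true = spiral$class)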
2005 Jun 28
2
svm and scaling input
Dear All, I have a question about scaling the input variables for an analysis with svm (package e1071). Most of my variables are factors with 4 to 6 levels, but there are also some numeric variables. I'm not familiar with the math behind SVMs, so my assumptions may be completely wrong ... or obvious. Will the svm automatically expand the factors into a binary matrix? If I add numeric
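A minimal sketch, on a hypothetical toy data frame, of how the formula interface expands factors: svm() builds a model matrix internally, so factors become dummy columns before any scaling is applied:

    library(e1071)

    dat <- data.frame(f   = factor(sample(letters[1:4], 100, replace = TRUE)),
                      num = rnorm(100),
                      y   = factor(sample(c("yes", "no"), 100, replace = TRUE)))

    # The design matrix svm() works from, with factor f expanded to dummies:
    head(model.matrix(y ~ ., data = dat))

    fit <- svm(y ~ ., data = dat, scale = TRUE)   # scale = TRUE is the default
    fit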
2003 Jun 07
3
Error Compiling e1071
Dear all, I am trying to compile the package e1071 (version 1.3-11) with R CMD INSTALL. I tried with R 1.7.0 on Red Hat Linux 2.4.7-10 and R 1.6.2 on Linux 2.4.9-34smp but keep getting the same error message during configure: WARNING: g++ 2.96 cannot reliably be used with this package. Please use a different compiler. Can anyone help me with this or at least point me in the right direction?
2010 Oct 25
1
online course: SVM in R with Lutz Hamel at statistics.com
Support vector machines (SVMs) have established themselves as one of the preeminent machine learning models for classification and regression over the past decade or so, frequently outperforming artificial neural networks in tasks such as text mining and bioinformatics. Dr. Lutz Hamel, author of "Knowledge Discovery with Support Vector Machines" from Wiley, will present his online course
2015 Dec 10
3
SVM hadoop
Dear all, I once read something at the following link, but I never used it. http://blog.revolutionanalytics.com/2015/06/using-hadoop-with-r-it-depends.html Javier Rubén Marcuzzi From: Carlos J. Gil Bellosta Sent: Wednesday, 9 December 2015, 14:33 To: MªLuz Morales CC: r-help-es Subject: Re: [R-es] SVM hadoop No, they will not run in parallel if you use the SVMs from packages such as e1071. No
2011 Jan 07
2
Stepwise SVM Variable selection
I have a data set with about 30,000 training cases and 103 variables. I've trained an SVM (using the e1071 package) for a binary classifier {0,1}. The accuracy isn't great. I used a grid search over the C and gamma parameters with an RBF kernel to find the best settings. I remember that for least squares, R has a nice stepwise function that will try combining subsets of variables to find
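A minimal sketch of the grid search described above, using e1071::tune.svm over cost and gamma for an RBF kernel; iris stands in for the poster's 30,000-row data set, and the grid values are illustrative:

    library(e1071)

    tuned <- tune.svm(Species ~ ., data = iris,
                      kernel = "radial",
                      cost   = 10^(-1:3),
                      gamma  = 10^(-4:0))
    summary(tuned)
    tuned$best.parameters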
2012 Nov 02
1
An idea: Extend mclapply's mc.set.seed with an initial seed value?
Hello, I have been thinking that sometimes users may want each process to initialize its random seed with a specific value rather than the current seed. This could be keyed off whether mc.set.seed is logical, preserving the current behaviour, or numerical, using the value in a call to set.seed. Does this make sense? If you wonder how I came up with the idea: I spent a couple of hours
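A minimal sketch of the behaviour being proposed, written as a user-level wrapper around parallel::mclapply rather than a change to mclapply itself; the wrapper name and its numeric mc.set.seed handling are hypothetical:

    library(parallel)

    mclapply_seeded <- function(X, FUN, ..., mc.set.seed = TRUE, mc.cores = 2L) {
      if (is.numeric(mc.set.seed)) {
        seed <- mc.set.seed
        # Seed each forked job deterministically from the supplied value.
        wrapped <- function(i) { set.seed(seed + i); FUN(X[[i]], ...) }
        mclapply(seq_along(X), wrapped, mc.cores = mc.cores)
      } else {
        mclapply(X, FUN, ..., mc.set.seed = mc.set.seed, mc.cores = mc.cores)
      }
    }

    # Each element now gets a reproducible stream derived from the initial seed.
    mclapply_seeded(1:4, function(x) rnorm(1), mc.set.seed = 42)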
2003 Jan 31
1
svm regression in R
Hello, I have a question concerning SVM regression in R. I intend to use SVMs for feature selection (and knowledge discovery). For this purpose I will need to extract the weights that are associated with my features. I understand from a previous thread on SVM classification that predictive models can be derived from the SVs, coefficients and rhos, but it is unclear to me how to transfer this
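A minimal sketch of extracting per-feature weights from a linear-kernel SVM regression fitted with e1071, along the lines described above: for a linear kernel the weight vector is t(coefs) %*% SV and the offset is -rho (the toy data are placeholders):

    library(e1071)

    set.seed(1)
    x <- matrix(rnorm(200), ncol = 4, dimnames = list(NULL, paste0("f", 1:4)))
    y <- 2 * x[, 1] - 1.5 * x[, 3] + rnorm(50, sd = 0.1)

    fit <- svm(x, y, type = "eps-regression", kernel = "linear", scale = FALSE)

    w <- t(fit$coefs) %*% fit$SV   # per-feature weights, usable for ranking features
    b <- -fit$rho
    w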
2005 Apr 04
3
Error in save.image(): image could not be renamed
Hello, I am doing intensive tests on SVM parameter selection. Once in a while I get the error: Error in save.image(): image could not be renamed and is left in .RDataTmp1 I cannot use the information saved in .RDataTmp1. When that happens I lose several hours of tests. It usually happens when the computer is locked, i.e., there are no other relevant processes running. I can do tests and get
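A minimal sketch of a workaround (an assumption, not something proposed in the original thread): write each result to its own explicitly named file during a long parameter sweep, so a failed save.image() cannot wipe out hours of work:

    library(e1071)

    costs <- 10^(0:3)
    for (i in seq_along(costs)) {
      fit <- svm(Species ~ ., data = iris, cost = costs[i])
      saveRDS(fit, file = sprintf("svm_cost_%03d.rds", i))   # one file per setting
    }

    # Results can be reloaded later with readRDS("svm_cost_001.rds"), etc.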
2015 Dec 10
2
SVM hadoop
Hi, You can set up RStudio on Amazon, install "caret" and off you go.... I don't know whether what Amazon can offer you will be enough for your problem... I think it will... ;-).... Or do it directly here, where they already have this whole setup in place: http://www.teraproc.com/front-page-posts/r-on-demand/ Thanks, Carlos. On 10 December 2015 at 14:43, MªLuz Morales <mlzmrls
2015 Dec 09
2
SVM hadoop
Good morning, does anyone know if there is a way to implement a support vector machine (SVM) with R-hadoop? My interest is in doing big data processing with SVM. I know that in R there are the packages {RtextTools} and {e1071} that allow you to run SVMs. But I am not sure whether the algorithm is parallelizable, that is, whether it can run in parallel on the R-hadoop platform. Many
2011 Sep 02
2
misclassification rate
Hi users, I'm a student who is struggling with basic R programming. Would you please help me with this problem? "My English is bad"; I hope that my question is clear: I have a matrix in which there are two columns (yp, yt). yp: predicted values from my model. yt: true values (my dependent variable y is categorical, with 3 levels (0, 1, 2)). I don't know how to proceed to calculate the
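A minimal sketch of computing a misclassification rate from two columns of predicted (yp) and true (yt) labels; the matrix m below is a hypothetical stand-in for the poster's data:

    m <- cbind(yp = sample(0:2, 100, replace = TRUE),
               yt = sample(0:2, 100, replace = TRUE))

    # Confusion matrix and overall misclassification rate
    conf <- table(predicted = m[, "yp"], true = m[, "yt"])
    conf
    mean(m[, "yp"] != m[, "yt"])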
2005 Jun 23
1
errorest
Hi, I am using the errorest function from the ipred package. I am hoping to perform "bootstrap 0.632+" and "bootstrap leave one out". According to the manual page for errorest, I use the following command: ce632[i] <- errorest(ytrain ~ ., data=mydata, model=lda, estimator=c("boot","632plus"), predict=mypredict.lda)$error It didn't work. I then tried the
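A minimal sketch of calling ipred::errorest with the 632+ bootstrap and an lda wrapper; iris is used for illustration rather than the poster's data, and only one estimator is requested per call:

    library(ipred)
    library(MASS)

    # lda's predict() returns a list, so errorest needs a wrapper that
    # returns the predicted classes only.
    mypredict.lda <- function(object, newdata)
      predict(object, newdata = newdata)$class

    errorest(Species ~ ., data = iris, model = lda,
             estimator = "632plus", predict = mypredict.lda)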
2010 Apr 30
1
how is xerror calculated in rpart?
Hi, I've searched online, in a few books, and in the archives, but haven't seen this. I believe that xerror is scaled to the rel error at the first split. After fitting an rpart object, is it possible with a little math to determine the percentage of true classifications represented by an xerror value? -seth
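A minimal sketch of the conversion asked about, using the kyphosis example data: xerror (like rel error) is scaled so the root node has error 1.0, so multiplying by the root misclassification rate gives an absolute cross-validated error rate:

    library(rpart)

    fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, method = "class")

    root_error <- 1 - max(prop.table(table(kyphosis$Kyphosis)))  # root (majority-class) error
    cp <- fit$cptable
    abs_xerror <- cp[, "xerror"] * root_error                    # absolute CV error per split
    cbind(cp, abs.xerror = abs_xerror, pct.correct = 1 - abs_xerror)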
2010 Nov 22
1
using rpart with a tree misclassification condition
Hello, I want to build a classification tree for a binary response variable, where the condition for the final tree should be: the total misclassification for each group (zero or one) will be less than 10%. For example: if I have 100 observations in the root, 90 from group 0 and 10 from group 1, I want that in the final tree a maximum of 9 and 1 observations out of groups 0 and 1, respectively,
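A minimal sketch (an assumption, not a built-in rpart feature) of checking the per-class misclassification rates of a fitted tree, which is the condition the final tree would need to satisfy; kyphosis stands in for the real data:

    library(rpart)

    fit  <- rpart(Kyphosis ~ ., data = kyphosis, method = "class")
    pred <- predict(fit, type = "class")

    # Per-class error rate: proportion of each true class that is misclassified
    per_class_error <- 1 - diag(prop.table(table(kyphosis$Kyphosis, pred), margin = 1))
    per_class_error < 0.10    # the 10% condition, checked per group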
2012 Aug 02
1
Naive Bayes in R
I'm developing a naive Bayes classifier in R. I have the following data and am trying to predict on returned (the class). dat = data.frame(home=c(0,1,1,0,0), gender=c("M","M","F","M","F"), returned=c(0,0,1,1,0)) str(dat) dat$home <- as.factor(dat$home) dat$returned <- as.factor(dat$returned) library(e1071) m <- naiveBayes(returned ~ ., dat) m
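A minimal sketch completing the workflow above with a prediction step; naiveBayes() treats numeric predictors as Gaussian, which is why home and returned are converted to factors first:

    library(e1071)

    dat <- data.frame(home     = factor(c(0, 1, 1, 0, 0)),
                      gender   = factor(c("M", "M", "F", "M", "F")),
                      returned = factor(c(0, 0, 1, 1, 0)))

    m <- naiveBayes(returned ~ ., data = dat)
    predict(m, dat)                    # predicted class labels
    predict(m, dat, type = "raw")      # posterior probabilities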
2008 Feb 24
1
what missed ----- CART
Hi all, Can anyone who is familiar with CART tell me what I missed in my tree code? library(MASS) myfit <- tree(y ~ x1 + x2 + x3 + x4) # tree.screens() # useless plot(myfit); text(myfit, all=TRUE, cex=0.5, pretty=0) # tile.tree(myfit, fgl$type) # useless # close.screen(all=TRUE) # useless My current tree plot resulting from the above code shows as:
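A minimal sketch of a working version, as an assumption about what was intended: tree() lives in the tree package (not MASS), and the fgl data referenced in the commented-out lines comes from MASS:

    library(MASS)    # provides the fgl data set
    library(tree)    # provides tree(), plot.tree(), text.tree()

    myfit <- tree(type ~ ., data = fgl)
    plot(myfit)
    text(myfit, all = TRUE, cex = 0.5, pretty = 0)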
2008 Apr 09
0
How do I get the parameters out of e1071's svm?
Hi all, I'm trying to get a simple, linear decision surface from e1071's svm. I've run it like this: svm(as.factor(slow) ~ SLICE.3 + PSGR.7 + SOLUTIONS.6 + DR.10, y, kernel='linear', cost=1e6, class.weights=c('FALSE'=1, 'TRUE'=10)) According to the docs, kernel='linear' has a kernel u'v. Since I have 4 independent variables, I'd expect to
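A minimal sketch of pulling the linear decision surface out of a fitted e1071 svm with kernel='linear'; with 4 predictors this yields 4 weights plus an intercept. The model below uses iris for illustration, not the poster's data:

    library(e1071)

    d <- subset(iris, Species != "setosa")
    d$Species <- droplevels(d$Species)
    fit <- svm(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
               data = d, kernel = "linear", cost = 1, scale = FALSE)

    w <- t(fit$coefs) %*% fit$SV   # one weight per predictor
    b <- -fit$rho                  # intercept; decision value is w %*% x + b
    w
    b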
2015 Dec 11
2
SVM hadoop
Hi Mª Luz, Let me share my view: First of all, it is important to be clear about what exactly I want to do in parallel; three scenarios come to mind: (1) Apply a model, in this case an SVM, to very large data, and for that I need hadoop/spark. (2) Fit many SVM models on small data sets (for example, one per user), and for that I need hadoop/spark to parallelize these processes
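A minimal sketch of scenario (2) above done with base R parallelism on a single machine rather than hadoop/spark: fit one SVM per user; the toy data frame below is a hypothetical stand-in for the real per-user data:

    library(e1071)
    library(parallel)

    set.seed(1)
    d <- data.frame(user = rep(1:20, each = 50),
                    x1 = rnorm(1000), x2 = rnorm(1000),
                    y  = factor(sample(c("a", "b"), 1000, replace = TRUE)))

    fits <- mclapply(split(d, d$user),
                     function(du) svm(y ~ x1 + x2, data = du),
                     mc.cores = 2)
    length(fits)   # one fitted model per user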