similar to: best.svm

Displaying 20 results from an estimated 300 matches similar to: "best.svm"

2006 Dec 11
1
cohen kappa for two-way table
Greetings, I am a bit confused by the results returned by the functions: cohen.kappa {concord} classAgreement {e1071} when using a two-way table. For example, if I have a matrix A, and a similar matrix B (same dimensions), then: matrices A and B can be found at: http://casoilresource.lawr.ucdavis.edu/drupal/files/a_40.txt http://casoilresource.lawr.ucdavis.edu/drupal/files/b_40.txt A <-
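Not the poster's data, but a minimal sketch of getting kappa from a two-way table with e1071; the label vectors below are simulated, and the concord::cohen.kappa() comparison is left out because that package is archived on CRAN and its exact interface may differ.

library(e1071)
set.seed(1)
a <- factor(sample(letters[1:4], 200, replace = TRUE))   # classification 1
b <- factor(sample(letters[1:4], 200, replace = TRUE))   # classification 2
tab <- table(a, b)                                       # two-way table
classAgreement(tab)$kappa                                # Cohen's kappa from e1071
classAgreement(tab)$crand                                # corrected Rand, for reference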
2005 May 27
1
logistic regression
Hi, I am working on corpora of automatically recognized utterances, looking for features that predict error in the hypothesis the recognizer is proposing. I am using the glm functions to do logistic regression. I do this type of thing: logistic.model = glm(formula = similarity ~ ., family = binomial, data = data) and end up with a model: > summary(logistic.model) Call:
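A hedged, self-contained sketch of the kind of call described above; `similarity` and the predictors here are simulated stand-ins for the poster's corpus features.

set.seed(1)
d <- data.frame(similarity = rbinom(100, 1, 0.5),   # 0/1 outcome
                x1 = rnorm(100), x2 = rnorm(100))   # stand-in features
logistic.model <- glm(similarity ~ ., family = binomial, data = d)
summary(logistic.model)    # coefficients, deviances, AIC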
2008 Mar 02
2
listing components of an object
Is there a method to list the components of an object, instead of looking at the help for that method? Let me be clearer with an example: data(iris) ## tune `svm' for classification with RBF-kernel (default in svm), ## using one split for training/validation set obj <- tune(svm, Species~., data = iris, ranges = list(gamma = 2^(-1:1), cost = 2^(2:4)),
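For questions like this, names() and str() list an object's components generically; a sketch using the tune() call quoted above (e1071 assumed installed, default cross-validation instead of the truncated tunecontrol):

library(e1071)
data(iris)
obj <- tune(svm, Species ~ ., data = iris,
            ranges = list(gamma = 2^(-1:1), cost = 2^(2:4)))
names(obj)               # component names, e.g. "best.parameters", "performances"
str(obj, max.level = 1)  # structure of each component
obj$best.parameters      # extract a single component with $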
2005 Oct 06
1
how to use tune.knn() for dataset with missing values
Hi everybody, I again have a problem using tune.knn(); it's giving an error saying missing values are not allowed... Again, here is the script for the BreastCancer data: library(e1071) library(mda) trdata <- data.frame(train, row.names=NULL) attach(trdata) xtr <- subset(trdata, select = -Class) ytr <- Class bestpara <- tune.knn(xtr, ytr, k = 1:25, tunecontrol = tune.control(sampling
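One common workaround, sketched below under the assumption that the NAs sit in the predictors: drop (or impute) the incomplete rows before handing the data to tune.knn(). `trdata` and `Class` are as in the post; the sampling = "boot" value is a guess, since the original call is cut off.

library(e1071)
trdata <- na.omit(trdata)                  # drop rows containing missing values
xtr <- subset(trdata, select = -Class)
ytr <- trdata$Class
bestpara <- tune.knn(xtr, ytr, k = 1:25,
                     tunecontrol = tune.control(sampling = "boot"))
summary(bestpara)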
2006 Mar 25
2
R gets slow
Hello, I have R as a socket server that computes R code sent by some scripts (the clients). These scripts send R code to generate models (SVM). The problem is that the first models are generated in less than one second, and one hour later the same models are generated in more than ten seconds (even training with the same data). If I restart the server, then it works well (fast). I don't know if I have
2012 Aug 19
1
e1071 - tuning is not giving the best within the range
Hi everybody, I am new to e1071 and to SVMs. I am trying to understand the performance of SVMs, but I am facing a situation that does not seem meaningful to me. I added the R code so you can see what I have done. set.seed(1234) data <- data.frame( rbind(matrix(rnorm(1500, mean = 10, sd = 5),ncol = 10), matrix(rnorm(1500, mean = 5, sd = 5),ncol = 10))) class <- as.factor(rep(1:2,
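A sketch of one way to see whether the reported optimum is genuine or just sitting on the edge of the searched grid: look at the full $performances table from tune.svm(), not only $best.parameters. The data is regenerated from the truncated snippet above, and the class vector (150 cases per group) is my reconstruction, not the poster's line.

library(e1071)
set.seed(1234)
data  <- data.frame(rbind(matrix(rnorm(1500, mean = 10, sd = 5), ncol = 10),
                          matrix(rnorm(1500, mean = 5,  sd = 5), ncol = 10)))
class <- as.factor(rep(1:2, each = 150))      # reconstructed; the original line is cut off
tuned <- tune.svm(data, class, gamma = 10^(-3:1), cost = 10^(-1:2))
tuned$best.parameters      # the single reported optimum
tuned$performances         # error for every gamma/cost combination searched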
2005 Apr 27
1
making table() work
I am trying to do some verification across a large dataset, cuData, that has 23 columns. Column 23 (similarity) is the outcome 0 or 1 and the other columns are the features. I do this: verificationglm.model <- glm(formula = similarity ~ ., family=binomial, data=cuData[1:1000,]) and produce the model: > summary(verificationglm.model) Call: glm(formula = similarity ~ ., family =
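A hedged sketch of the verification step that usually follows such a fit: predict on rows not used for fitting and cross-tabulate predicted against observed. `cuData` and the fitted model are as in the post; the 1001:2000 slice and the 0.5 cut-off are assumptions of mine.

pred.prob  <- predict(verificationglm.model,
                      newdata = cuData[1001:2000, ], type = "response")
pred.class <- ifelse(pred.prob > 0.5, 1, 0)             # threshold the fitted probabilities
table(predicted = pred.class, observed = cuData$similarity[1001:2000])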
2010 May 14
0
bootstrapping an svm
Hello, I am playing around trying to bootstrap an svm model using a training set and a test set. I've written another function, auc, which I call here and am bootstrapping. I did this successfully with logistic regression, but I am getting an error from the starred ** line, which I located with print statements. How do I tune an svm in a bootstrap? I can't find sample code
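One hedged way to tune an SVM inside each bootstrap replicate: use boot() from the boot package with e1071::tune.svm() inside the statistic function. Here iris and plain accuracy stand in for the poster's data and auc() function.

library(boot)     # bootstrap machinery
library(e1071)    # tune.svm / svm
svm.stat <- function(data, idx) {
  d     <- data[idx, ]                              # one bootstrap sample
  tuned <- tune.svm(Species ~ ., data = d,
                    gamma = 2^(-2:0), cost = 2^(0:2))
  mean(predict(tuned$best.model, data) == data$Species)   # accuracy of the tuned model
}
set.seed(1)
boot(iris, svm.stat, R = 20)   # only 20 replicates to keep the sketch quick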
2006 Jan 08
1
Clustering and Rand Index - VS-KM
Dear WizaRds, I have been trying to compute the adjusted Rand index as defined by Hubert/Arabie, and could not work out how to define a partition object, as in my request yesterday. With the package fpc I try to work around the problem, using my original data: mat <- matrix( c(6,7,8,2,3,4,12,14,14, 14,15,13,3,1,2,3,4,2, 15,3,10,5,11,7,13,6,1, 15,4,10,6,12,8,12,7,1), ncol=9, byrow=T )
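For reference, the Hubert/Arabie adjusted (corrected) Rand index can be computed directly from a cross-table of two label vectors with e1071::classAgreement(), without building any special partition object; the two partitions below are made up.

library(e1071)
p1 <- c(1, 1, 2, 2, 3, 3, 3, 1)         # labels from clustering 1
p2 <- c(1, 1, 2, 3, 3, 3, 2, 1)         # labels from clustering 2
classAgreement(table(p1, p2))$crand     # adjusted Rand index (Hubert/Arabie)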
2011 May 28
0
how to train ksvm with spectral kernel (kernlab) in caret?
Hello all, I would like to use the train function from the caret package to train an SVM with a spectral kernel from the kernlab package. Sadly, an SVM with a spectral kernel is not among the many methods in caret... using caret to train svmRadial: ------------------ library(caret) library(kernlab) data(iris) TrainData <- iris[,1:4] TrainClasses <- iris[,5] set.seed(2)
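Outside caret, kernlab's spectrum string kernel can at least be used with ksvm() directly; a rough sketch on made-up toy strings, not a drop-in replacement for the caret::train() workflow, and the tiny data only shows the call.

library(kernlab)
x <- list("AACGT", "AAGGT", "AACGG", "TTCGA", "TTGGA", "TTCGT")  # toy strings
y <- factor(c("a", "a", "a", "b", "b", "b"))                     # toy labels
sk  <- stringdot(type = "spectrum", length = 2)  # spectrum ("spectral") string kernel
fit <- ksvm(x, y, kernel = sk, C = 1)
predict(fit, x)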
2004 Mar 23
4
statistical significance test for cluster agreement
I was wondering whether there is a way to have a statistical significance test for cluster agreement. I know that I can use the classAgreement() function to get the Rand index, which will give me some indication of whether the clusters agree or not, but it would be interesting to have a formal test. Thanks.
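There may be no canned test, but an informal permutation test is easy to sketch: permute one of the labellings many times and compare the observed corrected Rand index with the permutation distribution. The data below is purely illustrative.

library(e1071)
set.seed(1)
p1 <- sample(1:3, 100, replace = TRUE)                                # clustering 1
p2 <- ifelse(runif(100) < 0.7, p1, sample(1:3, 100, replace = TRUE))  # clustering 2, partly agreeing
obs  <- classAgreement(table(p1, p2))$crand                           # observed corrected Rand
null <- replicate(999, classAgreement(table(p1, sample(p2)))$crand)   # Rand under permuted labels
mean(c(obs, null) >= obs)                                             # approximate permutation p-value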
2011 Apr 12
0
Help required
Hi Sadaf, Out of curiosity, what sorts of things have you tried to fix this? For example, after playing around with this a bit, if I remove your "eps" parameter from your `ranges` list, it works. Perhaps you should try tweaking the values you pick for your parameters. You don't even have to put it in the `tune` function to get an idea of the ranges you should use: R>
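A sketch of what a working call might look like, assuming the problem was the name used in `ranges`: svm() has no `eps` argument, its regression tolerance is called `epsilon`, so the ranges list should use the argument names svm() actually accepts. The data below is made up.

library(e1071)
set.seed(1)
d <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))
obj <- tune(svm, y ~ ., data = d,
            ranges = list(cost = 2^(0:3), epsilon = c(0.05, 0.1, 0.2)))
obj$best.parameters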
2001 Oct 04
0
new version of e1071 on CRAN
A new version of e1071 has been released to CRAN which should be much easier to install on a lot of platforms because reading/writing PNM images has been moved to the pixmap package, hence there are no longer dependencies on external libraries and no configure mechanism. For the authors, Fritz Leisch ********************************************************** Changes in Version 1.2-0: o
2009 Mar 04
0
Error in -class : invalid argument to unary operator
Hi guys, I have been using R for a few months now and have come across an error that I have been trying to fix for a week or so now. I am trying to build a classifier that will classify the wine dataset using Naive Bayes. My code is as follows: library(e1071) wine <- read.csv("C:\\Rproject\\Wine\\wine.csv") split <- sample(nrow(wine), floor(nrow(wine) * 0.5)) wine_training <-
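A hedged sketch of how such a split is usually completed; the path and `split` come from the post, while the `Class` column name is an assumption, and negating the numeric `split` index (not a column) is the usual way to take the complement.

library(e1071)
wine  <- read.csv("C:\\Rproject\\Wine\\wine.csv")      # path as in the post
split <- sample(nrow(wine), floor(nrow(wine) * 0.5))
wine_training <- wine[ split, ]       # rows drawn into the split
wine_test     <- wine[-split, ]       # complement: negate the numeric row index
model <- naiveBayes(as.factor(Class) ~ ., data = wine_training)   # `Class` assumed
table(predict(model, wine_test), wine_test$Class)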
2005 Aug 03
0
Asterisk on FreeBSD-5.4 RELEASE : H323 audio problem
Dear All, I have installed Asterisk-1.0.6 on FreeBSD-5.4 RELEASE via the port, so the chan_h323.so module is already included. When I try the SIP channel, Asterisk works fine. In this case, I use the XLite softphone as the client, and I can hear the voice transferred clearly. Asterisk is good!! :D The problem is when I try the H323 channel: the voice cannot be transferred. I have
2006 Mar 25
1
There were 25 warnings (use warnings() to see them)
I am trying to use bagging like this: > bag.model <- bagging(as.factor(nextDay) ~ ., data = spi[1:1250,]) > pred = predict(bag.model, spi[1251:13500,-9]) There were 25 warnings (use warnings() to see them) > t = table(pred, spi[1251:13500,9]) > t pred 0 1 0 42 40 1 12 22 > classAgreement(t) but I get the warnings. The warnings run like this: >
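One thing worth ruling out before digging into the warnings themselves (a sketch; `spi` is the poster's data and is not shown): whether the 1251:13500 index actually stays inside the data, since indexing a data frame past its last row yields NA rows that can trigger warnings at predict time.

nrow(spi)                                    # does the data really have 13500 rows?
test.idx <- 1251:min(13500, nrow(spi))       # clamp the test range to the data
pred <- predict(bag.model, spi[test.idx, -9])
t <- table(pred, spi[test.idx, 9])
classAgreement(t)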
2006 Jun 25
5
FW: Asterisk Quintum A800 SIP Mode
Hello, I got a Quintum A800 with the P5-2-1 firmware. I configure my Asterisk trunk as follows: [SIP_BD1] type=peer qualify=yes host=192.168.0.254 disallow=all context=from-pstn allow=h723 and inside the Quintum I change the config sip useragent to 5060. Up to this point, if I run sip show peers, I get: asterisk1*CLI> sip show peers Name/username Host Dyn Nat ACL
2010 Sep 24
0
kernlab:ksvm:eps-svr: bug?
Hi, A. In a nutshell: The training error, obtained as "error(ret)", from the return value of a ksvm() call for an eps-svr model is (likely) being computed wrongly. "nu-svr" and "eps-bsvr" suffer from this as well. I am attaching three files: (1) ksvm.R from the kernlab package, un-edited, (2) ksvm_eps-svr.txt: (for easier reading) containing only eps-svr
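A sketch of the kind of cross-check the post describes: fit an eps-svr model on toy data and compare the value reported by kernlab's error() accessor with a training MSE computed by hand from fitted().

library(kernlab)
set.seed(1)
x <- matrix(rnorm(200), ncol = 2)
y <- x[, 1] + 0.1 * rnorm(100)
fit <- ksvm(x, y, type = "eps-svr", kernel = "rbfdot", C = 1)
error(fit)                    # training error as reported by kernlab
mean((fitted(fit) - y)^2)     # mean squared training error computed directly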
2013 Jan 08
0
bagging SVM Ensemble
Dear Sir, I have a problem with my program. I would like to classify my data using a bagging support vector machine ensemble. I split my data into training data and test data. For a given data set TR(X), K replicated training data sets are first randomly generated by the bootstrapping technique with replacement. Next, a Support Vector Machine (SVM) is applied to each bootstrap data set. Finally, the
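A minimal sketch of the scheme described above, with iris standing in for TR(X) and the test data, K = 25 bootstrap replicates, and a majority vote over the K predictions.

library(e1071)
set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
test  <- iris[-idx, ]
K <- 25
votes <- sapply(seq_len(K), function(k) {
  b   <- train[sample(nrow(train), replace = TRUE), ]   # bootstrap replicate
  fit <- svm(Species ~ ., data = b)                     # one SVM per replicate
  as.character(predict(fit, test))
})
pred <- apply(votes, 1, function(v) names(which.max(table(v))))  # majority vote per test case
mean(pred == test$Species)                                       # ensemble accuracy on the test set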
2005 Oct 25
1
selecting every nth item in the data
I want to make a glm and then use predict. I have a fairly small sample (4000 cases), and I want to train on 90% and test on 10%, but I want to do it in slices so that I test on every 10th case and train on the others. Is there some simple way to get these elements? Stephen -- 21/10/2005
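A sketch of one simple way to do this with seq(); `mydata` and the outcome column `y` are hypothetical stand-ins for the poster's 4000-case data frame, and changing the starting value of seq() gives the other nine slices.

test.idx <- seq(10, nrow(mydata), by = 10)   # every 10th case -> ~10% test slice
train <- mydata[-test.idx, ]                 # remaining ~90% for fitting
test  <- mydata[ test.idx, ]
fit  <- glm(y ~ ., family = binomial, data = train)
pred <- predict(fit, newdata = test, type = "response")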