Displaying 20 results from an estimated 3000 matches similar to: "Confusion matrix from cross validation in R:"
2007 Oct 23
1
Compute R2 and Q2 in PLS with pls.pcr package
Dear list
I am using the mvr function of the pls.pcr package to compute a PLS
regression using an X matrix of gene expression variables and a Y matrix
of medical variables.
I would like to obtain the R2 (sum of squares captured by the model) and
Q2 (proportion of total sum of squares captured in leave-one-out cross
validation) of the model.
I am not sure if there are specific slots in the
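[Editor's note: pls.pcr has since been superseded by the pls package, which exposes both quantities directly. A minimal sketch, assuming X and Y are the numeric matrices described above:]
library(pls)
fit <- plsr(Y ~ X, ncomp = 10, validation = "LOO")  # mvr()-based PLS fit with leave-one-out CV
R2(fit, estimate = "train")  # R2: variance explained on the training data
R2(fit, estimate = "CV")     # Q2: R2 of the leave-one-out cross-validated predictions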
2010 Aug 16
0
Help for using nnet in R for NN training and testing
Hello,
I want to use the nnet package in R to train and simulate an NN and get the
value of the MSE.
I am reading in a file which has 19 input variables and one output variable,
with a total of 2000 observations. The first column in the file just gives
the serial numbers of the observations.
I have already read in the file and also extracted the different values into
the matrices to
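[Editor's note: a minimal sketch with nnet, assuming the 2000 rows are in a data frame dat whose serial-number column has been dropped and whose output column is named y; the train/test split below is arbitrary:]
library(nnet)
set.seed(1)
train <- sample(nrow(dat), 1500)                # hypothetical 1500/500 split
fit   <- nnet(y ~ ., data = dat[train, ], size = 5, linout = TRUE, maxit = 500)
pred  <- predict(fit, dat[-train, ])
mse   <- mean((dat$y[-train] - pred)^2)         # mean squared error on the held-out rows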
2004 Nov 15
0
how to obtain predicted labels for test data using "kerne lpls"
You need to do some extra work if you want to do classification with a
regression method. One simple way to do classification with PLS is to code
the classes as 0s and 1s (assuming there are only two classes) or -1s and
1s, fit the model, then threshold the prediction; e.g., those with predicted
values < 0.5 (in the 0/1 coding) get labeled as 0s. There's a predict()
method for mvr
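[Editor's note: a minimal sketch of that recipe with the current pls package (whose plsr()/mvr() uses kernelpls by default), assuming a two-level factor y, a training matrix X and a test matrix Xtest:]
library(pls)
dat  <- data.frame(y01 = as.numeric(y == levels(y)[2]), X = I(X))   # code the classes as 0/1
fit  <- plsr(y01 ~ X, ncomp = 5, data = dat)
pred <- drop(predict(fit, newdata = data.frame(X = I(Xtest)), ncomp = 5))
lab  <- ifelse(pred < 0.5, levels(y)[1], levels(y)[2])               # threshold at 0.5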
2004 Nov 15
0
how to obtain predicted labels for test data using "kernelpls"
Dear members,
My name is Seungho Huh. I am a statistician trying to use the Kernel
PLS method in a classification problem. I am writing to ask
you something about the "kernelpls" function in R (pls.pcr package).
I would like to obtain the predicted Y values for test data, using the
Kernel PLS method. Let's take the example in the R help:
> data(NIR)
>
2011 May 24
1
seeking help on using LARS package
Hi,
I am writing to seek some guidance on using Lasso regression with the
R package LARS. I have an introductory statistics background but I am trying to
learn more. Right now I am trying to duplicate the results in a paper on
shRNA prediction, "An accurate and interpretable model for siRNA efficacy
prediction", Jean-Philippe Vert et al., Bioinformatics, for a Bioinformatics
project
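[Editor's note: a minimal lars sketch, assuming a numeric feature matrix x and a numeric response y (e.g. measured efficacies); the feature construction from the paper is not reproduced here:]
library(lars)
fit <- lars(x, y, type = "lasso")
plot(fit)                                      # coefficient paths as the penalty is relaxed
cv  <- cv.lars(x, y, K = 10, type = "lasso")   # 10-fold cross-validation along the path
s   <- cv$index[which.min(cv$cv)]              # |beta|/max|beta| with the lowest CV error
coef(fit, s = s, mode = "fraction")            # coefficients at that point on the path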
2007 Jan 22
0
Recursive-SVM (R-SVM)
I am trying to implement a simple R-SVM example using the iris data (only two of the classes are taken and the data are within the code). I am running into some errors. I am not an expert on SVMs. If anyone has used it, I would appreciate their help. I am appending the code below.
Thanks../Murli
#######################################################
### R-code for R-SVM
### use leave-one-out
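[Editor's note: the R-SVM script itself is not reproduced in this excerpt, but the leave-one-out loop it is built around looks roughly like this with e1071 on two of the iris classes; a sketch, not the original code:]
library(e1071)
dat <- droplevels(subset(iris, Species != "setosa"))        # keep two classes only
loo <- sapply(seq_len(nrow(dat)), function(i) {
  fit <- svm(Species ~ ., data = dat[-i, ], kernel = "linear")
  predict(fit, dat[i, , drop = FALSE]) != dat$Species[i]    # TRUE if the left-out row is misclassified
})
mean(loo)                                                   # leave-one-out error rate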
2012 Sep 13
0
I need help for svm package kernlab in R
I use the SVM package kernlab. I have two questions.
In R
library(kernlab)
m=ksvm(xtrain,ytrain,type="C-svc",kernel=custom function, C=10)
alpha(m)
alphaindex(m)
I can get alpha value and alpha index about package.
1.
Suppose the number of samples is 20 and the number of support vectors is 15.
Are the alphas of the remaining 5 samples 0?
2. I want to use kernelMatrix:
xtrain=as.matrix(xtrain)
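[Editor's note: on question 1, yes — observations that are not support vectors have alpha = 0, and alpha(m)/alphaindex(m) only report the non-zero ones. For question 2, a sketch of the precomputed-kernel route, with xtrain/ytrain/xtest as in the post and rbfdot() standing in for the custom kernel:]
library(kernlab)
k     <- rbfdot(sigma = 0.05)                  # any function of class "kernel" works here
K     <- kernelMatrix(k, xtrain)               # n x n Gram matrix of the training rows
m     <- ksvm(as.kernelMatrix(K), ytrain, type = "C-svc", C = 10)
Ktest <- kernelMatrix(k, xtest, xtrain)        # test rows against training rows
predict(m, as.kernelMatrix(Ktest[, SVindex(m), drop = FALSE]))   # keep only the SV columns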
2018 Feb 19
0
questions regarding the svmpath package (functions svmpath and predict)
Hello,
I have two questions.
The svmpath package provides a svmpath function:
---
fit <- svmpath(xtrain, ytrain, kernel.function = radial.kernel, param.kernel = 0.8)
---
1) How do I get the optimal lambda value out of this result?
The svmpath package also provides a predict function:
---
ytest <- predict(fit, xtest)
---
2) How do I get a score (or a probability of belonging to one of the two
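[Editor's note: a sketch of both steps, assuming a held-out validation pair xvalid/yvalid and labels coded as -1/+1; the lambda value passed in the first predict() call is otherwise arbitrary:]
library(svmpath)
fit    <- svmpath(xtrain, ytrain, kernel.function = radial.kernel, param.kernel = 0.8)
scores <- predict(fit, xtest, lambda = 1, type = "function")   # decision values rather than hard labels
err    <- sapply(fit$lambda, function(l)                       # validation error at each breakpoint
  mean(predict(fit, xvalid, lambda = l, type = "class") != yvalid))
best   <- fit$lambda[which.min(err)]                           # lambda with the lowest validation error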
2008 Feb 05
0
Uninformative error msgs w/ svm.default - Error in svm.default ... y must be a vector or a factor -
Hello,
I'm using the recursive SVM script (R-SVM - http://www.stanford.edu/group/wonglab/RSVMpage/R-SVM.html ) on some microarray data. The input data are log2 values, as a numeric matrix with attributes --
str(svm_num_mat)
num [1:10, 1:12340] 13.1 13.1 13.1 13.1 13.0 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:10] "rma_log2_con_sample_1"
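[Editor's note: that error comes from e1071's svm() being handed something other than a per-row label vector; with a 10 x 12340 matrix like the one above it needs one label per row. A sketch with purely hypothetical labels:]
library(e1071)
y   <- factor(c(rep("con", 5), rep("trt", 5)))   # hypothetical: one class label per row of the matrix
fit <- svm(x = svm_num_mat, y = y, kernel = "linear", cross = 5)
summary(fit)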
2012 Aug 27
0
kernlab's custom kernel for ksvm freezes
Hello everyone,
I'm trying to use a user-defined kernel. I know that kernlab offers
user-defined (custom) kernel functions in R.
I used the spam data included in the kernlab package
(number of variables = 58, number of examples = 4601).
My user-defined kernel has the form:
v  <- 1                             # 'v' is a scaling constant and must be defined beforehand
kp <- function(d, e) {
  cs <- as.matrix(v * d - v * e)
  exp(-(norm(cs, "F")^2) / 2)       # Gaussian kernel on the scaled difference
}
class(kp) <- "kernel"               # ksvm() expects custom kernels to carry class "kernel"
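[Editor's note: the apparent freeze is usually just cost — with a plain R kernel function, ksvm() evaluates kp() pairwise in interpreted R, which on ~4600 rows takes a very long time. A sketch that precomputes the Gram matrix so the expensive step is at least explicit; it assumes kp as defined above:]
library(kernlab)
data(spam)
x <- as.matrix(spam[, -58])                       # the 57 numeric predictors
K <- kernelMatrix(kp, x)                          # this is the slow, O(n^2) step
m <- ksvm(as.kernelMatrix(K), spam$type, type = "C-svc", C = 10)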
2004 Dec 01
1
tuning SVMs
Hi
I am doing this sort of thing:
POLY:
> obj = best.tune(svm, similarity ~ ., data = training, kernel = "polynomial")
> summary(obj)
Call:
best.tune(svm, similarity ~ ., data = training, kernel = "polynomial")
Parameters:
SVM-Type: eps-regression
SVM-Kernel: polynomial
cost: 1
degree: 3
gamma: 0.04545455
coef.0: 0
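[Editor's note: the parameters reported above are just svm()'s defaults, because best.tune() was given no search grid; tune() only searches over what is supplied in ranges. A sketch with a hypothetical grid, using the training data and response from the post:]
library(e1071)
obj <- tune(svm, similarity ~ ., data = training, kernel = "polynomial",
            ranges = list(degree = 2:4, gamma = 10^(-3:0), cost = 10^(-1:2)))
summary(obj)        # cross-validated performance over the whole grid
obj$best.model      # the SVM refitted with the winning parameter combination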
2005 Jun 23
1
errorest
Hi,
I am using the errorest function from the ipred package.
I am hoping to perform "bootstrap 0.632+" and "bootstrap leave-one-out".
According to the manual page for errorest, I use the following command:
ce632[i]<-errorest(ytrain ~., data=mydata, model=lda,
estimator=c("boot","632plus"), predict=mypredict.lda)$error
It didn't work. I then tried the
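[Editor's note: errorest() expects a single estimator per call (the vector in the help page just lists the allowed choices), so the two bootstrap flavours need two separate calls. A sketch using the mypredict.lda helper from the ipred examples, with mydata/ytrain as in the post and nboot chosen arbitrarily:]
library(ipred)
library(MASS)
mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class
e632  <- errorest(ytrain ~ ., data = mydata, model = lda, predict = mypredict.lda,
                  estimator = "632plus", est.para = control.errorest(nboot = 50))$error
eboot <- errorest(ytrain ~ ., data = mydata, model = lda, predict = mypredict.lda,
                  estimator = "boot",    est.para = control.errorest(nboot = 50))$error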
2010 Jun 23
1
Probabilities from survfit.coxph:
Hello:
In the example below (or for censored data) using survfit.coxph, can
anyone point me to a link or a PDF explaining how the probabilities appearing
in bold under "summary(pred$surv)" are calculated? Do these represent
a cumulative probability distribution in time (not including censored times)?
Thanks very much,
parmee
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
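[Editor's note: broadly, the probabilities come from the fitted Cox model's baseline survival estimate raised to exp(linear predictor); they form a survival curve S(t), not a cumulative distribution (that would be 1 - S(t)). A sketch with an arbitrary covariate value:]
library(survival)
fit  <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
pred <- survfit(fit, newdata = data.frame(age = 60))   # age = 60 is an arbitrary example value
summary(pred)                                          # 'survival' column = estimated S(t | age = 60)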
2009 Jun 17
0
nls with weights
Hi there,
I don't have much experience with fitting at all and I'd like to get
some advice on how to use the "weights" argument of nls correctly.
I have created some data with a sigmoidal curve shape. Each y value is
the mean of three replicates, and a standard deviation was
calculated as well.
Now, I'd like to weight the data points according to their standard
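[Editor's note: a sketch of the usual inverse-variance weighting, assuming a data frame dat with columns x, ymean (mean of the three replicates) and ysd (their standard deviation), and a logistic-shaped model; the starting values are rough guesses:]
fit <- nls(ymean ~ ymax / (1 + exp(-(x - a) * b)),
           data    = dat,
           start   = list(ymax = max(dat$ymean), a = median(dat$x), b = 1),
           weights = 1 / dat$ysd^2)      # inverse-variance weights from the replicate SDs
summary(fit)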
2013 Jan 08
0
bagging SVM Ensemble
Dear Sir,
I have a problem with my program. I would like to classify my data using
a bagging support vector machine ensemble. I split my data into training data
and test data. For a given training set TR(X), K replicate training sets
are first generated randomly by bootstrapping with replacement.
Next, a Support Vector Machine (SVM) is fitted to each bootstrap data set.
Finally, the
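[Editor's note: the excerpt stops before the aggregation step; majority voting is the usual choice. A minimal sketch with e1071, assuming data frames train and test that share a factor column named class:]
library(e1071)
K      <- 25                                            # number of bootstrap replicates
models <- lapply(seq_len(K), function(k) {
  boot <- train[sample(nrow(train), replace = TRUE), ]  # bootstrap sample with replacement
  svm(class ~ ., data = boot, kernel = "radial")
})
votes <- sapply(models, function(m) as.character(predict(m, test)))  # nrow(test) x K vote matrix
pred  <- factor(apply(votes, 1, function(v) names(which.max(table(v)))))  # majority vote per test row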
2009 Sep 17
2
SVM
Hello,
I have 12 samples and each sample has 1000 observations, i.e. I have a matrix X with 1000 rows and 12 columns.
m <- svm(t(X))
p <- predict (m)
Can anyone tell me how to use svmtrain() in R?
Many thanks,
Samuel
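[Editor's note: svmtrain() is a MATLAB function; the e1071 equivalent is svm(). With the 12 samples in the columns of X, t(X) puts one sample per row; without a label vector svm() fits a one-class (novelty-detection) model, so ordinary classification needs a label per sample. A sketch with hypothetical labels:]
library(e1071)
m1 <- svm(t(X), type = "one-classification")     # what svm(t(X)) does when no y is given
predict(m1, t(X))                                # TRUE/FALSE: inside the fitted region?
y  <- factor(rep(c("A", "B"), each = 6))         # hypothetical class labels, one per sample
m2 <- svm(t(X), y, kernel = "linear", cross = 3) # supervised fit; CV accuracy in summary(m2)
summary(m2)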
2007 Mar 16
0
help on sigmoid curve fitting
Hi list,
I was wondering how I should go about fitting a sigmoid curve to a dataset, and more specifically how I estimate the parameters a and b in the following equation:
1 / (1 + exp(-(x - a) * b))
with b the steepness of the sigmoid curve and a the shift of its centre relative to the centre of your data frame. The fit is a function of x, the location within the input vector, and y, an
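[Editor's note: a minimal nls sketch under those definitions, assuming y is the observed vector and x its index positions; the starting values are rough:]
x   <- seq_along(y)
fit <- nls(y ~ 1 / (1 + exp(-(x - a) * b)),
           start = list(a = mean(x), b = 0.1))
coef(fit)                         # a = fitted centre, b = fitted steepness
plot(x, y); lines(x, fitted(fit)) # quick visual check of the fit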
2010 Jul 06
2
numerical derivative R help
I fitted my CDF to a sum of exponentials and now I want to take the numerical
derivative of this function to obtain the probability density. I would really
appreciate your help regarding the error messages I am getting, which I don't
understand.
> fitterma <- function(xtime) {
a <- -0.09144115
b <- -0.01335756
c <- -2.368057
d <- -0.00600052
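[Editor's note: the excerpt cuts off before the derivative step and the error messages, but a central-difference approximation is usually enough once fitterma() is completed as a vectorised function of time (numDeriv::grad() is the packaged alternative); the evaluation grid below is hypothetical:]
h    <- 1e-4
tt   <- seq(0, 100, length.out = 200)                    # hypothetical grid of time points
dens <- (fitterma(tt + h) - fitterma(tt - h)) / (2 * h)  # derivative of the CDF = density
plot(tt, dens, type = "l", ylab = "estimated density")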
2011 Jan 21
1
help! completing a reviewer's suggestion: carry out GA+GP (Gaussian process)!
Hello, all experts,
My field is computer-aided drug design (mainly QSAR).
My paper now needs to be revised, and one reviewer asked me to apply a genetic
algorithm coupled with a Gaussian process (GA+GP).
my data:
training set: 191*106
test set: 73*106
Here, I need to use GA+GP for variable selection when building the model.
In R, I cannot find a GA package like the one in MATLAB
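[Editor's note: R does have the pieces — the GA package provides a binary-chromosome genetic algorithm and kernlab's gausspr() fits a Gaussian process. A rough sketch of combining them for variable selection, assuming a 191 x 106 matrix xtrain and a response vector ytrain; the CV scheme and GA settings are arbitrary:]
library(GA)
library(kernlab)
fitness <- function(bits) {                        # one bit per descriptor column
  vars <- which(bits == 1)
  if (length(vars) < 2) return(-Inf)
  fold <- sample(rep(1:5, length.out = nrow(xtrain)))
  rmse <- sapply(1:5, function(k) {                # 5-fold CV RMSE of a GP on the chosen columns
    fit <- gausspr(xtrain[fold != k, vars, drop = FALSE], ytrain[fold != k])
    sqrt(mean((ytrain[fold == k] -
               predict(fit, xtrain[fold == k, vars, drop = FALSE]))^2))
  })
  -mean(rmse)                                      # GA maximises, so negate the CV RMSE
}
run      <- ga(type = "binary", fitness = fitness, nBits = ncol(xtrain),
               popSize = 50, maxiter = 100)
selected <- which(run@solution[1, ] == 1)          # columns chosen by the best chromosome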
2006 Apr 03
0
Weighted Sensitivity, PPV etc.
All,
Appreciate any leads on the following:
In a recent blind-validation study of a depression screening instrument
we used a two-stage sampling design.
In stage 1, we used a broad paper-and-pencil screen to identify likely
positives (say 30% of the entire sample). In stage 2, we conducted in-depth
interviews with the 30% of likely positives plus another 20% of the
negatives as controls.
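[Editor's note: under such a design, the usual fix is to weight each verified (stage-2) subject by the inverse of its stage-2 sampling fraction before forming the accuracy measures. A sketch, assuming a data frame verified for the interviewed subjects with 0/1 columns screen and disease, and the sampling fractions from the post (all screen positives, 20% of the screen negatives):]
verified$w <- ifelse(verified$screen == 1, 1, 1 / 0.20)   # inverse stage-2 sampling fractions
sens <- with(verified, sum(w * (screen == 1 & disease == 1)) / sum(w * (disease == 1)))
spec <- with(verified, sum(w * (screen == 0 & disease == 0)) / sum(w * (disease == 0)))
ppv  <- with(verified, sum(w * (screen == 1 & disease == 1)) / sum(w * (screen == 1)))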