Displaying 20 results from an estimated 67 matches for "misclassif".
2011 Sep 02
2
misclassification rate
...this problem.
"My English is bad"; I hope that my question is clear:
I have a matrix with two columns (yp, yt):
yp: predicted values from my model.
yt: true values (my dependent variable y is categorical, with 3 levels
(0, 1, 2)).
I don't know how to proceed to calculate the misclassification rate and the
error types.
Thank you for answering
Doussa
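A minimal sketch of one way to do this, assuming the two columns are available as vectors yp (predicted) and yt (true) coded 0/1/2:

## toy example with three classes (0, 1, 2)
yt <- factor(c(0, 1, 2, 2, 1, 0, 1, 2, 0, 1))   # true classes
yp <- factor(c(0, 1, 1, 2, 1, 0, 2, 2, 0, 0))   # predicted classes

conf <- table(true = yt, predicted = yp)        # confusion matrix
conf
1 - sum(diag(conf)) / sum(conf)                 # overall misclassification rate
prop.table(conf, 1)                             # row-wise rates: one error profile per true class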
2010 Nov 22
1
using rpart with a tree misclassification condition
Hello
I want to build a classification tree for a binary response variable,
where the condition for the final tree should be: the total misclassification
for each group (zero or one) will be less than 10%.
For example: if I have 100 observations in the root, 90 from group 0 and 10
from group 1, I want that in the final tree a maximum of 9 and 1
observations out of groups 0 and 1, respectively, will be misclassified.
Does anyone know what code...
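As far as I know, rpart has no built-in stopping rule of this kind, but the condition can be checked on a fitted tree from its per-group misclassification rates; a minimal sketch with illustrative 0/1 data and the 10% threshold:

library(rpart)

## illustrative 0/1 data
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- factor(ifelse(runif(100) < plogis(2 * d$x1), 1, 0))

fit <- rpart(y ~ x1 + x2, data = d, method = "class")

pred <- predict(fit, type = "class")
tab  <- table(true = d$y, predicted = pred)
per_group_error <- 1 - diag(prop.table(tab, 1))   # misclassification rate within each group
per_group_error
all(per_group_error < 0.10)                       # does the fitted tree satisfy the 10% condition?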
2002 Jan 05
1
computing misclassification table for tree objects
I have a classification tree that I computed via the tree function
(in the tree package). I'd like to compute a misclassification table
(if that's the right term) on the data used to compute the tree. That
is, I want to compute a table with the different classes (i.e., levels of the
response factor) on the rows and the columns, and where entry [i,j] is
the number of times the tree classified an observation of class...
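A minimal sketch with the tree package (iris standing in for the data used to fit the tree):

library(tree)

tr <- tree(Species ~ ., data = iris)

## rows = true class, columns = predicted class; entry [i, j] counts the
## observations of class i that the tree classified as class j
conf <- table(true = iris$Species, predicted = predict(tr, iris, type = "class"))
conf
sum(conf) - sum(diag(conf))    # total number misclassified on the training data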
2007 Jun 16
0
Function for misclassification rate/type I,II error??
Hi
Is there any function in R that gives us the error rate (misclassification rate)
for logistic-regression-type classification?
I also want to know the function to determine type I and type II errors.
I have found a link where "misclass" and "confusion" are used, but I don't
know the package name.
http://alumni.media.mit.edu/~tpminka/courses/36-...
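Both quantities are straightforward to compute from a confusion matrix; a minimal sketch for a glm() logistic fit (the data and the 0.5 cutoff are illustrative assumptions):

## illustrative binary-response data
set.seed(1)
d <- data.frame(x = rnorm(200))
d$y <- factor(rbinom(200, 1, plogis(1.5 * d$x)), levels = c(0, 1))

fit  <- glm(y ~ x, data = d, family = binomial)
phat <- predict(fit, type = "response")                    # fitted probabilities
pred <- factor(as.integer(phat > 0.5), levels = c(0, 1))   # classify at a 0.5 cutoff

conf <- table(true = d$y, predicted = pred)
conf
mean(pred != d$y)                          # misclassification rate
conf["0", "1"] / sum(conf["0", ])          # type I error rate (false positives among true 0s)
conf["1", "0"] / sum(conf["1", ])          # type II error rate (false negatives among true 1s)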
2008 Feb 24
1
what missed ----- CART
...ree.screens()  # useless
plot(myfit); text(myfit, all = TRUE, cex = 0.5, pretty = 0)
# tile.tree(myfit, fgl$type)  # useless
# close.screen(all = TRUE)  # useless
My current tree plot, produced by the above code, shows:
1. Overlapping numbers caused by unsuitable branch lengths.
2. No misclassification rates: 'misclass.tree' only raises the error 'misclassification error rate is appropriate for factor responses only', but my response y is 0/1 data.
3. Unsuitable placement of annotations: there are not two annotations of the splitting criteria on the two branches when a node...
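On point 2, misclass.tree() needs the tree to have been fitted as a classification tree, which requires the 0/1 response to be a factor; a minimal sketch with simulated stand-in data:

library(tree)

## illustrative 0/1 response; converting it to a factor is the key step,
## otherwise tree() fits a regression tree and misclass.tree() refuses to run
set.seed(1)
mydata <- data.frame(x1 = rnorm(150), x2 = rnorm(150))
mydata$y <- factor(rbinom(150, 1, plogis(2 * mydata$x1)))

myfit <- tree(y ~ ., data = mydata)
plot(myfit); text(myfit, all = TRUE, cex = 0.5)

misclass.tree(myfit)                  # total number of misclassified training observations
misclass.tree(myfit, detail = TRUE)   # misclassification count at each node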
2010 Apr 30
1
how is xerror calculated in rpart?
Hi,
I've searched online, in a few books, and in the archives, but haven't seen
this. I believe that xerror is scaled to rel error on the first split.
After fitting an rpart object, is it possible, with a little math, to
determine the percentage of true classifications represented by an xerror
value? -seth
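Both rel error and xerror are scaled by the root node error, so the absolute cross-validated misclassification rate is xerror times the root node error, and the proportion of true classifications is one minus that; a minimal sketch (iris used for illustration):

library(rpart)

fit <- rpart(Species ~ ., data = iris, method = "class")
printcp(fit)

## root node error = fraction of observations not in the majority class
root_error <- 1 - max(prop.table(table(iris$Species)))

cp <- as.data.frame(fit$cptable)
cv_misclass <- cp$xerror * root_error    # absolute cross-validated misclassification rate per cp row
cv_correct  <- 1 - cv_misclass           # proportion of true classifications
cbind(cp, cv_misclass, cv_correct)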
2005 Jun 23
1
errorest
Hi,
I am using the errorest function from the ipred package.
I am hoping to perform "bootstrap 0.632+" and "bootstrap leave one out".
According to the manual page for errorest, I use the following command:
ce632[i] <- errorest(ytrain ~ ., data = mydata, model = lda,
estimator = c("boot", "632plus"), predict = mypredict.lda)$error
It didn't work. I then tried the
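For what it's worth, errorest()'s estimator argument takes a single choice per call, so one sketch (making a separate call per estimator, with lda on iris for illustration) would be:

library(ipred)
library(MASS)   # for lda

## lda's predict() returns a list, so errorest() needs a wrapper that extracts the class
mypredict.lda <- function(object, newdata) predict(object, newdata = newdata)$class

set.seed(1)
err_632plus <- errorest(Species ~ ., data = iris, model = lda,
                        estimator = "632plus", predict = mypredict.lda)$error
err_boot    <- errorest(Species ~ ., data = iris, model = lda,
                        estimator = "boot", predict = mypredict.lda)$error
err_632plus
err_boot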
2011 Oct 25
2
Logistic Regression - Variable Selection Methods With Prediction
Hello,
I am pretty new to R; I have always used SAS and SAS products. My
target variable is binary ('Y' and 'N') and I have about 14 predictor
variables. My goal is to compare different variable selection methods
like forward, backward, and all possible subsets. I am using the
misclassification rate to pick the winning method.
This is what I have as of now:
Reg <- glm(Graduation ~ ., DFtrain, family = binomial(link = "logit"))
step <- extractAIC(Reg, direction = "forward")
pred <- predict(Reg, DFtest, type = "response")
mis <- mean({pred >...
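Note that extractAIC() only reports the AIC of an already-fitted model; the forward search itself is done by step() (or MASS::stepAIC). A minimal sketch of one way to wire this together, with simulated stand-ins for DFtrain/DFtest and a 0.5 cutoff:

## illustrative train/test data with a binary response
set.seed(1)
dat <- data.frame(x1 = rnorm(400), x2 = rnorm(400), x3 = rnorm(400))
dat$Graduation <- factor(ifelse(runif(400) < plogis(dat$x1 - 0.5 * dat$x3), "Y", "N"))
DFtrain <- dat[1:300, ]; DFtest <- dat[301:400, ]

## forward selection by AIC, starting from the intercept-only model
null_fit <- glm(Graduation ~ 1, data = DFtrain, family = binomial)
fwd <- step(null_fit, scope = ~ x1 + x2 + x3, direction = "forward", trace = 0)

## misclassification rate on the hold-out set (0.5 cutoff on P(Graduation = "Y"))
pred <- predict(fwd, DFtest, type = "response")
mean(ifelse(pred > 0.5, "Y", "N") != DFtest$Graduation)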
2005 Oct 14
1
Predicting classification error from rpart
...p. I'm using rpart to analyse data on
skull base morphology, essentially predicting sex from one or several
skull base measurements. The sex of the people whose skulls are being
studied is known, and lives as a factor (M,F) in the data. I want to
get back predictions of gender, and particularly misclassification
rates.
rpart produces output like this :-
> printcp(rpart.LFM)
Classification tree:
rpart(formula = Sex ~ LFM, data = Brides2)
Variables actually used in tree construction: LFM
Root node error: 44/104 = 0.42308 n= 104
CP nsp...
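A minimal sketch of pulling predictions and misclassification rates out of such a fit (the kyphosis data shipped with rpart stands in for Brides2):

library(rpart)

fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, method = "class")

pred <- predict(fit, type = "class")                 # predicted class for each observation
table(true = kyphosis$Kyphosis, predicted = pred)    # confusion matrix
mean(pred != kyphosis$Kyphosis)                      # resubstitution misclassification rate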
2012 Aug 19
1
e1071 - tuning is not giving the best within the range
...(model, data, probability=TRUE, decision.values = TRUE)
tab<-table(predicts, data$class)
tab
This is what I face:
Parameters:
SVM-Type: C-classification
SVM-Kernel: linear
cost: 0.26
gamma: 0.1
Number of Support Vectors: 61
But when I try cost=0.31, I get a lower misclassification error rate than
I do with cost=0.26.
Is this difference because the error used while tuning is different from the
misclassification value?
Thanks in advance.
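One likely explanation is that tune() selects the cost that minimizes the cross-validated error over the supplied grid, which need not coincide with the cost giving the lowest error on the data used to build the table above; a minimal sketch (iris standing in for the real data):

library(e1071)

set.seed(1)
## tune() picks the cost minimizing 10-fold cross-validated error over the grid ...
tuned <- tune(svm, Species ~ ., data = iris, kernel = "linear",
              ranges = list(cost = seq(0.01, 1, by = 0.05)))
summary(tuned)
tuned$best.parameters

## ... which can differ from the resubstitution error at any single cost value
fit <- svm(Species ~ ., data = iris, kernel = "linear", cost = 0.26)
mean(predict(fit, iris) != iris$Species)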
2009 Apr 27
1
question about adaboost.
Hello,
I would like to know how to obtain the misclassification error when performing a boosting analysis with the adabag package.
With:
> prop.table(Tesis.boostcv$confusion)
I obtain the confusion matrix, but not the overall misclassification error.
Thanks in advance,
BSc. Cecilia Lezama
Facultad de Ciencias - UDELAR
Montevideo - Uruguay.
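If the object comes from boosting.cv(), the overall error is normally also returned as a component; otherwise it can be computed from the confusion matrix. A minimal sketch (iris used for illustration, small settings to keep it quick):

library(adabag)

set.seed(1)
cvfit <- boosting.cv(Species ~ ., data = iris, v = 5, mfinal = 10)

cvfit$error                         # overall cross-validated misclassification error
conf <- cvfit$confusion
1 - sum(diag(conf)) / sum(conf)     # the same quantity, computed from the confusion matrix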
2007 Jun 12
3
Appropriate regression model for categorical variables
Dear users,
In my psychometric test I have applied logistic regression to my data. My
data consist of 50 predictors (22 continuous and 28 categorical) plus a
binary response.
Using glm() and stepAIC() I didn't get a satisfactory result, as the misclassification
rate is too high. I think the categorical variables are responsible for this
debacle. Some of them have more than 6 levels (one has 10 levels).
Please suggest a better regression model for this situation. If possible,
please also suggest an article.
Thank you.
Tirtha
2009 Mar 11
2
Couple of Questions about Classification trees
...ade of 5 different numbers. They
do represent something but it would take too long to explain.
I want to try to find a classification rule for the 5 numbers in the rows
based on the columns, so I created a classification tree, plotted it, and
then pruned it. My question is: how do you print the misclassification rate
at each node on the actual diagram of the classification tree? I can't seem
to get it up there. My notes use gmistext(), but I have a feeling that
it's for S-PLUS rather than R, as gmistext() doesn't work for me either.
Second question is when I try using the predict.tree...
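On the first question, in the tree package the per-node counts can be obtained with misclass.tree(..., detail = TRUE) and then placed on the plot via text()'s label argument; a sketch under the assumption that the detail vector comes back in the same node order as the tree's frame component (iris used for illustration):

library(tree)

tr <- tree(Species ~ ., data = iris)

## number of misclassified observations at each node, stored as an extra frame column
tr$frame$miscl <- misclass.tree(tr, detail = TRUE)

plot(tr)
text(tr, all = TRUE)                            # usual split and class labels
text(tr, label = "miscl", all = TRUE,
     splits = FALSE, adj = c(0.5, 2))           # per-node misclassification counts, offset below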
2005 Jan 20
2
Cross-validation accuracy in SVM
Hi all -
I am trying to tune an SVM model by optimizing the cross-validation
accuracy. Maximizing this value doesn't necessarily seem to minimize the
number of misclassifications. Can anyone tell me how the
cross-validation accuracy is defined? In the output below, for example,
cross-validation accuracy is 92.2%, while the fraction of correctly
classified samples is (1476+170)/(1476+170+4) = 99.7% !?
Thanks for any help.
Regards - Ton
---
Parameters:
SVM-Type:...
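One likely explanation is that the cross-validation accuracy reported by svm(..., cross = k) is estimated on held-out folds, whereas an accuracy computed from the fitted confusion matrix is a resubstitution figure on the training data, so the two need not agree; a minimal sketch of the comparison (iris standing in):

library(e1071)

set.seed(1)
fit <- svm(Species ~ ., data = iris, cross = 10)   # request 10-fold cross-validation

fit$tot.accuracy                                   # cross-validation accuracy (held-out folds)
100 * mean(predict(fit, iris) == iris$Species)     # resubstitution accuracy, usually higher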
2007 Jan 29
3
comparing random forests and classification trees
...nstead I have produced a table for both
analyses comparing the observed and predicted response.
E.g. table(data$dependent, predict(model, type = "class"))
I am looking for confirmation that (a) it is incorrect to compare the error
estimates for the two techniques and (b) that comparing the
misclassification rates is an appropriate method for comparing the two
techniques.
Thanks
Amy
Amelia Koch
University of Tasmania
School of Geography and Environmental Studies
Private Bag 78 Hobart
Tasmania, Australia 7001
Ph: +61 3 6226 7454
ajkoch@utas.edu.au
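One common way to put the two techniques on an equal footing is to compare misclassification rates on the same held-out test set; a minimal sketch (iris standing in for the real data, rpart for the classification tree):

library(rpart)
library(randomForest)

set.seed(1)
idx   <- sample(nrow(iris), 100)               # simple train/test split
train <- iris[idx, ]; test <- iris[-idx, ]

tree_fit <- rpart(Species ~ ., data = train, method = "class")
rf_fit   <- randomForest(Species ~ ., data = train)

mean(predict(tree_fit, test, type = "class") != test$Species)   # tree misclassification rate
mean(predict(rf_fit, test) != test$Species)                     # random forest misclassification rate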
2007 Jul 12
1
Package for .632 (and .632+) bootstrap and the cross-validation of ROC Parameters
Hi users,
I need to calculate the .632 (and .632+) bootstrap and the cross-validation of the
area under the curve (AUC) to compare my models. Is there a package for
this? I know about 'ipred', and using it I can calculate misclassification
errors.
Please help. It's urgent.
2006 Sep 27
1
MSM modeling and transition rates in R
...ngs,
I'm using the msm (multi-state Markov modelling) package to study the
progression of fibrosis in the U.S. hepatitis C population. I find this is a
very fascinating tool for an applied researcher like myself.
I have a four-stage, progression-only model without any absorbing state,
also assuming no misclassification error in the data for the time being.
I also have a couple of covariates in the model, so I get these three
transition rates, and I was wondering how I can compare these transition
rates to be able to say that they are not statistically equal. (I guess
I might be thinking too hard about this, can...
2008 May 21
1
How to use classwt parameter option in RandomForest
...st}, and
predictor variables X, with continuous and factor variables using
random forests in R. The variable Y acts like an ordinal variable, but
I recoded it as a factor variable.
I ran a simulation and got an OOB error rate estimate of 60%. I validated
against some external datasets and got about 59% misclassification
error. I would like to tinker with the classwt option in the
randomForest function to see if I can get better performance from the model. My
confusion arises from how to define these weights. If I say classwt =
c(3, 6, 9, 1, 2, 3), how exactly do the levels get weighted? If this is a 6x6
matrix, I can put...
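For what it's worth, classwt is a plain numeric vector with one entry per class, matched to the order of levels(Y), rather than a matrix; a sketch with simulated six-class data and the weights from the question:

library(randomForest)

## illustrative six-class response recoded as a factor
set.seed(1)
d <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
d$Y <- cut(d$x1 + rnorm(300), breaks = 6, labels = paste0("c", 1:6))

levels(d$Y)                                       # classwt entries are matched to this order
rf <- randomForest(Y ~ x1 + x2, data = d,
                   classwt = c(3, 6, 9, 1, 2, 3)) # one weight per level of Y, in levels() order
rf$confusion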
2012 Aug 02
1
Naive Bayes in R
...turned)
library(e1071)
m <- naiveBayes(returned ~ ., dat)
m
predict(m, dat[1:5, -3])
table(predict = predict(m, dat[1:5, -3]), true = dat[1:5, 3])
predict(m, dat[1:5, -3], type = "raw")
So far, so good I think (???).
I want to know whether there is any diagnostic test to determine the overall
misclassification rate
of an NB classifier, and whether there is a function in R available to
implement it.
Thanks,
Abraham
--
Abraham Mathew
Statistical Analyst
www.amathew.com
720-648-0108
@abmathewks
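No special diagnostic function is needed; the overall misclassification rate can be computed directly by comparing predictions with the true labels (a sketch in the same style as the code above, using iris as a stand-in):

library(e1071)

m <- naiveBayes(Species ~ ., data = iris)

pred <- predict(m, iris[, -5])                 # predicted class for every row
table(predict = pred, true = iris$Species)     # confusion matrix
mean(pred != iris$Species)                     # overall (resubstitution) misclassification rate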
2009 Nov 02
1
modifying predict.nnet() to function with errorest()
Greetings,
I am having trouble calculating artificial neural network
misclassification errors using errorest() from the ipred package.
I have had no problems estimating the values with randomForest()
or svm(), but can't seem to get it to work with nnet(). I believe
this is due to the output of the predict.nnet() function within
cv.factor(). Below is a quick example of...
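One sketch of a workaround, assuming the problem is that predict.nnet(type = "class") returns a character vector while errorest() expects a factor with the response's levels:

library(ipred)
library(nnet)

## wrapper: convert the character output of predict.nnet() back to a factor;
## nnet's formula method stores the class levels in the fitted object's $lev component
mypredict.nnet <- function(object, newdata)
  factor(predict(object, newdata = newdata, type = "class"), levels = object$lev)

set.seed(1)
errorest(Species ~ ., data = iris, model = nnet, predict = mypredict.nnet,
         estimator = "cv", size = 4, trace = FALSE)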