search for: printcp

Displaying 19 results from an estimated 19 matches for "printcp".

2012 Feb 17
1
Different cp values in rpart() using plotcp() and printcp()
hi, I have a question regarding cp values in rpart(). When I use plotcp() I get a figure with cp values on the x-axis, but when I use printcp() the cp values in that list are different from the values in the figure from plotcp(). Does someone know why? Silje
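A plausible explanation, sketched below with the built-in kyphosis data (the poster's data is not available): printcp() lists the raw CP values stored in fit$cptable, while plotcp() labels its x-axis with geometric means of neighbouring CP values, so the two displays show different numbers for the same pruning sequence.

    library(rpart)
    fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, method = "class")
    printcp(fit)                                # raw CP column, as stored in fit$cptable
    cp0 <- fit$cptable[, "CP"]
    sqrt(cp0 * c(Inf, cp0[-length(cp0)]))       # geometric means of adjacent CPs, as labelled by plotcp()
    plotcp(fit)
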
2010 Oct 12
2
repeating an analysis
...pick the appropriate tree size and post it to a datafile where I can then look at the frequency distribution of tree sizes. Here is the code and output from a single run > fit1 <- rpart(CHAB~.,data=chabun, method="anova", control=rpart.control(minsplit=10, cp=0.01, xval=10)) > printcp(fit1) Regression tree: rpart(formula = CHAB ~ ., data = chabun, method = "anova", control = rpart.control(minsplit = 10, cp = 0.01, xval = 10)) Variables actually used in tree construction: [1] EXP LAT POC RUG Root node error: 35904/33 = 1088 n= 33 CP nsplit rel error xerror...
2009 May 12
1
questions on rpart (tree changes when rearrange the order of covariates?!)
...h ties? Here is the code for running the two trees. library(mlbench) data(PimaIndiansDiabetes2) mydata<-PimaIndiansDiabetes2 library(rpart) fit2<-rpart(diabetes~., data=mydata,method="class") plot(fit2,uniform=T,main="CART for original data") text(fit2,use.n=T,cex=0.6) printcp(fit2) table(predict(fit2,type="class"),mydata$diabetes) ## misclassification table: rows are fitted class neg pos neg 437 68 pos 63 200 #Klimt(fit2,mydata) pmydata<-data.frame(mydata[,c(1,6,3,4,5,2,7,8,9)]) fit3<-rpart(diabetes~., data=pmydata,method="class") p...
2009 May 22
1
bug in rpart?
...d. Your professional opinion is very much appreciated. library(mlbench) data(PimaIndiansDiabetes2) mydata<-PimaIndiansDiabetes2 library(rpart) fit2<-rpart(diabetes~., data=mydata,method="class") plot(fit2,uniform=T,main="CART for original data") text(fit2,use.n=T,cex=0.6) printcp(fit2) table(predict(fit2,type="class"),mydata$diabetes) ## misclassification table: rows are fitted class neg pos neg 437 68 pos 63 200 pmydata<-data.frame(mydata[,c(1,6,3,4,5,2,7,8,9)]) fit3<-rpart(diabetes~., data=pmydata,method="class") plot(fit3,uniform=T,...
2006 Sep 25
2
rpart
Dear r-help-list: If I use the rpart method like cfit<-rpart(y~.,data=data,...), what kind of tree is stored in cfit? Is it right that this tree is not pruned at all, that it is the full tree? If so, it's up to me to choose a subtree by using the printcp method. In the technical report from Atkinson and Therneau "An Introduction to recursive partitioning using the rpart routines" from 2000, one can see the following table on page 15: CP nsplit rel error xerror xstd 1 0.105 0 1.00000 1.0000 0.108 2 0.056 3 0....
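The fitted object holds the tree grown down to the cp/minsplit limits set in rpart.control(); a smaller subtree is obtained afterwards with prune(). A minimal sketch with the built-in kyphosis data, where the cp choice shown is simply the row with the lowest cross-validated error:

    library(rpart)
    cfit <- rpart(Kyphosis ~ ., data = kyphosis, method = "class")
    printcp(cfit)                                    # inspect the CP table
    tab    <- cfit$cptable
    best   <- tab[which.min(tab[, "xerror"]), "CP"]  # cp with the smallest xerror
    pruned <- prune(cfit, cp = best)                 # the corresponding subtree
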
2008 Oct 01
0
xpred.rpart() in library(mvpart)
...sych.upenn.edu/R/library/mvpart/html/xpred.rpart.html says: data(car.test.frame) fit <- rpart(Mileage ~ Weight, car.test.frame) xmat <- xpred.rpart(fit) xerr <- (xmat - car.test.frame$Mileage)^2 apply(xerr, 2, sum) # cross-validated error estimate # approx same result as rel. error from printcp(fit) apply(xerr, 2, sum)/var(car.test.frame$Mileage) printcp(fit) I ran the following R function: function () { # library(mvpart) xx1 <- c(1,2,3,4,5,6,7,8,9,10) xx2 <- c(5,2,1,4,3,6,2,8,2,2) xx3 <- c(9,8,3,7,2,3,1,9,1,6) yy <- c(1,8,2,7,4,3,1,2,2,8) data1 <- data.frame(x1=xx...
2010 May 03
1
rpart, cross-validation errors question
...idation errors increase instead of decrease (same thing happens with an unrelated data set). Why does this happen? Am I doing something wrong? # Classification Tree with rpart library(rpart) # grow tree fit <- rpart(Kyphosis ~ Age + Number + Start, method="class", data=kyphosis) printcp(fit) # display the results plotcp(fit) # visualize cross-validation results Thank you, Claudia
2010 May 11
1
how to extract the variables used in decision tree
Hi, dear R community, how can I extract the variables actually used in tree construction? I want to extract these variables and combine them with other variables as features in the next step of model building. > printcp(fit.dimer) Classification tree: rpart(formula = outcome ~ ., data = p_df, method = "class") Variables actually used in tree construction: [1] CT DP DY FC NE NW QT SK TA WC WD WG WW YG Root node error: 608/1743 = 0.34882 n= 1743 CP nsplit rel error xerror xstd 1 0.185033...
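The names listed under "Variables actually used in tree construction" can be pulled straight from the fit's frame; a sketch, keeping the poster's object name fit.dimer as an assumption:

    vars <- fit.dimer$frame$var               # one entry per node; terminal nodes are "<leaf>"
    used <- sort(unique(as.character(vars[vars != "<leaf>"])))
    used                                      # the splitting variables, ready for reuse
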
2004 Jun 04
1
rpart
...'t seem too stupid. 1.) My first question concerns the rpart() method. Which criterion does rpart use to find the best split - entropy impurity, Bayes error (min. error) or the Gini index? Is there a way to make it use the entropy impurity? The second and third questions concern the output of the printcp() function. 2.) What exactly are the cps here? I assumed them to be the threshold complexity parameters as in Breiman et al., 1998, Section 3.3? Are they the same as the threshold levels of alpha? I have read somewhere that the cps here are the threshold alphas divided by the root node...
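On the first question: rpart() splits classification trees on the Gini index by default, and the information (entropy) criterion can be requested through parms. A minimal sketch with the built-in kyphosis data:

    library(rpart)
    fit.gini <- rpart(Kyphosis ~ ., data = kyphosis, method = "class")   # Gini index (default)
    fit.info <- rpart(Kyphosis ~ ., data = kyphosis, method = "class",
                      parms = list(split = "information"))               # entropy impurity
    printcp(fit.info)
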
2004 Jun 11
1
Error when I try to build / plot a tree using rpart()
...hod = "class") n= 8063 CP nsplit rel error 1 0.009451796 0 1 Error in yval[, 1] : incorrect number of dimensions > print(nhg3.rp) n= 8063 node), split, n, loss, yval, (yprob) * denotes terminal node 1) root 8063 3703 1 (0.4592583 0.5407417) * > printcp(nhg3.rp) Classification tree: rpart(formula = profitresp ~ ., data = nhg3, method = "class") Variables actually used in tree construction: character(0) Root node error: 3703/8063 = 0.45926 n= 8063 CP nsplit rel error 1 0.0094518 0 1 Any help is appreciated. Th...
2003 Sep 29
1
CP for rpart
...My data have two classes with 138 observations and 129 attributes. Here is what I did: >dim(man.dat[,c(1,8:136)]) [1] 138 130 >man.dt1 <- rpart(Target~.,data=man.dat[,c(1,8:136)], >method='class',cp=1e-5, parms=list(split='information')) >plotcp(man.dt1) >printcp(man.dt1) Classification tree: rpart(formula = Target ~ ., data = man.dat[, c(1, 8:136)], method = "class", parms = list(split = "information"), cp = 1e-05) Variables actually used in tree construction: [1] CHX.V CYN.Cu SPF.Bi Root node error: 25/138 = 0.18116 n= 138...
2005 Oct 14
1
Predicting classification error from rpart
...sentially predicting sex from one or several skull base measurements. The sex of the people whose skulls are being studied is known, and lives as a factor (M,F) in the data. I want to get back predictions of gender, and particularly misclassification rates. rpart produces output like this :- > printcp(rpart.LFM) Classification tree: rpart(formula = Sex ~ LFM, data = Brides2) Variables actually used in tree construction: LFM Root node error: 44/104 = 0.42308 n= 104 CP nsplit rel error xerror xstd 1 0.227273 0 1.00000 1.0000...
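Training-set misclassification rates can be read off a confusion table of predicted against observed classes; a sketch under the poster's object names (rpart.LFM, Brides2, Sex), which are assumptions here:

    pred <- predict(rpart.LFM, type = "class")              # fitted classes for the training data
    tab  <- table(predicted = pred, observed = Brides2$Sex)
    tab
    1 - sum(diag(tab)) / sum(tab)                            # overall misclassification rate
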
2006 Oct 17
1
Some questions on Rpart algorithm
...s in advance. (1) I'd like text(.rpart) to print percentages of each class rather than counts. I don't see an option for this so would like to modify text.rpart. However, I can't find the source since it is a method that's "hidden". How can I find the source? (2) printcp prints a table with columns cp, nsplit, rel error, xerror, xstd. I am guessing that cp is complexity, nsplit is the number of the split, rel error is the error on the test set, xerror is the cross-validation error and xstd is the standard deviation of the error across the cross-validation sets. Is there any docume...
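On the first question, the source of a registered but unexported S3 method such as text.rpart can be displayed with getAnywhere(); a sketch:

    library(rpart)
    getAnywhere(text.rpart)     # prints the hidden method's source
    rpart:::text.rpart          # equivalent, via the non-exported namespace operator
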
2010 Sep 26
4
How to update an old unsupported package
Hi all, I have a package that is specific to a task I was repetitively using a few years ago. I now needed to run it again with new data. However I am told it was built with an older version of R and will not work. How can I tweak the package so it will run on 11.1? It was a one-off product and has not been maintained. Is there a way to "unpackage" it and repackage it to work? I
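A common route, sketched below, is simply to reinstall the package from its source tarball under the current R and fix whatever errors the install reports; the file name here is a placeholder:

    ## from R, pointing at the package's source tarball (name is illustrative)
    install.packages("mypackage_1.0.tar.gz", repos = NULL, type = "source")
    ## or from a shell:
    ##   R CMD INSTALL mypackage_1.0.tar.gz
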
2006 Nov 02
1
Question on cross-validation in rpart
Hi R folks, I am using R version 2.2.1 for Unix. I am exploring the rpart function, in particular the rpart.control parameter. I have tried using different values for xval (0, 1, 10, 20) leaving other parameters constant but I receive the same tree after each run. Is the 10-fold cross-validation default still running every time? I would expect the trees to change at least a little when I
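The tree structure itself does not depend on xval; cross-validation only fills in the xerror and xstd columns of the CP table, so identical trees across xval settings are expected. A small sketch illustrating the difference on the built-in kyphosis data:

    library(rpart)
    fit0  <- rpart(Kyphosis ~ ., data = kyphosis, method = "class",
                   control = rpart.control(xval = 0))    # cross-validation switched off
    fit10 <- rpart(Kyphosis ~ ., data = kyphosis, method = "class",
                   control = rpart.control(xval = 10))   # 10-fold cross-validation
    colnames(fit0$cptable)     # CP, nsplit, rel error only
    colnames(fit10$cptable)    # CP, nsplit, rel error, xerror, xstd
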
2010 Aug 03
1
R: classification tree model!
An embedded text encoded in an unknown character set was scrubbed... Name: not available URL: <https://stat.ethz.ch/pipermail/r-help/attachments/20100803/9fb28807/attachment.pl>
2011 May 25
0
Fw: questions about rpart - cont.
...s possible to interpret the relative or cross-validation error, for example by the number of samples. I know that they are scaled to 1 at the root node of the tree, but for any number of splits, how much error do we make for each sample (but we don't know the number of samples in each split returned by printcp)? Any other information is welcome. Look forward to your reply, Carol
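The errors printcp() reports are scaled by the root-node error, so multiplying by it puts them back on the original scale, and per-node sample counts are stored in the fit's frame; a sketch, with fit standing for any rpart object:

    root.err <- fit$frame$dev[1] / fit$frame$n[1]   # the "Root node error" printcp() reports
    fit$cptable[, "xerror"] * root.err              # cross-validated error on the original scale
    fit$frame$n                                     # number of observations in each node
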
2012 Dec 07
0
loop for calculating 1-se in rpart
..." < xerror < "(minimum xerror + xstd error)" and which also has the smallest "nsplit".
> cp50 <- replicate(50,{ fit1 <- rpart(chbiomsq~HC+BC+POC+RUG+Depth+Exp+DFP+FI+LAT, data=ch, method="anova", control=rpart.control(minsplit=10,cp=0.01,xval=10)); x=printcp(fit1); x[which.min(x[,"xerror"]),"nsplit"]})
> hist(cp50,main="optimal splits for tree",xlab= "no. of optimal tree splits", ylab= "frequency")
Any help appreciated. Andy -- Andrew Halford Ph.D Adjunct Research Scientist University of Guam ...
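A sketch of one way to apply the usual 1-SE rule to a single fit (the object names ch and chbiomsq from the post are kept as assumptions): take the smallest tree whose xerror lies within one xstd of the minimum xerror.

    library(rpart)
    one.se.size <- function(fit) {
      tab    <- fit$cptable                        # the table printcp() displays
      i.min  <- which.min(tab[, "xerror"])
      cutoff <- tab[i.min, "xerror"] + tab[i.min, "xstd"]
      ok     <- which(tab[, "xerror"] <= cutoff)   # trees within one SE of the minimum
      tab[min(ok), "nsplit"]                       # smallest such tree
    }
    cp50 <- replicate(50, one.se.size(
      rpart(chbiomsq ~ HC + BC + POC + RUG + Depth + Exp + DFP + FI + LAT,
            data = ch, method = "anova",
            control = rpart.control(minsplit = 10, cp = 0.01, xval = 10))))
    hist(cp50, main = "optimal splits for tree",
         xlab = "no. of optimal tree splits", ylab = "frequency")
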
2010 Oct 15
0
nomianl response model
...I think
> answer = replicate(50,{fit1 <- rpart(CHAB~.,data=chabun, method="anova",
>                        control=rpart.control(minsplit=10, cp=0.01, xval=10));
>                        x = printcp(fit1);
>                        x[which.min(x[,'xerror']),'nsplit']})
> will put the numbers you want into answer, but there was no reproducible
> example to test it on. Unfortunately, I don't know of any way to suppress
> the printing from printcp...
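The printing can be avoided entirely, because the matrix printcp() displays is already stored in the fitted object; a sketch, again with the chabun/CHAB names from the thread as assumptions:

    answer <- replicate(50, {
      fit1 <- rpart(CHAB ~ ., data = chabun, method = "anova",
                    control = rpart.control(minsplit = 10, cp = 0.01, xval = 10))
      x <- fit1$cptable                     # the matrix printcp(fit1) would print; nothing printed here
      x[which.min(x[, "xerror"]), "nsplit"]
    })
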