I forgot to specify that the cross-validation error cannot be decreased below 0.91. Note that for values of cp smaller than 0.01, the cross-validation error increases. For a classification problem (method = "class" in the rpart function), is the cross-validation error a sum of squared errors, a relative error, or some other type of error? Is it possible to determine the true positives and false positives using rpart?

Thanks

----- Forwarded Message ----
From: carol white <wht_crl at yahoo.com>
To: r-help at stat.math.ethz.ch
Sent: Wed, May 25, 2011 9:06:15 AM
Subject: questions about rpart

Hi,

I have applied rpart to my data set, and for cp = 0.01 the cross-validation error (xerror) is lower (minimum 0.05) than for other values of cp. However, an important predictor is not retained in the final tree. Moreover, another predictor has missing values in 40% of the samples, so I don't know whether the important predictor is dropped because of the missing values or because I should have chosen other values of cp. Note that the data have a binary class.

Another question is how to interpret the relative or cross-validation error, for example in terms of the number of samples. I know that they are scaled to 1 at the root node of the tree, but for any number of splits, how much error do we make per sample? (printcp does not report the number of samples in each split.)

Any other information is welcome.

Looking forward to your reply,
Carol
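
A minimal sketch of how one might check both points, assuming a binary classification fit comparable to the one described (the kyphosis data shipped with rpart stands in here for the actual data set): with method = "class", the rel error and xerror columns of printcp() are misclassification rates scaled so that the root node equals 1, so multiplying xerror by the printed root node error gives an absolute cross-validated misclassification rate, and true/false positives can be read off a confusion matrix built with predict().

library(rpart)

## Illustrative fit on the kyphosis data (binary outcome), standing in
## for the poster's own data set
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             method = "class", cp = 0.01)

## The CP table: "Root node error" is the misclassification rate with
## no splits; rel error and xerror are scaled relative to it
printcp(fit)

## Rescale the cross-validated error to an absolute misclassification rate
cp_tab   <- fit$cptable
root_err <- fit$frame$dev[1] / fit$frame$n[1]   # root node error, as printed by printcp
cbind(cp_tab, abs.xerror = cp_tab[, "xerror"] * root_err)

## Confusion matrix on the training data: true and false positives can be
## read off directly (a held-out set gives a more honest estimate)
pred <- predict(fit, type = "class")
table(predicted = pred, observed = kyphosis$Kyphosis)

On this scale, an xerror that cannot drop below 0.91 means the cross-validated misclassification rate is only about 9% lower than that of always predicting the majority class, whereas an xerror of 0.05 would correspond to a 95% reduction relative to the root node.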