Displaying 20 results from an estimated 100000 matches similar to: "rpart, resolution"
2011 Jul 29
1
help with predict.rpart
> data <- read.table("http://statcourse.com/research/boston.csv",
                     sep = ",", header = TRUE)
> library(rpart)
> fit <- rpart(MV ~ CRIM + ZN + INDUS + CHAS + NOX + RM + AGE + DIS + RAD +
               TAX + PT + B + LSTAT, data = data)
> predict(fit, data[4, ])
plot only reveals part of the tree, in contrast to the results one obtains
with CART or C5
-------- Original Message --------
Subject: Re: [R] help with rpart
From: Sarah
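A common reason the base plot appears to show only part of the tree is that the node labels get clipped at the edge of the device; a minimal sketch, using the fit above, that leaves room for the labels:
plot(fit, uniform = TRUE, margin = 0.1)   # even vertical spacing, margin so labels fit
text(fit, use.n = TRUE, cex = 0.8)        # smaller labels, with node counts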
2011 Jul 28
2
help with rpart
1. How can I plot the entire tree produced by rpart?
2. How can I submit a vector of values to a tree produced by rpart and
have
it make an assignment?
Mark
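For question 2, a minimal sketch: wrap the new values in a one-row data.frame whose column names match the predictors of the Boston fit above (the numbers below are made up purely for illustration); for a classification tree, add type = "class" to get the assigned class.
newobs <- data.frame(CRIM = 0.03, ZN = 0, INDUS = 7.1, CHAS = 0, NOX = 0.47,
                     RM = 6.4, AGE = 78, DIS = 4.1, RAD = 5, TAX = 300,
                     PT = 18, B = 390, LSTAT = 9.7)
predict(fit, newdata = newobs)   # fitted value of MV for this vector of values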
2011 Jul 29
3
help with plot.rpart
> data <- read.table("http://statcourse.com/research/boston.csv",
                     sep = ",", header = TRUE)
> library(rpart)
> fit <- rpart(MV ~ CRIM + ZN + INDUS + CHAS + NOX + RM + AGE + DIS + RAD +
               TAX + PT + B + LSTAT, data = data)
Please: Show me the tree.
Mark
-------- Original Message --------
Subject: Re: [R] help with rpart
From: "Stephen Milborrow" <[1]milbo at sonic.net>
2008 Dec 17
1
pruning trees using rpart
Hi,
I am using the packages tree and rpart to build a classification tree to
predict a 0/1 outcome. The package rpart has the advantage that the function
plotcp gives a visual representation of the cross-validation results with a
horizontal line indicating the 1 standard error rule, i.e. the
recommendation to select the most parsimonious model (the smallest tree)
whose error is not more than one
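A minimal sketch of that 1-SE selection, reading the threshold off the cptable rather than eyeballing the plot (assumes a fitted rpart object named fit):
plotcp(fit)                                   # CV error vs. cp, with the 1-SE line
tab    <- fit$cptable
best   <- which.min(tab[, "xerror"])
thresh <- tab[best, "xerror"] + tab[best, "xstd"]           # 1-SE threshold
cp1se  <- tab[min(which(tab[, "xerror"] <= thresh)), "CP"]  # smallest tree under it
pruned <- prune(fit, cp = cp1se)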
2009 May 22
1
bug in rpart?
Greetings,
I checked the Indian diabetes data again and got one tree for the data with
reordered columns and another tree for the original data. I compared these
two trees: the split points for the two trees are exactly the same, but the
fitted classes are not the same for some cases, and the misclassification
errors are different too. I know how CART deals with ties --- even when we are
using the
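A sketch of how one might reproduce the comparison, assuming the data meant is PimaIndiansDiabetes from package mlbench:
library(rpart)
library(mlbench)
data(PimaIndiansDiabetes)
d1 <- PimaIndiansDiabetes
d2 <- d1[, c("mass", "glucose", setdiff(names(d1), c("glucose", "mass")))]  # reordered columns
fit1 <- rpart(diabetes ~ ., data = d1, method = "class")
fit2 <- rpart(diabetes ~ ., data = d2, method = "class")
p1 <- predict(fit1, d1, type = "class")
p2 <- predict(fit2, d1, type = "class")
sum(p1 != p2)   # number of cases the two trees assign to different classes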
2010 Dec 30
2
unexpected input in rpart
Hi all, I'm a newbie using R. I need to do a classification tree using the
rpart package. Basically I have a set of birds of known sex and several
morphological measurements, and we want to predict the sex from the
morphology. I read my csv file and it shows up in R with no problem and looks
fine, but when I execute the following rpart command
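A minimal sketch of the kind of call described; the file name and the morphological column names (wing, tarsus, mass) are hypothetical:
library(rpart)
birds <- read.csv("birds.csv", stringsAsFactors = TRUE)   # sex must end up as a factor
fit <- rpart(sex ~ wing + tarsus + mass, data = birds, method = "class")
predict(fit, newdata = birds, type = "class")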
2008 Feb 26
1
predict.rpart question
Dear All,
I have a question regarding predict.rpart. I use
rpart to build classification and regression trees and I deal with data with
relatively large number of input variables (predictors). For example, I build an
rpart model like this
rpartModel <- rpart(Y ~ X, method = "class",
                    minsplit = 1, minbucket = nMinBucket, cp = nCp)
and get predictors used in building the model like
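A sketch of pulling the used predictors out of a fitted tree: the split variables sit in the frame component, with "<leaf>" marking terminal nodes.
usedVars <- setdiff(unique(as.character(rpartModel$frame$var)), "<leaf>")
usedVars   # predictors that appear in at least one split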
2004 Mar 19
2
How to collect trees grown by rpart
Jonathan,
Try making a list instead of an array. See ?list. Also, did you look into
random forests? I'm not sure what you want to do, but there might be
methods there to do some of the work for you.
Sean
On 3/19/04 1:12 PM, "Jonathan Williams"
<jonathan.williams at pharmacology.oxford.ac.uk> wrote:
> I would like to collect the trees grown by rpart fits in an array,
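A minimal sketch of the list approach, with a hypothetical data frame d and response y:
library(rpart)
fits <- lapply(1:10, function(i) {
  idx <- sample(nrow(d), replace = TRUE)            # e.g. a bootstrap sample
  rpart(y ~ ., data = d[idx, ], method = "class")
})
fits[[1]]       # the first tree
length(fits)    # how many trees were collected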
2001 Jul 26
0
tree and rpart
There have been various messages about packages tree and rpart whilst
I have been travelling, and I have now prepared updates.
tree
====
Tree is one of the oldest packages on CRAN (unchanged since Feb 2000 apart
from adding the maintainer field), and I had been hoping that it would fade away
in favour of rpart.
1) sys.parent needed to be replaced by parent.frame in all but the
most recent R (post 1.3.0).
2009 May 12
1
questions on rpart (tree changes when rearrange the order of covariates?!)
Greetings,
I am using rpart for classification with the "class" method. The test data is
the Indian diabetes data from package mlbench.
I fitted a classification tree first using the original data, and then
exchanged the order of Body mass and Plasma glucose, which are the
strongest/most important variables in the growing phase. The second tree is a
little different from the first one. The
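A sketch of how such ties can be seen directly, assuming a fitted classification tree fit: summary() lists the competing splits at each node with their improvement, so two variables with (almost) identical improvement are order-dependent choices.
summary(fit)              # "Primary splits:" shows competitors and their improvement
fit$splits[, "improve"]   # improvement of every recorded split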
2005 Oct 18
1
Memory problems with large dataset in rpart
Dear helpers,
I am a Dutch student from the Erasmus University. For my Bachelor thesis I
have written a script in R using boosting by means of classification and
regression trees. This script uses the predefined function rpart. My input
file consists of about 4000 vectors, each having 2210
dimensions. In the third iteration R complains of a lack of memory,
although in each iteration
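A hedged sketch of keeping such a boosting loop lean: store only what the next round needs (predictions and weights) and release each fitted tree; X, y (coded 0/1) and n_iter are placeholders.
n <- nrow(X)
w <- rep(1 / n, n)
pred <- matrix(NA_real_, nrow = n, ncol = n_iter)
for (i in seq_len(n_iter)) {
  fit <- rpart(y ~ ., data = data.frame(X, y = factor(y)), weights = w,
               method = "class",
               control = rpart.control(maxdepth = 2, xval = 0))  # no CV inside the loop
  pred[, i] <- as.numeric(predict(fit, type = "class")) - 1      # back to 0/1
  # ... update w here according to the boosting scheme ...
  rm(fit); gc()   # drop the tree before the next iteration
}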
2012 Aug 01
1
rpart package: why does predict.rpart require values for "unused" predictors?
After fitting and pruning an rpart model, it is often the case that one or
more of the original predictors is not used by any of the splits of the
final tree. It seems logical, therefore, that values for these "unused"
predictors would not be needed for prediction. But when predict() is called
on such models, all predictors seem to be required. Why is that, and can it
be easily
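A hedged workaround sketch (pruned and newdata are placeholders): pad newdata with placeholder columns for the predictors the pruned tree never splits on; since no split references them, the NAs should pass straight through (predict.rpart's default na.action is na.pass, but verify on your version).
splitVars <- setdiff(unique(as.character(pruned$frame$var)), "<leaf>")
allVars   <- attr(pruned$terms, "term.labels")
pad       <- setdiff(allVars, splitVars)      # in the formula but used by no split
if (length(pad)) newdata[pad] <- NA           # placeholder columns for those predictors
predict(pruned, newdata = newdata)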
2009 May 08
1
Get (feature, threshold) from Output of rpart() for Stump Tree
Hi,
I have a question regarding how to get some partial information
from the output of rpart, which could be used as the first argument to
predict. For example, in my code, I try to learn a stump tree (decision
tree of depth 2):
"fit <- rpart(y~bx, weights = w/mean(w), control = cntrl)
print(fit)
btest[1,] <- predict(fit, newdata = data.frame(bx)) "
I found
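A sketch of reading the (feature, threshold) pair straight off the fitted stump: the split variable is in the frame component and the cutpoint in the splits matrix (for a categorical split, "index" points into fit$csplit rather than being a numeric threshold).
splitVar <- as.character(fit$frame$var[1])   # variable chosen at the root
splitVal <- fit$splits[1, "index"]           # cutpoint, for a continuous variable
splitVar
splitVal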
2011 Dec 31
1
Cross-validation error with tune and with rpart
Hello list,
I'm trying to generate classifiers for a certain task using several
methods, one of them being decision trees. My doubts arise when I want to
estimate the cross-validation error of the generated tree:
tree <- rpart(y ~ ., data = data.frame(xsel, y), cp = 0.00001)
ptree <- prune(tree,
               cp = tree$cptable[which.min(tree$cptable[, "xerror"]), "CP"])
ptree$cptable
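A hedged note on reading the result: xerror in the cptable is scaled relative to the root-node error, so for a classification tree with default priors the cross-validated misclassification rate is roughly xerror times the root-node error (and it varies a little from run to run because the CV folds are random).
rootErr <- 1 - max(table(y)) / length(y)    # error of always predicting the majority class
cvErr   <- ptree$cptable[nrow(ptree$cptable), "xerror"] * rootErr
cvErr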
2002 Nov 06
1
predict.rpart and large datasets
Hi folks,
I'm getting the following error message when using predict.rpart on a
large dataset (say 100,000 records) within Splus:
Terminating S Session: Signal: bad address signal
Has anyone run into this error, discovered a work-around, etc.?
More generally, I'm trying to use predict.rpart to generate predictions
from very large, geographic datasets (we're talking all of
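A hedged sketch of a work-around that also applies in R: score the big table in blocks so the full model frame is never built at once (fit and bigdata are placeholders).
blockSize <- 10000
blocks <- split(seq_len(nrow(bigdata)),
                ceiling(seq_len(nrow(bigdata)) / blockSize))
preds <- unlist(lapply(blocks, function(i) predict(fit, newdata = bigdata[i, ])))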
2005 Aug 04
1
Puzzled at rpart prediction
I'm in a situation where I say:
> predict(m.rpart, newdata = D[N1+t, ])
      0   1
173 0.8 0.2
which I interpret as meaning: an 80% chance of "0" and a 20% chance of
"1". Okay. This is consistent with:
> predict(m.rpart, newdata=D[N1+t,], type="class")
[1] 0
Levels: 0 1
But I'm puzzled at the following. If I say:
> predict(m.rpart,
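A brief sketch of how the two prediction types relate: type = "prob" returns the class probabilities estimated for the leaf the observation falls into, and type = "class" simply reports the class with the largest probability, which is why the two calls agree.
p <- predict(m.rpart, newdata = D[N1 + t, ], type = "prob")
colnames(p)[max.col(p)]                                  # most probable class, here "0"
predict(m.rpart, newdata = D[N1 + t, ], type = "class")  # the same assignment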
2007 Feb 15
2
Does rpart package have some requirements on the original data set?
Hi,
I am currently studying Decision Trees by using rpart package in R. I
artificially created a data set which includes the dependent variable
(y) and a few independent variables (x1, x2, ...). The dependent variable
y only comprises 0 and 1. 90% of y are 1 and 10% of y are 0. When I
apply rpart to it, there is no splitting at all.
I am wondering whether this is because of the
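A hedged sketch of one common remedy: with 90% of y in one class, the default priors and cp can make the root node the best tree by misclassification error, so no split survives; equal priors (or a loss matrix) make splits on the minority class worthwhile. Here d stands for the artificial data set described above.
fit <- rpart(y ~ ., data = d, method = "class",
             parms = list(prior = c(0.5, 0.5)))
# alternatively, loosen the stopping rules
fit2 <- rpart(y ~ ., data = d, method = "class",
              control = rpart.control(cp = 0.001, minsplit = 5))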
2004 May 13
2
R 1.9.0 and pred.rpart
I have just upgraded from R 1.7.3 to R 1.9.0 and have found that the
predict function no longer works for rpart:
> predict(hmmm,sim3[1:10,])
Error in predict.rpart(hmmm, sim3[1:10, ]) :
couldn't find function "pred.rpart"
I have re-installed the rpart package to no avail. Any ideas?
Giles Hooker
2006 Apr 07
1
rpart.predict error--subscript out of bounds
Hi,
I am using rpart to do leave-one-out cross-validation, but ran into a problem.
Data is a data frame: the first column is the subject id, the second column is the group id, and the remaining columns are numerical variables.
> Data[1:5,1:10]
sub.id group.id X3262.345 X3277.402 X3369.036 X3439.895 X3886.935 X3939.054 X3953.777 X3970.352
1 32613 HAM_TSP 417.7082 430.4895 619.4776 720.8246
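A hedged sketch of a leave-one-out loop for the layout described above (column 1 = subject id, column 2 = group, the rest numeric; group.id should be a factor):
n <- nrow(Data)
pred <- character(n)
for (i in seq_len(n)) {
  fit <- rpart(group.id ~ ., data = Data[-i, -1], method = "class")   # drop sub.id
  pred[i] <- as.character(predict(fit, newdata = Data[i, -1, drop = FALSE],
                                  type = "class"))
}
mean(pred != Data$group.id)   # leave-one-out error rate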
2010 Nov 18
1
predict() an rpart() model: how to ignore missing levels in a factor
I am using an algorithm to split my data set into two random sections
repeatedly, construct a model using rpart() on one, test on the other, and
average out the results.
One of my variables is a factor (crop) where each crop type has a code. Some
crop types occur infrequently or singly. When the data set is randomly
split, it may be that the first data set has a crop type which is not
present in
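A hedged sketch of the usual fix (train, test, and fit are placeholders): force the test-set factor to carry the training levels, so predict() is not confronted with a level it never saw; crop values absent from the training split become NA and can be dropped or handled beforehand.
test$crop <- factor(test$crop, levels = levels(train$crop))
pred <- predict(fit, newdata = test, type = "class")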