Displaying 4 results from an estimated 4 matches for "kfold".
2010 Dec 09 (1 reply): Bivariate kernel density bandwidth selection
...t for very small sample sizes (up to a few hundred)
and my sample sizes are quite large (up to a few thousand). I've reviewed
help files, vignettes, previous postings on this list, and the JSS paper
describing ks and haven't found much mention of constraints on sample size
other than using kfold cross-validation to speed calculation: unfortunately,
that option is listed but not enabled for Hscv.
An example illustrates my problem. Each of the following expressions
returns the time elapsed to estimate a bandwidth matrix. The first is for a
sample of 100 x and y coordinates, the second is f...
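A minimal sketch of that kind of timing comparison, using simulated coordinates rather than the poster's data; only ks::Hscv() and base system.time() are assumed:

library(ks)
set.seed(1)
for (n in c(100, 500, 1000)) {
  xy <- cbind(rnorm(n), rnorm(n))   # n simulated (x, y) coordinates
  cat("n =", n, "\n")
  print(system.time(Hscv(xy)))      # elapsed time to estimate the SCV bandwidth matrix
}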
2011 May 05 (1 reply): [caret package] [trainControl] supplying predefined partitions to train with cross validation
Hi all,
I run R 2.11.1 under Ubuntu 10.10 with caret version 2.88.
I use the caret package to compare different models on a dataset. In
order to compare their performances I would like to use the
same data partitions for every model. I understand that, using an LGOCV
or a boot-type resampling method along with the "index" argument of
the trainControl function, one is able to
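A hedged sketch of one way to reuse the same partitions across models with caret's documented API (createFolds(), trainControl(index = ...), train()); the iris data and the rpart/lda models here are purely illustrative:

library(caret)
set.seed(42)
# training-row indices for 10 folds, reused by every call to train()
folds <- createFolds(iris$Species, k = 10, returnTrain = TRUE)
ctrl  <- trainControl(method = "cv", index = folds)

fit_rpart <- train(Species ~ ., data = iris, method = "rpart", trControl = ctrl)
fit_lda   <- train(Species ~ ., data = iris, method = "lda",   trControl = ctrl)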
2009 Aug 02 (0 replies): rpart: which is correct?
I am using rpart in classification mode and am confused about this
particular model's predictions.
> predict(fit, train[8,])
-1 1
8 0.5974089 0.4025911
> predict(fit, train[8,], type="class")
1
Levels: -1 1
So, it seems like there is a 60% chance of being class -1 according to
the "prob" output (which is the default for classification) but gives
2009 Jun 19 (1 reply): cut with floating point, a bug?
With floating-point numbers I'm seeing 'cut' put values in the wrong
bands. The example below places 0.3 in (0.3,0.6], i.e. as if 0.3 > 0.3.
> x = 1:5*.1
> x
[1] 0.1 0.2 0.3 0.4 0.5
> cut(x, br=c(0,.3,.6))
[1] (0,0.3] (0,0.3] (0.3,0.6] (0.3,0.6] (0.3,0.6]
Levels: (0,0.3] (0.3,0.6]
This is probably the same issue documented in the FAQ (7.31 Why
doesn't R
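A short illustration of the underlying floating-point issue referenced there (FAQ 7.31): 1:5 * 0.1 does not produce a value exactly equal to the literal 0.3, so cut() places it above the 0.3 break. A sketch:

x <- 1:5 * 0.1
x[3] == 0.3             # FALSE: the computed value is slightly greater than the literal 0.3
sprintf("%.20f", x[3])  # the computed 0.3 sits a hair above 0.3
sprintf("%.20f", 0.3)   # the literal 0.3 is stored as a value a hair below 0.3
cut(x, breaks = c(0, 0.3, 0.6))  # hence x[3] lands in (0.3,0.6]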