search for: ntrain

Displaying 7 results from an estimated 7 matches for "ntrain".

2007 May 10
3
how to control the sampling to make each sample unique
I have a dataset of 10000 records which I want to use to compare two prediction models. I split the records into a test dataset (size = ntest) and a training dataset (size = ntrain). Then I run the two models. Now I want to shuffle the data and rerun the models, and I want many shuffles. I know that the command sample(1:10000, ntrain) can pick ntrain numbers from 1 to 10000, and I then use these rows as the training dataset. But how can I make sure each run of...
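A minimal sketch of one way to handle this (not from the thread; the run count and split sizes are illustrative): seed each run separately so every shuffle is reproducible and distinct, then check that no two training index sets coincide.

n <- 10000
ntrain <- 7000
n_runs <- 20

splits <- lapply(seq_len(n_runs), function(run) {
  set.seed(run)                     # one seed per run: reproducible, distinct shuffles
  train_idx <- sample(n, ntrain)    # draw ntrain row indices without replacement
  list(train = train_idx, test = setdiff(seq_len(n), train_idx))
})

# sanity check: no two training sets are identical
stopifnot(!any(duplicated(lapply(splits, function(s) sort(s$train)))))

Within a single split the indices are already unique because sample() draws without replacement; the duplicated() check only guards against two runs producing the same split.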
2008 Oct 15
1
Forecasting using ARIMAX
...1998,245.490 Feb 1998,670,Feb 1998,421.25,Feb 1998,288.170 Mar 1998,642.5,Mar 1998,395,Mar 1998,254.950 Apr 1998,610,Apr 1998,377.5,Apr 1998,230.640 : > (nrowDepVar <- nrow(depVar)) [1] 545 > (nTest <- nInstance + nHorizon - 1) #number of latest points reserved for testing [1] 13 > (nTrain <- nrowDepVar - nTest) [1] 532 First I use auto.arima to find the best (p,d,q). > modArima <- auto.arima(depVar[1:nTrain,], trace=TRUE) ARIMA(2,1,2) with drift : 4402.637 ARIMA(0,1,0) with drift : 4523.553 ARIMA(1,1,0) with drift : 4410.036 ARIMA(0,1,1) with...
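For context, a minimal ARIMAX sketch with the forecast package (depVar is assumed here to be the response series and regs a matrix of external regressors; the names are illustrative, since the thread's data is not fully shown): auto.arima() takes the regressors through xreg, and forecasting then needs their future values.

library(forecast)

nTest  <- 13
nTrain <- length(depVar) - nTest

# fit ARIMA with external regressors (ARIMAX) on the training window
fit <- auto.arima(depVar[1:nTrain], xreg = regs[1:nTrain, , drop = FALSE], trace = TRUE)

# forecast the held-out horizon; future regressor values must be supplied
fc <- forecast(fit, h = nTest, xreg = regs[(nTrain + 1):(nTrain + nTest), , drop = FALSE])
accuracy(fc, depVar[(nTrain + 1):(nTrain + nTest)])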
2005 Jan 18
1
Interpretation of randomForest results
...e training set? The results you showed above are out-of-bag (OOB) results. If you don't know what that means, you should read the documentation, and perhaps the references. > But when I run the command below to test the classification > performance on the same training set: > > ntrain <- read.table("train10.dat", header = T) > ntrain.pred <- predict(oz.rf, ntrain) > table(observed = ntrain[, "LESION"], predicted = ntrain.pred) > > I got the following results. It seemed that the > classification rates for 'lesion' and 'noninf...
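A minimal sketch of the distinction being made (the file and column names follow the post but are otherwise illustrative): predict() on a randomForest object with no newdata returns out-of-bag predictions, while passing the training data back in returns fitted values.

library(randomForest)

train <- read.table("train10.dat", header = TRUE)   # assumes LESION is a factor column
rf <- randomForest(LESION ~ ., data = train)

# omitting newdata gives out-of-bag predictions: an honest error estimate
table(observed = train$LESION, predicted = predict(rf))

# predicting on the training data itself gives fitted values, which look
# optimistic because every tree has already seen most of these rows
table(observed = train$LESION, predicted = predict(rf, newdata = train))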
2014 Jul 02
0
How do I call a C++ function (for k-means) within R?
...interaction.depth=as.integer(interaction.depth), n.minobsinnode=as.integer(n.minobsinnode), n.classes = as.integer(nClass), shrinkage=as.double(shrinkage), bag.fraction=as.double(bag.fraction), nTrain=as.integer(nTrain), fit.old=as.double(NA), n.cat.splits.old=as.integer(0), n.trees.old=as.integer(0), verbose=as.integer(verbose), PACKAGE = "gbm") names(gbm.obj) <- c("initF"...
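On the subject line itself rather than gbm's internal .Call() interface, a minimal sketch of calling C++ from R with the Rcpp package (the toy function below is illustrative and not a k-means implementation):

library(Rcpp)

cppFunction('
NumericVector colMeansCpp(NumericMatrix x) {
  // toy C++ helper standing in for a k-means routine
  int nc = x.ncol();
  NumericVector out(nc);
  for (int j = 0; j < nc; ++j) out[j] = mean(x(_, j));
  return out;
}
')

colMeansCpp(matrix(rnorm(20), nrow = 5))

cppFunction() compiles the C++ source and exposes it as an ordinary R function; for larger code, sourceCpp() on a standalone .cpp file is the usual route.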
2013 Mar 24
3
Parallelizing GBM
...it for speed reasons, and I usually call it this way: gbm_model <- gbm.fit(trainRF, prices_train, offset = NULL, misc = NULL, distribution = "multinomial", w = NULL, var.monotone = NULL, n.trees = 50, interaction.depth = 5, n.minobsinnode = 10, shrinkage = 0.001, bag.fraction = 0.5, nTrain = (n_train/2), keep.data = FALSE, verbose = TRUE, var.names = NULL, response.name = NULL) Does anybody know an easy way to parallelize the model (in this case that simply means having 4 cores on the same machine working on the problem)? Any suggestion is welcome. Cheers, Lorenzo
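Boosting is sequential, so a single gbm.fit() call is hard to split across cores; what parallelizes naturally is running several fits at once, for example over a tuning grid. A minimal sketch with the parallel package (trainRF and prices_train as in the post; the shrinkage grid and core count are illustrative, and mc.cores > 1 requires a Unix-alike):

library(parallel)
library(gbm)

shrinkage_grid <- c(0.001, 0.005, 0.01, 0.05)

fits <- mclapply(shrinkage_grid, function(s) {
  gbm.fit(trainRF, prices_train,
          distribution = "multinomial",
          n.trees = 50, interaction.depth = 5, n.minobsinnode = 10,
          shrinkage = s, bag.fraction = 0.5,
          nTrain = floor(nrow(trainRF) / 2),
          keep.data = FALSE, verbose = FALSE)
}, mc.cores = 4)

Recent versions of the higher-level gbm() interface also accept an n.cores argument that spreads cross-validation folds across cores, which covers the common case where cross-validation, not a single fit, is the slow part.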
2013 Jun 23
1
Which is the final model for a Boosted Regression Trees (GBM)?
...[5] "oobag.improve" "trees" [7] "c.splits" "bag.fraction" [9] "distribution" "interaction.depth" [11] "n.minobsinnode" "n.trees" [13] "nTrain" "response.name" [15] "shrinkage" "train.fraction" [17] "var.levels" "var.monotone" [19] "var.names" "var.type" [21] "verbose"...
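A minimal sketch of how such a fitted object is typically used (gbm_model and newdata are illustrative names): gbm does not store a separate "final model" component; the fitted object together with a chosen number of trees is the model.

library(gbm)

best_iter <- gbm.perf(gbm_model, method = "OOB")   # or method = "cv" / "test"

# predictions from the "final" model = fitted object + chosen tree count
preds <- predict(gbm_model, newdata = newdata, n.trees = best_iter)

# relative influence of the predictors at that tree count
summary(gbm_model, n.trees = best_iter)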
2008 Sep 16
0
Warning messages after auto.arima
...ng through the values at the console screen. For those models with drift, how can I find out what the drift terms are? For example, eyeballing through the trace, my 2nd best model is ARIMA(1,1,1) with drift, but it didn't state what the drift is. Many thanks. > modelarima <- auto.arima(Price[1:nTrain], trace=TRUE) ARIMA(2,1,2) with drift : 4417.541 ARIMA(0,1,0) with drift : 4538.817 ARIMA(1,1,0) with drift : 4424.534 ARIMA(0,1,1) with drift : 4457.507 ARIMA(1,1,2) with drift : 4416.070 ARIMA(1,1,1) with drift : 4414.328...
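A minimal sketch (Price and nTrain as in the post): the drift of the selected model is reported as one of its coefficients, and any particular candidate from the trace, such as ARIMA(1,1,1) with drift, can be refit directly to inspect it.

library(forecast)

fit <- auto.arima(Price[1:nTrain], trace = TRUE)
coef(fit)       # includes a "drift" entry when the chosen model has drift
summary(fit)    # coefficients with standard errors, plus fit statistics

# refit a specific candidate from the trace to see its drift explicitly
fit111 <- Arima(Price[1:nTrain], order = c(1, 1, 1), include.drift = TRUE)
coef(fit111)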