Displaying 6 results from an estimated 6 matches for "x_test".
2017 Jun 11
1
Memory leak in nleqslv()
...<- function(x){
    rows_x <- length(x)/2
    x_1 <- x[1:rows_x]
    x_2 <- x[(rows_x+1):(rows_x*2)]
    eq1 <- x_1 - 100
    eq2 <- x_2*10 - 40
    return(c(eq1, eq2))
}

model_test <- function()
{
    reserves <- (c(0:200)/200)^2 * 2000
    lambda <- numeric(NROW(reserves)) + 5
    res_ext <- pmin((reserves*.5), 5)
    x_test <- c(res_ext, lambda)
    #print(x_test)
    for (test_iter in c(1:1000))
        nleqslv(x_test, cons_ext_test, jacobian=NULL)
    i <- sort(sapply(ls(), function(x) object.size(get(x))))
    print(i[(NROW(i)-5):NROW(i)])
}

model_test()
When I run this over 1000 iterations, memory use ramps up to over 2.4 GB
Whil...
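For anyone trying to reproduce a report like this, a minimal sketch of measuring memory growth around repeated nleqslv() calls with gc(). The two-equation system here is a simplified stand-in with the same shape as eq1/eq2 above, and the loop count is illustrative, not from the original post:

```r
# assumes the nleqslv package is installed
library(nleqslv)

# simplified system of the same form as eq1/eq2 above (illustrative)
f <- function(x) c(x[1] - 100, x[2]*10 - 40)

mem_used <- function() sum(gc()[, 2])   # total Mb in use after a collection

before <- mem_used()
for (i in 1:100) invisible(nleqslv(c(1, 1), f))
after <- mem_used()
after - before   # a difference that keeps growing across runs suggests a leak
```

Comparing the before/after totals over several runs separates genuine leaks from R simply deferring garbage collection.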
2023 May 09
1
RandomForest tuning the parameters
Hi Sacha,
On second thought, perhaps this is more the direction that you want ...
X2 <- cbind(X_train, y_train)
colnames(X2)[3] <- "y"
regr2 <- randomForest(y ~ x1 + x2, data = X2, maxnodes = 10, ntree = 10)
regr
regr2
#Make prediction
predictions <- predict(regr, X_test)
predictions2 <- predict(regr2, X_test)
HTH,
Eric
On Tue, May 9, 2023 at 6:40 AM Eric Berger <ericjberger at gmail.com> wrote:
> Hi,
> One problem you have is with the command:
> regr<-randomForest(y~x1+x2, data=X_train, proximity=TRUE)
>
> What you need is something like...
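A self-contained sketch of the pattern Eric's reply builds toward: bind the response into the same data frame before using the formula interface, so `y` is resolved against the training rows rather than a mismatched global vector. The synthetic data and variable names here are illustrative, not from the thread:

```r
# assumes the randomForest package is installed; data is synthetic
library(randomForest)
set.seed(1)
x1 <- rnorm(60); x2 <- runif(60)
y  <- x1 + x2 + rnorm(60, sd = 0.1)

idx <- sample(60, 45)
X_train <- data.frame(x1, x2)[idx, ];  y_train <- y[idx]
X_test  <- data.frame(x1, x2)[-idx, ]

train_df <- cbind(X_train, y = y_train)   # response lives in the same frame
regr2 <- randomForest(y ~ x1 + x2, data = train_df,
                      maxnodes = 10, ntree = 10)
predictions2 <- predict(regr2, X_test)
length(predictions2)   # one prediction per test row
```

With the response in `train_df`, the formula never falls back to the full-length `y` in the calling environment.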
2023 May 08
1
RandomForest tuning the parameters
...1,1,0,1,1,0,0,1,0,0,0,0,0,1,1,1,1,1,0,0,0,1,0,0,1,0,0,0,1,1,0,1,0,0,0,1,1,1,1,0,1,0,1,0,0,1,1,0,0,1,0,0,1,1)

y = as.numeric(y)
x1 = as.numeric(x1)
x2 = as.factor(x2)

X = data.frame(x1, x2)
y = y

#Split data into training and test sets
index = createDataPartition(y, p=0.75, list=FALSE)
X_train = X[index, ]
X_test = X[-index, ]
y_train = y[index]
y_test = y[-index]

#Train the model
regr = randomForest(x=X_train, y=y_train, maxnodes=10, ntree=10)
regr <- randomForest(y~x1+x2, data=X_train, proximity=TRUE)
regr

#Make prediction
predictions = predict(regr, X_test)

result = X_test
result['y'] = y_te...
2009 Mar 23
0
Scaled MPSE as a test for regressors?
...i,
This is really more a stats question than an R one, but....
Does anyone have familiarity with using the mean prediction
squared error, scaled by the variance of the response, as a 'scale-free'
criterion for evaluating different regression algorithms?
E.g.
Generate X_train, Y_train, X_test, Y_test from true f. X_test/Y_test
are generated without noise, maybe?
Use X_train, Y_train and the algorithm to make \hat{f}
Look at var(Y_test - \hat{f}(X_test))/var(Y_test)
(Some of these var maybe should be replaced with mean squared values instead.)
It seems sort of reasonable to me. You...
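The criterion described above can be sketched in a few lines of base R. Here `lm()` stands in for an arbitrary regression algorithm, and the true `f` is invented for the sketch; both are assumptions, not from the original post:

```r
# scaled MPSE: var(Y_test - fhat(X_test)) / var(Y_test)
set.seed(1)
f <- function(x) 2*x + 1                              # made-up true function
X_train <- runif(100); Y_train <- f(X_train) + rnorm(100, sd = 0.2)
X_test  <- runif(100); Y_test  <- f(X_test)           # noise-free, as suggested

fit   <- lm(Y_train ~ X_train)                        # stand-in for \hat{f}
Y_hat <- predict(fit, newdata = data.frame(X_train = X_test))

scaled_mpse <- var(Y_test - Y_hat) / var(Y_test)
scaled_mpse   # near 0 for a good fit, near 1 for a useless one
```

Because the criterion is dimensionless, it can be compared across responses measured on different scales, which seems to be the motivation in the post.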
2009 Mar 04
0
Error in -class : invalid argument to unary operator
...aive Bayes.
My code is as follows
library (e1071)
wine<- read.csv("C:\\Rproject\\Wine\\wine.csv")
split<-sample(nrow(wine), floor(nrow(wine) * 0.5))
wine_training <- wine[split, ]
wine_testing <- iris[-split, ]
naive_bayes <-naiveBayes(class~.,data=wine_training)
x_testing <- subset(wine_testing, select = -class)
y_testing <- wine_testing$class # just grab Species variable of iris_training
pred <- predict(naive_bayes, x_testing)
tab<-table(pred, y_testing)
ca <- classAgreement(tab)
print(tab)
print(ca)
when I enter this code in I get the erro...
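Two things stand out in the excerpt above: `wine_testing` is built from `iris` rather than `wine` (likely a copy-paste slip), and `subset(..., select = -class)` is the usual trigger for the "invalid argument to unary operator" message. A hedged sketch of the same split/train/predict flow using the built-in iris data as a stand-in for wine.csv, dropping the response column by name instead of with unary minus:

```r
# assumes the e1071 package is installed; iris stands in for wine.csv,
# with Species playing the role of the class column
library(e1071)
set.seed(7)
split <- sample(nrow(iris), floor(nrow(iris) * 0.5))
iris_training <- iris[split, ]
iris_testing  <- iris[-split, ]          # same data frame on both sides

nb <- naiveBayes(Species ~ ., data = iris_training)

x_testing <- iris_testing[, names(iris_testing) != "Species"]  # drop by name
y_testing <- iris_testing$Species

pred <- predict(nb, x_testing)
tab  <- table(pred, y_testing)
classAgreement(tab)$diag   # fraction of matching classifications
```

Selecting columns by name sidesteps the unary-minus parsing problem entirely, and keeping both train and test subsets from the same data frame avoids the iris/wine mismatch.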
2001 Nov 19
3
WineLib Seg Fault?
A question for the WineLib gurus :)
I am using the wine-20011108 build with Mandrake 8.0, and with this
version of wine cleanly compiled and installed I can run several Windows
programs very successfully :).
Then I use winemaker to create a WineLib '.so' file, and the compile and
link again run cleanly.
But when I run the resulting '.so' file with this command line:
$