search for: obninsk

Displaying 5 results from an estimated 5 matches for "obninsk".

2003 Aug 20 · 2 · RandomForest
Hello, when I plot or look at the error rate vector for a random forest (rf$err.rate), it looks like a descending function, except for the first few points of the vector, whose error rate values are lower (sometimes much lower) than the general level of error rates for a forest with that many trees, once the error rates stop descending. Does it mean that there is a tree(s) (that is built first in
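As an aside, a minimal R sketch of inspecting err.rate (the randomForest package and the iris data set are assumptions for illustration, not part of the original post):

    library(randomForest)
    set.seed(1)
    rf <- randomForest(Species ~ ., data = iris, ntree = 500)
    head(rf$err.rate)                 # OOB (and per-class) error after 1, 2, ... trees
    plot(rf$err.rate[, "OOB"], type = "l",
         xlab = "number of trees", ylab = "OOB error rate")

The first rows of err.rate are computed from only a handful of trees, so noisy early values are expected.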
2001 Jun 09 · 0 · Classification Trees
I apologize if you receive multiple copies of this letter. This is the first time I've written to this mailing list, so please be kind :-) Hello everyone! I'm trying to write a programme that grows a classification tree. I use the APL programming language, and I use R to compare and test the results. I have a classification tree and a sequence of cost-complexity parameters (alphas):
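On the R side, a sequence of cost-complexity parameters can be obtained, for example, from rpart's CP table (rpart and the iris data are illustrative assumptions, not the poster's setup):

    library(rpart)
    fit <- rpart(Species ~ ., data = iris, method = "class")
    printcp(fit)                    # cost-complexity parameters (CP), tree sizes, errors
    fit$cptable[, "CP"]             # the sequence of alphas for the nested subtrees
    pruned <- prune(fit, cp = 0.1)  # subtree corresponding to a chosen alpha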
2002 Jan 19 · 0 · ESS configuration
Hi! I faced a few problems configuring ESS. I sent my questions to the ESS mailing list but received no replies, so I'm sending them here in the hope that someone will help me. I've just installed ESS and I'm quite a novice at it. The docs say that setting "ess-insert-function-templates" to a non-nil value will result in the edit buffer containing a skeleton function
2002 May 14 · 2 · quantile() and boxplot.stats()
Hello, I ran into something I can't understand. When I use boxplot.stats(1:10) and quantile(1:10), the results differ at 25% and 75%:

    > boxplot.stats(1:10)
    $stats
    [1]  1.0  3.0  5.5  8.0 10.0
    > quantile(1:10)
      0%   25%   50%   75%  100%
    1.00  3.25  5.50  7.75 10.00

Actually, I expected the value 3 for 25% and 8 for 75% as the results of quantile(1:10). Can you please explain to me
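The usual explanation (a sketch, not taken from the thread itself): boxplot.stats() is built on Tukey's hinges via fivenum(), while quantile() defaults to a different interpolation rule (type = 7 in current R); the type argument reproduces the hinge-like values for this particular vector:

    boxplot.stats(1:10)$stats    # 1.0 3.0 5.5 8.0 10.0  (Tukey's hinges)
    fivenum(1:10)                # the same five numbers
    quantile(1:10)               # default type = 7: 25% = 3.25, 75% = 7.75
    quantile(1:10, type = 2)     # 25% = 3, 75% = 8, matching the hinges here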
2001 Feb 15 · 2 · deviance vs entropy
Hello, the question looks simple. It's probably even stupid. But I spent several hours searching the Internet and downloaded tons of papers where deviance is mentioned, and... I haven't found an answer. Well, the use of entropy when I split some node of a classification tree is clear to me. The sense is clear, because entropy is a good old measure of how uniform a distribution is.
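For a classification-tree node with class counts n_k and proportions p_k, the usual multinomial node deviance (as used, for example, by the tree package) is D = -2 * sum_k n_k * log(p_k), which equals 2n times the node entropy in nats. A small R sketch with hypothetical counts:

    node_counts <- c(30, 10, 10)                  # hypothetical class counts in one node
    p <- node_counts / sum(node_counts)           # class proportions
    entropy  <- -sum(p * log(p))                  # entropy in nats
    deviance <- -2 * sum(node_counts * log(p))    # multinomial node deviance
    all.equal(deviance, 2 * sum(node_counts) * entropy)   # TRUE: deviance = 2 * n * entropy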