I feel like a dope for asking this, but after reading various articles I'm still not quite sure I get it. What is the relationship or similarity between a "probability" prediction from a classification random forest and a Bayesian posterior probability? Can they be considered similar, or would I need to take that probability and project it through a likelihood (I'm thinking, perhaps wrongly, of computing this from the confusion matrix) to get a posterior probability?
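In case code makes my question clearer, here is a minimal sketch of what I mean (assuming the randomForest package and a hypothetical data frame `dat` with a factor outcome `y`; the Bayes-rule step at the end is only my tentative idea, not something I'm claiming is correct):

library(randomForest)

## hypothetical setup: `dat` is a data frame with factor outcome `y`
set.seed(42)
train <- sample(nrow(dat), floor(0.7 * nrow(dat)))
rf <- randomForest(y ~ ., data = dat[train, ])

## the "probability" I'm asking about: the fraction of trees voting
## for each class
p.rf <- predict(rf, newdata = dat[-train, ], type = "prob")

## my (possibly misguided) idea: treat the confusion matrix as an
## estimate of P(predicted class | true class), i.e. a likelihood,
## and combine it with the class prior via Bayes' rule
cm    <- table(true = dat$y[train], pred = predict(rf))  # OOB predictions
lik   <- prop.table(cm, margin = 1)                      # P(pred | true)
prior <- as.numeric(prop.table(table(dat$y[train])))     # P(true)

num  <- lik * prior                                      # P(pred | true) * P(true)
post <- sweep(num, 2, colSums(num), "/")                 # P(true | pred)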
Something about that feels wrong, and not being a real statistician (just a shrink with some applied stat training) I can't quite connect the dots.
Before this, I've been playing with RF as a way to get a feel for what's going on in the data, rather than for making any real-world predictions. I just don't want to over-interpret (or misinterpret) those probabilities.
From Breiman's site, this is probably clear to a real statistician, but someone like me needs to be clobbered over the head with it :-/.
Wisdom Welcome!
Humbly,
DVB