Hi all,
I know this is a general question, not specific to any R package, but even
so I hope someone can give me their opinion on it.
I have a set of 20 candidate models in a binomial GLM. The global model has
52 estimable parameters and the sample size is about 1500 observations.
The global model does not seem to have problems with parameter estimability,
and none of the models have trouble converging.
I have run all the models, and in the end the global model comes out as the
most parsimonious one (lowest QAICc; I set c-hat = 1.15 based on a
goodness-of-fit test on the global model).
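For clarity, this is how I am computing QAICc; a minimal sketch in R with
made-up log-likelihood values (not my actual data), using the standard
formula QAICc = -2*logLik/c.hat + 2*K + 2*K*(K+1)/(n - K - 1), where K is
increased by one to count the estimation of c-hat itself:

```r
## QAICc for a model with quasi-likelihood overdispersion adjustment.
## ll: maximized log-likelihood; K: number of parameters (+1 for c.hat);
## n: sample size; c.hat: variance inflation factor.
qaicc <- function(ll, K, n, c.hat = 1.15) {
  -2 * ll / c.hat + 2 * K + 2 * K * (K + 1) / (n - K - 1)
}

## Hypothetical comparison of a global vs. a reduced model:
qaicc(ll = -700, K = 53, n = 1500)  # global: 52 parameters + 1 for c.hat
qaicc(ll = -760, K = 11, n = 1500)  # reduced: 10 parameters + 1 for c.hat
```

(In practice I get the same kind of ranking from standard model-selection
tools that accept a c-hat argument.)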
This is the first time this has happened to me, and I am somewhat confused.
I believe the data set is not that poor relative to the number of
parameters (so it should not be Freedman's paradox), and the only
explanations that seem logical to me are that the candidate set of models
may not be adequate, or that the best approximation to the natural process
I am trying to analyze is simply the global model.
I am not very experienced with modeling, and I would like to hear opinions
from more skilled people.
Cheers
--
View this message in context:
http://r.789695.n4.nabble.com/Global-model-more-parsimonious-minor-QAICc-tp4213467p4213467.html
Sent from the R help mailing list archive at Nabble.com.