similar to: All possible subset selection?

Displaying 20 results from an estimated 9000 matches similar to: "All possible subset selection?"

2005 May 09
1
question about k in step
> ?step .... 'step' uses 'add1' and 'drop1' repeatedly; it will work for any method for which they work, and that is determined by having a valid method for 'extractAIC'. When the additive constant can be chosen so that AIC is equal to Mallows' Cp, this is done and the tables are labelled appropriately. So my question is: what constant
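A minimal sketch of how the k and scale arguments interact in step()/extractAIC(), using mtcars as stand-in data (the poster's model is not shown); supplying a residual-variance estimate via scale is what triggers the Cp labelling mentioned in the help text:

## k controls the penalty; scale > 0 switches the lm criterion to Mallows' Cp.
fit <- lm(mpg ~ wt + hp + qsec + drat, data = mtcars)

extractAIC(fit, k = 2)                   # default penalty: AIC (up to a constant)
extractAIC(fit, k = log(nrow(mtcars)))   # k = log(n) gives the BIC penalty

sigma2 <- summary(fit)$sigma^2           # an estimate of the error variance
extractAIC(fit, scale = sigma2)          # now computed (and labelled) as Cp
step(fit, scale = sigma2, trace = FALSE) # step() passes scale through the same way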
2011 Feb 23
1
request for patch in "drop1" (add.R)
By changing three lines in drop1 from access based on $ to access based on standard accessor methods (terms() and residuals()), it becomes *much* easier to extend drop1 to work with other model types. The use of $ rather than accessors in this context seems to be an oversight rather than a design decision, but maybe someone knows better ... In particular, if one makes these changes (which I am
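The proposed patch itself is not shown in the snippet; the following illustration only contrasts $-based slot access with the generic accessors the poster suggests, which is why the change makes drop1() easier to extend to other model classes:

## Illustration only (not the actual patch): generic accessors dispatch on the
## class of the fit, so they keep working for model types that do not store
## their pieces under the same $ names as lm.
fit <- lm(mpg ~ wt + hp, data = mtcars)

fit$terms        # direct slot access: assumes a component literally named 'terms'
terms(fit)       # generic accessor: other classes can provide their own method

fit$residuals
residuals(fit)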
2017 Jun 08
1
stepAIC() that can use new extractAIC() function implementing AICc
I would like to test AICc as a criterion for model selection for a glm using stepAIC() from the MASS package. Based on various information available on the Web, stepAIC() uses extractAIC() to get the criterion used for model selection. I have created a new extractAIC() function (and extractAIC.glm() and extractAIC.lm() ones) that use a new parameter, criteria, that can be AIC, BIC or AICc. It works as
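The poster's modified extractAIC() is not reproduced here; as a point of comparison, a standalone AICc helper based on the usual small-sample correction AICc = AIC + 2k(k+1)/(n - k - 1) might look like this (model and data are placeholders):

## Minimal AICc helper; k is the number of estimated parameters reported by logLik().
AICc <- function(fit) {
  ll <- logLik(fit)
  k  <- attr(ll, "df")
  n  <- nobs(fit)
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
}

fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
c(AIC = AIC(fit), AICc = AICc(fit))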
2007 Dec 07
1
AIC v. extractAIC
Hello, I am using a simple linear model and I would like to get an AIC value. I came across both AIC() and extractAIC() and I am not sure which is best to use. I assumed that I should use AIC() for a glm and extractAIC() for an lm, but if I fit my model with glm the AIC value is the same as when I use AIC() on an lm object. What might be going on? Did I interpret these functions incorrectly? Thanks,
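A small hedged demonstration of the distinction (mtcars as stand-in data): AIC() reports the full criterion for either class of fit, and for a gaussian glm it matches AIC() on the equivalent lm; extractAIC.lm() drops additive constants that are the same for every model on the data, so its value differs from AIC() even though model rankings agree.

fit_lm  <- lm(mpg ~ wt + hp, data = mtcars)
fit_glm <- glm(mpg ~ wt + hp, data = mtcars, family = gaussian)

AIC(fit_lm)             # full AIC from the lm fit
AIC(fit_glm)            # same value from the equivalent gaussian glm
extractAIC(fit_lm)      # differs from AIC() by an additive constant
extractAIC(fit_glm)     # second element equals AIC(fit_glm)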
2008 Nov 28
2
AIC function and Step function
I would like to figure out the equations used to calculate "AIC" in the step() function and in the AIC() function. They are different. I then typed "step" in the R console and found that the "AIC" used in step() is "extractAIC". I went to the R help and found: "The criterion used is AIC = - 2*log L + k *
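A quick numerical check of the formula quoted from the help page, AIC = -2*log L + k*edf, against extractAIC() for a glm (where no additive constant is dropped); the binomial model below is only a stand-in:

fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
ll  <- logLik(fit)
edf <- attr(ll, "df")

c(by_formula = -2 * as.numeric(ll) + 2 * edf,
  extractAIC = extractAIC(fit, k = 2)[2],
  AIC        = AIC(fit))
## all three agree; passing k = log(nobs(fit)) to step()/extractAIC() swaps in
## the BIC penalty instead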
2011 Jun 20
1
Stepwise model comparisons for mlogit
I am trying to perform a backwards stepwise variable selection with an mlogit model. The usual functions step(), drop1(), and dropterm() do not work for mlogit models. update() works, but I am only able to use it manually, i.e. I have to type in each variable I wish to remove by hand on a separate line. My goal is to write some code that will systematically remove a certain set of variables
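A rough sketch of one backward-elimination pass built only on update() and AIC(), since the poster reports update() works; the object fit, the candidate names, and the assumption that AIC() and ". ~ . - term" updates behave sensibly for mlogit objects are all untested placeholders here:

## One pass: drop whichever single term most improves AIC, if any does.
candidates <- c("x1", "x2", "x3")   # placeholder variable names

one_pass <- function(fit, candidates) {
  aics <- sapply(candidates, function(v)
    AIC(update(fit, as.formula(paste(". ~ . -", v)))))
  drop_var <- names(which.min(aics))
  if (aics[drop_var] < AIC(fit)) {
    list(fit = update(fit, as.formula(paste(". ~ . -", drop_var))),
         dropped = drop_var)
  } else {
    list(fit = fit, dropped = NA)   # nothing improves the criterion; stop
  }
}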
2006 Aug 06
1
extractAIC using surf.ls
Although the 'spatial' documentation doesn't mention that extractAIC works, it does seem to give an output. I may have misunderstood, but shouldn't the following give at least the same d.f.? > library(spatial) > data(topo, package="MASS") > extractAIC(surf.ls(2, topo)) [1] 46.0000 437.5059 > extractAIC(lm(z ~ x+I(x^2)+y+I(y^2)+x:y, topo)) [1]
2010 Aug 27
1
step
Hi, how can I change the significance level of the F test used to select variables in the step command? I used step(model0, ~x1+x2+x3+x4, direction=c("forward"), test='F', alpha=.05) but it doesn't work. -------------------------------------- Silvano Cesar da Costa Departamento de Estatística Universidade Estadual de Londrina Fone: 3371-4346
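step() has no alpha argument: it always selects on the AIC-type criterion, and test = "F" only adds an F test to the printed table. One manual alternative, sketched below with the poster's object and variable names taken at face value, is to call add1() yourself and apply the p-value cutoff to its output:

## Candidate terms significant at the 5% level in a single forward step.
tab <- add1(model0, scope = ~ x1 + x2 + x3 + x4, test = "F")
tab[which(tab[["Pr(>F)"]] < 0.05), ]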
2008 Apr 29
1
AIC extract and comparison
Hi, I need to fit models and use the AIC to compare them and choose the best-fitting model manually. When I extract the AIC using extractAIC, it gives me the df and the AIC value. Now the problem is, how can I compare the AIC values from two models? Is there any way to extract the AIC without the df so that I can compare directly? Thank you! > extractAIC(coxout) [1] 1.000 1723.038
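extractAIC() returns the vector c(edf, AIC), so the value alone is the second element; coxout is the poster's object and coxout2 below is a hypothetical second model, assuming a logLik() method is available (as it is for coxph fits):

extractAIC(coxout)[2]     # just the criterion, without the df

AIC(coxout)               # single number
AIC(coxout, coxout2)      # side-by-side comparison of several fits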
2010 Feb 10
1
using step() with package geepack
I'm using the package geepack to fit GEE models. Does anyone know of methods for add1 and drop1 for a 'geeglm' model object, or perhaps a method for extractAIC based on the QIC of Pan 2001? I see there has been some mention of this on R-help a few years ago (RSiteSearch("QIC")). The package does provide an anova method for its model objects, and update() seems to work:
2012 Sep 29
1
Problems with stepAIC
Dear help community, I'm an R beginner and use it for my master's thesis. I've got a mixed model and want to analyse it with lme. There are a lot of cofactors that could be relevant. To extract the important ones I want to use stepAIC, but I always get an error. Structure of my data: 'data.frame': 72 obs. of 54 variables: $ Block : Factor w/ 3 levels
2011 May 21
2
unbalanced anova with subsampling (Type III SS)
Hello R-users, I am trying to obtain Type III SS for an ANOVA with subsampling. My design is slightly unbalanced, with either 3 or 4 subsamples per replicate. The basic aov model would be: fit <- aov(y~x+Error(subsample)) But this gives Type I SS and not Type III. But, using drop1(): drop1(fit, test="F") I get an error message: "Error in
2007 Jun 01
1
AIC consistency with S-PLUS
Hello- I understand that log-likelihoods are bound to differ by constants, but if I estimate AIC for a set of simple nested linear models using the following 4 methods, shouldn't at least two of them produce the same ordering of models? In R: extractAIC and AIC; in S-PLUS: AIC and n*log(deviance(mymodel)/n) + 2*p. I find it troubling that these methods all give me different answers as to the best
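A short check of the constant-difference point for the two R methods, with mtcars standing in for the poster's data: for lm fits to the same response, AIC() and extractAIC() differ by an additive constant that does not depend on the model, so they must rank nested models identically.

m1 <- lm(mpg ~ wt,             data = mtcars)
m2 <- lm(mpg ~ wt + hp,        data = mtcars)
m3 <- lm(mpg ~ wt + hp + qsec, data = mtcars)

cbind(AIC     = sapply(list(m1, m2, m3), AIC),
      extract = sapply(list(m1, m2, m3), function(m) extractAIC(m)[2]))
## the two columns differ row by row by the same constant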
2007 Aug 15
1
AIC and logLik for logistic regression in R and S-PLUS
Dear R users, I am using 'R' version 2.2.1 and 'S-PLUS' version 6.0, and I loaded the MASS library in 'S-PLUS'. I am running a logistic regression using glm: --------------------------------------------------------------------------- > mydata.glm<-glm(COMU~MeanPycUpT+MeanPycUpS, family=binomial, data=mydata)
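A hedged cross-check of the relationship between AIC(), logLik() and the deviance for a binomial glm (a stand-in model, not the poster's): AIC(fit) equals -2*logLik(fit) + 2*df, and for ungrouped 0/1 responses it also equals deviance(fit) + 2*df because the saturated log-likelihood is zero; for grouped binomial data that shortcut does not hold, which is one common source of R vs S-PLUS discrepancies.

fit <- glm(am ~ wt + qsec, data = mtcars, family = binomial)
ll  <- logLik(fit)

c(AIC         = AIC(fit),
  from_logLik = -2 * as.numeric(ll) + 2 * attr(ll, "df"),
  from_dev    = deviance(fit) + 2 * attr(ll, "df"))   # equal only for 0/1 data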
2010 Dec 26
1
Calculation of BIC done by leaps-package
Hi Folks, I've got a question concerning the calculation of the Schwarz criterion (BIC) done by summary.regsubsets() in the leaps package: Using regsubsets() to perform subset selection, I receive a regsubsets object that can be summarized by summary.regsubsets(). After this operation the resulting summary contains a vector of BIC values representing models of size i=1,...,K. My problem
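A sketch with mtcars standing in for the poster's data: the bic component of the summary differs from stats::BIC() of a refitted lm by an additive constant that is the same for every model on the same data, so it is suited to ranking the candidate models rather than quoting absolute BIC values.

library(leaps)

sub <- regsubsets(mpg ~ ., data = mtcars, nvmax = 8)
s   <- summary(sub)

s$bic                          # one value per model size, on a relative scale
which.min(s$bic)               # size of the BIC-best model
coef(sub, which.min(s$bic))    # its coefficients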
2007 Mar 13
3
inconsistent behaviour of add1 and drop1 with a weighted linear model
Dear R Help, I have noticed some inconsistent behaviour of add1 and drop1 with a weighted linear model, which affects the interpretation of the results. I have these data to fit with a linear model, and I want to weight them by the relative size of the geographical areas they represent. > example
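The poster's data are truncated in the snippet; one hedged way to pin down what add1()/drop1() should be reporting is to fit the two weighted models explicitly and compare with anova(), since the F test there refers unambiguously to the same pair of fits (variable names below are placeholders):

small <- lm(y ~ x1,      data = dat, weights = w)
big   <- lm(y ~ x1 + x2, data = dat, weights = w)

drop1(big, test = "F")
add1(small, scope = ~ x1 + x2, test = "F")
anova(small, big)    # direct F test between the two weighted fits, as a reference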
2000 Jun 07
1
forward stepwise selection
Dear R-Help, My problem/bug came to light when fitting a linear model using stepwise selection. I'd started with the straightforward command step(lm(y~., dataset)) This worked fine, but because it starts with all the possible explanatory variables, it results in a model with too many explanatory variables. Hence I wanted to start with just a constant and do forward selection, to get a
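The usual recipe for forward selection from a constant-only model is to fit the intercept-only lm and hand step() an explicit upper scope; dataset is the poster's data frame and x1..x3 are placeholder predictor names:

null_fit <- lm(y ~ 1, data = dataset)      # start from the intercept only
step(null_fit,
     scope = ~ x1 + x2 + x3,               # candidate terms to consider adding
     direction = "forward")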
2000 Oct 18
1
AIC in glm()
Hi all, I am trying to understand how the AIC returned by glm() is calculated. I have a model object m1 whose fitting results are: > summary(m1) [...] (Dispersion parameter for gaussian family taken to be 3.735714) Null deviance: 1439.8 on 15 degrees of freedom Residual deviance: 52.3 on 14 degrees of freedom AIC: 70.357 Since there are 2 parameters, I would naively compute: AIC
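For the gaussian family, glm() uses the full normal log-likelihood and counts the estimated dispersion as an extra parameter, so AIC = n*log(2*pi*RSS/n) + n + 2*(p + 1). Plugging in the figures visible in the post (n = 16 from the null df, RSS = 52.3, p = 2 coefficients from the residual df) reproduces the reported value:

n   <- 16      # null deviance on 15 df
rss <- 52.3    # residual deviance
p   <- 2       # coefficients (residual df = 14)
n * log(2 * pi * rss / n) + n + 2 * (p + 1)
## gives about 70.357, matching the AIC printed by summary(m1)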
2010 Sep 22
2
speeding up regressions using ddply
Hi, I have a data set that I'd like to run logistic regressions on, using ddply to speed up the computation of many models with different combinations of variables. I would like to run regressions on every unique two-variable combination in a portion of my data set, but I can't quite figure out how to do this using ddply. The data set looks like this, with "status" as
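One hedged observation: ddply() splits a data frame into row groups, so iterating over variable pairs is arguably more natural with combn() plus lapply(). The sketch below takes dat and the response status from the post at face value; everything else is a placeholder.

vars  <- setdiff(names(dat), "status")
pairs <- combn(vars, 2, simplify = FALSE)      # every unique two-variable combination

fits <- lapply(pairs, function(v)
  glm(reformulate(v, response = "status"), data = dat, family = binomial))
names(fits) <- sapply(pairs, paste, collapse = " + ")

sort(sapply(fits, AIC))                        # rank the two-variable models by AIC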
2009 Oct 28
3
variable labels to accompany data.frame
Often it is useful to keep a "codebook" to document the contents of a dataset. (By "dataset" I mean a rectangular structure such as a dataframe.) The codebook has as many rows as the dataset has columns (variables, fields). The columns (fields) of the codebook may include: • variable name • type (character, factor, integer, etc) • variable label
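A minimal sketch of such a codebook built as an ordinary data frame, one row per variable, with a label column left to be filled in by hand (mtcars only as an example dataset; the column set is an assumption, not a standard):

make_codebook <- function(dat) {
  data.frame(
    variable = names(dat),
    type     = vapply(dat, function(x) class(x)[1], character(1)),
    n_unique = vapply(dat, function(x) length(unique(x)), integer(1)),
    label    = NA_character_,        # to be filled in by hand
    stringsAsFactors = FALSE
  )
}

codebook <- make_codebook(mtcars)
head(codebook)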