On Jun 17, 2009, at 1:45 AM, jagat at cmi.ac.in wrote:
> Hi All,
>
> I am using the "glm" function to build a logistic regression. I
> noticed that the glm function computes many other statistics which
> are not required for our analysis. As our dataset is very big and we
> have to run logistic regression on several samples, the run time
> increases drastically if all those statistics are computed. Is there
> any way to skip that computation in the glm function? I am just a
> beginner in R and hence am not able to modify the glm function.
> Can anybody give me an alternative way to fit a logistic regression
> that computes only the estimates (coefficients) of the variables?
>
> Waiting for your favourable response.
>
> Regards,
> Jagat
If all you need are the coefficients, you may see greater efficiency
by using glm.fit() directly instead of glm(), after pre-constructing
the model design matrix and response vector yourself.
For example, using the 'infert' dataset:

> MM <- model.matrix(~ spontaneous + induced, data = infert)
> coef(glm.fit(MM, infert$case, family = binomial()))
(Intercept) spontaneous     induced
 -1.7078601   1.1972050   0.4181294
That gives you the same output as:

> coef(glm(case ~ spontaneous + induced, data = infert,
+          family = binomial()))
(Intercept) spontaneous     induced
 -1.7078601   1.1972050   0.4181294
In this simple example, the time savings are negligible, but with much
larger datasets the difference may be large enough to make the
approach worthwhile.
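As a rough sketch of how you might time the two approaches yourself
(the simulated data and sizes below are arbitrary, purely for
illustration):

n <- 1e6
x1 <- rnorm(n)
x2 <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + 0.5 * x1 + 0.25 * x2))
DF <- data.frame(y = y, x1 = x1, x2 = x2)
X <- model.matrix(~ x1 + x2, data = DF)

# time the full glm() call versus the bare fitting routine
system.time(glm(y ~ x1 + x2, data = DF, family = binomial()))
system.time(glm.fit(X, DF$y, family = binomial()))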
See ?glm.fit and ?model.matrix for more information. Note that
glm.fit() does not return an object of class 'glm', which rules out
the other functions that have glm methods (e.g. summary(), anova(),
predict(), ...); those may or may not be of value to you. So there
are tradeoffs...
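That said, if you later need predicted probabilities, you can compute
them by hand from the coefficients, since for a logistic model the
fitted probabilities are just the inverse logit of the linear
predictor. A minimal sketch, reusing MM from the example above:

cf <- coef(glm.fit(MM, infert$case, family = binomial()))
eta <- MM %*% cf    # linear predictor X %*% beta
p <- plogis(eta)    # predicted probabilities via the inverse logit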
I have not compared Frank's lrm() function in the Design package
against glm() for speed on large datasets, but that may also be
something to look into.
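For reference, the call would look something like this (assuming the
Design package is installed; I have not benchmarked it):

library(Design)
fit <- lrm(case ~ spontaneous + induced, data = infert)
coef(fit)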
HTH,
Marc Schwartz