Hi, I'm having trouble with glmmPQL.

I'm fitting a 2-level random intercept model with 90,000 cases and about 330 groups. I'm unable to get any results on the full data set. I can get it to work if I sample down to about 30,000 cases, but for models with N's much larger than that I get the following error:

m3 = glmmPQL(prepfood ~ iage + iemployed + iwhite + ieduclevl + imarried + servcomm + leadgrup + leadsty4, family = binomial, random = ~1 | congrega1, data = data)
Error: cannot allocate vector of size 4135 Kb
In addition: Warning message:
Reached total allocation of 253Mb: see help(memory.size)

I've tried increasing my virtual memory size and also defragmenting my hard drive. It hasn't helped. I've seen other people asking similar questions on the archive, but it seems that this problem should have gone away after earlier versions of R, is that right?

Is this a data problem, am I fitting a bad model, or is it a memory size problem? I'm hoping the last one, and any help is appreciated.

Thanks,
Matt
On Wed, 17 Mar 2004, Matt Loveland wrote:

> I'm having trouble with glmmPQL.

I think you are having trouble with memory limits, actually. As the author of glmmPQL, I don't appreciate my code being blamed for something else.

> I'm fitting a 2-level random intercept model with 90,000 cases and
> about 330 groups. I'm unable to get any results on the full data set.
> I can get it to work if I sample down to about 30,000 cases, but for
> models with N's much larger than that I get the following error:
>
> m3 = glmmPQL(prepfood ~ iage + iemployed + iwhite + ieduclevl + imarried + servcomm + leadgrup + leadsty4, family = binomial, random = ~1 | congrega1, data = data)
> Error: cannot allocate vector of size 4135 Kb
> In addition: Warning message:
> Reached total allocation of 253Mb: see help(memory.size)
>
> I've tried increasing my virtual memory size, and also defragmenting my
> hard drive. It hasn't helped. I've seen other people asking similar
> questions on the archive, but it seems that this problem should have
> gone away after earlier versions of R, is that right?

Do read the page it asks you to. You are on Windows, and you need to use the --max-mem-size flag when starting R to increase the memory available to R. However, if you do, swapping may make your machine nigh unusable. What did you not understand about help(memory.size)? This is also in the rw-FAQ: what in that did you not understand?

> Is this a data problem, am I fitting a bad model, or is it a memory size
> problem? I'm hoping the last one, and any help is appreciated.

Yes, so try a machine with 2Gb RAM.

-- 
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK, Fax: +44 1865 272595
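[A sketch of the advice above, for readers of the archive: the --max-mem-size flag is given on the command line used to launch R for Windows. The executable name, shortcut path, and 1024M value here are illustrative; the limit you choose must fit within the machine's physical RAM or swapping will make the machine nearly unusable, as noted above.]

    REM Illustrative: start R for Windows with a larger memory ceiling,
    REM e.g. by editing the Target field of the R shortcut.
    Rgui.exe --max-mem-size=1024M

Once inside R, memory.size(max = TRUE) reports the current ceiling, so you can confirm the flag took effect before re-running the model.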
"Matt Loveland" <loveland.1 at nd.edu> writes:

> I'm having trouble with glmmPQL.
>
> I'm fitting a 2-level random intercept model with 90,000 cases and
> about 330 groups. I'm unable to get any results on the full data
> set. I can get it to work if I sample down to about 30,000 cases,
> but for models with N's much larger than that I get the following
> error:
>
> m3 = glmmPQL(prepfood ~ iage + iemployed + iwhite + ieduclevl + imarried + servcomm + leadgrup + leadsty4, family = binomial, random = ~1 | congrega1, data = data)
> Error: cannot allocate vector of size 4135 Kb
> In addition: Warning message:
> Reached total allocation of 253Mb: see help(memory.size)

It may be possible to fit the model on your current machine with the current settings using the function GLMM from the lme4 package. By default this function uses essentially the same algorithm as glmmPQL from MASS (iteratively weighted calls to lme), but it employs a different version of lme: glmmPQL calls lme from the nlme package, while GLMM calls a more efficient representation in the lme4 package itself.

Finally, there is yet another implementation of lme in development, a version that is more economical in storage, more flexible in the model structures that can be fit (for those who have been waiting: yes, it can fit models with crossed and partially crossed random effects using a reasonable syntax and data representation), and fast. On models fit to large data sets this version is remarkably fast.

Our schedule is to release new versions of the lme4 and Matrix packages with R-1.9.0 (2004-04-04). Please contact me off-list if you want to participate in testing these packages or if you can provide data for our testing.

-- 
Douglas Bates, bates at stat.wisc.edu
Statistics Department, 608/262-2598
University of Wisconsin - Madison, http://www.stat.wisc.edu/~bates/
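[A sketch of the suggestion above, for readers of the archive: the call mirrors the original glmmPQL call, swapping in GLMM from lme4. The argument names shown are assumed to match the glmmPQL interface; check ?GLMM in the lme4 version you have installed, as the interface may differ.]

    ## Sketch, assuming the lme4 package is installed and that GLMM
    ## accepts family/random/data arguments like glmmPQL does.
    library(lme4)
    m3 <- GLMM(prepfood ~ iage + iemployed + iwhite + ieduclevl + imarried +
                 servcomm + leadgrup + leadsty4,
               family = binomial, random = ~1 | congrega1, data = data)

Because GLMM uses lme4's more efficient representation, the same model may fit within the 253Mb ceiling that defeats glmmPQL here.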