I'm working with large data frames and running out of memory, and I hope some of you may be able to suggest a more efficient approach.

I have grid/lattice data representing a time series of 1 m^2 quadrats in a grassland: each 1 cm^2 cell (pixel) contains one ecological state (i.e., grass or bare ground). The goal is to calculate, for each cell, the transition probabilities to all available states (given that a cell is occupied by grass, what are the probabilities it will change to grass or to bare ground in the next time step?). I am using multinom() (in package nnet, MASS) to model these transitions as a function of the density of each state in some defined neighborhood around the focal cell. So I generate a data frame with the following columns: quadrat, year, x coordinate, y coordinate, state at time t, state at time t+1, density of state 1 at time t, density of state 2, ..., density of state n. Each quadrat-to-quadrat transition, using 100x100-cell quadrats, can therefore generate 10,000 records.

Right now I import the data, make the calculations, and store the result for each quadrat-to-quadrat transition in a temporary array, then use rbind() to append this array to the whole (final) data frame, then repeat for the next year (re-using the same temporary array). I use up my maximum memory allocation (1024 Mb) after about 130 quadrat-years of data. I could increase the maximum memory allocation some more, but that would simply raise the ceiling, not solve the problem.

I don't understand why R runs out of memory so soon, since text files containing the same data are much smaller. For example, 35 years of data for one quadrat take only 11 Mb when stored in a text file on my hard drive, but when I import them into R they occupy over 100 Mb (according to memory.size()). Should I think about exporting the data to text files as I go?

Thanks for your help,

Peter

------------------------
Peter Adler, PhD
Dept. Ecology, Evolution and Marine Biology
University of California
Santa Barbara, CA 93106
tel: (805) 893-7416
http://www.lifesci.ucsb.edu/~adler/
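P.S. For concreteness, here is roughly the kind of loop I mean. This is only a sketch: the file names, the column names, and the two density predictors are placeholders, not my actual code.

## Sketch only: placeholder file names, column names, and predictors
library(nnet)

years <- 1:35                      # placeholder for the years on hand
all.trans <- NULL                  # the growing "final" data frame

for (yr in years) {
    ## import one quadrat-to-quadrat transition (up to 10,000 rows);
    ## the file name pattern here is made up
    tmp <- read.table(paste("quad_", yr, ".txt", sep = ""), header = TRUE)

    ## ... compute neighborhood densities, state at t and t+1, etc. ...

    ## append to the accumulated data frame; rbind() copies the whole
    ## object each time it is called
    all.trans <- rbind(all.trans, tmp)
}

## next state as a function of neighborhood densities (placeholder names)
fit <- multinom(state.t1 ~ dens.grass + dens.bare, data = all.trans)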