Is there something unusual about the way BATCH jobs are run? I ran a job like this:

    nice R CMD BATCH program.R

It ran for a little while and then it started eating up huge amounts of resources. Here is the entry from top:

top - 17:34:10 up 36 days, 8:10, 4 users, load average: 13.11, 6.85, 3.70
Tasks: 173 total, 6 running, 167 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 99.4%sy, 0.0%ni, 0.0%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:  32712300k total, 32565372k used,   146928k free,      856k buffers
Swap: 34766840k total, 34766840k used,        0k free,     9812k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
28829 lupp  26  10 53.3g  30g  128 R  7.9 98.6 134:23.54 R

Note that it's using about 53GB of virtual memory. Right after this the load average went up to over 23 and the system started killing off processes, including mine. And this is on a system with 32GB of physical memory and two quad-core Xeon processors.

However, when I run the job as an inferior process in Emacs under ESS, it is very well behaved. Here is the output from top:

top - 22:22:48 up 37 days, 12:58, 1 user, load average: 1.01, 1.04, 1.07
Tasks: 133 total, 2 running, 131 sleeping, 0 stopped, 0 zombie
Cpu(s): 25.0%us, 0.0%sy, 0.0%ni, 75.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:  32712300k total, 31120472k used,  1591828k free,   466180k buffers
Swap: 34766840k total,    86160k used, 34680680k free, 10935976k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
 3768 lupp  25   0 17.8g  17g 4060 R 96.2 56.6 154:42.43 R

It uses only 17.8GB of memory and the load stays right around 1.0.

Admittedly, this is a rather big job: lmer with 2,200,000 records crossed by 125,000 students and 10,000 teachers (a rough sketch of the call is in the P.S. below). But I don't understand why it should consume resources so avariciously when run as a BATCH job. Can anyone explain this to me?

TIA

--
Stuart Luppescu -*-*- slu <at> ccsr <dot> uchicago <dot> edu
CCSR in UEI at U of C
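
P.S. In case it helps, the model in program.R is essentially a crossed random-effects fit of the form below. This is only a rough sketch: the data-frame name 'scores', the response 'outcome', and the grouping variables 'student_id' and 'teacher_id' are placeholders, not the actual names in my script.

    library(lme4)

    ## Placeholder sketch of the crossed random-effects model described above:
    ## ~2,200,000 records, with random intercepts for ~125,000 students
    ## crossed with ~10,000 teachers (all names here are hypothetical).
    fit <- lmer(outcome ~ 1 + (1 | student_id) + (1 | teacher_id),
                data = scores)
    summary(fit)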