James Tucker
2009-Jul-22 08:38 UTC
[Mongrel] Mongrel_rails memory usage ballooning rapidly
200-300MB is not unusual for a Rails application. Is it growing past
that? Maybe use 3 or 4 mongrels, rather than 8.

On 22 Jul 2009, at 09:18, Navneet Aron wrote:

> Hi Folks,
> I've a Rails application in a production environment. The app has
> both a website and REST APIs that are called by mobile devices. It
> has an Apache front end and 8 mongrel_rails processes running when I
> start the cluster. Each of them stabilizes around 60 MB initially.
> After that, pretty rapidly, one or two of the mongrel_rails
> processes climb up to 300 MB (within 30 min). If I leave the system
> for 2 hours, pretty much all the processes will have reached upwards
> of 300 MB. (There are also times when I can leave the system running
> pretty much the whole day and memory usage will NOT go up to 300 MB.)
>
> The entire site becomes really slow and I have to restart the server.
> We wanted to profile the app, but we couldn't find a good profiling
> tool for Ruby 1.8.7.
>
> Can someone please suggest how we should go about troubleshooting
> this?
>
> Thanks,
> Navneet
> _______________________________________________
> Mongrel-users mailing list
> Mongrel-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-users
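If you're using mongrel_cluster, dropping the process count is a
one-line change in the cluster config. A minimal sketch (the paths and
port here are just examples, not from your setup):

  # config/mongrel_cluster.yml
  cwd: /var/www/app/current      # assumed deploy path
  environment: production
  address: 127.0.0.1
  port: "8000"                   # processes bind 8000..8003
  pid_file: tmp/pids/mongrel.pid
  servers: 4                     # down from 8

then restart with "mongrel_rails cluster::restart". Fewer, busier
mongrels often beat many idle ones on a RAM-constrained box.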
mcr at simtone.net
2009-Jul-22 14:50 UTC
[Mongrel] Mongrel_rails memory usage ballooning rapidly
>>>>> "Piyush" == Piyush Ranjan <piyush.pr at gmail.com> writes:Piyush> 8 mongrels are way too much capacity. Do you need that many ? Mongrels Piyush> taking 300MB is not unheard of as James said. Are you using loads of Piyush> libraries or cache in memory ? Moreover counting per mongrel memory is not Piyush> so easy. If they use shared libraries the memory per process shoots up in Piyush> the "ps auwx" command but actual memory usage is not that Piyush> high. ps auwx output: xdsott003-[~] mcr 1256 %ps auwx USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.2 1704 540 ? Ss Jun18 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S Jun18 0:00 [migration/0] ... The VSZ gives the total virtual size. The RSS gives the resident set size. If RSS is staying small-ish, and VSZ is climbing, then you have a memory leak (inside the process) of some kind. The leaked memory will be pushed into swap. If you run "lsof -p XXX" on the PIDs of the mongrels: mustang-[~] mcr 1093 %lsof -p 15305 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME mongrel_r 15305 xas2 cwd unknown /proc/15305/cwd (readlink: Permission denied) mongrel_r 15305 xas2 rtd unknown /proc/15305/root (readlink: Permission denied) mongrel_r 15305 xas2 txt unknown /proc/15305/exe (readlink: Permission denied) mongrel_r 15305 xas2 mem REG 8,1 3336 266044 /usr/bin/ruby1.8 mongrel_r 15305 xas2 mem REG 0,0 0 [heap] (stat: No such file or directory) mongrel_r 15305 xas2 mem REG 8,1 38372 170534 /lib/tls/i686/cmov/libnss_files-2.3.6.so mongrel_r 15305 xas2 mem REG 8,1 76548 170528 /lib/tls/i686/cmov/libnsl-2.3.6.so You can see the size of each shared object. Mostly, they will all be loaded by the time that the mongrel has served a few requests. (It is certainly possible that you only load some other things later on, if so, there is an easy out) I have not seen my mongrels grow to 300M each. Typically, they are 90 to 100M, but I do not do a lot of caching, etc. I think it is worth writing some perl scripts to try to characterize if your mongrel grows to 300M quickly, or slowly. You might do this externally (in perl/shell/awk/), or you could do it inside of mongrel by using /proc/self to self-examine and lot sizes on each request. I agree that having 8 mongrels may not be the best idea --- depends upon how much CPU and RAM you have. I planned to have have 4 per single-CPU virtual machine, and scale horizontally with additional application servers. (Someone has a patch to have capistrano turn off one mongrel at a time during a deployment, but I do not recall where) -- Michael Richardson <mcr at simtone.net> Director -- Consumer Desktop Development, Simtone Corporation, Ottawa, Canada Personal: http://www.sandelman.ca/mcr/ SIMtone Corporation fundamentally transforms computing into simple, secure, and very low-cost network-provisioned services pervasively accessible by everyone. Learn more at www.simtone.net and www.SIMtoneVDU.com
mcr at simtone.net
2009-Jul-22 15:09 UTC
[Mongrel] Mongrel_rails memory usage ballooning rapidly
>>>>> "Navneet" == Navneet Aron <navneetaron at gmail.com> writes:Navneet> Piyush, James,Thanks for your replies. Navneet> 1. The memory keeps increasing beyond 300MB and eventually the mongrel_rails Navneet> process goes away one by one. Check "dmesg" output. You may be getting OOM killed. Do you have enough swap? -- Michael Richardson <mcr at simtone.net> Director -- Consumer Desktop Development, Simtone Corporation, Ottawa, Canada Personal: http://www.sandelman.ca/mcr/ SIMtone Corporation fundamentally transforms computing into simple, secure, and very low-cost network-provisioned services pervasively accessible by everyone. Learn more at www.simtone.net and www.SIMtoneVDU.com
Navneet Aron <navneetaron at gmail.com> wrote:
> Hi Folks, I've a Rails application in a production environment. The
> app has both a website and REST APIs that are called by mobile
> devices. It has an Apache front end and 8 mongrel_rails processes
> running when I start the cluster. Each of them stabilizes around 60
> MB initially. After that, pretty rapidly, one or two of the
> mongrel_rails processes climb up to 300 MB (within 30 min). If I
> leave the system for 2 hours, pretty much all the processes will
> have reached upwards of 300 MB. (There are also times when I can
> leave the system running pretty much the whole day and memory usage
> will NOT go up to 300 MB.)

Hi,

This sort of stuff depends on your application, too:

* Rule #1: Don't slurp in your application (rough sketch at the end
  of this mail):

  - LIMIT all your SELECT statements in SQL; use will_paginate to
    display results (or whatever pagination helper is hot these days).

  - Don't read entire files into memory; read in blocks of 8K - 1M
    depending on your IO performance. Mongrel itself tries to read
    off the socket in 16K chunks.

  - If you run commands that output a lot of crap, read them
    incrementally with IO.popen, or redirect them to a tempfile and
    read it incrementally there. `command` will slurp all of that
    output into memory.

  A huge class of memory usage problems can be solved by avoiding
  slurping.

* Do you have slow actions that could cause a lot of clients to bunch
  up behind them? Make those actions faster, and then set
  num_processors to a low-ish number (1-30) in your mongrel config if
  you have to. Otherwise one Mongrel could have 900+ threads queued
  up waiting on one slow one. Make all your actions fast(ish).

The _only_ way Mongrel itself can be blamed for memory growth like
that is to have too many threads running; in all other cases it's
solely the application/framework's fault :)

I assume you log your requests. Look at your logs and find out
whether certain requests are taking a long time, or whether there's a
sudden burst of traffic within a short time period ("short time
period" meaning around the time it takes the longest request to
finish on your site).

If all requests finish pretty quickly and there was no traffic spike,
then it could be one or a few bad requests that cause your application
to eat memory like mad. For your idempotent requests, it would be
worth it to set up an isolated instance with one Mongrel to replay
request logs against, logging memory growth before/after each request
made.

Back to Rule #1: I semi-recently learned of a change to glibc malloc
that probably caused a lot of issues for slurpers:

  http://www.canonware.com/~ttt/2009/05/mr-malloc-gets-schooled.html

Since Ruby doesn't expose the malloc(3) interface, I've released a
(very lightly tested) gem here: http://bogomips.org/mall/
( gem install mall )

> The entire site becomes really slow and I have to restart the
> server. We wanted to profile the app, but we couldn't find a good
> profiling tool for Ruby 1.8.7.

Evan's bleak_house was alright the last time I needed it (ages ago),
but it's not the easiest to get going. I haven't needed to use
anything lately, but then I haven't been doing much Ruby.

Other things to look out for in your app:

OpenStruct - just avoid them; use Struct or Hash instead. I can't
remember exactly what's wrong with them, even, but they were BAD.

finalizers - make sure the blocks you pass to them don't have the
object you're finalizing bound to them; it's a common mistake (sketch
below).
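To make Rule #1 concrete, here is a rough, untested sketch of the
streaming patterns (the process/handle helpers are placeholders for
whatever your app does with the data):

  # Bad: loads the whole file into one Ruby string.
  data = File.read("/var/log/big.log")

  # Better: work in fixed-size blocks (64K here; tune to your IO).
  File.open("/var/log/big.log", "rb") do |f|
    while buf = f.read(65536)
      process(buf)
    end
  end

  # Same idea for subprocess output; `some_command` would slurp it all.
  IO.popen("some_command 2>&1", "r") do |pipe|
    while line = pipe.gets
      handle(line)
    end
  end

And the finalizer gotcha, also as an untested sketch (TempResource and
cleanup are made-up names):

  class TempResource
    def initialize(path)
      @path = path

      # WRONG: any block or proc created in this method closes over
      # "self", so the object can never be collected and the
      # finalizer never fires:
      #   ObjectSpace.define_finalizer(self) { File.unlink(@path) }

      # RIGHT: build the proc somewhere "self" is not this object,
      # and hand it only the data it needs.
      ObjectSpace.define_finalizer(self, self.class.cleanup(path))
    end

    # Class-level factory: the proc's binding holds only "path".
    def self.cleanup(path)
      proc { File.unlink(path) rescue nil }
    end
  end

--
Eric Wong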