Hi all,

My environment is ruby-1.8.4, rails 1.2.2, mongrel 1.0.1, linux 2.6. I have
a problem with memory leaks under mongrel. My site runs 5 mongrel processes
on a 2G RAM machine; the memory of each process grows from about 20M to
about 250M and never returns to the initial 20M, so I have to restart the
mongrel processes once per day. The load is about 1M hits per day.

Waiting for your help, thanks.

Ken.
On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:
> My environment is ruby-1.8.4, rails 1.2.2, mongrel 1.0.1, linux 2.6. I have
> a problem with memory leaks under mongrel. My site runs 5 mongrel processes
> on a 2G RAM machine; the memory of each process grows from about 20M to
> about 250M and never returns to the initial 20M, so I have to restart the
> mongrel processes once per day. The load is about 1M hits per day.

Hello Ken,

A few things to tell us about your site:

1) Are you using RMagick, or performing any other image
   manipulation/thumbnailing?
2) Do you have the fastthread gem installed?
3) Do you perform file uploads through Rails, or use send_file or any
   other sending of dynamic files from Rails?

With that information, other users on the list can share their experiences.
Every application uses different gems, libraries and utilities, so offering
a generic answer without deeper knowledge of the problem is impossible.

Regards,

--
Luis Lavena
Multimedia systems
-
Leaders are made, they are not born. They are made by hard effort,
which is the price which all of us must pay to achieve any goal that
is worthwhile.
Vince Lombardi
On 3/5/07, Ken Wei <2828628 at gmail.com> wrote:
> My environment is ruby-1.8.4, rails 1.2.2, mongrel 1.0.1, linux 2.6. I have
> a problem with memory leaks under mongrel. [snip]

Are you running in production or development mode?

jeremy
Hi,

First of all, thanks everyone for the responses.

> 1) Are you using RMagick, or performing any other image
>    manipulation/thumbnailing?

nope

> 2) Do you have the fastthread gem installed?

yes, installed fastthread (0.6.4.1)

> 3) Do you perform file uploads through Rails, or use send_file or any
>    other sending of dynamic files from Rails?

nope

And I'm running in production mode.
installed MySQL/Ruby (2.7)

OS is CentOS release 4.2 (Final)
Even testing only a single URL, which reads a few records from MySQL and
then displays them, the memory usage still keeps growing and is never
recovered. The test tool is httperf; this is the command I used:

httperf --server 192.168.1.1 --port 81 --rate 15 --uri /user/ken --num-call 1 --num-conn 1000
Do you use ferret at all? I found that ferret (and acts_as_ferret) caused
the memory usage to grow to ~200MB a process. Can you list all your
installed gems, or at least all the gems that you are using?

-carl

On 3/5/07, Ken Wei <2828628 at gmail.com> wrote:
> Even testing only a single URL, which reads a few records from MySQL and
> then displays them, the memory usage still keeps growing and is never
> recovered. [snip]

--
EPA Rating: 3000 Lines of Code / Gallon (of coffee)
*** LOCAL GEMS ***

actionmailer (1.3.2, 1.2.5, 1.1.5)
    Service layer for easy email delivery and testing.

actionpack (1.13.2, 1.12.5, 1.11.2)
    Web-flow and rendering framework putting the VC in MVC.

actionwebservice (1.2.2, 1.1.6, 1.0.0)
    Web service support for Action Pack.

activerecord (1.15.2, 1.14.4, 1.13.2)
    Implements the ActiveRecord pattern for ORM.

activesupport (1.4.1, 1.3.1, 1.2.5)
    Support and utility classes used by the Rails framework.

cgi_multipart_eof_fix (2.1)
    Fix an exploitable bug in CGI multipart parsing which affects Ruby
    <= 1.8.5 when multipart boundary attribute contains a non-halting
    regular expression string.

daemons (1.0.5, 0.4.4)
    A toolkit to create and control daemons in different ways

fastthread (0.6.4.1)
    Optimized replacement for thread.rb primitives

gem_plugin (0.2.2, 0.2.1)
    A plugin system based only on rubygems that uses dependencies only

hoe (1.1.7)
    Hoe is a way to write Rakefiles much easier and cleaner.

hpricot (0.5)
    a swift, liberal HTML parser with a fantastic library

htmltokenizer (1.0)
    A class to tokenize HTML.

mechanize (0.6.4)
    Mechanize provides automated web-browsing

memcache-client (1.2.0, 1.0.3)
    A Ruby memcached client

mongrel (1.0.1)
    A small fast HTTP library and server that runs Rails, Camping,
    Nitro and Iowa apps.

mongrel_cluster (0.2.1)
    Mongrel plugin that provides commands and Capistrano tasks for
    managing multiple Mongrel processes.

mysql (2.7)
    MySQL/Ruby provides the same functions for Ruby programs that the
    MySQL C API provides for C programs.

rails (1.2.2, 1.1.6, 1.0.0)
    Web-application framework with template engine, control-flow
    layer, and ORM.

rake (0.7.1, 0.7.0)
    Ruby based make-like utility.

rubyforge (0.4.0)
    A simplistic script which automates a limited set of rubyforge
    operations

rubyzip (0.9.1)
    rubyzip is a ruby module for reading and writing zip files

sources (0.0.1)
    This package provides download sources for remote gem installation
If you are able, try doing a

> gem cleanup

to remove old versions of gems. I've seen one case where that fixed such
an issue.

~Wayne

On Mar 06, 2007, at 02:50 , Ken Wei wrote:
> *** LOCAL GEMS ***
> [snip]
'gem cleanup': I did that, but the problem is still there.
I've got issues with my rails application leaking memory as well. I can say
it's not Mongrel's fault, as I was able to duplicate the situation in
WEBrick.

My problem happens because I'm using monit to make sure my site stays up,
but in doing so, monit hits each of my mongrels every minute. I thought the
memory issues had to do with images, send_data or something else, but what
I found is that on a site that does nothing but respond to this monit
controller, the memory grew and grew.

I'm guessing it has to do with the plugins I'm using: when I tried the same
thing on a fresh rails application, the memory grew but capped off at about
35MB, whereas the full application, loading all plugins, continued to grow
until I killed it, never recovering memory.

So, for now, monit is both the cause of and the solution to my memory
problems. I was thinking about trying to create a handler for mongrel that
monit can hit to verify that it's running, but then there's the possibility
that mongrel is up but my application is down.

My other issue with using monit is the constant hits to the log files,
which logger.silence doesn't help with (at least the methods I've tried).
If someone knows how to silence a controller completely, I'd love to know.

Right now I'm a bit busy, but I think it would be a good test to add my
plugins one at a time to a fresh application and check the memory usage
after hitting it with a few thousand hits from apache bench.

On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:
> 'gem cleanup': I did that, but the problem is still there.
So, the memory leak is the problem, and monit is just highlighting it. Is
that correct?

Alex
Did you try adding GC.start in your application?

On 3/6/07, Joey Geiger <jgeiger at gmail.com> wrote:
> I've got issues with my rails application leaking memory as well. I can
> say it's not Mongrel's fault, as I was able to duplicate the situation
> in WEBrick.
> [snip]

--
EPA Rating: 3000 Lines of Code / Gallon (of coffee)
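[For readers unfamiliar with the suggestion, a minimal sketch of what
"adding GC.start" might look like in a Rails 1.2-era application. The
filter name and its placement in ApplicationController are illustrative,
not something specified in this thread.]

class ApplicationController < ActionController::Base
  # Force a garbage-collection pass after every request. This trades
  # CPU time for a smaller resident set; whether it actually helps
  # depends on the application's allocation pattern.
  after_filter :collect_garbage

  private

  def collect_garbage
    GC.start
  end
end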
Can you do a dump of all the plugins you are using? There are some that are
well known to be problematic with memory leaks.

- R0b

On 3/6/07, Joey Geiger <jgeiger at gmail.com> wrote:
> I've got issues with my rails application leaking memory as well. I can
> say it's not Mongrel's fault, as I was able to duplicate the situation
> in WEBrick.
> [snip]
> Did you try adding GC.start in your application?

yep
Here are the plugins that were on the application when I just tried loading
a single controller, which ended up hitting an 80MB limit after about 8
hours on all 4 mongrels running rails 1.2.2. They all restarted within
minutes of each other, which was interesting.

acts_as_ferret
arts
authorization
custom-err-msg
exception_notification
flex_image
has_many_polymorphs
http_url_validation
paginating_find
rails_rcov
resource_feeder (added after test)
restful_authentication
routing_navigator
simply_helpful
sql_session_store
timed_fragment_cache

The application I have in development that restarts every few days has the
following plugins:

acts_as_authenticated
acts_as_rateable
arts
assert_select
authorization
browser_filters
custom-err-msg
debug_view_helper
exception_notification
flex_image
paginating_find
rails_rcov
responsible_markup
simple_http_auth
timed_fragment_cache
white_list

I ran the tests with and without GC.start in the controller. GC.start
kicked off in the production application when I did a send_data call.

On 3/6/07, Carl Lerche <carl.lerche at gmail.com> wrote:
> Did you try adding GC.start in your application?
> [snip]
> The application I have in development that restarts every few days has
> the following plugins:
> [snip]
> flex_image

flex_image uses RMagick... which folks have lots of issues with...
> flex_image uses RMagick... which folks have lots of issues with...

I re-wrote the bits of flex_image that I use to use mini-magick and it's
working quite nicely. I did that when the whole discussion of "RMagick bad"
came up a couple of months ago.

On 3/6/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> So, the memory leak is the problem, and monit is just highlighting it.
> Is that correct?

For me, yes. I'm not exactly sure what is happening, but on my current
application, the only controller that was hit overnight was the monit
checker, which was enough to cause the leak.

Here's a copy of the controller, which I've tried to strip down as much as
possible. (On my current production app, it's doing 10k requests per
second :)

class MonitController < ActionController::Base
  session :off

  ## this is used by the monitoring scripts to see if the mongrel is
  ## up and running
  def index
  end
end
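[On the earlier question of silencing a controller completely, one
possibility is to give just this controller a logger that discards
everything. This is a hedged sketch against Rails 1.2 internals -- the
assumption being that the per-request "Processing..."/"Completed..." lines
go through the controller's logger method -- not a fix confirmed in this
thread.]

require 'logger'

class MonitController < ActionController::Base
  session :off

  # A logger that writes to /dev/null. Overriding #logger below should
  # keep monit's probe requests out of production.log, assuming Rails
  # 1.2 routes its request logging through this method.
  NULL_LOGGER = Logger.new('/dev/null')

  def logger
    NULL_LOGGER
  end

  def index
  end
end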
On Mar 6, 2007, at 12:34 PM, Joey Geiger wrote:
> I re-wrote the bits of flex_image that I use to use mini-magick and it's
> working quite nicely.
> [snip]

I have seen weird behavior from having monit check with an http request. It
would cause weird issues and mem leaks. I have since removed any http
checks from all my monit recipes and use external siteuptime monitoring for
the http status check. Monit just watches memory and cpu usage and restarts
dead or bloated mongrels.

It's just a fact of life with large rails applications: some of them just
like to keep using memory as much as possible. I have found that restarting
mongrels that go over 110MB for a few cycles will keep things in check
pretty well. This is for 64-bit systems, where memory usage is higher than
on 32-bit systems.

Here is a monit recipe I am using on a large number of servers that seems
to work very well.

set httpd port 9111
    allow localhost

set daemon 60
set logfile /data/username/shared/log/monit.log
set mail-format {from:info at engineyard.com}
set mailserver smtp.engineyard.com
set alert eymonit at gmail.com

check process mongrel_username_0
  with pidfile /data/username/shared/log/mongrel.5000.pid
  start program = "/data/username/shared/bin/start_mongrel.sh 5000 username"
  stop program = "/data/username/shared/bin/stop_mongrel.sh 5000 username"
  if totalmem is greater than 110.0 MB for 4 cycles then restart  # eating up memory?
  if cpu is greater than 50% for 2 cycles then alert              # send an email to admin
  if cpu is greater than 80% for 3 cycles then restart            # hung process?
  if loadavg(5min) greater than 10 for 8 cycles then restart      # bad, bad, bad
  if 10 restarts within 10 cycles then timeout                    # something is wrong, call the sys-admin
  group mongrel

check process mongrel_username_1
  with pidfile /data/username/shared/log/mongrel.5001.pid
  start program = "/data/username/shared/bin/start_mongrel.sh 5001 username"
  stop program = "/data/username/shared/bin/stop_mongrel.sh 5001 username"
  if totalmem is greater than 110.0 MB for 4 cycles then restart  # eating up memory?
  if cpu is greater than 50% for 2 cycles then alert              # send an email to admin
  if cpu is greater than 80% for 3 cycles then restart            # hung process?
  if loadavg(5min) greater than 10 for 8 cycles then restart      # bad, bad, bad
  if 10 restarts within 10 cycles then timeout                    # something is wrong, call the sys-admin
  group mongrel

check process mongrel_username_2
  with pidfile /data/username/shared/log/mongrel.5002.pid
  start program = "/data/username/shared/bin/start_mongrel.sh 5002 username"
  stop program = "/data/username/shared/bin/stop_mongrel.sh 5002 username"
  if totalmem is greater than 110.0 MB for 4 cycles then restart  # eating up memory?
  if cpu is greater than 50% for 2 cycles then alert              # send an email to admin
  if cpu is greater than 80% for 3 cycles then restart            # hung process?
  if loadavg(5min) greater than 10 for 8 cycles then restart      # bad, bad, bad
  if 10 restarts within 10 cycles then timeout                    # something is wrong, call the sys-admin
  group mongrel

Cheers-

--
Ezra Zygmuntowicz
-- Lead Rails Evangelist
-- ez at engineyard.com
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)
On 3/6/07, Ezra Zygmuntowicz <ezmobius at gmail.com> wrote:
> Here is a monit recipe I am using on a large number of servers that
> seems to work very well.
> [snip]

Ezra,

What do you use for uptime monitoring? I haven't found anything out of the
box that can check often enough (i.e. at least every minute).

- Rob
On Mar 6, 2007, at 1:23 PM, Rob Sanheim wrote:
> On 3/6/07, Ezra Zygmuntowicz <ezmobius at gmail.com> wrote:
> > snip
>
> Ezra,
>
> What do you use for uptime monitoring? I haven't found anything out of
> the box that can check often enough (i.e. at least every minute).
>
> - Rob

Rob-

We use http://siteuptime.com/ for an external site health check that checks
every 2 minutes from SF, NYC, Florida and London. We also have health
checking going on in our load balancers, with alerts closer to real time.

Cheers-

--
Ezra Zygmuntowicz
-- Lead Rails Evangelist
-- ez at engineyard.com
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)
I created a new rails app named 'test' containing only a controller and an
action. Here is the controller:

class MyTestController < ApplicationController
  def index
    render_text 'Hello!'
  end
end

I kept the setup the same as before. Then I ran a single mongrel server,
listening on 3000 by default, and used httperf to hit the action:

httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test --num-call 1 --num-conn 10000

The memory usage of the mongrel server grows from 20M to 144M in 20
seconds; it's crazy!

And I tried Lighttpd + FastCGI on this case, and it works well. So now I
wonder if I need to roll back to the fastcgi way. Is mongrel the future of
the rails community?

confused! confused! confused!
Is mongrel still far from production-ready, or did I do something wrong?
There has got to be something seriously wrong with your stack/install,
although I am not knowledgeable enough to tell you where to start looking.

Kyle Kochis

On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:
> Is mongrel still far from production-ready, or did I do something wrong?
On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:
> httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test
> --num-call 1 --num-conn 10000
>
> The memory usage of the mongrel server grows from 20M to 144M in 20
> seconds

Hey,

Looks like we are on the same stage of the learning curve about this stuff.
So, let's share our discoveries with the rest of the world :)

This is exactly what Mongrel does when it cannot cope with the incoming
traffic. I've discovered the same effect today.

You are definitely overloading it with 80 requests per second. After all,
it's a single-threaded instance of a fairly CPU-heavy framework. With no
page caching it should cope with ~10 to 30 requests per second max.

The crappy part about this: after the overload condition is off, the
Mongrel process stays at 150Mb. Not a problem when you are hosting one app
on the box, but it becomes a problem when it's ten.

By the way, check the errors section of the httperf report, and
production.log. See if there are "fd_unavailable" socket errors in the
former, and probably some complaints about "too many files open" in the
latter. If there are, you need to either increase the number of file
descriptors in the Linux kernel, or decrease the max number of open sockets
in the Mongrel(s), with the -n option. I don't know if it solves the "RAM
footprint growing to 150 Mb" problem... I will know it first thing tomorrow
morning :)

Alex
On Wed, Mar 07, 2007 at 12:55:06AM -0600, Alexey Verkhovsky wrote:
> This is exactly what Mongrel does when it cannot cope with the incoming
> traffic. I've discovered the same effect today.
>
> You are definitely overloading it with 80 requests per second. After all,
> it's a single-threaded instance of a fairly CPU-heavy framework. With no
> page caching it should cope with ~10 to 30 requests per second max.
>
> The crappy part about this: after the overload condition is off, the
> Mongrel process stays at 150Mb. Not a problem when you are hosting one
> app on the box, but it becomes a problem when it's ten.

I've had some success reducing the number of processors (Mongrel's
--num-procs setting). Try reducing this somewhat and see if it helps.

--
Cheers,
- Jacob Atzen
> Looks like we are on the same stage of the learning curve about this
> stuff. So, let's share our discoveries with the rest of the world :)
>
> > httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test
> > --num-call 1 --num-conn 10000
> >
> > The memory usage of the mongrel server grows from 20M to 144M in 20
> > seconds
>
> This is exactly what Mongrel does when it cannot cope with the incoming
> traffic. I've discovered the same effect today.
>
> You are definitely overloading it with 80 requests per second. After all,
> it's a single-threaded instance of a fairly CPU-heavy framework. With no
> page caching it should cope with ~10 to 30 requests per second max.

Yes, it's overloaded, with the message 'Too many open files...'. Anyway,
this test doesn't get to the heart of the issue. I'll run another one
later.
On 3/6/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> On 3/6/07, Ken Wei <2828628 at gmail.com> wrote:
> > httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test
> > --num-call 1 --num-conn 10000
> >
> > The memory usage of the mongrel server grows from 20M to 144M in 20
> > seconds
>
> This is exactly what Mongrel does when it cannot cope with the incoming
> traffic. I've discovered the same effect today.

I think it's fair to make a distinction here. What is probably happening is
that Rails is not keeping up with that 80 reqs/second rate; it's not
Mongrel. On any remotely modern hardware, Mongrel will easily keep that
pace itself.

However, the net effect is that Mongrel creates a thread for each accepted
connection. These threads are fairly memory intensive, since each one
carries with it a fair amount of context, yet all they are doing is sitting
there sleeping behind a mutex, waiting for their chance to wake up and run
their request through the Rails handler.

> By the way, check the errors section of the httperf report, and
> production.log. See if there are "fd_unavailable" socket errors in the
> former, and probably some complaints about "too many files open" in the
> latter. [snip]

No. That is probably happening because of the file descriptor limit in
Ruby. Your Mongrel has accepted as many connections as Ruby can handle; it
is out of descriptors.

Kirk Haines
On Wed, 7 Mar 2007 04:14:57 -0700 "Kirk Haines" <wyhaines at gmail.com> wrote:
> No. That is probably happening because of the file descriptor limit in
> Ruby. Your Mongrel has accepted as many connections as Ruby can handle;
> it is out of descriptors.

What file descriptor limit are you referring to? A typical Linux <default>
ulimit on file descriptors is 1024, which should be more than enough for
the test Ken is performing.

Also, I would recommend doing a test where you separate Mongrel from Rails.
Use a simple Mongrel handler like the one found here:
http://mongrel.rubyforge.org/rdoc/index.html

require 'mongrel'

class SimpleHandler < Mongrel::HttpHandler
  def process(request, response)
    response.start(200) do |head, out|
      head["Content-Type"] = "text/plain"
      out.write("hello!\n")
    end
  end
end

h = Mongrel::HttpServer.new("0.0.0.0", "3000")
h.register("/test", SimpleHandler.new)
h.register("/files", Mongrel::DirHandler.new("."))
h.run.join

This will possibly narrow down the problem area. If Mongrel itself is to
blame then you should still see lots-o-memory growth. Otherwise, it is the
interface with Rails that is causing the problem.

Jim Powers
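[To make the comparison direct, one could save that handler as, say,
simple.rb -- a hypothetical filename -- and point the same httperf load at
it that was used against the Rails app:]

ruby simple.rb
httperf --server 192.168.1.131 --port 3000 --uri /test --rate 80 --num-conn 10000 --num-call 1

If the bare handler's memory stays flat under the same load, the growth is
coming from the Rails side of the stack rather than from Mongrel's own
request handling.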
Following is the newest test case I tried:

* create a fresh rails app
* then create a controller, a model and a view
* the model is just a simple table containing id and name
* use httperf to beat the url /book/show/1 with the following steps:

1) httperf --server xxx --port 3000 --rate 50 --uri /book/show/1 --num-call 1 --num-conn 2000
2) httperf --server xxx --port 3000 --rate 60 --uri /book/show/1 --num-call 1 --num-conn 2000
3) httperf --server xxx --port 3000 --rate 70 --uri /book/show/1 --num-call 1 --num-conn 2000
4) httperf --server xxx --port 3000 --rate 80 --uri /book/show/1 --num-call 1 --num-conn 2000

All of 1), 2), 3) and 4) passed. The memory usage for 1), 2) and 3) was a
normal 30~32M; however, 4) went up to 60M and never recovered again. This
is the report of 4):

--------------------------------------------------------
httperf --client=0/1 --server=192.168.1.131 --port=3000 --uri=/book/show/1 --rate=80 --send-buffer=4096 --recv-buffer=16384 --num-conns=2000 --num-calls=1
Maximum connect burst length: 1

Total: connections 2000 requests 2000 replies 2000 test-duration 32.065 s

Connection rate: 62.4 conn/s (16.0 ms/conn, <=482 concurrent connections)
Connection time [ms]: min 24.0 avg 3091.5 max 12706.6 median 2240.5 stddev 2532.5
Connection time [ms]: connect 0.2
Connection length [replies/conn]: 1.000

Request rate: 62.4 req/s (16.0 ms/req)
Request size [B]: 75.0

Reply rate [replies/s]: min 49.4 avg 61.2 max 73.6 stddev 8.9 (6 samples)
Reply time [ms]: response 3091.3 transfer 0.0
Reply size [B]: header 267.0 content 1522.0 footer 0.0 (total 1789.0)
Reply status: 1xx=0 2xx=2000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.00 system 31.06 (user 3.1% system 96.9% total 100.0%)
Net I/O: 113.5 KB/s (0.9*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
------------------------------------------------------------------------------

I also repeated 4) two more times.
I used this command to run mongrel for the above test case:

mongrel_rails start -d -e production
On 3/7/07, Jim Powers <rancor at mindspring.com> wrote:
> What file descriptor limit are you referring to? A typical Linux
> <default> ulimit on file descriptors is 1024, which should be more than
> enough for the test Ken is performing.

It depends on how quickly it outruns the handler's ability to process the
requests. Consider the last httperf command that he gave:

httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test --num-call 1 --num-conn 10000

10000 connections is plenty to run out of file descriptors if he's only
managing to process, say, 70 requests per second.

Kirk Haines
Having RTFMed on the issue: Mongrel's max number of SOCKETS is 1024, due to
the use of select(). And in my case yesterday it was running out of file
descriptors way before it hit this limit.

As for threads and their associated context using up memory, this may well
be the case. Why does it stay at 150 Mb forever after the load is off,
however?

Alex
On 3/7/07, Ken Wei <2828628 at gmail.com> wrote:
> 4) httperf --server xxx --port 3000 --rate 80 --uri /book/show/1
> --num-call 1 --num-conn 2000

2000 calls at a rate of 80/sec is not enough to flood it completely and
make it run out of either file descriptors or sockets. Try 10000 calls, and
you'll reproduce the same effect as you reported yesterday.

Alex
On 3/7/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> Having RTFMed on the issue: Mongrel's max number of SOCKETS is 1024, due
> to the use of select(). And in my case yesterday it was running out of
> file descriptors way before it hit this limit.
>
> As for threads and their associated context using up memory, this may
> well be the case. Why does it stay at 150 Mb forever after the load is
> off, however?

Analyzing details about where RAM is going is an exercise in patience. A
quick and hopefully stupid question here, though... are you using an older
version of Mongrel? Or are you doing anything that creates an array and
then shifts values off of it? shift() has a dumb-assed bug in it (made
moreso by the fact that, at least as of 1.8.5, it still exists) that will
mess with your RAM usage badly, especially if you have large things in your
array.

Kirk Haines
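[As an illustration of the pattern Kirk is warning about, the shape to grep
for looks like the following. The memory behavior is Kirk's claim about
Ruby <= 1.8.5, not something verified here, and pop-instead-of-shift as a
workaround is likewise an assumption.]

# The suspect pattern on Ruby <= 1.8.5: draining a large array with
# shift can reportedly leave the process holding its peak allocation.
queue = Array.new(100_000) { |i| "payload-#{i}" }
queue.shift until queue.empty?

# A possible workaround: reverse once, then drain from the tail with pop.
queue = Array.new(100_000) { |i| "payload-#{i}" }
queue.reverse!
queue.pop until queue.empty?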
Zed,

Do you know what causes an overloaded Mongrel to go to 150 Mb VSS and stay
there, and why it is supposed to behave this way? If not, would you like
somebody to figure it out and/or try to fix it? I could give it a shot.

Alex

On 3/7/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:
> And one more distinction: You can use the -n parameter to set a max
> allowed concurrent connections limit.
But, but, it did NOT run out of file descriptors; it works well except for
the memory leak, which goes up to 60M and never recovers. The following is
the full output from httperf; please take a serious look first:

-----------------------------------------------------------
httperf --client=0/1 --server=192.168.1.131 --port=3000 --uri=/book/show/1 --rate=80 --send-buffer=4096 --recv-buffer=16384 --num-conns=2000 --num-calls=1
[snip: same report as in my previous message]
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
------------------------------------------------------------------------------

The output looks good: everything passed and there are no errors at all.
And there is no error/exception in production.log, e.g. 'Too many open
files...'. So, everything works well/fast except for the LEAK; again, I
never pointed at mongrel at all.
This is the only controller in the above test case:

class BookController < ApplicationController
  scaffold :book

  def list
    logger.debug 'List the books'
    @books = Book.find_all
  end

  def edit
    @book = Book.find(@params['id'])
    @categories = Category.find_all
  end
end

mongrel is 1.0.1, rails is 1.2.2, ruby is 1.8.5
On Wed, 7 Mar 2007 04:14:57 -0700 "Kirk Haines" <wyhaines at gmail.com> wrote:
> I think it's fair to make a distinction here.

And one more distinction: You can use the -n parameter to set a max allowed
concurrent connections limit.

What Mongrel does when it hits that limit is reject the current connection,
then go through the list of active threads and start killing any that are
older than about 60 seconds.

So, if people think they're overloading Mongrel with too many threads, they
can easily test it by setting -n to, say, 30, thrashing it, and then
looking at either the screen output or mongrel.log (if you started as a
daemon).

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
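[Spelled out as commands, the test Zed describes could look like this; the
host, port and URI are carried over from earlier messages and are
illustrative.]

mongrel_rails start -d -e production -p 3000 -n 30
httperf --server 192.168.1.131 --port 3000 --uri /book/show/1 --rate 80 --num-conn 10000 --num-call 1
tail -f log/mongrel.log   # watch for the over-limit connection kills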
It did not run out of file descriptors because you didn't give it enough
time to run out of them. It was, however, going to run out of something in
the next minute or so. The evidence is in the fact that the reported
throughput (62.4 requests/sec) was lower than the load you gave it (80
requests/sec).

Alex

On 3/7/07, Ken Wei <2828628 at gmail.com> wrote:
> But, but, it did NOT run out of file descriptors; it works well except
> for the memory leak, which goes up to 60M and never recovers.
But how come the memory usage still stays there (60M) while everything goes
well?
On Wed, 7 Mar 2007 13:06:17 +0800 "Ken Wei" <2828628 at gmail.com> wrote:
> I created a new rails app named 'test' containing only a controller and
> an action. Here is the controller:
>
> class MyTestController < ApplicationController
>   def index
>     render_text 'Hello!'
>   end
> end

Yes, *this* is the test case you use.

> I kept the setup the same as before. Then I ran a single mongrel server,
> listening on 3000 by default, and used httperf to hit the action:
>
> httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test
> --num-call 1 --num-conn 10000

The --num-call parameter is useless, and a rate of 80 isn't good either.
How did you come up with this rating? Start with --num-conn 1000, see what
the estimated rate is, then give it that rate. After that, slowly move
--rate up or down until you find the point where you can't make it any
faster without the speed dropping.

And go get http://peepcode.com/products/benchmarking-with-httperf so I
don't have to try to explain it over email.

> The memory usage of the mongrel server grows from 20M to 144M in 20
> seconds; it's crazy!

Well, there are enough people complaining about memory leaks in *MONGREL*
when they run *RAILS* that I'll have to investigate it.

> And I tried Lighttpd + FastCGI on this case, and it works well. So now I
> wonder if I need to roll back to the fastcgi way. Is mongrel the future
> of the rails community?

FastCGI forces a garbage collection after a certain number of requests, but
if your app runs with FastCGI then you should use that. Don't use a
solution that doesn't work.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
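[A sketch of the calibration procedure Zed outlines, with illustrative
numbers -- 45 is a made-up starting rate, not a recommendation.]

# Step 1: run without a --rate cap and note the "Request rate" httperf
# reports; that is the server's natural throughput for this URI.
httperf --server 192.168.1.131 --port 3000 --uri /my_test --num-conn 1000

# Step 2: replay at roughly the measured rate, then nudge --rate up or
# down until throughput stops improving.
httperf --server 192.168.1.131 --port 3000 --uri /my_test --num-conn 1000 --rate 45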
OK, I probably know what is happening. So...

1. Mongrel with a Hello World application has a virtual segment size (VSS)
   of ~32 Mb. As long as it is not overloaded, it stays at that level.
2. Once you overload it, Mongrel starts spawning new threads, up to the
   default limit of 1024, or {max # of file descriptors per process - 2},
   whichever is smaller.
3. Creating a new thread takes quite a bit of RAM (I guess most of it is
   simply allocated to the stack). 1024 new threads take over 60 Mb.
4. Killing a thread does not cause the Ruby interpreter to release the
   memory back to the OS.

So, in a nutshell, in an overload situation you end up with all 1024
threads parked at the Rails mutex.

The solution is to set --num-procs much lower than 1024. Say, to 64. Or
even to 10. In my tests, the Hello World application did not exceed 48 Mb
even with --num-procs=256. Due to the Rails mutex, having more threads
doesn't really help in the situation where all of your static content,
uploads and downloads are served by an upstream web server.

A more "industrial" solution would be to redesign Mongrel's internal
architecture a bit. Requests routed to Rails could be placed in a queue and
the thread released, instead of being parked at the mutex. That would help
people hosting their apps in those 64 Mb slices to work without an upstream
web server, but it would also add complexity to the code. One of those
tradeoffs.

I also think that --num-procs=1024 is not the best possible default. For
the benefit of all the people running Rails in cheap VPSes, it should be
set to a level not exceeding the limits of a 64 Mb slice.

I have a pretty lengthy record of the configuration under test and what I
did to figure all this out. If anyone is interested in reading it, let me
know.

Alex
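[In command form, Alexey's mitigation might look like this; 64 is his
example value, not a universally right number, and the other flags are
carried over from earlier messages.]

mongrel_rails start -d -e production -p 3000 --num-procs 64

The tradeoff, per Zed's description of -n above, is that connections beyond
the cap are rejected rather than queued, so the upstream web server or load
balancer should be prepared to retry another mongrel in the cluster.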
Alex,

I am quite interested in reading it.

Thanks,
~Wayne

On Mar 07, 2007, at 19:38 , Alexey Verkhovsky wrote:
> I have a pretty lengthy record of the configuration under test and what
> I did to figure all this out. If anyone is interested in reading it, let
> me know.
>
> Alex
Ah. Thank you Alex, this is very handy information for me, as I am in
charge of quite a few small VPS's. I have not had memory-leaking issues
yet, but this could come in really handy for a much larger-scale rails
project I am working on.

Kyle Kochis

On 3/7/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> OK, I probably know what is happening. So...
> [snip]
On 3/7/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> OK, I probably know what is happening. So...
> [snip]
>
> I also think that --num-procs=1024 is not the best possible default. For
> the benefit of all the people running Rails in cheap VPSes, it should be
> set to a level not exceeding the limits of a 64 Mb slice.

All of this sounds good, but I would say that the defaults should target
something more like a 128 - 256 meg slice. No one who is sane is trying to
really run a Rails app on a 64 meg VPS - that's just asking for a lot of
pain.

- Rob
I'm going to go ahead and blame acts_as_ferret. I had an application that used acts_as_ferret and each mongrel process reached up to 300MB. I removed acts_as_ferret (as well as switched from mysql to postgresql) and now the exact same application is staying steady at 70MB a process. I also use a lot of RMagick in that same application (didn't have time to remove it) but the processes still stay at 70MB. Hope this helps. On 3/6/07, Joey Geiger <jgeiger at gmail.com> wrote:> Here are the plugins that were on the application when I just tried > loading a single controller, which ended up hitting an 80MB limit > after about 8 hours on all 4 mongrels running rails 1.2.2. They all > restarted within minutes of each other, which was interesting. > > acts_as_ferret > arts > authorization > custom-err-msg > exception_notification > flex_image > has_many_polymorphs > http_url_validation > paginating_find > rails_rcov > resource_feeder (added after test) > restful_authentication > routing_navigator > simply_helpful > sql_session_store > timed_fragment_cache > > The application I have in development that restarts every few days has > the following plugins. > acts_as_authenticated > acts_as_rateable > arts > assert_select > authorization > browser_filters > custom-err-msg > debug_view_helper > exception_notification > flex_image > paginating_find > rails_rcov > responsible_markup > simple_http_auth > timed_fragment_cache > white_list > > I ran the tests with and without GC.start in the controller. > GC.start kicked off in the production application when I do a send_data call. > > On 3/6/07, Carl Lerche <carl.lerche at gmail.com> wrote: > > Did you try adding GC.start in your application? > > > > On 3/6/07, Joey Geiger <jgeiger at gmail.com> wrote: > > > I've got issues with my rails application leaking memory as well. I > > > can say it's not Mongrel's fault, as I was able to duplicate the > > > situation in WEBrick. > > > > > > My problem happens because I'm using monit to make sure my site stays > > > up, but in doing so, monit hits each of my mongrels every minute. I > > > thought the memory issues had to do with images, send_data or > > > something else, and what I found is that on a site that does nothing > > > but respond to this monit controller, the memory grew and grew. > > > > > > I'm guessing it has to do with the plugins I'm using, as when I tried > > > the same thing on a fresh rails application, the memory grew, but > > > capped off at about 35MB, where the full application loading all > > > plugins continued to grow until I killed it, never recovering memory. > > > > > > So, for now, monit is the cause and solution to my memory problems. I > > > was thinking about trying to create a handler for mongrel that monit > > > can hit to verify that it's running, but then there's the possibility > > > that mongrel is up, but my application is down. > > > > > > My other issue with using monit is the constant hits to the log > > > files, which logger.silence doesn't help (at least the methods I've > > > tried). If someone knows how to silence a controller completely, I'd > > > love to know. > > > > > > Right now I'm a bit busy, but I think it would be a good test to add > > > my plugins one at a time to a fresh application and check the memory > > > usage after hitting it with a few thousand hits from apache bench.
> > > > On 3/6/07, Ken Wei <2828628 at gmail.com> wrote: > > > > 'gem cleanup' i did that, but still -- EPA Rating: 3000 Lines of Code / Gallon (of coffee)
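A minimal sketch of the GC.start suggestion quoted in that exchange, assuming a Rails 1.2-era application. The filter name, the counter and the threshold of 50 are all invented for illustration; the idea is just to force a sweep every Nth request instead of on every hit.

    # Hypothetical sketch: run GC.start every 50th request rather than
    # per request, trading a little latency for a flatter RSS curve.
    class ApplicationController < ActionController::Base
      @@gc_request_count = 0

      after_filter :run_gc_periodically

      private

      def run_gc_periodically
        @@gc_request_count += 1
        if @@gc_request_count >= 50
          @@gc_request_count = 0
          GC.start
        end
      end
    end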
On Wed, 7 Mar 2007 17:38:19 -0700 "Alexey Verkhovsky" <alexey.verkhovsky at gmail.com> wrote:> A more "industrial" solution would be to redesign Mongrel's internal > architecture a bit. Requests routed to Rails can be placed in a queue, and > the thread released, instead of being parked at the mutex. It would help > people hosting their apps in those 64 Mb slices to work without an upstream > web server, but it would also add complexity to the code. One of those > tradeoffs.They are kept in a queue, a couple actually, and they are killed off when Mongrel can, which is usually when it gets a chance. If you're thrashing it then it doesn't have much of a chance. I'd say first off the solution is: just quit doing that. If you're maxing out your Mongrel servers then you're seriously screwed anyway and there's nothing but -n to help you. No amount of additional queuing will help. You have to be smarter about it, especially if you've got a setup that's only 64MB of RAM and somehow you're getting so many requests you can't keep up. Time to stop being cheap and fork over the extra money for a bigger slice (which is good because it means you're popular). I went through this many times over back in the deep dark days before fastthread, and no matter what you do, if you're piling requests behind some kind of list--whether that's a mutex or something else--you build up RAM. It's as simple as: you make threads, threads take RAM, threads don't go away fast enough. Ultimately though, everyone will just keep cycling over the same old problems looking for how they can solve it within Mongrel when really the solution has to come from ruby-core. Until Ruby's IO, GC, and threads improve drastically you'll keep hitting these problems. -- Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu http://www.zedshaw.com/ http://www.awprofessional.com/title/0321483502 -- The Mongrel Book http://mongrel.rubyforge.org/
Some further findings: A "Hello World" Rails application on my rather humble rig (Dell D620 laptop running Ubuntu 6.10, Ruby 1.8.4 and Rails 1.2.2) can handle over 500 hits per second on the following action: def say_hi render :text => 'Hi!' end It also doesn't leak any memory at all *when it is not overloaded*. E.g., under maximum non-concurrent load (a single-threaded test client that fires the next request immediately upon receiving a response to the previous one), it stays up forever. When Mongrel + "Hello, World" is overloaded, there is a memory leak to the tune of 6 Mb per hour. I have yet to figure out where it is coming from. On 3/7/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:> I'd say first off the solution is: just quit doing that.This is something I can wholeheartedly agree with :) > If you're maxing out your Mongrel servers then you're seriously screwed anyway > and there's nothing but -n to help you.After a couple of hours quietly pumping iron in a gym, I came to the same conclusion. Let me explain myself, however. The situation that I am concerned about is 20 to 50 apps clustered on the same set of boxes, written by one group of people and supervised by another. Think "large corporate IT", or shared hosting a la TextDrive. I want to maximize throughput under heavy load, but a more important problem is to reduce the ability of one app to screw up other apps on the same box(es).> It's as simple as: you make threads, threads take RAM, threads > don't go away fast enough.What I was thinking is that by uncoupling the request from its thread, you can probably max out all capabilities (CPU, I/O, Rails) of a 4-core commodity box with only 15-30 threads: 10-20 request handlers (that will either serve static stuff or carry the request to the Rails queue), one Rails handler that loops over the request queue, takes requests to Rails and drops responses off in the response queue, and 5-10 response handlers (whose job is simply to copy the Rails response from the response queue to the originating sockets). Right now, as far as I understand the code, a request is carried all the way through by the same thread. On second thoughts, this is asynchronous communication between threads within the process. Far too much design and maintenance overhead for the marginal benefits it may (or may not) bring. Basically, just me being stupid by trying to be too smart. :)> Until Ruby's IO, GC, and threads improve drastically you'll keep hitting > these problems.Yes. Meantime, the recipe apparently is "serve static stuff through an upstream web server, and use smaller values of --num-procs". A Mongrel that only receives dynamic requests is, essentially, a single-threaded process anyway. The only reason to have more than one (1) thread is so that other requests can queue up while it's doing something that takes time. Cool. By the way, is Ruby 1.9 solving all of these issues?> No one who is sane is trying to really run a Rails app on a 64 MB VPS -- > that's just asking for a lot of pain.Well, entry-level slices on most Rails VPS services are 64 Mb. My poking around so far seems to say "it's doable, but you need to tune it". Best regards, Alex
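A minimal Ruby sketch of the queue-based decoupling Alexey describes above, using plain threads and the stdlib Queue. Everything here (names, pool sizes, the stand-in dispatcher) is invented for illustration and is not Mongrel's actual internals; the acceptor side that would push [socket, env] pairs onto the request queue is omitted.

    require 'thread'

    requests  = Queue.new
    responses = Queue.new

    # Stand-in for the Rails dispatcher (hypothetical).
    def handle_with_rails(env)
      "HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nHi!"
    end

    # One Rails worker: serializes dispatching, playing the role of the
    # Rails mutex, but without parking one thread per waiting request.
    Thread.new do
      loop do
        client, env = requests.pop
        responses.push([client, handle_with_rails(env)])
      end
    end

    # A small pool of responders that copy finished responses back to
    # the originating sockets, freeing the Rails worker immediately.
    5.times do
      Thread.new do
        loop do
          client, body = responses.pop
          client.write(body)
          client.close
        end
      end
    end

The point of the single worker is that pending requests cost only a queue entry apiece instead of a parked thread and its stack, which is exactly the RAM cost being debated in this thread.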
On 3/7/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:> > It also doesn't leak any memory at all *when it is not overloaded*. E.g., > under maximum non-concurrent load (a single-threaded test client that fires the > next request immediately upon receiving a response to the previous one), it > stays up forever.Correction: it stays up forever, at the same VSS / RSS values as when it just started. Alex
Hi! On Wed, Mar 07, 2007 at 08:49:07PM -0800, Carl Lerche wrote:> I'm going to go ahead and blame acts_as_ferret. I had an application > that used acts_as_ferret and each mongrel process reached up to 300MB.Well, as long as it stays at 300 MB after that, there doesn't seem to be a leak :-) It's right that aaf may take up some RAM, because it holds index instances open across requests. When using these, data structures that can be reused between searches are kept in RAM by Ferret (that process is called 'warming up' the index), and so there really might be some initial growth in the RAM usage of your mongrel processes. How large this can get depends on the size of your index, of course. Just making use of RAM to speed things up is not that bad, imho. Of course I could open/close the index on each request, but that would be like doing plain old CGI...> I removed acts_as_ferret (as well as switched from mysql to > postgresql) and now the exact same application is staying steady at 70 > MB a process. I also use a lot of RMagick in that same application > (didn't have time to remove it) but the processes still stay at 70MB.I have a live app here running on Rails 1.2.1/MySQL with three mongrels for weeks without a hiccup, each taking up max. 60MB of RAM. And yes, it uses acts_as_ferret :-) Jens -- Jens Krämer webit! Gesellschaft für neue Medien mbH Schnorrstraße 76 | 01069 Dresden Telefon +49 351 46766-0 | Telefax +49 351 46766-66 kraemer at webit.de | www.webit.de Amtsgericht Dresden | HRB 15422 GF Sven Haubold, Hagen Malessa
A quick fix could be to use HAProxy for load balancing and set the max number of connections per mongrel to 1. An added bonus here is that all requests get queued up at HAProxy (which is very conservative on memory use) and routed to the first available mongrel process instead of getting queued up at the mongrel level. I read nginx would in the near future (or maybe already) have the option to limit the number of simultaneous proxied connections. But HAProxy is the only tool that can do this that I have experience with (still wondering why the Rails community seems to favor Pound so much). Piet. ________________________________ From: mongrel-users-bounces at rubyforge.org [mailto:mongrel-users-bounces at rubyforge.org] On Behalf Of Alexey Verkhovsky Sent: donderdag 8 maart 2007 7:36 To: mongrel-users at rubyforge.org Subject: Re: [Mongrel] Memory leaks in my site <snip> It also doesn't leak any memory at all *when it is not overloaded*. E.g., under maximum non-concurrent load (a single-threaded test client that fires the next request immediately upon receiving a response to the previous one), it stays up forever. When Mongrel + "Hello, World" is overloaded, there is a memory leak to the tune of 6 Mb per hour. I have yet to figure out where it is coming from. <snip> Yes. Meantime, the recipe apparently is "serve static stuff through an upstream web server, and use smaller values of --num-procs". A Mongrel that only receives dynamic requests is, essentially, a single-threaded process anyway. The only reason to have more than one (1) thread is so that other requests can queue up while it's doing something that takes time. Cool.
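A sketch of what the HAProxy side of that suggestion could look like, for anyone who has not used it. The listener name, addresses and ports are placeholders, and whether per-server maxconn is available depends on your HAProxy version; the per-server maxconn 1 is the queue-at-the-proxy trick Piet describes.

    # hypothetical haproxy.cfg fragment
    listen mongrel_cluster 0.0.0.0:8080
        balance roundrobin
        server mongrel0 127.0.0.1:8000 maxconn 1
        server mongrel1 127.0.0.1:8001 maxconn 1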
On Thu, Mar 08, 2007 at 10:24:40AM +0100, Piet Hadermann wrote:> > A quick fix could be to use HAProxy for load balancing and set the max > number of connections per mongrel to 1. > An added bonus here is that all requests get queued up at HAProxy (which is > very conservative on memory use) and routed to the first available > mongrel process instead of getting queued up at the mongrel level. > > I read nginx would in the near future (or maybe already) have the option > to limit the number of simultaneous proxied connections. But HAProxy is > the only tool that can do this that I have experience with (still > wondering why the Rails community seems to favor Pound so much).AFAIR, Pen can limit the number of connections per mongrel process, too. Jens -- Jens Krämer webit! Gesellschaft für neue Medien mbH Schnorrstraße 76 | 01069 Dresden Telefon +49 351 46766-0 | Telefax +49 351 46766-66 kraemer at webit.de | www.webit.de Amtsgericht Dresden | HRB 15422 GF Sven Haubold, Hagen Malessa
Probably off topic, but which version of ferret are you running? The latest incarnation (0.10.14, I think?) was segfaulting about every 10th query, bringing down mongrel along with it. On 3/8/07, Jens Kraemer <kraemer at webit.de> wrote:> I have a live app here running on Rails 1.2.1/MySQL with three mongrels > for weeks without a hiccup, each taking up max. 60MB of RAM. And yes, it > uses acts_as_ferret :-)
On 3/8/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:> C'mon, Java processes typically hit the 600M or even > 2G ranges and that's just commonplace.The Java runtime is a bloat. But its processes are natively multi-threaded and can serve multiple apps from the same server. So, one Java process should probably be compared with about 5-10 Rails processes in terms of RAM footprint. It's still a bloat. > Even better, why are people complaining about the memory footprint of > RAILS on the MONGREL mailing list?Not guilty. In this thread, I am complaining about Mongrel's memory footprint, not "Hello, World"'s. :) Besides, I'm not even complaining, just trying to understand the ins and outs of this stack, and what can or cannot be squeezed out of it. By the way, I hope load balancing solutions are not off-topic? :) Alex
On Thu, Mar 08, 2007 at 09:58:03PM +0800, Eden Li wrote:> Probably off topic, but which version of ferret are you running? The > latest incarnation (0.10.14, I think?) was segfaulting about every 10th > query, bringing down mongrel along with it.That's a common problem with 0.10.x; however, I haven't experienced it myself in this app (which uses 0.10.14), I guess because the index is mostly read-only. Recent 0.11.x versions of Ferret seem to fix large parts of these issues. Jens -- Jens Krämer webit! Gesellschaft für neue Medien mbH Schnorrstraße 76 | 01069 Dresden Telefon +49 351 46766-0 | Telefax +49 351 46766-66 kraemer at webit.de | www.webit.de Amtsgericht Dresden | HRB 15422 GF Sven Haubold, Hagen Malessa
I agree - Java server processes can get HUGE. I work with WebLogic clusters and we constantly have to change the thread pools and stack size on the threads because they can get wayyy out of hand. Has anyone successfully created mongrel clusters spanning two or more physical servers? Just curious. Ryan From: mongrel-users-bounces at rubyforge.org [mailto:mongrel-users-bounces at rubyforge.org] On Behalf Of Alexey Verkhovsky Sent: Thursday, March 08, 2007 10:52 AM To: mongrel-users at rubyforge.org Subject: Re: [Mongrel] Memory leaks in my site
On Wed, 7 Mar 2007 23:35:36 -0700 "Alexey Verkhovsky" <alexey.verkhovsky at gmail.com> wrote:> Some further findings: > > A "Hello World" Rails application on my rather humble rig (Dell D620 laptop > running Ubuntu 6.10, Ruby 1.8.4 and Rails 1.2.2) can handle over 500 hits > per second on the following action: > > def say_hi > render :text => 'Hi!' > end > > It also doesn't leak any memory at all *when it is not overloaded*. E.g., > under maximum non-concurrent load (a single-threaded test client that fires the > next request immediately upon receiving a response to the previous one), it > stays up forever.Sigh, I'll test this too, but the solution is to set -n to something reasonable for your setup.> What I was thinking is that by uncoupling the request from its thread, you > can probably max out all capabilities (CPU, I/O, Rails) of a 4-core > commodity box with only 15-30 threads: 10-20 request handlers (that will > either serve static stuff or carry the request to the Rails queue), one > Rails handler that loops over the request queue, takes requests to Rails and > drops responses off in the response queue, and 5-10 response handlers (whose job > is simply to copy the Rails response from the response queue to the originating > sockets). > > Right now, as far as I understand the code, a request is carried all the way > through by the same thread. > > On second thoughts, this is asynchronous communication between threads > within the process. Far too much design and maintenance overhead for the > marginal benefits it may (or may not) bring. Basically, just me being stupid > by trying to be too smart. :)Exactly, I went through this design too using various queueing mechanisms, and Ruby's Thread primitives were just too damn slow. The best way to get max performance was to start a thread which handled the request and response. The fastest (by a small margin) was not using threads at all but instead using a select loop. The problem with that is then if your Rails code starts a Thread and doesn't do it right, Ruby's idiotic deadlock detection kicks in because it considers the select calls part of the deadlock detection. Now that fastthread is out though, it might be worth checking out the queueing model to see if it's still slow as hell or not. Ultimately I wanted a single thread that listened for connections and built the HttpRequest/Response objects using select, then fired these off to a queue of N processor threads. Queue was just too damn slow to pull it off, so it didn't work out.> > Until Ruby's IO, GC, and threads improve drastically you'll keep hitting > > these problems. > > Yes. Meantime, the recipe apparently is "serve static stuff through an > upstream web server, and use smaller values of --num-procs". A Mongrel that > only receives dynamic requests is, essentially, a single-threaded process > anyway. The only reason to have more than one (1) thread is so that other > requests can queue up while it's doing something that takes time. Cool.Not really: if you set Mongrel to handle only -n 1 then your web server will randomly kill off mongrels and connections from clients whenever you run out of backends to service requests. The nginx author is currently working on a mechanism to allow you to queue the requests at the proxy server before sending them back. Also, -n 1 will not work for all the other Ruby web frameworks that don't have this locking problem.
All of the other frameworks are thread safe (even ones that use AR) and can run multiple requests concurrently.> By the way, is Ruby 1.9 solving all of these issues?No idea, considering 1.9 is decades out at its current pace. You should go look at JRuby if you want something modern that's able to run Rails right now (and Mongrel).> > No one who is sane is trying to really run a Rails app on a 64 MB VPS -- > that's just asking for a lot of pain. > Well, entry-level slices on most Rails VPS services are 64 Mb. > My poking around so far seems to say "it's doable, but you need to tune it".No, people need to quit thinking that this will work the way it did when they dumped their crappy PHP code into a directory and prayed Apache would run it. Even in those situations, that ease of deployment and ability to run on small installations was an illusion. Talk to anyone who does serious PHP hosting and they'll tell you it gets much more complicated. Sorry to be so harsh, but as the saying goes, you can have it cheap, fast, or reliable: pick one. (Yes, one, I'm changing it. :-) However, why are people complaining about 64M of RAM for a Mongrel process? C'mon, Java processes typically hit the 600M or even 2G ranges and that's just commonplace. If you want small-scale hosting, you'll have to try a different solution entirely. Even better, why are people complaining about the memory footprint of RAILS on the MONGREL mailing list? These same problems existed before Mongrel, and when you complain here there's nothing I can really do. You want the RAM to go down in Rails, then start writing the patches to get it to go down. I'm sure there's just oodles of savings to be made inside Rails. -- Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu http://www.zedshaw.com/ http://www.awprofessional.com/title/0321483502 -- The Mongrel Book http://mongrel.rubyforge.org/
On Thu, 8 Mar 2007 09:52:09 -0700 "Alexey Verkhovsky" <alexey.verkhovsky at gmail.com> wrote:> On 3/8/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote: > > C'mon, Java processes typically hit the 600M or even > > 2G ranges and that's just commonplace. > > The Java runtime is a bloat. But its processes are natively multi-threaded and > can serve multiple apps from the same server. So, one Java process should > probably be compared with about 5-10 Rails processes in terms of RAM > footprint. It's still a bloat. > > > Even better, why are people complaining about the memory footprint of > > RAILS on the MONGREL mailing list? > > Not guilty. In this thread, I am complaining about Mongrel's memory > footprint, not "Hello, World"'s. :) > Besides, I'm not even complaining, just trying to understand the ins and > outs of this stack, and what can or cannot be squeezed out of it.No, you're still testing Rails' memory usage. If you want to test Mongrel's, you need to write a non-rails Mongrel handler that does hello world. For example, here are the tests I run to make sure that I haven't destroyed Mongrel's memory usage:

SERVER    TEST      VSZ     RSS
Mongrel   Hello     20844   10564
Mongrel   Rails     56092   39512
Webrick   Hello     15180   4916
Webrick   Rails     56548   39924
ruby      IRB       4756    3252
ruby      IRB_Thr   12952   3376

IRB_Thr is just irb run with a single thread in sleep. The two Hello apps are pretty close to the same, and the Rails apps are the exact same and show usage after handling the same number of requests. What you can see is even though Webrick is smaller than Mongrel, it still ends up being the same size when it runs under Rails. That's why I say look at Rails if you want to save memory. Even if you stripped Mongrel down to the absolute essentials, the absolute impossible best is IRB_Thr. If you could get Mongrel down to Webrick's size, the test also shows it doesn't matter since the memory goes up to the same as when running mongrel anyway. No matter what, if you want to save memory you've gotta look at Rails and not Mongrel. -- Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu http://www.zedshaw.com/ http://www.awprofessional.com/title/0321483502 -- The Mongrel Book http://mongrel.rubyforge.org/
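VSZ/RSS pairs like those in the table can be read straight from ps on Linux; a minimal example, with the pid as a placeholder:

    # virtual (VSZ) and resident (RSS) sizes, in KB, for one mongrel pid
    ps -o vsz,rss,cmd -p 12345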
On Thu, 8 Mar 2007 11:19:55 -0600 "Ryan Richards" <abstractryan at gmail.com> wrote:> I agree - Java server processes can get HUGE. I work with WebLogic clusters > and we constantly have to change the thread pools and stack size on the > threads because they can get wayyy out of hand. > > Has anyone successfully created mongrel clusters spanning two or more > physical servers? Just curious.Yep, many folks do this. Just run the mongrels across the servers and point your webserver or load balancer at them. -- Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu http://www.zedshaw.com/ http://www.awprofessional.com/title/0321483502 -- The Mongrel Book http://mongrel.rubyforge.org/
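For the cross-server setup Zed describes, the web-server half can be as small as an nginx upstream block; the hosts and ports below are invented for illustration:

    # hypothetical nginx fragment: four mongrels across two boxes
    upstream mongrels {
        server 10.0.0.1:8000;
        server 10.0.0.1:8001;
        server 10.0.0.2:8000;
        server 10.0.0.2:8001;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://mongrels;
        }
    }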