I think my rails app has a slow memory leak that degrades performance over the course of several days. I'd like to get to the bottom of it, but in the meantime I'd prefer to just bandage it and force the fcgi library to start a new instance of my app every so often. Is there a clean way to do this? Calling break in the each_cgi loop causes an error. Sending SIGUSR1, which looks like the way Apache tells FCGI to quit, also causes an error.

Any other ideas?

miles
Miles,

I'm not sure there is a *clean* way to do this. I modified my dispatcher.fcgi to basically exit after x period of time (I'm using 3600 seconds, one hour), as long as it hadn't served a request within the last thirty seconds or so. It still ends up generating a warning message in the Apache error log that looks like this:

[Wed Mar 9 15:41:18 2005] [warn] FastCGI: (dynamic) server "/usr/www/dev/sites/dispatch.fcgi" (pid 64454) terminated by calling exit with status '1'

I'd be happy to share the code, but it's not really all that complicated. This sort of randomly popped into my head, but maybe you could try using "exit" instead of break to get out of the each_cgi loop in your code.

Hope this helps.

Cheers,
Ben

On Wed, 09 Mar 2005 14:41:23 -0800, Miles Egan <miles-PVYhMsubmREAvxtiuMwx3w@public.gmane.org> wrote:
> I think my rails app has a slow memory leak that degrades performance
> over the course of several days. I'd like to get to the bottom of it
> but in the meantime I'd prefer to just bandage it and force the fcgi
> library to start a new instance of my app every so often. Is there a
> clean way to do this? Calling break in the each_cgi loop causes an
> error. Sending SIGUSR1, which looks like the way apache tells FCGI to
> quit, also causes an error.
>
> Any other ideas?
>
> miles
> _______________________________________________
> Rails mailing list
> Rails-1W37MKcQCpIf0INCOvqR/iCwEArCW2h5@public.gmane.org
> http://lists.rubyonrails.org/mailman/listinfo/rails
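[Editorial note from the archive: the "exit after an hour, if idle" approach Ben describes can be sketched roughly like this. The method and constant names (should_exit?, MAX_LIFETIME, IDLE_GRACE) are illustrative, not part of Rails or the fcgi library, and the each_cgi loop shown in comments is how a 2005-era dispatch.fcgi typically looked; adapt to your dispatcher.]

```ruby
# Assumption: Ben's dispatcher exits once the process is older than an
# hour AND the previous request was more than ~30 seconds ago, letting
# mod_fastcgi spawn a fresh process.
MAX_LIFETIME = 3600  # seconds before the process is eligible to exit
IDLE_GRACE   = 30    # seconds of idleness required before exiting

# Pure decision logic, so it can be tested outside FastCGI.
def should_exit?(started_at, last_request_at, now)
  (now - started_at) > MAX_LIFETIME && (now - last_request_at) > IDLE_GRACE
end

# Inside dispatch.fcgi the loop would look something like this
# (requires the fcgi gem and Rails' Dispatcher; not run here):
#
#   started_at      = Time.now
#   last_request_at = started_at
#   FCGI.each_cgi do |cgi|
#     previous        = last_request_at
#     last_request_at = Time.now
#     Dispatcher.dispatch(cgi)
#     # "exit" (not "break") cleanly leaves the loop, as Ben suggests.
#     exit 0 if should_exit?(started_at, previous, Time.now)
#   end
```

Exiting with a nonzero-free status still produces the mod_fastcgi warning Ben quotes, since the process manager sees the server terminate on its own.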
On Wed, 9 Mar 2005, Miles Egan wrote:
> Sending SIGUSR1, which looks like the way apache tells FCGI to
> quit, also causes an error.

apache?

-a
--
==============================================================================
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| When you do something, you should burn yourself completely, like a good
| bonfire, leaving no trace of yourself. --Shunryu Suzuki
==============================================================================
On Wed, 9 Mar 2005, Miles Egan wrote:
> I'd prefer to just bandage it and force the fcgi library to start a
> new instance of my app every so often. Is there a clean way to do this?

http://www.fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html#FastCgiConfig

-a
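[Editorial note from the archive: the FastCgiConfig directive Ara links to lets Apache's dynamic process manager reap FastCGI instances for you, which bounds how long a leaking dispatcher lives. A hypothetical httpd.conf fragment, with option values chosen for illustration; check the linked documentation for the exact semantics on your mod_fastcgi version:]

```apache
# Sketch only -- tune against the mod_fastcgi docs linked above.
# The killing policy runs every -killInterval seconds and can reap
# idle dynamic application instances; capping -maxClassProcesses
# limits how many dispatcher processes (and how much leaked memory)
# can accumulate per application.
FastCgiConfig -minProcesses 1 \
              -maxClassProcesses 4 \
              -killInterval 300
```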
> > I think my rails app has a slow memory leak that degrades performance over
> > the course of several days. I'd like to get to the bottom of it but in the

I thought I had a similar problem at one time, but it ended up being my log files. My production.log file grew to several hundred megabytes and my app slowed to a crawl. I changed the RAILS_DEFAULT_LOGGER line in my environment.rb file to:

RAILS_DEFAULT_LOGGER = Logger.new("#{RAILS_ROOT}/log/#{RAILS_ENV}.log", 5, 1000*1024)

which starts a new production.log file once the current one grows to about a megabyte, and keeps 5 archived copies. After I did this my app hasn't had a problem. This is something that was probably obvious to others, but it didn't occur to me until my app started slowing down.

--austin
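[Editorial note from the archive: the rotation Austin uses is plain Ruby stdlib Logger behavior, where the second argument is the number of old files to keep and the third is the size threshold in bytes. A minimal runnable sketch against a temp file rather than a Rails log directory:]

```ruby
require "logger"
require "tmpdir"

# Stand-in for "#{RAILS_ROOT}/log/#{RAILS_ENV}.log" so this runs anywhere.
log_path = File.join(Dir.mktmpdir, "production.log")

# Keep 5 archived files; rotate once the current file passes ~1 MB
# (1000 * 1024 bytes), matching the arguments in Austin's line.
logger = Logger.new(log_path, 5, 1000 * 1024)
logger.info("request served")
logger.close

puts File.read(log_path).include?("request served") ? "logged" : "missing"
# → prints "logged"
```

Rotated files get numeric suffixes (production.log.0, production.log.1, ...) as each size threshold is crossed.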