Following Hemant's suggestion, I upgraded to the git version of BDRb as per http://gnufied.org/2008/05/21/bleeding-edge-version-of-backgroundrb-for-better-memory-usage/ and yes, it is more stable and doesn't have the limit of a maximum of 256 jobs that can be queued. So that's great.

But one thing I've noticed is that my mongrels, which start off at about 50MB of memory, slowly swell and in about 12-24 hours end up at 200MB or even 300MB at times. I can restart Rails in a couple of seconds and the problem goes away (but it eventually returns). I wanted to know if anyone else is seeing this and what can be done to avoid it altogether. I am not using any other gems/plugins except googlecharts, and I haven't had this problem before.

Another question: is there a way to roll over the background*.log files once they hit a certain size, like 50 or 100MB? Eventually, isn't writing to a ginormous log file going to slow things down? Sort of like how, in Rails, we can set both a max log file size and the number of log files to keep?

Thanks,
Raghu
On Thu, Jul 3, 2008 at 10:22 PM, Raghu Srinivasan <raghu.srinivasan at gmail.com> wrote:

> Following Hemant's suggestion, I upgraded to the git version of BDRb [...]
> But one thing I've noticed is that my mongrels which start off at about
> 50MB of memory slowly start to swell and in about 12-24 hours end up at
> 200MB or even 300MB at times. [...] I am not using any other gems/plugins
> except googlecharts. I haven't had this problem before.

Hmm, do you figure this could be a problem with the BackgrounDRb client library? The server runs totally separately from Rails and shouldn't have anything to do with it. Anyway, try running your app under BleakHouse and see where this memory issue is coming from. I haven't seen this kind of problem in our production environment.

> Another question: is there a way to roll over the background*.log files
> once they hit a certain size, like 50 or 100MB? Eventually, isn't writing
> to a ginormous log file going to slow things down? Sort of like how, in
> Rails, we can set both a max log file size and the number of log files
> to keep?

Another option would be to disable logging altogether with the following option in the config file:

:backgroundrb:
  :debug_log: false
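A quick way to sanity-check the growth before setting up BleakHouse is to sample the mongrel's resident set size (RSS) over time. A minimal sketch, assuming a POSIX `ps` (Linux or Mac OS X); the sampling interval and use of the current process's PID are illustrative stand-ins:

```ruby
# Sample a process's resident memory (RSS) via ps. Here we sample our
# own PID as a stand-in for a mongrel's PID.
def rss_kb(pid)
  `ps -o rss= -p #{pid}`.to_i  # RSS in kilobytes; 0 if the pid is gone
end

pid = Process.pid
samples = []
3.times do
  samples << rss_kb(pid)
  sleep 0.5  # in practice you would sample every few minutes
end
puts samples.inspect  # a steadily climbing series suggests a leak
```

If the numbers climb without bound even under a constant request load, that points at a leak rather than normal heap growth, and BleakHouse can then tell you which objects are accumulating.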
1. I see it came out that way, but I didn't mean to say I was sure that the new BDRb version was causing my mongrel bloat. Sorry about that. The only change I made was the switch to this version and the 'packet' gem upgrade. Are you saying that BDRb should never affect Rails?

2. No, I actually find logging very valuable and don't want to disable it. I just don't want the log sizes to grow too big.

On Thu, Jul 3, 2008 at 10:26 AM, hemant <gethemant at gmail.com> wrote:

> Hmm, do you figure this could be a problem with the BackgrounDRb client
> library? The server runs totally separately from Rails and shouldn't
> have anything to do with it. Anyway, try running your app under BleakHouse
> and see where this memory issue is coming from. I haven't seen this kind
> of problem in our production environment.
> [...]
> Another option would be to disable logging altogether with the following
> option in the config file:
>
> :backgroundrb:
>   :debug_log: false
On Thu, Jul 3, 2008 at 11:05 PM, Raghu Srinivasan <raghu.srinivasan at gmail.com> wrote:

> 1. I see it came out that way, but I didn't mean to say I was sure that
> the new BDRb version was causing my mongrel bloat. Sorry about that. The
> only change I made was the switch to this version and the 'packet' gem
> upgrade. Are you saying that BDRb should never affect Rails?

Yes, BDRb should never affect Rails. In your mongrel, only the client libraries of BackgrounDRb are loaded, and they just hold the socket connection to the server. It's a very thin layer and shouldn't affect Rails. But anyway, I will take a hard look at this.

> 2. No, I actually find logging very valuable and don't want to disable it.
> I just don't want the log sizes to grow too big.

There is a solution: you can monkey patch log_worker.rb. Find this line:

  @log_file = Logger.new("#{RAILS_HOME}/log/backgroundrb_#{CONFIG_FILE[:backgroundrb][:port]}.log")

and change it to:

  @log_file = Logger.new("#{RAILS_HOME}/log/backgroundrb_#{CONFIG_FILE[:backgroundrb][:port]}.log", 'daily')

-- 
Let them talk of their oriental summer climes of everlasting conservatories; give me the privilege of making my own summer with my own coals.

http://gnufied.org
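For reference, the second argument in Hemant's patch uses the age-based rollover built into Ruby's stdlib Logger; Logger also supports size-based rotation like the Rails logger settings Raghu asked about. A small sketch of both forms (the log path here is illustrative, not BDRb's real one):

```ruby
require 'logger'
require 'tmpdir'

log_path = File.join(Dir.tmpdir, 'backgroundrb_example.log')  # illustrative path

# Age-based rollover: start a new file each period ('daily', 'weekly', 'monthly')
daily_log = Logger.new(log_path, 'daily')
daily_log.info('hello from the daily logger')
daily_log.close

# Size-based rollover: keep up to 10 files of ~50MB each, analogous to
# setting a max log size and file count in Rails
sized_log = Logger.new(log_path, 10, 50 * 1024 * 1024)
sized_log.info('hello from the size-capped logger')
sized_log.close
```

Either variant caps how large any single background*.log file can grow, so the monkey patch could take whichever form fits.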
On Thu, Jul 3, 2008 at 7:52 PM, Raghu Srinivasan <raghu.srinivasan at gmail.com> wrote:

> ... But one thing I've noticed is that my mongrels which start off at
> about 50MB of memory slowly start to swell and in about 12-24 hours end
> up at 200MB or even 300MB at times. [...] I am not using any other
> gems/plugins except googlecharts. I haven't had this problem before.

This is not related to BackgrounDRb, but yes, we see and have seen process growth like this. After very long investigations, we believe most of our problems were attributable to Ruby memory leaks. See here:

http://pennysmalls.com/2008/03/23/ruby-leaks-memory/

I recommend using Ruby 1.8.7 if you are able. See here:

http://pennysmalls.com/2008/06/29/rails-on-ruby-187/

Regards,
Stephen