Hello,

I'm running a Rails application which must sort and manipulate a lot of data
which is loaded in memory.
The Rails app runs on 2 Mongrel processes.
When I first load the app, both are 32 MB in memory.
After some days, both are between 200 MB and 300 MB.

My question is: is there some kind of garbage collector in Mongrel?
I never see the two Mongrel processes' memory footprint decrease.
Is it normal?

I use Mongrel 1.0.1 with Rails 1.2.3 on Debian.

Best regards,
Thomas.
On 11/5/07, Thomas Balthazar <thomas.tmp at gmail.com> wrote:
> I'm running a Rails application which must sort and manipulate a lot of data
> which is loaded in memory.
> The Rails app runs on 2 Mongrel processes.
> When I first load the app, both are 32 MB in memory.
> After some days, both are between 200 MB and 300 MB.
>
> My question is: is there some kind of garbage collector in Mongrel?
> I never see the two Mongrel processes' memory footprint decrease.
> Is it normal?

Ruby is a garbage-collected language. Ruby has a conservative mark
and sweep garbage collector.

Memory usage like that is probably not a Mongrel issue (unless you are
generating _very_ large responses in your application). It's likely
an issue with your code. What version of Ruby are you using? Are you
using any extensions?

Kirk Haines
On 11/5/07, Kirk Haines <wyhaines at gmail.com> wrote:
> On 11/5/07, Thomas Balthazar <thomas.tmp at gmail.com> wrote:
> > I never see the two Mongrel processes' memory footprint decrease.
> > Is it normal?
>
> Ruby is a garbage-collected language. Ruby has a conservative mark
> and sweep garbage collector.

But Ruby processes never release memory back to the operating system.
So, the fact that its RSS never goes down is normal.

In normal circumstances, Mongrel should grow up to some point around
60-120 MB and stay there. 300 MB and growing is a sure sign you have a
memory leak somewhere.

--
Alexey Verkhovsky
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]
On 11/5/07, Kirk Haines <wyhaines at gmail.com> wrote:
> Ruby is a garbage-collected language. Ruby has a conservative mark
> and sweep garbage collector.
>
> Memory usage like that is probably not a Mongrel issue (unless you are
> generating _very_ large responses in your application). It's likely
> an issue with your code. What version of Ruby are you using? Are you
> using any extensions?
>
> Kirk Haines
> _______________________________________________
> Mongrel-users mailing list
> Mongrel-users at rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-users

Hello Kirk,

Thanks for your answer.
I'm using ruby 1.8.5 (2006-08-25) [i486-linux].
The Rails app uses these plugins:
* acts_as_taggable_on_steroids
* attachment_fu
* exception_notification
* localization

What kind of issue in my code could use that much memory?
If I load lots of records with ActiveRecord, aren't they "unloaded"
at some point?

Thanks in advance for your help.
Thomas.
On 11/5/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> But Ruby processes never release memory back to the operating system.
> So, the fact that its RSS never goes down is normal.
>
> In normal circumstances, Mongrel should grow up to some point around
> 60-120 MB and stay there. 300 MB and growing is a sure sign you have a
> memory leak somewhere.

Hello Alexey,

> 300 MB and growing is a sure sign you have a memory leak somewhere.

What would you suggest I investigate?

Thanks,
Thomas.
On 11/5/07, Thomas Balthazar <thomas.tmp at gmail.com> wrote:
> Thanks for your answer.
> I'm using ruby 1.8.5 (2006-08-25) [i486-linux].
> The Rails app uses these plugins:
> * acts_as_taggable_on_steroids
> * attachment_fu
> * exception_notification
> * localization
>
> What kind of issue in my code could use that much memory?
> If I load lots of records with ActiveRecord, aren't they "unloaded"
> at some point?

Does your code or any of those plugins use Array#shift? There was a
bug with Array#shift which still existed in 1.8.5 which basically left
stuff inside the array data structure after a shift, so that those
things didn't get GCed when they should have. It's a sneaky bug that
can easily eat a lot of memory.

Otherwise, can you start a test instance of your application and then
test it to see if there are certain actions which cause the memory
growth? That would help you pinpoint where the likely problems are.
Just use ab or httperf to send a large number of requests to specific
URLs in your app, and see how RAM usage changes as you do that.

Kirk Haines
We're seeing that all the time with our Rails apps. I'm looking at four
processes right now in the 700 to 900 MB range.

My first guess is that it's something in Rails or our app. After all,
that's where most of the code is. You might try running requests
through WEBrick on a test server to see if the leak still occurs. If
so, then you know at least part of it is Rails.

There's always nightly restarts ;) Not my choice on how to do things,
but hey, it'll have to hold till I can fix bigger things.

What's the Ruby GC like? Circular references a problem?

Thomas Balthazar wrote:
> Hello,
>
> I'm running a Rails application which must sort and manipulate a lot
> of data which is loaded in memory.
> The Rails app runs on 2 Mongrel processes.
> When I first load the app, both are 32 MB in memory.
> After some days, both are between 200 MB and 300 MB.
>
> My question is: is there some kind of garbage collector in Mongrel?
> I never see the two Mongrel processes' memory footprint decrease.
> Is it normal?
>
> I use Mongrel 1.0.1 with Rails 1.2.3 on Debian.
>
> Best regards,
> Thomas.
Hi guys,

Along the lines of Thomas's question, I've noticed that my Mongrel
Rails processes start at around 50 MB and creep up to around 100 MB
(or a little over) pretty soon after being used. Is this something
other folks are seeing (i.e. standard Rails overhead), or does it
sound specific to my app?

Also, if anyone has any tips on finding memory leaks in Mongrel,
they'd be much appreciated. I've played with watching ObjectSpace.
Is this the best way?

Kirk: thanks for the tip on Array#shift with Ruby 1.8.5. I'll keep
an eye out for this.

Thanks,
Pete

On Nov 5, 2007, at 8:09 AM, Thomas Balthazar wrote:
> Hello Alexey,
>
>> 300 MB and growing is a sure sign you have a memory leak somewhere.
>
> What would you suggest I investigate?
>
> Thanks,
> Thomas.
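A minimal sketch of the ObjectSpace approach Pete mentions: take a snapshot of live object counts by class, exercise the app (simulated here by just retaining some strings), and diff the snapshots to see which classes only ever grow. The helper name and the simulated "leak" are made up for illustration; this is plain MRI Ruby, no Rails required.

```ruby
# Count live objects by class; diffing two snapshots between requests
# shows which classes are accumulating.
def object_counts
  counts = Hash.new(0)
  ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
  counts
end

before = object_counts
retained = Array.new(1000) { "x" * 10 }  # simulated leak: strings held in memory
after = object_counts

growth = after[String] - before[String]
puts "String objects grew by #{growth}"
```

In a real app you would take the snapshots before and after hitting a suspect action, then look at the classes with the biggest deltas.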
On 11/5/07, Pete DeLaurentis <pete at nextengine.com> wrote:
> Along the lines of Thomas's question, I've noticed that my Mongrel
> Rails processes start at around 50 MB and creep up to around 100 MB
> (or a little over) pretty soon after being used. Is this something
> other folks are seeing (i.e. standard Rails overhead), or does it
> sound specific to my app?

There is probably a jump after the first request, then a slow creep
upward for a bit, then it should stabilize. If it never stabilizes,
then you have something in your code somewhere which is leaking.

> Also, if anyone has any tips on finding memory leaks in Mongrel,
> they'd be much appreciated. I've played with watching ObjectSpace.
> Is this the best way?

There are some tools that help, but yeah, mostly it's by using
ObjectSpace and looking through your code. If the code uses an
extension, it's easy for an extension to have a leak that doesn't show
up so easily, though. I originally found the Array#shift leak by
using valgrind on Ruby, since all of that is C code.

> Kirk: thanks for the tip on Array#shift with Ruby 1.8.5. I'll keep
> an eye out for this.

If this bites you, you can migrate to the most recent 1.8.6, or you
can change your code to not use shift. Generally when shift is used,
push is being used to stick things on one end of the array while shift
pulls them off the front. Changing that to use unshift and pop gets
around the problem.

Kirk Haines
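For reference, Kirk's unshift/pop substitution looks like this; it preserves FIFO ordering while avoiding Array#shift entirely (the symbols are arbitrary examples):

```ruby
# FIFO queue without Array#shift: enqueue with unshift (front of the
# array), dequeue with pop (back). Ordering is the same as push/shift,
# but it sidesteps the 1.8.5 Array#shift leak described above.
queue = []
queue.unshift(:first)    # enqueue
queue.unshift(:second)
queue.unshift(:third)

first_out  = queue.pop   # dequeue
second_out = queue.pop
```

Only which end of the array does the growing changes; callers see the same first-in, first-out behavior.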
On 11/5/07, Pete DeLaurentis <pete at nextengine.com> wrote:
> Along the lines of Thomas's question, I've noticed that my Mongrel
> Rails processes start at around 50 MB and creep up to around 100 MB

Set --num-procs lower than the default of 1024, and it won't be happening.

--
Alexey Verkhovsky
CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
RubyWorks [http://rubyworks.thoughtworks.com]
On 11/5/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:
> On 11/5/07, Pete DeLaurentis <pete at nextengine.com> wrote:
> > Along the lines of Thomas's question, I've noticed that my Mongrel
> > Rails processes start at around 50 MB and creep up to around 100 MB
>
> Set --num-procs lower than the default of 1024, and it won't be happening.

It depends. One cause of that sort of creeping mem usage is having an
app that sees large numbers of concurrent threads, as you know, but
it's not the only cause.

If concurrent threads ARE a mem usage problem, one might try using
evented_mongrel out of the Swiftiply package.

http://swiftiply.swiftcore.org

Just run it in a test environment and see if it helps. For some apps,
it makes a big difference in that thread-related RAM creep.

Kirk Haines

P.S. Yes, I WILL have the patch to fix it for Mongrel > 1.0.1 today.
The end of my week/weekend got very busy with things that don't
involve computer screens.
At 09:17 AM 11/5/2007, you wrote:
> Does your code or any of those plugins use Array#shift? There was a
> bug with Array#shift which still existed in 1.8.5 which basically left
> stuff inside the array data structure after a shift, so that those
> things didn't get GCed when they should have. It's a sneaky bug that
> can easily eat a lot of memory.
>
> Otherwise, can you start a test instance of your application and then
> test it to see if there are certain actions which cause the memory
> growth? That would help you pinpoint where the likely problems are.
> Just use ab or httperf to send a large number of requests to specific
> URLs in your app, and see how RAM usage changes as you do that.
>
> Kirk Haines

Thanks Kirk - I guess I'm totally OT at this point, but I hadn't heard
about this bug before. From your description, this is a problem
specific to the underlying C code implementing shift, which is not
found in related functions? So "array.slice!(0)" would be identical in
function to shift but not contain this leak?

Thanks again,

Steve
On 11/5/07, Steve Midgley <public at misuse.org> wrote:
> Thanks Kirk - I guess I'm totally OT at this point, but I hadn't heard
> about this bug before. From your description, this is a problem
> specific to the underlying C code implementing shift, which is not
> found in related functions? So "array.slice!(0)" would be identical in
> function to shift but not contain this leak?
>
> Thanks again,
>
> Steve

Hello,

Thanks everybody for all this information.
I'll run some tests and keep you posted.
I won't have the time to run those tests this week, but I won't forget
to post the results to the list.

Best,
Thomas.
On 11/5/07, Steve Midgley <public at misuse.org> wrote:
> Thanks Kirk - I guess I'm totally OT at this point, but I hadn't heard
> about this bug before. From your description, this is a problem
> specific to the underlying C code implementing shift, which is not
> found in related functions? So "array.slice!(0)" would be identical in
> function to shift but not contain this leak?

Yeah. It looked to me like whoever wrote the original array.c code
just forgot something when writing the code, because it's just #shift
that has the problem.

This bug was fixed, but not until 1.8.6. I know it is fixed as of at
least the last couple of patch releases. I am unsure if it was fixed
in the original 1.8.6 release, however.

Kirk Haines
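Steve's slice!(0) substitution, for the record: from the caller's point of view it behaves like shift, returning the first element and removing it in place, while going through a different path in array.c:

```ruby
# Array#slice!(0) removes and returns the element at index 0, just as
# Array#shift does, but via a separate implementation, which is why it
# was suggested as a drop-in replacement on leaky 1.8.x builds.
a = [1, 2, 3]
first = a.slice!(0)
# first is now 1, and a is [2, 3]
```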
What is a good value for --num-procs for Rails applications, since
these are single-threaded? Does it depend on how fast the
application responds to users?

Thanks,
Pete

On Nov 5, 2007, at 9:25 AM, Alexey Verkhovsky wrote:
> On 11/5/07, Pete DeLaurentis <pete at nextengine.com> wrote:
>> Along the lines of Thomas's question, I've noticed that my Mongrel
>> Rails processes start at around 50 MB and creep up to around 100 MB
>
> Set --num-procs lower than the default of 1024, and it won't be happening.
>
> --
> Alexey Verkhovsky
> CruiseControl.rb [http://cruisecontrolrb.thoughtworks.com]
> RubyWorks [http://rubyworks.thoughtworks.com]
On 11/5/07, Pete DeLaurentis <pete at nextengine.com> wrote:
> What is a good value for --num-procs for Rails applications, since
> these are single-threaded? Does it depend on how fast the
> application responds to users?

It's application-specific. Your sweet spot is going to be big enough
that you don't experience capacity starvation during load bursts, when
you have temporary periods where more traffic is coming in than you
are clearing, but small enough that you don't waste resources.
Experimentation will probably be required to find the best balance.

If you try evented_mongrel, you don't need to worry about num_procs.
It's irrelevant for evented_mongrel.

Kirk Haines
Which image processor are you using for attachment_fu? If you're
using RMagick, it is notorious for memory leaks. Look at mini_magick
or ImageScience as a replacement.

=Will Green

Kirk Haines wrote:
> Does your code or any of those plugins use Array#shift? There was a
> bug with Array#shift which still existed in 1.8.5 which basically left
> stuff inside the array data structure after a shift, so that those
> things didn't get GCed when they should have. It's a sneaky bug that
> can easily eat a lot of memory.
>
> Otherwise, can you start a test instance of your application and then
> test it to see if there are certain actions which cause the memory
> growth? That would help you pinpoint where the likely problems are.
> Just use ab or httperf to send a large number of requests to specific
> URLs in your app, and see how RAM usage changes as you do that.
>
> Kirk Haines
If you're using attachment_fu and send_file, then Mongrel is handling
the sending of files. I had the same problem, spiking memory usage,
until I switched to using x_send_file. It pushes the file downloads to
Apache instead of Mongrel. My memory usage has never spiked since...

The XSendFile plugin:
http://tn123.ath.cx/mod_xsendfile/

Plugin to simplify using x-sendfile:
http://john.guen.in/past/2007/4/17/send_files_faster_with_xsendfile/

(e)

Thomas Balthazar wrote:
> I'm using ruby 1.8.5 (2006-08-25) [i486-linux].
> The Rails app uses these plugins:
> * acts_as_taggable_on_steroids
> * attachment_fu
> * exception_notification
> * localization
>
> What kind of issue in my code could use that much memory?
> If I load lots of records with ActiveRecord, aren't they "unloaded"
> at some point?
>
> Thanks in advance for your help.
> Thomas.
On 11/5/07, Matte Edens <matte at ruckuswireless.com> wrote:
> If you're using attachment_fu and send_file, then Mongrel is handling
> the sending of files. I had the same problem, spiking memory usage,
> until I switched to using x_send_file. It pushes the file downloads to
> Apache instead of Mongrel. My memory usage has never spiked since...

This falls under the category of creating HTTP responses. If you are
using send_file within Mongrel, then the response object that is
created will contain all of the file contents. If your file is small
to moderately sized, that's no big deal, but if you start pushing
large files around, it will have an impact on your RAM usage. Pushing
huge files via send_file necessarily implies huge RAM usage.

Don't do that. x_send_file is one way to avoid doing that.

Kirk Haines
Hi Kirk,

I'm wondering if we're being hit by this issue in our application. We
generate a lot of thumbnails on the fly and use send_file to transfer
the data back to the browsers.

Checking the Rails docs for send_file, it indicates that, unless you
use the option :stream => false, the file will be read into a 4096-byte
buffer and streamed to the client.

http://api.rubyonrails.com/classes/ActionController/Streaming.html#M000093

Is this a bug in send_file?

Cheers

Dave

On 06/11/2007, at 8:39 AM, Kirk Haines wrote:
> This falls under the category of creating HTTP responses. If you are
> using send_file within Mongrel, then the response object that is
> created will contain all of the file contents. If your file is small
> to moderately sized, that's no big deal, but if you start pushing
> large files around, it will have an impact on your RAM usage. Pushing
> huge files via send_file necessarily implies huge RAM usage.
>
> Don't do that. x_send_file is one way to avoid doing that.
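Taken on its own, the 4096-byte streaming the docs describe is just a chunked copy loop along these lines (a generic sketch, not Rails' actual send_file code; StringIO stands in for the file on disk and for the client socket):

```ruby
require 'stringio'

# Generic chunked copy: read the source 4096 bytes at a time and
# forward each chunk, so the whole file is never held in one string.
def stream_copy(src, dst, chunk_size = 4096)
  while (buf = src.read(chunk_size))
    dst.write(buf)
  end
end

src = StringIO.new("x" * 10_000)   # stand-in for a file on disk
dst = StringIO.new                 # stand-in for the client socket
stream_copy(src, dst)
```

The catch, per this thread, is what happens downstream: if the server buffers the whole response before sending it, chunked reads alone don't cap memory use.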
Hi Kirk,

Does Mongrel need to be multi-threaded at all if you're working with
Rails applications?

I use Lighttpd's mod_proxy_core to distribute incoming requests
between 8 mongrels. If mongrel A is working on another request, I
want mongrel B to pick up the request right away.

If all 8 mongrels are busy, I believe Lighty retries the cycle a few
times. So, who needs threads? I'm guessing this is a naive question,
but I'd appreciate it if you'd set me straight.

Once I get a breather in my release schedule, I plan to look at a
switch to evented mongrel. Performance benchmarks + community
feedback look very good. But I still need to get a better grasp on
how it works + the differences from standard Mongrel.

Thanks,
Pete

On Nov 5, 2007, at 11:02 AM, Kirk Haines wrote:
> It's application-specific. Your sweet spot is going to be big enough
> that you don't experience capacity starvation during load bursts, when
> you have temporary periods where more traffic is coming in than you
> are clearing, but small enough that you don't waste resources.
> Experimentation will probably be required to find the best balance.
>
> If you try evented_mongrel, you don't need to worry about num_procs.
> It's irrelevant for evented_mongrel.
>
> Kirk Haines
On 11/6/07, Pete DeLaurentis <pete at nextengine.com> wrote:
> Does Mongrel need to be multi-threaded at all if you're working with
> Rails applications?

No.

> If all 8 mongrels are busy, I believe Lighty retries the cycle a few
> times. So, who needs threads? I'm guessing this is a naive question,
> but I'd appreciate it if you'd set me straight.

evented_mongrel still queues the request, but it does so without
creating any threads, so there isn't the thread-related RAM growth.

> Once I get a breather in my release schedule, I plan to look at a
> switch to evented mongrel. Performance benchmarks + community
> feedback look very good. But I still need to get a better grasp on
> how it works + the differences from standard Mongrel.

My intention is that switching to evented_mongrel or
swiftiplied_mongrel is transparent from the perspective of the
application (or whatever is running inside a Mongrel handler).

Kirk Haines
Hi,

On 5-Nov-07, at 1:38 PM, Kirk Haines wrote:
> Yeah. It looked to me like whoever wrote the original array.c code
> just forgot something when writing the code, because it's just #shift
> that has the problem.
>
> This bug was fixed, but not until 1.8.6. I know it is fixed as of at
> least the last couple of patch releases. I am unsure if it was fixed
> in the original 1.8.6 release, however.

It isn't fixed in the ruby that ships with Leopard:
1.8.6 (2007-06-07 patchlevel 36) [universal-darwin9.0]

This hack will fix things:

    class Array
      alias :naughty_shift :shift

      def shift
        result = self.first
        self[0] = nil   # This is the 'magic'
        self.naughty_shift
        result
      end
    end

----
Bob Hutchison -- tumblelog at http://www.recursive.ca/so/
Recursive Design Inc. -- weblog at http://www.recursive.ca/hutch
http://www.recursive.ca/ -- works on
http://www.raconteur.info/cms-for-static-content/home/
On 11/6/07, Bob Hutchison <hutch at recursive.ca> wrote:
> It isn't fixed in the ruby that ships with Leopard:
> 1.8.6 (2007-06-07 patchlevel 36) [universal-darwin9.0]

Ugh. IIRC I checked it with the patch release after 36, and it was
fixed there.

> This hack will fix things:
>
>     class Array
>       alias :naughty_shift :shift
>
>       def shift
>         result = self.first
>         self[0] = nil   # This is the 'magic'
>         self.naughty_shift
>         result
>       end
>     end

Note that this just _mostly_ fixes things. You still end up with
array elements in memory carrying around Qnils, but most of the time
that's good enough.

Kirk Haines
Why not build from source?

ruby 1.8.6 (2007-09-23 patchlevel 110) [i686-darwin8.10.1]

On 11/6/07, Kirk Haines <wyhaines at gmail.com> wrote:
> On 11/6/07, Bob Hutchison <hutch at recursive.ca> wrote:
> > It isn't fixed in the ruby that ships with Leopard:
> > 1.8.6 (2007-06-07 patchlevel 36) [universal-darwin9.0]
>
> Ugh. IIRC I checked it with the patch release after 36, and it was
> fixed there.

--
geoff
On Mon, 5 Nov 2007 09:55:34 -0600
"Alexey Verkhovsky" <alexey.verkhovsky at gmail.com> wrote:
> But Ruby processes never release memory back to the operating system.
> So, the fact that its RSS never goes down is normal.
>
> In normal circumstances, Mongrel should grow up to some point around
> 60-120 MB and stay there. 300 MB and growing is a sure sign you have a
> memory leak somewhere.

Ehem, s/Mongrel/Ruby/g on the above.

--
Zed A. Shaw
- Hate: http://savingtheinternetwithhate.com/
- Good: http://www.zedshaw.com/
- Evil: http://yearofevil.com/
On Mon, 5 Nov 2007 17:06:01 +0100
"Thomas Balthazar" <thomas.tmp at gmail.com> wrote:
> Hello Kirk,
>
> Thanks for your answer.
> I'm using ruby 1.8.5 (2006-08-25) [i486-linux].
> The Rails app uses these plugins:
> * acts_as_taggable_on_steroids
> * attachment_fu
> * exception_notification
> * localization

Hmm, I seem to see this problem quite a lot with attachment_fu
installations. Just a hunch.

--
Zed A. Shaw
- Hate: http://savingtheinternetwithhate.com/
- Good: http://www.zedshaw.com/
- Evil: http://yearofevil.com/
On Tue, 6 Nov 2007 14:34:25 +1100
Dave Cheney <dave at cheney.net> wrote:
> Hi Kirk,
>
> I'm wondering if we're being hit by this issue in our application. We
> generate a lot of thumbnails on the fly and use send_file to transfer
> the data back to the browsers.
>
> Checking the Rails docs for send_file, it indicates that, unless you
> use the option :stream => false, the file will be read into a 4096-byte
> buffer and streamed to the client.
>
> http://api.rubyonrails.com/classes/ActionController/Streaming.html#M000093
>
> Is this a bug in send_file?

You shouldn't use send_file at all, really, because this streams the
full file into a StringIO so that Mongrel can then send the StringIO
outside the Rails lock, and because Rails is inconsistent in how it
sends headers and the body.

You should be using either X-Sendfile or simply redirect to the real
image. If you need to auth the images, then check out some of the
auth-before-redirect modules available for various web servers.

--
Zed A. Shaw
- Hate: http://savingtheinternetwithhate.com/
- Good: http://www.zedshaw.com/
- Evil: http://yearofevil.com/
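A dependency-free sketch of the X-Sendfile approach Zed describes: the app answers with an empty body plus a header naming the file, and the front-end server (e.g. Apache with mod_xsendfile) streams the bytes itself. The path, content type, and helper name here are made up for illustration; only the X-Sendfile header convention is from the thread.

```ruby
# Build an X-Sendfile-style response: no file contents pass through
# the Ruby process, only a header telling the web server what to send.
def x_sendfile_response(path, content_type)
  headers = {
    "X-Sendfile"   => path,
    "Content-Type" => content_type,
  }
  [200, headers, ""]   # status, headers, empty body
end

status, headers, body = x_sendfile_response("/data/thumbs/42.jpg", "image/jpeg")
```

Because the body is empty, the Ruby process's memory use is independent of file size; the auth check can run in the app before the header is set.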
> If you need to auth the images, then check out some of the
> auth-before-redirect modules available for various web servers.

I think Danga's Perlbal was made for just this purpose.

Evan

On Nov 7, 2007 12:01 PM, Zed A. Shaw <zedshaw at zedshaw.com> wrote:
> You shouldn't use send_file at all, really, because this streams the
> full file into a StringIO so that Mongrel can then send the StringIO
> outside the Rails lock, and because Rails is inconsistent in how it
> sends headers and the body.
>
> You should be using either X-Sendfile or simply redirect to the real
> image. If you need to auth the images, then check out some of the
> auth-before-redirect modules available for various web servers.

--
Evan Weaver
Cloudburst, LLC
Hello,

I just wanted to tell the list that I've spent some time optimizing my
code a little. I've reworked some SQL queries, removed some parts of
Rails I wasn't using... Now, both Mongrel processes are stable at
150 MB each.

T.

On Nov 5, 2007 7:27 PM, Thomas Balthazar <thomas.tmp at gmail.com> wrote:
> Hello,
>
> Thanks everybody for all this information.
> I'll run some tests and keep you posted.
> I won't have the time to run those tests this week, but I won't forget
> to post the results to the list.
>
> Best,
> Thomas.