I'm trying to figure out a possible memory leak problem I have in my application. I've tested both with Mongrel and WEBrick and the problem remains, so I think it's not a Mongrel problem, but I'm posting here to see if anybody can help me.

The Rails application is quite simple, with no special plugins (no RMagick, which has the known memory problem). It's a query whose data is output via an rxml template, so the XML Builder runs in the template. Actually I'm doing quite a lot of Ruby in the template, and the output file can get VERY big (it depends on the input data); the maximum is an output file of 35MB. When somebody does this very heavy search, the Mongrel (or WEBrick) process reaches 400MB and the memory is never ever freed again... My application has 6 mongrels, and if all the mongrels end up with very big queries I need to restart everything, having eaten up all the 2GB of RAM my box has.

I tested for memory leaks with Scott Laird's module, but I didn't find anything.

Is it a problem of the Rails/Ruby GC? Or is there something I'm missing?

My only hint is the Rails XML Builder functions, so I plan to try a normal RHTML template to see if anything changes. I know it's not a very neat solution (though, BTW, it is 2 to 3 times faster on very big XML files, from some benchmarks I've done)... but I'm desperate!

Thanks
Massimo
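For readers who haven't used Builder, here is a minimal sketch of the kind of rxml template being described; the model and field names are hypothetical, not taken from Massimo's application:

# app/views/search/results.rxml
# Builder accumulates the entire document in memory on the `xml`
# object, so a large result set becomes one very large Ruby string
# before a single byte reaches the socket.
xml.instruct!
xml.results do
  @records.each do |record|          # set by the controller action
    xml.record(:id => record.id) do
      xml.name record.name
      xml.description record.description
    end
  end
end

A 35MB response built this way means at least 35MB of live Ruby strings per request, and MRI's GC does not return freed heap pages to the operating system, which is consistent with the "never freed again" symptom.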
On Aug 19, 2007, at 02:53, Massimo Santoli wrote:

> I'm trying to figure out a possible memory leak problem I have in my
> application. I've tested both with Mongrel and WEBrick and the problem
> remains, so I think it's not a Mongrel problem, but I'm posting here
> to see if anybody can help me.
> [...]
> My only hint is the Rails XML Builder functions, so I plan to try a
> normal RHTML template to see if anything changes.

Massimo,

You should post this to the ruby-talk mailing list. Make sure to include the version of Ruby, the version of Rails, as well as any plugins and/or gems your app uses. If at all possible, try to include some code for analysis; many people on that list will jump right on it to tell you what might be causing the problem and a better way to go about it (if one exists).

Best,

~Wayne

s///g
Wayne E. Seguin
Sr. Systems Architect & Systems Administrator
On 8/19/07, Wayne E. Seguin <wayneeseguin at gmail.com> wrote:

> Massimo,
>
> You should post this to the ruby-talk mailing list. Make sure to
> include the version of Ruby, the version of Rails, as well as any
> plugins and/or gems your app uses. If at all possible, try to include
> some code for analysis; many people on that list will jump right on it
> to tell you what might be causing the problem and a better way to go
> about it (if one exists).
>
> Best,
>
> ~Wayne

I think rails-talk would be even more appropriate.

-- 
Chris Carter
concentrationstudios.com
brynmawrcs.com
We are finding that anytime our application sends back large files to the requestor, we start chewing up memory fast. We haven't determined the precise cause or how to solve it yet, but it sounds like the same kind of situation. In our case, the particular large files are usually not generated by Builder, but are typically actual files being downloaded/transferred (e.g. images, documents, etc.).

On 8/19/07, Chris Carter <cdcarter at gmail.com> wrote:

> [...]
> I think rails-talk would be even more appropriate.
On Fri, 31 Aug 2007 21:54:53 -0700, "Christopher Bailey" <chris at codeintensity.com> wrote:

> We are finding that anytime our application sends back large files to
> the requestor, we start chewing up memory fast. We haven't determined
> the precise cause or how to solve it yet, but it sounds like the same
> kind of situation. In our case, the particular large files are usually
> not generated by Builder, but are typically actual files being
> downloaded/transferred (e.g. images, documents, etc.).

This is because Mongrel collects your large file response into a StringIO so that it can keep Rails happy. After the StringIO is done and Rails leaves the locked section of code, Mongrel then shoves that out the door on the socket.

You are expecting to write a file, in Rails, using Ruby's crappy IO, and that it would go immediately onto the socket. That's probably why you're seeing this.

In reality, if it's a large file and you know where it is, then you should let a real web server handle it, or at a minimum write a Mongrel Handler to do the real heavy lifting.

There's plenty of information on doing this, but seriously, do not ever use send_file in Rails or similar. It's just a waste of resources, and even if you need to authenticate, you can use X-Sendfile in Apache or nginx; that'll let you auth someone and then send the file.

-- 
Zed A. Shaw
- Hate: http://savingtheinternetwithhate.com/
- Good: http://www.zedshaw.com/
- Evil: http://yearofevil.com/
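To make the X-Sendfile approach concrete, here is a minimal sketch, assuming Apache with mod_xsendfile in front of the mongrels (nginx uses its own X-Accel-Redirect header for the same job); the controller, model, and login_required filter are hypothetical stand-ins:

# app/controllers/downloads_controller.rb
class DownloadsController < ApplicationController
  before_filter :login_required   # Rails still performs authentication

  def show
    document = Document.find(params[:id])
    # Instead of reading the file into the Ruby process with send_file,
    # hand the front-end server the absolute path and return an empty
    # body; Apache streams the file and Mongrel never buffers the payload.
    response.headers['X-Sendfile']   = document.path
    response.headers['Content-Type'] = document.content_type
    render :nothing => true
  end
end

The request still passes through Rails for the auth check, but the multi-megabyte body never touches the StringIO buffering Zed describes.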
Now, I'm opening a different can of worms, but how do you suggest sending an image file from the database without using send_data, or does send_data not suffer from the same issues as send_file?

I tried using X-Sendfile and ran into issues where files weren't being pulled out of the DB correctly. (I should probably be able to fix this now, after I figured it out.)

On 9/1/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:

> [...]
> There's plenty of information on doing this, but seriously, do not
> ever use send_file in Rails or similar. It's just a waste of
> resources, and even if you need to authenticate, you can use
> X-Sendfile in Apache or nginx; that'll let you auth someone and then
> send the file.
On Sep 1, 2007, at 5:41 AM, Joey Geiger wrote:

> Now, I'm opening a different can of worms, but how do you suggest
> sending an image file from the database without using send_data, or
> does send_data not suffer from the same issues as send_file?
>
> I tried using X-Sendfile and ran into issues where files weren't being
> pulled out of the DB correctly. (I should probably be able to fix this
> now, after I figured it out.)

You should never store files in the database in the first place. send_data has the same leaky problem as send_file. The filesystem is already a database for files, and it is much more efficient at storing them than the database is.

Cheers-

-- 
Ezra Zygmuntowicz
-- Founder & Ruby Hacker
-- ez at engineyard.com
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)
Not necessarily so, Ezra. Storing images in the database is perfectly legitimate. However, just like Rails HTML views, you could implement caching of the images on the filesystem (i.e. write them to both the FS and the DB). Whatever action "renders" the image could take care of caching on the FS, serving the FS version if it has, for example, the same MD5 hash as the DB version.

Yes, performance will be a bit lower than pure FS, but backups are a whole lot simpler (just back up and restore the DB). Besides, servers are cheap compared to developers (just ask the 37signals guys), right?

-- 
=Will Green

Find out why this email is 5 sentences or less at http://five.sentec.es/
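A rough sketch of the caching scheme Will describes might look like the following; all names are hypothetical, and it assumes an Image model that stores the blob in a `data` column alongside a precomputed `md5` column:

require 'digest/md5'
require 'fileutils'

class ImagesController < ApplicationController
  CACHE_DIR = File.join(RAILS_ROOT, 'public', 'image_cache')

  def show
    image = Image.find(params[:id])
    FileUtils.mkdir_p(CACHE_DIR)
    path = File.join(CACHE_DIR, "#{image.id}.jpg")

    # Refresh the filesystem copy only when it is missing or its MD5 no
    # longer matches the hash stored next to the blob in the database.
    unless File.exist?(path) &&
           Digest::MD5.hexdigest(File.read(path)) == image.md5
      File.open(path, 'wb') { |f| f.write(image.data) }
    end

    # Serve the cached file; per the advice elsewhere in this thread, a
    # front-end server pointed at public/image_cache would avoid
    # shipping the bytes through Ruby at all.
    send_file path, :type => 'image/jpeg', :disposition => 'inline'
  end
end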
> Not necessarily so, Ezra. Storing images in the database is perfectly
> legitimate. However, just like Rails HTML views, you could implement
> caching of the images on the filesystem (i.e. write them to both the
> FS and the DB). Whatever action "renders" the image could take care of
> caching on the FS, serving the FS version if it has, for example, the
> same MD5 hash as the DB version.

And why do all that and then put them on the FS, when you can just put them on the FS in the first place?

> Yes, performance will be a bit lower than pure FS, but backups are a
> whole lot simpler (just back up and restore the DB).

rsync and tar are very simple.

> Besides, servers are cheap compared to developers

You get a significant productivity boost out of putting binary files into the database? Really?
On Sat, 01 Sep 2007 13:46:18 -0400, Will Green <will at hotgazpacho.com> wrote:

> Not necessarily so, Ezra. Storing images in the database is perfectly
> legitimate. However, just like Rails HTML views, you could implement
> caching of the images on the filesystem (i.e. write them to both the
> FS and the DB). Whatever action "renders" the image could take care of
> caching on the FS, serving the FS version if it has, for example, the
> same MD5 hash as the DB version.

No, not right at all. All RDBMSs were originally designed to store relations, not files. It's only recently that people started putting every damn thing they could into an RDBMS. The smart folks just put the data on a file system behind a specialized image web server. Then, when you need to serve the image, you, uh, serve it. Doesn't get much easier than that.

> Yes, performance will be a bit lower than pure FS, but backups are a
> whole lot simpler (just back up and restore the DB). Besides, servers
> are cheap compared to developers (just ask the 37signals guys), right?

That's it? Backups? Seriously man, that's a lame reason to do anything. Especially since backing up a file system is *infinitely* easier than backing up a database. In real companies around the world there are little children crying because every night the DBA has to shut the production databases down to back them up, even if Oracle says they don't have to.

Backing up a file system does not require shutting it down, and can even be done with just simple rsync scripts.

-- 
Zed A. Shaw
- Hate: http://savingtheinternetwithhate.com/
- Good: http://www.zedshaw.com/
- Evil: http://yearofevil.com/
Please, by all means, feel free to make this argument to my boss...

Zed A. Shaw wrote:

> No, not right at all. All RDBMSs were originally designed to store
> relations, not files. [...] Backing up a file system does not require
> shutting it down, and can even be done with just simple rsync scripts.

-- 
=Will Green

Find out why this email is 5 sentences or less at http://five.sentec.es/
The experimental MyBS MySQL engine can serve columns directly from the db to the client.

http://www.blobstreaming.org/download/index.php

Just saying.

Evan

On 9/1/07, Will Green <will at hotgazpacho.com> wrote:

> Please, by all means, feel free to make this argument to my boss...
> [...]

-- 
Evan Weaver
Cloudburst, LLC
Does anyone else find pleasure in the name MyBS?

-- 
Jesse Proudman, Blue Box Group, LLC

On Sep 1, 2007, at 10:59 PM, Evan Weaver wrote:

> The experimental MyBS MySQL engine can serve columns directly from the
> db to the client.
I'm storing the images in a DB because they are shared between 4 web servers, any of which can submit an image. We are rsyncing the MAIN webserver to the 3 other sub-boxes, but we aren't going the other way. In a perfect world I'd have a separate image server or file storage area, but we don't have the time or money to do that.

I'm not serving the images from the DB every time; Rails is actually caching the file on the first read anyway.

On 9/2/07, Jesse Proudman <j.list at blueboxdev.com> wrote:

> Does anyone else find pleasure in the name MyBS?
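Joey doesn't show his configuration, but the behavior he describes matches Rails page caching, where the first request renders from the DB and writes the response under public/, and later requests are served statically by the front-end server; a minimal, hypothetical sketch (in practice the route needs the right file extension so the cached copy keeps its content type):

class ImagesController < ApplicationController
  # After the first request, the rendered bytes are written beneath
  # public/ at the request path; the front-end web server then serves
  # that static copy without touching Ruby again.
  caches_page :show

  def show
    image = Image.find(params[:id])
    send_data image.data, :type => image.content_type,
                          :disposition => 'inline'
  end
end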
On Sep 02, 2007, at 03:17, Jesse Proudman wrote:

> Does anyone else find pleasure in the name MyBS?

Very much so :)

~Wayne

s///g
Wayne E. Seguin
Sr. Systems Architect & Systems Administrator
Thanks, Zed. This is the direction we are going, but it is non-trivial due to how our storage and authentication system works. A reality nonetheless.

On 8/31/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:

> [...]
> In reality, if it's a large file and you know where it is, then you
> should let a real web server handle it, or at a minimum write a
> Mongrel Handler to do the real heavy lifting.
>
> There's plenty of information on doing this, but seriously, do not
> ever use send_file in Rails or similar. It's just a waste of
> resources, and even if you need to authenticate, you can use
> X-Sendfile in Apache or nginx; that'll let you auth someone and then
> send the file.
On 2007-09-04 16:55:36 -0700, Christopher Bailey wrote:

> Thanks, Zed. This is the direction we are going, but it is non-trivial
> due to how our storage and authentication system works. A reality
> nonetheless.

At least the authentication/authorization issues can be solved with X-(LIGHTTPD-)Sendfile.

darix

-- 
openSUSE - SUSE Linux is my linux
openSUSE is good for you
www.opensuse.org
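The lighttpd variant darix mentions differs from the Apache sketch earlier in the thread only in the header name; a hedged sketch, again with hypothetical names (older lighttpd releases look for X-LIGHTTPD-send-file, and the backend must be permitted to use it, e.g. "allow-x-send-file" => "enable" on a FastCGI backend):

def show
  document = Document.find(params[:id])   # hypothetical model
  # lighttpd intercepts this header and streams the file itself,
  # so the Ruby process only performs the auth check and lookup.
  response.headers['X-LIGHTTPD-send-file'] = document.path
  response.headers['Content-Type']         = document.content_type
  render :nothing => true
end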