Alexey Verkhovsky
2007-Mar-02 20:49 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
Dear all,

I am researching solutions for the "how do you squeeze as many Rails apps as you can on a cluster" problem. Environment constraints are as follows:

* 4 commodity web servers (2 CPUs, 8 GB of RAM each)
* shared file storage and database (big, fast, not a bottleneck)
* multiple Rails apps running on it
* normally, the load is insignificant, but from time to time any of these apps can have a big, unpredictable spike in load that takes (say) 8 Mongrels to handle.

The bottleneck, apparently, is RAM. At 100 MB per Mongrel process, you can only fit 320 Mongrel processes on those boxes, and under the specified parameters you can only handle 40 apps on the hardware described above. PHP can handle thousands of sites under the same set of constraints.

We could use the lighty + FastCGI combo, but it has a bad reputation. I wonder if that's because of bugs in the implementation, or whether it's just not designed for these scenarios (if not, what's the limitation, and can it be fixed?).

If anybody knows a ready-made solution to this problem, please let me know. The last thing I want to do is reinvent the wheel. If anybody knows a load balancer smart enough to start and kill multiple processes across the entire cluster, based on demand per application, please let me know about that, too.

Finally, I've been thinking about making Rails execution within Mongrel concurrent by spawning multiple Rails processes as children of Mongrel and talking to them through local pipes (just like FastCGI does, but a Ruby-specific solution). This may allow a single Mongrel to scale 3-4 times better than now, and also to scale down if no requests have come in for the last, say, 10 minutes. A "blank" Ruby process only takes 7 MB of RAM; perhaps a "blank" Mongrel is not much more (I haven't checked yet). I wonder if this makes sense, or if I'm just crazy.

I think we can implement (and open-source) any solution that needs weeks rather than years of effort. Thoughts?
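[Editor's note: the children-over-pipes idea in the last paragraph can be sketched roughly as below. This is a minimal illustration under stated assumptions, not Mongrel code; the class, the `handled:` protocol, and all names are made up, and real Rails dispatch stands in for a one-line echo.]

```ruby
# Sketch: a parent process that forks worker children on demand and
# passes them requests over local pipes, FastCGI-style but pure Ruby.
class PipeWorkerPool
  def initialize
    @workers = []
  end

  # Fork one child; the parent keeps one pipe end for requests and
  # one for responses. Scaling up is just calling this again.
  def spawn_worker
    resp_r, resp_w = IO.pipe   # worker -> parent (responses)
    req_r,  req_w  = IO.pipe   # parent -> worker (requests)
    pid = fork do
      resp_r.close; req_w.close
      while (line = req_r.gets)              # one request per line
        resp_w.puts "handled: #{line.chomp}" # stand-in for Rails dispatch
      end
    end
    req_r.close; resp_w.close
    @workers << { pid: pid, r: resp_r, w: req_w }
  end

  # Naive dispatch: send the request to the first worker and block
  # for its reply. A real version would pick an idle worker.
  def dispatch(request)
    w = @workers.first
    w[:w].puts request
    w[:r].gets.chomp
  end

  # Closing the request pipe makes the child's gets return nil,
  # so the worker loop ends and the child exits cleanly.
  def shutdown
    @workers.each do |w|
      w[:w].close; w[:r].close
      Process.wait(w[:pid])
    end
  end
end

pool = PipeWorkerPool.new
pool.spawn_worker
puts pool.dispatch("GET /index")   # round-trips through the child
pool.shutdown
```

Scaling down after an idle period would then be a matter of tracking a last-request timestamp per worker and calling shutdown on the stale ones.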
Best regards,
Alex Verkhovsky
ThoughtWorks
Kirk Haines
2007-Mar-04 15:12 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
On 3/2/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:

> Dear all,
>
> I am researching solutions for "how do you squeeze as many Rails apps as
> you can on a cluster" problem.
>
> Environment constraints are as follows:
>
> * 4 commodity web servers (2 CPUs, 8 GB of RAM each)
> * shared file storage and database (big, fast, not a bottleneck)
> * multiple Rails apps running on it
> * normally, the load is insignificant, but from time to time any of these
> apps can have a big, unpredictable spike in load, that takes (say) 8
> Mongrels to handle.
>
> The bottleneck, apparently, is RAM. At 100 MB per Mongrel process, you can
> only put 320 Mongrel processes on those boxes, and under specified
> parameters you can only handle 40 apps on the hardware described above.
> PHP can handle thousands of sites under the same set of constraints.

Do you have a sense for how many requests/second of capacity represents a rate that can survive slashdotting? I.e., you are assuming 8 Mongrels/app. So what capacity do those 8 Mongrels represent?

> If anybody knows a load balancer smart enough to start and kill multiple
> processes across the entire cluster, based on demand per application,
> please let me know about that, too.

I have developed a clustering proxy from scratch (the first fledgling release of it should be today) that is in a good position for feature requests. Having a single load balancer starting and stopping processes across a cluster, though, calls for quite a bit of complexity. A compromise that would just manage backends on the same machine the proxy is running on could have potential. Something would proxy requests out to nodes, and the local proxy on each node would manage its pack of backend processes. The friction of having two layers of proxying might create too much throughput entropy, though.

> Finally, I've been thinking about making Rails execution within Mongrel
> concurrent by spawning multiple Rails processes as children of Mongrel,
> and talking to them through local pipes (just like FastCGI does, but a
> Ruby-specific solution). This may allow a single Mongrel to scale 3-4
> times better than now, and also to scale down if no requests are coming
> in the last, say, 10 minutes. A "blank" Ruby process only takes 7 MB of
> RAM, perhaps a "blank" Mongrel is not much more (haven't checked yet).
> Wonder if this makes sense, or am I just crazy.

This sounds like a way to implement something similar to what I described above.

Kirk Haines
Alexey Verkhovsky
2007-Mar-04 19:36 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
On 3/4/07, Kirk Haines <wyhaines at gmail.com> wrote:

> Do you have a sense for how many requests/second of capacity represents a
> rate that can survive slashdotting?

"Survive slashdotting" is just a phrase. The target is ~100 requests/sec for dynamic page rendering.

> I have developed a clustering proxy from scratch (the first fledgling
> release of it should be today) that is in a good position for feature
> requests.

If it's free, open source, and done well, I am in a good position to give you those feature requests. I could probably even lend a hand in implementing them. Let's continue this conversation off the list.

> Having a single load balancer starting and stopping processes across a
> cluster, though, calls for quite a bit of complexity.

Yeah. Hopefully, it will prove unnecessary.

> A compromise that would just manage backends on the same machine the
> proxy is running on could have potential.

Litespeed and mod_fcgid both do this fairly well. But proxying to Mongrels seems wiser, because Mongrel is the de-facto standard development environment. A little extra CPU time to avoid deployment pains is always a good deal.

Best regards,
Alex
Dee Zsombor
2007-Mar-05 07:10 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
Alexey Verkhovsky wrote:

> We could use lighty + FastCGI combo, but it has a bad reputation. I
> wonder if it's because of bugs in implementation, or it's just not
> designed for these scenarios (if not, what's the limitation, and can it
> be fixed?)

My personal experience points to the contrary: using FCGI listeners (if you have the option of dumping Apache for lighttpd) works well. Just spawn the fcgi listeners externally from script/process/spinner, then tell lighty only the ports where the fcgi listeners run:

    fastcgi.server = ( ".fcgi" =>
      ( "localhost-7000" => ( "host" => "127.0.0.1", "port" => 7001 ) ),
      ( "localhost-7001" => ( "host" => "127.0.0.1", "port" => 7002 ) ),
      ( "localhost-7002" => ( "host" => "127.0.0.1", "port" => 7003 ) )
    )

Apache with fcgi is a pain, true, but not with lighttpd.

Zsombor

--
Company - http://primalgrasp.com
Thoughts - http://deezsombor.blogspot.com
Alexey Verkhovsky
2007-Mar-06 22:31 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
> My personal experience points to the contrary: using FCGI listeners (if
> you have the option of dumping Apache for lighttpd) works well. Just spawn
> the fcgi listeners externally from script/process/spinner

I've seen and heard enough conversations along the lines of "FastCGI leaves zombie Ruby processes floating around", 6 to 12 months ago. Sure, you can harvest those things with a cron job, but doesn't this smell funny?

Alex
Alexey Verkhovsky
2007-Mar-10 01:49 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
OK, having done some research on the subject, I came to the conclusion that the main problem with my scenario is that Ruby concurrency sucks, and should be avoided. Other problems are the total RAM footprint and

Therefore, I'm currently leaning towards the following setup for, say, 20 apps running on the same box:

1. Apache (or possibly nginx) with 20 virtual hosts (one vhost per app), serving static content from ./public and redirecting dynamic requests to...

2. HAProxy on the same box, listening on 20 ports (one port per app) and configured to forward requests to 3-5 downstream ports per app, with maxconn = 1 (which means that the proxy will send only one request at a time to a downstream port). If all downstream servers are busy or down, HAProxy queues the requests internally. It is also smart about not sending any requests to servers that are down. Below HAProxy, possibly on other physical boxes, there are...

3. 3-5 Mongrels per app, with --num-conns=2 (since we are not really sending them more than one request at a time). This prevents a Mongrel process from allocating an extra 60 to 100 MB of RAM to itself when it comes under overload. Not all of these Mongrels need to be running; one or two per app may well be enough. Two is better, as it prevents a long-running action from holding up other requests. When a "slashdotting" occurs, some sort of smart agent (even a human operator) can start additional Mongrels as needed.

I created a setup like this on my laptop yesterday and stress-tested it in some creative ways for half a day, running Mephisto with page caching turned off. The Mongrels stayed up through the entire ordeal, at about 48 MB VSS apiece, because they were basically never overloaded. The system behaved gracefully under overload (responding to as many requests as it could, and returning HTTP 503 to the rest), and did the right things when I killed and restarted individual Mongrels (seamlessly redirecting traffic to other nodes).
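[Editor's note: the HAProxy piece of step 2 might look roughly like the fragment below for one app. This is an illustrative sketch in HAProxy 1.3-era syntax under stated assumptions; the listener name, ports, and backend addresses are all made up.]

```
# One listen section per app; the Apache vhost for app1 proxies here.
listen app1 0.0.0.0:9001
    mode http
    balance roundrobin
    # maxconn 1: hand each Mongrel at most one request at a time;
    # excess requests queue inside HAProxy instead of piling up in
    # the Mongrel. 'check' takes dead backends out of rotation.
    server mongrel1 127.0.0.1:8001 maxconn 1 check
    server mongrel2 127.0.0.1:8002 maxconn 1 check
    server mongrel3 127.0.0.1:8003 maxconn 1 check
```

With this shape, "starting additional Mongrels" during a spike is just launching a process on one of the listed ports; the health check notices it and traffic starts flowing.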
No memory leaks, zombie processes, or any other abnormalities observed.

Another thing I discovered is that Ruby (since 1.8.5) can be told to commit suicide if it needs to allocate more than a certain amount of RAM. Process.setrlimit(Process::RLIMIT_AS, ...) is the magic word. This is better than harvesting oversized processes with a cron job, because a greedy process dies before the memory is actually allocated, so other processes on the same OS remain unaffected.

Am I on the right track? What other issues should I be testing for / thinking about?

Best regards,
Alex Verkhovsky
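[Editor's note: a minimal sketch of the setrlimit trick described above, assuming Linux semantics for RLIMIT_AS; the 512 MB limit and 1 GB allocation are arbitrary illustration values. The child is forked so the cap does not affect the parent.]

```ruby
# Cap a child's address space, then have it attempt an oversized
# allocation. The allocation fails inside the child before any real
# memory is consumed; the parent and other processes are unaffected.
LIMIT = 512 * 1024 * 1024  # 512 MB address-space cap, in bytes

pid = fork do
  Process.setrlimit(Process::RLIMIT_AS, LIMIT, LIMIT)
  begin
    "x" * (1024 * 1024 * 1024)  # try to build a ~1 GB string
    exit 0                      # only reached if the allocation succeeded
  rescue NoMemoryError
    exit 42                     # the greedy process dies; nothing else does
  end
end

Process.wait(pid)
puts $?.exitstatus
```

Compared with a cron-based harvester, the limit is enforced at allocation time by the kernel, so there is no window in which an oversized process can drag the whole box into swap.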
Alexey Verkhovsky
2007-Mar-10 01:53 UTC
[Mongrel] Multiple apps on the same server, all should be able to survive slashdotting
On 3/9/07, Alexey Verkhovsky <alexey.verkhovsky at gmail.com> wrote:

> OK, having done some research on the subject, I came to the conclusion
> that the main problem with my scenario is that Ruby concurrency sucks,
> and should be avoided. Other problems are the total RAM footprint and

"gracefully surviving overloads" is the missing part.