Michael Steinfeld
2007-Mar-13 18:59 UTC
[Mongrel] dropping lighty+fastcgi and moving to apache+mongrel in production
Most of my machines have apache+mongrel running, but the mongrels are bound to localhost.

In my production environment, I have 4 boxes. I have set up 2 http servers (apache 2.2.4) and 2 app servers. They are currently using lighttpd+fastcgi, which I am changing this week. I want to get advice from you guys before I actually do this, since this is my production env.

I have installed apache on the 2 http servers and will serve local content in app/public on them both, but in my cluster.conf file on my staging server I use:

BalancerMember http://localhost:port(s)

Since I have 2 app servers (separate boxes), I plan on running 10 mongrels on each. I am assuming I just simply use:

BalancerMember http://10.0.1.1:800*
...
BalancerMember http://10.0.1.2:800*
...

Should this be identical on both http servers? So I would have a total of 20 entries in the conf on each http server (10 mongrels x 2 app servers).

Otherwise I am assuming the setup is the same as if they were local mongrels. If you guys can let me know if I am on the right track here, I would appreciate it.

Regards,
-- 
-mike
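One note on the `800*` notation above: mod_proxy_balancer does not expand port wildcards, so each mongrel needs its own BalancerMember line. A minimal sketch of what the balancer block on each http server might look like, assuming the mongrels listen on ports 8000-8009 (the IPs are the ones from the post; everything else is illustrative):

```apache
# Hypothetical cluster.conf sketch -- one BalancerMember per mongrel port,
# since mod_proxy_balancer does not expand wildcards like 800*.
<Proxy balancer://mongrel_cluster>
  # app server 1
  BalancerMember http://10.0.1.1:8000
  BalancerMember http://10.0.1.1:8001
  # ... one line per port, through 8009
  # app server 2
  BalancerMember http://10.0.1.2:8000
  BalancerMember http://10.0.1.2:8001
  # ... one line per port, through 8009
</Proxy>

ProxyPass / balancer://mongrel_cluster/
ProxyPassReverse / balancer://mongrel_cluster/
```

With both app servers listed in the same balancer, the same conf can indeed be used verbatim on both http servers.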
Philip Hallstrom
2007-Mar-13 19:31 UTC
[Mongrel] dropping lighty+fastcgi and moving to apache+mongrel in production
> Most of my machines have apache+mongrel running but the mongrels are using
> the localhost.
> In my production environment, I have 4 boxes. I have setup 2 http servers
> (apache 2.2.4) and 2 app servers. They are currently using lighttpd+fastcgi.
> Which I am changing this week.
> I want to get advice from you guys before I actually do this since this is
> my production env.
>
> I have installed apache on the 2 http servers and will serve local
> content in app/public on them both, but in my cluster.conf file on my
> staging server I use:
>
> BalancerMember http://localhost:port(s)
>
> Since I have 2 app servers (separate boxes), I plan on running 10 mongrels
> on each.. I am assuming I just simply use:

Why 10? Have you tested that assumption? We found 4 mongrels to be our sweet spot... http://mongrel.rubyforge.org/docs/how_many_mongrels.html

Also, you might consider merging the web/app servers... we found that apache didn't use up that many resources serving static files, so each of our servers runs both apache and mongrel. So you might consider running apache + 4 mongrels on each of the 4 servers...

Guess I'm assuming your HTTP is behind some other load balancer...

> BalancerMember http://10.0.1.1:800*
> ...
> BalancerMember http://10.0.1.2:800*
> ...
>
> Should this be identical on both http servers?
>
> So I would have a total of 20 entries per conf on each http server
> (10x2 mongrels for 2 app servers).
>
> Otherwise I am assuming the setup is the same as if they were local
> mongrels. If you guys can let me know if I am on the right track here I
> would appreciate it.
>
> Regards,
> --
> -mike
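For the merged web/app layout Philip describes (apache + 4 mongrels per box), the mongrel side is typically driven by a mongrel_cluster config. A sketch of what that might look like on one combined box -- the paths and ports here are assumptions for illustration, not from the thread:

```yaml
# Hypothetical mongrel_cluster.yml for one combined web/app box.
# cwd, port, and pid_file are made-up examples; adjust to your deploy layout.
cwd: /var/www/myapp/current
environment: production
address: 127.0.0.1   # apache proxies locally, so mongrels need not bind externally
port: "8000"         # first of 4 consecutive ports: 8000-8003
servers: 4
pid_file: tmp/pids/mongrel.pid
```

Started with `mongrel_rails cluster::start -C config/mongrel_cluster.yml`, this gives apache on the same box four local BalancerMembers to proxy to.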
Michael Steinfeld
2007-Mar-13 19:40 UTC
[Mongrel] dropping lighty+fastcgi and moving to apache+mongrel in production
Hi --

On 3/13/07, Philip Hallstrom <rails at philip.pjkh.com> wrote:

> > Since I have 2 app servers (separate boxes), I plan on running 10
> > mongrels on each.. I am assuming I just simply use:
>
> Why 10? Have you tested that assumption? We found 4 mongrels to be our
> sweet spot... http://mongrel.rubyforge.org/docs/how_many_mongrels.html

Well, someone else on the team had previously set up 10, so that is what I was assuming to go with here. I will read the link you just gave me. Thank you!

> Also, you might consider merging the web/app servers... we found that
> apache didn't use up that many resources serving static files so each of
> our servers runs both apache and mongrel. So you might consider running
> apache + 4 mongrels on each of the 4 servers...

I did think of that and suggested it to the CIO. I was wondering why we would go 2 and 2 when 4 and 4 gives us the performance and redundancy. I am pondering your suggestions wholeheartedly. Just a note: I have been a *nix admin for roughly 8 years, but this is my first Rails production environment, so I don't have much experience. Your feedback is more than helpful.

> Guess I'm assuming your HTTP is behind some other load balancer...

Interestingly enough, we don't. In many of the other networks we did have HW for load balancing.
This environment is using round robin DNS, though this is my first encounter with using round robin to handle the load balancing needs as well. I am open to any suggestions you may have in regards to the direction we should investigate. Right now there are 4 machines, but soon that may double, so I am thinking of both short-term and long-term needs right now. Anything else you have to offer would be appreciated.

Thanks
-- 
-mike
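For context, round robin DNS here just means publishing one A record per http server under the same name, so resolvers rotate through them. A hypothetical BIND zone fragment (names and IPs are made up for illustration):

```dns
; Hypothetical zone fragment: round robin via multiple A records.
; Clients/resolvers cycle through the addresses; there is no health
; checking, so a dead box keeps receiving its share of traffic until
; the record is pulled and caches expire (hence the short TTL).
www   300  IN  A  10.0.1.10
www   300  IN  A  10.0.1.11
```

The lack of health checking is the main operational difference from a real load balancer, which is where Philip's comments below come in.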
Philip Hallstrom
2007-Mar-13 20:18 UTC
[Mongrel] dropping lighty+fastcgi and moving to apache+mongrel in production
>> > Since I have 2 app servers (separate boxes), I plan on running 10
>> > mongrels on each.. I am assuming I just simply use:
>>
>> Why 10? Have you tested that assumption? We found 4 mongrels to be our
>> sweet spot... http://mongrel.rubyforge.org/docs/how_many_mongrels.html
>
> Well, someone else on the team had previously set up 10, so that is what
> I was assuming to go with here. I will read the link you just gave me.
> Thank you!

Yeah, definitely go through the steps in that URL. We had been running 20 FCGIs, then went to 8 mongrels I believe; then Zed posted those docs, and when we tested we found 4 to be ideal. Seems counterintuitive, but there ya go :)

>> Also, you might consider merging the web/app servers... we found that
>> apache didn't use up that many resources serving static files so each of
>> our servers runs both apache and mongrel. So you might consider running
>> apache + 4 mongrels on each of the 4 servers...
>
> I did think of that and suggested it to the CIO. I was wondering why we
> would go 2 and 2 when 4 and 4 gives us the performance and redundancy.
> I am pondering your suggestions wholeheartedly. Just a note: I have been
> a *nix admin for roughly 8 years, but this is my first Rails production
> environment, so I don't have much experience. Your feedback is more than
> helpful.

I suppose it depends on how hard your apache servers are working. Ours don't work that hard at all, so we'd rather rails had access to more ram/cpu...

> Guess I'm assuming your HTTP is behind some other load balancer...
>
> Interestingly enough, we don't. In many of the other networks we did have
> HW for load balancing. This environment is using round robin DNS, though
> this is my first encounter with using round robin to handle the load
> balancing needs as well. I am open to any suggestions you may have in
> regards to the direction we should investigate. Right now there are 4
> machines but soon that may double.
> So I am thinking of both short-term and long-term needs right now.
> Anything else you have to offer would be appreciated.

Hrm. I think I would. The problem with round robin DNS is how long the cycle is... if it's too long and you get slashdotted (for example), won't all those new users end up on a single web server?

I didn't set it up, but I like the load balancer we've got since it lets us disable a backend, take it offline, do whatever we want with it, and bring it back up, etc...

I haven't used it, but a lot of people really seem to like HAProxy as a software solution. Might be worth investigating as well.

-p
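For anyone weighing the HAProxy option, a minimal config fronting the two apache boxes might look something like the sketch below. All names, IPs, ports, and timeouts are assumptions for illustration (and the frontend/backend syntax assumes a reasonably modern HAProxy), not anything from this thread:

```haproxy
# Hypothetical haproxy.cfg sketch -- IPs and ports are made up.
global
    maxconn 1024

defaults
    mode http
    balance roundrobin
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend apache_web

backend apache_web
    # "check" health-checks each box and stops sending traffic to a dead
    # one automatically -- the piece round robin DNS can't give you.
    server web1 10.0.1.10:80 check
    server web2 10.0.1.11:80 check
```

A backend can also be drained for maintenance (disabled via the stats socket or marked down) and brought back later, which matches the "take it offline and do whatever we want with it" workflow described above.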