Hi,

I've got an app which will only be dealing with a few requests a
minute for most of the time, then will shoot up to a continuous 20
req/s for an hour at a time. We'll potentially be running a lot of
instances of this app on the same server.

Is there any way to have additional instances of Mongrel be started
when the existing instance(s) stop being able to handle the volume of
traffic? I could preload enough instances to cope with peak traffic,
but that will result in a lot of instances sitting idle for the
majority of the time.

Does anyone have any experience of SCGI? I know that FastCGI will
start loading additional app instances after a certain threshold -
does SCGI behave the same?

Thanks

Jeremy
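P.S. To be specific about the FastCGI behaviour I mean: Apache's
mod_fastcgi manages dynamic applications by spawning and reaping
processes on demand, along these lines (directive names are from the
mod_fastcgi docs as I remember them; the numbers are purely
illustrative):

    # dynamic FastCGI process management - illustrative values only
    FastCgiConfig -minProcesses 1 -maxProcesses 20 -killInterval 300

The question is whether SCGI offers anything equivalent.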
On 10/22/07, Jeremy Wilkins <jeremy at ibexinternet.co.uk> wrote:

> I've got an app which will only be dealing with a few requests a
> minute for most of the time, then will shoot up to a continuous 20
> req/s for an hour at a time. We'll potentially be running a lot of
> instances of this app on the same server.

How fast is your app? How many mongrels do you figure you need to
handle that volume?

> Is there any way to have additional instances of Mongrel be started
> when the existing instance(s) stop being able to handle the volume
> of traffic? I could preload enough instances to cope with peak
> traffic, but that will result in a lot of instances sitting idle
> for the majority of the time.

I am working on something that will do exactly that. It's not ready
for public consumption, but it will permit one to have a mongrel
cluster that self-adjusts to the load it is receiving, with real-time
reporting of cluster status.

Kirk Haines
On 22 Oct 2007, at 17:45, Kirk Haines wrote:

> On 10/22/07, Jeremy Wilkins <jeremy at ibexinternet.co.uk> wrote:
>
>> I've got an app which will only be dealing with a few requests a
>> minute for most of the time, then will shoot up to a continuous 20
>> req/s for an hour at a time. We'll potentially be running a lot of
>> instances of this app on the same server.
>
> How fast is your app? How many mongrels do you figure you need to
> handle that volume?

Hopefully very fast - I'm aiming to get it down to 1 or 2 database
queries per (ajax) request, with just a small amount of text being
sent back. We're currently planning this part of the app, so we don't
have any stats yet. My concern is that it will probably be hosted on
our existing (heavily loaded) PHP server until the client needs
enough instances to justify their own server.

>> Is there any way to have additional instances of Mongrel be started
>> when the existing instance(s) stop being able to handle the volume
>> of traffic? I could preload enough instances to cope with peak
>> traffic, but that will result in a lot of instances sitting idle
>> for the majority of the time.
>
> I am working on something that will do exactly that. It's not ready
> for public consumption, but it will permit one to have a mongrel
> cluster that self-adjusts to the load it is receiving, with real-time
> reporting of cluster status.

Sounds very interesting - I'm pretty new to all this, but this seems
to be one area where FastCGI (and I presume SCGI) has a significant
advantage over the mongrel cluster approach. How long do you
anticipate it will take to develop your solution? (Just curious - I
know it's a "when it's done" thing.)

Thanks

jebw
Several years back I accidentally discovered that multiple processes
can listen on the same TCP/IP socket. The trick, of course, is that
all the processes are in the same process group, and the socket is
opened by a shared parent. The OS was somehow managing to queue up
the various calls to accept() on that socket. Since the watchdog
parent / multiple child servers arrangement is a common pattern, this
was a workable solution on the versions of Linux we were using.

IIRC, the OS gracefully queued several processes' calls to accept(),
requiring no additional synchronization. But even if that weren't the
case, there is still the option of putting an acceptor in a parent
and dispatching the client socket to available child servers.

Anyhow --

The application I wrote has a watchdog process in C that opens up a
server socket before forking child server processes. The children get
passed the descriptor number for that server socket as an argument on
their command lines.

All child server processes then enter an accept loop. They all call
accept() on that same, shared descriptor. Each child, btw, opened up
its own "private" admin socket on a port that was an offset of the
main, shared service port (and optionally on a different interface as
well).

Within a pool, then, processes are somewhat self-balancing -- a
process only calls accept() when it's got threads available and ready
to handle a request. Clients, or a client load balancer, don't have
to keep track of traffic or request counts between individual server
processes. They also don't have to try back-end app servers
individually before finding a "thread" that's free -- if any process
in a pool is available, it's already sitting in accept() on that
shared socket (likely with others queued up behind it in their
accept() calls).

If Mongrel's suitable as an Internet-facing front end, then for many
applications there might be no need for a load balancing proxy at
all. Simply fire up a pool of mongrels at port 80, and they'll sort
it all out among themselves. Even for applications requiring multiple
machines, a scheme like this would simplify load balancer proxy
configurations (100 mongrels in a pool? No problem -- all on one
port!)

I'm sure the folks who wrote Mongrel thought of this and either tried
it or rejected it beforehand for good reason. And I had the luxury of
coding for just one platform. Perhaps others impose hurdles that make
this impractical. But even there, isn't there the Apache model of a
parent acceptor passing client sockets to ready children?
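For the curious, the same pattern in Ruby is only a few lines. This
is a minimal sketch, not anything Mongrel-specific - the port, the
pool size, and the canned HTTP response are all made up, and it's
Unix-only since it leans on fork:

    require 'socket'

    # Parent opens the listening socket once, before forking.
    listener = TCPServer.new('0.0.0.0', 8080)

    # Fork a pool of children that all block in accept() on the same
    # shared descriptor; the kernel hands each incoming connection to
    # exactly one of them, so the pool balances itself.
    4.times do
      fork do
        loop do
          client = listener.accept
          client.write("HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
          client.close
        end
      end
    end

    Process.waitall   # a real watchdog would restart dead children here

Thoughts?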
On 10/23/07, Robert Mela <rob at robmela.com> wrote:

> Several years back I accidentally discovered that multiple processes
> can listen on the same TCP/IP socket. The trick, of course, is that
> all the processes are in the same process group, and the socket is
> opened by a shared parent.
>
> [snip]
>
> I'm sure the folks who wrote Mongrel thought of this and either
> tried it or rejected it beforehand for good reason. And I had the
> luxury of coding for just one platform. Perhaps others impose
> hurdles that make this impractical. But even there, isn't there the
> Apache model of a parent acceptor passing client sockets to ready
> children?

I believe that the main issue here is on the win32 platform - Luis?
We do have something similar in the works for a future release;
however, I am unsure how your suggestion ties in at the moment. It
appears to be well worth investigating for what we have planned.

Thank you kindly for this,

~Wayne
On 10/23/07, Jeremy Wilkins <jeremy at ibexinternet.co.uk> wrote:

>> How fast is your app? How many mongrels do you figure you need to
>> handle that volume?
>
> Hopefully very fast - I'm aiming to get it down to 1 or 2 database
> queries per (ajax) request, with just a small amount of text being
> sent back. We're currently planning this part of the app, so we
> don't have any stats yet. My concern is that it will probably be
> hosted on our existing (heavily loaded) PHP server until the client
> needs enough instances to justify their own server.

It's hard to gauge when you are sharing machine cycles with a heavily
loaded PHP app, but, depending on what those queries really do, 20
req/s should be trivial to get with modest hardware and a tiny number
of mongrels (just a single mongrel is likely practical for a load
that low).

> Sounds very interesting - I'm pretty new to all this, but this seems
> to be one area where FastCGI (and I presume SCGI) has a significant
> advantage over the mongrel cluster approach. How long do you
> anticipate it will take to develop your solution? (Just curious - I
> know it's a "when it's done" thing.)

It's part of the Swiftiply 0.7.0 feature set. I'm already late on
when I wanted to release it, but realistically it's probably another
month or so away.

Kirk Haines
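P.S. A back-of-the-envelope to make "trivial" concrete, using an
assumed (not measured) request time: if each of those 1-2 query
requests completes in 25 ms, one mongrel serializing requests can
sustain about 1000 / 25 = 40 req/s, double your stated peak. Even at
50 ms per request a single process breaks even, so a small static
pool may be all you ever need here.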