I read this in a previous post
(http://rubyforge.org/pipermail/mongrel-users/2006-December/002354.html):

.... First, Mongrel accepts remote clients and creates one Thread for each
request. Mongrel also enforces a single request/response using
Connection: close headers because Ruby only supports 1024 file descriptors
(so far). If Mongrel doesn't do this then people like yourself can write a
simple "trickle attack" client that hits the Mongrel server, opens a bunch
of continuous connections, and then eats up all available descriptors very
quickly. Basically, a DDoS attack that's very simple to do. ....

Is this still a problem? If it is, I think it might be sweet if it were
optional (then load balancers could keep connections open--if only load
balancers can hit it...). Just a thought :)

-Roger
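For context, a rough sketch of what such a "trickle" client could look like, purely for illustration: each socket dribbles its request out a byte at a time with long pauses, so every connection stays open and keeps a descriptor occupied on the server. Host, port, and counts below are made up.

    require 'socket'

    HOST, PORT  = 'localhost', 3000
    CONNECTIONS = 200

    request = "GET / HTTP/1.1\r\nHost: #{HOST}\r\nConnection: keep-alive\r\n\r\n"

    # Open many connections up front; each one holds a file descriptor
    # on the server for as long as it stays open.
    sockets = Array.new(CONNECTIONS) { TCPSocket.new(HOST, PORT) }

    # Send the request one byte at a time with long pauses, so none of
    # the connections ever completes and frees its descriptor.
    request.each_byte do |b|
      sockets.each { |s| s.write(b.chr) }
      sleep 5
    end

With Connection: close enforced, each of those sockets is torn down after a single request, which is exactly the protection being described.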
On 9/14/07, Roger Pack <rogerpack2005 at gmail.com> wrote:

> Is this still a problem? If it is, I think it might be sweet if it were
> optional (then load balancers could keep connections open--if only load
> balancers can hit it...). Just a thought :)

It's still possible, and probably will remain so for quite a while. Ruby
uses a select() loop to manage its threads, and its fd_setsize is 1024.
select()'s performance also degrades as the number of handles it is
managing goes up.

With the next version of evented_mongrel I am going to provide a way for
people on a platform that supports epoll (Linux 2.6.x) to specify the
maximum number of connections that they want to be able to handle. This
would, in theory, reduce the threat of the trickle attack, because an
evented_mongrel could have many more than 1024 concurrent connections
without any problem.

If you read the archives, the subject of keep-alive is somewhat
controversial, though we (the current admin/developer crew on Mongrel)
have discussed it at least once. I think it is something we are willing to
explore further, if I recall the discussion correctly.


Kirk Haines
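For anyone curious what that option would sit on top of: evented_mongrel runs on EventMachine, and EventMachine already exposes epoll and a configurable descriptor table. A minimal sketch, assuming the forthcoming evented_mongrel setting forwards to calls like these (the 20_000 figure is only an example, and the option name is not settled):

    require 'rubygems'
    require 'eventmachine'

    # Plain EventMachine calls; not the evented_mongrel option itself.
    EventMachine.epoll                              # use epoll on Linux 2.6.x
    EventMachine.set_descriptor_table_size(20_000)  # well past select()'s 1024

    EventMachine.run do
      # Generic listener just to show where the settings take effect;
      # evented_mongrel wires in its own HTTP handling here.
      EventMachine.start_server('0.0.0.0', 3000)
    end

Both calls have to happen before the reactor starts; with select() as the backend the descriptor table is still capped at 1024, which is why the option only makes sense on epoll-capable platforms.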
I see--so if I understand correctly, proxies (e.g. nginx) pass each request
straight back to Mongrel over a new TCP connection, which would let
malicious users open lots of keep-alive connections to Mongrel itself;
hence the fear of enabling keep-alive, and Mongrel currently limiting the
number of simultaneous requests.

Enabling this feature would only be useful, then, to a proxy that shares
connections to Mongrel among several clients. Hmm. I'm not sure if any
proxies do that (besides Swiftiply). I wonder whether the gain to that type
of proxy would be worth making it an option, or whether the gain would
actually be too small to be worth trying for. Then again, a few ms here and
there, right? ha ha.

Thank you!
-Roger
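One rough way to put a number on that "few ms here and there" would be to time the same requests with and without connection reuse. A hedged sketch, assuming a plain HTTP backend on localhost:3000 and responses with a Content-Length header (chunked responses are not handled); host, port, and request count are made up:

    require 'socket'
    require 'benchmark'

    HOST, PORT, N = 'localhost', 3000, 50
    REQUEST = "GET / HTTP/1.1\r\nHost: #{HOST}\r\nConnection: %s\r\n\r\n"

    def read_response(sock)
      # Read headers, then the body based on Content-Length.
      headers = ''
      headers << sock.readline until headers.end_with?("\r\n\r\n")
      sock.read(headers[/Content-Length: (\d+)/i, 1].to_i)
    end

    # One connection per request (what Connection: close forces today).
    closed = Benchmark.realtime do
      N.times do
        s = TCPSocket.new(HOST, PORT)
        s.write(REQUEST % 'close')
        read_response(s)
        s.close
      end
    end

    # One reused connection (what keep-alive would allow).
    kept = Benchmark.realtime do
      s = TCPSocket.new(HOST, PORT)
      N.times { s.write(REQUEST % 'keep-alive'); read_response(s) }
      s.close
    end

    puts "close: %.3fs  keep-alive: %.3fs" % [closed, kept]

On localhost the difference is mostly the TCP handshake per request; across a real network, or behind a proxy that multiplexes many clients onto a few backend connections, the saving per request would look different.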