In my understanding, the mongrel request URI limit is 12K. My request
URI is certainly not that long (about 1445 bytes), but for some reason
I'm getting "414 Request-URI Too Long" from the server. In my
experiments, if I shrink the request URI to 1016 bytes, it works. I'm
getting this error on all GET, POST, PUT, and DELETE requests. There is
no cookie being sent, and the headers are few and short. Any idea why?
Nuo Yan <yan.nuo at gmail.com> wrote:
> In my understanding, the mongrel request URI limit is 12K.

Correct, but that's the limit on the entire REQUEST_URI:

  REQUEST_PATH?QUERY_STRING#FRAGMENT

> My request URI is certainly not that long (about 1445 bytes), but for
> some reason I'm getting "414 Request-URI Too Long" from the server.

Perhaps one of your URI components (most likely REQUEST_PATH) is too
long? The current limits are as follows (in
ext/unicorn_http/global_variables.h):

  DEF_MAX_LENGTH(REQUEST_URI, 1024 * 12);
  DEF_MAX_LENGTH(FRAGMENT, 1024); /* Don't know if this length is specified somewhere or not */
  DEF_MAX_LENGTH(REQUEST_PATH, 1024);
  DEF_MAX_LENGTH(QUERY_STRING, (1024 * 10));

I will consider upping REQUEST_PATH to 4096 (and REQUEST_URI to 15K)
since 4096 is a common filesystem PATH_MAX on modern systems.

Would anybody object to this? Given we already allow huge headers and
Ruby uses quite a bit of memory, I don't think a potential extra 3K
will negatively impact anybody.
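If you want to see which component is tripping which limit, here is a
quick sketch in plain Ruby using the stdlib URI class; the limit values
are copied from the header above, and the check() helper is just my own
illustration, not anything from unicorn:

  require 'uri'

  # Limits copied from ext/unicorn_http/global_variables.h (see above)
  LIMITS = {
    request_uri:  1024 * 12,
    fragment:     1024,
    request_path: 1024,
    query_string: 1024 * 10,
  }

  # Hypothetical helper: report each component's size against its limit
  def check(request_uri)
    uri = URI.parse(request_uri)
    {
      request_uri:  request_uri.bytesize,
      fragment:     uri.fragment.to_s.bytesize,
      request_path: uri.path.to_s.bytesize,
      query_string: uri.query.to_s.bytesize,
    }.each do |part, len|
      over = len > LIMITS[part] ? ' <-- over limit!' : ''
      puts "#{part}: #{len} bytes (limit #{LIMITS[part]})#{over}"
    end
  end

  # e.g. a 1445-byte URI that is all path would fit the 12K REQUEST_URI
  # cap but blow the 1024-byte REQUEST_PATH cap, matching the report:
  check("/" + "a" * 1444)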
Hi Eric,

On Apr 11, 2012, at 1:30 PM, Eric Wong wrote:
> I will consider upping REQUEST_PATH to 4096 (and REQUEST_URI to 15K)
> since 4096 is a common filesystem PATH_MAX on modern systems.
>
> Would anybody object to this? Given we already allow huge headers and
> Ruby uses quite a bit of memory, I don't think a potential extra 3K
> will negatively impact anybody.

Lately I had been suspecting REQUEST_PATH was the reason as well. It
would be great if you bumped that up to 4096, so that we could continue
to use released versions instead of having to patch on our own. 4K makes
sense to me. When do you think you can cut a gem with this change?

Also, what do you think about making these values configurable in the
config file? I understand they were hard-coded by design to protect the
server. However, I think it would be nice if they defaulted to these
small values and could be configured flexibly.

Thanks a lot,
Nuo
Hi Eric & All,

> DEF_MAX_LENGTH(REQUEST_URI, 1024 * 12);
> DEF_MAX_LENGTH(FRAGMENT, 1024); /* Don't know if this length is specified somewhere or not */
> DEF_MAX_LENGTH(REQUEST_PATH, 1024);
> DEF_MAX_LENGTH(QUERY_STRING, (1024 * 10));
>
> I will consider upping REQUEST_PATH to 4096 (and REQUEST_URI to 15K)
> since 4096 is a common filesystem PATH_MAX on modern systems.

IE browsers up to and including IE8 have a maximum URL length of 2,083
characters: http://support.microsoft.com/kb/208427. IE9 seemed to allow
up to 5K last time I checked.

There may be other clients (Ruby HTTP clients, bots, traffic analysers,
database clients with limited column space to hold URLs, etc.) that
can't handle long URLs if you take it too far. So practically it's
advisable to keep your URLs within limits.

Having said that, the HTTP/1.1 spec states that servers SHOULD be able
to handle URIs of unbounded length. So yes, unicorn should allow this if
one chooses.

> Given we already allow huge headers and
> Ruby uses quite a bit of memory, I don't think a potential extra 3K
> will negatively impact anybody.

Should there be a limit at all in unicorn? Should it not be assumed this
is configured at the webserver level, like:

  http://wiki.nginx.org/NginxHttpCoreModule#large_client_header_buffers

nginx currently uses a default of 4 buffers of 8K (i.e. a path max of
8K). The webserver takes care of bombing out with a 414 before the
request even reaches unicorn. If traffic does reach unicorn, then
unicorn should be able to handle whatever length it's given.

PS. I'm fine with 4096 as well, of course; I keep to the 2K limit anyway ;-)

Cheers,
Lawrence
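For reference, a minimal sketch of that nginx directive; the values
shown are just nginx's documented defaults, so this is illustrative
rather than a recommendation:

  http {
    # Up to 4 spare 8K buffers for a large request line / header set.
    # A request line longer than one large buffer (8K here) gets a 414
    # from nginx before the request ever reaches unicorn.
    large_client_header_buffers 4 8k;
  }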
Lawrence Pit <lawrence.pit at gmail.com> wrote:
> IE browsers up to and including IE8 have a maximum URL length of 2,083
> characters: http://support.microsoft.com/kb/208427.

Ah, that probably explains why MAX_URI_LENGTH = 2083 in WEBrick
(lib/webrick/httprequest.rb).

> There may be other clients (Ruby HTTP clients, bots, traffic
> analysers, database clients with limited column space to hold URLs,
> etc.) that can't handle long URLs if you take it too far. So
> practically it's advisable to keep your URLs within limits.

Agreed. I'm really hoping no apps out there depend on the
Mongrel/unicorn internal limits to enforce database column lengths :)

> Should there be a limit at all in unicorn? Should it not be assumed
> this is configured at the webserver level, like:
>
> http://wiki.nginx.org/NginxHttpCoreModule#large_client_header_buffers

There should be a limit in unicorn: it's cheap to enforce, and there
could be corner cases (nginx bugs, internal security probes) where it's
helpful. The unicorn parser is also used by servers (Rainbows!) that
expect untrusted/malicious clients without nginx to protect them.
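That WEBrick constant is easy to confirm from irb (output shown as I'd
expect it on a Ruby with stock WEBrick):

  require 'webrick'

  # WEBrick caps the request line at the same 2,083 characters IE does.
  WEBrick::HTTPRequest::MAX_URI_LENGTH # => 2083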
Nuo Yan <yan.nuo at gmail.com> wrote:
> On Apr 11, 2012, at 1:30 PM, Eric Wong wrote:
> > I will consider upping REQUEST_PATH to 4096 (and REQUEST_URI to 15K)
> > since 4096 is a common filesystem PATH_MAX on modern systems.
>
> 4K makes sense to me. When do you think you can cut a gem with this change?

I might wait a week for others to chime in before making an official
release. However, packaging your own gem should be relatively easy; it
is documented in the HACKING file.

> Also, what do you think about making these values configurable in the
> config file? I understand they were hard-coded by design to protect the
> server. However, I think it would be nice if they defaulted to these
> small values and could be configured flexibly.

I'm against adding too many configuration variables: they're difficult
to support (documentation, testing, including testing for corner cases,
error handling, etc.). Too many exposed configuration values scare off
new users. Given how few people have an issue with these values, I
don't think it's worth the effort.
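For anyone patching locally in the meantime, the change under discussion
amounts to one-line bumps in ext/unicorn_http/global_variables.h; a
sketch of the diff, using the 4096/15K values Eric proposed (build and
packaging steps are in the HACKING file):

  --- a/ext/unicorn_http/global_variables.h
  +++ b/ext/unicorn_http/global_variables.h
  -DEF_MAX_LENGTH(REQUEST_URI, 1024 * 12);
  +DEF_MAX_LENGTH(REQUEST_URI, 1024 * 15);
  -DEF_MAX_LENGTH(REQUEST_PATH, 1024);
  +DEF_MAX_LENGTH(REQUEST_PATH, 4096);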
Eric Wong <normalperson at yhbt.net> wrote:
> Lawrence Pit <lawrence.pit at gmail.com> wrote:
> > Should there be a limit at all in unicorn? Should it not be assumed
> > this is configured at the webserver level, like:
> >
> > http://wiki.nginx.org/NginxHttpCoreModule#large_client_header_buffers
>
> There should be a limit in unicorn: it's cheap to enforce, and there
> could be corner cases (nginx bugs, internal security probes) where it's
> helpful. The unicorn parser is also used by servers (Rainbows!) that
> expect untrusted/malicious clients without nginx to protect them.

On the other hand, the _granularity_ of the limits may be unnecessary.
There is already a 112K limit on the overall header size (which IMHO is
really huge). However, that 112K overall limit is tunable in Rainbows!,
because Rainbows! is designed to handle hundreds/thousands of clients in
one process.
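For context, DEF_MAX_LENGTH is a small macro in the parser's C source;
roughly, each use defines a length constant plus a canned error message
for the 4xx response. A simplified sketch from memory of the
Mongrel-derived parser, so treat the exact names and form as an
assumption rather than unicorn's literal source:

  /* Simplified sketch; details may differ from the real
   * ext/unicorn_http/global_variables.h. */
  #define DEF_MAX_LENGTH(N, length) \
    static const size_t MAX_##N##_LENGTH = length; \
    static const char * const MAX_##N##_LENGTH_ERR = \
      "HTTP element " # N " is longer than the " # length " allowed length"

  /* The overall header cap mentioned above: (80 + 32) KB == 112K */
  DEF_MAX_LENGTH(HEADER, (1024 * (80 + 32)));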