I am using mongrel_cluster with mod_proxy_balancer and would like to enable compression (assuming it improves throughput) and limit upload file size. I configured mod_deflate and LimitRequestSize in Apache, but in my trials it looks like the proxied calls bypass those directives (the conf goes below). Is there a way to get this?

-- fxn

# Adapt this .example locally, as usual.
#
# To be included in the main httpd.conf with a line at the bottom like this
#
#   Include /path/to/example/conf/httpd.conf

NameVirtualHost *:80

# Configure the balancer to dispatch to the Mongrel cluster.
<Proxy balancer://example_cluster>
  BalancerMember http://127.0.0.1:3001
  BalancerMember http://127.0.0.1:3002
  BalancerMember http://127.0.0.1:3003
</Proxy>

# Setup the VirtualHost for your Rails application
<VirtualHost *:80>
  ServerAdmin admin at example.com
  ServerName www.example.com
  ServerAlias localhost

  LimitRequestSize 102400

  DocumentRoot /path/to/example/public
  <Directory "/path/to/example/public">
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>

  ProxyPass / balancer://example_cluster/
  ProxyPassReverse / balancer://example_cluster/

  # Setup your Rewrite rules here
  RewriteEngine On

  # Rewrite index to check for static
  RewriteRule ^/$ /index.html [QSA]

  # Send all requests that are not found as existing files to the cluster
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
  RewriteRule ^/(.*)$ balancer://example_cluster%{REQUEST_URI} [P,QSA,L]

  # Deflate
  AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript

  # Error logs
  ErrorLog /path/to/example/log/apache_error_log
  CustomLog /path/to/example/log/apache_access_log combined
</VirtualHost>
On Dec 7, 2006, at 7:03 PM, Xavier Noria wrote:

> I am using mongrel_cluster with mod_proxy_balancer and would like to
> enable compression (assuming it improves throughput) and limit upload
> file size. I configured mod_deflate and LimitRequestSize in Apache,
> but in my trials it looks like the proxied calls bypass those
> directives (the conf goes below).

Heh, sorry: I was doing a telnet to check the response by hand and forgot to add the header that triggers compression. It is working fine with the config I sent (minus the typo I mention below).

> LimitRequestSize 102400

This should read LimitRequestBody, and it still looks like it does not apply to requests proxied to Mongrel. What is the way to set the limit with Apache + mod_proxy_balancer + Mongrel Cluster?

-- fxn
Xavier Noria
2006-Dec-09 10:19 UTC
[Mongrel] max upload size? (was: compress and max upload size?)
On Dec 7, 2006, at 9:54 PM, Xavier Noria wrote:

>> LimitRequestSize 102400
>
> This should read LimitRequestBody, and it still looks like it does not
> apply to requests proxied to Mongrel. What is the way to set the limit
> with Apache + mod_proxy_balancer + Mongrel Cluster?

I have not found a way within Apache, and the constants defined in Mongrel

  /* Defines the maximum allowed lengths for various input elements. */
  DEF_MAX_LENGTH(FIELD_NAME, 256);
  DEF_MAX_LENGTH(FIELD_VALUE, 80 * 1024);
  DEF_MAX_LENGTH(REQUEST_URI, 1024 * 2);
  DEF_MAX_LENGTH(REQUEST_PATH, 1024);
  DEF_MAX_LENGTH(QUERY_STRING, (1024 * 10));
  DEF_MAX_LENGTH(HEADER, (1024 * (80 + 32)));

do not take the request body into account. There is a MAX_BODY somewhere, but that is just a threshold Mongrel uses to decide when to switch from a StringIO to a tempfile when it handles uploads.

I am not sure what to do next; the client wants a limit. Could you accomplish that with lighty or any other web server + balancer?

-- fxn
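The StringIO-versus-tempfile switch mentioned above can be sketched in Ruby. This is a hypothetical illustration of the idea, not Mongrel's actual code: the threshold value and the helper name are assumptions.

```ruby
require 'stringio'
require 'tempfile'

# Hypothetical sketch of the MAX_BODY idea: small request bodies stay in
# memory, larger ones spill to a temporary file on disk. The 80 KB
# threshold here is an assumption for illustration only.
MAX_BODY = 80 * 1024

def body_io_for(content_length)
  if content_length <= MAX_BODY
    StringIO.new                    # keep small uploads in memory
  else
    Tempfile.new('mongrel-body')    # spill large uploads to disk
  end
end
```

Note that either way the whole body is still read, which is why this threshold cannot serve as an upload limit.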
Zed A. Shaw
2006-Dec-09 10:41 UTC
[Mongrel] max upload size? (was: compress and max upload size?)
On Sat, 9 Dec 2006 11:19:45 +0100 Xavier Noria <fxn at hashref.com> wrote:

> On Dec 7, 2006, at 9:54 PM, Xavier Noria wrote:
>
>>> LimitRequestSize 102400
>>
>> This should read LimitRequestBody, and it still looks like it does
>> not apply to requests proxied to Mongrel. What is the way to set the
>> limit with Apache + mod_proxy_balancer + Mongrel Cluster?
>
> I have not found a way within Apache, and the constants defined in
> Mongrel

If you desperately need this then your best bet is to look at how the handler in mongrel_upload_progress is written, modify it to monitor the request and abort anything over your limit, and then attach that to the front of your Rails request. Should be easy if you read the code and follow it.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
http://www.lingr.com/room/3yXhqKbfPy8 -- Come get help.
Hi,

On Don 07.12.2006 21:54, Xavier Noria wrote:

> On Dec 7, 2006, at 7:03 PM, Xavier Noria wrote:
>
>> LimitRequestSize 102400
>
> This should read LimitRequestBody, and it still looks like it does not
> apply to requests proxied to Mongrel. What is the way to set the limit
> with Apache + mod_proxy_balancer + Mongrel Cluster?

Do you mean this option?

http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestbody

If it doesn't work with mod_proxy_* then it could be a bug in Apache; how about asking on the httpd users list?

Hth ;-)
Aleks
On Dec 9, 2006, at 12:01 PM, Aleksandar Lazic wrote:

> Hi,
>
> On Don 07.12.2006 21:54, Xavier Noria wrote:
>> On Dec 7, 2006, at 7:03 PM, Xavier Noria wrote:
>>
>>> LimitRequestSize 102400
>>
>> This should read LimitRequestBody, and it still looks like it does
>> not apply to requests proxied to Mongrel. What is the way to set the
>> limit with Apache + mod_proxy_balancer + Mongrel Cluster?
>
> Do you mean this option?
>
> http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestbody
>
> If it doesn't work with mod_proxy_* then it could be a bug in Apache;
> how about asking on the httpd users list?

Yeah, I tried that yesterday and have got no response so far. Before patching Mongrel: it looks like nginx supports that limit and would be as good a choice for our application as Apache, or even better. I am testing it right now.

-- fxn
On Dec 9, 2006, at 2:38 PM, Xavier Noria wrote:

> Yeah, I tried that yesterday and have got no response so far. Before
> patching Mongrel: it looks like nginx supports that limit and would be
> as good a choice for our application as Apache, or even better. I am
> testing it right now.

Well, it looks like the limit is working with Nginx, but I get very poor performance compared to Apache: same server machine, same (remote) stress machine. Apache is serving about 17 req/s, whereas Nginx is serving about 5 req/s. Since Nginx is known to be fast I bet my config, albeit simple, is somehow wrong. I attach it below in case some experienced eye catches something.

-- fxn

tested with: ab -t 60 -c 5 -H 'Accept-Encoding: gzip' url_to_dynamic_page

Nginx (5 req/s)
---------------

user daemon;

error_log logs/error.log;
pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include conf/mime.types;
    default_type application/octet-stream;

    sendfile on;

    gzip on;
    gzip_proxied any;
    gzip_types text/html text/plain text/xml text/css application/x-javascript;

    client_max_body_size 1M;

    upstream mongrel {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }

    server {
        listen 80;
        server_name www.example.com;
        root /home/oper/www/public;

        location ^~ \.flv$ {
            flv;
        }

        location / {
            if (!-f $request_filename) {
                proxy_pass http://mongrel;
            }
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Apache (17 req/s)
-----------------

NameVirtualHost *:80

# Configure the balancer to dispatch to the Mongrel cluster.
<Proxy balancer://example_cluster>
  BalancerMember http://127.0.0.1:3001
  BalancerMember http://127.0.0.1:3002
  BalancerMember http://127.0.0.1:3003
</Proxy>

# Setup the VirtualHost for your Rails application
<VirtualHost *:80>
  ServerAdmin admin at example.com
  ServerName www.example.com
  ServerAlias localhost

  DocumentRoot /path/to/example/public
  <Directory "/path/to/example/public">
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>

  ProxyPass / balancer://example_cluster/
  ProxyPassReverse / balancer://example_cluster/

  # Setup your Rewrite rules here
  RewriteEngine On

  # Rewrite index to check for static
  RewriteRule ^/$ /index.html [QSA]

  # Send all requests that are not found as existing files to the cluster
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
  RewriteRule ^/(.*)$ balancer://example_cluster%{REQUEST_URI} [P,QSA,L]

  # Deflate
  AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript

  # Error logs
  ErrorLog /path/to/example/log/apache_error_log
  CustomLog /path/to/example/log/apache_access_log combined
</VirtualHost>
Hi,

On Sam 09.12.2006 16:57, Xavier Noria wrote:

> Well, it looks like the limit is working with Nginx, but I get very
> poor performance compared to Apache: same server machine, same
> (remote) stress machine. Apache is serving about 17 req/s, whereas
> Nginx is serving about 5 req/s. Since Nginx is known to be fast I bet
> my config, albeit simple, is somehow wrong. I attach it below in case
> some experienced eye catches something.

This sounds very bad?! Sorry if I have overlooked it, but on which OS, HW and so on do you run both servers?!

> tested with: ab -t 60 -c 5 -H 'Accept-Encoding: gzip'
> url_to_dynamic_page
>
> Nginx (5 req/s)
>
> user daemon;
>
> error_log logs/error.log;
> pid logs/nginx.pid;

Just for my curiosity, please can you add:

    worker_processes 4;

> events {
>     worker_connections 1024;
> }
>
> http {
>     include conf/mime.types;
>     default_type application/octet-stream;
>
>     sendfile on;
>
>     gzip on;
>     gzip_proxied any;

Can you try to add this to your config, after the test with worker_processes ;-):

    gzip_min_length 1100;
    gzip_buffers 4 8k;

Do you have any errors in the error log?!

Hth && regards
Aleks
Aleksandar Lazic
2006-Dec-09 16:34 UTC
[Mongrel] [addenum] Re: compress and max upload size?
On Sam 09.12.2006 16:57, Xavier Noria wrote:

> On Dec 9, 2006, at 2:38 PM, Xavier Noria wrote:

[snipp]

>     location / {
>         if (!-f $request_filename) {
>             proxy_pass http://mongrel;

What do you get if you turn off proxy_buffering?

http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffering

Regards
Aleks
On Dec 9, 2006, at 4:57 PM, Xavier Noria wrote:

> Well, it looks like the limit is working with Nginx, but I get very
> poor performance compared to Apache: same server machine, same
> (remote) stress machine. Apache is serving about 17 req/s, whereas
> Nginx is serving about 5 req/s. Since Nginx is known to be fast I bet
> my config, albeit simple, is somehow wrong. I attach it below in case
> some experienced eye catches something.

I think I got something. The Content-Length reported by ab for Apache is 4K, and for Nginx it is 19K, so my guess is that compression is not being triggered. If I use wget the content is not compressed either, but it comes compressed if I use Firefox.

That would explain the difference. I will try another stress tool then.

-- fxn
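The guess that compression was not being triggered can be checked mechanically: gzip streams start with the magic bytes 0x1f 0x8b, so a saved response body can be sniffed. A small Ruby sketch of the check, with a made-up sample page standing in for the real response:

```ruby
require 'zlib'
require 'stringio'

# A gzip stream always begins with the two magic bytes 0x1f 0x8b, so a
# body can be classified without decompressing it.
def gzipped?(body)
  body.bytesize >= 2 && body.bytes[0] == 0x1f && body.bytes[1] == 0x8b
end

plain = '<html>' + 'x' * 19_000 + '</html>'  # stand-in for the ~19K page

# Produce a gzipped version the way a server with gzip enabled would.
buf = StringIO.new
gz = Zlib::GzipWriter.new(buf)
gz.write(plain)
gz.close
compressed = buf.string
```

Running `gzipped?` on the bodies ab and wget saved would have shown immediately which server was actually compressing.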
On Dec 9, 2006, at 7:15 PM, Xavier Noria wrote:

> On Dec 9, 2006, at 4:57 PM, Xavier Noria wrote:
>
>> Well, it looks like the limit is working with Nginx, but I get very
>> poor performance compared to Apache: same server machine, same
>> (remote) stress machine. Apache is serving about 17 req/s, whereas
>> Nginx is serving about 5 req/s. Since Nginx is known to be fast I bet
>> my config, albeit simple, is somehow wrong. I attach it below in case
>> some experienced eye catches something.
>
> I think I got something. The Content-Length reported by ab for Apache
> is 4K, and for Nginx it is 19K, so my guess is that compression is not
> being triggered. If I use wget the content is not compressed either,
> but it comes compressed if I use Firefox.
>
> That would explain the difference. I will try another stress tool
> then.

Indeed, that was the problem.

Once I've got compressed content, we go up to about 10 req/s, still not close to the 17 req/s of Apache. I don't think I have evidence of the speed of Nginx + mongrel_cluster, for my particular application at least, so I'll get back to Apache 2.2.3 + mod_proxy_balancer.

-- fxn

The Nginx config is:

user daemon;

error_log logs/error.log;
pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include conf/mime.types;
    default_type application/octet-stream;

    gzip on;
    gzip_types text/plain text/css application/x-javascript text/xml;
    gzip_proxied any;
    gzip_comp_level 2;
    gzip_http_version 1.0; # for ab and wget

    upstream mongrel {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }

    # based on http://brainspl.at/articles/2006/09/12/new-nginx-conf-with-rails-caching
    server {
        listen 80;
        root /home/oper/www/public;

        client_max_body_size 1M;

        # this rewrites all the requests to the maintenance.html
        # page if it exists in the doc root. This is for capistrano's
        # disable web task
        if (-f $document_root/maintenance.html) {
            rewrite ^(.*)$ /maintenance.html last;
            break;
        }

        location ^~ \.flv$ {
            flv;
        }

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect false;

            if (!-f $request_filename) {
                proxy_pass http://mongrel;
                break;
            }
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
On Dec 9, 2006, at 11:05 AM, Xavier Noria wrote:

> On Dec 9, 2006, at 7:15 PM, Xavier Noria wrote:
>
>> On Dec 9, 2006, at 4:57 PM, Xavier Noria wrote:
>>
>>> Well, it looks like the limit is working with Nginx, but I get very
>>> poor performance compared to Apache: same server machine, same
>>> (remote) stress machine. Apache is serving about 17 req/s, whereas
>>> Nginx is serving about 5 req/s. Since Nginx is known to be fast I
>>> bet my config, albeit simple, is somehow wrong. I attach it below
>>> in case some experienced eye catches something.
>>
>> I think I got something. The Content-Length reported by ab for Apache
>> is 4K, and for Nginx it is 19K, so my guess is that compression is
>> not being triggered. If I use wget the content is not compressed
>> either, but it comes compressed if I use Firefox.
>>
>> That would explain the difference. I will try another stress tool
>> then.
>
> Indeed, that was the problem.
>
> Once I've got compressed content, we go up to about 10 req/s, still
> not close to the 17 req/s of Apache. I don't think I have evidence of
> the speed of Nginx + mongrel_cluster, for my particular application at
> least, so I'll get back to Apache 2.2.3 + mod_proxy_balancer.
>
> -- fxn

<snip>

I think the reason you are seeing a difference between Apache and nginx in this situation is that your Apache config is rewriting stuff so that Apache serves all static content. With your nginx config you are sending all requests, including those for static files, to Mongrel. You need the appropriate rewrite rules in nginx.conf in order to make a fair comparison.

Here is a complete nginx config for use with Mongrel. It does all the rewrites to assure static content is served by nginx. Try this one out for a fairer comparison:

http://brainspl.at/nginx.conf.txt

Cheers-

--
Ezra Zygmuntowicz
-- Lead Rails Evangelist
-- ez at engineyard.com
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)
On Dec 9, 2006, at 9:37 PM, Ezra Zygmuntowicz wrote:

> I think the reason you are seeing a difference between Apache and
> nginx in this situation is that your Apache config is rewriting stuff
> so that Apache serves all static content. With your nginx config you
> are sending all requests, including those for static files, to
> Mongrel. You need the appropriate rewrite rules in nginx.conf in
> order to make a fair comparison.
>
> Here is a complete nginx config for use with Mongrel. It does all the
> rewrites to assure static content is served by nginx. Try this one
> out for a fairer comparison:
>
> http://brainspl.at/nginx.conf.txt

Thank you Ezra. I am not sure that is the case, however; my config file is based on that one. Both are very similar because of that, and in particular both have the test

    if (!-f $request_filename) {
        proxy_pass http://mongrel;
        break;
    }

-- fxn
On Sam 09.12.2006 20:05, Xavier Noria wrote:

> Once I've got compressed content, we go up to about 10 req/s, still
> not close to the 17 req/s of Apache. I don't think I have evidence of
> the speed of Nginx + mongrel_cluster, for my particular application at
> least, so I'll get back to Apache 2.2.3 + mod_proxy_balancer.

Hm, with which options did you build nginx? On which OS, HW, ...? Have you tried the worker_processes option?

> The Nginx config is:
>
> user daemon;
>
> error_log logs/error.log;
> pid logs/nginx.pid;
>
> events {
>     worker_connections 1024;
> }
>
> http {
>     include conf/mime.types;
>     default_type application/octet-stream;

Please add:

    sendfile on;

>     gzip on;
>     gzip_types text/plain text/css application/x-javascript text/xml;
>     gzip_proxied any;
>     gzip_comp_level 2;
      ^^^^^^^^^^^^^^^^^^
Please comment this out.

>     gzip_http_version 1.0; # for ab and wget

Please can you tell me what's in the error log!

Thanks
aleks
On Dec 9, 2006, at 10:16 PM, Aleksandar Lazic wrote:

> On Sam 09.12.2006 20:05, Xavier Noria wrote:
>>
>> Once I've got compressed content, we go up to about 10 req/s, still
>> not close to the 17 req/s of Apache. I don't think I have evidence of
>> the speed of Nginx + mongrel_cluster, for my particular application
>> at least, so I'll get back to Apache 2.2.3 + mod_proxy_balancer.
>
> Hm, with which options did you build nginx?

I used --with-http_ssl_module --with-http_flv_module.

> On which OS, HW, ...?

That's a Debian

  $ uname -a
  Linux machine.dedi.acens.net 2.6.8-3-686-smp #1 \
    SMP Thu Sep 7 04:39:15 UTC 2006 i686 GNU/Linux

> Have you tried the worker_processes option?

Yes, I tried all the things proposed in your email, but there was no noticeable difference.

> Please add:
>     sendfile on;

That was on most of the time; I think I removed it towards the end doing configuration combinatorics :-).

>>> gzip on;
>>> gzip_types text/plain text/css application/x-javascript text/xml;
>>> gzip_proxied any;
>>> gzip_comp_level 2;
>>   ^^^^^^^^^^^^^^^^^^
> Please comment this out.

With the default compression level I get a response which is bigger than the one sent by Apache. Since I am comparing both setups I tweaked that documented parameter to match them.

>> gzip_http_version 1.0; # for ab and wget
>
> Please can you tell me what's in the error log!

Sure, there's nothing. There were some traces when in my trials I exceeded client_max_body_size on purpose. When the benchmarks run there's nothing printed to the error log.

-- fxn
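The trade-off behind the gzip_comp_level tweak discussed above can be seen directly with Ruby's zlib bindings; the sample text and the levels compared are arbitrary choices for illustration.

```ruby
require 'zlib'

# Repetitive text, so every compression level has something to work with.
sample = 'the quick brown fox jumps over the lazy dog. ' * 500

# Deflate at different effort levels, mirroring nginx's gzip_comp_level.
fast   = Zlib::Deflate.deflate(sample, 1)                      # lowest effort
level2 = Zlib::Deflate.deflate(sample, 2)                      # the setting above
best   = Zlib::Deflate.deflate(sample, Zlib::BEST_COMPRESSION) # level 9

[fast, level2, best].each { |out| puts out.bytesize }
```

Higher levels spend more CPU for (usually slightly) smaller output, which is why matching the level between Apache and Nginx matters when comparing req/s.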
Aleksandar Lazic
2006-Dec-09 22:05 UTC
[Mongrel] nginx conf (was: Re: compress and max upload size?)
On Sam 09.12.2006 22:50, Xavier Noria wrote:

> On Dec 9, 2006, at 10:16 PM, Aleksandar Lazic wrote:
>
>> Hm, with which options did you build nginx?
>
> I used --with-http_ssl_module --with-http_flv_module.
>
>> On which OS, HW, ...?
>
> That's a Debian
>
>   $ uname -a
>   Linux machine.dedi.acens.net 2.6.8-3-686-smp #1 \
>     SMP Thu Sep 7 04:39:15 UTC 2006 i686 GNU/Linux

Thanks, you should use the worker_processes option if you want to use the other CPUs ;-)

>> Have you tried the worker_processes option?
>
> Yes, I tried all the things proposed in your email, but there was no
> noticeable difference.

Aha?!

>> Please add:
>>     sendfile on;
>
> That was on most of the time; I think I removed it towards the end
> doing configuration combinatorics :-).

I think if you haven't had problems just leave it on; maybe someone else on this list has had a bad experience with this option?

>>>> gzip on;
>>>> gzip_types text/plain text/css application/x-javascript text/xml;
>>>> gzip_proxied any;
>>>> gzip_comp_level 2;
>>>   ^^^^^^^^^^^^^^^^^^
>> Please comment this out.
>
> With the default compression level I get a response which is bigger
> than the one sent by Apache. Since I am comparing both setups I
> tweaked that documented parameter to match them.

Ah, ok, thanks for the explanation.

>>> gzip_http_version 1.0; # for ab and wget
>>
>> Please can you tell me what's in the error log!
>
> Sure, there's nothing. There were some traces when in my trials I
> exceeded client_max_body_size on purpose. When the benchmarks run
> there's nothing printed to the error log.

Thanks ;-)

Regards
Aleks
You need to have your stress testing tool send the following request header to trigger compressed content:

    Accept-Encoding: gzip,deflate

For wget, that's:

    wget --header="Accept-Encoding: gzip,deflate"

=Will

Xavier Noria wrote:

> On Dec 9, 2006, at 4:57 PM, Xavier Noria wrote:
>
>> Well, it looks like the limit is working with Nginx, but I get very
>> poor performance compared to Apache: same server machine, same
>> (remote) stress machine. Apache is serving about 17 req/s, whereas
>> Nginx is serving about 5 req/s. Since Nginx is known to be fast I bet
>> my config, albeit simple, is somehow wrong. I attach it below in case
>> some experienced eye catches something.
>
> I think I got something. The Content-Length reported by ab for Apache
> is 4K, and for Nginx it is 19K, so my guess is that compression is not
> being triggered. If I use wget the content is not compressed either,
> but it comes compressed if I use Firefox.
>
> That would explain the difference. I will try another stress tool
> then.
>
> -- fxn

_______________________________________________
Mongrel-users mailing list
Mongrel-users at rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
Xavier Noria
2006-Dec-10 10:03 UTC
[Mongrel] [SUMMARY] max upload size? (was: compress and max upload size?)
Aleksandar Lazic and I spent a couple of hours on IRC yesterday night playing around with different configurations. We were comparing the setups discussed in this thread: Apache 2.2.3 + mod_proxy_balancer + mod_deflate versus Nginx, both servers proxying dynamic pages to 3 Mongrels.

The test page is a dynamic page, requested hundreds of times with ab:

    ab [-k] -t 60 -c 5 -H 'Accept-Encoding: gzip' URL

Both servers run on the same machine, alternately as needed.

We got much better performance than before in Nginx by increasing the worker processes and disabling buffering in the proxy (proxy_buffering off). On Aleksandar's machine that raised the speed to almost 17 req/s, versus almost 19 req/s for Apache (always as a mean over a couple of runs, you know). The compression level was the same in both servers.

If we disabled compression, both Apache and Nginx served that page at about the same speed, 11 req/s, so it looks like the small difference comes from the usage of zlib somehow. Aleksandar has already written to the Nginx mailing list about it.

-- fxn
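For reference, the tuning that closed most of the gap can be collected into one nginx fragment. This is an illustrative sketch assembled from the settings mentioned in this thread, not a complete or recommended config; values are examples, and the rest of a working configuration (events block, mime types, server names, and so on) is omitted.

```nginx
# Illustrative consolidation of the tuning discussed in this thread.
worker_processes  4;               # use the other CPUs, per Aleksandar's advice

http {
    gzip             on;
    gzip_comp_level  2;            # matched to Apache's output size above

    upstream mongrel {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }

    server {
        listen 80;
        location / {
            proxy_buffering off;   # the change that raised req/s here
            proxy_pass http://mongrel;
        }
    }
}
```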