Andres Rojas <arojas at oogalabs.com> wrote:
> Hi all, we've recently migrated from a mongrel setup to a Unicorn
> setup on most of our projects and we're trying to figure out how best
> to get stats about queued requests so we can tune accordingly.
>
> Our old setup:
> Hardware load balancer -----> Virtual Machine -----> Nginx ----->
> HAProxy -----> Mongrel
>
>
> New setup:
> Hardware load balancer -----> Virtual Machine -----> Nginx -----> Unicorn!
>
>
> The nice thing about HAProxy is that it gives you an easy way of
> seeing how many requests are queued up, which helps us determine how
> many slices we need, etc. With the loss of HAProxy in our Unicorn
> setup, I'm curious as to how best to determine this. I've looked
> around in the mailing list and haven't come across much. Any help
> would be much appreciated.
Hi Andres,
You don't *have* to take HAProxy out if you really want to see the
queue when using Unicorn :) But yes, the extra data copies with
HAProxy will hurt performance slightly, and it's one more thing to manage.
The obvious way would be to send occasional monitoring requests to a
"hello world" endpoint in your app, and then compare Unicorn response
times (via the X-Runtime header or your application log) with the
actual time seen by the client (from the same LAN):

  time curl -sI http://example.com/_test | grep '^X-Runtime:'
This lets you notice bottlenecks in the entire stack, not just
the nginx <-> unicorn path. Most folks I know do this (and monitor
overall server load).
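If you want to automate that comparison, something like the following
could work. A rough sketch, assuming your app exposes a cheap /_test
endpoint and sets the X-Runtime header (Rack::Runtime-style middleware,
which Rails enables by default):

```ruby
require 'net/http'
require 'uri'

# Estimate time spent outside the app (queueing, nginx, network) by
# subtracting the app-reported X-Runtime from the wall-clock time of a
# full request.  The /_test path and the X-Runtime header are
# assumptions about your app -- adjust both for your setup.
def queue_delay(url)
  uri = URI(url)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  res = Net::HTTP.get_response(uri)
  wall = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  wall - res['X-Runtime'].to_f # seconds not accounted for by the app
end
```

Run it periodically from a host on the same LAN and graph the result; a
growing gap between wall time and X-Runtime usually means requests are
sitting in the listen queue.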
Another possible way (untested with real traffic) to get rough data for
the nginx <-> unicorn path is to have two listeners in Unicorn: a
primary with a small backlog and a backup with a large backlog:
------------ unicorn config ------------
listen "/tmp/.primary", :backlog => 5
listen "/tmp/.backup", :backlog => 1024
Then watch the nginx error log for errors going to the primary
socket. The backup socket will be used whenever errors happen on the
primary socket due to the listen queue being full.
------------ nginx config ------------
error_log /tmp/nginx.error.log info;
upstream app_server {
  server unix:/tmp/.primary fail_timeout=0;
  server unix:/tmp/.backup fail_timeout=0 backup;
}
If you see lots of errors going to .primary, then you can try
increasing the backlog for that socket to something you're more
comfortable with.
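To quantify the overflows, one could count the failover errors in the
log. A sketch, assuming nginx's usual "connect() to ... failed" message
format when a unix-socket listen queue is full (verify the exact
wording against your own error log first):

```shell
# Count connect() failures on the primary socket; each one indicates a
# request that overflowed the small backlog and fell over to the backup.
# The message text is an assumption -- check your own error log.
grep -c 'connect() to unix:/tmp/.primary failed' /tmp/nginx.error.log
```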
--
Eric Wong