Has anyone tried to synchronously restart their unicorns, to ensure that things restart OK?

I imagine I could write a script that sent USR2 and then watched the log for a successful restart before exiting, but I dream there is something more MAGICAL

Yours in mythical web servers,

-jamie
Jamie,

Here is a sample unicorn config file that automatically sends the old master process a QUIT signal once the new master process successfully boots and starts forking (which is what I assume you are asking about). We get zero-downtime deploys using this method, and changes typically propagate live in 5-10 seconds. If the new master process fails to boot, you can tail the unicorn.stdout/err.log files to see why.

http://pastie.org/1129610

Most of that configuration file came from a GitHub blog post about unicorn (you can try googling for it). I also added a section that writes out PID files for the workers so you can monitor their memory usage (and send them QUIT signals when they exceed the limit). We do that via god.

Hope that helps,

Clifton

On Tue, Aug 31, 2010 at 1:30 PM, Jamie Wilkinson <jamie at tramchase.com> wrote:
> Has anyone tried to synchronously restart their unicorns, to ensure that things restart OK?
>
> I imagine I could write a script that sent USR2 and then watched the log for a successful restart before exiting, but I dream there is something more MAGICAL
>
> Yours in mythical web servers,
>
> -jamie
>
> _______________________________________________
> Unicorn mailing list - mongrel-unicorn at rubyforge.org
> http://rubyforge.org/mailman/listinfo/mongrel-unicorn
> Do not quote signatures (like this one) or top post when replying
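In case the pastie link above ever rots: the pattern Clifton is describing looks roughly like the following sketch, modeled on the widely circulated GitHub unicorn post. Paths, worker counts, and the monitoring tool are placeholders, not his actual file.

```ruby
# unicorn.rb -- sketch of a zero-downtime config; all paths are placeholders.
worker_processes 4
working_directory "/var/www/app/current"
listen "/var/www/app/shared/sockets/unicorn.sock", backlog: 64
pid "/var/www/app/shared/pids/unicorn.pid"
stdout_path "/var/www/app/shared/log/unicorn.stdout.log"
stderr_path "/var/www/app/shared/log/unicorn.stderr.log"
preload_app true

before_fork do |server, worker|
  # On USR2, unicorn renames the old master's pid file to <pid>.oldbin.
  # Once the *new* master starts forking workers, QUIT the old master so
  # the handover completes without dropping requests.
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      Process.kill(:QUIT, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # old master already gone; nothing to do
    end
  end
end

after_fork do |server, worker|
  # Per-worker pid files so an external monitor (god, monit, ...) can
  # watch memory use and QUIT workers that exceed a limit.
  File.write("/var/www/app/shared/pids/unicorn.worker.#{worker.nr}.pid",
             Process.pid)
end
```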
On Aug 31, 2010, at 12:08 PM, Clifton King wrote:

> If the new master process fails at booting, you can tail the
> unicorn.stdout/err.log files to see why.

I should clarify... the above is exactly what I'm trying to avoid. That is: how do you know your new master failed to boot unless you are actively tailing the logs?

It is extremely infrequent that our unicorns fail to start, but when they do we sometimes don't notice for quite some time. Our unicorns also restart so quickly that it is no trouble to do the restarts synchronously during deployment and trade speed for peace of mind.

I will probably just replace our basic `kill -USR2` with a small script that sends the signal and doesn't exit until the pidfile handover is complete. I'll be sure to share my results in case anyone else finds this useful.

-jamie

http://jamiedubs.com | http://fffff.at
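The small script Jamie describes could be as simple as the sketch below: send USR2, then refuse to exit until the pid file names a new master. The pid-file path, polling interval, and 60-second timeout are assumptions about a typical setup, not Jamie's actual script.

```ruby
# sync_restart.rb -- send USR2 and block until the pid file changes hands.

# Read an integer pid from a file; nil if missing or unparseable.
def read_pid(path)
  Integer(File.read(path).strip)
rescue Errno::ENOENT, ArgumentError
  nil
end

# Poll the pid file until it names a process other than old_pid.
# Returns the new master's pid, or nil if the handover never happens.
def wait_for_handover(path, old_pid, timeout)
  deadline = Time.now + timeout
  until Time.now > deadline
    pid = read_pid(path)
    return pid if pid && pid != old_pid
    sleep 1
  end
  nil
end

# Usage: ruby sync_restart.rb /path/to/unicorn.pid
if (pid_file = ARGV[0])
  old_pid = read_pid(pid_file) or abort "no pid in #{pid_file}"
  Process.kill(:USR2, old_pid)
  if (new_pid = wait_for_handover(pid_file, old_pid, 60))
    puts "handover complete: #{old_pid} -> #{new_pid}"
  else
    abort "new master did not take over; check the unicorn stderr log"
  end
end
```

Exiting nonzero via `abort` means a deploy script (or Capistrano task) fails loudly instead of silently leaving the old master running.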
Jamie,

Check that the new unicorn master process has a different PID than the old one. You could have a script that sleeps for X seconds after the deploy, checks the PID, and alerts you with a tail of the stdout/stderr log files if it still matches the old one. I personally run "ps aux | grep unicorn" a few times during the process if any of the changes being deployed make me wary.

Clifton

On Tue, Aug 31, 2010 at 3:08 PM, Jamie Wilkinson <jamie at tramchase.com> wrote:
> I should clarify... the above is exactly what I'm trying to avoid. i.e. how do you know if your new master failed to boot unless you are actively tailing the logs?
On Tue, Aug 31, 2010 at 4:08 PM, Jamie Wilkinson <jamie at tramchase.com> wrote:
> I should clarify... the above is exactly what I'm trying to avoid. i.e. how do you know if your new master failed to boot unless you are actively tailing the logs?

Well, first, you can specify a .pid file in your Unicorn configuration file, then look at it before and a few seconds after the USR2 signal to see whether the process IDs are different. Unicorn's pretty smart about maintaining that file, and it moves the old one aside until you kill the old process.

Second... I'm confused as to why you think tailing the logs is something to be avoided. Isn't finding out status what logs are *for?* You could certainly automate the inspection and have some process that emails you an alarm if things don't look right.

Regardless of how you do it, if you're hoping to get away with frequent automated deploys without *some* verification that requests are working, you're taking big risks. Even if your unicorn process runs, you could still have problems at the application level throwing exceptions at every user.

--
Have Fun,
   Steve Eley (sfeley at gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
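One way to automate the "are requests actually working?" check Steve mentions is a cheap HTTP smoke test after the restart. The `/health` path below is an assumption; use whatever URL your app can answer quickly.

```ruby
require "net/http"
require "uri"

# Returns true only if the given URL answers with a 2xx within the
# timeout; any connection error, timeout, or non-success status is false.
def healthy?(url, timeout: 5)
  uri = URI(url)
  res = Net::HTTP.start(uri.host, uri.port,
                        open_timeout: timeout, read_timeout: timeout) do |http|
    http.get(uri.request_uri)
  end
  res.is_a?(Net::HTTPSuccess)
rescue StandardError
  false
end

# e.g. after the pid handover:
#   healthy?("http://127.0.0.1:8080/health") or page somebody
```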
On Tue, Aug 31, 2010 at 3:08 PM, Clifton King <cliftonk at gmail.com> wrote:
> http://pastie.org/1129610
>
> Most of that configuration file was from a github blog post about
> unicorn (you can try googling for it).

Cool. In that spirit, here's the config file I use with Vlad the Deployer in all of my Rails apps:

http://gist.github.com/559710

Four Unicorn-related tasks are included:

* start_unicorn: Calls the unicorn server with the path to the appropriate config/unicorn.rb file.
* reload_unicorn: Sends HUP for a graceful reload (and falls back to start_unicorn if any status errors come back).
* kick_unicorn: Sends USR2 for a synchronous restart. If successful, sends WINCH to the old process and then QUIT after one minute (to allow time for testing and fallback if necessary).
* rescue_unicorn: Recovers from a failed kick_unicorn if necessary. Sends HUP to revive the old master and QUITs the new one.

I've been meaning to bundle this into a Vlad plugin gem, but haven't gotten around to it yet. In the meantime, here it is in case anyone would find it useful.

(I also have Git automation in there that reads the branch name for the environment. I.e., I have branches for 'staging', 'production', etc. -- and named users and server subdomains for them, too.)

--
Have Fun,
   Steve Eley (sfeley at gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
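Stripped of the Vlad/Rake plumbing, the kick/rescue signal dance Steve describes amounts to roughly the sketch below. The class name, pid-file handling, and injectable `killer` are illustrative assumptions, not the gist's actual code; it relies on unicorn renaming the old master's pid file to `<pid_file>.oldbin` on USR2.

```ruby
# Hypothetical helper wrapping the kick_unicorn / rescue_unicorn signal
# sequences. `killer` is injectable so the signal plan can be tested
# without real processes.
class UnicornKicker
  def initialize(pid_file, killer: Process.method(:kill), grace: 60)
    @pid_file = pid_file
    @kill = killer
    @grace = grace
  end

  def read_pid(path)
    Integer(File.read(path).strip)
  end

  # kick_unicorn: USR2 boots a new master alongside the old one; after
  # the grace period (for smoke testing), WINCH winds down the old
  # workers and QUIT retires the old master.
  def kick
    @kill.call(:USR2, read_pid(@pid_file))
    sleep @grace
    old = read_pid("#{@pid_file}.oldbin")
    @kill.call(:WINCH, old)
    @kill.call(:QUIT, old)
  end

  # rescue_unicorn: HUP revives the old master's workers and QUIT
  # removes the half-started new master.
  def rescue_kick
    @kill.call(:HUP, read_pid("#{@pid_file}.oldbin"))
    @kill.call(:QUIT, read_pid(@pid_file))
  end
end
```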
I totally get what Jamie is asking for; I'm in the same boat. I think Jamie was saying he wants to avoid tailing the log while deploying new code. The only thing you want to know is 0 for success or -1 for failure; only in the case of -1 would you make the effort to have your brain waste time parsing log data.

Jamie, I was thinking of modifying my unicorn init.d script along the lines of this suggested nginx init.d script:

http://wiki.nginx.org/Nginx-init-ubuntu

See the quietupgrade() function. As Unicorn follows the nginx signal patterns, it'll be mostly a copy-paste, I think.

Cheers,
Lawrence

On Tue, Aug 31, 2010 at 4:08 PM, Steve Eley wrote:
> Second... I'm confused as to why you think tailing the logs is
> something to be avoided. Isn't finding out status what logs are
> *for?* You could certainly automate the inspection and have some
> process that emails you with an alarm if things don't look right.