Matte Edens
2007-Apr-17 04:09 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
Hey folks. Sorry for the SUPER long email, but if you've been experiencing the same problems restarting your mongrel cluster with Capistrano, I have two solutions that have worked for me and I'm pretty sure will work for you as well.

THE PROBLEM

I was having trouble restarting my clusters using Capistrano. I've seen this come up before on the mailing list, and looking through the archive I haven't been able to find a suitable answer or fix. All my machines are updated to the latest mongrel and mongrel_cluster gems, 1.0.1 and 1.0.1.1 respectively.

Running a "cap restart" runs the correct command to restart the cluster (edited for brevity):

  sudo mongrel_rails cluster::restart -C [valid path to config] --clean

This works when you are sitting in your Rails app root. However, Capistrano runs its commands from the ssh user's home directory. I ran that same command from there and got this dreaded error output:

/*** begin err output ***/
already stopped port 8000
already stopped port 8001
starting port 8000
!!! PID file log/mongrel.8000.pid already exists. Mongrel could be running already. Check your log/mongrel.8000.log for errors.
!!! Exiting with error. You must stop mongrel and clear the .pid before I'll attempt a start.
starting port 8001
!!! PID file log/mongrel.8001.pid already exists. Mongrel could be running already. Check your log/mongrel.8001.log for errors.
!!! Exiting with error. You must stop mongrel and clear the .pid before I'll attempt a start.
/*** end err output ***/

What was brought up earlier [1] was that mongrel appears not to change the working directory to the one specified in the config file. A patch was submitted [2] but, by my reckoning, has not been applied and released. However, I believe that patch may not be necessary, because according to my research the problem isn't with mongrel at all. It's with mongrel_cluster - no offense, Bradley. :)

I've found two issues. One, I believe, causes the other.

1) The basic problem is that the "start" and "stop" commands, when they scan for existing pid files, are not run from the working directory specified by the :cwd setting in the mongrel_cluster config file. mongrel_cluster does not use the working directory setting until it is past that point and finally calling the mongrel_rails command. Thus, it isn't going to find the pid files if you are also susceptible to problem #2.

2) A relative :pid_file setting in the mongrel_cluster config. If you're like me, your :pid_file setting is "log/mongrel.pid". Using a relative path like that is supposed to be resolved against the value of the :cwd setting, but mongrel_cluster is not applying the :cwd setting when parsing the :pid_file setting into its internal pid file variables.

SOLUTIONS... FINALLY!! :)

1) The solution to the first problem is to patch mongrel_cluster/init.rb: add some directory change commands, like the "status" command uses. I've uploaded my patch to Pastie at the address below.

<http://pastie.caboo.se/54340>

2) Don't use relative paths for the :pid_file setting. Once I changed to an absolute path of, for example, "/www/app/shared/log/mongrel.pid", mongrel_cluster correctly found my pid files. Solution #1 is NOT needed in this instance.

Both solutions require the user to perform an action, but I believe the first solution requires fewer steps for the end user.
Instead of updating ALL of your mongrel_cluster config files, for every single app you're running, just update to the patched mongrel_cluster.

I suppose there's a THIRD solution, and that's to patch the "read_options" function in init.rb. Lines 28 and 29 need to be updated to prepend @options["cwd"] if @options["pid_file"] or @options["log_file"] are relative paths.

Am I off base with all this? Let me know. And thanx for reading all the way to the end. :)

matte - matte at silent-e.com

1: <http://rubyforge.org/pipermail/mongrel-users/2007-March/003341.html>
2: <http://rubyforge.org/pipermail/mongrel-users/2007-March/003343.html>
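A sketch of that third option, based only on Matte's description above (the helper name absolutize_paths is hypothetical, and the released read_options may be structured differently):

# Sketch: resolve relative pid/log paths against :cwd before any
# File.* call uses them. Option keys follow the thread's config names.
def absolutize_paths(options)
  ["pid_file", "log_file"].each do |key|
    path = options[key]
    # Treat any path not starting with "/" as relative to :cwd.
    options[key] = File.join(options["cwd"], path) if path && path !~ %r{^/}
  end
  options
end

absolutize_paths("cwd" => "/www/app/current", "pid_file" => "log/mongrel.pid")
# => {"cwd"=>"/www/app/current", "pid_file"=>"/www/app/current/log/mongrel.pid"}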
Wayne E. Seguin
2007-Apr-17 10:24 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
Matte,

On Apr 17, 2007, at 00:09, Matte Edens wrote:

> "sudo mongrel_rails cluster::restart -C [valid path to config] --clean"

Is this really a problem with mongrel cluster?

A "fourth" solution is to simply modify your restart task in your Capistrano recipe:

task :restart, :roles => :app do
  run (or sudo) "cd #{current_path}; mongrel_rails cluster::restart -C [valid path to config] --clean"
end

~Wayne
Michael Steinfeld
2007-Apr-17 15:36 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
Well, I have fought this so many times, and another issue that comes up is if you had fastcgi previously in use before migrating to mongrel. I have spent countless hours trying to deal with the mongrel pids and stopping and starting after each deploy. I lost the battle. I can start/stop manually using the /etc/init.d/mongrel_cluster script without fail.

I am attempting to offer a fifth option here, which I have not yet put into effect but plan to if I can't find any other solution. Wayne's solution has not worked for me, and setting the full path in mongrel_cluster.yml has not worked either. This is not the most advised approach, I am sure, but it will work with a little tweaking. Then again, I don't want my app to be down during each deploy... *sighs

---------------
desc "Kill all the pids in case there are some zombies and remove the .pid files"
task :before_before_deploy, :roles => :app do
  run "sudo kill -9 `ps -ef | grep mongrel | egrep -v grep | awk '{print $2}'`"
  run "cd #{previous_release}/logs && sudo rm -rf *.pid"
end

task :after_after_deploy, :roles => :app do
  run "sudo /etc/init.d/mongrel_cluster start"
end
--------------

This is probably overkill, but I ran out of patience. Let me know what you guys think.

--mike

On 4/17/07, Wayne E. Seguin <wayneeseguin at gmail.com> wrote:
> Matte,
>
> On Apr 17, 2007, at 00:09, Matte Edens wrote:
> > "sudo mongrel_rails cluster::restart -C [valid path to config] --clean"
>
> Is this really a problem with mongrel cluster?
>
> A "fourth" solution is to simply modify your restart task in your
> Capistrano recipe:
>
> task :restart, :roles => :app do
>   run (or sudo) "cd #{current_path}; mongrel_rails cluster::restart -C [valid path to config] --clean"
> end
>
> ~Wayne
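A gentler variant of the same idea - killing only the pids recorded in the cluster's own pid files instead of everything matching "mongrel" - might look like the sketch below. The pid-file location under shared_path is an assumption; point it wherever your cluster actually writes them.

desc "Kill the pids recorded in this app's pid files, then remove the files"
task :clean_stale_pids, :roles => :app do
  # Iterate the cluster's pid files (path assumed) and signal each
  # recorded pid, so unrelated processes that merely match "mongrel"
  # are left alone.
  run "for f in #{shared_path}/log/mongrel.*.pid; do " \
      "sudo kill -9 `cat $f` 2>/dev/null; sudo rm -f $f; done"
end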
Zed A. Shaw
2007-Apr-17 18:14 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
On Tue, 17 Apr 2007 11:36:27 -0400, "Michael Steinfeld" <mikeisgreat at gmail.com> wrote:

It was discussed earlier, but have you tried kill -USR2? It does the proper restart, where it waits for mongrel to stop internally and then starts again with the same command. Here's how you'd change your script:

desc "Make mongrel restart after deployment"
task :after_after_deploy, :roles => :app do
  run "sudo killall -USR2 mongrel_rails"
end

Let me know if that works. There *might* be an issue with the current directory symlinking on restart.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
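If killall can't match the processes by name (later in this thread the processes turn out to show up as ruby18, not mongrel_rails), a variant that signals the pids recorded in the cluster's pid files directly might look like this sketch (the pid-file path is an assumption):

desc "Send USR2 to the pids recorded in the cluster's pid files"
task :restart_mongrels, :roles => :app do
  # Each pid file holds one pid; xargs hands them all to a single kill.
  run "cat #{shared_path}/log/mongrel.*.pid | xargs sudo kill -USR2"
end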
Matte Edens
2007-Apr-17 19:27 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
Sorry, another long one.

Wayne, I used to use that, actually. And I even tried it last night, and today, before sending the email. It didn't work using Capistrano, or from anywhere that wasn't a RAILS_ROOT location. There's still a problem of where the command is run and where mongrel_cluster thinks it's looking for the pid files. Here's how I see it happening...

1) We run "cap restart" on the local machine, which logs in via ssh to execute remote commands.
2) The remote commands run from the /home/ssh_user directory.
3) "restart" runs a "stop/start" command sequence. Yes, I've been reading the discussion of using a -USR2 command, but for this discussion we ignore that until the bottom of this email.
4) "stop" (line 101) reads the options.
5) "read_options" (line 28) calls "process_pid_file" to parse the pid_file setting.
6) "process_pid_file" sets up several variables for future use.

Here is where it breaks down with a relative path pid_file setting. None of the File.* commands in that function are run from the cwd. They're run from the /home/ssh_user directory. Now, I don't know about you, but I don't run my applications from that directory. Thus, with a setting of "log/mongrel.pid", the port_pid_file function returns "log/mongrel.8000.pid", and check_process is looking for /home/ssh_user/log/mongrel.8000.pid, which obviously doesn't exist.

Making a small change, which I am not suggesting is a fix, just temporary, I rewrote this line in process_pid_file (line 40):

@pid_file_dir = File.dirname("#{@options['cwd']}/#{pid_file}")

That's just a test to see if it would work with the addition of the cwd. It worked, of course, because now File.dirname had an absolute path to parse.

So, my suggestions from the first email, IMHO, are still valid. Patch mongrel_cluster/init.rb to either...

1) change directories in "stop" and "start" before the check_process functions are called, so that relative paths are handled correctly (see my pastie), or
2) change the process_pid_file function to handle relative pid_file settings by prepending the cwd setting,

or 3) have everyone change their mongrel_cluster config files to use absolute paths. And then there's the more recent discussion of changing the restart command to just call a -USR2 on mongrel_rails.

Personally, I'd like it to be fixed within mongrel_cluster so that it's just picked up by everyone when they update their gem, instead of asking everyone to put in an "after_after_deploy" Capistrano task like Zed mentioned in this thread. However, I just tried the following Capistrano task...

task :restart do
  sudo "killall -USR2 mongrel_rails"
end

...and got this error: "No matching processes were found". No idea about that, except that when I "ps aux | grep mongrel_rails", each command starts...

/usr/local/bin/ruby18 /usr/local/bin/mongrel_rails start -d -e production -a 127.0.0.1 -c /home/...

My linux_fu is not strong enough to know how to diagnose this last issue.

matte

Wayne E. Seguin wrote:
> Matte,
>
> On Apr 17, 2007, at 00:09, Matte Edens wrote:
> > "sudo mongrel_rails cluster::restart -C [valid path to config] --clean"
>
> Is this really a problem with mongrel cluster?
>
> A "fourth" solution is to simply modify your restart task in your
> Capistrano recipe:
>
> task :restart, :roles => :app do
>   run (or sudo) "cd #{current_path}; mongrel_rails cluster::restart -C [valid path to config] --clean"
> end
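To make Matte's trace concrete, here's a minimal demonstration (directory names are hypothetical) of how the same relative pid_file setting resolves to different files depending on where the command runs:

pid_file = "log/mongrel.8000.pid"

# Resolved from the ssh user's home directory, where Capistrano runs:
File.expand_path(pid_file, "/home/ssh_user")
# => "/home/ssh_user/log/mongrel.8000.pid"  (does not exist)

# Resolved from the application's :cwd, where the file actually lives:
File.expand_path(pid_file, "/www/app/current")
# => "/www/app/current/log/mongrel.8000.pid"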
Bradley Taylor
2007-Apr-17 20:13 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
Hi:

As I mentioned earlier, I'll fix this for the final release (any day now - been really busy).

http://rubyforge.org/pipermail/mongrel-users/2007-April/003454.html

However, as I wrote before, it's not a good idea to put pidfiles in a relative directory, as they won't get cleaned up after a server crash. For Linux, /var/run/mongrel_cluster is a better location.

http://rubyforge.org/pipermail/mongrel-users/2007-February/003000.html

Bradley

Matte Edens wrote:
> Sorry, another long one.
>
> Wayne, I used to use that, actually. And I even tried it last night, and
> today, before sending the email. It didn't work using Capistrano, or
> from anywhere that wasn't a RAILS_ROOT location.
>
> <snip>
>
> matte
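An illustrative mongrel_cluster.yml following Bradley's /var/run suggestion (all values are hypothetical; the absolute pid_file path is the point):

cwd: /www/app/current
environment: production
address: 127.0.0.1
port: "8000"
servers: 2
pid_file: /var/run/mongrel_cluster/mongrel.pid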
Michael Steinfeld
2007-Apr-17 20:34 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
Bradley,

I can wait a few more days! Great news! Also, thank you everyone for the help.

--mike

On 4/17/07, Bradley Taylor <bradley at railsmachine.com> wrote:
> Hi:
>
> As I mentioned earlier, I'll fix this for the final release (any day now
> - been really busy).
>
> http://rubyforge.org/pipermail/mongrel-users/2007-April/003454.html
>
> However, as I wrote before, it's not a good idea to put pidfiles in a
> relative directory, as they won't get cleaned up after a server crash.
> For Linux, /var/run/mongrel_cluster is a better location.
>
> http://rubyforge.org/pipermail/mongrel-users/2007-February/003000.html
>
> Bradley
>
> <snip>
Matte Edens
2007-Apr-17 20:46 UTC
[Mongrel] problem restarting mongrel_cluster outside RAILS_ROOT - patch and other option
DOH. Missed that one. Shoulda looked closer.

That /var/run/mongrel_cluster location is a good idea. Hadn't thought of that. However, you mention the init.d script. My linux_fu is not strong, so I'm guessing the equivalent will work just fine in a FreeBSD setup.

Thanx, Bradley. I eagerly await the next release. Now, off to update my config files.

matte

Bradley Taylor wrote:
> Hi:
>
> As I mentioned earlier, I'll fix this for the final release (any day now
> - been really busy).
>
> http://rubyforge.org/pipermail/mongrel-users/2007-April/003454.html
>
> However, as I wrote before, it's not a good idea to put pidfiles in a
> relative directory, as they won't get cleaned up after a server crash.
> For Linux, /var/run/mongrel_cluster is a better location.
>
> http://rubyforge.org/pipermail/mongrel-users/2007-February/003000.html
>
> Bradley
>
> <snip>