Bradley Taylor
2007-Mar-20 14:53 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Hi all,

Hopefully this is the last prerelease. If people on non-Linux systems could post back any problems with cluster::status, I'd appreciate it.

Install with:

gem install mongrel_cluster --source http://mongrel.rubyforge.org/releases/

Note: This is only an update to mongrel_cluster, not Mongrel or the other gems.

Details about what's new (if you missed the first prerelease):
http://blog.railsmachine.com/2007/2/26/mongrel_cluster-prerelease-1-0-1-1

Thanks,
Bradley Taylor
http://railsmachine.com
Bradley,

It seems to work on FreeBSD (I had to delete all gems before it actually worked), even though I still get this:

Checking all mongrel_clusters...
mongrel_rails cluster::status -C config.yml
ps: Process environment requires procfs(5)
ps: Process environment requires procfs(5)
ps: Process environment requires procfs(5)
ps: Process environment requires procfs(5)
ps: Process environment requires procfs(5)

Best,
- MF

--
Michele Finotto
http://finotto.org/
http://16bugs.com/
http://pagety.com/
Bradley Taylor
2007-Mar-27 15:06 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Hi Michele,

Thanks for the FreeBSD report. I think there is a way for you to mount /proc to support Linux process environment emulation.

I'll also gladly accept a patch from anyone for the 'ps' syntax that causes this warning.

I think we'll release it as is today and do a quick patch if someone provides a fix.

Regards,
Bradley Taylor
http://railsmachine.com
Carl Lerche
2007-Mar-27 16:01 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Hello,

I'm not sure if it's a slight bug in mongrel_cluster or in my setup, but I'm having some trouble running mongrel_cluster with --clean from a directory other than the root of the Rails app.

It doesn't look like mongrel_cluster changes the working directory anywhere (based on the config file); it just passes the working directory on to mongrel_rails. This is fine, except when it tries to check the pid files. Does this make sense?

thanks,
-carl

--
EPA Rating: 3000 Lines of Code / Gallon (of coffee)
Alexey Verkhovsky
2007-Mar-27 16:56 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
On 3/27/07, Carl Lerche <carl.lerche at gmail.com> wrote:
> It doesn't look like mongrel_cluster changes the working directory
> anywhere (based on the config file); it just passes the working
> directory on to mongrel_rails. This is fine, except when it tries to
> check the pid files.

Is that the pid files or the pid file directory? In the latter case, there is a bug in Mongrel where this check is done before changing into the current working directory specified in the command-line options. I actually sent Zed a patch that would fix this a couple of days ago. See http://rubyforge.org/tracker/?func=detail&aid=9326&group_id=1306&atid=5145

Alex
Carl Lerche
2007-Mar-27 16:59 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
For me, I fixed this problem by adding the following on line 79 of init.rb:

Dir.chdir(@options["cwd"]) if @options["cwd"]

-carl

--
EPA Rating: 3000 Lines of Code / Gallon (of coffee)
Bradley Taylor
2007-Apr-11 16:26 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Thanks Carl. I'll patch this in and push the release. Last call!

Bradley
Michael A. Schoen
2007-Apr-11 17:55 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Bradley Taylor wrote:
> Thanks Carl. I'll patch this in and push the release. Last call!

Just a question: why does a cluster restart do a stop and a start, rather than an actual restart on each Mongrel? A restart is "nicer", and handles much better the cases in which a Mongrel can't (or shouldn't) stop immediately. The current stop/start approach means that the start often fails, because the stop hasn't actually shut down the Mongrel yet.

Would it be possible to make an actual restart an option? Or another command that does a true restart?
Bradley Taylor
2007-Apr-11 18:48 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Michael A. Schoen wrote:
> Just a question: why does a cluster restart do a stop and a start,
> rather than an actual restart on each Mongrel? A restart is "nicer", and
> handles much better the cases in which a Mongrel can't (or shouldn't) stop
> immediately.

Originally, "cluster::restart" called "mongrel_rails restart". Unfortunately, this is not reliable for major changes, and doing stop/start is the only way to guarantee that code changes will be applied.

From the mongrel code (rails.rb, line 164):

# Reloads Rails. This isn't too reliable really, but it
# should work for most minimal reload purposes. The only reliable
# way to reload properly is to stop and then start the process.

I don't think it is entirely true to say that restart is "nicer" than stop/start. 'stop' waits for the current request to finish unless you use --force. In the context of a cluster, other cluster members will handle requests during the stop/start cycle.

> The current stop/start approach means that the start often
> fails, because the stop hasn't actually shut down the Mongrel yet.

It is possible that start will be called before the process is gone. I'll think about adding some sort of check in cluster::restart to verify the process is gone before calling start. If your requests take a long time to complete, you might end up having other problems unless you have loads of RAM and a million mongrels in your cluster.

> Would it be possible to make an actual restart an option? Or
> another command that does a true restart?

No, because it's unreliable.

Bradley
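The check Bradley mentions (verifying the old process is gone before calling start) could look roughly like this. A minimal sketch, not mongrel_cluster's actual code; `wait_for_exit`, the 30-second default, and the poll interval are all assumptions. It relies on `Process.kill(0, pid)`, which sends no signal and only tests whether the process exists.

```ruby
# Minimal sketch: poll until a PID is gone, or give up after a timeout.
# Process.kill(0, pid) raises Errno::ESRCH once the process no longer
# exists (assuming the caller is not its unreaping parent).
def wait_for_exit(pid, timeout = 30)
  deadline = Time.now + timeout
  loop do
    begin
      Process.kill(0, pid)          # existence check only; sends no signal
    rescue Errno::ESRCH
      return true                   # process is gone; safe to start
    end
    return false if Time.now > deadline
    sleep 0.25                      # poll interval (arbitrary)
  end
end
```

cluster::restart could then call start only when `wait_for_exit` returns true, and report an error (or fall back to --force) on timeout.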
Michael A. Schoen
2007-Apr-11 19:21 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Bradley Taylor wrote:
> Unfortunately, this is not reliable for major changes, and doing
> stop/start is the only way to guarantee that code changes will be applied.
>
> From the mongrel code (rails.rb, line 164):
> # Reloads Rails. This isn't too reliable really, but it
> # should work for most minimal reload purposes. The only reliable
> # way to reload properly is to stop and then start the process.

Ah, but that's not what I'm suggesting -- a "reload" is distinct from a "restart". The "reload" option for Rails under Mongrel (from a HUP signal) just calls the Rails reload! method, and I understand how that can/will fail.

A "restart" (from a USR2 signal) is just a plain old regular stop, with the restart flag set such that once the Mongrel is stopped, it restarts.

This should nicely handle the situation in which a Mongrel might take a few seconds to shut down (thereby missing its start opportunity).

Make sense?
Bradley Taylor
2007-Apr-11 21:12 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Sorry, too many things going on... I was looking at the soft option.

Reviewing the code (Zed, correct me if I'm wrong), stop and restart both call the same stop method. The graceful handling of an in-progress request is the same.

Restart also has some funky semantics when used in a cluster, where it reuses the command-line arguments. This means that you can't modify the cluster configuration and apply the changes with a restart. The standard behavior of a Linux (FreeBSD, etc.) service is that configuration changes are reread on restart (apache, mysql, etc.). So for the purposes of mongrel_cluster, restart == stop; start. Running a single mongrel with its own configuration file would behave as expected.

Bradley
Michael A. Schoen
2007-Apr-11 23:56 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Bradley Taylor wrote:
> Reviewing the code (Zed, correct me if I'm wrong), stop and restart both
> call the same stop method. The graceful handling of an in-progress
> request is the same.

Yes, and that handling works for me. The problem is that a stop;start fails when the stop takes a bit, whereas a stop-with-restart will always be just fine.

What happens now when I do a cluster restart is that some of my Mongrels end up just dead, because they actually stop (gracefully) after the start has already been called. I could resolve this using a forced stop, but I'm looking for a more, not less, graceful process.

> Restart also has some funky semantics when used in a cluster, where it
> reuses the command-line arguments.

Ah, so I understand why you made the change to have a cluster restart do a stop;start. We don't change the cluster configuration, so we aren't hit by that problem.

But would it be possible to get an alternative command added that does an actual restart? If not, no worries, I'll hack it in on my end.
Wayne E. Seguin
2007-Apr-12 10:49 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
This may be a bit simple, but couldn't you concatenate the commands in a system call using ';' (or '&&')? Doesn't one (or both?) of them require that the previous command finish before executing the next one?

What I'm thinking is that you can do something like:

`mongrel_rails stop ... ; mongrel_rails start`

to accomplish the correct wait for the graceful stop?

I hope I'm not way off here, as I just joined the discussion.

~Wayne
Rob Kaufman
2007-Apr-12 15:04 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Hi Wayne,

Though that is a good idea in general, it doesn't get the job done in this case. The problem is that stop returns successfully as soon as it sends the signal to the mongrel processes. It goes out, says "hey, please stop what you're doing" and then returns, telling you "I told them", not "they have stopped". It seems to me like what we need is a --wait option. The idea would be that mongrel_rails stop --wait would not return until it had confirmed that all the processes had truly stopped what they were doing. It would be nice if --wait took an optional timeout argument.

I see two benefits to this solution. One, it solves the problem we're discussing here: your cluster restart could be composed of stop --wait and start commands. Second, it will allow your system shutdown or deployments to wait for every doggy to finish up and gracefully return your maintenance page instead of just timing out.

Rob Kaufman
Wayne E. Seguin
2007-Apr-12 15:10 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Thanks Rob,

That makes total sense. I agree that the best option seems to be to add a --wait option.

~Wayne
Matt Zukowski
2007-Apr-12 15:49 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Just a suggestion, but maybe the stop command should wait until all the servers are actually down before it exits. What Michael describes below is a fairly frustrating aspect of using mongrel_cluster. The restart process right now kind of sucks, and I suspect that making it behave more gracefully would make a lot of people happy.

Michael A. Schoen wrote:
> What happens now when I do a cluster restart is that some of my Mongrels
> end up just dead, because they actually stop (gracefully) after the start
> has already been called.
Michael A. Schoen
2007-Apr-12 17:15 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Matt Zukowski wrote:
> Just a suggestion, but maybe the stop command should wait until all the
> servers are actually down before it exits. What Michael describes below
> is a fairly frustrating aspect of using mongrel_cluster. The restart
> process right now kind of sucks, and I suspect that making it behave
> more gracefully would make a lot of people happy.

And Mongrel itself does support a more graceful restart; mongrel_cluster
just doesn't use it at the moment. Even given the constraint that a true
restart won't re-read the cluster config, it still seems worth being
available as an option.
Zed A. Shaw
2007-Apr-12 17:40 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
On Thu, 12 Apr 2007 10:15:13 -0700
"Michael A. Schoen" <schoenm at earthlink.net> wrote:

> Matt Zukowski wrote:
> > Just a suggestion, but maybe the stop command should wait until all
> > the servers are actually down before it exits. What Michael describes
> > below is a fairly frustrating aspect of using mongrel_cluster. The
> > restart process right now kind of sucks, and I suspect that making it
> > behave more gracefully would make a lot of people happy.
>
> And Mongrel itself does support a more graceful restart; mongrel_cluster
> just doesn't use it at the moment. Even given the constraint that a true
> restart won't re-read the cluster config, it still seems worth being
> available as an option.

I think I'm going to say no, you don't get this in mongrel_cluster.
When we had it there were way too many problems, because how Rails does
this kind of soft restart isn't very clear. It's basically a bunch of
black-magic Ruby that reloads all the stuff like in debug mode. That
also means it doesn't work with modules and systems that are outside of
Rails. For example, you can't hook into this restart process so that
you can properly close connections to Jabber servers.

This led to people having weird problems like Mongrel not actually
restarting, and memory leaks. Of course, they don't blame Rails or the
plugin they're using; they blame Mongrel. In order to keep the support
problems to a minimum, we just stop the server and restart.

What's wrong with that for people? Apparently you all need constant and
completely available up-time for your web applications. Great. You
can't get this from Mongrel, or from Rails; you need to look outside,
at your proxy server, network architecture, and other sources.

However, you do have access to it. Every time Mongrel starts up it
tells you which POSIX signals cause which actions. If you want a
graceful restart, and you know it will work, then you just hit your
mongrels with that signal.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
Michael A. Schoen
2007-Apr-12 18:52 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Zed A. Shaw wrote:
> I think I'm going to say no, you don't get this in mongrel_cluster.
> When we had it there were way too many problems, because how Rails does
> this kind of soft restart isn't very clear. It's basically a bunch of
> black-magic Ruby that reloads all the stuff like in debug mode.

Am I misreading the code? I'm not talking about the HUP/reload stuff.
I'm talking about the plain old regular Mongrel restart, which, from
what I can tell, is a regular stop with the restart flag set to true,
such that it starts right back up again. No reloading of any Rails magic.
Matt Zukowski
2007-Apr-12 19:00 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
For me at least the issue isn't so much constant and complete
availability... it's that the "restart" command in mongrel_cluster
basically doesn't work. After stopping, it tries to start without
waiting for the servers to shut down. Most of the time this fails, as
the old servers are still shutting down.

Instead I find myself doing a manual 'stop', then ps'ing repeatedly to
see if all the servers have shut down, and then starting manually.

Zed A. Shaw wrote:
> I think I'm going to say no, you don't get this in mongrel_cluster.
> [...]
>
> What's wrong with that for people? Apparently you all need constant
> and completely available up-time for your web applications. Great.
> You can't get this from Mongrel, or from Rails; you need to look
> outside, at your proxy server, network architecture, and other sources.
>
> However, you do have access to it. Every time Mongrel starts up it
> tells you which POSIX signals cause which actions. If you want a
> graceful restart, and you know it will work, then you just hit your
> mongrels with that signal.
On 4/12/07, Matt Zukowski <mzukowski at urbacon.net> wrote:
> For me at least the issue isn't so much constant and complete
> availability... it's that the "restart" command in mongrel_cluster
> basically doesn't work. After stopping, it tries to start without
> waiting for the servers to shut down. Most of the time this fails, as
> the old servers are still shutting down.
>
> Instead I find myself doing a manual 'stop', then ps'ing repeatedly to
> see if all the servers have shut down, and then starting manually.

I have a server where I do the same thing. I know it's stupid to do it
manually, but I keep telling myself that it'll get fixed in
mongrel_cluster eventually, and thankfully I rarely ever have to do a
restart anyway.

1. cap deploy
2. Code is loaded; servers start to shut down.
3. Shutdown is "complete", but not all mongrels are really done.
4. start is issued.
5. Watch the PID errors blow by.
6. Manually shut all of the mongrel processes down.
7. Manually start mongrel_cluster again.

This isn't exactly ruining my life, but it's annoying. A step 3.5 that
said "Hey, let's wait until the servers are actually stopped" would be
super cool.

--
James
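The manual "stop, then ps repeatedly until everything is gone, then start" loop people describe is easy to sketch in a few lines of Ruby. This is a hypothetical helper (`wait_for_shutdown` is not part of mongrel_cluster), assuming the usual layout of one pid file per mongrel:

```ruby
# Hypothetical "step 3.5": block until every old mongrel's pid file
# refers to a dead process, or give up when the timeout expires.
def wait_for_shutdown(pid_files, timeout = 30)
  deadline = Time.now + timeout
  loop do
    live = pid_files.select do |f|
      next false unless File.exist?(f)
      begin
        Process.kill(0, File.read(f).to_i)  # signal 0: liveness check only
        true
      rescue Errno::ESRCH
        false                               # no such process: it is gone
      rescue Errno::EPERM
        true                                # exists, but owned by someone else
      end
    end
    return true if live.empty?              # all mongrels are down
    return false if Time.now > deadline     # some are still shutting down
    sleep 1
  end
end
```

A deploy script could then do `wait_for_shutdown(Dir.glob("log/mongrel.*.pid")) or abort "mongrels still up"` between the stop and the start.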
Carl Lerche
2007-Apr-12 19:28 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Yes, I have the same problem. On high-traffic apps, mongrel_cluster's
restart will fail pretty consistently (for the reasons stated above).
A --wait option seems like a reasonable solution.

-carl

On 4/12/07, Matt Zukowski <mzukowski at urbacon.net> wrote:
> For me at least the issue isn't so much constant and complete
> availability... it's that the "restart" command in mongrel_cluster
> basically doesn't work. After stopping, it tries to start without
> waiting for the servers to shut down. Most of the time this fails, as
> the old servers are still shutting down.
>
> Instead I find myself doing a manual 'stop', then ps'ing repeatedly to
> see if all the servers have shut down, and then starting manually.
> [...]

--
EPA Rating: 3000 Lines of Code / Gallon (of coffee)
Bradley Taylor
2007-Apr-12 20:07 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
mongrel_rails stop accepts a --wait argument. If I add that to
mongrel_cluster, will it solve these issues?

Bradley

Carl Lerche wrote:
> Yes, I have the same problem. On high-traffic apps, mongrel_cluster's
> restart will fail pretty consistently (for the reasons stated above).
> A --wait option seems like a reasonable solution.
>
> -carl
>
> On 4/12/07, Matt Zukowski <mzukowski at urbacon.net> wrote:
>> For me at least the issue isn't so much constant and complete
>> availability... it's that the "restart" command in mongrel_cluster
>> basically doesn't work. After stopping, it tries to start without
>> waiting for the servers to shut down. Most of the time this fails, as
>> the old servers are still shutting down.
>> [...]
Michael A. Schoen
2007-Apr-12 20:35 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Bradley Taylor wrote:
> mongrel_rails stop accepts a --wait argument. If I add that to
> mongrel_cluster, will it solve these issues?

Not for me, I don't think. Again, I may be misreading it, but that
--wait argument just looks like it literally waits, then does a hard
kill. I would have expected that option to send a TERM, then wait up to
@wait seconds for the process to go away, and do a KILL only if it was
still there. If that were the implementation, it sounds like it would
work for most folks.

I'm really just looking for the ability to do a restart, i.e., a
graceful stop followed by an automatic (within Mongrel) start.
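The semantics Michael expects from --wait can be sketched like this. This is an illustration of the *proposed* behavior (TERM, then a grace period, then KILL only as a last resort), not how mongrel_rails actually implemented the flag at the time:

```ruby
# Proposed --wait semantics: politely TERM the process, poll for up to
# `wait` seconds, and escalate to KILL only if it is still alive.
def graceful_stop(pid, wait = 30)
  Process.kill("TERM", pid)              # ask nicely first
  wait.times do
    sleep 1
    begin
      Process.kill(0, pid)               # signal 0: liveness check only
    rescue Errno::ESRCH
      return :stopped                    # exited within the grace period
    end
  end
  Process.kill("KILL", pid) rescue nil   # grace period over: force it
  :killed
end
```

A busy mongrel finishing an in-progress request would normally hit the `:stopped` path; only a truly hung one would be KILLed.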
Ezra Zygmuntowicz
2007-Apr-12 20:43 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
On Apr 12, 2007, at 1:35 PM, Michael A. Schoen wrote:
> Bradley Taylor wrote:
>> mongrel_rails stop accepts a --wait argument. If I add that to
>> mongrel_cluster, will it solve these issues?
>
> Not for me, I don't think. Again, I may be misreading it, but that
> --wait argument just looks like it literally waits, then does a hard
> kill. I would have expected that option to send a TERM, then wait up
> to @wait seconds for the process to go away, and do a KILL only if it
> was still there. If that were the implementation, it sounds like it
> would work for most folks.
>
> I'm really just looking for the ability to do a restart, i.e., a
> graceful stop followed by an automatic (within Mongrel) start.

You can accomplish graceful mongrel cluster restarts with monit, using
cluster::stop and cluster::start in the start/stop programs. Set a
'mongrel' group in the monit config and then use this for the restart
task:

$ sudo monit restart all -g mongrel

That will do each mongrel one at a time, and it will make sure they
stop and get started again, avoiding the issue of starting a new
mongrel on the same port before the other one finishes. This is what
works best for me.

Cheers

-- Ezra Zygmuntowicz
-- Lead Rails Evangelist
-- ez at engineyard.com
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)

check process mongrel_<%= @username %>_5000
  with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
  start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
  stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
  if totalmem is greater than 110.0 MB for 4 cycles then restart  # eating up memory?
  if cpu is greater than 50% for 2 cycles then alert              # send an email to admin
  if cpu is greater than 80% for 3 cycles then restart            # hung process?
  if loadavg(5min) greater than 10 for 8 cycles then restart      # bad, bad, bad
  if 20 restarts within 20 cycles then timeout                    # something is wrong, call the sys-admin
  group mongrel

check process mongrel_<%= @username %>_5001
  with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
  start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
  stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
  if totalmem is greater than 110.0 MB for 4 cycles then restart  # eating up memory?
  if cpu is greater than 50% for 2 cycles then alert              # send an email to admin
  if cpu is greater than 80% for 3 cycles then restart            # hung process?
  if loadavg(5min) greater than 10 for 8 cycles then restart      # bad, bad, bad
  if 20 restarts within 20 cycles then timeout                    # something is wrong, call the sys-admin
  group mongrel

check process mongrel_<%= @username %>_5002
  with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
  start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
  stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
  if totalmem is greater than 110.0 MB for 4 cycles then restart  # eating up memory?
  if cpu is greater than 50% for 2 cycles then alert              # send an email to admin
  if cpu is greater than 80% for 3 cycles then restart            # hung process?
  if loadavg(5min) greater than 10 for 8 cycles then restart      # bad, bad, bad
  if 20 restarts within 20 cycles then timeout                    # something is wrong, call the sys-admin
  group mongrel
Zed A. Shaw
2007-Apr-12 21:20 UTC
[Mongrel] OHHHHHHHHHHHH!!!! Re: [ANN] Another mongrel_cluster prerelease 1.0.1.1
On Thu, 12 Apr 2007 11:52:26 -0700
"Michael A. Schoen" <schoenm at earthlink.net> wrote:

> Zed A. Shaw wrote:
> > I think I'm going to say no, you don't get this in mongrel_cluster.
> > When we had it there were way too many problems, because how Rails
> > does this kind of soft restart isn't very clear. It's basically a
> > bunch of black-magic Ruby that reloads all the stuff like in debug mode.
>
> Am I misreading the code? I'm not talking about the HUP/reload stuff.
> I'm talking about the plain old regular Mongrel restart, which, from
> what I can tell, is a regular stop with the restart flag set to true,
> such that it starts right back up again. No reloading of any Rails magic.

Ok, ok, ok, NOW we get it. The problem was that mongrel_cluster was
calling start/stop on the Mongrel because of issues with the capistrano
symlinks from back in the day. Now what we'll do is the following
change to mongrel_cluster:

1) When you do cluster::restart it sends the mongrel processes a USR2
signal. This is the signal that tells Mongrel to stop everything, wait
until that's done, then re-run the start command again.
   *** NOTE: If you change your mongrel_cluster config you'll have to
   use start/stop.

2) When you do cluster::stop it sends TERM to do the stop. THIS will
not wait, so if you're doing this and need to wait, then use the
mongrel.log and ps manually.

3) When you do cluster::start, it will try to start. It's on you to
handle this manually.

Everyone with this problem currently can very simply do the following
when they need the super-graceful USR2:

killall -USR2 mongrel_rails

That will usually work on most Linux systems; people on other systems
can probably come up with their own quick fix.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
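For anyone uneasy about `killall` hitting every mongrel_rails on the box, the same USR2 nudge can be aimed at one cluster's pid files. A sketch with a hypothetical helper name and illustrative paths; point it at whatever directory your mongrel_cluster config writes pid files into:

```ruby
# Send USR2 (stop, wait, start again) to just the mongrels whose pid
# files live in pid_dir, instead of every mongrel_rails on the machine.
def signal_cluster(pid_dir)
  Dir.glob(File.join(pid_dir, "mongrel.*.pid")).map do |pidfile|
    pid = File.read(pidfile).to_i
    begin
      Process.kill("USR2", pid)
      "sent USR2 to mongrel pid #{pid}"
    rescue Errno::ESRCH, Errno::EPERM
      "stale pid file: #{pidfile} (pid #{pid} not signalable)"
    end
  end
end

# e.g. signal_cluster("log").each { |msg| puts msg }
```

Stale pid files left by crashed mongrels are reported rather than treated as an error, which killall would silently ignore anyway.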
Luis Lavena
2007-Apr-12 21:25 UTC
[Mongrel] OHHHHHHHHHHHH!!!! Re: [ANN] Another mongrel_cluster prerelease 1.0.1.1
On 4/12/07, Zed A. Shaw <zedshaw at zedshaw.com> wrote:

[... snipped lots of ramblings about HUP, USR2, restart, stop, start,
restart...]

> Everyone with this problem currently can very simply do the following
> when they need the super-graceful USR2:
>
> killall -USR2 mongrel_rails
>
> That will usually work on most Linux systems; people on other systems
> can probably come up with their own quick fix.

Thank God I'm on Windows ;-)

BTW, Zed, we need to talk. Could you, during the weekend? (I know,
you're a busy guy.)

--
Luis Lavena
Multimedia systems
-
Leaders are made, they are not born. They are made by hard effort,
which is the price which all of us must pay to achieve any goal that
is worthwhile.
Vince Lombardi
Zed A. Shaw
2007-Apr-12 21:33 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
On Thu, 12 Apr 2007 15:00:31 -0400
Matt Zukowski <mzukowski at urbacon.net> wrote:

> For me at least the issue isn't so much constant and complete
> availability... it's that the "restart" command in mongrel_cluster
> basically doesn't work. After stopping, it tries to start without
> waiting for the servers to shut down. Most of the time this fails, as
> the old servers are still shutting down.
>
> Instead I find myself doing a manual 'stop', then ps'ing repeatedly to
> see if all the servers have shut down, and then starting manually.

Matt, yep, we just figured out what everyone was talking about. Try this:

killall -USR2 mongrel_rails

and it'll do a stop/wait/restart. The next version of mongrel_cluster
will just do that.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
Zed A. Shaw
2007-Apr-12 21:35 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
On Thu, 12 Apr 2007 13:35:06 -0700
"Michael A. Schoen" <schoenm at earthlink.net> wrote:

> Bradley Taylor wrote:
> > mongrel_rails stop accepts a --wait argument. If I add that to
> > mongrel_cluster, will it solve these issues?
>
> Not for me, I don't think. Again, I may be misreading it, but that
> --wait argument just looks like it literally waits, then does a hard
> kill. I would have expected that option to send a TERM, then wait up
> to @wait seconds for the process to go away, and do a KILL only if it
> was still there. If that were the implementation, it sounds like it
> would work for most folks.

Michael,

Try running:

killall -USR2 mongrel_rails

and see if that works the way you expect it to work. That's the signal
that tells Mongrel to do a stop/wait/start process (the real restart).

The next version of mongrel_cluster will just do this.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
Michael A. Schoen
2007-Apr-12 22:16 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Zed A. Shaw wrote:
> Try running:
>
> killall -USR2 mongrel_rails
>
> and see if that works the way you expect it to work. That's the signal
> that tells Mongrel to do a stop/wait/start process (the real restart).
>
> The next version of mongrel_cluster will just do this.

Perfect, thanks, exactly what I was looking for. Sorry for not being
clear enough about this upfront.
Bradley Taylor
2007-Apr-12 23:45 UTC
[Mongrel] OHHHHHHHHHHHH!!!! Re: [ANN] Another mongrel_cluster prerelease 1.0.1.1
Zed A. Shaw wrote:
> 1) When you do cluster::restart it sends the mongrel processes a USR2
> signal. This is the signal that tells Mongrel to stop everything, wait
> until that's done, then re-run the start command again.
>    *** NOTE: If you change your mongrel_cluster config you'll have to
>    use start/stop.

I'm not big on breaking the existing semantics of cluster::restart. It
should be possible to support reloading the configuration file via some
kind of shadow mongrel_rails file. I will investigate...

Bradley
Rob Kaufman
2007-Apr-13 05:23 UTC
[Mongrel] [ANN] Another mongrel_cluster prerelease 1.0.1.1
Thanks Zed,

That sounds like an awesome solution. Not that anyone who has been
around here long should be surprised when you come to the rescue.

Rob Kaufman

On 4/12/07, Michael A. Schoen <schoenm at earthlink.net> wrote:
> Zed A. Shaw wrote:
> > Try running:
> >
> > killall -USR2 mongrel_rails
> >
> > and see if that works the way you expect it to work. That's the
> > signal that tells Mongrel to do a stop/wait/start process (the real
> > restart).
> >
> > The next version of mongrel_cluster will just do this.
>
> Perfect, thanks, exactly what I was looking for. Sorry for not being
> clear enough about this upfront.