similar to: Mongrel - HUP Signal

Displaying 20 results from an estimated 10000 matches similar to: "Mongrel - HUP Signal"

2006 Nov 05
2
logrotate, mongrel cluster and monit
While I could figure this out, I'm asking here first to see if anyone has already dealt with/created this. I'm running a mongrel cluster with 4 mongrels on ports 8001-8004. I'm using Capistrano to deploy, and I'd like to use monit to check that everything is running nicely. I'd like to have monit restart only single mongrels if they fail, and
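A per-mongrel monit check is one common way to get that behaviour; a minimal sketch, in which the pid file locations, config path and ports are assumptions rather than anything from the thread:

  check process mongrel_8001 with pidfile /var/www/apps/myapp/shared/pids/mongrel.8001.pid
    start program = "/usr/bin/mongrel_rails cluster::start -C /var/www/apps/myapp/current/config/mongrel_cluster.yml --only 8001"
    stop program  = "/usr/bin/mongrel_rails cluster::stop -C /var/www/apps/myapp/current/config/mongrel_cluster.yml --only 8001"
    if failed port 8001 protocol http then restart
    group mongrel

Repeating the block for ports 8002-8004 lets monit restart each mongrel independently of the others.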
2008 Jan 09
9
mongrel, monit, and the many, many messages
Monit 4.9, Mongrel 1.0.1, Rails 1.2.6, Mac OS X 10.4.11 (PPC) I don't know whether this is a mongrel issue or a monit issue. I'm trying to poke my way around a system set up by someone else. I have no more experience with mongrel than local Rails dev at this point, and a conceptual understanding of how monit works. I have the Deploying Rails beta book, and I'm
2007 Nov 07
8
mongrel - monit issue
Hi, I was wondering if anyone else has had a similar problem and knows why, or a solution. Basically my mongrels seem to work fine. I am running three clusters, all of which are monitored by monit. Monit has the ability to restart a mongrel if it doesn't pass a port connection test. The problem is that some time, approx. 6 to 20 hours, after the clusters are started, the mongrels get
2007 Aug 10
10
what is the correct way to stop/start a mongrel instance using monit with mongrel cluster
Hi -- I have been reading documentation and googling around to find the correct way to do this, but I have found many ways that seem not to work, or that the documentation makes no reference to. I am using mongrel cluster with 10 mongrels on each server. Recently I installed monit, which led me to look for the correct way to start/stop mongrel instances one pid at a time. I am assuming one pid at a
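For single instances, mongrel_cluster's --only flag addresses one mongrel by port, which maps cleanly onto one pid file per monit check; a sketch, with the config path purely illustrative:

  mongrel_rails cluster::stop  -C /etc/mongrel_cluster/myapp.yml --only 8001
  mongrel_rails cluster::start -C /etc/mongrel_cluster/myapp.yml --only 8001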
2007 Feb 02
1
Monit / mongrel_cluster 0.2.2 / mongrel 1.0.1
Hi, I've a few problems with my Rails app at the minute causing mongrels in a cluster to die; while I debug, I've set up monit to keep the site running. Problem is, whenever monit starts one of my mongrels via mongrel_rails cluster::start --only 8000 --clean -C /url/to/yml/file I get the following in my log/mongrel.log: ** Mongrel available at 127.0.0.1:8000 ** Writing PID file
2006 Nov 30
1
Restarting mongrel cluster from other directories
I want to restart my Mongrels from crontab periodically to free up memory. I tried this: [admin@mudcrapce ~]$ mongrel_rails cluster::restart -C /var/www/apps/mudcrapce/current/config/mongrel_cluster.yml Restarting 5 Mongrel servers... mongrel_rails restart -P log/mongrel.3040.pid !!! PID file log/mongrel.3040.pid does not exist. Not running? mongrel::restart reported an error. Use
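The relative log/mongrel.*.pid paths in that error suggest the restart has to run from the application directory; one common workaround is to cd there in the crontab entry itself (schedule and paths below are only an example):

  # restart the cluster nightly at 04:00 from inside the app directory
  0 4 * * * cd /var/www/apps/mudcrapce/current && mongrel_rails cluster::restart -C config/mongrel_cluster.yml

Cron runs with a minimal PATH, so the full path to mongrel_rails may also be needed.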
2007 Aug 26
1
monit not executing start/stop/restart mongrels
Alright, I have googled and read through the docs on monit. I get this error when starting monit with -v: monit: Cannot connect to the monit daemon. Did you start it with http support? monit: Cannot connect to the monit daemon. Did you start it with http support? Am I missing something here? set daemon 120 set logfile syslog facility log_daemon set mailserver localhost set httpd port 28212
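The monit command-line client (monit start/stop/restart) talks to the running daemon over its built-in HTTP interface, so the httpd block generally also needs an access rule and the daemon itself must already be running; a typical stanza, keeping the port from the post and assuming the rest:

  set httpd port 28212
      use address localhost
      allow localhost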
2006 Sep 01
2
Making Mongrel play well with Monit
Hi! I run a mongrel cluster with 6 mongrels in it. I want to monitor them individually for process hangs (and then restart them) and this is the solution I came up with: Here's my configuration file for monit (/usr/local/etc/monitrc): [snipped relevant bits] ------ #check lighttpd process check process lighttpd with pidfile /var/run/lighttpd.pid start program =
2007 Mar 29
4
Machine reboot - monit fails to start mongrels
Greetings - I dug around a bit and I couldn't find a definitive answer to this question, apologies if it's been covered before. A box running an Apache 2.2 -> mongrel cluster for a Rails app got power cycled at my ISP. Unfortunately monit couldn't start the mongrel processes because the pid files were still there. Here is my monit config (for each mongrel
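mongrel_cluster's --clean option removes a stale pid file before starting, which is one way to survive an unclean reboot; in the monit start line it might look like this (the config path is an assumption):

  start program = "/usr/bin/mongrel_rails cluster::start -C /etc/mongrel_cluster/myapp.yml --clean --only 8000"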
2007 Feb 26
2
Apache+mod_proxy_balancer+Mongrel+Mephisto, Apache kills CPU
Our Mephisto install kills Mongrels and causes Apache to pound the CPU. This started when we moved to Apache+mod_proxy_balancer+Mongrel. Here's what we know: The following things are working OK, except when used in the combination listed above: mongrel, mongrel_rails, MySQL, Apache, mod_proxy_balancer. We believe these are all OK because we moved five other Rails apps to this
2009 Apr 19
4
httpd crashes after signal HUP
Hello, I'm running CentOS 5.3 with httpd-2.2.3-22.el5.centos.x86_64 and php-5.1.6-23.2.el5_3.x86_64. When the logrotate scripts run and send the HUP signal to httpd, the httpd process quits instead of reloading. The only thing I can find in the logs is this: [Sun Apr 19 04:02:04 2009] [notice] seg fault or similar nasty error detected in the parent process There wasn't any segfault
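Until the crash itself is tracked down, one workaround is to have logrotate ask Apache for a graceful restart instead of sending HUP directly; a sketch of /etc/logrotate.d/httpd along those lines (paths are the CentOS defaults, the stanza itself is only illustrative):

  /var/log/httpd/*log {
      missingok
      notifempty
      sharedscripts
      postrotate
          /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
      endscript
  }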
2007 Apr 03
11
monit vs mongrel cluster
Is there anything mongrel cluster gives you that monit doesn't? I'll be using monit to monitor a number of other services anyway, so it seems logical to just use it for everything including mongrel. Chris
2007 Apr 03
2
are memory limits on mongrel possible?
Is there any documentation I can look at that might talk about how to put memory limits on mongrel? For instance, I might want to limit mongrel to 100 MB of RAM. I know that I can monitor mongrel with monit and restart it automatically if it becomes a RAM piggy.
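monit can act as a soft memory limit by restarting a process once it grows past a threshold; a rule of roughly this shape does it (check name and pid file are assumptions):

  check process mongrel_8000 with pidfile /var/run/mongrel.8000.pid
    if totalmem > 100.0 MB for 5 cycles then restart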
2010 Oct 02
2
Unicorn doesn't reload the app after the HUP signal
Hi folks, I've run into a problem while deploying my app. I've sent the HUP signal to the master process and I've checked that everything is OK: new master and workers are spawned and the old ones are killed (I've checked the PIDs), but the newly deployed code isn't reflected on the live site, so I have to stop and start unicorn again in order to see
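If preload_app is true, HUP does not reload application code; Unicorn's documented rolling-restart procedure is a USR2 re-exec followed by QUIT to the old master, roughly as below (the pid file path is an assumption):

  kill -USR2 $(cat /path/to/unicorn.pid)          # re-exec a new master running the new code
  # once the new workers are serving requests:
  kill -QUIT $(cat /path/to/unicorn.pid.oldbin)   # gracefully retire the old master and workers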
2007 Mar 26
4
Monit + Mongrel woes
Hello all, So, I've been using monit with mongrel for a while now, since the 0.3.x days (I think it was). It used to work fine, but now I seem to be having some trouble. I'm currently using mongrel 1.0.1 and I am using the same monit configuration I've always been using, yet every time monit should restart mongrel, I get "Execution failed". For the start
2008 Mar 13
4
Merb in production with God/Monit
Hey All, I just wanted to get other people's take on problems when running Merb in production. Just as Rails used to do, the mongrels tend to get heavy/unresponsive over time, so they need a good kicking by a watcher daemon like god or monit. However, I have had serious problems getting God to restart the process, as the "merb -k <port>" command doesn't appear to work reliably
2009 Sep 08
1
variables on files
I have this configuration: class monit { # Installing packages. package { "monit": ensure => latest; } file { "/etc/default/monit": owner => "root", group => "root", mode => "0644", source => "puppet:///monit/etc/default/monit", } file { "/etc/monit/monitrc": owner => "root", group =>
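In Puppet, the usual way to get variables into a managed file is to switch from source => to content => template(...), interpolating the variable inside the ERB template; a sketch, where the module layout, template name and variable are assumptions:

  class monit {
    package { "monit": ensure => latest }

    # $monit_interval is set elsewhere (node scope) and used in the
    # template as <%= monit_interval %>
    file { "/etc/monit/monitrc":
      owner   => "root",
      group   => "root",
      mode    => "0600",
      content => template("monit/monitrc.erb"),
      require => Package["monit"],
    }
  }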
2006 Oct 31
9
Problems with mongrel dying
Hi, one of the two mongrel processes has died in the middle of the night four times in the past 9 days, and I need help debugging this. Each time the symptoms are the same: * I can restart the process via cap -a restart_app. * Before the restart, there is nothing unusual in production.log or mongrel.log. * During the restart, about 100 repetitions of an error message are generated in
2008 Jan 21
14
properly restarting mongrel instances
Hi folks. Using mongrel_rails and the mongrel_cluster Capistrano recipes, I often encounter a situation where some of the mongrel processes don't die in time to be restarted. The output of Capistrano will tell me something like "mongrel on port 8001 is already up", but that's only because capistrano/mongrel_rails failed to take it down in the first place. The solution
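One blunt but common remedy is to sweep the pid files before restarting, escalating from TERM to KILL for anything still alive; a shell sketch in which the pid directory is an assumption:

  # stop every mongrel listed in the shared pids directory, escalating if needed
  for pidfile in /var/www/apps/myapp/shared/pids/mongrel.*.pid; do
    pid=$(cat "$pidfile")
    kill -TERM "$pid" 2>/dev/null
    sleep 5
    if kill -0 "$pid" 2>/dev/null; then
      kill -KILL "$pid"
    fi
    rm -f "$pidfile"
  done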
2000 Jan 18
1
does 2.0.6 fix kill -HUP logrotation?
2.0.5 (on Linux) has a small annoyance in that, when the log files get rotated, kill -HUP won't point the daemons to the new log files. They keep logging to the old ones until they are stopped/started. Does 2.0.6 fix this?