Displaying 7 results from an estimated 7 matches for "mongrel_r".
2006 Jun 15 (1 reply)
Performance leak with concurrent requests on static files (Rails)
...Requests per second: 10.79 [#/sec] (mean)
Time per request: 2779.148 [ms] (mean)
Time per request: 92.638 [ms] (mean, across all concurrent requests)
The lsof tool gives some indication of what happened:
$ lsof -p 32200
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
mongrel_r 32200 curio cwd DIR 8,3 4096 67168034 /home/curio/truc
mongrel_r 32200 curio rtd DIR 8,3 4096 128 /
mongrel_r 32200 curio txt REG 8,3 782262 671306909 /usr/local/bin/ruby
mongrel_r 32200 curio mem REG 0,0 0 [heap] (stat: No such file o...
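The two "Time per request" figures quoted above also reveal the benchmark's concurrency level: ApacheBench computes the across-all-requests mean by dividing the per-request mean by the number of concurrent clients, so the ratio recovers it. A quick check:

```ruby
# Recover the ab concurrency level (-c) from the two "Time per
# request" means quoted in the excerpt above.
mean_per_request = 2779.148 # ms (mean)
mean_across_all  = 92.638   # ms (mean, across all concurrent requests)

concurrency = (mean_per_request / mean_across_all).round
# concurrency comes out to 30
```

So the run above was apparently made with roughly 30 concurrent clients.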
2007 Jul 19 (1 reply)
one mongrel with hundreds of CLOSE_WAIT tcp connections
...action is responsible -- it's probably the one that gets the files from s3, but I'll make sure.
If you have any thoughts or other ideas, please let me know. Thanks a ton for your help!
Some sample output from lsof:
lsof -i -P | grep CLOSE_ | grep mongrel
CLOSE_WAIT --mysite
mongrel_r 831 root 6u IPv4 95162945 TCP localhost.localdomain:8011->localhost.localdomain:59311 (CLOSE_WAIT)
mongrel_r 831 root 9u IPv4 95161753 TCP mysite.com:49269->xxx-xxx-xxx-xxx.amazon.com:80 (CLOSE_WAIT)
mongrel_r 831 ro...
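Sockets stuck in CLOSE_WAIT mean the remote end (here, S3) has closed the connection but the local process never called close on its side. A minimal sketch of one way to avoid that in Ruby, assuming the leak comes from an unclosed Net::HTTP connection; the helper name and URL are hypothetical:

```ruby
require 'net/http'
require 'uri'

# Hypothetical helper: fetch a body using the block form of
# Net::HTTP.start, which closes the underlying TCP socket when the
# block returns, even if it raises. Sockets left open after the
# server hangs up are what accumulate in CLOSE_WAIT.
def fetch_body(url)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.get(uri.request_uri).body
  end # socket closed here
end
```

The same applies to any HTTP client wrapper: if it hands back a live connection, something must close it on every code path.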
2007 Jul 19 (0 replies)
one mongrel with *lots* of close_wait tcp connections
...action is responsible -- it's probably the one that gets the files from s3, but I'll make sure.
If you have any thoughts or other ideas, please let me know. Thanks a ton for your help!
Some sample output from lsof:
lsof -i -P | grep CLOSE_ | grep mongrel
CLOSE_WAIT --mysite
mongrel_r 831 root 6u IPv4 95162945 TCP localhost.localdomain:8011->localhost.localdomain:59311 (CLOSE_WAIT)
mongrel_r 831 root 9u IPv4 95161753 TCP mysite.com:49269->xxx-xxx-xxx-xxx.amazon.com:80 (CLOSE_WAIT)
mongrel_r 831 r...
2009 Jan 29 (0 replies)
File descriptor leak in Mongrel server?
...rver with a cPanel
installation. One of the clients sites went belly up with the error
message:
Errno::EMFILE
Too many open files - socket(2)
Digging a bit, I found that the mongrel server had a large number of
sockets open and appears to be leaking them fairly frequently. The
lsof output looks like this:
mongrel_r 11184 username 193u unix 0xda4fa480 637209350 socket
mongrel_r 11184 username 194u unix 0xf685a680 637408911 socket
mongrel_r 11184 username 195u unix 0xcc2ea3c0 637684747 socket
The application doesn't do anything explicitly with sockets. As far as
I know the...
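Errno::EMFILE means the process ran into its open-file limit. A minimal, Linux-only sketch of watching a process's descriptor count from Ruby (the method names and the threshold are illustrative, not from the original post):

```ruby
# Linux-only sketch: count this process's open file descriptors by
# listing /proc/<pid>/fd. A count that climbs steadily under constant
# load is the signature of a leak like the one described above.
def open_fd_count(pid = Process.pid)
  Dir.glob("/proc/#{pid}/fd/*").size
end

# Illustrative threshold; the real ceiling comes from `ulimit -n`.
def fd_headroom_ok?(limit = 1024)
  open_fd_count < limit
end
```

Polling this periodically (or just re-running lsof and counting lines) shows whether the descriptor count recovers between requests or only ever grows.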
2006 Aug 03 (4 replies)
Mongrel processes
Whilst monitoring my host I noticed that there were three
mongrel_rails processes running for a single (non mongrel_clustered)
application. Is this normal? Do the processes share the memory that
top indicates they're each using?
Cheers,
--
Dave Murphy (Schwuk)
http://schwuk.com
2006 Nov 16 (3 replies)
Mongrel Woes on Solaris x86
...he
log files in the mongrel_debug directory.
BTW, I am running mongrel on port 50042 because this is the port I've
been given permission to run Mongrel on.
2) After hitting mongrel with a bunch of (curl localhost:50042) calls, I did
(lsof -i -P | grep CLOSE_WAIT) and here is the output:
mongrel_r 5522 agile 6u IPv4 0xffffffffa02b8e00 0t617 TCP
localhost:50042->localhost:51299 (CLOSE_WAIT)
curl 20784 agile 4u IPv4 0xffffffffb1083200 0t0 TCP
webfarm-dev.Berkeley.EDU:51330->webfarm-dev.Berkeley.EDU:80 (CLOSE_WAIT)
curl 20785 agile 4u IPv...
2007 Oct 24 (28 replies)
random cpu spikes, EBADF errors
In May I had a problem with mongrels suddenly consuming huge CPU resources
for a minute or two and then returning to normal (load average spikes up
to 3.8 and then back down to a regular 0.2 over the course of 5 minutes,
then again a half hour later, or 4 hours later; no predictable rhythm).
I posted to the Litespeed forums because I thought the problem was there
but didn't get far. And a week