Displaying 5 results from an estimated 5 matches for "num_proc".
2005 Sep 19 | 1 | pam and sasl2-sample-server failure
...And /usr/lib/sasl2/smtpd.conf (also linked to sample.conf)
# cat smtpd.conf
loglevel: 7
pwcheck_method: saslauthd
mech_list: PLAIN LOGIN
Here are the results of a failed attempt:
-------------------------------------
# saslauthd -m /var/run/saslauthd -a pam -d
saslauthd[3176] :main : num_procs : 5
saslauthd[3176] :main : mech_option: NULL
saslauthd[3176] :main : run_path : /var/run/saslauthd
saslauthd[3176] :main : auth_mech : pam
saslauthd[3176] :ipc_init : using accept lock file: /var/run/saslauthd/mux.accept
saslauthd[3176] :detach_tty...
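A quick way to isolate a failure like this is to query the running saslauthd directly with testsaslauthd, which ships with cyrus-sasl. A sketch, with placeholder credentials; with -a pam, the -s service name selects the /etc/pam.d/<service> stack, so -s smtpd exercises whatever is listed under /etc/pam.d/smtpd:

# testsaslauthd -u someuser -p somepassword -s smtpd

A reply of 0: OK "Success." means saslauthd and PAM are working and the problem is on the SASL library side; 0: NO points at the PAM stack itself.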
2005 Apr 07 | 0 | [OT] snmp not reporting traffic values for a network interface
...: notWritable (that object does not support modification)
Failed object: SNMPv2-SMI::enterprises.9.9.16.1.1.1.16.1919
03:24:15 : H 3 : I 19 : P 15 : cisco_snmp_ping_start:cisco_snmp_ping_start(): -5 -> buffer(): 4 (time P:47.76 | 0.36)
03:24:15 : H 3 : I 21 : P 15 : snmp_counter:num_procs(.1.3..6.0): 65 -> buffer(): 5 (time P:46.83 | 0.18)
03:24:15 : H 3 : I 18 : P 16 : interface_oper_status(8): up -> alarm(3,,180): Nothing was done (time P:92.45 | 1.18)
03:24:15 : H 3 : I 19 : P 16 : interface_oper_status(8): up -> alarm(3,,180): Nothing was done (time P:4...
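When a poller logs the interface as up but the graphs stay empty, it can help to query the agent directly with the net-snmp tools. A sketch; the host and community string are placeholders, and interface index 8 is taken from the interface_oper_status(8) lines above:

# snmpget -v 2c -c public 192.0.2.1 IF-MIB::ifOperStatus.8 IF-MIB::ifInOctets.8 IF-MIB::ifOutOctets.8

If the agent answers with sane octet counters here, the missing traffic values are a poller-side problem; if it returns an error or zeros, the device itself is not exposing them.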
2007 Jul 19 | 0 | [mongrel_cluster] hosting multiple web sites via apache mod_proxy_balancer
...diant site is going to be served, the one that corresponds to the cwd parameter of the mongrel_cluster.yml config file:
user: mongrel
group: mongrel
cwd: /var/radiant/domainA.com
log_file: /tmp/mongrel.log
port: 8000
environment: production
address: 127.0.0.1
pid_file: /tmp/mongrel.pid
servers: 3
num_procs: 4
Here is one of my VirtualHosts, added in the sites-available dir of the Apache config tree; I omit only codahale's rewrite rules.
<VirtualHost *>
ServerName domainA.com
ServerAlias www.domainA.com
DocumentRoot /var/radiant/domainA.com/public
<Directory...
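The excerpt stops before the proxy section, but with servers: 3 and port: 8000, mongrel_cluster starts mongrels on consecutive ports 8000-8002, so the matching mod_proxy_balancer half typically looks like this (a sketch, not the poster's actual config; the balancer name is arbitrary):

<Proxy balancer://domainA>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
</Proxy>
ProxyPass / balancer://domainA/
ProxyPassReverse / balancer://domainA/

One such <Proxy> block per VirtualHost, each pointing at that site's own port range, is what lets a single Apache front several mongrel clusters.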
2009 Aug 26 | 3 | saslauthd
...u mail testuser
3) testsaslauthd -u testomat -p <mypassword> -s smtp -r mail
shell output of testsaslauthd:
0: NO "authentication failed"
shell output of saslauthd:
[root@x02-new ~]# saslauthd -d -a shadow -O /usr/lib64/sasl2/smtpd.conf -r -l
saslauthd[1936] :main : num_procs : 5
saslauthd[1936] :main : mech_option: /usr/lib64/sasl2/smtpd.conf
saslauthd[1936] :main : run_path : /var/run/saslauthd
saslauthd[1936] :main : auth_mech : shadow
saslauthd[1936] :detach_tty : master pid is: 0
saslauthd[1936] :ipc_init : listenin...
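With -a shadow, saslauthd checks passwords against /etc/shadow itself, so the daemon must run with permission to read that file, and the target account must not be locked. Two quick checks (a sketch; testomat is the username from the test above):

# ls -l /etc/shadow
# grep '^testomat:' /etc/shadow

A hash field of ! or * in the grep output means the account is locked for password authentication, and testsaslauthd will always return 0: NO for it.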
2007 Nov 05 | 29 | Mongrel and memory usage
Hello,
I'm running a Rails application which must sort and manipulate a lot of data
that is loaded in memory.
The Rails app runs on 2 Mongrel processes.
When I first load the app, both are 32 MB in memory.
After some days, both are between 200 MB and 300 MB.
My question is: is there some kind of garbage collector in Mongrel?
I never see the two Mongrel processes' memory footprint...
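MRI's garbage collector does free Ruby objects, but the interpreter of that era rarely returned freed heap pages to the operating system, so a mongrel's resident size tends to plateau at its high-water mark rather than shrink. The usual operational answer was to watch each process's RSS and recycle the cluster when it grew; a sketch, with a placeholder config path:

# ps -eo pid,rss,args | grep [m]ongrel
# mongrel_rails cluster::restart -C /etc/mongrel_cluster/myapp.yml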