We are seeing a few (0-15) proxy failures like the following out of ~3 million successful proxied connections a day. Average session creation load over our peak hour is about 47/sec. The backend servers aren't logging anything that would suggest an internal problem such as insufficient processes to handle the load. It doesn't seem to happen at night, when utilization is lowest.

dovecot: imap-login: Error: proxy(foo): connect(1.1.1.1, 143) failed: Connection timed out (after 63 secs)

I'm curious if anyone else has seen similar problems or has any suggestions.

# dovecot -n
# 2.1.9: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-279.5.2.el6.x86_64 x86_64 Scientific Linux release 6.3 (Carbon)
auth_master_user_separator = *
auth_username_format = %Ln
auth_verbose = yes
auth_verbose_passwords = sha1
auth_worker_max_count = 64
mail_fsync = always
mail_log_prefix = "%s(%u): session=%{session} "
mail_plugins = stats zlib
maildir_very_dirty_syncs = yes
mmap_disable = yes
passdb {
  args = /etc/dovecot/master-users
  driver = passwd-file
  master = yes
}
passdb {
  args = imap
  driver = pam
}
plugin {
  lazy_expunge = DELETED_MESSAGES.
  stats_refresh = 30 secs
  stats_track_cmds = yes
}
protocols = imap pop3
service anvil {
  client_limit = 10000
}
service auth {
  client_limit = 10000
  vsz_limit = 512 M
}
service doveadm {
  inet_listener {
    port = 1842
  }
  unix_listener doveadm-server {
    mode = 0666
  }
}
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
  process_limit = 7000
  process_min_avail = 32
}
service imap-postlogin {
  executable = script-login -d /etc/dovecot/bin/sonic-imap-postlogin
  user = $default_internal_user
}
service imap {
  executable = imap imap-postlogin
  process_limit = 4096
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  inet_listener pop3s {
    port = 995
    ssl = yes
  }
  process_limit = 2000
  process_min_avail = 32
}
service pop3-postlogin {
  executable = script-login -d /etc/dovecot/bin/sonic-pop3-postlogin
  user = $default_internal_user
}
service pop3 {
  executable = pop3 pop3-postlogin
  process_limit = 4096
}
service stats {
  fifo_listener stats-mail {
    mode = 0666
  }
}
shutdown_clients = no
ssl = required
ssl_ca = </etc/dovecot/ssl/gd_bundle.crt
ssl_cert = </etc/dovecot/ssl/imap.sonic.net.crt
ssl_key = </etc/dovecot/ssl/imap.sonic.net.key
ssl_parameters_regenerate = 1 days
syslog_facility = local0
userdb {
  driver = passwd
}
verbose_proctitle = yes
protocol imap {
  imap_id_send = support-url support-email
  mail_max_userip_connections = 20
  mail_plugins = stats zlib mwi_update mail_log notify imap_stats imap_zlib
  ssl_ca = </etc/dovecot/ssl/gd_bundle.crt
  ssl_cert = </etc/dovecot/ssl/imap.sonic.net.crt
  ssl_key = </etc/dovecot/ssl/imap.sonic.net.key
}
protocol pop3 {
  mail_plugins = stats zlib lazy_expunge
  pop3_fast_size_lookups = yes
  pop3_uidl_format = %f
  ssl_ca = </etc/dovecot/ssl/pop.sonic.net.ca-bundle
  ssl_cert = </etc/dovecot/ssl/pop.sonic.net.crt
  ssl_key = </etc/dovecot/ssl/pop.sonic.net.key
}
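For context, since doveconf -n doesn't show it: Dovecot proxying is triggered by the passdb returning proxy/host/port extra fields. As a minimal sketch of the mechanism (the user, password, host and port below are placeholders, not our real routing data), a passwd-file entry that drives a proxy looks roughly like:

proxyuser:{PLAIN}secret::::::proxy=y host=1.1.1.1 port=143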
On 18.9.2012, at 2.02, Kelsey Cummings wrote:

> We are seeing a few (0-15) proxy failures like the following out of ~3 million successful proxied connections a day. Average session creation load over our peak hour is about 47/sec. The backend servers aren't logging anything that would suggest an internal problem such as insufficient processes to handle the load. It doesn't seem to happen at night, when utilization is lowest.
>
> dovecot: imap-login: Error: proxy(foo): connect(1.1.1.1, 143) failed: Connection timed out (after 63 secs)
>
> I'm curious if anyone else has seen similar problems or has any suggestions.

I once had similar problems when the proxy backend was Courier. The problems went away once the migration to Dovecot was complete. The possibilities are either:

a) The backend server is busy and doesn't get a chance to accept() the connection.

b) Packets get dropped in the network and the retransmitted SYN is slow in coming (or also gets lost).

Changing some kernel settings might help with a). There are also kernel settings that control how SYN retransmission is attempted; you could try reducing the timeout to a few seconds.
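For what it's worth, the "after 63 secs" in the log line is consistent with the kernel's exponential SYN backoff: with a 1 s initial RTO and the default net.ipv4.tcp_syn_retries = 5, the retransmissions add up to 1+2+4+8+16+32 = 63 s before connect() gives up. A minimal sysctl sketch of the kind of tuning meant above, assuming a Linux backend; the values are illustrative, not tuned recommendations:

# a) let the kernel queue more pending connections while a busy
#    backend catches up on accept()
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 4096

# b) retransmit the SYN fewer times, so a lost SYN fails within
#    seconds instead of tying up a login process for the full
#    ~63 s backoff
net.ipv4.tcp_syn_retries = 3

Apply with sysctl -w (or persist in /etc/sysctl.conf). Note that raising somaxconn only helps if the application also passes a large enough backlog to listen(); the kernel uses the smaller of the two values.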