Jan-Frode Myklebust
2013-Feb-01 17:00 UTC
[Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?
We upgraded our two dovecot directors from v2.0.14 to dovecot-ee 2.1.10.3 this week, and after that mail seems to be flowing a lot slower than before. The backend mailstores are untouched, still on v2.0.14. After the upgrade we've been hitting process_limit for lmtp a lot, and we're struggling with large queues in the incoming mailservers that use the LMTP virtual transport towards our two directors.

I seem to remember 2.1 should have new lmtp-proxying code. Is there anything in this that maybe needs to be tuned differently from v2.0? I'm a bit skeptical about just increasing the process_limit for LMTP proxying, as I doubt running many hundreds of simultaneous deliveries would work that much better against the backend storage..

###### doveconf -n ##########
# 2.1.10.3: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.32.1.el5 x86_64 Red Hat Enterprise Linux Server release 5.5 (Tikanga)
default_client_limit = 4000
director_mail_servers = 192.168.42.7 192.168.42.8 192.168.42.9 192.168.42.10 192.168.42.28 192.168.42.29
director_servers = 192.168.42.15 192.168.42.17
disable_plaintext_auth = no
listen = *
lmtp_proxy = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
passdb {
  args = proxy=y nopassword=y
  driver = static
}
protocols = imap pop3 lmtp sieve
service anvil {
  client_limit = 6247
}
service auth {
  client_limit = 8292
  unix_listener auth-userdb {
    user = dovecot
  }
}
service director {
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 5515
  }
  unix_listener director-userdb {
    mode = 0600
  }
  unix_listener login/director {
    mode = 0666
  }
}
service imap-login {
  executable = imap-login director
  process_limit = 4096
  process_min_avail = 4
  service_count = 0
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
    address = *
    port = 24
  }
  process_limit = 100
}
service managesieve-login {
  executable = managesieve-login director
  inet_listener sieve {
    address = *
    port = 4190
  }
  process_limit = 50
}
service pop3-login {
  executable = pop3-login director
  process_limit = 2048
  process_min_avail = 4
  service_count = 0
  vsz_limit = 256 M
}
ssl_cert = </etc/pki/tls/certs/pop.example.com.cert.ca-bundle
ssl_key = </etc/pki/tls/private/pop.example.com.key
protocol lmtp {
  auth_socket_path = director-userdb
}
################

-jf
Timo Sirainen
2013-Feb-01 22:00 UTC
[Dovecot] lmtp-proxying in 2.1 slower than in 2.0.14 ?
On 1.2.2013, at 19.00, Jan-Frode Myklebust <janfrode at tanso.net> wrote:

> We upgraded our two dovecot directors from v2.0.14 to dovecot-ee
> 2.1.10.3 this week, and after that mail seems to be flowing a lot
> slower than before. The backend mailstores are untouched, still on
> v2.0.14. After the upgrade we've been hitting process_limit for lmtp
> a lot, and we're struggling with large queues in the incoming
> mailservers that use the LMTP virtual transport towards our two
> directors.
>
> I seem to remember 2.1 should have new lmtp-proxying code. Is there
> anything in this that maybe needs to be tuned differently from v2.0?
> I'm a bit skeptical about just increasing the process_limit for LMTP
> proxying, as I doubt running many hundreds of simultaneous deliveries
> would work that much better against the backend storage..

Hmm. The main difference is that v2.1 writes temporary files to mail_temp_dir. If that's in tmpfs (and probably even if it isn't), it should still be pretty fast. Have you checked if there's an increase in disk I/O usage, or system CPU usage?

Or actually.. it could simply be that in v2.0.15 the service lmtp { client_limit } default was changed to 1 (from default_client_limit=1000). This matters on the backend, because writing to the message store can be slow, but a proxy should be able to handle more than 1 client per process, even with the new temporary file writing. So you could see if it helps to set service lmtp { client_limit = 100 } or something.
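For anyone following along, the suggested override would look roughly like this on the directors (a sketch only, merged with the existing service lmtp block from the posted doveconf -n; the value 100 is Timo's example figure, not a tested recommendation, and it needs to stay within default_client_limit):

```
# Director dovecot.conf -- let each lmtp proxy process multiplex
# many concurrent deliveries instead of one client per process.
service lmtp {
  inet_listener lmtp {
    address = *
    port = 24
  }
  # New in this sketch: raise the per-process client limit so the
  # existing process_limit of 100 covers up to 100*100 deliveries.
  client_limit = 100
  process_limit = 100
}
```

On the backend mailstores the low client_limit default is the safer choice, since there each delivery does slow message-store writes; the raised limit only makes sense where lmtp is purely proxying.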