Angel L. Mateo
2012-Jul-04 18:49 UTC
[Dovecot] dovecot and nfs readdir vs readdirplus operations
Hello,

We are having performance problems trying to migrate our POP/IMAP servers to a new version. Our old servers are 4 Debian lenny machines with 5 GB of RAM each, running as XenServer VMs with kernel 2.6.32-4-amd64 and Dovecot 1.1.16. The new servers are 4 Ubuntu 12.04 machines with Dovecot 2.1.5, running as VMware VMs with 6 cores, 16 GB of RAM and kernel 3.2.0-24-generic. Both sets of servers use NFSv3 with the same configuration (apart from internal kernel differences; we have not customized either of them and are running vanilla kernels with default settings).

The problem is that the new servers perform badly. Even with only a small part of our total users (about 25%) directed to the new farm, performance is very poor, to the point of being unusable.

Looking for NFS problems, we have found big differences in NFS operations. For example, this is the nfsstat output of one of the old servers at this moment:

    myotis21:~# nfsstat
    Client rpc stats:
    calls        retrans      authrefrsh
    414528349    885          37

    Client nfs v3:
    null           getattr        setattr        lookup         access         readlink
    0           0% 95673837   23% 3961938     0% 89586364   21% 110097351  26% 2930961     0%
    read           write          create         mkdir          symlink        mknod
    20009850    4% 6065319     1% 3757720     0% 1557        0% 0           0% 0           0%
    remove         rmdir          rename         link           readdir        readdirplus
    6378134     1% 281         0% 2602358     0% 555097      0% 53126619   12% 15615402    3%
    fsstat         fsinfo         pathconf       commit
    113256      0% 26152       0% 0           0% 4026151     0%

and this is the same on one of the new ones:

    amateo_adm@myotis31:~$ nfsstat
    Server rpc stats:
    calls        badcalls     badclnt      badauth      xdrcall
    0            0            0            0            0

    Client rpc stats:
    calls        retrans      authrefrsh
    178040318    675          178040800

    Client nfs v3:
    null           getattr        setattr        lookup         access         readlink
    0           0% 24350345   13% 5045924     2% 10939469    6% 30185146   16% 142865      0%
    read           write          create         mkdir          symlink        mknod
    8818016     4% 6058614     3% 2877653     1% 420         0% 0           0% 0           0%
    remove         rmdir          rename         link           readdir        readdirplus
    2842562     1% 69          0% 2961239     1% 634038      0% 0           0% 82921863   46%
    fsstat         fsinfo         pathconf       commit
    70861       0% 18754       0% 9377        0% 152702      0%

Although the NFS configuration is the same, there are big differences in readdir vs readdirplus NFS operations. In fact, on the old one we have 12% readdir operations and 3% readdirplus, and on the new one we have 46% readdirplus and no readdir operations at all. Although readdirplus is supposed to be an optimization in NFSv3, in situations where you have big directories and use only a few of their entries it can be worse. So we wonder if this could be the problem (or one of them).

Any idea what causes this difference? And could it be significant?

PS: I have attached doveconf -n of the new server.
--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868887590
Fax: 868888337
-------------- next part --------------
# 2.1.5: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-24-generic x86_64 Ubuntu 12.04 LTS
auth_cache_size = 20 M
auth_cache_ttl = 1 days
auth_debug = yes
auth_master_user_separator = *
auth_verbose = yes
default_process_limit = 1000
disable_plaintext_auth = no
log_timestamp = %Y-%m-%d %H:%M:%S
login_trusted_networks = 155.54.211.176/28
mail_debug = yes
mail_location = maildir:~/Maildir:INDEX=/var/indexes/%n
mail_nfs_storage = yes
mail_privileged_group = mail
mdbox_rotate_size = 20 M
passdb {
  args = /etc/dovecot/master-users
  driver = passwd-file
  master = yes
  pass = yes
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
passdb {
  args = session=yes dovecot
  driver = pam
}
plugin {
  lazy_expunge = .EXPUNGED/ .DELETED/ .DELETED/.EXPUNGED/
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +imapflags
  sieve_max_redirects = 15
  zlib_save = gz
  zlib_save_level = 6
}
postmaster_address = postmaster@um.es
service anvil {
  client_limit = 2003
}
service auth {
  client_limit = 3000
  unix_listener auth-userdb {
    mode = 0666
  }
}
service doveadm {
  inet_listener {
    port = 24245
  }
}
service imap {
  process_limit = 5120
  process_min_avail = 6
  vsz_limit = 512 M
}
service lmtp {
  inet_listener lmtp {
    port = 24
  }
  process_min_avail = 10
  vsz_limit = 512 M
}
service pop3 {
  process_min_avail = 6
}
ssl = no
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
protocol lda {
  mail_plugins = " sieve"
}
protocol lmtp {
  mail_plugins = " sieve"
}
protocol pop3 {
  pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s, in=%i, out=%o
}
local 155.54.211.160/27 {
  doveadm_password = ]dWhu5kB
}
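If the extra readdirplus traffic is the suspect, it can be ruled in or out at the mount level before touching Dovecot: Linux NFSv3 clients accept a nordirplus mount option (see nfs(5)) that forces plain READDIR. A minimal sketch of that experiment, assuming a new-enough client kernel and using /mail and nfsserver:/export/mail as placeholder paths:

    # Remount the mail store with READDIRPLUS disabled (NFSv3 only).
    # /mail and nfsserver:/export/mail are placeholders for the real paths.
    umount /mail
    mount -t nfs -o vers=3,nordirplus nfsserver:/export/mail /mail

    # Or persistently, via /etc/fstab:
    # nfsserver:/export/mail  /mail  nfs  vers=3,nordirplus  0  0

    # Then compare the client-side NFSv3 operation mix again:
    nfsstat -c -3

If performance recovers with nordirplus, the regression lies in how the newer kernel uses readdirplus rather than in Dovecot itself.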
Timo Sirainen
2012-Jul-04 21:55 UTC
[Dovecot] dovecot and nfs readdir vs readdirplus operations
On 4.7.2012, at 21.49, Angel L. Mateo wrote:

> Although the NFS configuration is the same, there are big differences in
> readdir vs readdirplus NFS operations. In fact, on the old one we have 12%
> readdir operations and 3% readdirplus, and on the new one we have 46%
> readdirplus and no readdir operations at all.

I'm not entirely sure, but I think it's the kernel that decides whether readdir or readdirplus is used, and Dovecot can't affect that decision. (Unless maybe the kernel does some heuristics.)

> PS: I have attached doveconf -n of the new server.

At least this reduces performance:

    mail_nfs_storage = yes

Also maildir_very_dirty_syncs=yes improves performance by reducing readdirs. It's safe to use as long as only Dovecot is reading the Maildir.
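For reference, Timo's two suggestions correspond to the following changes relative to the posted configuration; this is a sketch, not a tested setup:

    # /etc/dovecot/dovecot.conf (changes against the posted doveconf -n)

    # The NFS-safety flushes behind this setting cost extra NFS operations;
    # they are only needed when several servers may access the same mailbox
    # at the same time.
    mail_nfs_storage = no

    # Trust Dovecot's own index files instead of re-scanning the Maildir
    # directories on every sync, reducing readdir/readdirplus traffic.
    # Safe only while Dovecot is the sole reader of the Maildir.
    maildir_very_dirty_syncs = yes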