It has now happened twice that replication created folders and mails in the
wrong mailbox :(
Here's the architecture we use:
- 2 Dovecot (2.2.32) backends in two different datacenters, replicating
via a VPN connection
- Dovecot directors in both datacenters talk to both backends, with a
vhost_count of 100 vs. 1 for the local vs. the remote backend
- the backends use a proxy dict via a unix domain socket and socat to talk
via TCP to a dict on a different server (a Kubernetes cluster); see the
sketch after this list
- the backends have a local SQLite userdb for iteration (it also contains
home directories, as a userdb that does iteration alone is not possible)
- serving around 7000 mailboxes in roughly 200 different domains
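
For illustration (this is not from the original post), the socat relay on
each backend might look roughly like the following; the dict host name,
port and socket permissions are placeholders/assumptions:

# listen on the unix socket that the proxy dict URI below points at and
# forward each connection to the TCP dict service in the Kubernetes cluster
socat UNIX-LISTEN:/var/run/dovecot_auth_proxy/socket,fork,unlink-early,mode=0666 \
    TCP:dict.k8s.internal:2001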
Everything works as expected until the dict is not reachable, e.g. due to a
server failure or a planned reboot of a node of the Kubernetes cluster.
In that situation it can happen that some requests are not answered,
even with Kubernetes running multiple instances of the dict.
I can only speculate about what happens then: it seems the connection
failure to the remote dict is not handled correctly and leads to a
situation in which the last mailbox/home directory is used for the
replication :(
When it happened the first time, we attributed it to the fact that the
SQLite database at that time contained no home directory information,
which we fixed afterwards. This first incident (a server failure) lasted a
couple of minutes and led to many mailboxes containing mostly folders, but
also some newly arrived mails belonging to other mailboxes/users. We could
only resolve that situation by rolling back to a ZFS snapshot taken before
the downtime.
The second time was last Friday night, during a (much shorter) reboot of
a Kubernetes node, and led to only a single mailbox containing folders
and mails of other mailboxes. That was verified by looking at the timestamps
of directories below $home/mdbox/mailboxes and of files in $home/mdbox/storage.
I cannot tell whether adding the home directories to the SQLite database or
the shorter duration of the failure limited the wrong replication to a
single mailbox.
Can someone with more knowledge of the Dovecot code please check/verify
how replication deals with failures in the proxy dict? I'm of course happy
to provide more information about our configuration if needed.

Here is an excerpt of our configuration (the full doveconf -n is attached):
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
dovecot-dict-auth.conf:
uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
password_key = passdb/%u/%w
user_key = userdb/%u
iterate_disable = yes

dovecot-dict-master-auth.conf:
uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
password_key = master/%{login_user}/%u/%w
iterate_disable = yes

dovecot-sql.conf:
driver = sqlite
connect = /etc/dovecot/users.sqlite
user_query = SELECT home,NULL AS uid,NULL AS gid FROM users WHERE userid = '%n' AND domain = '%d'
iterate_query = SELECT userid AS username, domain FROM users
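
As an aside (not part of the original post), a minimal users.sqlite schema
that satisfies the two queries above could be created like this; the column
types, constraints and the example row are made up:

sqlite3 /etc/dovecot/users.sqlite <<'EOF'
-- users table as implied by user_query/iterate_query above
CREATE TABLE IF NOT EXISTS users (
  userid TEXT NOT NULL,   -- local part, matched against %n
  domain TEXT NOT NULL,   -- domain part, matched against %d
  home   TEXT NOT NULL,   -- home directory handed to Dovecot
  PRIMARY KEY (userid, domain)
);
INSERT OR IGNORE INTO users (userid, domain, home)
  VALUES ('alice', 'example.org', '/var/dovecot/imap/example.org/alice');
EOF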
--
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0
-------------- next part --------------
# 2.2.32 (dfbe293d4): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.20 (7cd71ba)
# OS: Linux 4.4.0-97-generic x86_64
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_password = # hidden, use -P to show it
doveadm_port = 12345
first_valid_uid = 90
listen = *
log_path = /dev/stderr
mail_access_groups = dovecot
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
mbox_min_index_size = 1000 B
mdbox_rotate_size = 50 M
namespace inboxes {
inbox = yes
location =
mailbox Drafts {
auto = subscribe
special_use = \Drafts
}
mailbox Junk {
auto = subscribe
special_use = \Junk
}
mailbox Sent {
auto = subscribe
special_use = \Sent
}
mailbox Templates {
auto = subscribe
}
mailbox Trash {
auto = subscribe
special_use = \Trash
}
prefix = INBOX/
separator = /
subscriptions = no
}
namespace subs {
hidden = yes
list = no
location =
prefix =
separator = /
}
namespace users {
location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u
prefix = user/%%n/
separator = /
subscriptions = no
type = shared
}
passdb {
args = /etc/dovecot/dovecot-dict-master-auth.conf
driver = dict
master = yes
}
passdb {
args = /etc/dovecot/dovecot-dict-auth.conf
driver = dict
}
plugin {
acl = vfile
acl_shared_dict = file:/var/dovecot/imap/%d/shared-mailboxes.db
mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
mail_log_fields = uid box msgid size
mail_replica = tcp:10.44.99.1
quota = dict:User quota::ns=INBOX/:file:%h/dovecot-quota
quota_rule = *:storage=100GB
sieve = ~/sieve/dovecot.sieve
sieve_after = /var/dovecot/sieve/after.d/
sieve_before = /var/dovecot/sieve/before.d/
sieve_dir = ~/sieve
sieve_extensions = +editheader
sieve_user_log = ~/.sieve.log
}
postmaster_address = admins@egroupware.org
protocols = imap pop3 lmtp sieve
quota_full_tempfail = yes
replication_dsync_parameters = -d -n INBOX -l 30 -U
service aggregator {
fifo_listener replication-notify-fifo {
user = dovecot
}
unix_listener replication-notify {
user = dovecot
}
}
service auth-worker {
user = $default_internal_user
}
service auth {
drop_priv_before_exec = no
inet_listener {
port = 113
}
}
service doveadm {
inet_listener {
port = 12345
}
inet_listener {
port = 26
}
vsz_limit = 512 M
}
service imap-login {
inet_listener imap {
port = 143
}
inet_listener imaps {
port = 993
ssl = yes
}
process_min_avail = 5
service_count = 1
vsz_limit = 64 M
}
service imap {
executable = imap
process_limit = 2048
vsz_limit = 512 M
}
service lmtp {
inet_listener lmtp {
port = 24
}
unix_listener lmtp {
mode = 0666
}
vsz_limit = 512 M
}
service managesieve-login {
inet_listener sieve {
port = 4190
}
inet_listener sieve_deprecated {
port = 2000
}
}
service pop3-login {
inet_listener pop3 {
port = 110
}
inet_listener pop3s {
port = 995
ssl = yes
}
}
service pop3 {
executable = pop3
}
service postlogin {
executable = script-login -d rawlog -b -t
}
service replicator {
process_min_avail = 1
unix_listener replicator-doveadm {
group = dovecot
mode = 0660
user = dovecot
}
}
ssl_cert = </etc/certs/mail.egroupware.org.pem
ssl_key = # hidden, use -P to show it
userdb {
driver = prefetch
}
userdb {
args = /etc/dovecot/dovecot-dict-auth.conf
driver = dict
}
userdb {
args = /etc/dovecot/dovecot-sql.conf
driver = sql
}
verbose_proctitle = yes
protocol lda {
mail_plugins = acl quota notify replication mail_log acl sieve quota
}
protocol imap {
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log acl imap_acl quota imap_quota
}
protocol lmtp {
mail_max_lock_timeout = 25 secs
mail_plugins = acl quota notify replication mail_log acl sieve quota
}
No one any idea?

Replication into wrong mailboxes caused by an unavailable proxy dict
backend is a serious privacy and/or security problem!

Ralf

Am 30.10.17 um 10:05 schrieb Ralf Becker:
> It has now happened twice that replication created folders and mails in
> the wrong mailbox :(
> [...]
Can you somehow reproduce this issue with auth_debug=yes and
mail_debug=yes and provide those logs?

Aki

On 02.11.2017 10:55, Ralf Becker wrote:
> No one any idea?
>
> Replication into wrong mailboxes caused by an unavailable proxy dict
> backend is a serious privacy and/or security problem!
> [...]
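
For reference (not part of Aki's mail), enabling that is a two-line change
to dovecot.conf on the backends, which can be reverted after the test:

auth_debug = yes
mail_debug = yes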
On 30 Oct 2017, at 11.05, Ralf Becker <rb@egroupware.org> wrote:
> It has now happened twice that replication created folders and mails in
> the wrong mailbox :(
> [...]
> I can only speculate about what happens then: it seems the connection
> failure to the remote dict is not handled correctly and leads to a
> situation in which the last mailbox/home directory is used for the
> replication :(

It sounds to me like a userdb lookup changes the username during a dict
failure. Although I can't really think of how that could happen. The only
thing that comes to my mind is auth_cache, but in that case I'd expect the
same problem to happen even when there aren't dict errors.

For testing you could see if it's reproducible with:

- get a random username
- do doveadm user <user>
- verify that the result contains the same input user

Then do that in a loop rapidly and restart your test Kubernetes once in a
while.
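
A rough shell sketch of that test loop (this is not from the thread; the
users.txt file with one address per line and the simple "output mentions the
requested user" check are assumptions):

# pick a random user, run "doveadm user", and flag lookups whose output
# does not mention the user we asked for
while sleep 0.2; do
    user="$(shuf -n 1 users.txt)"
    out="$(doveadm user "$user" 2>&1)"
    case "$out" in
        *"$user"*) ;;   # lookup output mentions the requested user
        *) printf 'MISMATCH for %s:\n%s\n\n' "$user" "$out" ;;
    esac
done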
Hi Timo,

Am 02.11.17 um 10:34 schrieb Timo Sirainen:
> It sounds to me like a userdb lookup changes the username during a dict
> failure. Although I can't really think of how that could happen.

Me neither. The users are in multiple MariaDB databases on a Galera
cluster. We have no problems or unexpected changes there. The dict is
running multiple times, but that might not guarantee that no single
request fails.

> The only thing that comes to my mind is auth_cache, but in that case I'd
> expect the same problem to happen even when there aren't dict errors.
>
> For testing you could see if it's reproducible with:
>
> - get a random username
> - do doveadm user <user>
> - verify that the result contains the same input user
>
> Then do that in a loop rapidly and restart your test Kubernetes once in
> a while.

Ok, I'll give that a try. It would be a lot easier than the whole
replication setup.

Ralf