Lennart Oldenburg
2017-May-22 11:00 UTC
Using Dovecot for a Geo-replicated World-wide Email Service
Hello everyone,

my colleagues and I from Technische Universität Berlin are currently looking into the best way to set up a Dovecot-powered email service that is accessible world-wide and still provides short response times by redirecting users to the geographically closest cluster. We imagine the setup making use of multiple data center regions in public clouds around the world, e.g. Amazon Web Services (AWS) or Google Cloud Platform (GCP). At the same time, state is considered shared: changes made on the file system in any local cluster have to be synchronized with all other clusters around the globe (mailbox replication), and mailboxes might be manipulated from any data center in the world, potentially at the same time. Naturally, problems arise.

We have multiple questions and would be very interested in the best way to approach them based on your experiences and recommendations:

1) Which path should we take replication-wise? Please keep in mind that we are targeting replication between more than 2 clusters. Traditionally, in a rather small local setup with a mostly reliable network, a distributed file system such as NFS or GlusterFS might be the best option. When scaling to more than 2 data centers geographically distributed around the globe, however, latency becomes severe and distributed file systems might no longer be the way to go. What is your recommendation for replicating state?

2) We are mostly interested in providing short response times to client requests while maintaining global state consistency (that is, avoiding conflicts in state and not losing any user data). What are your recommendations in that direction? Are there particular performance optimizations we could apply to Dovecot?

3) Is there a recommended way to measure and monitor Dovecot's service quality? For example, is there a way to export "live" metrics of a running Dovecot deployment to something such as Prometheus (https://prometheus.io)?
4) Maildir as the way to represent mailboxes and deliver emails on the file system is more or less a given. I wonder, though, how well dbox replicates. What happens when dbox files are synchronized (e.g. via dsync) and a conflict arises? Does dbox offer a performance improvement over Maildir?

5) We perceive Dovecot as coming with many optimizations that speed up "read" IMAP commands, i.e. non-state-changing ones. Is this perception correct? Or is Dovecot able to use its various index and log files for fast "write" IMAP commands (e.g. CREATE, DELETE, RENAME, APPEND, STORE, ...) as well?

6) Would Dovecot Pro by Dovecot Oy offer any advantage for state replication? For example, could the Object Storage Plugin be used to replicate mailboxes through an object store service offered world-wide?

7) Is the setup we have in mind, based on Dovecot, a feasible solution for world-wide email comparable to services like Gmail? If not, what would you change or not do at all?

Please excuse us if some questions are of a rather basic nature; we are not at all Dovecot experts. I tried to gather as much information on this topic as possible from the wiki and sources found online, but I might have missed an important page. In that case, please point me towards existing resources on the questions raised above.

Thanks in advance and kind regards,

Lennart Oldenburg
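P.S. For question 1, some context on what we have found so far: the Dovecot wiki documents dsync-based replication between exactly two backends, which is why the more-than-2-cluster case is unclear to us. A minimal sketch of such a pair, as we understand the documentation (hostname, port, and the vmail user below are placeholders for our setup, not recommendations):

```
# conf.d/90-replication.conf -- sketch of two-node dsync replication
mail_plugins = $mail_plugins notify replication

# The aggregator collects mailbox change notifications for the replicator.
service aggregator {
  fifo_listener replication-notify-fifo {
    user = vmail
    mode = 0600
  }
  unix_listener replication-notify {
    user = vmail
    mode = 0600
  }
}

service replicator {
  process_min_avail = 1
}

plugin {
  # The other backend of the pair (placeholder hostname).
  mail_replica = tcp:mail2.example.com
}

# dsync connects to the remote backend's doveadm service.
service doveadm {
  inet_listener {
    port = 12345
  }
}
doveadm_port = 12345
doveadm_password = secret   # shared secret, placeholder
```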
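And for question 3, what we have tried so far: Dovecot's stats plugin can dump TAB-separated counters via `doveadm stats dump user`. Lacking a native exporter, we sketched our own converter from such a dump into Prometheus' text exposition format (e.g. for the node_exporter textfile collector). The column names in the sample below are illustrative assumptions on our part; the parser only relies on the first row being a header:

```python
import re

def stats_to_prometheus(tsv_text, key_field="user", prefix="dovecot_"):
    """Convert a TAB-separated `doveadm stats dump` table into
    Prometheus text exposition lines. The first row is the header;
    `key_field` becomes a label, numeric columns become metrics."""
    lines = [l for l in tsv_text.splitlines() if l.strip()]
    header = lines[0].split("\t")
    out = []
    for row in lines[1:]:
        values = dict(zip(header, row.split("\t")))
        key = values.get(key_field, "unknown")
        for name, value in values.items():
            if name == key_field:
                continue
            # Only export plain numeric counters/gauges.
            if re.fullmatch(r"-?\d+(\.\d+)?", value):
                out.append('%s%s{%s="%s"} %s'
                           % (prefix, name, key_field, key, value))
    return "\n".join(out)

# Hypothetical dump excerpt (column names are illustrative only):
sample = "user\tnum_logins\tnum_cmds\nalice\t3\t42\n"
print(stats_to_prometheus(sample))
# dovecot_num_logins{user="alice"} 3
# dovecot_num_cmds{user="alice"} 42
```

Is something along these lines a sensible interim approach, or is there a better hook into Dovecot's statistics?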