2007 May 17 (7 replies): Replication plans
Several companies have been interested in getting replication support
for Dovecot. It looks like I could begin implementing it in a few months
after some other changes. So here's how I'm planning on doing it:
What needs to be replicated:
- Saving new mails (IMAP, deliver)
- Copying existing mails
- Expunges
- Flag and keyword changes
- Mailbox creations, deletions and renames
-
2007 Dec 06 (3 replies): Roadmap to future
v1.1.0 is finally getting closer, so it's time to start planning what
happens after it. Below is a list of some of the larger features I'm
planning to implement. I'm not yet sure in which order, so I think
it's time to ask again: which features would companies be willing to pay
for?
Sponsoring Dovecot development gets you:
- listed in Credits in www.dovecot.org
- listed in
2008 May 03 (1 reply): Replication milestone 1
Five milestones are currently planned. I'll start with the always-on
multi-master replication, because that's the most difficult one to get
working correctly and efficiently; that way, if I find design problems
during its implementation, less code needs to be fixed.
Milestone 0 will hopefully be done within a month. This includes reading
and replying to the rest of the mails on this list and
2008 May 01 (5 replies): Replication protocol design #2
Changes:
- Added goal 8 and rewrote mailbox synchronization plan.
- Added new SELECT command to change active mailbox and removed mailbox
ID from command parameters
Goals
-----
1. If a single (or configurable number of) server dies, no mails must be
lost that have been reported to IMAP/SMTP clients as being successfully
stored.
2. Must be able to automatically recover from a server
2008 Apr 28 (2 replies): Replication protocol design
I'll probably be implementing multi-master replication this summer. My
previous thoughts about it are here:
http://dovecot.org/list/dovecot/2007-December/027284.html
Below is a description of how the replication protocol will probably
work. It should work just as well for master-slave and multi-master
setups. Comments welcome. I'll write a separate mail later about how
it'll be
2007 May 18 (0 replies): Virtual mailbox plans
Configuration
-------------
There could be global and user-specific configuration files, similar to
how ACLs work. I think the global virtual mailboxes should be only
defaults though, so that users could delete them and create a new
mailbox (virtual or non-virtual) with the same name.
The global virtual mailboxes could be described in a single file, such
as:
Trash
deleted
Work/My Unseen
2019 Jun 03 (0 replies): Postfix+Dovecot with dsync replication problem
Hi all,
I am still struggling with this issue.
I scrapped my previous configuration and started from the beginning.
Thanks to Aki Tuomi and his suggestion, I changed the users in my
configuration from vmail --> root, and that helped.
Now I have a working replication but with some problems and strange
behavior.
Replication works only when I manually execute a command:
"doveadm sync -A
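For context, a full manual invocation of this kind typically looks like the sketch below (the hostname, port, and user are placeholders, not the poster's actual values; the truncated command above is left as-is):

```shell
# One-shot sync of a single user to a replica (replica address is a
# placeholder; in a configured setup it comes from mail_replica).
doveadm sync -u jane@example.com tcp:replica.example.com:12345

# Or all users at once, as in the poster's manual workaround:
doveadm sync -A tcp:replica.example.com:12345
```

When the notify and replication plugins are loaded, the replicator service is supposed to trigger these syncs automatically rather than requiring a manual run.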
2020 Mar 30 (2 replies): replication and spam removal ("doveadm expunge")
Hello everybody,
Until now I used no replication, and spam is delivered into the users' "spambox" folder.
Every night there is a cronjob which deletes spam older than 30 days via something like
"find .... -ctime +30 -delete"
Now I'm going to set up replication (two-way), and I thought that
doing "rm" directly is not a good idea.
So I modified the job to something like
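A replication-safe version of that nightly job can be sketched with doveadm expunge, which records the deletions in Dovecot's sync state, so the other replica expunges the same mails instead of treating them as missing and copying them back (the folder name and 30-day window mirror the original cron job; the "savedbefore 30d" interval syntax assumes a reasonably recent doveadm):

```shell
# Nightly cron job sketch: expunge spam older than 30 days for all
# users. "spambox" is the folder name from the original find-based job.
doveadm expunge -A mailbox spambox savedbefore 30d
```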
2006 Feb 07 (4 replies): inexpensive ways to make a rails application highly available? mysql replication?
I'm interested in making a low-volume rails application highly available. This
means that I would like to have
an alternate server for those times when the primary server is unavailable
for whatever reason.
Virtual private servers are fairly inexpensive so one could have a rails
application on 2 different vps systems (not on the same server, possibly not
even in the same city).
2007 Oct 06 (2 replies): near-realtime file system replication
What's available for doing near-realtime master->slave file replication
between two CentOS systems?
Cronjobs running rsync won't cut it.
Ideally, I'd like something that works like SLONY-I does for Postgres
databases, where all file system block level transactions on the master
get replicated to the 2nd system on a 'as fast as practical' basis (a
second or two of lag
2018 Feb 07 (2 replies): samba 4.7.5 and db replication (referring to closed bug 13228)
Hi,
Question: two DCs (Debian stretch, Samba 4.7.5).
DC1 upgraded (the DC holding the FSMO roles), then:
samba-tool dbcheck --fix
samba-tool dbcheck --cross-nc --fix
That fixed 6 errors on DC1.
samba-tool drs showrepl shows no errors.
Logged in on DC2 and upgraded it.
samba-tool drs showrepl shows no errors and it is in sync.
samba-tool dbcheck --fix shows 16 errors.
Uhm, synced, but not synced..
2018 Jul 02 (0 replies): force-resync and replication
Hi list,
I am using dovecot 2.2.27 (c0f36b0) on two servers with (dovecot)
replication between them. I do not use Dovecot directors or proxies. I
just have a floating IP on the servers to do failover.
I had the problem today that one user's INBOX and one of his
mail folders got flooded with hundreds of thousands of duplicates.
How this happened is another issue (if someone has an idea, it
2020 Jul 03 (0 replies): Mail replication fails between v2.2.27 and v2.3.4.1
Hello,
I have two installations of dovecot configured to replicate mailboxes
between them. Recently, I upgraded the operating system on one of them
(mx2.example.com), and now I'm running one installation on version 2.2.27
(Debian stretch) and another on version 2.3.4.1 (mx1.example.com, Debian
buster).
My setup includes 3 shared namespaces that point to the mailboxes of 3
accounts. These
2015 Jun 15 (0 replies): dsync replication issues with shared mailboxes
Gentlemen,
I've set up 2 servers with dsync replication and hit a serious issue:
some messages got duplicated thousands of times in some shared mailboxes (~5).
There is actually no reason to replicate anything from the shared namespace,
and I've tried to limit the replication scope to just the 'inbox' namespace,
but it didn't help.
Dovecot version 2.2.18 (2de3c7248922)
errors in the
2015 Jun 11 (0 replies): Replication: "cross-updates" of mail meta-data
Hello
I have a two-server dovecot setup using replication. Each server runs
two dovecot instances, one for director and another for the backend.
Initially I set up a single server, got it all working, then rsync'd the
data and index partitions to the new one and started the clusters (I
used rsync as a way to speed-up dovecot's initial replication). Both
servers listen on a virtual IP
2015 Jan 08 (0 replies): Dovecot replication - notify not working
Dear Dovecot-Admins,
I've set up a pair of Dovecot servers; please find the config of server
one attached.
They are configured to replicate changes over a TCP connection using port
12345, set up as described in the http://wiki2.dovecot.org/Replication
wiki article.
Adding the user postboxes to replication using "doveadm replicator add '*'
" syncs the mailboxes as expected.
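For comparison, the notify-driven setup described on that wiki page boils down to a configuration along these lines (a sketch with placeholder hostname and user; the port matches the one the poster mentions, but the attached config may differ):

```
# Load the notify and replication plugins so mailbox changes are
# reported to the replicator service.
mail_plugins = $mail_plugins notify replication

service aggregator {
  fifo_listener replication-notify-fifo {
    user = vmail
    mode = 0600
  }
  unix_listener replication-notify {
    user = vmail
    mode = 0600
  }
}

service replicator {
  process_min_avail = 1
}

# Where to replicate to (placeholder hostname; port 12345 as in the
# poster's description).
plugin {
  mail_replica = tcp:mail2.example.com:12345
}
```

If changes only sync after a manual "doveadm replicator add", the usual suspects are the notify plugin not being loaded or wrong permissions on the aggregator's fifo listener.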
2002 Feb 18 (2 replies): NEWS: 2.2.3a, Status of the SAMBA_2_2, future plans, etc.
Folks,
Greetings. I just wanted to take a few moments and give everyone
an update on the plans for the 2.2 branch.
First we would like to thank everyone who has provided feedback
regarding the 2.2.3a release. The positive feedback has been
encouraging and the bug reports are proving to be very fruitful.
Jeremy, Herb, and I are planning to continue development
on the SAMBA_2_2 cvs tree
2012 Feb 28 (2 replies): Dovecot clustering with dsync-based replication
This document describes a design for a dsync-replicated Dovecot cluster.
This design can be used to build at least two different types of dsync
clusters, which are both described here. Ville has also drawn overview
pictures of these two setups, see
http://www.dovecot.org/img/dsync-director-replication.png and
http://www.dovecot.org/img/dsync-director-replication-ssh.png
First of all, why dsync
2024 Jan 22 (1 reply): Geo-replication status is getting Faulty after a few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
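The session state the poster describes is usually inspected with the geo-replication status command; a sketch, assuming the slave is addressed as host::volume (the slave hostname is a placeholder, the volume name matches the output above):

```shell
# Show the state (Active/Passive/Faulty) of the geo-replication
# session for volume tier1data; "slavenode" is a placeholder hostname.
gluster volume geo-replication tier1data slavenode::tier1data status detail
```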
2014 Feb 24 (1 reply): A question regarding doveadm replicator status
Hello,
I am using Dovecot version 2.2.10. I am quite familiar with ssh replication
and I managed to set it up correctly. The only problem I see is that when I
run the command:
doveadm replicator status, I get a wrong "Total number of known users",
which
a) is different between the two (2) replicated dovecot servers, i.e. in
mail1 I get a total of 21 users and in mail2 I get a total of
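To see where such totals diverge, the per-user listing is often more telling than the summary counters; a sketch (the wildcard mask matches all users):

```shell
# Summary counters (the numbers being compared between mail1 and mail2):
doveadm replicator status

# Per-user replication state; '*' is a doveadm user wildcard mask.
doveadm replicator status '*'
```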