similar to: High availability of Dovecot

Displaying 20 results from an estimated 5000 matches similar to: "High availability of Dovecot"

2019 Apr 11
2
High availability of Dovecot
Gerald Galster via dovecot wrote: > mail1.yourdomain.com IN A 192.168.10.1 > mail2.yourdomain.com IN A 192.168.20.1 > > mail.yourdomain.com IN A 192.168.10.1 > mail.yourdomain.com IN A 192.168.20.1 > > > mail1/mail2 is for
2020 Jan 11
1
Dovecot HA/Resilience
If you just want active/standby, you can simply use corosync/pacemaker as others have already suggested, and not use Director. I have a Dovecot HA server that uses a floating IP with pacemaker to manage it, and it works quite well. The only really hard part is having HA storage. You can simply use NFS storage shared by both servers (as long as only one has the floating IP, you won't have issues with the
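The pacemaker-managed floating IP described above can be sketched with a couple of pcs commands (a command sketch, not a full cluster setup; the resource name and address are illustrative, and the ocf:heartbeat:IPaddr2 agent is assumed to be installed):

```
# create a floating IP that pacemaker moves to whichever node is active
# (address, netmask and resource name are hypothetical)
pcs resource create mail_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 \
    op monitor interval=30s
```

Pacemaker then keeps the address on exactly one node, which is what makes the "only one server has the floating IP" NFS arrangement safe.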
2016 Jan 27
6
HA firewall with tinc
I have 2 firewalls in HA with keepalived. Can I use the same tinc configuration, active on both firewalls? Is using a tun interface with the same IP on both nodes a problem? Does the tun device advertise itself on the network with an IP/MAC pair (ARP), or is the IP only used internally by the system for routing, so that using the same configuration is fine? So one firewall would be active, the other passive. With this
2019 Jun 14
3
What does the Solr index do and how to handle its high availability?
Hi guys, can you give me an example of Solr usage in Dovecot? As far as I know you can search email easily from a MUA like Outlook, so which role does Solr play? Based on https://dovecot.org/pipermail/dovecot/2019-April/115575.html I'm going to use a VIP to host 2 mail servers. Currently it works in failover and failback tests, except for the Solr index, so how do I resolve this? Is it
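For context on the question above: Solr's role is server-side full-text search, so IMAP SEARCH from any MUA is answered from the index instead of by scanning mailboxes. A minimal sketch of the Dovecot plugin wiring (the Solr URL and core name are assumptions):

```
# conf.d/10-mail.conf (fragment)
mail_plugins = $mail_plugins fts fts_solr

# conf.d/90-plugin.conf (fragment)
plugin {
  fts = solr
  fts_solr = url=http://localhost:8983/solr/dovecot/
}
```

For the failover scenario, one approach is to let each backend rebuild its index after a switchover (e.g. with doveadm fts rescan and doveadm index) rather than trying to share one index between nodes.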
2019 Apr 11
0
High availability of Dovecot
> On 11.04.2019 at 13:45, Patrick Westenberg via dovecot <dovecot at dovecot.org> wrote: > > Gerald Galster via dovecot wrote: > >> mail1.yourdomain.com IN A 192.168.10.1 >> mail2.yourdomain.com IN A 192.168.20.1 >> >> mail.yourdomain.com IN A
2012 May 11
2
Floating VIP...
Hi, right now I am using only one external server as a gateway for the internal servers. I would like to enable failover to a second server. To implement the floating VIP, should I use heartbeat+pacemaker? Or is there something more "lightweight"? Basically, I just need server2 to bring up the VIP when server1 is down, and to bring it down again when server1 is back up (or server1 does not up
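The "lightweight" option asked about above is typically keepalived on its own: VRRP moves the VIP to the backup when the master stops advertising. A minimal sketch of /etc/keepalived/keepalived.conf (interface name, router id and address are assumptions):

```
vrrp_instance VI_1 {
    state MASTER          # BACKUP on server2
    interface eth0
    virtual_router_id 51  # must match on both servers
    priority 100          # lower value on server2
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24     # hypothetical floating VIP
    }
}
```

With this, server2 takes over the VIP automatically when server1's advertisements stop, and releases it when server1 returns, with no full cluster stack required.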
2020 Apr 08
2
alternatives for imapproxy
Hi. System: Debian 8.11 and dovecot-2.2.36.4. My webmail is Roundcube with imapproxy. I have one problem. My Dovecot servers are in a cluster with keepalived, like: dovecot1----VIP-IP--------dovecot2. All works fine. I have a problem with imapproxy: when server dovecot1 had a problem (kernel panic, sic!), keepalived worked perfectly and moved the VIP to dovecot2 - all works fine for normal users, but
2019 Apr 11
0
High availability of Dovecot
On 11.4.2019 11.44, luckydog xf via dovecot wrote: > Hi, list, > > I'm going to deploy postfix + dovecot + CephFS (as mail storage). Basically I want to use two servers for them, which is kind of HA. > > My idea is to use keepalived or Pacemaker to host a VIP, which could fail over to the other server once one is down. And I'll use Haproxy or
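The HAProxy part of the plan above can be sketched as a plain TCP pass-through in front of the two Dovecot backends (the backend addresses are the ones quoted elsewhere in this thread; everything else is illustrative):

```
# haproxy.cfg (fragment): TCP-mode IMAPS in front of two backends
frontend imaps_in
    bind 192.0.2.10:993        # hypothetical VIP
    mode tcp
    default_backend dovecot_imaps

backend dovecot_imaps
    mode tcp
    option tcp-check
    server mail1 192.168.10.1:993 check
    server mail2 192.168.20.1:993 check backup   # active/passive
```

Dropping the `backup` keyword would balance across both servers instead; with shared storage such as CephFS, sticking each user to one backend at a time avoids index-locking surprises.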
2016 Jan 22
1
tinc with ha firewall
Hi, I have an HA firewall configuration (keepalived) on one site. Each firewall has its own IP plus a Virtual IP (VIP) that keepalived activates on one of the firewalls (active/passive HA configuration). I think I can set up both firewalls with the same configuration, generating key pairs on one firewall and copying them to the second, so the remote host always sees one or the other firewall as the
2013 Dec 17
1
Project pre planning
Hello GlusterFS users, can anybody please give me their opinion on the following facts and questions: 4 storage servers with 16 SATA bays each, connected by GigE. Q1: The volume will be set up as distributed-replicated. Maildir, FTP dir, htdocs, file store directory => as subdirs in one big Gluster volume, or each dir in its own Gluster volume? Q2: Set up the bricks as a collection of
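For the distributed-replicated layout in Q1, a volume across the four servers might be created like this (a command sketch; server and brick paths are hypothetical, and with `replica 2` GlusterFS pairs bricks in the order they are listed):

```
# one brick per server; bricks 1+2 and 3+4 form the replica pairs,
# and the two pairs are distributed over
gluster volume create mailvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server3:/bricks/b1 server4:/bricks/b1
gluster volume start mailvol
```

Whether to use one big volume with subdirectories or one volume per use case then becomes mostly a question of quota, tuning and failure isolation rather than layout.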
2019 Apr 11
0
High availability of Dovecot
> On 11.04.2019 at 11:48, luckydog xf <luckydogxf at gmail.com> wrote: > > As you state, nothing special needs to be done except setting up DNS MX records, right? MX records are for incoming MAIL: yourdomain.com IN MX 100 mail1.yourdomain.com yourdomain.com IN MX 100
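The records discussed across this thread can be summarized as a zone-file sketch (names and addresses are the thread's own placeholders): equal-priority MX records give incoming-mail failover, while duplicate A records for one name give simple round-robin for client connections.

```
; incoming mail: either server accepts delivery
yourdomain.com.        IN MX 100 mail1.yourdomain.com.
yourdomain.com.        IN MX 100 mail2.yourdomain.com.

; per-host records
mail1.yourdomain.com.  IN A  192.168.10.1
mail2.yourdomain.com.  IN A  192.168.20.1

; client-facing name, resolved round-robin
mail.yourdomain.com.   IN A  192.168.10.1
mail.yourdomain.com.   IN A  192.168.20.1
```

Note that round-robin A records are not true failover: clients may still try the dead address until its TTL expires, which is why later posts in this listing reach for a VIP instead.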
2017 Jan 17
1
disable/mask NetworkManager leads to unit startup fails
Thank you very much, James, for your detailed answer! That cleared everything up for me, and yes, it helped a lot. Have a nice day :) -----Original Message----- From: CentOS [mailto:centos-bounces at centos.org] On behalf of James Hogarth Sent: Monday, 16 January 2017 19:37 To: CentOS mailing list <centos at centos.org> Subject: Re: [CentOS] disable/mask NetworkManager
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
2014 Sep 17
2
Active/Passive Samba Cluster for Shared NFS Backend
Hello, I am working on setting up an active/passive Samba cluster on Ubuntu 14.04 using Samba 4.1.6. Samba will be sharing an NFS mount so that it can be accessible to CIFS clients. Thus, the server setup is as follows (ASCII diagram, truncated): an NFS server backs the Samba node cifs0 and its passive peer, fronted by a VIP that serves the CIFS clients
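A minimal sketch of the active/passive idea above: both nodes carry an identical smb.conf whose share path sits on the common NFS mount, and clients always connect through the VIP (share name, path and workgroup are assumptions):

```
[global]
   workgroup = WORKGROUP
   server string = clustered CIFS head   ; identical on both nodes

[export]
   path = /mnt/nfs/export                ; the shared NFS mount
   read only = no
   browseable = yes
```

Because only the VIP holder serves clients at any moment, the two smbd instances never write the same files concurrently, which sidesteps most of the locking problems of running Samba on NFS.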
2018 Feb 19
3
NFS Ganesha HA w/ GlusterFS
On 2/19/2018 12:09 PM, Kaleb S. KEITHLEY wrote: Sounds good and no problem at all. I will look out for this update in the future. In the meantime, there are a few things I'll try, including your suggestion. I was looking for a sense of direction with the projects, and now you've given that. Ty. Appreciated! Cheers, Tom > On 02/19/2018 11:37 AM, TomK wrote: >> On 2/19/2018
2009 Aug 25
3
Two server certificates for two common names
Hi there! I have two DNS records: mail1.domain.tld and mail2.domain.tld. I have issued SSL server certificates for both names. Is it possible to tell Dovecot to use both, depending on client access: clients using mail1.domain.tld are served the mail1.domain.tld .key and .cert, and those using mail2.domain.tld are served the mail2.domain.tld .key and .cert? Thanks in advance
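What the post above asks for is TLS SNI, which Dovecot v2.x supports via local_name blocks keyed on the hostname the client sends; a sketch using the thread's hostnames (file paths are assumptions, and clients must support SNI):

```
# conf.d/10-ssl.conf (fragment): default cert plus an SNI override
ssl_cert = </etc/ssl/certs/mail1.domain.tld.crt
ssl_key  = </etc/ssl/private/mail1.domain.tld.key

local_name mail2.domain.tld {
  ssl_cert = </etc/ssl/certs/mail2.domain.tld.crt
  ssl_key  = </etc/ssl/private/mail2.domain.tld.key
}
```

Clients too old to send SNI will always receive the default (mail1) certificate, so this only works cleanly with reasonably modern MUAs.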
2015 Sep 29
3
Keepalived vrrp problem
Hey guys, I'm trying to install keepalived 1.2.19 on a CentOS 6.5 machine. I did an install from source, and when I start keepalived this is what I'm seeing in the logs: it reports that VRRP_Instance(VI_1) is now in FAULT state. Here's more of that log entry: Sep 29 12:06:58 USECLSNDMNRDBA Keepalived_vrrp[44943]: VRRP Instance = VI_1 Sep 29 12:06:58 USECLSNDMNRDBA
2015 Sep 29
1
Keepalived vrrp problem
On 29-09-2015 15:03, Gordon Messmer wrote: > On 09/29/2015 09:14 AM, Tim Dunphy wrote: >> And if I do an ifconfig command I see no evidence of an eth1 existing. > > "ifconfig -a" will show you all of your interfaces. Maybe there is some confusion here. It sounds like Tim thought keepalived would create that eth1, like a tunnel interface, but it won't. You have to
2013 Apr 19
2
Dovecot Failover
Hello, assuming we have two (low-traffic) servers (in different data centers) replicated using dsync, what is the best way to automatically direct users to the main server when it is up, and to the redundant one when the main server is down? Using DNS? I've seen that DNS-based failover generally has issues (for example:
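For context on the dsync setup above, continuous two-way replication in Dovecot v2.2+ is configured roughly like this on each server (hostnames are illustrative; the exact service stanzas and the doveadm listener exposed between the servers vary by version, so treat this as a sketch):

```
# conf.d fragment, on server A (mirror the mail_replica on server B)
mail_plugins = $mail_plugins notify replication

plugin {
  mail_replica = tcp:mail2.example.com   # on B: tcp:mail1.example.com
}
```

Replication keeps both sides current so either can serve users, but it does not by itself steer clients; that still needs the VIP or proxy mechanisms discussed in the other threads here, since DNS-based failover leaves clients pinned to the dead address until TTLs expire.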
2017 Jul 12
1
Load balanced VIP
Hello everyone! I am working on implementing my first Gluster/Ganesha NFS setup and I am following this guide: http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time Everything is working fine: I've got Gluster and Ganesha NFS working, I have VIPs on each node, and it is failing over fine, if a little slowly. However, the VIPs don't