Hi Everyone,

I wish to create a Postfix/Dovecot active-active cluster (each node will run
Postfix *and* Dovecot), which will obviously have to use central storage. I'm
looking for ideas to see what's the best out there. All of this will be running
on multiple Xen hosts; however, I don't think that matters as long as I make
sure that the cluster nodes are on different physical boxes.

Here are my ideas so far for the central storage:

1) NFS server using DRBD+LinuxHA. Export the same NFS share to each mail
server. While this seems easy, how well does Dovecot work with NFS? I've read
the wiki page, and it doesn't sound promising. But it may be outdated..

2) Export block storage using iSCSI from targets which have GFS2 on
DRBD+LinuxHA. This is tricky to get working well, and it's only a theory.

3) GlusterFS. Easy to set up, but apparently very slow to run.

So what's everybody using? I know that Postfix runs well on NFS (according to
their docs). I intend to use Maildir.

Thanks
Jonathan Tripathy put forth on 1/13/2011 1:22 AM:
> I wish to create a Postfix/Dovecot active-active cluster (each node will
> run Postfix *and* Dovecot), which will obviously have to use central
> storage. I'm looking for ideas to see what's the best out there. All of
> this will be running on multiple Xen hosts, however I don't think that
> matters as long as I make sure that the cluster nodes are on different
> physical boxes.

I've never used Xen. Doesn't it abstract the physical storage layer in the
same manner as VMware ESX? If so, everything relating to HA below is pretty
much meaningless except for locking.

> Here are my ideas so far for the central storage:
>
> 1) NFS Server using DRBD+LinuxHA. Export the same NFS share to each mail
> server. While this seems easy, how well does Dovecot work with NFS? I've
> read the wiki page, and it doesn't sound promising. But it may be
> outdated..
>
> 2) Export block storage using iSCSI from targets which have GFS2 on
> DRBD+LinuxHA. This is tricky to get working well, and it's only a theory.
>
> 3) GlusterFS. Easy to set up, but apparently very slow to run.
>
> So what's everybody using? I know that Postfix runs well on NFS (according
> to their docs). I intend to use Maildir

In this Xen setup, I think the best way to accomplish your goals is to create
6 guests:

2 x Linux Postfix
2 x Linux Dovecot
1 x Linux NFS server
1 x Linux Dovecot director

Each of these can be painfully small, stripped-down Linux instances.

Configure each Postfix and Dovecot server to access the same NFS export.
Configure Postfix to use native local delivery to NFS/maildir. Don't use LDA
(deliver).

With Postfix, HA is automatic: you simply set up both servers with the same
DNS MX priority. DNS automatically takes care of HA for MX mail by design. If
a remote SMTP client can't reach one MX it'll try the other automatically. Of
course, you already knew this (or should have).

Configure each Dovecot instance to use the NFS/maildir export.
Disable indexing unless or until you've confirmed that director is working
sufficiently well to keep each client hitting the same Dovecot server.

Have Xen run Postfix+Dovecot paired on two different hosts, and have the NFS
server and director on a third Xen host. This ordering will obviously change
if hosts fail and your Xen scripts auto-restart the guests on other hosts.

Now, all of the above assumes that since you are running a Xen cluster, you
are using shared Fibre Channel or iSCSI storage arrays on the back end, and
that each Xen host has a direct (or switched) connection to such storage and
thus has block-level access to the LUNs on each SAN array. If you do not have
shared storage for the cluster, disregard everything above, and ponder why you
asked any of this in the first place.

For any meaningful use of virtualized clusters with Xen, ESX, etc., a
prerequisite is shared storage. If you don't have it, get it. The hypervisor
is what gives you fault tolerance, and this requires shared storage. If you do
not intend to install shared storage, and intend to use things like DRBD
between guests to get your storage redundancy, then you really need to simply
throw out your hypervisor, in this case Xen, and do direct bare-metal host
clustering with DRBD, GFS2, NFS, etc.

-- 
Stan
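[A minimal sketch of the configuration Stan describes above: Postfix native
local(8) maildir delivery plus the NFS-related Dovecot options documented on
the Dovecot wiki's NFS page. Paths and the Maildir location are assumptions,
not from the thread, and the Dovecot setting names are the v2.x ones.]

```
# /etc/postfix/main.cf -- native local(8) maildir delivery, no Dovecot LDA
home_mailbox = Maildir/

# dovecot.conf -- NFS-related settings (Dovecot v2.x names)
mail_location = maildir:~/Maildir
mmap_disable = yes        # don't mmap index files over NFS
mail_fsync = always       # flush writes so other nodes see them promptly
mail_nfs_storage = yes    # flush NFS attribute caches for mail files
mail_nfs_index = yes      # same for index files, if indexes stay on NFS
```

Even with these set, the wiki still recommends keeping each user on a single
server (which is exactly what director is for), since the cache flushes only
reduce, not eliminate, the caching races.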
On 13.01.2011 08:22, Jonathan Tripathy wrote:
> Hi Everyone,
>
> I wish to create a Postfix/Dovecot active-active cluster (each node will
> run Postfix *and* Dovecot), which will obviously have to use central
> storage. I'm looking for ideas to see what's the best out there. All of
> this will be running on multiple Xen hosts, however I don't think that
> matters as long as I make sure that the cluster nodes are on different
> physical boxes.
>
> Here are my ideas so far for the central storage:
>
> 1) NFS Server using DRBD+LinuxHA. Export the same NFS share to each mail
> server. While this seems easy, how well does Dovecot work with NFS? I've
> read the wiki page, and it doesn't sound promising. But it may be
> outdated..
>
> 2) Export block storage using iSCSI from targets which have GFS2 on
> DRBD+LinuxHA. This is tricky to get working well, and it's only a theory.
>
> 3) GlusterFS. Easy to set up, but apparently very slow to run.
>
> So what's everybody using? I know that Postfix runs well on NFS
> (according to their docs). I intend to use Maildir
>
> Thanks

i have drbd and ocfs with keepalived on ubuntu lucid: 2 loadbalancers, 2
mailservers with postfix and dovecot2, maildirs, clamav-milter,
spamass-milter, sqlgrey, master-master mysql, plus horde webmail on apache at
both servers.

no problem so far, but for now i only have ca. 100 mailboxes.

i wouldnt recommend nfs for mailstore. if you want to use gfs you might
better use some redhat (clone); last time i tested it on ubuntu i couldnt get
it running as i expected (this may have changed now...).

i dont think there is one best solution. it depends on your hardware,
financial resources, number of wanted mailboxes, etc.

-- 
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria
On 13/01/11 10:57, Stan Hoeppner wrote:
> Jonathan Tripathy put forth on 1/13/2011 2:24 AM:
>
>> Ok so this is interesting. As long as I use Postfix native delivery,
>> along with Dovecot director, NFS should work ok?
>
> One has nothing to do with the other. Director doesn't touch smtp (afaik),
> only imap. The reason for having Postfix use its native local(8) delivery
> agent for writing into the maildir, instead of Dovecot deliver, is to avoid
> Dovecot index locking/corruption issues with a back end NFS mail store. So
> if you want to do sorting you'll have to use something other than sieve,
> such as maildrop or procmail. These don't touch Dovecot's index files,
> while deliver (LDA) does write to them during message delivery into the
> maildir.

Yes, I thought it had something to do with that.

>>> For any meaningful use of virtualized clusters with Xen, ESX, etc, a
>>> prerequisite is shared storage. If you don't have it, get it. The
>>> hypervisor is what gives you fault tolerance. This requires shared
>>> storage. If you do not intend to install shared storage, and intend to
>>> use things like drbd between guests to get your storage redundancy, then
>>> you really need to simply throw out your hypervisor, in this case Xen,
>>> and do direct bare metal host clustering with drbd, gfs2, NFS, etc.
>
>> Why is this the case? Apart from the fact that Virtualisation becomes
>> "more useful" with shared storage (which I agree with), is there anything
>> wrong with doing DR between guests? We don't have shared storage set up
>> yet for the location this email system is going. We will get one in time
>> though.
>
> I argue that datacenter virtualization is useless without shared storage.
> This is easy to say for those of us who have done it both ways. You
> haven't yet. Your eyes will be opened after you do Xen or ESX atop a SAN.
> If you're going to do drbd replication between two guests on two physical
> Xen hosts then you may as well not use Xen at all. It's pointless.

Where did I say I haven't done that yet? I have indeed worked with VM
infrastructures using SAN storage, and yes, it's fantastic. Just this
particular location doesn't have a SAN box installed. And we will have to
agree to disagree, as I personally do see the benefit of using VMs with local
storage.

> What you need to do right now is build the justification case for
> installing the SAN storage as part of the initial build out and setup your
> virtual architecture around shared SAN storage. Don't waste your time on
> this other nonsense of replication from one guest to another, with an
> isolated storage pool attached to each physical Xen server. That's just
> nonsense. Do it right or don't do it at all.
>
> Don't take my word for it. Hit Novell's website and VMware's and pull up
> the recommended architecture and best practices docs.

You don't need to tell me :) I already know how great it is.

> One last thing. I thought I read something quite some time ago about Xen
> working on adding storage layer abstraction which would allow any Xen
> server to access directly connected storage on another Xen server, creating
> a sort of quasi shared SAN storage over ethernet without the cost of the FC
> SAN. Did anything ever come of that?

I haven't really been following how the 4.x branch is going, as it wasn't
stable enough for our needs. Random lockups would always occur. The 3.x
branch is rock solid. There have been no crashes (yet!)

Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't mind
experimenting with DRBD and GFS2 if it means fewer problems?
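[Stan's suggestion earlier in the thread was to do sorting with procmail or
maildrop rather than sieve, so that nothing but Dovecot itself ever touches
the index files. A minimal ~/.procmailrc sketch; the List-Id pattern and
folder name are made up for illustration.]

```
# ~/.procmailrc -- maildir delivery without touching Dovecot's indexes
MAILDIR=$HOME/Maildir/      # trailing slash = maildir format
DEFAULT=$MAILDIR

# file list traffic into a maildir subfolder (hypothetical List-Id)
:0
* ^List-Id:.*lists\.example\.com
.lists.example/
```

Postfix would hand mail to procmail via mailbox_command; since procmail only
writes new files into the maildir, Dovecot picks the messages up and updates
its own indexes on the next mailbox scan.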
On 14.01.2011 20:16, Patrick Westenberg wrote:
> Hello,
>
> just to get it right:
> DRBD for shared storage replication is OK?
>
> Patrick

using it already

-- 
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria
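[For reference, an active-active DRBD resource of the kind discussed here
looks roughly like this in DRBD 8.3 syntax. Hostnames, devices, and addresses
are placeholders; dual-primary mode additionally requires a cluster filesystem
(GFS2/OCFS2) on top and proper fencing, or you will corrupt the mail store on
the first split-brain.]

```
# /etc/drbd.d/mailstore.res -- dual-primary resource sketch (DRBD 8.3)
resource mailstore {
  protocol C;                  # synchronous replication
  net {
    allow-two-primaries;       # required for active-active (GFS2/OCFS2)
  }
  startup {
    become-primary-on both;
  }
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.0.2.1:7789;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.0.2.2:7789;
    meta-disk internal;
  }
}
```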
On 23 Jan 2011, at 08:51, Jan-Frode Myklebust wrote:
> On Sun, Jan 23, 2011 at 02:01:49AM -0200, Henrique Fernandes wrote:
>
>> It is better, because now we have a decent webmail (horde with dimp
>> enabled; before it was just imp), and most people used to have pop
>> configured because of the 200mb quota, and few users used webmail. Now
>> many more people use the webmail and imap because the quota is 1gb now.
>> Any better free webmail to point out to us to test?
>
> We were considering horde, and the upcoming horde-4, but IMHO the
> interface is too old-fashioned. I stumbled over SOGo a few months ago,
> and IMHO it looks great, and they are doing (almost) everything right.
>
> http://www.sogo.nu/
> http://www.sogo.nu/english/tour/online_demo.html

Have a look at Roundcube: http://roundcube.net/
* Jan-Frode Myklebust <janfrode at tanso.net>:
> On Sun, Jan 23, 2011 at 02:01:49AM -0200, Henrique Fernandes wrote:
>
>> It is better, because now we have a decent webmail (horde with dimp
>> enabled; before it was just imp), and most people used to have pop
>> configured because of the 200mb quota, and few users used webmail. Now
>> many more people use the webmail and imap because the quota is 1gb now.
>> Any better free webmail to point out to us to test?
>
> We were considering horde, and the upcoming horde-4, but IMHO the
> interface is too old-fashioned. I stumbled over SOGo a few months ago,
> and IMHO it looks great, and they are doing (almost) everything right.

+1

p at rick

-- 
state of mind
Digitale Kommunikation

http://www.state-of-mind.de

Franziskanerstraße 15     Telefon +49 89 3090 4664
81669 München             Telefax +49 89 3090 4666

Amtsgericht München
Partnerschaftsregister PR 563