Hi,

we have a medium setup (8000 POP and IMAP users using almost every
available client, 800 GB of stored mail using maildir on a Celerra NFS
server, with index files on local disks, and procmail for local
delivery), served by a Dell PowerEdge 2850 (2 GB RAM and dual P4
Xeon 3.2 GHz).

Our current not-so-high-availability setup is based on a similar
server with the same setup and an easy but manual process to switch
from one server to the other.

We are thinking about setting up some kind of serious high
availability, but every strategy we consider has its own problems,
and I'd like to hear your opinions about them:

- The recommended setup, with each user always being sent to the same
  server, is not possible because our load balancers (Cisco Catalyst
  6000) can't do that.

- We could put both servers behind the load balancer and keep local
  index files on each server. Usually the same IP will be redirected
  to the same server, so few problems should arise. When a user is
  sent to a new server, the index will be rebuilt, so performance will
  be bad, but we should not expect other problems, right?

- We could also put the index files on an NFS share. No problems, but
  pretty bad performance.

- We could also get more RAM for the servers and keep the indexes in
  memory.

How can we compare these solutions? Apart from performance, are other
problems to be expected? Could using deliver instead of procmail
improve performance?

- We've also thought about some more or less exotic setups, like a GFS
  filesystem for the index files, or a proxy on every server which
  redirects each user to their fixed server (a rough sketch of the
  proxy idea is below), but they seem too complex for little gain.

Any recommendations? How are you doing this?
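For reference, I guess the proxy idea could use Dovecot's own proxying
support, with the passdb returning the proxy and host extra fields that
point at each user's fixed server. A minimal sketch assuming a SQL
passdb; the users table and mailhost column are made up:

  # dovecot-sql.conf (hypothetical schema)
  password_query = \
    SELECT password, mailhost AS host, 'y' AS proxy \
    FROM users WHERE userid = '%u'

--
Joseba Torre. Vicegerencia de TICs, Área de Explotación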
Joseba Torre schrieb:
> Hi,
>
> we have a medium setup (8000 POP and IMAP users using almost every
> available client, 800 GB of stored mail using maildir on a Celerra NFS
> server, with index files on local disks, and procmail for local
> delivery), served by a Dell PowerEdge 2850 (2 GB RAM and dual P4
> Xeon 3.2 GHz).
> [...]
> Any recommendations? How are you doing this?

Search the list archive, this was discussed before. An HA balanced
setup with a GFS-like filesystem is possible and working. Sorry, I
can't help with the Cisco; I have only done small tests with plain HA
Linux setups, using a layout of 4 servers (2 HA load balancers and 2
postfix/dovecot servers) on Ubuntu with one cluster IP. There are
several possible ways to get what you need, so fitting it to the local
network layout is always needed, and hard testing before the production
stage is always needed as well.
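One common way to build such a load balancer pair on Linux (not
necessarily the only one) is LVS with ldirectord balancing a cluster IP
across the dovecot servers. A minimal sketch with made-up addresses:

  # /etc/ha.d/ldirectord.cf (illustrative only)
  checktimeout=10
  checkinterval=15
  quiescent=no

  # cluster IP for IMAP, balanced across the two real servers
  virtual=192.0.2.10:143
          real=10.0.0.1:143 gate
          real=10.0.0.2:143 gate
          service=imap
          scheduler=wlc
          protocol=tcp
          checktype=connect

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria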
Joseba Torre wrote:
> we have a medium setup (8000 POP and IMAP users using almost every
> available client, 800 GB of stored mail using maildir on a Celerra NFS
> server, with index files on local disks, and procmail for local
> delivery), served by a Dell PowerEdge 2850 (2 GB RAM and dual P4
> Xeon 3.2 GHz).
>
> Our current not-so-high-availability setup is based on a similar
> server with the same setup and an easy but manual process to switch
> from one server to the other.

If you don't mind keeping an active/standby setup and you're happy
with what you've currently got going on, you can easily automate the
failover with heartbeat.
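Something like this Heartbeat v1 configuration would do it; node names
and the service IP are made up, and you also need /etc/ha.d/authkeys
with a shared secret on both nodes:

  # /etc/ha.d/ha.cf (on both nodes)
  keepalive 2
  deadtime 30
  bcast eth0
  auto_failback off
  node mail1
  node mail2

  # /etc/ha.d/haresources (identical on both nodes): the active node
  # holds the service IP and runs dovecot
  mail1 IPaddr::192.0.2.10/24/eth0 dovecot

~Seth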
On Jul 24, 2009, at 5:00 AM, Joseba Torre wrote:
> we have a medium setup (8000 POP and IMAP users using almost every
> available client, 800 GB of stored mail using maildir on a Celerra NFS
> server, with index files on local disks, and procmail for local
> delivery), served by a Dell PowerEdge 2850 (2 GB RAM and dual P4
> Xeon 3.2 GHz).
>
> Our current not-so-high-availability setup is based on a similar
> server with the same setup and an easy but manual process to switch
> from one server to the other.

So you currently have a single server serving all imap/pop3 users?

> - The recommended setup, with each user always being sent to the same
>   server, is not possible because our load balancers (Cisco Catalyst
>   6000) can't do that.
>
> - We could put both servers behind the load balancer and keep local
>   index files on each server. Usually the same IP will be redirected
>   to the same server, so few problems should arise. When a user is
>   sent to a new server, the index will be rebuilt, so performance
>   will be bad, but we should not expect other problems, right?

If a single server can handle all users fine, I wouldn't try anything
special here. Just have them work as a master/slave pair and install
some kind of heartbeat to switch between them.

> - We could also put the index files on an NFS share. No problems, but
>   pretty bad performance.

If there's only a single server accessing the mails, you can use
mail_nfs_*=no and the performance shouldn't be that bad. (Example
settings at the end of this mail.)

> - We could also get more RAM for the servers and keep the indexes in
>   memory.

I'd say local disk is much better.

> Could using deliver instead of procmail improve performance?

http://wiki.dovecot.org/LDA/Indexing (a procmail sketch is below, too)

> - We've also thought about some more or less exotic setups, like a
>   GFS filesystem for the index files, or a proxy on every server
>   which redirects each user to their fixed server, but they seem too
>   complex for little gain.

Assuming still a master/slave setup, you could use DRBD to replicate
the indexes between local disks (sketch at the end).
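For the single-active-server case, the relevant settings would look
something like this (paths are made up):

  # maildirs on the NFS server, indexes on a local disk
  mail_location = maildir:/nfs/mail/%u/Maildir:INDEX=/var/indexes/%u

  # only one server accesses the mails at a time, so no NFS attribute
  # cache flushing is needed
  mail_nfs_storage = no
  mail_nfs_index = no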
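If you want to keep procmail for filtering, it can hand the final
delivery to deliver so the indexes get updated at delivery time. A
sketch, assuming the usual /usr/libexec/dovecot path (adjust to your
install):

  # /etc/procmailrc: final rule pipes the message to Dovecot's LDA,
  # which runs as the recipient user, so no -d flag is needed
  :0 w
  | /usr/libexec/dovecot/deliver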
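A minimal DRBD resource for the index partition could look like this;
host names, devices and addresses are made up:

  # /etc/drbd.conf (illustrative only)
  resource indexes {
    protocol C;
    on mail1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;       # local partition holding the indexes
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on mail2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }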