similar to: Is Dovecot Director stable for production?

Displaying 19 results from an estimated 2000 matches similar to: "Is Dovecot Director stable for production?"

2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using Dovecot 2.0.12 to find the best shared filesystem for hosting many users; here I share the results with you. Notice the poor performance of all the shared filesystems compared to local storage. Is there any specific optimization/tuning in Dovecot for using GFS2 on RHEL6? We have configured the director to keep each user's mailbox persistent on one node, and we will
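
For reference, a minimal sketch of the director settings such a setup might use, following the Dovecot 2.0 director wiki example (the addresses and the expiry value are placeholders, not the poster's actual configuration):

    # dovecot.conf on the director proxies (sketch)
    director_servers = 10.0.0.1 10.0.0.2        # the director ring itself
    director_mail_servers = 10.0.1.1 10.0.1.2   # the GFS2 backend nodes
    director_user_expire = 15 min               # how long a user stays pinned after last access

    service director {
      fifo_listener login/proxy-notify {
        mode = 0666
      }
      unix_listener login/director {
        mode = 0666
      }
      inet_listener {
        port = 9090                             # director ring communication
      }
    }
    service imap-login {
      executable = imap-login director
    }
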
2010 Nov 15
3
Local node indexes in a cluster backend with GFS2
Hi all, these days I'm testing a Dovecot setup using LVS, director and a clustered email backend with two nodes running RHEL5 and GFS2. On the two nodes of the email backend I configured the mail location this way: mail_location = sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n /var/vmail is a clustered GFS2 filesystem shared by node1 and node2; /var/indexes is a local
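
Spelled out with comments, the quoted mail_location splits storage and indexes like this (a sketch restating the poster's line, with Dovecot's %-variable expansions annotated):

    # shared GFS2 mailbox data, node-local index files
    # %d = domain, %n = user part of the login, %3n = first 3 chars of %n
    # (used here as a hashing directory level)
    mail_location = sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n

With node-local indexes, the director's user-to-node pinning is what keeps the two nodes from building divergent index files for the same mailbox.
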
2011 Jun 11
2
mmap in GFS2 on RHEL 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2, and we are also using a Dovecot director for user-to-node persistence. Everything was OK until we started stress-testing the solution with imaptest: we got many deadlocks, cluster filesystem corruptions and hangs, especially on the index filesystem. We have configured the backend as if it were on an NFS-like setup
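
"Configured as if on NFS" presumably means settings along these lines, from Dovecot's shared-filesystem documentation (a sketch; whether the mail_nfs_* cache-flush workarounds help or hurt on a true cluster filesystem like GFS2 is exactly the kind of question this thread raises):

    mmap_disable = yes        # don't mmap index files on the shared filesystem
    mail_fsync = always       # flush writes eagerly so other nodes see them
    mail_nfs_storage = yes    # NFS attribute-cache flush hacks for mail files
    mail_nfs_index = yes      # ...and for index files
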
2009 Jan 14
2
CSGFS 4 really outdated
CentOS CSGFS is at this time really outdated compared to current Red Hat updates and fixes. Is the CentOS team still supporting this version? Best regards
2010 Nov 16
1
Email backend monitor script for Director
Hi people, I know I saw this at some point on the list but can't find it. I need a script which monitors the health of the email backends and, if a node fails, removes it from the director server, then adds it back once it is up again. I plan to run the script on the load balancer. If you have something, let me know. Thanks in advance
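
A minimal sketch of such a health-check loop using doveadm's director commands (backend addresses, port and polling interval are assumptions to adjust to the actual setup):

    #!/bin/bash
    # Poll each backend's IMAP port; drop dead nodes from the director,
    # re-add them when they answer again.
    BACKENDS="10.0.1.1 10.0.1.2"   # hypothetical backend nodes
    while true; do
        for host in $BACKENDS; do
            if nc -z -w 5 "$host" 143; then
                doveadm director add "$host"      # harmless if already present
            else
                doveadm director remove "$host"   # stop routing users to the dead node
            fi
        done
        sleep 10
    done
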
2010 Jul 18
3
Proxy IMAP/POP/ManageSieve/SMTP in a large cluster environment
Hi to all on the list, we are putting together a test lab for a large-scale mail system with these requirements: - Scale to maybe 1 million users (only for testing). - Server-side filters. - User quotas. - High concurrency. - High performance and high availability. We plan to test this using RHEL5 and maybe RHEL6. As storage we are going to use an HP EVA 8400 FC (8 Gb/s). We defined this
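
For the proxy tier of such a design, the standard Dovecot approach is a passdb that returns proxy fields; a minimal static-passdb sketch from the proxying documentation (the backend address is a placeholder, and a real deployment this size would look the host up per user, e.g. via LDAP):

    passdb {
      driver = static
      args = proxy=y host=10.0.1.1 nopassword=y
    }
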
2008 Nov 06
1
Sync of csgfs and current CentOS 4 updates
Hi list, I've been trying to update my csgfs stuff to 4.7 this week, but I notice that the current csgfs is built against kernel-2.6.9-78.0.1 and requires it as a dependency, while the current kernel right now is 2.6.9-78.0.5. It would be great if csgfs could be rebuilt or updated against it so the dependencies work. Best regards
2008 Nov 20
2
Sync csgfs in CentOS 4 with current kernel updates
Hi list, is there any timeframe for the sync of csgfs in CentOS 4 with the current kernel updates? Best regards
2010 Nov 14
1
Recommended quota backend for a virtual users setup using LDAP and sdbox
Just that: which is the recommended quota backend for sdbox, in terms of performance and flexibility, in a setup of virtual users with LDAP? Thanks in advance
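
For sdbox with virtual users, a commonly suggested option in the Dovecot 2.0 era was the dict quota with a per-user file; a minimal sketch (the 1 GB limit is an example value):

    mail_plugins = $mail_plugins quota
    protocol imap {
      mail_plugins = $mail_plugins imap_quota        # IMAP QUOTA extension
    }
    plugin {
      quota = dict:User quota::file:%h/dovecot-quota # per-user quota file in the home dir
      quota_rule = *:storage=1G
    }
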
2010 Nov 17
1
Dovecot LDAP connection reconnecting after inactivity
Hi people, I have a setup configured using LDAP. I have noticed that, after a period of user inactivity, when a client opens connections to Dovecot the first attempts fail with this: Nov 16 19:34:43 cl05-02 dovecot: auth: Error: ldap(user at xxx.xx.xx,172.29.13.26): Connection appears to be hanging, reconnecting After the connection to LDAP has been re-established everything starts working OK. Is this a
2011 Nov 03
1
How to define LDAP connection idle
I'm having a problem with the Dovecot LDAP connection when the LDAP server is in another firewall zone: the firewall kills the LDAP connection after a set period of inactivity. This is good from the firewall's point of view but bad for Dovecot, because it never learns the connection has been dropped; this causes long timeouts in Dovecot before it finally reconnects, and meanwhile many users fail to
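
There was no idle/keepalive knob in dovecot-ldap.conf at the time; one workaround sometimes suggested is to lower the kernel's TCP keepalive timers below the firewall's idle timeout, so half-dead connections are torn down quickly (a sketch; values are examples, and it only helps for sockets that have SO_KEEPALIVE enabled):

    # /etc/sysctl.conf (Linux) -- probe idle connections after 5 minutes
    net.ipv4.tcp_keepalive_time = 300
    net.ipv4.tcp_keepalive_intvl = 30
    net.ipv4.tcp_keepalive_probes = 3
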
2011 Jul 04
0
Dovecot error on RHEL 6.1 using GFS2 (bug)
Hi, just to let all the people testing Dovecot on a RHEL 6.1 setup, where nodes are active/active and share a GFS2 filesystem, know that there is a bug in the latest RHEL 6.1 GFS2 kernel modules (and in the latest 6.0 updates) which lets Dovecot crash a GFS2 filesystem, with corruption and other related errors. Red Hat have posted a fix for the kernel, which is in QA:
2014 Aug 12
4
Does any other company use OCFS2 in production?
Hi all, I want to know whether any other companies use OCFS2 in production. Thanks. Jensen 2014.8.12
2012 Mar 07
1
[HELP!] GFS2 on Xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM: % yum list | grep gfs2 gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2 It is missing from: http://msync.centos.org/centos-5/5/os/SRPMS/ What I need from the SRPM are the patches; I'm working through some issues using the source code, and the patches in the Red Hat SRPM
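
Once a matching SRPM is located (older CentOS point-release packages usually end up on vault.centos.org; the exact path and filename below are assumptions), the patches can be pulled out without a full rpmbuild setup:

    # unpack only the patch files from the source RPM
    wget http://vault.centos.org/5.2/updates/SRPMS/gfs2-kmod-1.92-1.1.el5_2.2.src.rpm
    rpm2cpio gfs2-kmod-1.92-1.1.el5_2.2.src.rpm | cpio -idmv '*.patch'
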
2011 Feb 27
1
Recover botched DRBD GFS2 setup
Hi. The short story... Rush job, never done clustered filesystems before, and the VLAN didn't support multicast. Thus I ended up with DRBD working OK between the two servers but cman/GFS2 not working, so what was meant to be a DRBD primary/primary cluster ran as a primary/secondary cluster, with GFS mounted on only one server, until the VLAN could be fixed. I got the single server
2014 Mar 10
1
GFS2 and quotas - system crash
I have tried sending this before, but it did not appear to get through. Hello, when using GFS2 with quotas on a SAN that provides storage to two clustered systems running CentOS 6.5, one of the systems can crash. The crash appears to be triggered when a user tries to add something to a SAN disk after exceeding their quota on that disk. Sometimes a stack trace is produced in
2011 Jan 18
2
dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com> > Subject: Re: [Dovecot] SSD drives are really fast running Dovecot > > > Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN > storage unit that supports mixed SSD and spinning storage such as the Nexsan > SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php I can't speak for
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but running it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
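
For context, clean_start is an attribute of the <cman> element in /etc/cluster/cluster.conf: "0" (the safe default) makes a starting node assume other nodes may need startup fencing, while "1" assumes the whole cluster is starting clean and skips it. A two-node sketch (the attributes shown are illustrative):

    <cman two_node="1" expected_votes="1" clean_start="0"/>
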