Displaying 20 results from an estimated 7000 matches similar to: "Quotas - per folder message limits?"
2010 Sep 06
1
dovecot Digest, Vol 89, Issue 25
Timo Sirainen <tss at iki.fi> Wrote:
> On Mon, 2010-09-06 at 14:26 +0100, Alan Brown wrote:
> > Is there any way of enforcing a Maildir per-folder message limit in
> > Dovecot?
> No.
Would you consider it as a feature request?
> > We're finding major performance hits once a threshold of files/directory
> > is exceeded. This applies across all
2011 Jan 18
2
dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com>
> Subject: Re: [Dovecot] SSD drives are really fast running Dovecot
>
>
> Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN
> storage unit that supports mixed SSD and spinning storage such as the Nexsan
> SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php
I can't speak for
2012 May 29
4
per-mailbox message limits
This is something Timo hacked up for me a few years ago and I realised
should be on the list in case anyone else wants them.
The following patches will limit maildir folders to 4000 messages on
Dovecot 2.0.* and 2.1.*
The Specfile patch is against the Cityfan Redhat EL5 SRPM but is likely
to work on most build platforms
Changing the message limit requires a recompile. It's brutal and
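For anyone wanting to try the same approach, a rough sketch of the usual SRPM rebuild workflow; package and patch file names here are placeholders, not the actual files from this thread:
  # unpack the source RPM into the rpmbuild tree
  # (~/rpmbuild on EL6+, /usr/src/redhat on EL5 by default)
  rpm -ivh dovecot-2.1.x-1.src.rpm
  # drop the message-limit patch alongside the other sources
  cp maildir-message-limit.patch ~/rpmbuild/SOURCES/
  # add matching PatchN: and %patchN -p1 lines to the spec, then rebuild
  rpmbuild -ba ~/rpmbuild/SPECS/dovecot.spec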
2014 Mar 10
1
gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through.
Hello,
When using gfs2 with quotas on a SAN that is providing storage to two
clustered systems running CentOS6.5, one of the systems
can crash. This crash appears to be caused when a user tries
to add something to a SAN disk when they have exceeded their
quota on that disk. Sometimes a stack trace is produced in
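For context, a minimal sketch of how GFS2 quotas are typically enabled and inspected on EL6; device, mount point and user below are made-up examples, and older releases used the gfs2_quota tool instead of the standard quota utilities:
  # enable quota enforcement at mount time (quota=account for accounting only)
  mount -o quota=on /dev/clustervg/sanvol /mnt/san
  # set block soft/hard limits for a user, then report usage
  setquota -u someuser 5000000 5500000 0 0 /mnt/san
  repquota /mnt/san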
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2; we are also using Dovecot as a director for user-to-node
persistence. Everything was fine until we started stress testing the solution
with imaptest: we hit many deadlocks, cluster filesystem corruptions and
hangs, especially on the index filesystem. We have configured the backend as
if it were on an NFS-like setup
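For reference, the "NFS-like" settings usually meant by that description look roughly like this (a sketch, not the poster's actual dovecot.conf):
  mmap_disable = yes
  mail_fsync = always
  mail_nfs_storage = yes
  mail_nfs_index = yes
  lock_method = fcntl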
2012 May 28
3
Dovecot 2.1 mbox + maildir
What syntax is needed to make this work?
The 2.0 wiki recommendations don't work - I can see the inboxes or the
folders, but not both at once, and there are lots of error messages about
prefix clashes if I simply use the existing 2.0.20 conf file on 2.1.6
The layout I have is:
Inboxes in mbox format - /var/spool/mail/%u
Folders in maildir format - /var/spool/imap/%u/Maildir/
Control and
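One commonly suggested shape for this kind of split is two namespaces with non-clashing prefixes; a rough, untested sketch using the paths from the post (the mbox folder directory and the "Folders/" prefix are illustrative only):
  namespace {
    prefix =
    separator = /
    inbox = yes
    location = mbox:/var/spool/imap/%u/mbox:INBOX=/var/spool/mail/%u
  }
  namespace {
    prefix = Folders/
    separator = /
    location = maildir:/var/spool/imap/%u/Maildir
  }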
2011 Feb 27
1
Recover botched drbd gfs2 setup.
Hi.
The short story... Rush job, never done clustered file systems before,
vlan didn't support multicast. Thus I ended up with drbd working ok
between the two servers but cman / gfs2 not working, resulting in what
was meant to be a drbd primary/primary cluster being a primary/secondary
cluster until the vlan could be fixed with gfs only mounted on the one
server. I got the single server
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
Where I want to arrive:
1) having two storage servers replicating a partition with DRBD
2) exporting the DRBD device from the primary server via GNBD, with GFS2 on it
3) importing the GNBD on some nodes and mounting it with GFS2
Assuming no logical errors in the points above, this is the current
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0 replicated to Server 2.
DRBD seems to work
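For reference, a minimal drbd.conf resource of that shape might look like the following; hostnames, volume group and addresses are placeholders, not taken from the post:
  resource r0 {
    protocol C;
    on server1 {
      device    /dev/drbd0;
      disk      /dev/VolGroup00/LogVol09;
      address   192.168.0.1:7788;
      meta-disk internal;
    }
    on server2 {
      device    /dev/drbd0;
      disk      /dev/VolGroup00/LogVol09;
      address   192.168.0.2:7788;
      meta-disk internal;
    }
  }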
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM.
% yum list | grep gfs2
gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2
It is missing from:
http://msync.centos.org/centos-5/5/os/SRPMS/
What I need from the SRPM are the patches. I'm working through
some issues using the source code, and the patches in the RedHat
SRPM
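If the SRPM can be located, the patches can be pulled out without installing it; a quick sketch, where the file name is only inferred from the installed RPM, not a verified download:
  # list the payload, then unpack it into the current directory
  rpm -qpl gfs2-kmod-1.92-1.1.el5_2.2.src.rpm
  rpm2cpio gfs2-kmod-1.92-1.1.el5_2.2.src.rpm | cpio -idmv
  ls *.patch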
2009 Mar 20
1
Centos 5.2 ,5.3 and GFS2
Hello,
I will be creating a new Xen cluster using GFS2 (with Conga, ...).
I know that GFS2 has been production ready since RHEL 5.3.
Do you know when CentOS 5.3 will be ready?
Can I install my GFS2 FS with CentOS 5.2 and then "simply" upgrade to
5.3 without reinstallation?
Tx
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody,
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but starting it on the second node shuts down the process on the
first one. My CTDB configuration implies 2 active-active nodes.
Does CTDB care if the node starts with clean_start="0" or
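For comparison, a typical two-node active-active CTDB setup keeps its recovery lock on the shared GFS2 mount and is configured along these lines; paths below are illustrative, and this is a sketch rather than the poster's cluster.conf-based approach:
  # /etc/sysconfig/ctdb (identical on both nodes)
  CTDB_RECOVERY_LOCK=/mnt/gfs2/ctdb/.recovery_lock
  CTDB_NODES=/etc/ctdb/nodes
  CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
  CTDB_MANAGES_SAMBA=yes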
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using Dovecot 2.0.12 to find the best
shared filesystem for hosting many users. Here I share the results with you;
notice the poor performance of all the shared filesystems compared to local
storage.
Is there any specific optimization/tuning in Dovecot for using GFS2 on
RHEL 6? We have configured the director to make each user's mailbox persistent
on a node, we will
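For anyone reproducing this kind of benchmark, a hedged example of an imaptest invocation; host, credentials and counts are made up, and the mbox= file is the sample message file imaptest is normally pointed at:
  imaptest host=10.0.0.5 port=143 user=testuser%d pass=secret \
      clients=50 secs=300 msgs=1000 mbox=dovecot-crlf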
2011 Sep 20
2
Finding i/o bottleneck
Hi list !
We have a very busy webserver hosted in a clustered environment where the
document root and data are on a GFS2 partition off a fibre-attached disk
array.
During busy moments, I can see in htop and nmon that a fair percentage of
CPU time is spent waiting for I/O. In nmon, I can spot that the busiest block
device corresponds to our gfs2 partition, where many times it shows that
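A quick way to quantify that, assuming the sysstat package is installed, is to watch the extended device statistics and compare the await and %util columns of the GFS2 block device against the other devices:
  iostat -dxk 5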
2010 Nov 15
3
Local node indexes in a cluster backend with GFS2
Hi all,
these days I'm testing a Dovecot setup using LVS, director and a clustered
email backend with two nodes using RHEL 5 and GFS2. On the two nodes of the
email backend I configured the mail location this way:
mail_location =
sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
/var/vmail is a clustered GFS2 filesystem shared by node1 and
node2
/var/indexes is a local
2013 Mar 21
1
GFS2 hangs after one node going down
Hi guys,
my goal is to create a reliable virtualization environment using CentOS
6.4 and KVM; I have three nodes and a clustered GFS2.
The environment is up and working, but I'm worried about the reliability: if
I turn the network interface down on one node to simulate a crash (for
example on the node "node6.blade"):
1) GFS2 hangs (processes go into D state) until node6.blade get
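That behaviour is usually tied to fencing, so a few commands often used on EL6 clusters to see what the cluster thinks is happening while GFS2 is blocked (the node name is taken from the post):
  cman_tool status
  cman_tool nodes
  group_tool ls
  # manually fence the dead node if automatic fencing is not completing
  fence_node node6.blade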
2013 Aug 05
1
Corrupted mboxes with v2.2.4, posix_fallocate and GFS2
Hi,
on a clustered Dovecot server installation that was recently moved from a
shared GPFS filesystem to GFS2, occasional corruptions in the users'
INBOXes started appearing, where a new incoming message would be appended
directly after a block of NUL bytes, and be scanned by dovecot as being
glued to the preceding message.
I traced this to the file extension operation performed in
2010 Nov 13
2
Is Dovecot Director stable for production??
Today I will try Dovecot director for a setup using a cluster backend with
GFS2 on RHEL 5. My question is whether director is stable for production use
at large sites. I know it is mainly designed for NFS, but I believe it will
also do the job for a cluster filesystem like GFS2 and should solve both the
problem of keeping a user's mail on one node and the locking issues.
I plan to add a layer behind a load balancer to
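For context, the director side of such a setup is configured roughly like this on Dovecot 2.0; addresses are placeholders, and this is a sketch rather than a verified production config:
  director_servers = 10.0.0.1 10.0.0.2
  director_mail_servers = 10.0.0.10 10.0.0.11
  service director {
    unix_listener login/director {
      mode = 0666
    }
    fifo_listener login/proxy-notify {
      mode = 0666
    }
    inet_listener {
      port = 9090
    }
  }
  service imap-login {
    executable = imap-login director
  }
  service pop3-login {
    executable = pop3-login director
  }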
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote:
> On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote:
>> I'm evaluating different cluster file systems that can work with large
>> clustered environment, e.g. hundreds of nodes connected to a SAN over
>> FC.
>>
>> So far I looked at OCFS2 and GFS2, they both worked nearly the same
>> in terms of performance, but since I ran my
2013 Aug 21
2
Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
At the moment I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
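For what it's worth, the mail_nfs_* settings are documented as NFS-specific, so one commonly discussed variant for a cluster filesystem such as GFS2 drops them and keeps the rest; a hedged sketch, not an authoritative recommendation:
  mmap_disable = yes
  mail_fsync = always
  mail_nfs_storage = no
  mail_nfs_index = no
  lock_method = fcntl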
2010 Dec 14
1
Samba slowness serving SAN-based GFS2 filesystems
Ok,
I'm experiencing slowness serving SAN-based GFS2 filesystems (of a specific
SAN configuration).
Here's my layout:
I have a server cluster.
OS= RHEL 5.4 (both nodes...)
kernel= 2.6.18-194.11.3.el5
Samba= samba-3.0.33-3.14.el5
*On this cluster are 6 GFS2 Clustered filesystems.
*4 of these volumes belong to one huge LUN (1.8 TB), spanning 8 disks. The
other 2 remaining volumes are 1