similar to: dovecot Digest, Vol 89, Issue 25

Displaying 20 results from an estimated 2000 matches similar to: "dovecot Digest, Vol 89, Issue 25"

2010 Sep 06
1
Quotas - per folder message limits?
Is there any way of enforcing a Maildir per-folder message limit in Dovecot? We're finding major performance hits once a threshold of files/directory is exceeded. This applies across all common filesystems with the threshold differing depending on the FS. GFS2 pretty much _STOPS_ for 10 minutes when users put 65k+ messages in one folder... I'm currently running 1.1.20 but looking to
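(For anyone hitting the same wall: stock Dovecot quota cannot cap individual folders, only per-user totals. A minimal sketch of the closest stock workaround, assuming a 2.x-style config; true per-folder limits needed the patches in the 2012 thread further down this page.)

    mail_plugins = $mail_plugins quota
    plugin {
      quota = maildir:User quota
      # Caps the user's TOTAL message count, not messages per folder.
      quota_rule = *:messages=60000
    }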
2011 Jan 18
2
dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com> > Subject: Re: [Dovecot] SSD drives are really fast running Dovecot > > > Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN > storage unit that supports mixed SSD and spinning storage such as the Nexsan > SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php I can't speak for
2009 Jul 27
0
Problems with power management xen 3.4
I've a problem with managing power consumption on an Intel Nehalem CPU. I've installed Xen 3.4 on our Dell PowerEdge T610 system on an Ubuntu 9.04 distribution. I recompiled the kernel 2.6.30-rc5. Now I can see the C-states of the CPUs, but I have no access to P-state information and the frequency scaling does not work. By the way, if I run the kernel 2.6.28.13, which is the last one
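(A first diagnostic step, assuming the Xen 3.4 xenpm tool is installed on the host:)

    # Compare what the hypervisor exposes for idle vs. frequency scaling.
    xenpm get-cpuidle-states     # C-states: reported visible above
    xenpm get-cpufreq-para       # P-states / governor: reported missing
    # On this era of Xen, P-state control is selected by the hypervisor
    # boot option cpufreq=xen (vs. cpufreq=dom0-kernel) in grub.conf.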
2003 Dec 09
4
was FXO cards
Hey guys, appreciate the input. Here are some thoughts. ADSI phones are out of the question. This is a business environment; I can't be worrying about my employees not knowing how to forward calls or answer calls when away from the multiline phone, and no ADSI phone that I have found will handle multiple lines. I would love to put 6 X100P cards in a case and run Asterisk on it. but... I
2012 May 28
3
Dovecot 2.1 mbox + maildir
What syntax is needed to make this work? The 2.0 wiki recommendations don't work: I can see the inboxes or the folders, but not both at once, and there are lots of error messages about prefix clashes if I simply use the existing 2.0.20 conf file on 2.1.6. The layout I have is: Inboxes in mbox format - /var/spool/mail/%u Folders in maildir format - /var/spool/imap/%u/Maildir/ Control and
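(A sketch of the usual two-namespace answer, using the paths from the post; untested, and the second prefix name is an assumption:)

    mail_location = mbox:~/mail:INBOX=/var/spool/mail/%u
    namespace inbox {
      prefix =
      separator = /
      inbox = yes
    }
    namespace {
      prefix = Folders/        # any prefix that cannot clash with mbox names
      separator = /
      location = maildir:/var/spool/imap/%u/Maildir
    }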
2012 May 29
4
per-mailbox message limits
This is something Timo hacked up for me a few years ago, and I realised it should be on the list in case anyone else wants it. The following patches will limit maildir folders to 4000 messages on Dovecot 2.0.* and 2.1.*. The Specfile patch is against the Cityfan Red Hat EL5 SRPM but is likely to work on most build platforms. Changing the message limit requires a recompile. It's brutal and
2011 Oct 20
1
cores-per-socket
Hi, We've been using Amazon EC2 a fair bit. We've discovered that to run a proprietary, commercial bioinformatics piece of software that is licensed by physical SOCKET instead of by core, we needed to use a virtualization technology that presents multiple cores as a single socket. (This proprietary software will allow us to use up to 4 cores per physical socket, so
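(Outside EC2, the same socket-packing trick is one line of libvirt domain XML; illustrative only, since EC2 itself does not expose guest CPU topology:)

    <!-- Present 4 cores as a single socket to the licensing check. -->
    <cpu>
      <topology sockets='1' cores='4' threads='1'/>
    </cpu>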
2009 Nov 16
1
dovecot ignoring folder permissions on directory creation
Ubuntu 8.04 LTS, Dovecot 1.2.6. So, further to the 'deliver' problem posted yesterday, I've also discovered another issue regarding permissions: files and directories are being created 0600/0700 by the IMAP and deliver processes (depending on who gets there first!), preventing use of shared mailboxes. According to the documentation: "When creating a new mailbox, Dovecot v1.2+ copies the
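(The usual remedy, hedged, follows from the documented inheritance behaviour the post quotes: set the wanted group and mode on the mail root itself and let Dovecot's processes join that group. The path and group name below are placeholders.)

    chgrp mailshare /srv/mail/shared     # hypothetical shared mail root
    chmod 0770 /srv/mail/shared          # new mailboxes inherit this mode
    # dovecot.conf: let imap/deliver access group-owned mail files
    mail_access_groups = mailshare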
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2, and we are also using Dovecot as a director for user-node persistence. Everything was fine until we started stress-testing the solution with imaptest: we hit many deadlocks, cluster filesystem corruptions, and hangs, especially on the index filesystem. We have configured the backend as if it were on an NFS-like setup
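(The standard mitigation on shared/cluster filesystems, per Dovecot's NFS-style guidance, is to keep mmap and dotlocks away from the shared indexes; a sketch for 2.x:)

    mmap_disable = yes        # do not mmap index files on GFS2
    mail_fsync = always
    mail_nfs_storage = yes
    mail_nfs_index = yes
    lock_method = fcntl       # avoid dotlocks on the cluster filesystem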
2013 May 03
1
sanlockd, virtlock and GFS2
Hi, I'm trying to put in place a KVM cluster (using CLVM and GFS2), but I'm running into some issues with either sanlock or virtlockd. All virtual machines are handled via the cluster (in /etc/cluster/cluster.conf), but I want some kind of locking to be in place as an extra safety measure. Sanlock ======= At first I tried sanlock, but it seems that if one node goes down unexpectedly,
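(For the virtlockd half, the relevant knobs in RHEL 6-era libvirt are sketched below; the lockspace path is an assumption and must live on storage all nodes share:)

    # /etc/libvirt/qemu.conf
    lock_manager = "lockd"
    # /etc/libvirt/qemu-lockd.conf
    file_lockspace_dir = "/gfs2/virtlockd"   # shared GFS2 path (placeholder)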
2011 Feb 27
1
Recover botched drbd gfs2 setup
Hi. The short story... Rush job, never done clustered file systems before, and the VLAN didn't support multicast. Thus I ended up with DRBD working OK between the two servers but cman/GFS2 not working, so what was meant to be a DRBD primary/primary cluster became a primary/secondary cluster until the VLAN could be fixed, with GFS2 mounted on only the one server. I got the single server
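(For reference, the config fragment that makes primary/primary possible once multicast works; drbd 8.x syntax, resource name assumed:)

    resource r0 {
      net {
        allow-two-primaries;       # required for dual-primary + GFS2
      }
      startup {
        become-primary-on both;
      }
    }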
2014 Mar 10
1
gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through. Hello, when using GFS2 with quotas on a SAN that is providing storage to two clustered systems running CentOS 6.5, one of the systems can crash. The crash appears to be triggered when a user tries to add something to a SAN disk after they have exceeded their quota on that disk. Sometimes a stack trace is produced in
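(Since RHEL 6.1, GFS2 supports the generic Linux quota tools, so checking and adjusting the limit that triggers the crash looks roughly like this; the user and mount point are placeholders:)

    repquota /mnt/sandisk                                    # per-user usage on the mount
    setquota -u someuser 9000000 10000000 0 0 /mnt/sandisk   # soft/hard block limits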
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all, where I want to arrive: 1) having two storage servers replicating a partition with DRBD 2) exporting the DRBD device via GNBD from the primary server with GFS2 3) importing the GNBD on some nodes and mounting it with GFS2. Assuming no logical errors in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
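(Steps 2 and 3 in GNBD userland would look roughly like this, EL5-era syntax; note that GFS2 in shared-writer mode still needs a DLM lock manager, which is exactly the cman machinery the post hopes to avoid:)

    # Server 1 (drbd primary): export the replicated device
    gnbd_export -v -e shared0 -d /dev/drbd0
    # Each client node: import and mount
    gnbd_import -v -i server1
    mount -t gfs2 /dev/gnbd/shared0 /mnt/shared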
2009 Feb 21
1
GFS2/OCFS2 scalability
Andreas Dilger wrote: > On Feb 20, 2009 20:23 +0300, Kirill Kuvaldin wrote: >> I'm evaluating different cluster file systems that can work with large >> clustered environment, e.g. hundreds of nodes connected to a SAN over >> FC. >> >> So far I looked at OCFS2 and GFS2, they both worked nearly the same >> in terms of performance, but since I ran my
2010 Dec 14
1
Samba slowness serving SAN-based GFS2 filesystems
Ok, I'm experiencing slowness serving SAN-based GFS2 filesystems (of a specific SAN configuration). Here's my layout: I have a server cluster. OS = RHEL 5.4 (both nodes...), kernel = 2.6.18-194.11.3.el5, Samba = samba-3.0.33-3.14.el5. On this cluster are 6 GFS2 clustered filesystems. 4 of these volumes belong to one huge LUN (1.8 TB) spanning 8 disks. The other 2 remaining volumes are 1
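(A commonly suggested first experiment for Samba on cluster filesystems, hedged and worth benchmarking rather than assuming; the share name and path are placeholders:)

    # smb.conf
    [bigshare]
       path = /mnt/gfs2vol
       posix locking = no      # cuts fcntl lock traffic through the DLM
    # and mount the GFS2 volume with noatime to avoid a write per read:
    #   mount -o noatime,nodiratime /dev/myvg/gfs2vol /mnt/gfs2vol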
2008 Jun 26
0
CEBA-2008:0501 CentOS 5 i386 gfs2-kmod Update
CentOS Errata and Bugfix Advisory 2008:0501 Upstream details at : https://rhn.redhat.com/errata/RHBA-2008-0501.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) i386: 629f45a15a6cef05327f23d73524358d kmod-gfs2-1.92-1.1.el5_2.2.i686.rpm dcc5d2905e9c0cf4d424000ad24c6a5b kmod-gfs2-PAE-1.92-1.1.el5_2.2.i686.rpm
2008 Nov 12
1
Use DECT GAP handsets with Snom M3 base?
Anyone have practical experience using inexpensive GAP-compliant DECT handsets with the Snom M3 basestation? When I asked Snom support, the answer was that 'basic functionality should work', but they didn't elaborate. I'm _guessing_ that means registering/unregistering with the base, making calls, and receiving calls (including presenting caller ID). They also stated that they
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM: % yum list | grep gfs2 gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2 It is missing from: http://msync.centos.org/centos-5/5/os/SRPMS/ What I need from the SRPM are the patches. I'm working through some issues using the source code, and the patches in the Red Hat SRPM
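(One way to get the source package without hunting mirrors by hand, assuming yum-utils and an enabled source/vault repository:)

    yumdownloader --source gfs2-kmod
    rpm2cpio gfs2-kmod-*.src.rpm | cpio -idv '*.patch'   # unpack just the patches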
2009 Mar 20
1
Centos 5.2 ,5.3 and GFS2
Hello, I will create a new Xen cluster using GFS2 (with Conga, ...). I know that GFS2 has been production-ready since RHEL 5.3. Do you know when CentOS 5.3 will be ready? Can I install my GFS2 FS with CentOS 5.2 and then "simply" upgrade to 5.3 without reinstallation? Tx
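(For what it's worth, a point-release upgrade on CentOS is an in-place yum run; no reinstall needed, though backing up first is prudent:)

    yum clean all
    yum update          # 5.2 -> 5.3 once the 5.3 repos are live
    reboot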
2013 Jun 07
0
CentOS-announce Digest, Vol 100, Issue 4
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When