Displaying 20 results from an estimated 800 matches similar to: "IMAP Benchmarking, active/active cluster with GFS2"
2007 Jan 25
4
Benchmarking
I wasted too much time today doing this, but they do show a few
interesting things:
1) mmap_disable=yes is faster than mmap_disable=no. I didn't expect
this. I'm not yet sure why this is, but possibly because then it updates
the mail index by reading the changes from dovecot.index.log, instead of
reopening and re-mmapping dovecot.index. At least that's the biggest
difference between
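For reference, the setting being compared is a single line in dovecot.conf; a minimal sketch, not taken from the post itself:
# use pread()/pwrite() instead of mmap() for index files
mmap_disable = yes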
2007 Mar 29
1
uiddir mailbox format with benchmarks
I cleaned up the lib-storage code a bit so that it's easier to write
support for new mailbox formats. It could still use a bit of cleaning
up.
Anyway, I thought I'd see how easy it would be to implement a Cyrus-like
mail store consisting of only Dovecot's index files and
"<uid>." message files. Flags and everything else are kept only in
Dovecot's index files
2007 Mar 14
2
Benchmarking CVS HEAD vs. v1.0
Some new features in CVS HEAD are:
- v1.0 saves most of the data to dovecot.index.cache only when the
client is FETCHing the messages. With mboxes the message headers are
saved while parsing the mbox, which is almost the same thing.
With CVS HEAD, the dovecot.index.cache file is updated as soon as the
new messages are saved (with deliver or IMAP APPEND). This avoids
reading the message
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using Dovecot 2.0.12 to find the best
shared filesystem for hosting many users. Here I share the results with you;
notice the bad performance of all the shared filesystems compared to local
storage.
Is there any specific optimization/tuning on Dovecot for using GFS2 on
RHEL6? We have configured the director to make the user mailbox persistent
on one node; we will
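For reference, a minimal sketch of the cluster-filesystem settings commonly suggested for Dovecot on a shared filesystem such as GFS2, plus the director settings the poster mentions; the server addresses are placeholders, not taken from the post:
# dovecot.conf, shared-filesystem tuning (illustrative)
mmap_disable = yes
mail_fsync = always
lock_method = fcntl
# director: keep each user's sessions on one backend (placeholder addresses)
director_servers = 10.0.0.1 10.0.0.2
director_mail_servers = 10.1.0.1 10.1.0.2
director_user_expire = 15 min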
2007 May 12
3
dbmail benchmarking
I thought I'd try benchmarking with dbmail (v2.2.4) to see how much
slower a SQL backend could actually be. Skip to bottom for the
conclusions.
Originally I ran the tests with the databases on an XFS
filesystem. MySQL's performance was horrible; it went 3-7x faster
with ext3.
MySQL 5.0.30 backend (innodb):
./imaptest clients=1 - append=100 seed=1 secs=30 msgs=1000000 logout=0
2007 Jul 12
3
v1.1 status and benchmarks
v1.1 plans have changed a bit. I'll release v1.1.alpha1 soon and hope to
have a stable v1.1 in a month or two. The rest of the features that
didn't make it into v1.1 will go to v1.2. I'll write more about this
when v1.1 alpha is released.
I also did a bit of benchmarking. v1.1's performance improvements are
looking pretty great; it seems to be twice as fast as v1.0.
v1.0:
2007 May 30
0
Index file rewrite status
The biggest problem with getting v1.0 out was getting its index file
code stable. v1.1 plans included doing large changes to the index file
code, so it's important to get the new code stable as soon as possible.
Since I've managed to stay pretty productive for the last few weeks,
I've been mostly just coding the index changes. Once I'm sure that the
code is again fully working
2007 Sep 25
0
Performance on BSD
Hi list, I want to test the performance of dovecot-1.0.5 on OpenBSD and
FreeBSD. I want to implement a small pop3s/imaps server with ~3000 users.
I have tested dovecot-1.0.5 with vpopmail+mysql, but I don't know if the
performance is OK.
Has anybody tested with imaptest?
OpenBSD 4.1 amd64 SMP
dovecot-1.0.5
vpopmail-5.4.22
./imaptest seed=1 secs=60
Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele
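As a rough illustration only (the user template, password and host below are placeholders, not from this post), a heavier multi-client run against a range of test accounts could look like:
./imaptest host=localhost user=test%03d pass=testpass clients=50 secs=60 seed=1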
2015 Jan 14
1
Questions regarding imaptest
Hi,
The measurements were created under the following conditions:
- operating system: Red Hat Enterprise Linux Server release 6.6 (Santiago),
kernel version 2.6.32-504.el6.x86_64
- virtual server (VMware) with 4 vCPUs of an Intel(R) Xeon(R) E5649 @ 2.526 GHz
(2 cores per virtual socket) and 4 GB RAM
- 7,200 RPM SATA 1TB (FC SAN IBM System Storage N3400)
- all file systems had been formatted in
2013 Jun 28
1
Dovecot SLOW in imaptest without any apparent reason
Hello,
I'm migrating a mail server from a CentOS 5 cluster architecture to a
CentOS 6 cluster architecture. The new cluster involves faster machines
than the old cluster, and a virtual machine.
I use dovecot-2.0.9-5.el6.x86_64, while the old cluster uses
dovecot-2.0.1-1_118.el5.
The mail server uses MySQL for the user database, and a local LDAP for
authentication.
The storage is also much
2007 Mar 28
2
imaptest10 and stalled messages
Greetings -
I've now got as far as playing with the imaptest10 test utility to
see if I can stress-test our development server. imaptest10 is built
against dovecot-1.0rc28.
It may just be that I'm excessively heavy-handed, but when I let
imaptest10 rip with the command...
./imaptest10 user=test%03d host=testserver.imap.york.ac.uk clients=50
mbox=./dovecot.mbox msgs=1000 secs=30
2011 Jun 23
0
interpreting imaptest results
Hi
I'm doing some tests with an nginx proxy -> Dovecot using imaptest:
imaptest user=user host=host pass=pass msgs=50 clients=100
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100% 50% 50% 100% 100% 100% 50% 100% 100% 100% 100%
30% 5%
41 19 10 36 24 39 9 13 11 8 12 100/100
10 12 11 17 24 31 3
2007 Nov 30
3
Zimbra benchmarking
Now that I have a working kvm setup, I thought I'd finally see how
Zimbra works. This is mainly some microbenchmarking, so it may not have
much to do with actual performance in real life.
Setup:
- 1GB memory given to kvm (from host's 2GB)
- Intel Core 2 6600 (kvm uses only one CPU)
- CentOS 5
- 15GB qcow2 image on XFS filesystem
- Zimbra 5.0 RC2 RHEL5 x86_64
- Dovecot latest hg,
2010 Dec 17
6
Authentication issue.
Hi list,
I'm trying to run imaptest, but I get the following errors (running as root):
# ./imaptest copybox=Trash # or any other command in
http://www.imapwiki.org/ImapTest/Examples
Logi List Stat Sele Fetc Fet2 Copy Stor Dele Expu Appe Logo
100% 50% 50% 100% 100% 100% 33% 50% 100% 100% 100% 100%
30% 5% 5%
0 0 0 0 0 0 0 0 0 0
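Counters that stay at zero like this generally mean no command succeeded, i.e. the logins themselves are failing. As an illustration only (the host, user and password below are placeholders, not from the original report), imaptest can be pointed at an explicit test account instead of relying on its defaults:
./imaptest host=127.0.0.1 user=testuser pass=testpass copybox=Trash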
2007 Mar 20
5
Stalled imaptest10 process
Hello,
in order to stress my Dovecot test installation a bit, I compiled
imaptest10.c.
Now and then I see:
Logi List Stat Sele Fetc Fet2 Copy Stor Dele Expu Appe Logo
100% 50% 50% 100% 100% 100% 33% 50% 100% 100% 100% 100%
30% 5% 5%
0 0 0 0 0 0 0 0 0 0 0 0 1/
2009 Mar 05
3
Maildir dirty syncs
http://dovecot.org/patches/1.1/maildir-dirty-syncs.diff
This patch adds a new maildir_very_dirty_syncs setting. If set to "yes",
Dovecot assumes it's the only one changing the cur/ directory (so other
MDAs can add mails to new/ without problems). This makes it possible to
avoid rescanning the cur/ directory all the time looking for filenames.
It also looks like (in stress testing)
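A minimal sketch of enabling it once the patch is applied (in later Dovecot releases this became a regular setting):
maildir_very_dirty_syncs = yes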
2008 Feb 19
3
compiling error imaptest.c
dovecot-1.1.beta16]# gcc imaptest.c -o imaptest -g -Wall -W -I.
-Isrc/lib-mail -Isrc/lib -Isrc/lib-imap -Isrc/lib-storage/index/mbox
-DHAVE_CONFIG_H src/lib-storage/index/mbox/mbox-from.o
src/lib-imap/libimap.a src/lib-mail/libmail.a src/lib/liblib.a
imaptest.c: In function 'client_append':
imaptest.c:1492: error: too many arguments to function
'i_stream_create_limit'
2019 Jan 14
0
mdbox + zlib performing less than just mdbox
mdbox format is a cross between mbox and sdbox.
The idea is that it keeps mbox-like container files of up to mdbox_rotate_size, which contain the mails. Using zlib here will not help, because zlib is not applied to the full mdbox file, but to the individual mails within it.
Also, I'm not sure why you think that adding compression would make things faster? =)
I am not fully sure how cephfs works, but you might get better
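For context, a hedged sketch of how zlib compression is typically enabled together with mdbox in Dovecot 2.x; the mail location path and compression level are illustrative, not from this thread:
mail_location = mdbox:~/mdbox
mail_plugins = $mail_plugins zlib
plugin {
  zlib_save = gz
  zlib_save_level = 6
}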
2019 Jan 14
2
mdbox + zlib performing less than just mdbox
I have a test environment to determine what the best settings would be. I have
been told that enabling zlib compression would be good for saving IOPS on
storage. But doing the test now, I get worse results.
[@test2 ~]# pr -m -t mail04-mdbox-vdb-append-64kb-6.log
mail04-mdbox-vdb-append-64kb-8.log |less
Logi Sele Appe Logi Sele Appe
100% 100% 100% 100% 100%
2019 Jan 14
0
What does the mdbox_rotate_size influence?
I wondered why mdbox_rotate_size is 2MB by default.
I thought if I increased it to 16MB, maybe there would be less disk I/O,
but I don't see any difference. Furthermore, I read in some thread that
increasing it from 2MB could cause problems when deleting messages; can
someone explain this?
Does anyone have this on a cephfs mount? Does it make sense to set this
to 4MB?
[@test2 ~]# pr -m -t
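For illustration only, changing the rotate size is a one-line setting in dovecot.conf; the 16M value is just the figure discussed above, not a recommendation:
mail_location = mdbox:~/mdbox
mdbox_rotate_size = 16M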