Displaying 20 results from an estimated 2000 matches similar to: "1.0.rc22 released"
2007 Jan 25
4
Benchmarking
I wasted too much time today doing this, but they do show a few
interesting things:
1) mmap_disable=yes is faster than mmap_disable=no. I didn't expect
this. I'm not yet sure why, but possibly because it then updates
the mail index by reading the changes from dovecot.index.log instead of
reopening and re-mmapping dovecot.index. At least that's the biggest
difference between
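A minimal dovecot.conf fragment showing the setting under test (a sketch only; v1.0-era option name, the comments are my reading of the benchmark above, not the author's wording):

```
# Benchmark variant: pick up index changes by reading dovecot.index.log
# instead of reopening and re-mmapping dovecot.index
mmap_disable = yes

# Baseline variant for comparison:
#mmap_disable = no
```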
2007 Feb 05
2
Nitpicking: rc21 startup banner says rc19
> [jhg@helios RPMS]$ rpm -q dovecot
> dovecot-1.0-1_42.rc21.fc5.at
but:
> dovecot: Feb 05 10:49:21 Info: Dovecot v1.0.rc19 starting up
2007 May 12
3
dbmail benchmarking
I thought I'd try benchmarking with dbmail (v2.2.4) to see how much
slower a SQL backend could actually be. Skip to bottom for the
conclusions.
Originally I ran the tests with the databases on an XFS
filesystem. MySQL's performance was horrible; it went 3-7x faster
with ext3.
MySQL 5.0.30 backend (innodb):
./imaptest clients=1 - append=100 seed=1 secs=30 msgs=1000000 logout=0
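For readers unfamiliar with imaptest's key=value flags, an annotated form of the command above (the comments are my reading of the options, not authoritative):

```shell
./imaptest clients=1 - append=100 seed=1 secs=30 msgs=1000000 logout=0
#   clients=1     one concurrent IMAP connection
#   -             zero out the default command mix first
#   append=100    then run APPEND at 100% probability
#   seed=1        fixed random seed, so runs are repeatable
#   secs=30       run for 30 seconds
#   msgs=1000000  let the mailbox grow very large before expunging
#   logout=0      never log out mid-run
```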
2007 Feb 02
4
1.0.rc21 released
http://dovecot.org/releases/dovecot-1.0.rc21.tar.gz
http://dovecot.org/releases/dovecot-1.0.rc21.tar.gz.sig
Just one fix. Maybe the one big thing in Dovecot v2.0.* will be a test
suite, which is run before any release. :)
- Cache file handling could have crashed rc20
2007 Mar 14
2
Benchmarking CVS HEAD vs. v1.0
Some new features in CVS HEAD are:
- v1.0 saves most of the data to dovecot.index.cache only when the
client is FETCHing the messages. With mboxes the message headers are
saved while parsing the mbox, which is almost the same thing.
With CVS HEAD the dovecot.index.cache file is updated already when the
new messages are being saved (with deliver or IMAP APPEND). This avoids
reading the message
2007 Feb 02
2
broken attachments with pop3
Hi all,
I've just upgraded to rc21 (so I don't know whether this reported failure
is fixed with this release).
I have reports from users that attachments get broken (with rc19; I don't
know about earlier versions) for some accounts which were downloaded via a
POP3 service to an Exchange server, but in fact nothing is broken when
looking at the mailboxes over IMAP.
Has anyone an idea? I haven't any failure logs.
2007 Nov 30
3
Zimbra benchmarking
Now that I have a working kvm setup, I thought I'd finally try how
Zimbra works. This is mainly some microbenchmarking, so it may not have
much to do with actual performance in real life.
Setup:
- 1GB memory given to kvm (from host's 2GB)
- Intel Core 2 6600 (kvm uses only one CPU)
- CentOS 5
- 15GB qcow2 image on XFS filesystem
- Zimbra 5.0 RC2 RHEL5 x86_64
- Dovecot latest hg,
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2; we are also using Dovecot as a Director for user-node
persistence. Everything was OK until we started stress-testing the solution
with imaptest: we had many deadlocks, cluster filesystem corruptions and
hangs, especially in the index filesystem. We have configured the backend as
if it were on an NFS-like setup
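A sketch of what an "NFS-like" backend configuration typically looks like (assumed Dovecot 2.x option names; these values are illustrative and are not taken from the poster's actual config):

```
mmap_disable = yes       # don't mmap index files on the shared filesystem
dotlock_use_excl = no    # O_EXCL is unreliable on some shared filesystems
mail_fsync = always      # flush writes so other nodes see them promptly
mail_nfs_storage = yes   # flush attribute caches for mail files
mail_nfs_index = yes     # ...and for index files
lock_method = fcntl      # fcntl locks propagate across cluster nodes
```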
2024 Sep 15
0
NHW v0.3.0-rc22 new version
Hello,
For those interested, I have released the NHW v0.3.0-rc22 new version.
I continue to fine-tune the nhw_kernel weights. This new version then has
more precision and better visual quality.
More at: https://nhwcodec.blogspot.com/
Cheers,
Raphael
2007 Feb 20
3
rc22 segv when over quota
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hello,
I just pulled and built version 1.0.rc22
root@ux-2s11-9:/mnt/mailcache/dvtest/MailDir# dovecot -n
# /usr/local/etc/dovecot.conf
log_path: /var/tmp/dovecot.log
info_log_path: /var/tmp/dovecot.info
ssl_disable: yes
disable_plaintext_auth: no
verbose_ssl: yes
login_dir: /usr/local/var/run/dovecot/login
login_executable:
2009 Mar 30
2
dbox benchmarks
http://hg.dovecot.org/dovecot-dbox-redesign/
Looks like multi-dbox scales pretty nicely. Even after 100k messages the
peak saved msgs/sec is the same as the initial saved msgs/sec, even if
the average slows down somewhat.
I tested this by first deleting the mailbox, then running "imaptest" for a
second to get saving to start writing several fields to the
dovecot.index.cache file. Then ran
2014 Oct 10
1
fixes for quota support on NetBSD
Hi!
dovecot-2.2.13 already has quota support for NetBSD, but it's buggy.
The attached patches by Manuel Bouyer <bouyer at NetBSD.org> fix the
issues.
There is one thing that's not nice in them: one include is now for
"/usr/include/quota.h" since dovecot comes with its own file "quota.h"
which is earlier in the search path. Perhaps dovecot's copy can be
2007 Jan 26
3
imap-login crash with RC19
Hi Timo,
Using RC19, I've had the following crash. If there was a core file,
I've no idea where it's gone...
Jan 25 10:35:10 rouge dovecot: imap-login: file client.c: line 528
(client_unref): assertion failed: (client->refcount > 0)
Jan 25 10:35:10 rouge dovecot: child 25498 (login) killed with signal 6
Best regards,
--
Nico
On réalise qu'une femme est de la dynamite
2007 Jan 29
0
dovecot patch for filesystem quota
from a fellow pkgsrc-developer (who is not subscribed to this list)
Geert
----- Forwarded message from Manuel Bouyer <Manuel.Bouyer at lip6.fr> -----
From: Manuel Bouyer <Manuel.Bouyer at lip6.fr>
Message-ID: <20070129115851.GA12360 at asim.lip6.fr>
Date: Mon, 29 Jan 2007 12:58:51 +0100
To: ghen at NetBSD.org
Subject: dovecot patch for filesystem quota
User-Agent: Mutt/1.5.13
2007 Feb 20
1
crash in mail_cache_transaction_reset on rc22
I've not been able to roll out rc23 yet (tonight, I hope) but I just
saw a crash which I'm not sure I've seen reported before, following a server
outage (that is to say, the server came back up and one of the users had a
dovecot core).
#0 0x0005d720 in mail_cache_transaction_reset (ctx=0xcf928)
at mail-cache-transaction.c:71
No locals.
#1 0x0005e8bc in mail_cache_add
2007 Feb 09
0
LSUB * error solved on rc22
Hi All,
Just a quickie to say we are running the latest version now. Our LSUB
error was due to:
open(/var/mail/public/.Projects/dovecot-acl) failed: Permission denied
Thanks.
--
Kind Regards,
Gavin Henry.
Managing Director.
T +44 (0) 1224 279484
M +44 (0) 7930 323266
F +44 (0) 1224 824887
E ghenry at suretecsystems.com
Open Source. Open Solutions(tm).
http://www.suretecsystems.com/
2007 Aug 06
0
Benchmark: Appending 250k messages
I just changed Dovecot v1.1 code to work much more nicely when appending
new messages to large mailboxes.
Dovecot hg (cydir format, fsync_disable=yes):
./imaptest secs=1
./imaptest - append=100,0 logout=0 msgs=10000000 clients=1 seed=1 secs=300
Logi Sele Appe
100% 100% 100%
   1    1 2322   1/ 1
   0    0 1998   1/ 1
   0    0 2293   1/ 1
   0    0 2009   1/ 1
   0    0 1789   1/ 1
   0
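As a sanity check on the steady-state rate, the five complete Appe (appends/sec) samples above can be averaged with a one-liner (numbers copied from the table; plain awk, nothing Dovecot-specific):

```shell
# Average appends/sec over the five complete sample rows above
printf '%s\n' 2322 1998 2293 2009 1789 \
  | awk '{ sum += $1; n++ } END { printf "%.1f\n", sum / n }'
# → 2082.2
```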
2012 Jun 08
1
[PATCH] netbsd: pci passthrough for HVM guests
Implement pci passthrough for HVM guests for NetBSD Dom0.
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
From: Manuel Bouyer <bouyer@netbsd.org>
--
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo, Andrew Bowd
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
2006 May 08
0
CVS forked into development/stable branches
New branch created to CVS: branch_1_0
It's going to stabilize into v1.0 Dovecot, hopefully somewhat soon. I'll
start adding new larger features into HEAD, so you might want to switch
to branch_1_0 if you're currently using Dovecot from CVS. You can do
this with:
cvs up -r branch_1_0
The nightly snapshots are from branch_1_0, at least for now.