mikkel at euro123.dk
2007-Nov-04 12:02 UTC
[Dovecot] Dovecot write activity (mostly 1.1.x)
I'm experiencing write activity that's somewhat different from my previous qmail/courier-imap/Maildir setup. It is more pronounced in v1.1.x than in v1.0.x (I'm using Maildir). Write activity is about half that of read activity when measuring throughput, but when measuring operations it's about 5-7 times as high (measured with zpool iostat on ZFS). I think this might be due to the many small updates to the index and cache files.

Anyway, since writes are much more demanding on the disk than reads, Dovecot ends up being slower (only a little, though) than my old qmail/courier-imap/Maildir setup was, and the old setup didn't even benefit from indexes like Dovecot does. (By "slower" I mean that it can service fewer users before it hits the wall.) Of course there are also lots of benefits to using Dovecot. I'm just wondering whether this is something that should be focused on for later versions (maybe writes could be grouped together or something like that). Dovecot is very cheap on the CPU side, so the only real limit in terms of scalability is the storage.

Regards, Mikkel
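For reference, the measurement described above can be reproduced with zpool iostat; this is a generic sketch (the pool name "tank" is a placeholder, not from the original mail):

```
# Print pool-wide I/O statistics every 5 seconds.
# The "operations" columns show read/write IOPS;
# the "bandwidth" columns show throughput.
zpool iostat tank 5
```

Comparing the two column groups over a steady workload is what yields the throughput-vs-operations ratio quoted above.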
On Sun, 2007-11-04 at 13:02 +0100, mikkel at euro123.dk wrote:
> Write activity is about half that of read activity when measuring throughput.
> But when measuring operations it's about 5-7 times as high (measured with
> zpool iostat on ZFS).

Have you tried with fsync_disable=yes? ZFS's fsyncing performance apparently isn't all that great. I'm guessing this is also the reason for the I/O stalls in your other mail.
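The setting suggested above is a single line in dovecot.conf; a minimal sketch (check your version's documentation for the exact semantics before relying on it):

```
# dovecot.conf
# Skip fsync() calls after writes; trades crash safety
# for fewer synchronous disk operations.
fsync_disable = yes
```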
mikkel at euro123.dk
2007-Nov-04 13:10 UTC
[Dovecot] Dovecot write activity (mostly 1.1.x)
On Sun, November 4, 2007 1:51 pm, Timo Sirainen wrote:
> On Sun, 2007-11-04 at 13:02 +0100, mikkel at euro123.dk wrote:
>
>> Write activity is about half that of read activity when measuring
>> throughput. But when measuring operations it's about 5-7 times as high
>> (measured with zpool iostat on ZFS).
>
> Have you tried with fsync_disable=yes? ZFS's fsyncing performance
> apparently isn't all that great. I'm guessing this is also the reason for
> the I/O stalls in your other mail.

I'm using fsync_disable=yes already. I know ZFS has issues. In my opinion it was never ready when it was released, but it has such nice features that I'm trying to cope with its peculiarities. I've also disabled "flush cache requests" in ZFS, since that made performance horrible. If I'm the only one experiencing this, then I guess I'll just have to accept it as yet another ZFS curiosity :| (Possibly this is also the answer to my other post regarding stalled/delayed I/O.)

- Mikkel
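Disabling ZFS cache flushes on Solaris of that era was typically done with the zfs_nocacheflush tunable; a sketch of how that is usually configured (assumes Solaris /etc/system, takes effect after reboot - the original mail does not say which method was used):

```
# /etc/system (Solaris)
# Tell ZFS not to issue cache-flush commands to the disks.
# Only safe when the storage has battery-backed or otherwise
# non-volatile write caches; otherwise it risks data loss.
set zfs:zfs_nocacheflush = 1
```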
mikkel at euro123.dk
2007-Nov-05 09:35 UTC
[Dovecot] Dovecot write activity (mostly 1.1.x)
On Sun, November 4, 2007 4:32 pm, Timo Sirainen wrote:
>> I didn't know that mail_nfs_index=yes resulted in a forced chown.
>> How come that's necessary with NFS but not on local disks?
>
> It's used to flush the NFS attribute cache. Enabling it allows you to use
> multiple servers to access the same maildir at the same time while still
> having the attribute cache enabled (v1.0 required actimeo=0). If you don't
> need this (and it's better if you don't), then just set the mail_nfs_*
> settings to "no" and it works faster.
>
>> By the way, I misinformed you about fsync_disable=yes.
>> It was like that before I upgraded to v1.1, but v1.1 requires
>> fsync_disable=no when mail_nfs_index=yes, so I had to disable it.
>
> So you use ZFS on the NFS server, but Dovecot is using NFS on the client
> side? The fsyncs then just mean that the data is sent to the NFS server,
> not a real fsync on ZFS.

Thanks a lot for the help - this changed a lot! Disk writes fell to about 1/3 of what they were before. I guess the reason is that ZFS can now make use of its caching capabilities.

Deliver's activity is completely random, since it's impossible to load balance a connection based on the e-mail recipient - only the IP is known at the load balancing point. Therefore I have fsync_disable=no for deliver. It's easy to force clients using imap/pop3 to the same server, since that can be based on the IP alone. Therefore I have fsync_disable=yes for imap/pop3.

This changed everything. Now there's a real performance gain upgrading from 1.0.x to 1.1.x: about two to three times less disk activity overall (reads were already improved). That's pretty neat!
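The deliver-vs-imap/pop3 split described above might look like this in dovecot.conf - a sketch only, assuming Dovecot 1.1's protocol {} blocks allow overriding fsync_disable per service (verify against your version's configuration docs):

```
# Global default: real fsyncs. deliver runs under this,
# since a message can land on any server behind the
# load balancer and must be durable on disk.
fsync_disable = no

protocol imap {
  # imap clients are pinned to one server by source IP,
  # so skipping fsync is acceptable here.
  fsync_disable = yes
}

protocol pop3 {
  # Same reasoning as imap: IP-based server affinity.
  fsync_disable = yes
}
```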