Displaying 20 results from an estimated 704 matches for "mmap_disabled".
2007 Jan 25
4
Benchmarking
I wasted too much time today doing this, but the results do show a few
interesting things:
1) mmap_disable=yes is faster than mmap_disable=no. I didn't expect
this. I'm not yet sure why this is, but possibly because then it updates
the mail index by reading the changes from dovecot.index.log, instead of
reopening and re-mmapping dovecot.index. At least that's the biggest
difference between
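For reference, the setting being benchmarked lives in dovecot.conf; a minimal sketch of the two configurations compared above (the comments restate the explanation given here, not measured fact):
mmap_disable = yes    # index updates are read from dovecot.index.log
#mmap_disable = no    # default: dovecot.index is reopened and re-mmapped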
2011 Dec 12
1
Documentation clarification on mmap_disable
Greetings,
On http://wiki.dovecot.org/MainConfig I read:
"mmap_disable = no
Don't use mmap() at all. This is required if you store indexes to
shared filesystems (NFS or clustered filesystem). "
Does that mean:
1. mmap is required when using NFS, or
2. it is required not to use mmap at all when using NFS?
Sorry if this is obvious. Best regards.
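For what it's worth, reading 2 matches Dovecot's NFS documentation: the setting must be enabled so that mmap() is not used at all. A minimal sketch for an NFS-backed index directory:
# Indexes on NFS or a clustered filesystem: do not use mmap() at all.
mmap_disable = yes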
--
*Marcio Merlone*
2007 Jul 12
3
v1.1 status and benchmarks
v1.1 plans have changed a bit. I'll release v1.1.alpha1 soon and hope to
have a stable v1.1 in a month or two. The rest of the features that
didn't make it into v1.1 will go to v1.2. I'll write more about this
when v1.1 alpha is released.
I also did a bit of benchmarking. v1.1's performance improvements are
looking pretty great, it seems to be twice as fast as v1.0.
v1.0:
2006 Nov 19
0
Security hole #2: Off-by-one buffer overflow with mmap_disable=yes
Version: 1.0test53 .. 1.0.rc14 (i.e. all 1.0alpha, 1.0beta and 1.0rc
versions so far).
0.99.x versions are safe (they don't even have mmap_disable setting).
Problem: When the mmap_disable=yes setting is used, the dovecot.index.cache file
is read into memory using the "file cache" code. It contains a "mapped pages"
bitmask buffer. In some conditions, when updating the buffer, it allocates
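Until a fixed release can be installed, a hedged interim workaround follows from the description itself: the overflow lives in the file-cache code path that only runs with mmap_disable=yes, so the default setting avoids it (at the cost of losing whatever mmap_disable=yes was enabled for, e.g. NFS safety):
# Default; avoids the vulnerable file-cache path on affected versions.
mmap_disable = no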
2014 Oct 15
0
mmap_disable=yes not always honored
Hi all,
I'm experimenting with having the mail store on a 9p file system that lacks
mmap() functionality. So I disabled it in dovecot:
mmap_disable = yes
However, I keep getting the following error messages in my log:
Oct 15 16:55:00 computer-name dovecot: imap user at domain.com[192.168.1.3] Error: mmap() failed with file
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2; we are also using Dovecot as a Director for user node
persistence. Everything was OK until we started stress testing the solution
with imaptest: we had many deadlocks, cluster filesystem corruption and
hangs, especially in the index filesystem. We have configured the backend as if
it were on an NFS-like setup
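As a rough sketch, an "NFS-like" backend configuration in this sense usually means the parameters that also appear in the GFS2 tuning message further down in these results:
mmap_disable = yes
mail_fsync = always
lock_method = fcntl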
2005 Oct 14
1
"Out of memory" error in 1.0alpha3
Hi All,
I have Dovecot 1.0 Alpha3 running on x86_64 Linux 2.6 multi-CPU box
using Maildir folders. Moving messages from one folder to another in
Thunderbird 1.0.7, regardless of their size (usually small), produced
error 83, "Out of memory", when both of these parameters were unset
(default): mail_read_mmaped, mmap_disable. The log line before the error
was pool_system_malloc():
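A minimal sketch of toggling the two settings explicitly for testing (names as they existed in the 1.0 series; whether either one avoids the error is exactly the open question here):
mail_read_mmaped = no   # read mail files with read() rather than mmap()
mmap_disable = yes      # avoid mmap() for index files as well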
2012 Jun 20
2
dovecot 2.1.5 performance
Hello,
I'm migrating from 1.1.16, running on 4 Debian Lenny servers virtualized
with XenServer with 1 core and 5GB of RAM, to 2.1.5, running on 4 Ubuntu
12.04 servers with 6 CPU cores and 16GB of RAM virtualized with VMware,
but I'm having lots of performance problems. I don't think the
virtualization platform could be the problem, because the new servers
running in XenServer has
2013 Aug 21
2
Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
At the moment I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
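One hedged observation on the list above: the mail_nfs_storage and mail_nfs_index settings exist to flush NFS attribute caches and are documented as NFS-specific, so a GFS2-oriented variant might drop them:
mmap_disable = yes      # safe choice on any shared filesystem
mail_fsync = always
lock_method = fcntl
# mail_nfs_storage / mail_nfs_index are NFS attribute-cache workarounds
# and may be unnecessary on a cluster filesystem like GFS2.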
2005 May 14
6
1.0-test70
http://dovecot.org/test/
- vpopmail authentication fix
- many mmap_disable=yes fixes and a few optimizations
- pop3 hang fix
- mbox fix for "last-uid lost" error (hopefully last one)
- mbox fix for losing dirty flag, causing lost message flags
mmap_disable=yes seems to be finally working pretty well. There should
be no more cache file related errors with it enabled.
I'm still
2018 Apr 05
1
GFS2 writes extremely slow
Hello all,
We are facing extremely slow GFS2 writes on Red Hat 7 64-bit. The backend is a 16 Gbps FC SAN, so no issues there.
I have scoured the entire (anyway, most of the) Internet and arrived at the following settings.
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Did a systemctl restart dovecot and did not find any major
2005 Jul 22
2
1.0-stable and nfs
Hi all,
I have a little problem with 1.0-stable on an NFS share without lockd:
with:
- lock_method = dotlock
- mmap_disable = yes
I get:
- dovecot: Jul 22 11:58:33 Error: IMAP(xxxxxxxx): open() failed with
index file (null): Bad address
with:
- lock_method = dotlock
- mmap_disable = no
I get:
- dovecot: Jul 22 11:59:47 Error: IMAP(xxxxxxxx): lock_method=dotlock
and
2004 May 24
4
1.0-test12
http://dovecot.org/test/
- "Maildir sync: UID < next_uid (446 < 447, file = .." errors should
be fixed
- fixes for detecting changes in uidvalidity and external uidnext
changes
- several fixes and cleanups in index file handling. less code than
before and now changes to index header also go through transaction log.
that should mean that soon I can get mmap_disable = yes
2007 Mar 14
2
Benchmarking CVS HEAD vs. v1.0
Some new features in CVS HEAD are:
- v1.0 saves most of the data to dovecot.index.cache only when the
client is FETCHing the messages. With mboxes the message headers are
saved while parsing the mbox, which is almost the same thing.
With CVS HEAD the dovecot.index.cache file is updated already when the
new messages are being saved (with deliver or IMAP APPEND). This avoids
reading the message
2008 Feb 19
3
compiling error imaptest.c
dovecot-1.1.beta16]# gcc imaptest.c -o imaptest -g -Wall -W -I.
-Isrc/lib-mail -Isrc/lib -Isrc/lib-imap -Isrc/lib-storage/index/mbox
-DHAVE_CONFIG_H src/lib-storage/index/mbox/mbox-from.o
src/lib-imap/libimap.a src/lib-mail/libmail.a src/lib/liblib.a
imaptest.c: In client_append function:
imaptest.c:1492: error: too many arguments for i_stream_create_limit
function
2018 Dec 18
2
High Load average on NFS Spool - v.2.1.15 & 2.2.13
I have two servers pointing to an NFS-mounted mail spool with dovecot.
Since I recently switched from using Dovecot v1.X, I have been
experiencing high CPU use with the two Dovecot servers. I am not certain
why they are not well behaved. Here is the configuration information.
This configuration is currently running at a load average of 17.
/usr/sbin/dovecot -n
# 2.1.15:
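The doveconf output is cut off above; for comparison, a sketch of the NFS-related settings the 2.x documentation calls out when several servers share one mail spool:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes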
2006 Jun 13
2
nfs, dovecot, and maildirs
Hello - I know I am not the only one that will be trying, or has tried this
before. Here are my questions!
NFS Clients, multiple servers running dovecot - linux with 2.6 kernel with
relevant patches (utime, etc)
Backend - netapp filer
End users - various clients all running IMAP
It looks like 1.0.beta8 is definitely the way to go. I have a few questions
with regard to setup.
Is it now
2019 Nov 20
2
Error: Raw backtrace and index cache
Hi
I have "problem" with dovect 2.2.13 from repo debian8 and I don't know
how to solve it ...
Server is a virtual (kvm) with debian 8.11 (postfix + dovecot from repo)
and storage is mounting via nfs (I have use only one dovecot with
external storage)
All works fine but sometime ( after a few hours ) I have got a problem
with dovecot cache (i use indexes)
logs ->
2005 Apr 14
2
maildir in NFS
How can I configure Dovecot to work with maildirs over NFS?
If I add the option "index_mmap_invalidate = yes" (as in doc/nfs.t) then in the log I
get this message:
"Error: Error in configuration file /etc/dovecot.conf line 296: Unknown
setting: index_mmap_invalidate"
If I try to get the mail without the option "index_mmap_invalidate" then I have in
the log this
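If this is a 1.0-series build, a hedged explanation: index_mmap_invalidate was a 0.99-era option that no longer exists, which is why it is reported as unknown; the 1.0-era NFS advice is instead:
# 1.0 replacement for the old 0.99-era NFS mmap handling:
mmap_disable = yes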