Displaying 20 results from an estimated 3000 matches similar to: "nfs flush/fsync config settings problem"
2018 May 30
2
Fatal: nfs flush requires mail_fsync=always
Hello, any news about the attached error?
I'm preparing the 2.2 to 2.3 upgrade and am hitting the same error.
We have the mail stores on an NFS filer.
Regards
> On 19.01.2018 11:55, Søren Skou wrote:
>> Hiya all,
>>
>> I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
>> my testbed. The issue is that from what I can see, mail_fsync
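The settings behind this check are the shared-storage ones from the Dovecot NFS
wiki page; a minimal 10-mail.conf sketch of that combination (as commonly
recommended there — verify against your own version):

mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
mmap_disable = yes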
2018 Jan 19
1
Fatal: nfs flush requires mail_fsync=always
Hiya all,
I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
my testbed. The issue is that, from what I can see, mail_fsync is set
to always:
# doveconf -n | grep mail_fs
mail_fsync = always
The result is that the client does not connect at all, which is not
really what I wanted to happen :)
Any idea what is going wrong here?
Best regards
Søren P. Skou
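Worth noting here: doveconf -n without arguments prints the global value, but
the fatal check can trip on a protocol- or service-level override. A quick way
to inspect that (doveconf's -f filter is standard; the protocols shown are
just examples):

# doveconf mail_fsync
# doveconf -f protocol=pop3 mail_fsync
# doveconf -f protocol=imap mail_fsync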
2014 Jun 04
1
Dovecot + NFS + FreeBSD breakage ?
Hi,
I am trying to update my old mail servers from Dovecot 2.1.15 to 2.2.12 (FreeBSD ports) and to upgrade to FreeBSD 10.0-P3.
My mail storage is on NFS, with the indexes there as well.
On 2.1.15 everything is OK, and in 10-mail.conf I have the settings the wiki tells me to add (e.g. http://wiki2.dovecot.org/NFS).
BUT, when I try a single connection like:
$ telnet ::1 110
Trying ::1...
Connected to
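The snippet ends mid-session; for comparison, a healthy Dovecot POP3 endpoint
typically answers along these lines (hostname and exact banner are
illustrative):

$ telnet ::1 110
Trying ::1...
Connected to localhost.
Escape character is '^]'.
+OK Dovecot ready.
quit
+OK Logging out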
2013 Jan 16
4
Benchmarking: Dovecot vs Courier. Courier wins as POP3 server
Hi All,
I have compared Dovecot's performance to Courier's, and it appears that as a POP3 server Dovecot is two times slower, but as an IMAP server it is 1.5 times faster. Same node (16 CPUs), testing time of 30 min; please see the results and Dovecot configs attached.
The benchmark software is MStone, used by Sendmail Inc., so it is quite reliable.
I do not see anything else to tweak in Dovecot to
2018 Nov 15
2
huge increase in storage activity after dovecot upgrade
Yes, multiple IMAP servers using one shared NFS storage. With the same
config on 2.2.13 the public-interface traffic was similar to the
storage-interface traffic, around 100 Mbps.
After we switched to 2.2.27 the storage-interface traffic jumped 10 times
while the public interface stayed the same. This makes us think that
something is wrong and that each time a user logs in the whole INBOX content
is read
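A rough way to confirm where the extra reads come from is to watch the NFS
client op counters around a single test login (standard nfsstat flags; the
interpretation is the assumption here):

$ nfsstat -c   # client-side op counters; compare READ/GETATTR before and after one login
$ nfsstat -m   # verify the mount options actually in effect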
2013 Aug 21
2
Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
Right now I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
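On the mmap question specifically, one plausible experiment (not a
recommendation; mmap_disable = yes is the conservative carry-over from
NFS-style setups) is to benchmark both values:

mmap_disable = no    # GFS2 supports shared mmap, so this may be faster
#mmap_disable = yes  # safe default, as used on NFS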
2018 Nov 14
2
huge increase in storage activity after dovecot upgrade
Thanks, they are as in the example, except for "mailbox_list_index = yes", which
is from https://wiki.dovecot.org/PerformanceTuning
On Wed, Nov 14, 2018 at 12:18 PM Aki Tuomi <aki.tuomi at open-xchange.com>
wrote:
> You should review https://wiki2.dovecot.org/NFS to see that the settings
> make sense.
>
> Aki
> On 14.11.2018 12.00, Adrian M wrote:
>
> Thank you !
2018 Dec 18
2
High Load average on NFS Spool - v.2.1.15 & 2.2.13
I have two servers pointing to an NFS-mounted mail spool with Dovecot.
Since I recently switched from using Dovecot v1.X, I have been
experiencing high CPU use with the two Dovecot servers. I am not certain
why they are not well behaved. Here is the configuration information.
This configuration is currently running at a load average of 17.
/usr/sbin/dovecot -n
# 2.1.15:
2018 Dec 18
2
High Load average on NFS Spool - v.2.1.15 & 2.2.13
I have, but I will be happy to review it once again.
On 12/18/18 2:14 PM, admin wrote:
> Am Dienstag, den 18.12.2018, 14:06 -0500 schrieb Albert E. Whale, CEH
> CHS CISA CISSP:
>>
>> I have two servers pointing to an NFS-mounted mail spool with
>> Dovecot. Since I recently switched from using Dovecot v1.X, I have
>> been experiencing high CPU use with the two
2009 May 28
0
[PATCH server] Use fixed mount points and add timeouts to various calls.
This patch uses fixed mount points, which are necessary for migration
to work properly. Mount points are unique for each storage type.
This also uses the new :timeout keyword argument for various operations
that could take a while. This should fix the 'seq' timeout problem
we've been seeing. This requires the latest ruby-qpid which is now
in the ovirt repo.
Signed-off-by: Ian Main <imain
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2; we are also using Dovecot as a director for user-to-node
persistence. Everything was OK until we started stress testing the solution
with imaptest: we had many deadlocks, cluster filesystem corruptions and
hangs, especially in the index filesystem. We have configured the backend as if
it were on an NFS-like setup
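For reference, a stress run of the kind described usually looks something like
this with imaptest (host, credentials and sizing are hypothetical):

$ imaptest host=10.0.0.1 port=143 user=testuser%d pass=secret clients=50 secs=1800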
2009 May 29
0
[PATCH server] Add more debugging to storage tasks
This patch adds more debug calls in storage-related tasks.
Signed-off-by: Ian Main <imain at redhat.com>
---
src/task-omatic/task_storage.rb | 29 ++++++++++++++++++-----------
src/task-omatic/taskomatic.rb | 18 +++++++++---------
2 files changed, 27 insertions(+), 20 deletions(-)
diff --git a/src/task-omatic/task_storage.rb b/src/task-omatic/task_storage.rb
index
2003 Nov 04
0
PATCH: make local IP address available to auth modules
The attached patch makes the local IP address to which the client
connected available to the authentication modules; i.e., the local IP
address is available for substitution as %i for the mysql and pgsql
modules. We needed this feature to support thousands of our legacy
accounts which are authenticated by username/local_part (not the full
email address) and IP address (one per domain).
Timo,
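To make the use case concrete, a sketch of a dovecot-sql.conf password_query
using the patch's %i (local IP) substitution — table and column names are
hypothetical:

password_query = SELECT password FROM legacy_users \
  WHERE username = '%n' AND server_ip = '%i'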
2019 Nov 20
2
Error: Raw backtrace and index cache
Hi
I have "problem" with dovect 2.2.13 from repo debian8 and I don't know
how to solve it ...
Server is a virtual (kvm) with debian 8.11 (postfix + dovecot from repo)
and storage is mounting via nfs (I have use only one dovecot with
external storage)
All works fine but sometime ( after a few hours ) I have got a problem
with dovecot cache (i use indexes)
logs ->
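The log itself is cut off here. As a general note rather than a diagnosis of
this report: dovecot.index.cache is disposable, so when it is the file reported
as corrupted it can simply be deleted and Dovecot rebuilds it on the next
access (the path below is hypothetical):

$ rm /srv/mail/user/Maildir/dovecot.index.cache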
2009 Jul 09
1
[PATCH 1/5 ovirt-server] Add glusterfs to task-omatic API for {task_storage,utils}
---
src/task-omatic/task_storage.rb | 50 +++++++++++++++++++++++++++++++++++++++
src/task-omatic/utils.rb | 40 +++++++++++++++++++++++++++++++
2 files changed, 90 insertions(+), 0 deletions(-)
diff --git a/src/task-omatic/task_storage.rb b/src/task-omatic/task_storage.rb
index 77363ac..97ae4fc 100644
--- a/src/task-omatic/task_storage.rb
+++ b/src/task-omatic/task_storage.rb
@@
2011 Jul 05
2
Many "Error: Corrupted index cache file /XXX/dovecot.index.cache: invalid record size"
Hi all,
I just joined this list, so I'm sorry if this problem has already been
reported.
I'm running Dovecot 2.0.13 on many servers, one for POP/IMAP access,
others for LDA, others for authentication only, etc.
All servers are accessing a shared file system, based on MooseFS
(www.moosefs.org). The FS is mounted using FUSE.
All my Dovecot servers have this configuration:
2019 Nov 20
2
Error: Raw backtrace and index cache
Hi
Thanks for the reply.
Log:
http://paste.debian.net/1117077/
On 20.11.2019 10:07, Aki Tuomi wrote:
> Firstly, 2.2.13 is about 5 years old. So there's that. It would be
> helpful if you could reproduce this with 2.2.36.
>
> Also, you forgot to actually include the panic in your log snippet. So
> maybe a few more lines before the Raw backtrace?
>
> Aki
>
> On 20.11.2019
2009 Jul 31
0
[TAKE-2][PATCH 1/5] Add glusterfs to task-omatic API for task_storage
---
src/task-omatic/task_storage.rb | 50 +++++++++++++++++++++++++++++++++++++++
1 files changed, 50 insertions(+), 0 deletions(-)
diff --git a/src/task-omatic/task_storage.rb b/src/task-omatic/task_storage.rb
index 8165818..77b0166 100644
--- a/src/task-omatic/task_storage.rb
+++ b/src/task-omatic/task_storage.rb
@@ -202,6 +202,8 @@ class LibvirtPool
return
2018 Nov 14
2
huge increase in storage activity after dovecot upgrade
Thank you !
I was a little concerned that the following settings are not in line with the
new version:
mail_nfs_index = yes
mail_nfs_storage = yes
mail_fsync = always
mailbox_list_index = yes
maildir_stat_dirs = yes
mmap_disable = yes
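A quick way to sanity-check a carried-over list like this is to compare it
against the built-in defaults (doveconf's -d flag is standard):

$ doveconf -d mail_nfs_index mail_nfs_storage mail_fsync mmap_disable
$ doveconf -n | grep -E 'mail_nfs|mail_fsync|mmap_disable|mailbox_list_index|maildir_stat_dirs'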
On Wed, Nov 14, 2018 at 10:19 AM Aki Tuomi <aki.tuomi at open-xchange.com>
wrote:
> It should eventually wind down once all the problems are fixed. Of
2011 Aug 07
2
[PATCH] kinit minor checkpatch cleanup
coding style fixes.
FIXME: check that the compiled binary is the same!!
---
usr/kinit/initrd.c | 3 ++-
usr/kinit/kinit.c | 12 ++++--------
usr/kinit/kinit.h | 20 ++++++++++----------
usr/kinit/name_to_dev.c | 6 +++---
usr/kinit/nfsroot.c | 5 ++---
5 files changed, 21 insertions(+), 25 deletions(-)
diff --git a/usr/kinit/initrd.c b/usr/kinit/initrd.c
index