similar to: .dovecot.sieve.log/tmp failed: Not a directory

Displaying 20 results from an estimated 8000 matches similar to: ".dovecot.sieve.log/tmp failed: Not a directory"

2020 Mar 19
0
.dovecot.sieve.log/tmp failed: Not a directory
> On 19/03/2020 10:10 mabi <mabi at protonmail.ch> wrote: > > > Hello > > I am running Dovecot 2.3.9 for IMAP access with RainLoop webmail and have noticed, for an account where I have some filters set up, that I get the following error messages in the Dovecot log file: > > Mar 17 19:02:50 mbox dovecot: imap(mail at domain.tld)<10511><ubAzvhCh1N2yxeu/>:
2020 Mar 19
1
.dovecot.sieve.log/tmp failed: Not a directory
------- Original Message ------- On Thursday, March 19, 2020 9:13 AM, Aki Tuomi <aki.tuomi at open-xchange.com> wrote: > This is usually prevented by not configuring mail_home and mail_location to the same directory: > > mail_home=/var/vmail/%Ld/%Lu/ > mail_location=maildir:~/mail Thanks for the tip. This means that if I now configure "mail_location=maildir:~/mail", I
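
A minimal sketch of the separation Aki suggests, using the virtual-user layout quoted above (the paths and maildir format are taken from the thread; adjust to your own layout):

  # conf.d/10-mail.conf (sketch)
  mail_home = /var/vmail/%Ld/%Lu/
  mail_location = maildir:~/mail

With the maildir kept in a subdirectory of the home, home-level files such as .dovecot.sieve and .dovecot.sieve.log no longer collide with mailbox directories, which is what produced the ".dovecot.sieve.log/tmp failed: Not a directory" errors.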
2019 Jul 02
2
dovecot.index.log: duplicate transaction log sequence (3)
Hello, I am running Dovecot 2.3.5.1 on OpenBSD 6.5 with RainLoop as IMAP webmail client and just noticed the following error messages about duplicate transaction log sequences in the index log: Jul 01 13:15:58 Error: imap(<REMOVED_EMAIL1>)<21324><TVSmwJyMd0C5D+Vb>: Transaction log /var/vmail/<REMOVED_DOMAIN>/<REMOVED_EMAIL1>/dovecot.index.log: duplicate transaction
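
The snippet ends before any resolution; a common remedy for a corrupted or duplicated dovecot.index.log is to let Dovecot rebuild the indexes for the affected account (the user name below is hypothetical):

  doveadm force-resync -u user@example.com INBOX
  # or for all folders of that account:
  doveadm force-resync -u user@example.com '*'

Dovecot then regenerates the index and transaction log files from the maildir contents on the next access.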
2019 Mar 15
1
Unable to set quota-fs plugin [fixed]
The issue was in the systemd service file. The option PrivateDevices was set, which prevents the service from having access to physical devices. I removed this option and since then quota is reported without errors. Thanks for your support Regards, - Eric Grammatico _/) On 14 March 2019 at 16:42, "Eric Grammatico" <e.grammatico at gmail.com> wrote: > Sure !! > > I got it ! I
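
For reference, a sketch of how that restriction would typically be relaxed with a systemd drop-in rather than editing the packaged unit file (the unit name dovecot.service is assumed):

  systemctl edit dovecot.service
  # in the drop-in:
  [Service]
  PrivateDevices=no
  # then:
  systemctl daemon-reload
  systemctl restart dovecot.service

PrivateDevices=yes gives the service a private /dev without physical device nodes, which is why the filesystem quota lookups failed.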
2017 Nov 10
2
Sieve global path?
On Fri, 10 Nov 2017 03:41:20 -0500 Bill Shirley <bill at KnoxvilleChristian.org> wrote: > No it isn't shown as a folder. All folder directories here begin with a dot, > i.e. .INBOX .Trash .Drafts > > Bill No, they don't. I thought that, too. But using the RainLoop webmail interface on top of such a config showed the sieve folder in the overview. Sometimes you can
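
One hedged sketch of keeping Sieve data out of the folder listing is to store it outside the mail_location, assuming mail_home and mail_location point to different directories and a Maildir++ layout where only dot-directories are treated as folders (paths are illustrative):

  plugin {
    sieve = file:~/sieve;active=~/.dovecot.sieve
  }

With the scripts in ~/sieve and the active script at the home level rather than inside the mail_location, a client such as RainLoop should no longer list a sieve folder.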
2018 Nov 15
2
Quota in MySql Dict not recalculate automatic
Hi, I have a working installation with: Ubuntu 16.04 LTS, Dovecot 2.2.22, MySQL 5.7.24, Postfixadmin 3.2, Apache 2.4.18, RainLoop 1.12.1. I manage the e-mail accounts with Postfixadmin in a MySQL DB. I also use quotas with the quota backend postfixadmin-DB (dict). Everything works fine. Now I have installed a new server with the following versions and migrated the configs to the new system: Ubuntu 18.04 LTS
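
While debugging a dict quota, the stored usage for a single account can be inspected and recalculated by hand (the user name is hypothetical):

  doveadm quota get -u user@example.com
  doveadm quota recalc -u user@example.com

doveadm quota recalc walks the mailboxes and rewrites the stored usage, which helps to distinguish a stale dict value from an actual counting problem.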
2017 Aug 22
3
self-heal not working
Thanks for the additional hints, I have the following 2 questions first: - In order to launch the index heal, is the following command correct: gluster volume heal myvolume - If I run a "volume start force", will it cause any short disruptions on my clients which mount the volume through FUSE? If yes, for how long? This is a production system, that's why I am asking. > --------
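
For reference, the heal commands being discussed in this thread (the volume name is the one used in the thread):

  # trigger a heal of everything the self-heal index knows about
  gluster volume heal myvolume
  # list the entries that are still pending heal
  gluster volume heal myvolume info

Both talk to the self-heal daemons and should not disturb existing FUSE mounts.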
2018 Oct 12
2
Sieve scripts not replicated
Hello, I use dovecot replication and the sieve scripts are not replicated. Mail replication is working fine. Log when sieve script (with Rainloop webmail) is created: Oct 12 12:57:57 srv1 dovecot: managesieve-login: Login: user=<hativ at example.com>, method=PLAIN, rip=91.67.174.186, lip=195.201.251.57, mpid=5360, TLS, session=<OXvK9QV4fOBbQ666> Oct 12 12:57:57 srv1 dovecot:
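
The snippet ends before the outcome; when debugging such a setup, replication for one user can be checked and triggered by hand to see whether the script travels with it (the user name is hypothetical):

  doveadm replicator status 'user@example.com'
  doveadm replicator replicate 'user@example.com'

Whether dsync carries the Sieve scripts along with the mail depends on the storage layout; the thread is about a case where it did not.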
2017 Aug 22
0
self-heal not working
On 08/22/2017 02:30 PM, mabi wrote: > Thanks for the additional hints, I have the following 2 questions first: > > - In order to launch the index heal is the following command correct: > gluster volume heal myvolume > Yes > - If I run a "volume start force" will it have any short disruptions > on my clients which mount the volume through FUSE? If yes, how long?
2017 Aug 28
3
self-heal not working
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 5:58 AM > UTC Time: August 28, 2017 3:58 AM > From: ravishankar at redhat.com >
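
The reply is not included in this excerpt; a common approach is to clear the trusted.afr.dirty xattr directly on each brick (not through the FUSE mount), along the lines of (the brick path is illustrative):

  # run on every brick that carries the file
  getfattr -d -m . -e hex /data/myvolume/brick/path/to/file
  setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 /data/myvolume/brick/path/to/file

followed by another "gluster volume heal myvolume" so the entry gets re-examined.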
2017 Aug 23
2
self-heal not working
I just saw the following bug which was fixed in 3.8.15: https://bugzilla.redhat.com/show_bug.cgi?id=1471613 Is it possible that the problem I described in this post is related to that bug? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 22, 2017 11:51 AM > UTC Time: August 22, 2017 9:51 AM > From: ravishankar at
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split brain based on the output: Brick node1.domain.tld:/data/myvolume/brick Status: Connected Number of entries in split-brain: 0 Brick node2.domain.tld:/data/myvolume/brick Status: Connected Number of entries in split-brain: 0 Brick node3.domain.tld:/srv/glusterfs/myvolume/brick Status: Connected Number of entries in split-brain: 0 > -------- Original
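
For reference, the per-brick summary quoted above is the output of:

  gluster volume heal myvolume info split-brain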
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled the DEBUG client-log-level, run a heal and attached the glustershd log files of all 3 nodes to this mail. The volume concerned is called myvol-pro; the other 3 volumes have no problem so far. Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name
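
The log level mentioned here is a volume option; a sketch of how it is typically toggled (the volume name is taken from the thread):

  gluster volume set myvol-pro diagnostics.client-log-level DEBUG
  # reproduce the heal, collect the glustershd logs, then revert:
  gluster volume set myvol-pro diagnostics.client-log-level INFO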
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like " got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards. Anyway I reproduced it by manually setting the afr.dirty bit for a zero byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the
2017 Aug 27
2
self-heal not working
----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ravishankar N" <ravishankar at redhat.com> > Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, August 27, 2017 3:15:33 PM > Subject: Re: [Gluster-users] self-heal not working > >
2019 Feb 07
0
"sieve: failed to store into mailbox 'Junk': Read-only mbox" over root_squashed NFS, lmtp : euid/egid set and access() don't mix together for me
Hi, I am trying to migrate an old-fashioned mail system to Debian 9.7 / Dovecot 2.2.7. I "have" to cope with mbox for now. I am trying to get rid of SunOS 5.9 sendmail before the mbox to mdbox migration (I'm fine if you laugh loudly ^^). Intended setup: 2 VMs with Exim (SMTP in, SMTP out, roughly), 3 VMs with Dovecot (mbox, maildir, testbed), 1 VM with IMAP proxy and LMTP proxy. doveconf -n is
2017 Jul 29
2
Not possible to stop geo-rep after adding arbiter to replica 2
Hello To my two node replica volume I have added an arbiter node for safety purposes. On that volume I also have geo-replication running and would like to stop it; it is in status "Faulty" and keeps trying over and over to sync without success. I am using GlusterFS 3.8.11. So in order to stop geo-rep I use: gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop but it
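
When a session is stuck in "Faulty" and a plain stop is refused, the stop command has a force variant (names taken from the thread):

  gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force

which tears the session down even when parts of it are unhealthy.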
2017 Aug 28
2
self-heal not working
Thank you for the command. I ran it on all my nodes and now finally the self-heal daemon does not report any files to be healed. Hopefully this scenario can be handled properly in newer versions of GlusterFS. > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 10:41 AM > UTC Time: August 28, 2017 8:41 AM >
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the geo-replication status command is run (without any volume name)? gluster volume geo-replication status Volume stop force should work even if a geo-replication session exists. From the error it looks like node "arbiternode.domain.tld" in the master cluster is down or not reachable. regards Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
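
For completeness, the two commands Aravinda refers to (the volume name is illustrative):

  # list all geo-replication sessions on the cluster
  gluster volume geo-replication status
  # stop the volume even if a geo-replication session still exists
  gluster volume stop myvolume force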
2017 Aug 21
2
self-heal not working
Hi Ben, So it is really a 0 kByte file everywhere (on all nodes, including the arbiter, and from the client). Below you will find the output you requested. Hopefully that will help to find out why this specific file is not healing... Let me know if you need any more information. Btw node3 is my arbiter node. NODE1: STAT: File:
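
The per-node output that follows in the original mail is typically gathered with something like this against the brick copy on each node (the file path is illustrative):

  stat /data/myvolume/brick/path/to/file
  getfattr -d -m . -e hex /data/myvolume/brick/path/to/file

which exposes the size, timestamps and trusted.* xattrs needed to judge why the file is not healing.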