similar to: wrong permission

Displaying 20 results from an estimated 700 matches similar to: "wrong permission"

2009 Jul 22
3
Newbie: unable to access mailbox more than once
Hello, I'm using dovecot as a mail relay so that I can back up my provider's IMAP mail locally. I'm impressed by how easy this was to set up, but I'm having a quirk that I would like help with, if possible. There are many weak links in my set-up chain, if you will, but I think I've narrowed down the problem. I'll describe my set-up anyway. I'm running dovecot 1.1.4 on Ubuntu
2017 Jun 28
3
afr-self-heald.c:479:afr_shd_index_sweep
Hi list, yesterday I noticed the following lines in the glustershd.log log file:
[2017-06-28 11:53:05.000890] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-iso-images-repo-replicate-0: unable to get index-dir on iso-images-repo-client-0
[2017-06-28 11:53:05.001146] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-vm-images-repo-replicate-0: unable to get index-dir
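As discussed later in this thread, a warning like this after an upgrade often means brick processes are still running the old binaries. A rough diagnostic sketch, assuming a hypothetical brick path of /srv/brick/iso-images-repo (the real path will differ):

    # are all bricks and the Self-heal Daemon reported online?
    gluster volume status iso-images-repo

    # does the index directory the self-heal daemon is asking for exist on the brick?
    ls /srv/brick/iso-images-repo/.glusterfs/indices/

    # re-check pending heals once the bricks look healthy
    gluster volume heal iso-images-repo info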
2017 Jun 28
2
afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote:
> On 06/28/2017 06:52 PM, Paolo Margara wrote:
>
>> Hi list,
>>
>> yesterday I noted the following lines into the glustershd.log log file:
>>
>> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
>> [afr-self-heald.c:479:afr_shd_index_sweep]
>>
2017 Jun 28
0
afr-self-heald.c:479:afr_shd_index_sweep
On 06/28/2017 06:52 PM, Paolo Margara wrote:
> Hi list,
>
> yesterday I noted the following lines into the glustershd.log log file:
>
> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
> [afr-self-heald.c:479:afr_shd_index_sweep]
> 0-iso-images-repo-replicate-0: unable to get index-dir on
> iso-images-repo-client-0
> [2017-06-28 11:53:05.001146] W [MSGID: 108034]
>
2014 Jun 16
1
SELinux issue?
I've recently built a new mail server with CentOS 6.5, and decided to bite the bullet and leave SELinux running. I've stumbled through making things work and am mostly there. I've got my own spam and ham corpus as mbox files in /home/user/Mail/learned. These files came from my backup of the CentOS 5 server this machine is replacing. The folder is owned by the user (the following is
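Files restored from a CentOS 5 backup typically come back without the SELinux labels the CentOS 6 policy expects, which is the usual cause of denials in a case like this. A minimal sketch of how one might check and relabel, using the paths from the message above:

    # show the SELinux context the restored files currently carry
    ls -Z /home/user/Mail/learned

    # confirm SELinux is actually denying access
    ausearch -m avc -ts recent

    # relabel the subtree to the default context for that path
    restorecon -Rv /home/user/Mail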
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> * put node in maintenance mode (ensure no client are active)
> * yum versionlock delete glusterfs*
> * service glusterd stop
> * yum update
> * systemctl daemon-reload
> * service glusterd start
> * yum versionlock add glusterfs*
> *
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith, I'm using this guide: https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
Definitely my fault, but I think it would be better to state somewhere that restarting the service alone is not enough, simply because with many other services it is sufficient. Now I'm restarting every brick process (and waiting for
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all, for the upgrade I followed this procedure:
* put the node in maintenance mode (ensure no clients are active)
* yum versionlock delete glusterfs*
* service glusterd stop
* yum update
* systemctl daemon-reload
* service glusterd start
* yum versionlock add glusterfs*
* gluster volume heal vm-images-repo full
* gluster volume heal vm-images-repo info
on each server, every time
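As the rest of this thread concludes, restarting glusterd alone leaves the old brick and self-heal daemon processes running, so they have to be stopped too and respawned from the upgraded packages. A per-node shell sketch of the procedure above with that extra step (volume name taken from the message; adjust to your setup, and note that killall glusterfs also stops fuse client mounts on the node, which should be fine in maintenance mode):

    yum versionlock delete 'glusterfs*'
    service glusterd stop
    # stop the brick and self-heal daemon processes that glusterd leaves running
    killall glusterfsd glusterfs
    yum update
    systemctl daemon-reload
    service glusterd start      # respawns bricks and glustershd with the new binaries
    yum versionlock add 'glusterfs*'
    gluster volume heal vm-images-repo full
    gluster volume heal vm-images-repo info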
2009 Mar 05
1
hatvalues?
I am struggling a bit with this function 'hatvalues'. I would like a little more understanding rather than just taking the black box and using the values. I looked at the Fortran source and it is quite opaque to me, so I am asking for some help in understanding the theory. First, I take the simplest case of a single variable. For this I turn to John Fox's book, "Applied Regression Analysis
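For reference, the quantity hatvalues() returns comes straight from the hat matrix of the linear least-squares fit, and in the single-predictor case it reduces to a simple closed form. The standard formulas, in usual regression notation (not derived from the Fortran source):

    \hat{y} = H y, \qquad H = X (X^\top X)^{-1} X^\top, \qquad
    h_i = x_i^\top (X^\top X)^{-1} x_i

For simple regression with one predictor this reduces to

    h_i = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n} (x_j - \bar{x})^2}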
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo,

Which document did you follow for the upgrade? We can fix the documentation if there are any issues.

On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote:
> On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> - put node in maintenance mode (ensure no client are active)
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote:
> Hi Pranith,
>
> I'm using this guide
> https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
>
> Definitely my fault, but I think that is better to specify somewhere that
> restarting the service is not enough simply
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote:
>
>
> On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote:
>
> Hi Pranith,
>
> I'm using this guide
> https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2013 Feb 19
2
Errors after enable vnd.dovecot.duplicate
Hi, on dovecot 2.1.14 and pigeonhole-0.3.3, after creating a default.sieve like this:
require ["vnd.dovecot.duplicate", "fileinto", "mailbox"];
if duplicate {
  fileinto "Trash";
}
I have two strange errors; one is many occurrences of this error in lda.log:
Feb 19 14:43:17 lda(info at domain.com): Error:
2011 Aug 24
3
Catch22: user needs space to fix out of space condition
A mail user reported that he had filled up his INBOX (despite reminders that he was approaching his filesystem quota), and furthermore, he could not fix the situation because he couldn't expunge the messages he had marked for deletion. The dovecot logs revealed the cause:
dovecot: imap(user): Error: open(/var/mail/user.lock) failed: Disc quota exceeded
This created an impasse where a user cannot free
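One pragmatic way out of this impasse, assuming standard Linux filesystem quotas on /var/mail, is to raise the user's hard limit just long enough for the expunge to succeed and then put it back. A minimal sketch; user name and limit values are placeholders:

    # show current usage and limits
    quota -u user

    # temporarily raise the block soft/hard limits (the two zeros mean "no inode limit")
    setquota -u user 110000 120000 0 0 /var/mail

    # ...let the user expunge, then restore the original limits
    setquota -u user 100000 100000 0 0 /var/mail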
2012 Dec 04
1
dotlock error
I finally managed to control access to a public folder via file system permissions. I have 3 test users: 1. tom, 2. fmaster, 3. testmail. tom and fmaster are in a group called "news-own" and the testmail user is a read-only one. Here is my folder structure; I'll share dovecot -n output at the end of this email.
drwxrwxr-t 2 tom news-own 4.0K Dec 4 19:08 tmp
drwxrwxr-t 2 tom news-own 4.0K Dec
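Dotlock errors for the read-only user usually come from Dovecot trying to write index and lock files inside the shared maildir, which that user cannot (and should not) write to. One common arrangement, sketched here as an assumption rather than the poster's actual config, is a public namespace whose indexes live in each user's home directory:

    # dovecot.conf (sketch; names and paths are placeholders)
    mail_access_groups = news-own

    namespace news {
      type = public
      separator = /
      prefix = News/
      location = maildir:/var/mail/public:INDEX=~/Maildir/public-index
      subscriptions = no
    }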
2010 May 12
1
Problems with the fs-quota plugin on delivery stage
Hi All! Just noticed strange behavior of the FS quota plugin at the delivery stage. We use group FS quotas via NFS, and quota-tool says:
---
Disk quotas for group #5751796 (gid 5751796):
Filesystem    blocks  quota  limit  grace
nfse:/export    1276  10240  10240
---
I run:
---
cat ./test.eml | /usr/local/libexec/dovecot/deliver -e -n -d xxxx at xxxxx.xxxx
---
and get:
---
Quota
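For context, wiring the filesystem quota into the delivery agent only takes loading the quota plugin for the lda protocol and selecting the fs backend. A minimal sketch under that assumption; the exact backend arguments needed for group quotas over NFS may differ from this:

    # dovecot.conf (sketch)
    protocol lda {
      mail_plugins = quota
    }

    plugin {
      # the fs backend reads the OS-level quota; group quotas over NFS
      # may need additional backend arguments
      quota = fs:User quota
    }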
2008 Nov 24
3
Panic from 1.1.7
dovecot: Nov 24 12:49:06 Panic: IMAP(user): file ioloop-notify-kqueue.c: line 66 (event_callback): assertion failed: (io->refcount == 1)
dovecot: Nov 24 12:49:06 Error: IMAP(user): Raw backtrace: 2 imap 0x0000000100068e82 default_fatal_finish + 41 -> 3 imap 0x0000000100068eed i_syslog_fatal_handler + 0 -> 4 imap 0x0000000100068687 i_info + 0 -> 5 imap
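When only a raw backtrace like this is available, the individual addresses can usually be resolved to symbols and source lines with gdb, which makes the report much more useful. A minimal sketch, assuming the imap binary lives at /usr/local/libexec/dovecot/imap and still matches the version that crashed:

    gdb /usr/local/libexec/dovecot/imap
    (gdb) info symbol 0x0000000100068e82
    (gdb) info line *0x0000000100068e82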
2014 Sep 01
1
dovecot 2.2.13: LMTP delivery with multiple recipients incorrectly mixes users
Hi. I'm using exim, which delivers email over LMTP to dovecot 2.2.13. I noticed that the dovecot LMTP service sometimes (rare, but it repeats) mixes up users. Example below. There is one mail (msgid=<1ACE53B70631CA45B62348E4EE8757493731A59E at KRMXA41>) that is going to be delivered to multiple local recipients. Some recipients are delivered properly:
Sep 1 05:40:33 host dovecot:
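Until the root cause is found, one mitigation on the Exim side is to hand Dovecot only a single recipient per LMTP transaction. A sketch, assuming delivery goes through an smtp-driver transport speaking LMTP; the transport name, host and port are placeholders, not the poster's actual config:

    # Exim transport (sketch)
    dovecot_lmtp:
      driver = smtp
      protocol = lmtp
      port = 24
      hosts = 127.0.0.1
      max_rcpt = 1          # one recipient per LMTP transaction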
2008 Dec 15
2
1.1.7: quota problem, unable to delete mails when quota exceeded
Hi, when a user exceeds their quota, dovecot can't create its files and shows zero mails :( This also means that the user is unable to delete their own mails. Sounds like some kind of bug, right?
Dec 15 08:28:37 mbox1 dovecot: IMAP(xxx): open(/var/mail/xxx/dovecot-uidlist.lock) failed: Disk quota exceeded
Dec 15 08:28:37 mbox1 dovecot: IMAP(kdudus):
2017 Jul 29
2
Possible stale .glusterfs/indices/xattrop file?
Hi, sorry for mailing again, but as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seems to have gone fine, except for the fact that there is one single file which needs healing and does not get healed, as you can see here from the output of "heal info":
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries: 0
Brick
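A first check in a situation like this is to look at what is actually sitting in the brick's index directory and then ask the self-heal daemon to walk it again; if the gfid-named entry no longer corresponds to anything on the brick, it may simply be stale. A minimal sketch, using the brick path and volume name from the output above:

    # entries here are named after the gfid of files with pending changes
    ls -l /data/myvolume/brick/.glusterfs/indices/xattrop/

    # trigger an index heal and re-check
    gluster volume heal myvolume
    gluster volume heal myvolume info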