similar to: permission problems

Displaying 20 results from an estimated 700 matches similar to: "permission problems"

2008 Jun 26 · 2 · wrong permission
I'm still finding a number of control directories that have the wrong permission:
Jun 26 00:02:34 userimap13.xs4all.nl dovecot: IMAP(xxx): file_dotlock_create(/var/spool/mail/dovecot-control/g/gl/xxx/INBOX/.Apple Mail To Do/dovecot-uidlist) failed: Permission denied
userimap1# ls -al "/var/spool/mail/dovecot-control/g/gl/xxx/INBOX/.Apple Mail To Do"
total 8
drw------- 2 xxx user 4096
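The ls output shows the control directory with mode drw-------, i.e. the owner execute (search) bit is missing, which is enough to make dotlock creation inside it fail. A minimal sketch for locating and fixing such directories, assuming GNU find and the base path from the log line above (verify ownership before running anything like this):

# Find control directories whose owner execute bit is missing and restore it.
# Base path taken from the log line; this is a sketch, not the poster's fix.
find /var/spool/mail/dovecot-control -type d ! -perm -100 -exec chmod u+rwx {} \;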
2017 Jun 28 · 3 · afr-self-heald.c:479:afr_shd_index_sweep
Hi list, yesterday I noticed the following lines in the glustershd.log file:
[2017-06-28 11:53:05.000890] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-iso-images-repo-replicate-0: unable to get index-dir on iso-images-repo-client-0
[2017-06-28 11:53:05.001146] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-vm-images-repo-replicate-0: unable to get index-dir
2008 Apr 24 · 1 · 1.1 a major improvement
I recently switched one of our 30 IMAP servers to 1.1RC4 from 1.0.x, and the difference is huge. The load on the server dropped from 5+ to 0.3 or so, which means there is no more I/O waiting going on. That's on FreeBSD 6.2. Great work Timo! Cor
2017 Jun 28 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
On 06/28/2017 06:52 PM, Paolo Margara wrote:
> Hi list,
>
> yesterday I noted the following lines into the glustershd.log log file:
>
> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
> [afr-self-heald.c:479:afr_shd_index_sweep]
> 0-iso-images-repo-replicate-0: unable to get index-dir on
> iso-images-repo-client-0
> [2017-06-28 11:53:05.001146] W [MSGID: 108034]
>
2014 Jun 16 · 1 · SELinux issue?
I've recently built a new mail server with CentOS 6.5, and decided to bite the bullet and leave SELinux running. I've stumbled through making things work and am mostly there. I've got my own spam and ham corpus as mbox files in /home/user/Mail/learned. These files came from my backup of the CentOS 5 server this machine is replacing. The folder is owned by the user (the following is
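Since the mbox files were copied in from a backup of another machine, a common culprit under SELinux is that they kept a label the scanning processes are not allowed to read. A hedged check-and-reset sketch using the path from the post, not a diagnosis of this particular case:

# Show the current SELinux labels, then reset them to the policy defaults.
ls -Z /home/user/Mail/learned
restorecon -Rv /home/user/Mail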
2017 Jun 28 · 2 · afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote:
> On 06/28/2017 06:52 PM, Paolo Margara wrote:
>
>> Hi list,
>>
>> yesterday I noted the following lines into the glustershd.log log file:
>>
>> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
>> [afr-self-heald.c:479:afr_shd_index_sweep]
>>
2017 Jun 29 · 2 · afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> * put node in maintenance mode (ensure no client are active)
> * yum versionlock delete glusterfs*
> * service glusterd stop
> * yum update
> * systemctl daemon-reload
> * service glusterd start
> * yum versionlock add glusterfs*
> *
2017 Jun 29 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
Hi all, for the upgrade I followed this procedure:
* put node in maintenance mode (ensure no clients are active)
* yum versionlock delete glusterfs*
* service glusterd stop
* yum update
* systemctl daemon-reload
* service glusterd start
* yum versionlock add glusterfs*
* gluster volume heal vm-images-repo full
* gluster volume heal vm-images-repo info on each server every time
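The same per-node sequence, condensed into a shell sketch (volume name vm-images-repo taken from the post; note that the rest of the thread points out that restarting glusterd alone does not restart the brick processes):

# Per-node upgrade sequence as described in the post; run on one node at a
# time, with no active clients on that node. A sketch, not a recommendation.
yum versionlock delete 'glusterfs*'
service glusterd stop
yum update
systemctl daemon-reload
service glusterd start
yum versionlock add 'glusterfs*'
gluster volume heal vm-images-repo full
gluster volume heal vm-images-repo info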
2017 Jun 29 · 2 · afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith, I'm using this guide: https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md Definitely my fault, but I think it is better to specify somewhere that restarting the service is not enough, simply because in many other cases, with other services, it is sufficient. Now I'm restarting every brick process (and waiting for
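One common way to restart the brick processes on a node after an upgrade, offered as a sketch rather than what the poster necessarily ran (volume name assumed from the thread):

# List the bricks and their PIDs for this node, then restart them.
gluster volume status vm-images-repo
pkill glusterfsd                           # stops the local brick processes
gluster volume start vm-images-repo force  # respawns the bricks without touching data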
2009 Mar 05 · 1 · hatvalues?
I am struggling a bit with this function 'hatvalues'. I would like a little more understanding than taking the black box and using the values. I looked at the Fortran source and it is quite opaque to me. So I am asking for some help in understanding the theory. First, I take the simplest case of a single variant. For this I turn to John Fox's book, "Applied Regression Analysis
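For the single-predictor case the post asks about, the hat values have a simple closed form; this is the standard textbook result (the kind of derivation Fox gives), not something taken from the original thread:

$$ H = X(X^{\top}X)^{-1}X^{\top}, \qquad h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n} (x_j - \bar{x})^2} $$

hatvalues() on a fitted linear model returns exactly these diagonal entries h_ii, which lie between 1/n and 1 and sum to the number of estimated coefficients (2 in simple regression).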
2017 Jun 29 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
Paolo,
Which document did you follow for the upgrade? We can fix the documentation if there are any issues.
On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote:
> On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> - put node in maintenance mode (ensure no client are active)
2017 Jun 29 · 0 · afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote:
> Hi Pranith,
>
> I'm using this guide
> https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
>
> Definitely my fault, but I think that is better to specify somewhere that
> restarting the service is not enough simply
2017 Jun 29 · 1 · afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote:
>
>
> On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara
> <paolo.margara at polito.it> wrote:
>
> Hi Pranith,
>
> I'm using this guide
> https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Jul 29 · 2 · Possible stale .glusterfs/indices/xattrop file?
Hi, sorry for mailing again, but as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seems to have gone fine, except for the fact that there is one single file which needs healing and does not get healed, as you can see here from the output of a "heal info":
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries: 0
Brick
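A way to see what the self-heal daemon still has queued for a brick: the entries under .glusterfs/indices/xattrop are what "heal info" reports as pending. A sketch using the brick path and volume name that appear in this thread:

# Inspect the pending-heal index on the brick (gfid-named entries live here).
ls -la /data/myvolume/brick/.glusterfs/indices/xattrop/
# Kick off a heal and re-check what is still pending.
gluster volume heal myvolume
gluster volume heal myvolume info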
2011 Jan 26 · 2 · Basic Permissions Questions
Hi List :) So, I have a folder1; its owner is user1, who has r+w on the folder. user2 is the group owner, which only has read access (when I say user2, I mean the group called user2, because when you make a new user the OS can make them their own group). You can see these permissions below:
[user2 at host test]$ ls -l
drw-r----- 3 user1 user2 28 Nov 2 16:17 folder1
However user2 can not
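The usual answer to this kind of question is that read and execute mean different things on a directory: read lets you list the entry names, while the execute (search) bit is what lets you enter the directory and access anything inside it. In the listing above neither the owner nor the group has the execute bit, so neither can actually use the directory. A minimal sketch of the fix, using the names from the post:

# Grant the execute (search) bit so the directory can be entered.
chmod u+x,g+x folder1    # drw-r----- becomes drwxr-x---
ls -ld folder1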
2010 Jul 20 · 2 · directory permissions set to 600?
Hello all, today I ran across a directory in /etc/ on one of our servers whose permissions were set to 600 (drw-------) with root being the owner. The directory is for the firewall package for the server, so it is not something malicious. Checking some other systems, they also have this directory and the permissions on those servers are also 600, so it isn't just messed-up permissions on
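A quick way to compare this across machines is to list any directories directly under /etc whose mode is exactly 600; this is a generic check, not something from the original post:

# List directories directly under /etc with mode exactly 600.
find /etc -maxdepth 1 -type d -perm 600 -exec ls -ld {} \;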
2017 Jul 30 · 0 · Possible stale .glusterfs/indices/xattrop file?
On 07/29/2017 04:36 PM, mabi wrote:
> Hi,
>
> Sorry for mailing again but as mentioned in my previous mail, I have
> added an arbiter node to my replica 2 volume and it seem to have gone
> fine except for the fact that there is one single file which needs
> healing and does not get healed as you can see here from the output of
> a "heal info":
>
> Brick
2015 Jul 14 · 2 · ssh failed only with nfs home directory
Hey all, having a weird ssh issue I'd like some opinions on. If I have my home directory mounted on the NFS server itself, I get permission denied when I try to ssh into it. The correct permissions and ownership are on the home directory, the ssh directory and the authorized_users file. Here's what a verbose ssh session looks like:
# ssh -v bluethundr at nfs1.example.com
OpenSSH_6.2p2,
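Two common culprits when sshd cannot read an NFS-mounted home directory, offered as a hedged checklist rather than a diagnosis of this particular case: an SELinux boolean that gates access to NFS homes, and root_squash on the export preventing sshd (running as root) from reading the user's key file:

# 1) SELinux: allow access to home directories served over NFS.
getsebool use_nfs_home_dirs
setsebool -P use_nfs_home_dirs on
# 2) NFS root_squash: check the export options on the server.
exportfs -v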
2017 Jul 30 · 2 · Possible stale .glusterfs/indices/xattrop file?
Hi Ravi, thanks for your hints. Below you will find the answers to your questions. First I tried to start the healing process by running:
gluster volume heal myvolume
and then, as you suggested, watched the output of the glustershd.log file, but nothing appeared in that log file after running the above command. I checked the files which need healing using the "heal <volume> info"
2009 Jan 30 · 3 · Shared subscription, acl-list and uidvalidity(s)
Hello, I'm running dovecot-1.1.8/Maildir/ACL plugin. I successfully set up a Maildir shared between users of the unix group 'doveshared' via a public namespace, unix permissions and ACL files. The location of my public namespace is /path/to/public. I tried 2 sub-setups:
First setup
----------
drwxrws--- 4 root doveshared 4096 Jan 30 13:39 public
-rw-r----- 1 root doveshared
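The drwxrws--- listing corresponds to a group-owned directory with the setgid bit, so everything created underneath keeps the doveshared group. A sketch of how that permission layout is typically set up, using the group and path from the post (not necessarily the poster's exact commands):

# Group-owned, setgid public mail root; new subdirectories and files
# inherit the doveshared group. Mode 2770 renders as drwxrws---.
chgrp -R doveshared /path/to/public
chmod 2770 /path/to/public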