Displaying 20 results from an estimated 5000 matches similar to: "set owner:group on root of volume"
2017 Jul 11
0
set owner:group on root of volume
Just found out I needed to set following two parameters:
gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000
In case that helps any one else :)
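A quick way to double-check the result (just a sketch; the mount point /mnt/myvol is an assumption, not from the original mail):

# Confirm the options are set on the volume
gluster volume get myvol storage.owner-uid
gluster volume get myvol storage.owner-gid

# The root of the volume should now show 1000:1000 on the mount
stat -c '%u:%g' /mnt/myvol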
> -------- Original Message --------
> Subject: set owner:group on root of volume
> Local Time: July 11, 2017 8:15 PM
> UTC Time: July 11, 2017 6:15 PM
> From: mabi at protonmail.ch
> To:
2017 Jul 18
2
set owner:group on root of volume
Unfortunately the root directory of my volume still gets its owner and group reset to root. Can someone explain why, or help with this issue? I need it to be set to UID/GID 1000 and stay like that.
Thanks
> -------- Original Message --------
> Subject: Re: set owner:group on root of volume
> Local Time: July 11, 2017 9:33 PM
> UTC Time: July 11, 2017 7:33 PM
> From: mabi at
2017 Jul 23
2
set owner:group on root of volume
On 07/20/2017 03:13 PM, mabi wrote:
> Does anyone have an idea, or shall I open a bug for that?
This is an interesting problem. A few questions:
1. Is there any chance that one of your applications does a chown on the
root?
2. Do you notice any logs related to metadata self-heal on '/' in the
gluster logs?
3. Does the ownership of all bricks reset to custom uid/gid after every
restart
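A rough way to check points 2 and 3 (sketch only; the brick path and log location are assumptions, and the exact log wording may differ between versions):

# Ownership of the brick root on each node (assumed brick path)
stat -c '%u:%g' /data/myvol/brick

# Metadata self-heal messages in the self-heal daemon log;
# the root directory is the gfid ending in ...0001
grep -i 'metadata selfheal' /var/log/glusterfs/glustershd.log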
2017 Jul 20
0
set owner:group on root of volume
Does anyone have an idea, or shall I open a bug for that?
> -------- Original Message --------
> Subject: Re: set owner:group on root of volume
> Local Time: July 18, 2017 3:46 PM
> UTC Time: July 18, 2017 1:46 PM
> From: mabi at protonmail.ch
> To: Gluster Users <gluster-users at gluster.org>
> Unfortunately the root directory of my volume still gets its owner and group
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past on this mailing list, I now ran a stat and a getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
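For reference, the commands behind this output are roughly the following (a sketch; the file path below is a placeholder, not the real one from the mail):

# On each brick of the replica
stat /data/myvol-private/brick/<path-to-problematic-file>
getfattr -d -m . -e hex /data/myvol-private/brick/<path-to-problematic-file>

# And once more through the FUSE mount
stat /mnt/myvol-private/<path-to-problematic-file>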
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question, but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled the DEBUG client-log-level, ran a heal, and then attached the glustershd log files of all 3 nodes to this mail.
The volume concerned is called myvol-pro; the other 3 volumes have no problem so far.
Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name
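For anyone following along, the steps referred to above look roughly like this (a sketch; the log path is the usual default and may differ on your setup):

# Raise the client log level so more detail ends up in the logs
gluster volume set myvol-pro diagnostics.client-log-level DEBUG

# Trigger a heal and see what is still pending
gluster volume heal myvol-pro
gluster volume heal myvol-pro info

# Self-heal daemon log (one per node): /var/log/glusterfs/glustershd.log
# Remember to set the level back afterwards
gluster volume set myvol-pro diagnostics.client-log-level INFO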
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
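The check referred to here would be something along these lines (the volume name is assumed from the rest of the thread):

# List files pending heal, and any in actual split-brain
gluster volume heal myvol-private info
gluster volume heal myvol-private info split-brain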
2017 Aug 28
3
self-heal not working
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 28, 2017 5:58 AM
> UTC Time: August 28, 2017 3:58 AM
> From: ravishankar at redhat.com
>
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like "
got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
Anyway I reproduced it by manually setting the afr.dirty bit for a zero-byte
file on all 3 bricks. Since there are no afr pending xattrs
indicating good/bad copies and all files are zero bytes, the data
self-heal algorithm just picks the
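The reproduction step described here would look roughly like this on each of the 3 bricks (a sketch; the file path is a placeholder, and the value shown simply marks one pending data heal in the 12-byte dirty xattr):

# Run against the same zero-byte file on all 3 bricks
setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 \
    /data/brick/<path-to-zero-byte-file>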
2017 Aug 27
2
self-heal not working
----- Original Message -----
> From: "mabi" <mabi at protonmail.ch>
> To: "Ravishankar N" <ravishankar at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org>
> Sent: Sunday, August 27, 2017 3:15:33 PM
> Subject: Re: [Gluster-users] self-heal not working
>
>
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks again, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release, and as such I would not like to have to upgrade to 3.13.
------- Original Message -------
On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As suggested in the past on this mailing list, I now ran a stat and a getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
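For completeness, the general form would be something like the following (sketch only; the file path is a placeholder, and you would remove whichever trusted.afr.myvol-private-client-* xattrs getfattr actually shows on that brick):

# On the arbiter (3rd node) brick, for the affected file
getfattr -d -m trusted.afr -e hex /data/myvol-private/brick/<path-to-file>
setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/<path-to-file>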
2017 Aug 28
2
self-heal not working
Thank you for the command. I ran it on all my nodes and now finally the self-heal daemon does not report any files to be healed. Hopefully this scenario can be handled properly in newer versions of GlusterFS.
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 28, 2017 10:41 AM
> UTC Time: August 28, 2017 8:41 AM
>
2017 Aug 27
0
self-heal not working
Thanks Ravi for your analysis. So as far as I understand there is nothing to worry about, but my question now would be: how do I get rid of this file from the heal info?
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 27, 2017 3:45 PM
> UTC Time: August 27, 2017 1:45 PM
> From: ravishankar at redhat.com
> To: mabi <mabi at
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:29 PM, mabi wrote:
> Excuse me for my naive questions, but how do I reset the afr.dirty
> xattr on the file to be healed? And do I need to do that through a
> FUSE mount, or simply on every brick directly?
>
>
Directly on the bricks: `setfattr -n trusted.afr.dirty -v
0x000000000000000000000000
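The full command (a sketch; the file path is a placeholder) would be run directly on each brick, for example:

setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 \
    /data/brick/<path-to-file-to-be-healed>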
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:57 AM, Ben Turner wrote:
> ----- Original Message -----
>> From: "mabi" <mabi at protonmail.ch>
>> To: "Ravishankar N" <ravishankar at redhat.com>
>> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org>
>> Sent: Sunday, August 27, 2017 3:15:33 PM
>>
2017 Aug 25
0
self-heal not working
Hi Ravi,
Did you get a chance to have a look at the log files I have attached in my last mail?
Best,
Mabi
> -------- Original Message --------
> Subject: Re: [Gluster-users] self-heal not working
> Local Time: August 24, 2017 12:08 PM
> UTC Time: August 24, 2017 10:08 AM
> From: mabi at protonmail.ch
> To: Ravishankar N <ravishankar at redhat.com>
> Ben Turner