search for: azipfiledir

Displaying 8 results from an estimated 8 matches for "azipfiledir".

2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
..."this brick", do you mean the brick on the arbitrer node (node 3 in my case)? Sorry I should have been clearer. Yes the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` After doing this for all files, run 'gluster volume heal &...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...attr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > > > NODE1: > > > > STAT: > > > > File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' > > > > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > > > > Device: 23h/35d Inode: 6822549 Links: 2 > > > > Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) > > > > Access: 201...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...ot;, do you mean the brick on the arbiter node (node 3 in my case)? > > Sorry I should have been clearer. Yes the brick on the 3rd node. > > `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` > > `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` > > After doing this for all files, run 'gl...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...in the past by this mailing list I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38 IO Block: 131072 regular empty file Device: 23h/35d Inode: 6822549 Links: 2 Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) Access: 2018-04-09 08:58:54.311556621 +0200 Modify: 2018-04-09 08:58:...
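Per-node output like the above is typically gathered with stat and getfattr against each brick's copy of the file, plus one stat through the FUSE mount. A sketch, using placeholders for the paths that are truncated in the quote (the mount point below is an assumption):

    # On each brick node (node1, node2, node3/arbiter), against the brick's copy of the file:
    stat /data/myvol-private/brick/<path-to-problematic-file>
    getfattr -d -m . -e hex /data/myvol-private/brick/<path-to-problematic-file>
    # Once on a client, through the FUSE mount, to see the client-side view:
    stat /mnt/myvol-private/<path-to-problematic-file>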
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...do you mean the brick on the arbiter node (node 3 in my case)? >> Sorry I should have been clearer. Yes the brick on the 3rd node. >> >> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` >> >> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` >> >> After doing this for all file...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...ling list I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > Device: 23h/35d Inode: 6822549 Links: 2 > Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) > Access: 2018-04-09 08:58:54.311556621 +0200 >...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
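The heal status check mentioned above can be sketched as follows, assuming the affected volume is the myvol-private volume named in the other messages of this thread:

    # List entries still pending heal on each brick
    gluster volume heal myvol-private info
    # List only entries that the cluster considers to be in split-brain, if any
    gluster volume heal myvol-private info split-brain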