search for: oc_default_modul

Displaying 19 results from an estimated 19 matches for "oc_default_modul".

Did you mean: oc_default_module
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. NODE 1: File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey' Size: 0 Blocks: 38 IO Block: 131072 regular empty file Device: 23h/35d Inode: 744413 Links: 2 Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN) Access: 2018-05-15 08:54:20.296048887 +0200 Modify: 2018-05-15 08:54:20.296048887 +0200...
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
...that would like to use it as right now I have to do that manually. NODE 1 brick log: [2018-05-15 06:54:20.176679] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or directory] NODE 2 brick log: [2018-05-15 06:54:20.176415] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or dire...
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
...for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. > > NODE 1: > > File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey' > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > Device: 23h/35d Inode: 744413 Links: 2 > Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN) > Access: 2018-05-15 08:54:20.296048887 +0200 > Modify: 2018-05-15...
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
...use it as right now I have to do that manually. > > NODE 1 brick log: > > [2018-05-15 06:54:20.176679] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or directory] > > NODE 2 brick log: > > [2018-05-15 06:54:20.176415] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [...
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
...ht now I have to do that manually. >> >> NODE 1 brick log: >> >> [2018-05-15 06:54:20.176679] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or directory] >> >> NODE 2 brick log: >> >> [2018-05-15 06:54:20.176415] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DE...
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote: > Dear all, > > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. > > It looks like this bug is not resolved as I just got right now 3 unsynced files on my arbiter node
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all, I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. It looks like this bug is not resolved as I just got right now 3 unsynced files on my arbiter node like I used to do before upgrading. This problem started
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...he problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > > > NODE1: > > > > STAT: > > > > File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' > > > > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > > > > Device: 23h/35d Inode: 6822549 Links: 2 > > > > Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) > > > > Access: 2018-04-09 08:58:54.3115...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...k", do you mean the brick on the arbiter node (node 3 in my case)? Sorry I should have been clearer. Yes the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` After doing this for all files, run `gluster volume heal <volname>`. HTH...
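The setfattr-then-heal recipe quoted in this result can be sketched as a small dry-run script. The volume name and the file path below are placeholders (not the poster's real values), and DRY_RUN=1 only prints the commands, so this sketch is safe to run outside a Gluster cluster:

```shell
#!/bin/sh
# Sketch of the procedure above, with placeholder names.
# DRY_RUN=1 makes run() echo each command instead of executing it.
VOLNAME="myvol-private"
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# For each file reported by "gluster volume heal $VOLNAME info", remove
# the stale AFR changelog xattrs from the copy on the arbiter brick.
for f in "/data/$VOLNAME/brick/path/to/problematicfile"; do
    run setfattr -x "trusted.afr.$VOLNAME-client-0" "$f"
    run setfattr -x "trusted.afr.$VOLNAME-client-1" "$f"
done

# Then ask Gluster to re-run self-heal for the whole volume.
run gluster volume heal "$VOLNAME"
```

With DRY_RUN unset the same loop would execute the commands; run it on the brick of the arbiter node only, as the thread advises.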
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...the brick on the arbiter node (node 3 in my case)? > > Sorry I should have been clearer. Yes the brick on the 3rd node. > > `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` > > `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` > > After doing this for all files, run `gluster volume heal <...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...this mailing list and now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38 IO Block: 131072 regular empty file Device: 23h/35d Inode: 6822549 Links: 2 Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) Access: 2018-04-09 08:58:54.311556621 +0200 Modify: 2018-04-09 08:58:54.311556621 +0200 Ch...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > Device: 23h/35d Inode: 6822549 Links: 2 > Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) > Access: 2018-04-09 08:58:54.311556621 +0200 > Modify: 2018-04-09 0...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...brick on the arbiter node (node 3 in my case)? >> Sorry I should have been clearer. Yes the brick on the 3rd node. >> >> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` >> >> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` >> >> After doing this for all files, run 'gluster v...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09 06:58:54.178133] E [MSGID: 113015] [posix.c:1208:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfile.zip/OC_DEFAULT_MODULE [No such file or directory] Hope that helps to find out the issue. ------- Original Message ------- On April 9, 2018 9:37 AM, mabi <mabi at protonmail.ch> wrote: > > Hello, > > Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12...
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
...ick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > > node1: > 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE > > node2: > 8394638 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE > > > > arbiternode: > find: ...
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
...ind /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE node2: 8394638 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE arbiternode: find: '/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed which are not being healed automatically. All nodes were always online and there
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly summarize my current situation: > > on node2 I have found the following xattrop/indices file which > matches the GFID of the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following xattrop/indices file which matches the GFID of the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
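The inode-matching step this thread describes (confirming which brick paths share the index entry's inode) can be reproduced with `find -samefile`, as the excerpts above show. A minimal sketch in a throwaway directory, with made-up file names standing in for the real brick layout; note Gluster hard-links regular files under .glusterfs (directories get symlinks instead):

```shell
#!/bin/sh
# Sketch: list every hard link sharing an inode with a .glusterfs entry,
# using a temporary directory instead of a real brick. Names are invented.
set -e
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs/29/e0"
echo data > "$BRICK/somefile.shareKey"
# Simulate the GFID hard link Gluster keeps for a regular file.
ln "$BRICK/somefile.shareKey" "$BRICK/.glusterfs/29/e0/29e0d13e-fake-gfid"

# "ls -lai" shows the shared inode number and a link count of 2 ...
ls -lai "$BRICK/.glusterfs/29/e0/29e0d13e-fake-gfid"

# ... and find -samefile prints every path with that inode, which is how
# you check whether an index entry still points at a live brick file.
find "$BRICK" -samefile "$BRICK/.glusterfs/29/e0/29e0d13e-fake-gfid"
```

On a real brick the find would be rooted at the brick directory, exactly as in the `find /data/myvolume/brick -samefile ...` command quoted above.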