Displaying 7 results from an estimated 7 matches for "cd5c".
2018 May 15 · 2 · New 3.12.7 possible split-brain on replica 3
...6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey
trusted.gfid=0x3b6c722cd6c64a4180fa028809671d63
trusted.gfid2path.9cb852a48fe5e361=0x38666131356462642d636435632d343930302d623838392d3066653766636534366131332f6e6361646d696e6973747261746f722e73686172654b6579
trusted.glusterfs.quota.8fa15dbd-cd5c-4900-b889-0fe7fce46a13.contri.1=0x00000000000000000000000000000001
trusted.pgfid.8fa15dbd-cd5c-4900-b889-0fe7fce46a13=0x00000001
# file: data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/
trusted.gfid=0x8fa15dbdcd5c4900b8890fe7fce46a13
trusted.gluster...
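For anyone decoding these xattr dumps: trusted.gfid is the file's GFID (a UUID) stored as 16 raw bytes, and trusted.gfid2path.<hash> is an ASCII-hex encoding of "<parent-gfid>/<basename>". A minimal Python sketch, using only the values shown in the excerpt above:

    import uuid

    # trusted.gfid: the file's GFID, 16 raw bytes forming a UUID
    print(uuid.UUID("3b6c722cd6c64a4180fa028809671d63"))
    # -> 3b6c722c-d6c6-4a41-80fa-028809671d63

    # trusted.gfid2path.<hash>: hex-encoded "<parent-gfid>/<basename>"
    g2p = ("38666131356462642d636435632d343930302d623838392d"
           "3066653766636534366131332f"
           "6e6361646d696e6973747261746f722e73686172654b6579")
    print(bytes.fromhex(g2p).decode("ascii"))
    # -> 8fa15dbd-cd5c-4900-b889-0fe7fce46a13/ncadministrator.shareKey

Note that the parent GFID decoded here matches the trusted.gfid of the OC_DEFAULT_MODULE directory in the same dump, and the quota.contri and pgfid keys carry the same UUID, which is why the search term "cd5c" matches all of these lines.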
2018 May 17 · 0 · New 3.12.7 possible split-brain on replica 3
...9/dir10/OC_DEFAULT_MODULE/filename.shareKey
> trusted.gfid=0x3b6c722cd6c64a4180fa028809671d63
> trusted.gfid2path.9cb852a48fe5e361=0x38666131356462642d636435632d343930302d623838392d3066653766636534366131332f6e6361646d696e6973747261746f722e73686172654b6579
> trusted.glusterfs.quota.8fa15dbd-cd5c-4900-b889-0fe7fce46a13.contri.1=0x00000000000000000000000000000001
> trusted.pgfid.8fa15dbd-cd5c-4900-b889-0fe7fce46a13=0x00000001
>
> # file: data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/
> trusted.gfid=0x8fa15dbdcd5c4900b8890fe7fce46...
2018 May 17 · 2 · New 3.12.7 possible split-brain on replica 3
...glusterfs/myvol-private/brick/.glusterfs/f0/65/f065a5e7-ac06-445f-add0-83acf8ce4155, removing it. [Stale file handle]
[2018-05-15 06:54:20.056196] W [MSGID: 113103] [posix.c:285:posix_lookup] 0-myvol-private-posix: Found stale gfid handle /srv/glusterfs/myvol-private/brick/.glusterfs/8f/a1/8fa15dbd-cd5c-4900-b889-0fe7fce46a13, removing it. [Stale file handle]
[2018-05-15 06:54:20.172823] I [MSGID: 115056] [server-rpc-fops.c:485:server_rmdir_cbk] 0-myvol-private-server: 14740125: RMDIR /cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE (f065a5e7-ac06-445f-add0-83...
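The .glusterfs handle path in that stale-handle warning is derived directly from the GFID: the first two hex characters and the next two become nested directory levels under .glusterfs. A small sketch reproducing the path from the log line above (the brick root is taken from the log itself):

    import os

    def gfid_handle_path(brick_root, gfid):
        # .glusterfs/<gfid[0:2]>/<gfid[2:4]>/<gfid>
        return os.path.join(brick_root, ".glusterfs", gfid[:2], gfid[2:4], gfid)

    print(gfid_handle_path("/srv/glusterfs/myvol-private/brick",
                           "8fa15dbd-cd5c-4900-b889-0fe7fce46a13"))
    # -> /srv/glusterfs/myvol-private/brick/.glusterfs/8f/a1/8fa15dbd-cd5c-4900-b889-0fe7fce46a13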
2018 May 23 · 0 · New 3.12.7 possible split-brain on replica 3
...yvol-private/brick/.glusterfs/f0/65/f065a5e7-ac06-445f-add0-83acf8ce4155, removing it. [Stale file handle]
>
> [2018-05-15 06:54:20.056196] W [MSGID: 113103] [posix.c:285:posix_lookup] 0-myvol-private-posix: Found stale gfid handle /srv/glusterfs/myvol-private/brick/.glusterfs/8f/a1/8fa15dbd-cd5c-4900-b889-0fe7fce46a13, removing it. [Stale file handle]
>
> [2018-05-15 06:54:20.172823] I [MSGID: 115056] [server-rpc-fops.c:485:server_rmdir_cbk] 0-myvol-private-server: 14740125: RMDIR /cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE (f065a5e7-ac06-4...
2018 May 23 · 1 · New 3.12.7 possible split-brain on replica 3
...ivate/brick/.glusterfs/f0/65/f065a5e7-ac06-445f-add0-83acf8ce4155, removing it. [Stale file handle]
>>
>> [2018-05-15 06:54:20.056196] W [MSGID: 113103] [posix.c:285:posix_lookup] 0-myvol-private-posix: Found stale gfid handle /srv/glusterfs/myvol-private/brick/.glusterfs/8f/a1/8fa15dbd-cd5c-4900-b889-0fe7fce46a13, removing it. [Stale file handle]
>>
>> [2018-05-15 06:54:20.172823] I [MSGID: 115056] [server-rpc-fops.c:485:server_rmdir_cbk] 0-myvol-private-server: 14740125: RMDIR /cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE (f065a5e7...
2018 May 15 · 0 · New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote:
> Dear all,
>
> Last Friday I upgraded my replica 3 GlusterFS cluster (and clients) from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I still have exactly the same problem as initially posted in this thread.
>
> It looks like this bug is not resolved, as I just got 3 unsynced files on my arbiter node
2018 May 15 · 2 · New 3.12.7 possible split-brain on replica 3
Dear all,
Last Friday I upgraded my replica 3 GlusterFS cluster (and clients) from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I still have exactly the same problem as initially posted in this thread.
It looks like this bug is not resolved, as I just got 3 unsynced files on my arbiter node, just as before the upgrade. This problem started