Displaying 12 results from an estimated 12 matches for "0saaaaaaaaaaaaaaaa".
2011 Aug 24
1
Input/output error
Hi, everyone.
It's nice to meet you.
My English is poor, so please bear with me.
I am writing because I'd like to update GlusterFS to 3.2.2-1, and I also want
to change from a gluster (FUSE) mount to an NFS mount.
I installed GlusterFS 3.2.1 one week ago, replicated across 2 servers.
OS:CentOS5.5 64bit
RPM:glusterfs-core-3.2.1-1
glusterfs-fuse-3.2.1-1
command
gluster volume create syncdata replica 2 transport tcp
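For reference, a complete create-and-mount sequence might look like the following sketch; the hostnames and brick paths are placeholders (the original command's brick arguments were truncated in the excerpt), and Gluster's built-in NFS server speaks NFSv3 only:

```shell
# Placeholder hostnames (server1/server2) and brick path; adjust to your setup.
gluster volume create syncdata replica 2 transport tcp \
    server1:/export/syncdata server2:/export/syncdata
gluster volume start syncdata

# NFS mount instead of the FUSE client; Gluster's NFS server is v3-only,
# so vers=3 (and commonly nolock) is needed on Linux clients.
mount -t nfs -o vers=3,nolock server1:/syncdata /mnt/syncdata
```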
2011 Aug 21
2
Fixing split brain
Hi
Consider the typical split-brain situation: reading from the file gets EIO,
and the logs say:
[2011-08-21 13:38:54.607590] W [afr-open.c:168:afr_open]
0-gfs-replicate-0: failed to open as split brain seen, returning EIO
[2011-08-21 13:38:54.607895] W [fuse-bridge.c:585:fuse_fd_cbk]
0-glusterfs-fuse: 1371456: OPEN()
/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub => -1
(Input/output
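For this 3.2-era release the `gluster volume heal` CLI did not yet exist, so diagnosing which replica holds the bad copy usually meant dumping the AFR changelog xattrs directly on each brick and comparing them; a sketch, with a hypothetical brick prefix:

```shell
# Run on each replica server; the brick prefix /export/brick is a placeholder.
# -d dumps all values, -m filters to the AFR changelog keys, -e hex makes
# the counters readable.
getfattr -d -m trusted.afr -e hex \
    /export/brick/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
```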
2018 Feb 20
0
Split brain
...ttr: hex: No such file or directory
getfattr: Removing leading '/' from absolute path names
# file: data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images
security.selinux="system_u:object_r:default_t:s0"
trusted.afr.VMData2-client-6=0sAAAAAQAAAAAAAAAG
trusted.afr.dirty=0sAAAAAAAAAAAAAAAA
trusted.gfid=0sK8ZFxmThRxeq7pYw7QTOCw==
trusted.glusterfs.dht=0sAAAAAQAAAABVVVVVqqqqqQ==
The only difference is the trusted.afr.dirty item, which appears in found2 but not in found3.
Any help would be appreciated.
Russell Wecker
IT Director
Southern Asia-Pacific Division
San Miguel II
Bypas...
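A side note on reading these values: the `0s` prefix is getfattr's base64 encoding, and a `trusted.afr.*` changelog packs three big-endian 32-bit counters (data, metadata, entry). The two values above decode with plain coreutils:

```shell
# trusted.afr.VMData2-client-6 from the listing above:
echo "AAAAAQAAAAAAAAAG" | base64 -d | od -An -tx1
# -> 00 00 00 01 00 00 00 00 00 00 00 06
#    i.e. 1 pending data op, 0 metadata, 6 entry ops blamed on client-6.

# trusted.afr.dirty is all zeroes -> no pending operations recorded there:
echo "AAAAAAAAAAAAAAAA" | base64 -d | od -An -tx1
```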
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...08:58:54.311556621 +0200
Change: 2018-04-09 08:58:54.423555611 +0200
Birth: -
GETFATTR:
trusted.gfid=0smMGdfAozTLS8v1d4jMb42w==
trusted.gfid2path.d40e834f9a258d9f="13880e8c-13da-442f-8180-fa40b6f5327c/problematicfile"
trusted.glusterfs.quota.13880e8c-13da-442f-8180-fa40b6f5327c.contri.1=0sAAAAAAAAAAAAAAAAAAAAAQ==
trusted.pgfid.13880e8c-13da-442f-8180-fa40b6f5327c=0sAAAAAQ==
NODE2:
STAT:
File: ‘/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile’
Size: 0 Blocks: 38 IO Block: 131072 reg...
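As an aside, `trusted.gfid` is a base64-encoded 16-byte UUID (again, `0s` marks base64), and each brick keeps a matching hard link under `.glusterfs/<byte0>/<byte1>/<uuid>`. Decoding the gfid above:

```shell
echo "mMGdfAozTLS8v1d4jMb42w==" | base64 -d | od -An -tx1
# -> 98 c1 9d 7c 0a 33 4c b4 bc bf 57 78 8c c6 f8 db
# i.e. gfid 98c19d7c-0a33-4cb4-bcbf-57788cc6f8db, so the hard link sits at
# <brick>/.glusterfs/98/c1/98c19d7c-0a33-4cb4-bcbf-57788cc6f8db
```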
2008 Dec 10
3
AFR healing problem after returning one node.
...-R -d -m ".*" /export/storage?/*
getfattr: Removing leading '/' from absolute path names
# file: export/storage1/brick
trusted.glusterfs.afr.entry-pending=0sAAAAAAAAAAA=
trusted.glusterfs.test="working\000"
# file: export/storage1/ns
trusted.glusterfs.afr.entry-pending=0sAAAAAAAAAAAAAAAA
trusted.glusterfs.test="working\000"
# file: export/storage1/ns/test
trusted.glusterfs.afr.entry-pending=0sAAAAAAAAAAEAAAAA
# file: export/storage2/brick
trusted.glusterfs.test="working\000"
Then n2 was brought back, and after a while I was able to cat the file:
n3:/storage/test#...
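The `0s` values here decode with base64 as well. In this early AFR generation the 12-byte `entry-pending` value plausibly holds one 32-bit big-endian counter per replica child; the on-disk layout changed in later releases, so treat this reading as tentative:

```shell
# entry-pending on export/storage1/ns/test:
echo "AAAAAAAAAAEAAAAA" | base64 -d | od -An -tx1
# -> 00 00 00 00 00 00 00 01 00 00 00 00
# A plausible reading: per-child counters child0=0, child1=1, child2=0,
# i.e. one pending entry op blamed on the child that was down (n2).
```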
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
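In this situation the usual first steps are the heal-status commands, assuming the volume is named `myvol-private` as the brick paths elsewhere in the thread suggest:

```shell
# List entries pending heal, and those in actual split-brain:
gluster volume heal myvol-private info
gluster volume heal myvol-private info split-brain

# If entries stay stuck, trigger a full self-heal sweep:
gluster volume heal myvol-private full
```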