> On 21 Sep 2015, at 12:44, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 09/21/2015 03:48 PM, Davy Croonen wrote:
>> Hmmm, strange, I went through all my bricks and got the same result every time:
>>
>> -bash: cd: /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4: No such file or directory
>>
>> The directory /mnt/public/brick1/.glusterfs/31/38 does exist, and indeed there's a symlink in there, but not one with the referenced gfid.
>
> That is strange. Does the same string ("<gfid:3138d605-25ec-4aa9-9069-5db2e4202db4>/Doc1_LOUJA.htm - Is in split-brain.") appear on both bricks of the replica (assuming replica 2) when you run the heal info command?
>
We do indeed have a replica 2 and yes, exactly the same gfid string shows up on both bricks.
>> Any further suggestions? If I can get rid of the message, it's ok.
> You could stat all the files on the mount (find /<mount-point> | xargs stat > /dev/null) and then run the heal info command again and see if you now get the absolute path instead of the gfid string.
>
I ran the stat command against every file on the volume, but after that nothing had changed when running gluster volume heal public info.
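
For completeness, the suggested sequence looks roughly like this (/<mount-point> is the FUSE mount of the volume; the -print0/-0 pair is only there to cope with spaces in file names):

  find /<mount-point> -print0 | xargs -0 stat > /dev/null
  gluster volume heal public info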
>>
>> Thanks in advance.
>>
>> Kind regards
>> Davy
>>
>>> On 21 Sep 2015, at 11:59, Ravishankar N <ravishankar at redhat.com> wrote:
>>>
>>>
>>>
>>> On 09/21/2015 03:09 PM, Davy Croonen wrote:
>>>> Ravi
>>>>
>>>> Thanks for your quick reply.
>>>>
>>>> I didn't solve the split-brain on the file because I don't know which directory this gfid refers to (due to the implementation of our application we have multiple directories containing the same files).
>>> <gfid:3138d605-25ec-4aa9-9069-5db2e4202db4>/Doc1_LOUJA.htm - Is in split-brain.
>>>
>>> Directories are symlinked inside the .glusterfs folder on the bricks. You can `cd /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4`, get to the file 'Doc1_LOUJA.htm', and then resolve the split-brain.
>>>
>>> HTH,
>>> Ravi
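
(Side note: on a brick where that symlink does exist, following it and printing the physical path is a quick way to map the gfid back to a real directory. A sketch, using the paths from above:

  cd /mnt/public/brick1/.glusterfs/31/38/3138d605-25ec-4aa9-9069-5db2e4202db4
  pwd -P    # resolves the symlink chain to the real directory path on the brick

From there Doc1_LOUJA.htm can be inspected on each brick.)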
>>>> Running the command "getfattr -m . -d -e hex <directory>" against every directory that could be the one didn't show the mentioned gfid. But there was one result that didn't fit the picture, see the results below:
>>>>
>>>> root@gfs01a-dcg:/mnt/public/brick1/provil/Timetable# getfattr -m . -d -e hex h9cnBmEx6j26sgidVKLaZhAqh
>>>> # file: h9cnBmEx6j26sgidVKLaZhAqh
>>>> trusted.afr.dirty=0x000000000000000000000000
>>>> trusted.afr.public-client-0=0x000000000000000000000000
>>>> trusted.afr.public-client-1=0x000000000000000000000000
>>>> trusted.gfid=0xb560c2463f5b476fa95d318f6bb91356
>>>> trusted.glusterfs.dht=0x00000001000000007ffe51f8ffffffff
>>>>
>>>> root@gfs01a-dcg:/mnt/public/brick1/provil/Timetable# getfattr -m . -d -e hex h9cnBmEx6j26sgidVKLaZhAqh1442579382189160
>>>> # file: h9cnBmEx6j26sgidVKLaZhAqh1442579382189160
>>>> trusted.afr.dirty=0x000000000000000000000000
>>>> trusted.afr.public-client-0=0x000000000000000000000000
>>>> trusted.afr.public-client-1=0x000000000000000000000000
>>>> trusted.gfid=0x0367b477d5ea4b62b4da174b33622457
>>>>
>>>> On the other hand, there is also the possibility that the directory doesn't even exist anymore, because some cleanup scripts run every night.
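
(A brick-wide check is also possible: searching every directory on the brick for the trusted.gfid in question shows whether it still exists at all. A sketch, with the brick root taken from the prompt above and the .glusterfs tree excluded:

  find /mnt/public/brick1 -path '*/.glusterfs' -prune -o -type d -print0 \
      | xargs -0 getfattr -n trusted.gfid -e hex 2>/dev/null \
      | grep -B1 0x3138d60525ec4aa990695db2e4202db4

No output would mean no directory on that brick carries the gfid anymore.)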
>>>>
>>>> By the way we are running gluster version 3.6.4.
>>>>
>>>> Davy
>>>>
>>>>> On 21 Sep 2015, at 11:16, Ravishankar N <ravishankar at redhat.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 09/21/2015 02:32 PM, Davy Croonen wrote:
>>>>>> Hi all
>>>>>>
>>>>>> For a reason unknown at the moment, the command "gluster volume heal public info" shows a lot of the following entries:
>>>>>>
>>>>>> <gfid:3138d605-25ec-4aa9-9069-5db2e4202db4>/Doc1_LOUJA.htm - Is in split-brain.
>>>>>>
>>>>>> The part after the / differs but the gfid is always the same; I suppose this gfid refers to a directory.
>>>>> Correct, it is the parent directory of the file in question.
>>>>>> Now, considering the data, this isn't an issue; the files can be deleted or are already deleted, but is there a way to clear these split-brain entries?
>>>>> Did you actually resolve the split-brain on the file? If you do that, then the entry should disappear from the output of the heal info command.
>>>>>> The command for restarting the self-heal daemon, "gluster v start public force", didn't solve the problem. Any other suggestions?
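
(For reference, the usual sequence here is roughly: force-start the volume so the self-heal daemon comes back up, trigger a crawl, and then re-check. A sketch; note that files genuinely in split-brain still have to be resolved by hand before they drop off the list:

  gluster volume start public force   # respawns bricks/self-heal daemon if any are down
  gluster volume heal public full     # trigger a full crawl of the volume
  gluster volume heal public info     # re-check the pending entries
)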
>>>>> What version of gluster are you running?
>>>>>
>>>>> -Ravi
>>>>>> Thanks in advance.
>>>>>>
>>>>>>
>>>>>> Kind regards
>>>>>> Davy
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users at gluster.org
>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>