Displaying 6 results from an estimated 6 matches for "leleu".
2017 Jul 07
2
I/O error for one folder within the mountpoint
...fid mismatch on snooper directory or the files
> under it for all 3 bricks. In any case the mount log or the
> glustershd.log of the 3 nodes for the gfids you listed below should
> give you some idea on why the files aren't healed.
> Thanks.
>
> On 07/07/2017 03:10 PM, Florian Leleu wrote:
>>
>> Hi Ravi,
>>
>> thanks for your answer, sure there you go:
>>
>> # gluster volume heal applicatif info
>> Brick ipvr7.xxx:/mnt/gluster-applicatif/brick
>> <gfid:e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed>
>> <gfid:f8030467-b7a3-47...
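The gfid entries listed by "heal info" above can usually be mapped back to names on a brick through the standard .glusterfs layout, where each gfid is stored under its first two byte pairs. This is only a sketch, assuming the brick path shown in the output; for a regular file the .glusterfs entry is a hardlink (so find -samefile locates the real name), while for a directory it is a symlink that ls -l resolves:

# GFID=e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed
# ls -l /mnt/gluster-applicatif/brick/.glusterfs/e3/b5/$GFID
# find /mnt/gluster-applicatif/brick -samefile \
    /mnt/gluster-applicatif/brick/.glusterfs/e3/b5/$GFID -not -path "*/.glusterfs/*"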
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 03:39 PM, Florian Leleu wrote:
>
> I guess you're right about the gfid, I got that:
>
> [2017-07-07 07:35:15.197003] W [MSGID: 108008]
> [afr-self-heal-name.c:354:afr_selfheal_name_gfid_mismatch_check]
> 0-applicatif-replicate-0: GFID mismatch for
> <gfid:3fa785b5-4242-4816-a452-97da1a5e45c6>...
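The log line above reports a GFID mismatch for the snooper directory, which would explain why entries never heal even though nothing is flagged as split-brain. One way to confirm it, sketched here with a placeholder path under the brick (the real parent path of snooper is not shown in the thread), is to compare the trusted.gfid xattr on all three bricks; the hex values should be identical on ipvr7, ipvr8 and ipvr9:

# getfattr -n trusted.gfid -e hex /mnt/gluster-applicatif/brick/<parent-path>/snooper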
2017 Jul 07
2
I/O error for one folder within the mountpoint
...Number of entries in split-brain: 0
Brick ipvr9.xxx:/mnt/gluster-applicatif/brick
Status: Connected
Number of entries in split-brain: 0
Doesn't it seem odd that the first command gives different output?
On 07/07/2017 at 11:31, Ravishankar N wrote:
> On 07/07/2017 01:23 PM, Florian Leleu wrote:
>>
>> Hello everyone,
>>
>> first time on the ML, so excuse me if I'm not following the rules
>> well; I'll improve if I get comments.
>>
>> We got one volume "applicatif" on three nodes (2 and 1 arbiter), each
>> following com...
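The two commands report different things, so the difference is not necessarily odd: "heal info" lists every entry with pending heals, while "info split-brain" only lists entries the cluster has positively flagged as split-brain; a gfid mismatch can leave entries stuck in the first list without ever appearing in the second. A quick way to compare the views, assuming the volume name from the thread:

# gluster volume heal applicatif info
# gluster volume heal applicatif info split-brain
# gluster volume heal applicatif statistics heal-count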
2017 Jul 07
0
I/O error for one folder within the mountpoint
...er? Check
if there is a gfid mismatch on snooper directory or the files under it
for all 3 bricks. In any case the mount log or the glustershd.log of the
3 nodes for the gfids you listed below should give you some idea on why
the files aren't healed.
Thanks.
On 07/07/2017 03:10 PM, Florian Leleu wrote:
>
> Hi Ravi,
>
> thanks for your answer, sure there you go:
>
> # gluster volume heal applicatif info
> Brick ipvr7.xxx:/mnt/gluster-applicatif/brick
> <gfid:e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed>
> <gfid:f8030467-b7a3-4744-a945-ff0b532e9401>
> <gf...
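As the reply above suggests, grepping the self-heal daemon log and the mount log on all three nodes for one of the stuck gfids is usually the quickest way to see why healing fails. A sketch, assuming the default log locations (the mount log name is derived from the mount point path):

# grep e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed /var/log/glusterfs/glustershd.log
# grep e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed /var/log/glusterfs/<mount-point>.log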
2017 Jul 07
2
I/O error for one folder within the mountpoint
...der "snooper" is fine.
I tried rebooting the servers and restarting gluster after killing every
process using it, but it's not working.
Has anyone experienced this before? Any help would be nice.
Thanks a lot !
--
Best regards,
<http://www.cognix-systems.com/>
Florian LELEU
Responsable Hosting, Cognix Systems
*Rennes* | Brest | Saint-Malo | Paris
florian.leleu at cognix-systems.com <mailto:florian.leleu at cognix-systems.com>
Tel.: 02 99 27 75 92
Facebook Cognix Systems <https://www.facebook.com/cognix.systems/>
Twitter Cognix Systems <https...
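If a full restart is attempted again, it is worth verifying afterwards that the brick processes and the self-heal daemon actually came back, and then re-triggering a heal. A sketch, assuming a systemd-based distribution:

# systemctl restart glusterd
# gluster volume status applicatif
# gluster volume heal applicatif
# gluster volume heal applicatif info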
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote:
>
> Hello everyone,
>
> first time on the ML, so excuse me if I'm not following the rules well;
> I'll improve if I get comments.
>
> We have one volume "applicatif" on three nodes (2 and 1 arbiter); each of the
> following commands was run on node ipvr8.xx...
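For context, a "three nodes (2 and 1 arbiter)" layout corresponds to a replica 3 arbiter 1 volume, which "gluster volume info" typically reports as "1 x (2 + 1) = 3" bricks. An illustrative creation command, with brick paths assumed from the thread rather than taken from the original message:

# gluster volume info applicatif
# gluster volume create applicatif replica 3 arbiter 1 \
    ipvr7.xxx:/mnt/gluster-applicatif/brick \
    ipvr8.xxx:/mnt/gluster-applicatif/brick \
    ipvr9.xxx:/mnt/gluster-applicatif/brick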