Xavi,
The CV_MAGNETIC directory on a single brick has 155683 entries. There
are altogether 60 bricks in the volume. I could provide the output if
you still need it.
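
For reference, a count like the following can be taken on each node
(the brick root below is a placeholder for the actual brick path):

  BRICK=/path/to/brick   # placeholder: actual brick root on this node
  ls -1 "$BRICK/Folder_07.11.2016_23.02/CV_MAGNETIC" | wc -l
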
Thanks and Regards,
Ram
-----Original Message-----
From: Xavier Hernandez [mailto:xhernandez at datalab.es]
Sent: Monday, March 13, 2017 9:56 AM
To: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org);
gluster-users at gluster.org
Subject: Re: [Gluster-users] Disperse mkdir fails
Hi Ram,
On 13/03/17 14:13, Ankireddypalle Reddy wrote:
> [Attachment: data.txt (17.63 KB)]
>
> Xavier,
> Please find attached the required info from all the six
> nodes of the cluster.
I asked for the contents of the CV_MAGNETIC directory because that is
the damaged directory, not its parent. In any case, we can see that the
number of hard links of the directory differs on each brick, which means
that the number of subdirectories is different on each brick. A small
difference could be explained by ongoing activity on the volume while
the data was being captured, but these differences are too big for that.
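
For example, the link count can be compared directly on the bricks with
something like this (the brick root is a placeholder for the actual
brick path):

  stat -c '%h %n' /path/to/brick/Folder_07.11.2016_23.02/CV_MAGNETIC

On typical Linux filesystems a directory's link count is 2 plus the
number of its subdirectories, so differing counts imply differing sets
of subdirectories.
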
> We need to find out:
> 1) How this problem can be avoided in the future.
> 2) How we fix the current state of the cluster.
>
> Thanks and Regards,
> Ram
> -----Original Message-----
> From: Xavier Hernandez [mailto:xhernandez at datalab.es]
> Sent: Friday, March 10, 2017 3:34 AM
> To: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org);
> gluster-users at gluster.org
> Subject: Re: [Gluster-users] Disperse mkdir fails
>
> Hi Ram,
>
> On 09/03/17 20:15, Ankireddypalle Reddy wrote:
>> Xavi,
>> Thanks for checking this.
>> 1) mkdir returns errno 5 (EIO).
>> 2) The specified directory is the parent directory under which all
>> the data in the gluster volume will be stored. Currently around
>> 160 TB of the 262 TB is consumed.
>
> I only need the first-level entries of that directory, not the entire
> tree of entries. This should be on the order of thousands, right?
>
> We need to make sure that all bricks have the same entries in this
> directory. Otherwise we would need to check other things.
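>
> For example, something along these lines on each brick (the brick root
> is a placeholder) would produce listings that can then be compared
> with diff:
>
>   ls -1 /path/to/brick/Folder_07.11.2016_23.02/CV_MAGNETIC \
>       | sort > /tmp/cv_magnetic.$(hostname).txt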
>
>> 3) It is extremely difficult to list the exact sequence of FOPs that
>> would have been issued to the directory. The storage is heavily used
>> and a lot of subdirectories are present inside this directory.
>>
>> Are you looking for the extended attributes of this directory from
>> all the bricks in the volume? There are about 60 bricks.
>
> If possible, yes.
>
> However, if there are a lot of modifications on that directory while
> you are getting the xattrs, you may see values that look inconsistent
> even though the bricks are not really out of sync.
>
> If possible, you should get that information with all activity to that
> directory paused.
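>
> For example (run as root on each brick; the brick root is a
> placeholder for the actual brick path):
>
>   getfattr -d -m . -e hex \
>       /path/to/brick/Folder_07.11.2016_23.02/CV_MAGNETIC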
>
> Xavi
>
>>
>> Thanks and Regards,
>> Ram
>>
>> -----Original Message-----
>> From: Xavier Hernandez [mailto:xhernandez at datalab.es]
>> Sent: Thursday, March 09, 2017 11:15 AM
>> To: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org);
>> gluster-users at gluster.org
>> Subject: Re: [Gluster-users] Disperse mkdir fails
>>
>> Hi Ram,
>>
>> On 09/03/17 16:52, Ankireddypalle Reddy wrote:
>>> [Attachment: info.txt (3.35 KB)]
>>>
>>> Hi,
>>>
>>> I have a disperse gluster volume with 6 servers and 262 TB of
>>> usable capacity. The Gluster version is 3.7.19.
>>>
>>> Nodes glusterfs1, glusterfs2 and glusterfs3 were initially used for
>>> creating the volume. Nodes glusterfs4, glusterfs5 and glusterfs6
>>> were later added to the volume.
>>>
>>>
>>>
>>> Directory creation failed on a directory called
>>> /ws/glus/Folder_07.11.2016_23.02/CV_MAGNETIC.
>>>
>>> # file: ws/glus/Folder_07.11.2016_23.02/CV_MAGNETIC
>>>
>>>
>>> glusterfs.gfid.string="e8e51015-616f-4f04-b9d2-92f46eb5cfc7"
>>>
>>>
>>>
>>> The gluster mount log contains a lot of the following errors:
>>>
>>> [2017-03-09 15:32:36.773937] W [MSGID: 122056]
>>> [ec-combine.c:875:ec_combine_check] 0-StoragePool-disperse-7:
>>> Mismatching xdata in answers of 'LOOKUP' for
>>> e8e51015-616f-4f04-b9d2-92f46eb5cfc7
>>>
>>>
>>>
>>> The directory seems to be out of sync between nodes glusterfs1,
>>> glusterfs2 and glusterfs3. Each has a different version:
>>>
>>>
>>>
>>> trusted.ec.version=0x00000000000839f00000000000083a4d
>>>
>>> trusted.ec.version=0x0000000000082ea40000000000083a4b
>>>
>>> trusted.ec.version=0x0000000000083a760000000000083a7b
>>>
>>>
>>>
>>> Self-heal does not seem to be healing this directory.
>>>
>>
>> This is very similar to what happened the other time. Once more than
>> one brick is damaged, self-heal cannot do anything to heal it on a
>> 2+1 configuration.
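>>
>> For what it's worth, the pending-heal state can be checked with the
>> standard heal command (assuming the volume name is StoragePool, as
>> the log messages suggest):
>>
>>   gluster volume heal StoragePool info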
>>
>> What error does the mkdir request return?
>>
>> Does the directory you are trying to create already exist on some
>> brick?
>>
>> Can you show all the remaining extended attributes of the directory?
>>
>> It would also be useful to have the directory contents on each brick
>> (an 'ls -l'). In this case, include the name of the directory you are
>> trying to create.
>>
>> Can you explain the detailed sequence of operations done on that
>> directory since the last time you successfully created a new
>> subdirectory, including any metadata change?
>>
>> Xavi
>>
>>>
>>>
>>> Thanks and Regards,
>>>
>>> Ram
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>