Thank you for the feedback, Amar!
This is the first time I'm hearing of the Kadalu project. I will keep an
eye on that as well as Gluster 10.
Cheers,
On Thu, Sep 9, 2021 at 10:30 AM Amar Tumballi <amar at kadalu.io> wrote:
>
>
> On Thu, Sep 9, 2021 at 11:13 AM Alan Orth <alan.orth at gmail.com> wrote:
>
>> For what it's worth, I disabled quotas and re-enabled them. This of
>> course caused the limits I had set to be erased and GlusterFS to re-run
>> the find/stat on all bricks. Once it finished I was able to add a limit
>> and it was finally accurate.
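>>
>> For anyone who runs into this later, the cycle was roughly as follows
>> (using "homes" as a placeholder volume name rather than my real values):
>>
>> # gluster volume quota homes disable
>> # gluster volume quota homes enable
>>
>> ...and then, once the find/stat crawl on the bricks had finished, I
>> re-added each limit with `gluster volume quota homes limit-usage <dir>
>> <size>`.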
>>
>> I'm still hoping someone can comment on this. Is this normal? What is
>> the status of quotas in GlusterFS 8.x? Is there any maintenance I should
>> be doing, etc.?
>>
>>
> Glad to hear that it worked for you. In general, the Quota feature is not
> actively maintained right now; it only receives critical or CVE bug fixes.
>
> We needed the quota feature for the kadalu project (https://kadalu.io or
> https://github.com/kadalu/kadalu), for which the crawling quota update
> turned out to be a major bottleneck. Hence we came up with a 'simple-quota'
> approach, based on ideas previously discussed around project quotas, etc.
> The feature is already developed and used in kadalu releases, but it is
> still awaiting review in glusterfs. Hopefully it will make it into
> glusterfs-10.
>
>
>> Thank you,
>>
>> On Thu, Sep 2, 2021 at 1:42 PM Alan Orth <alan.orth at gmail.com> wrote:
>>
>>> Yesterday I noticed that there were still several find / stat
>>> processes running on this node (there are two bricks for this volume
>>> on this host) after enabling quotas:
>>>
>>> # ps auxw | grep stat
>>> root  5875  0.0  0.0 112812  976 pts/0 S+ 13:28  0:00 grep --color=auto stat
>>> root 19846  3.9  0.0 121804 2624 ?     S  Aug31 52:49 /usr/bin/find . -exec /usr/bin/stat {} \ ;
>>> root 19856  3.1  0.0 121784 2536 ?     S  Aug31 42:02 /usr/bin/find . -exec /usr/bin/stat {} \ ;
>>>
>>> I waited for them to finish, but my quota information is still
>>> incorrect for the two directories I have enabled it on. I ran the
>>> quota_fsck.py script on two different hosts, and every time it runs it
>>> reports a different number of objects fixed:
>>>
>>> # ./quota_fsck.py --sub-dir aorth --fix-issues /mnt/homes-quota-fix /data/glusterfs/sdc/homes
>>>
>>> MARKING DIRTY: /data/glusterfs/sdc/homes/aorth
>>> stat on /mnt/homes-quota-fix/aorth
>>> Files verified : 9160
>>> Directories verified : 4670
>>> Objects Fixed : 2920
>>>
>>> MARKING DIRTY: /data/glusterfs/sdc/homes/aorth
>>> stat on /mnt/homes-quota-fix/aorth
>>> Files verified : 9160
>>> Directories verified : 4670
>>> Objects Fixed : 3487
>>>
>>> MARKING DIRTY: /data/glusterfs/sdc/homes/aorth
>>> stat on /mnt/homes-quota-fix/aorth
>>> Files verified : 9160
>>> Directories verified : 4670
>>> Objects Fixed : 3486
>>>
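>>> If it's relevant: my understanding is that the accounting updated by
>>> the crawl is kept in extended attributes on the brick directories
>>> (trusted.glusterfs.quota.size, trusted.glusterfs.quota.dirty, etc.;
>>> I'm not sure of the exact names on 8.x), and they can be dumped for
>>> comparison with something like:
>>>
>>> # getfattr --absolute-names -d -m trusted.glusterfs.quota -e hex \
>>>     /data/glusterfs/sdc/homes/aorth
>>>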
>>> What's going on here?
>>>
>>> Thank you for any help...
>>>
>>> On Wed, Sep 1, 2021 at 10:41 AM Alan Orth <alan.orth at gmail.com> wrote:
>>>
>>>> Dear list,
>>>>
>>>> I enabled quotas and set hard limits for several directories on a
>>>> distributed-replicate replica 2 volume yesterday. After setting the
>>>> limits I did a du, a find/stat, and a recursive ls on the paths on the
>>>> FUSE mount. The usage reported in `gluster volume quota <volume> list`
>>>> initially began to update, but now it's been about twelve hours and
>>>> the reported usage is still much lower than actual usage.
>>>>
>>>> GlusterFS version 8.5 on CentOS 7. How can I force the quotas to
>>>> update?
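>>>>
>>>> For completeness, quotas were enabled and the limits set with the
>>>> standard commands, roughly like this (the volume name, directory, and
>>>> size below are placeholders rather than my real values):
>>>>
>>>> # gluster volume quota homes enable
>>>> # gluster volume quota homes limit-usage /aorth 500GB
>>>> # gluster volume quota homes list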
>>>>
>>>> Thank you,
>>>>
>>>> --
>>>> Alan Orth
>>>> alan.orth at gmail.com
>>>> https://picturingjordan.com
>>>> https://englishbulgaria.net
>>>> https://mjanja.ch
>>>>
>>>
>>>
>>> --
>>> Alan Orth
>>> alan.orth at gmail.com
>>> https://picturingjordan.com
>>> https://englishbulgaria.net
>>> https://mjanja.ch
>>>
>>
>>
>> --
>> Alan Orth
>> alan.orth at gmail.com
>> https://picturingjordan.com
>> https://englishbulgaria.net
>> https://mjanja.ch
>>
>
>
> --
> https://kadalu.io
> Container Storage made easy!
>
>
--
Alan Orth
alan.orth at gmail.com
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch