Hi Strahil,

I have tried removing the quota for that specific directory and setting it
again, but it didn't work (maybe it needs a full quota disable and enable at
the volume level). I am currently testing the fix described by Hari with the
quota_fsck.py script
(https://medium.com/@harigowtham/glusterfs-quota-fix-accounting-840df33fcd3a)
and it is detecting a lot of size mismatches in files.

Thank you,
*João Baúto*
---------------

*Scientific Computing and Software Platform*
Champalimaud Research
Champalimaud Center for the Unknown
Av. Brasília, Doca de Pedrouços
1400-038 Lisbon, Portugal
fchampalimaud.org <https://www.fchampalimaud.org/>


Strahil Nikolov <hunter86_bg at yahoo.com> wrote on Friday, 14/08/2020 at 10:16:

> Hi João,
>
> Based on your output it seems that the quota size is different on the 2
> bricks.
>
> Have you tried to remove the quota and then recreate it? Maybe it will be
> the easiest way to fix it.
>
> Best Regards,
> Strahil Nikolov
>
>
> On 14 August 2020 at 4:35:14 GMT+03:00, "João Baúto"
> <joao.bauto at neuro.fchampalimaud.org> wrote:
> >Hi all,
> >
> >We have a 4-node distributed cluster with 2 bricks per node running
> >Gluster 7.7 + ZFS. We use directory quotas to limit the space used by our
> >members on each project. Two days ago we noticed inconsistent space usage
> >reported by Gluster in the quota list.
> >
> >A small snippet of gluster volume quota vol list:
> >
> >  Path        Hard-limit  Soft-limit       Used   Available  Soft-limit exceeded?  Hard-limit exceeded?
> >  /projectA        5.0TB  80%(4.0TB)      3.1TB       1.9TB           No                    No
> >  /projectB      100.0TB  80%(80.0TB) 16383.4PB     740.9TB           No                    No
> >  /projectC       70.0TB  80%(56.0TB)     50.0TB      20.0TB          No                    No
> >
> >The total space available in the cluster is 360TB, the quota for projectB
> >is 100TB and, as you can see, it is reporting 16383.4PB used and 740TB
> >available (already decreased from 750TB).
> >
> >There was an issue in Gluster 3.x related to wrong directory quota
> >accounting
> >(https://lists.gluster.org/pipermail/gluster-users/2016-February/025305.html
> >and
> >https://lists.gluster.org/pipermail/gluster-users/2018-November/035374.html)
> >but it is marked as solved (not sure if the solution still applies).
> >
> >*On projectB*
> ># getfattr -d -m . -e hex projectB
> ># file: projectB
> >trusted.gfid=0x3ca2bce0455945efa6662813ce20fc0c
> >trusted.glusterfs.9582685f-07fa-41fd-b9fc-ebab3a6989cf.xtime=0x5f35e69800098ed9
> >trusted.glusterfs.dht=0xe1a4060c000000003ffffffe5ffffffc
> >trusted.glusterfs.mdata=0x010000000000000000000000005f355c59000000000939079f000000005ce2aff90000000007fdacb0000000005ce2aff90000000007fdacb0
> >trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x0000ab0f227a860000000000478e33acffffffffffffc112
> >trusted.glusterfs.quota.dirty=0x3000
> >trusted.glusterfs.quota.limit-set.1=0x0000640000000000ffffffffffffffff
> >trusted.glusterfs.quota.size.1=0x0000ab0f227a860000000000478e33acffffffffffffc112
> >
> >*On projectA*
> ># getfattr -d -m . -e hex projectA
> ># file: projectA
> >trusted.gfid=0x05b09ded19354c0eb544d22d4659582e
> >trusted.glusterfs.9582685f-07fa-41fd-b9fc-ebab3a6989cf.xtime=0x5f1aeb9f00044c64
> >trusted.glusterfs.dht=0xe1a4060c000000001fffffff3ffffffd
> >trusted.glusterfs.mdata=0x010000000000000000000000005f1ac6a10000000018f30a4e000000005c338fab0000000017a3135a000000005b0694fb000000001584a21b
> >trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x0000067de3bbe20000000000000128610000000000033498
> >trusted.glusterfs.quota.dirty=0x3000
> >trusted.glusterfs.quota.limit-set.1=0x0000460000000000ffffffffffffffff
> >trusted.glusterfs.quota.size.1=0x0000067de3bbe20000000000000128610000000000033498
> >
> >Any idea on what's happening and how to fix it?
> >
> >Thanks!
> >*João Baúto*
> >---------------
> >
> >*Scientific Computing and Software Platform*
> >Champalimaud Research
> >Champalimaud Center for the Unknown
> >Av. Brasília, Doca de Pedrouços
> >1400-038 Lisbon, Portugal
> >fchampalimaud.org <https://www.fchampalimaud.org/>
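
For reference, what I am running from the blog post is roughly the
following; the volume name, brick and mount paths are placeholders for our
layout, and the option names are from my memory of the post, so check the
script's --help before trusting them:

# dry run on one brick: walks the brick and reports directories whose
# on-disk usage disagrees with the quota xattrs (contri/size)
python quota_fsck.py /data/brick1/vol

# fix pass: also needs a FUSE mount of the volume so the script can trigger
# a recalculation of the directories it flags
python quota_fsck.py --sub-dir projectB --fix-issues /mnt/vol /data/brick1/vol

My understanding is that this has to be repeated for every brick on every
node (8 bricks in our case).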
Hi João,

Most probably a quota disable/enable should help.

Have you checked all the bricks on the ZFS side? Your example compares
projectA vs projectB. What about the projectB directories on all bricks of
the volume?

If disable/enable doesn't help, I have an idea, but I have never tested it,
so I can't guarantee that it will help:
- create a new dir via FUSE;
- set the quota on that new dir as you would like to set it on projectB;
- use getfattr on the bricks to check whether everything is the same on all
  bricks.

If all are the same, you can use setfattr to copy the same values from the
new dir onto the projectB directory on each of the volume's bricks and clear
the dirty flag. When you stat that dir ('du' or 'stat' from FUSE should
work), the quota should get fixed. (A rough command sketch follows below the
quoted message.)

Best Regards,
Strahil Nikolov


On 14 August 2020 at 14:39:49 GMT+03:00, "João Baúto"
<joao.bauto at neuro.fchampalimaud.org> wrote:
>Hi Strahil,
>
>I have tried removing the quota for that specific directory and setting it
>again, but it didn't work (maybe it needs a full quota disable and enable at
>the volume level). I am currently testing the fix described by Hari with the
>quota_fsck.py script
>(https://medium.com/@harigowtham/glusterfs-quota-fix-accounting-840df33fcd3a)
>and it is detecting a lot of size mismatches in files.
>
>Thank you,
>*João Baúto*
>---------------
>
>*Scientific Computing and Software Platform*
>Champalimaud Research
>Champalimaud Center for the Unknown
>Av. Brasília, Doca de Pedrouços
>1400-038 Lisbon, Portugal
>fchampalimaud.org <https://www.fchampalimaud.org/>
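
P.S. To be clear, the following is only an untested sketch of the steps
above. The volume name, mount point and brick paths are placeholders for
your layout, and any xattr value other than the dirty flag should be taken
from what getfattr prints on the test dir, not copied from here:

# 1. from a FUSE mount of the volume, create a scratch dir and give it the
#    same limit you want on projectB
mkdir /mnt/vol/quota-test
gluster volume quota vol limit-usage /quota-test 100TB

# 2. on every brick, compare the quota xattrs of that scratch dir
getfattr -d -m . -e hex /data/brick1/vol/quota-test

# 3. if all bricks agree, set the same kind of values on projectB on each
#    brick with setfattr; shown here only for the dirty flag, where 0x3000
#    is the "clean" value visible in your own getfattr output
setfattr -n trusted.glusterfs.quota.dirty -v 0x3000 /data/brick1/vol/projectB

# 4. then trigger a lookup from the FUSE mount so the quota is recalculated
stat /mnt/vol/projectB
du -sh /mnt/vol/projectB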
Hi João,

What we're looking at here is a quota accounting error. I see you've already
found the blog post by Hari and are using the quota_fsck.py script to fix the
accounting; that should help you resolve this. Let me know if you face any
issues while using it.

Regards,
Srijan Sivakumar


On Fri, 14 Aug 2020, 17:10 João Baúto <joao.bauto at neuro.fchampalimaud.org>
wrote:

> Hi Strahil,
>
> I have tried removing the quota for that specific directory and setting it
> again, but it didn't work (maybe it needs a full quota disable and enable
> at the volume level). I am currently testing the fix described by Hari with
> the quota_fsck.py script
> (https://medium.com/@harigowtham/glusterfs-quota-fix-accounting-840df33fcd3a)
> and it is detecting a lot of size mismatches in files.
>
> Thank you,
> *João Baúto*
> ---------------
>
> *Scientific Computing and Software Platform*
> Champalimaud Research
> Champalimaud Center for the Unknown
> Av. Brasília, Doca de Pedrouços
> 1400-038 Lisbon, Portugal
> fchampalimaud.org <https://www.fchampalimaud.org/>
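
P.S. If you want to sanity-check the raw numbers while the script runs: as
far as I recall the on-disk format, the trusted.glusterfs.quota.size.N (and
.contri.N) values pack three big-endian signed 64-bit fields, which I
understand to be the size in bytes, the file count and the dir count. A
quick bash way to decode the projectB value from your first mail (python3 is
used here only for the two's-complement conversion):

# quota.size.1 of projectB with the 0x prefix stripped
val=0000ab0f227a860000000000478e33acffffffffffffc112

# split into three 16-hex-digit fields and print each as a signed integer
for field in ${val:0:16} ${val:16:16} ${val:32:16}; do
    python3 -c "v = int('$field', 16); print(v - (1 << 64) if v >= (1 << 63) else v)"
done

The last field comes out negative here, which is exactly the kind of broken
accounting the script is meant to detect and fix. The limit-set.1 value is
two such fields (hard and soft limit), and its first field, 0x0000640000000000,
is 100 x 2^40 bytes, i.e. your 100TB hard limit, so the limit itself looks
fine.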