Jeff Byers
2021-May-11 20:43 UTC
[Gluster-users] Completely filling up a Disperse volume results in unreadable/unhealable files that must be deleted.
Does anyone have any ideas how to prevent, or perhaps fix, the issue described here:

Completely filling up a Disperse volume results in unreadable/unhealable files that must be deleted.
https://github.com/gluster/glusterfs/issues/2021

Cleaning up from this was so terrible when it happened the first time that the thought of it happening again is causing me to lose sleep. :-(

This was a while ago, but from what I recall from my lab testing, reserving space with the GlusterFS option and using the GlusterFS quota feature only helped some, and didn't prevent the problem from happening.

Thanks!

~ Jeff Byers ~
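For context, the two mitigations referred to above are normally set per volume with the gluster CLI. A minimal sketch follows; the volume name "myvol" and the limit values are placeholders, not taken from the thread:

    # Reserve a percentage of each brick so the bricks never fill
    # completely; writes fail with ENOSPC once usage crosses the
    # reserve, leaving headroom for heals. (10 is a placeholder.)
    gluster volume set myvol storage.reserve 10

    # Enable the quota feature and cap usage at the volume root.
    # (The 900GB limit is a placeholder.)
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage / 900GB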
Ravishankar N
2021-May-12 07:29 UTC
[Gluster-users] Completely filling up a Disperse volume results in unreadable/unhealable files that must be deleted.
On Wed, May 12, 2021 at 2:14 AM Jeff Byers <jbyers.sfly at gmail.com> wrote:

> Does anyone have any ideas how to prevent, or perhaps
> fix, the issue described here:
>
> Completely filling up a Disperse volume results in
> unreadable/unhealable files that must be deleted.
> https://github.com/gluster/glusterfs/issues/2021
>
> Cleaning up from this was so terrible when it happened the
> first time that the thought of it happening again is causing
> me to lose sleep. :-(
>
> This was a while ago, but from what I recall from my lab
> testing, reserving space with the GlusterFS option and using
> the GlusterFS quota feature only helped some, and didn't
> prevent the problem from happening.

You could perhaps try to reserve some space on the bricks with the `storage.reserve` option so that you are alerted earlier.

As far as I understand, in disperse volumes, for a file to be healed successfully, all the xattrs and stat information (file size, permissions, uid, gid, etc.) must be identical on a majority of the bricks. If that isn't the case, the heal logic cannot proceed further. For file.7 in the github issue, I see that trusted.glusterfs.mdata and the file size (from the `ls -l` output) are different on all 3 bricks, so heals won't happen even if there is free space. CC'ing Xavi to correct me if I am wrong.

I'm also not sure if it is possible to partially recover the data from the append writes which were successful before the ENOSPC was hit.

Regards,
Ravi
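To check whether a given file is in this state, one can compare the stat output and xattrs on each brick directly. A sketch, assuming placeholder brick paths and volume name (run the first two commands on every server against its local brick):

    # For EC heal to proceed, the size, owner, permissions, and
    # xattrs must agree on a majority of bricks.
    # (/data/brick1/myvol/file.7 is a placeholder path.)
    stat /data/brick1/myvol/file.7
    getfattr -d -m . -e hex /data/brick1/myvol/file.7
    # Compare trusted.glusterfs.mdata and the trusted.ec.* xattrs
    # (e.g. trusted.ec.size, trusted.ec.version) across bricks.

    # List the entries gluster itself considers pending heal:
    gluster volume heal myvol info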