Last Friday, I rebooted one of my gluster nodes and it didn't properly
mount the filesystem holding its brick (I had forgotten to add it to
fstab...), so, when I got back to work on Monday, its root filesystem
was full and the gluster heal info showed around 25000 entries needing
to be healed.

I got the filesystems straightened out and, within a matter of minutes,
the number of entries waiting to be healed in that subvolume dropped to
59.  (Showing twice, of course.  The cluster is replica 2+A, so the
other full replica and the arbiter are both showing the same list of
entries.)  Over a full day later, it's still at 59.

Is there anything I can do to kick the self-heal back into action and
get those final 59 entries cleaned up?
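For reference, the "heal info" above is from the standard gluster CLI.
With "myvol" standing in for the real volume name, the commands I'm
looking at are roughly:

    # list entries still pending heal (run on any node)
    gluster volume heal myvol info

    # ask the self-heal daemon to re-walk the heal index
    gluster volume heal myvol

    # or, more drastically, crawl the entire brick
    gluster volume heal myvol full

The first is what's been stuck at 59; the latter two are the obvious
things to reach for, if I'm reading the docs right.

--
Dave Sherohman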
Which version of glusterfs are you using?

On Tue, Sep 4, 2018 at 4:26 PM Dave Sherohman <dave at sherohman.org> wrote:
> Is there anything I can do to kick the self-heal back into action and
> get those final 59 entries cleaned up?

--
Pranith
On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
> Is there anything I can do to kick the self-heal back into action and
> get those final 59 entries cleaned up?

In response to the request about what version of gluster I'm running
(...which I deleted prematurely...), it's the latest version from the
Debian stable repository, which they identify as 3.8.8-1.
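That's going by the packaged binaries rather than anything built from
source; assuming the stock Debian packages, the check amounts to
something like:

    glusterfs --version        # version reported by the installed binaries
    dpkg -l | grep glusterfs   # what the Debian packages claim

--
Dave Sherohman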
A month and a half later, I've finally managed to make the necessary
arrangements and upgraded last week from gluster 3.8.8 to 3.12.15.
Doing the upgrade cleared up the vast majority of the entries which had
refused to heal, but there are still 5 outstanding entries after
allowing several days for them to complete.  (I finished upgrades on
the affected subvolume on Wednesday last week and on the other two
subvolumes on Friday.  And didn't touch anything over the weekend, of
course.)

So, going back to my original question, what can I do to get these
remaining entries to heal and have a fully-consistent cluster again?

On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
> Is there anything I can do to kick the self-heal back into action and
> get those final 59 entries cleaned up?
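For what it's worth, the knobs I'm aware of here (with "myvol" again
standing in for the real volume name) are roughly:

    # see which entries are still pending, and whether any are split-brain
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain

    # confirm the self-heal daemons are actually running on all nodes
    gluster volume status myvol

    # manually trigger an index heal, or a full crawl of the bricks
    gluster volume heal myvol
    gluster volume heal myvol full

If something beyond these is needed to clear the last few entries, I'd
appreciate a pointer.

--
Dave Sherohman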