Hoggins!
2017-Dec-19 18:26 UTC
[Gluster-users] How to make sure self-heal backlog is empty ?
Hello list,
I'm not sure what to look for here, and I'm not sure whether what I'm
seeing is the actual "backlog" (the one we need to make sure is empty
during a rolling upgrade, before moving on to the next node). Reading
this output, how can I tell whether it's okay to reboot / upgrade the
next node in the pool?
Here is what I do to check:
for i in `gluster volume list`; do gluster volume heal $i info; done
And here is what I get:
Brick ngluster-1.network.hoggins.fr:/export/brick/clem
Status: Connected
Number of entries: 0
Brick ngluster-2.network.hoggins.fr:/export/brick/clem
Status: Connected
Number of entries: 0
Brick ngluster-3.network.hoggins.fr:/export/brick/clem
Status: Connected
Number of entries: 0
Brick ngluster-1.network.hoggins.fr:/export/brick/mailer
Status: Connected
Number of entries: 0
Brick ngluster-2.network.hoggins.fr:/export/brick/mailer
Status: Connected
Number of entries: 0
Brick ngluster-3.network.hoggins.fr:/export/brick/mailer
<gfid:98642fd6-f8a4-4966-9c30-32fedbecfc1a>
Status: Connected
Number of entries: 1
Brick ngluster-1.network.hoggins.fr:/export/brick/rom
Status: Connected
Number of entries: 0
Brick ngluster-2.network.hoggins.fr:/export/brick/rom
Status: Connected
Number of entries: 0
Brick ngluster-3.network.hoggins.fr:/export/brick/rom
<gfid:52b09fb6-78da-46db-af0e-e6a16194a977>
Status: Connected
Number of entries: 1
Brick ngluster-1.network.hoggins.fr:/export/brick/thedude
Status: Connected
Number of entries: 0
Brick ngluster-2.network.hoggins.fr:/export/brick/thedude
<gfid:4b1f4d9b-f2d8-4a50-83f7-3f014fe0b9f6>
Status: Connected
Number of entries: 1
Brick ngluster-3.network.hoggins.fr:/export/brick/thedude
Status: Connected
Number of entries: 0
Brick ngluster-1.network.hoggins.fr:/export/brick/web
Status: Connected
Number of entries: 0
Brick ngluster-2.network.hoggins.fr:/export/brick/web
<gfid:491c59f7-bf42-4d7c-be56-842317c55ac5>
<gfid:9deb7b0d-0459-4dd1-a93c-f4eab03df6d6>
<gfid:3803e1ec-9327-4e08-8f31-f3dc90aaa403>
Status: Connected
Number of entries: 3
Brick ngluster-3.network.hoggins.fr:/export/brick/web
<gfid:0f29326d-d273-4299-ba71-a5d8722a9149>
<gfid:b5f0dd49-00a1-4a1d-97c1-0be973b097d6>
<gfid:22d21ac4-8ad8-4390-a07b-26c8a75f2f5d>
<gfid:5b432df5-8e8d-4789-abea-c35e88490e41>
<gfid:b3621d26-3a60-4803-8039-a89933c306d8>
<gfid:5eb83bc2-c975-4182-a7c3-c8cc9b39a064>
<gfid:3803e1ec-9327-4e08-8f31-f3dc90aaa403>
<gfid:491c59f7-bf42-4d7c-be56-842317c55ac5>
<gfid:9deb7b0d-0459-4dd1-a93c-f4eab03df6d6>
<gfid:44a64b36-cdb4-4c0f-be18-419b72add380>
<gfid:e12d8e6e-56b0-4db4-9e89-e80bdee3a435>
Status: Connected
Number of entries: 11
Should I be worried that this never ends?
    Thank you,
        Hoggins!
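As an aside: the check loop above can be piped through a small awk filter so that only bricks with pending entries are printed, plus a grand total. This is a sketch; the `summarize_heal` helper is a name I made up, and it assumes the stock `heal <vol> info` output format shown above:

```shell
# Print only bricks with a non-zero pending-heal count, plus a total.
# Assumes the standard "Brick ..." / "Number of entries: N" output format.
summarize_heal() {
    awk '/^Brick / { brick = $2 }
         /^Number of entries: / {
             n = $4
             if (n > 0) { print brick ": " n; total += n }
         }
         END { print "total pending: " total+0 }'
}

# Run the same per-volume check as above, summarized:
if command -v gluster >/dev/null; then
    for i in $(gluster volume list); do
        gluster volume heal "$i" info
    done | summarize_heal
fi
```

A total of 0 would be the "backlog is empty" signal before moving to the next node.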
Matt Waymack
2017-Dec-19 21:41 UTC
[Gluster-users] How to make sure self-heal backlog is empty ?
Mine also has a list of files that seemingly never heal. They are usually
isolated on my arbiter bricks, but not always. I would also like to find an
answer for this behavior.
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at
gluster.org] On Behalf Of Hoggins!
Sent: Tuesday, December 19, 2017 12:26 PM
To: gluster-users <gluster-users at gluster.org>
Subject: [Gluster-users] How to make sure self-heal backlog is empty ?
> [original message quoted in full; trimmed]
Karthik Subrahmanya
2017-Dec-20 05:45 UTC
[Gluster-users] How to make sure self-heal backlog is empty ?
Hi,

Can you provide the following for the volumes that are showing pending
entries, so we can debug the issue:
- volume info
- shd log
- mount log

Thanks & Regards,
Karthik

On Wed, Dec 20, 2017 at 3:11 AM, Matt Waymack <mwaymack at nsgdv.com> wrote:
> Mine also has a list of files that seemingly never heal. They are usually
> isolated on my arbiter bricks, but not always. I would also like to find
> an answer for this behavior.
>
> [rest of quoted message trimmed]
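For reference, on a typical packaged install the logs Karthik asks for live under /var/log/glusterfs/. The paths below are an assumption (default layout; they can vary by distribution), and `mount_log_path` is just a hypothetical helper illustrating the naming convention:

```shell
# Where the requested logs usually live (assumption: default packaged
# layout under /var/log/glusterfs/; adjust for your distribution).

# Self-heal daemon (shd) log:
#   /var/log/glusterfs/glustershd.log

# The fuse mount log is conventionally named after the mount point,
# with the leading slash dropped and the remaining slashes turned
# into dashes:
mount_log_path() {
    mp=${1#/}                               # drop the leading slash
    echo "/var/log/glusterfs/${mp//\//-}.log"  # e.g. mnt/web -> mnt-web.log
}

# mount_log_path /mnt/web  ->  /var/log/glusterfs/mnt-web.log
```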
Ben Turner
2017-Dec-21 02:25 UTC
[Gluster-users] How to make sure self-heal backlog is empty ?
You can try kicking off a client-side heal by running:

ls -laR /your-gluster-mount/*

Sometimes, when I see just the GFID instead of the file name, I have
found that if I stat the file, the name shows up in heal info.

Before running that, make sure that you don't have any split-brain files:

gluster v heal your-vol info split-brain

If you do have split-brain files, follow:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html

HTH!

-b

----- Original Message -----
> From: "Hoggins!" <fuckspam at wheres5.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Tuesday, December 19, 2017 1:26:08 PM
> Subject: [Gluster-users] How to make sure self-heal backlog is empty ?
>
> [quoted message trimmed]
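Related to Ben's point about bare GFIDs: because a brick stores every regular file hard-linked under its `.glusterfs/<aa>/<bb>/<gfid>` tree, you can often recover the path directly on the brick server. A sketch, using the brick path and GFID from the `mailer` output above; `gfid_to_path` is a made-up helper name, and this only works for regular files whose hard link still exists:

```shell
# gfid_to_path BRICK GFID: print the on-brick path(s) hard-linked to the
# GFID file under BRICK/.glusterfs (regular files only; run on the brick
# host). The .glusterfs tree shards by the first two byte pairs of the GFID.
gfid_to_path() {
    brick=$1 gfid=$2
    find "$brick" -path "$brick/.glusterfs" -prune -o \
         -samefile "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid" -print
}

# Example, with the first pending entry from the output above:
# gfid_to_path /export/brick/mailer 98642fd6-f8a4-4966-9c30-32fedbecfc1a
```

Once you have the path, a stat of the corresponding file through the mount point should make the name appear in heal info, as Ben describes.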