Can you share the output of 'ls -l' and 'getfattr -d -e hex -m . <gfid/file>'?
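On the brick itself, each gfid maps to a hardlink under .glusterfs (first two hex characters of the gfid, then the next two). For the first entry in your heal info output, that would be something like this (adjust if your brick path differs):

ls -l /data/glusterfs/PBS/NonMountDir/.glusterfs/78/ce/78ce1382-4ac8-4c2e-94af-45b5abd055c5
getfattr -d -e hex -m . /data/glusterfs/PBS/NonMountDir/.glusterfs/78/ce/78ce1382-4ac8-4c2e-94af-45b5abd055c5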
Best Regards,
Strahil Nikolov
On Fri, Mar 25, 2022 at 20:17, Collin Strassburger <cstrassburger at bihrle.com> wrote:
Thank you for the suggestion!

I mounted the volume on a new mount point, retrieved the gfid of another file,
and was able to obtain the file information through Method 2.

However, when I used the gfids listed in heal, I get a "transport endpoint is
not connected" error, despite them showing as "connected" in heal info (and
volume status).
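For the record, the lookup I'm attempting looks roughly like this, using the first hydra2 gfid from my original message (the aux-gfid mount point name is my own):

getfattr -n trusted.glusterfs.pathinfo -e text /mnt/pbs_gfid/.gfid/c3e7dc8e-111c-4ab2-9c10-ea5c1fd11223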
Thanks,
Collin
From: Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: Friday, March 25, 2022 1:47 PM
To: Collin Strassburger <cstrassburger at bihrle.com>; gluster-users at
gluster.org
Subject: Re: [Gluster-users] gfid entries failing to heal
To find the gfid path, you can use
https://docs.gluster.org/en/main/Troubleshooting/gfid-to-path/
Usually, I prefer to mount and then use Method 2 to retrieve the path.
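Roughly, Method 2 is an aux-gfid mount plus a pathinfo query, something like this (the mount point is just an example):

mount -t glusterfs -o aux-gfid-mount hydra1:/hydra_pbs_vol /mnt/gfid-resolve
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gfid-resolve/.gfid/<gfid>

The pathinfo output lists the brick paths of the file behind that gfid.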
Then, you can getfattr the file/dir to get a clue.
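For example, on each brick (the path below is a placeholder for whatever pathinfo returns):

getfattr -d -m . -e hex /data/glusterfs/PBS/NonMountDir/<resolved-path>

Mismatched trusted.afr.* values between the bricks usually hint at what is blocking the heal.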
Best Regards,
Strahil Nikolov
On Fri, Mar 25, 2022 at 18:51, Collin Strassburger
<cstrassburger at bihrle.com> wrote:
Hello,

I am having a problem with a replica 3 volume.

When I run: gluster volume heal hydra_pbs_vol info
It returns:
Brick hydra1:/data/glusterfs/PBS/NonMountDir
<gfid:78ce1382-4ac8-4c2e-94af-45b5abd055c5>
Status: Connected
Number of entries: 1

Brick hydra2:/data/glusterfs/PBS/NonMountDir
<gfid:c3e7dc8e-111c-4ab2-9c10-ea5c1fd11223>
<gfid:96a72cf8-025d-4127-a216-4429dd1f59c6>
<gfid:8abbef15-929b-450f-9528-cb37dfc40bde>
<gfid:825bffa1-c727-4c2e-8c21-4da563851d7b>
Status: Connected
Number of entries: 4

Brick viz1:/data/glusterfs/PBS/NonMountDir
Status: Connected
Number of entries: 0

The items have been present for some time and do not appear to be healing.
As shown above, the items are not labeled as split-brain, and they do not have
path information to do a manual delete-and-heal.
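If I understand the backend layout correctly, the closest handle I have is the gfid hardlink on the bricks themselves, e.g. for the first hydra2 entry:

stat /data/glusterfs/PBS/NonMountDir/.glusterfs/c3/e7/c3e7dc8e-111c-4ab2-9c10-ea5c1fd11223

(On a regular file, a link count of 1 here would suggest an orphaned gfid entry with no real path behind it.)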
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Content of /var/log/glusterfs/glfsheal-hydra_pbs_vol.log is attached
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Info:
gluster volume info hydra_pbs_vol
Volume Name: hydra_pbs_vol
Type: Replicate
Volume ID: efb30804-1c08-4ef6-a579-a2f77d5049e0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: hydra1:/data/glusterfs/PBS/NonMountDir
Brick2: hydra2:/data/glusterfs/PBS/NonMountDir
Brick3: viz1:/data/glusterfs/PBS/NonMountDir
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
features.bitrot: on
features.scrub: Active
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Status:
Status of volume: hydra_pbs_vol
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick hydra1:/data/glusterfs/PBS/NonMountDir 49152     0          Y       1518
Brick hydra2:/data/glusterfs/PBS/NonMountDir 49152     0          Y       1438
Brick viz1:/data/glusterfs/PBS/NonMountDir   49152     0          Y       2991
Self-heal Daemon on localhost                N/A       N/A        Y       1942
Bitrot Daemon on localhost                   N/A       N/A        Y       1563
Scrubber Daemon on localhost                 N/A       N/A        Y       1738
Self-heal Daemon on viz1                     N/A       N/A        Y       3491
Bitrot Daemon on viz1                        N/A       N/A        Y       3203
Scrubber Daemon on viz1                      N/A       N/A        Y       3261
Self-heal Daemon on hydra2                   N/A       N/A        Y       1843
Bitrot Daemon on hydra2                      N/A       N/A        Y       1475
Scrubber Daemon on hydra2                    N/A       N/A        Y       1651

Task Status of Volume hydra_pbs_vol
------------------------------------------------------------------------------
There are no active volume tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How can I resolve these stuck heal entries?
Thanks,
Collin Strassburger (he/him)