similar to: gfid entries in volume heal info that do not heal

Displaying 20 results from an estimated 10000 matches similar to: "gfid entries in volume heal info that do not heal"

2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files, any tips for finding them would be appreciated, but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10. [root at tpc-cent-glus1-081017 ~]#
2017 Oct 16
2
gfid entries in volume heal info that do not heal
Hi Matt, The files might be in split brain. Could you please send the outputs of these? gluster volume info <volname> gluster volume heal <volname> info And also the getfattr output of the files which are in the heal info output from all the bricks of that replica pair. getfattr -d -e hex -m . <file path on brick> Thanks & Regards Karthik On 16-Oct-2017 8:16 PM,
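A minimal sketch of the outputs being asked for, assuming a hypothetical volume name gv0 and a brick at /bricks/b1/gv0 (run the getfattr on the same file path on every brick of the replica pair):
    gluster volume info gv0
    gluster volume heal gv0 info
    getfattr -d -e hex -m . /bricks/b1/gv0/path/to/suspect-file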
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt, From the xattr output, it looks like the files are not present on the arbiter brick and need healing. But the parent does not have the pending markers set for those entries. The workaround for this is to do a lookup from the mount on the file which needs heal, so it creates the entry on the arbiter brick, and then run the volume heal to do the healing. Follow
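A minimal sketch of that workaround, assuming the volume gv0 is fuse-mounted at /mnt/gv0 and the affected file is dir1/file1 (hypothetical names): the lookup from the client creates the missing entry on the arbiter, then the heal is triggered.
    stat /mnt/gv0/dir1/file1
    gluster volume heal gv0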
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while! I have the following stats: 4085169 files in both bricks; 3162940 files only have a single hard link. All of the files exist on both servers. bmidata2 (below) WAS running when bmidata1 died. gluster volume heal clifford statistics heal-count Gathering count of entries to be healed on volume clifford has been successful Brick bmidata1:/data/glusterfs/clifford/brick/brick Number of
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt, Can you also check the link count in the stat output of those hardlink entries in the .glusterfs folder on the bricks? If the link count is 1 on all the bricks for those entries, then they are orphaned entries and you can delete those hardlinks. To be on the safe side, take a backup before deleting any of the entries. Regards, Karthik On Fri, Oct 20, 2017 at 3:18 AM, Jim
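A sketch of that link-count check, with a hypothetical brick path and gfid placeholders; the "Links:" field in the stat output is the hard-link count, and anything should be backed up before it is deleted.
    stat /bricks/b1/gv0/.glusterfs/<first 2 chars>/<next 2 chars>/<full gfid>
    # if Links: 1 on every brick, the entry is orphaned:
    rm /bricks/b1/gv0/.glusterfs/<first 2 chars>/<next 2 chars>/<full gfid>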
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only on the brick that was live during the outage and concurrent file copy-in. The brick that was down at that time has no GFIDs that are not also on the up brick. As the bricks are 10TB, the find is going to be a long-running process. I'm running several finds at once with GNU parallel, but it will still take some time.
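One way such a parallel lookup could be run (an assumed sketch, not the poster's actual command): feed a prepared list of .glusterfs gfid paths to GNU parallel and exclude the .glusterfs tree itself from the results, so only the real file paths are printed.
    cat /tmp/gfid-list.txt | parallel -j4 "find /bricks/b1/gv0 -samefile {} -not -path '*/.glusterfs/*'"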
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks! From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Monday, October 23, 2017 1:52 AM To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com> Cc: gluster-users <Gluster-users at gluster.org> Subject: Re:
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log. >> Run these commands on all the bricks of the replica pair to get the attrs set on the backend. [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 getfattr: Removing leading '/' from absolute path names # file:
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue (RAID6 array failed out with 3 dead drives at once while a 12 TB load was being copied into one mounted space - what a mess). I have >700K GFID entries that have no path data. Example: getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421 # file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data that supplies the path to the original. I have the inode from stat. Looking now to dig out the path/filename with xfs_db on the specific inodes individually. Is the hash taken of the filename alone or of <path>/filename, and if the latter, relative to where: /, the <path from top of brick>, ...? On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of the first replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 On the fourth replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim, Can you check whether the same hardlinks are present on both the bricks and whether both of them have the link count 2? If the link count is 2 then "find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of gfid>/<next 2 characters of gfid>/<full gfid>" should give you the file path. Regards, Karthik On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
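Filled in with the gfid quoted earlier in this thread and a hypothetical brick path, that lookup would look like the line below; the output should list both the .glusterfs hardlink and the real file path on that brick.
    find /bricks/b1/gv0 -samefile /bricks/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2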
2017 Dec 19
0
How to make sure self-heal backlog is empty ?
Mine also has a list of files that seemingly never heal. They are usually isolated on my arbiter bricks, but not always. I would also like to find an answer for this behavior. -----Original Message----- From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Hoggins! Sent: Tuesday, December 19, 2017 12:26 PM To: gluster-users <gluster-users at
2017 Dec 19
3
How to make sure self-heal backlog is empty ?
Hello list, I'm not sure what to look for here, and not sure if what I'm seeing is the actual "backlog" (which we need to make sure is empty while performing a rolling upgrade, before going to the next node). How can I tell, while reading this, whether it's okay to reboot / upgrade the next node in the pool? Here is what I do for checking: for i in `gluster volume list`; do
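The quoted loop is cut off in this snippet; a hedged guess at how such a per-volume check could be completed (not necessarily the poster's exact script):
    for i in `gluster volume list`; do
        echo "== $i =="
        gluster volume heal "$i" info | grep 'Number of entries'
    done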
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, you have put my mind at ease. Below is the getfattr output for a file from all the bricks: root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack getfattr: Removing leading '/' from absolute path names # file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2017 Sep 17
0
Volume Heal issue
I am using gluster 3.8.12, the default on CentOS 7.3 (I will update to 3.10 at some point) On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkicktech at gmail.com> wrote: > Hi all, > > I have a replica 3 with 1 arbiter. > > I see the last days that one file at a volume is always showing as needing > healing: > > gluster volume heal vms info > Brick
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (the version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
2017 Sep 17
2
Volume Heal issue
Hi all, I have a replica 3 with 1 arbiter. For the last few days I have seen that one file in a volume is always showing as needing healing: gluster volume heal vms info Brick gluster0:/gluster/vms/brick Status: Connected Number of entries: 0 Brick gluster1:/gluster/vms/brick Status: Connected Number of entries: 0 Brick gluster2:/gluster/vms/brick *<gfid:66d3468e-00cf-44dc-a835-7624da0c5370>* Status:
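One hedged way (not mentioned in this thread) to map a bare <gfid:...> entry like the one above back to a file path is an aux-gfid mount, assuming a free mount point such as /mnt/vms-gfid:
    mount -t glusterfs -o aux-gfid-mount gluster0:/vms /mnt/vms-gfid
    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/vms-gfid/.gfid/66d3468e-00cf-44dc-a835-7624da0c5370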
2017 Aug 21
0
self-heal not working
Can you also provide: gluster v heal <my vol> info split-brain If it is in split brain, just delete the incorrect file from the brick and run heal again. I haven't tried this with arbiter but I assume the process is the same. -b ----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ben Turner" <bturner at redhat.com> > Cc:
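A minimal sketch of that manual fix, with hypothetical volume, brick, and file names; the matching gfid hardlink under .glusterfs on the same brick should be removed as well, and a backup taken first.
    gluster v heal gv0 info split-brain
    rm /bricks/b1/gv0/dir1/file1
    rm /bricks/b1/gv0/.glusterfs/<first 2 chars>/<next 2 chars>/<full gfid>
    gluster v heal gv0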
2018 Mar 20
0
Disperse volume recovery and healing
On Tue, Mar 20, 2018 at 5:26 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > That makes sense. In the case of "file damage," it would show up as files > that could not be healed in logfiles or gluster volume heal [volume] info? If the damage affects more bricks than the volume redundancy, then probably yes. These files or directories will appear in