Displaying 20 results from an estimated 2000 matches similar to: "gfid entries in volume heal info that do not heal"
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt,
Run these commands on all the bricks of the replica pair to get the attrs
set on the backend.
On the bricks of first replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
On the fourth replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
Also run the "gluster volume
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue
(RAID6 array failed out with 3 dead drives at once while a 12 TB load
was being copied into one mounted space - what a mess)
I have >700K GFID entries that have no path data. Example:
getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421
# file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!
From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re:
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.
>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file:
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim,
Can you check whether the same hardlinks are present on both the bricks and
whether both of them have link count 2?
If the link count is 2, then
"find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of gfid>/<next 2 bits of gfid>/<full gfid>"
should give you the file path.
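As a concrete illustration of how that expands (brick path and gfid below are placeholders, not from this thread), with the .glusterfs hardlink itself filtered out so only the real path prints:
# Resolve a gfid to its real path on one brick.
BRICK=/data/brick1/gv0
GFID=0000a5ef-5af7-401b-84b5-ff2a51c10421
find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path "*/.glusterfs/*"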
Regards,
Karthik
On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat. Looking now to dig out the path/filename
from xfs_db on the specific inodes individually.
Is the hash computed from the filename alone or from <path>/filename, and if
the latter, relative to where: /, the path from the top of the brick, or something else?
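One possibly simpler route than xfs_db, since the inode number is already known from stat: let find match on the inode directly. A sketch, with the brick path and inode number as placeholders:
# Print every path (including .glusterfs hardlinks) for a given inode number,
# without crossing filesystem boundaries.
find /data/brick1/gv0 -xdev -inum 132881 -print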
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt,
From the xattr output, it looks like the files are not present on the
arbiter brick and need healing. But the parent does not have the
pending markers set for those entries.
The workaround for this is to do a lookup from the mount on the file which
needs heal, so the entry gets created on the arbiter brick, and
then run the volume heal to do the healing.
Follow
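Spelled out as commands, that workaround looks roughly like this (mount point, file path and volume name are placeholders):
# Trigger a lookup from a client mount so the entry gets created on the arbiter,
# then kick off an index heal.
stat /mnt/gv0/path/to/affected/file
gluster volume heal gv0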
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt,
Can you also check for the link count in the stat output of those hardlink
entries in the .glusterfs folder on the bricks.
If the link count is 1 on all the bricks for those entries, then they are
orphaned entries and you can delete those hardlinks.
To be on the safer side have a backup before deleting any of the entries.
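A cautious way to script that check (brick path is a placeholder; this prints candidates rather than deleting anything):
# List gfid entries under .glusterfs whose link count is 1,
# i.e. no real file points at them. Review before deleting; keep a backup.
BRICK=/data/brick1/gv0
find "$BRICK/.glusterfs" -path "$BRICK/.glusterfs/??/??/*" -type f -links 1 -print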
Regards,
Karthik
On Fri, Oct 20, 2017 at 3:18 AM, Jim
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only
on the brick that was live during the outage and concurrent file copy-
in. The brick that was down at that time has no GFIDs that are not also
on the up brick.
As the bricks are 10TB, the find is going to be a long-running process.
I'm running several finds at once with GNU parallel, but it will still
take some time.
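For reference, one way to fan that out with GNU parallel (brick path and gfids.txt are placeholders; assumes one full gfid per line):
# Resolve many gfids to paths in parallel; adjust -j to what the disks tolerate.
BRICK=/data/brick1/gv0
while read -r g; do
  echo "find $BRICK -samefile $BRICK/.glusterfs/${g:0:2}/${g:2:2}/$g"
done < gfids.txt | parallel -j 4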
2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files (any tips on finding them would be appreciated), but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10.
[root at tpc-cent-glus1-081017 ~]#
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while!
I have the following stats:
4085169 files in both bricks.
3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running
when bmidata1 died.
gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
2017 Dec 15
3
Production Volume will not start
Hi all,
I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return:
Error: Request timed out
For some time after that, the volume is locked and we either have to wait or restart Gluster services. In the glusterd.log, it shows the following:
[2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
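Not an answer to the timeout itself, but the usual first checks in this situation look something like the following sketch (volume name is a placeholder):
# Basic state checks before retrying the start.
gluster volume status gv0
gluster peer status
# Retry the start, forcing brick processes to (re)start:
gluster volume start gv0 force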
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote:
> Hi all,
>
>
>
> I have an issue where our volume will not start from any node. When
> attempting to start the volume it will eventually return:
>
> Error: Request timed out
>
>
>
> For some time after that, the volume is locked and we either have to wait
> or restart
2017 Dec 19
0
How to make sure self-heal backlog is empty ?
Mine also has a list of files that seemingly never heal. They are usually isolated on my arbiter bricks, but not always. I would also like to find an answer for this behavior.
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Hoggins!
Sent: Tuesday, December 19, 2017 12:26 PM
To: gluster-users <gluster-users at
2011 Mar 03
3
Mac / NFS problems
Hello,
We're having issues with Macs writing to our gluster system.
Gluster vol info at end.
On a mac, if I make a file in the shell I get the following message:
smoke:hunter david$ echo hello > test
-bash: test: Operation not permitted
And the file is made but is zero size.
smoke:hunter david$ ls -l test
-rw-r--r-- 1 david realise 0 Mar 3 08:44 test
glusterfs/nfslog logs thus:
2018 Feb 09
1
Tiering Volumes
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses Nvme and the "ColdTier" volume uses HDD's. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.
[root at
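For what it's worth, the tier CLI in that release series attaches a hot tier to an existing volume rather than linking two separate volumes; a sketch, with volume name and brick paths as placeholders:
# Attach NVMe bricks as a hot tier to an existing (cold) volume.
gluster volume tier ColdTier attach replica 3 \
  server1:/nvme/brick server2:/nvme/brick server3:/nvme/brick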
2017 Dec 19
3
How to make sure self-heal backlog is empty ?
Hello list,
I'm not sure what to look for here, or whether what I'm seeing is the
actual "backlog" (the one we need to make sure is empty while performing a
rolling upgrade before going to the next node). How can I tell, while
reading this, whether it's okay to reboot / upgrade the next node in the pool?
Here is what I do for checking:
for i in `gluster volume list`; do
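The loop body is cut off by the archive; a plausible completion, assuming the goal is a per-volume count of entries still pending heal:
# Sum the "Number of entries" lines from heal info for every volume.
for i in $(gluster volume list); do
  echo -n "$i: "
  gluster volume heal "$i" info | awk '/Number of entries:/ {n += $NF} END {print n+0 " pending"}'
done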
2018 May 22
2
split brain? but where?
Hi,
Which version of gluster are you using?
You can find which file that is using the following command:
find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of gfid>/<next 2 bits of gfid>/<full gfid>
Please provide the getfattr output of the file which is in split brain.
The steps to recover from split-brain can be found here,
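The link is cut off above; for reference, the CLI-based resolution in recent releases follows this pattern (volume name, policy and file path are placeholders):
# Resolve a file in split-brain by choosing a resolution policy, e.g. latest mtime;
# the file path is relative to the volume root.
gluster volume heal gv0 split-brain latest-mtime /path/to/file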
2018 May 10
2
broken gluster config
Whatever repair happened has now finished, but I still have this.
I can't find anything so far telling me how to fix it. Looking at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine which file or directory on gv0 is actually the issue.
[root at glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick
2018 May 21
2
split brain? but where?
Hi,
I seem to have a split-brain issue, but I cannot figure out where or what it
is. Can someone help me please? I can't find what to fix here.
==========
root at salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/centos-root