search for: gfids

Displaying 20 results from an estimated 365 matches for "gfids".

2017 Oct 16
0
gfid entries in volume heal info that do not heal
OK, so here's my output of the volume info and the heal info. I have not yet tracked down the physical location of these files (any tips on finding them would be appreciated), but I'm definitely just wanting them gone. I forgot to mention earlier that the cluster is running 3.12 and was upgraded from 3.10; these files were likely stuck like this when it was on 3.10. [root at tpc-cent-glus1-081017 ~]#
2017 Oct 16
2
gfid entries in volume heal info that do not heal
Hi Matt, The files might be in split brain. Could you please send the outputs of these? gluster volume info <volname> gluster volume heal <volname> info Also send, from all the bricks of that replica pair, the getfattr output for the files listed in the heal info output: getfattr -d -e hex -m . <file path on brick> Thanks & Regards, Karthik On 16-Oct-2017 8:16 PM,
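A minimal sketch of the diagnostics being requested, assuming a volume named gv0 and a brick at /exp/b1/gv0 (both placeholders borrowed from later messages in these threads; the file path is hypothetical):

gluster volume info gv0
gluster volume heal gv0 info
# On every brick of the affected replica pair, dump the extended
# attributes (trusted.gfid and the trusted.afr.* pending counters) in hex:
getfattr -d -e hex -m . /exp/b1/gv0/path/to/reported/file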
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of the first replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 On the fourth replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log. >> Run these commands on all the bricks of the replica pair to get the attrs set on the backend. [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 getfattr: Removing leading '/' from absolute path names # file:
2017 Oct 19
2
gfid entries in volume heal info that do not heal
...em. I accidentally copied over the .glusterfs folder from the working side (replica 2 only for now - adding arbiter node as soon as I can get this one cleaned up). I've run the methods from "http://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/" with no results using random GFIDs. A full systemic run using the script from method 3 crashes with "too many nested links" error (or something similar). When I run gluster volume heal volname info, I get 700K+ GFIDs. Oh. gluster 3.8.4 on Centos 7.3 Should I just remove the contents of the .glusterfs folder on both and res...
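For orientation, this is roughly how a GFID from heal info maps onto a brick's .glusterfs tree: the first two and next two characters of the GFID name two levels of subdirectories. A sketch, using the brick path /exp/b1/gv0 and the GFID 108694db-c039-4b7c-bd3d-ad6a15d811a2 quoted elsewhere in these threads as placeholders:

# The backend entry for the GFID:
ls -l /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
# For regular files this entry is a hard link, so -samefile can recover the
# named path on the brick; directory GFIDs are symlinks and will not resolve
# this way.
find /exp/b1/gv0 -samefile /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 -not -path '*/.glusterfs/*'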
2017 Oct 18
1
gfid entries in volume heal info that do not heal
...the mount, so it will create the entry on the arbiter brick, and then run the volume heal to do the healing. Follow these steps to resolve the issue (first try this on one file and check whether it gets healed; if it does, repeat for all the remaining files): 1. Get the file path for the gfids you got from the heal info output: find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of gfid>/<next two characters of gfid>/<full gfid> 2. Do an ls/stat on the file from the mount. 3. Run volume heal. 4. Check the heal info output to see whether the file got healed. If o...
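Sketched as a small loop, with the brick path, mount point, volume name, and gfid-list.txt (one bare GFID per line) all placeholders:

BRICK=/exp/b1/gv0   # hypothetical brick path
MNT=/mnt/gv0        # hypothetical FUSE mount of the volume
VOL=gv0             # hypothetical volume name

while read -r gfid; do
    # Step 1: resolve the GFID to its path on the brick
    path=$(find "$BRICK" -samefile "$BRICK/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid" \
               -not -path '*/.glusterfs/*' | head -n 1)
    [ -n "$path" ] || continue
    # Step 2: stat the same file through the mount so the missing entry gets created
    stat "$MNT/${path#$BRICK/}" > /dev/null
done < gfid-list.txt

# Steps 3 and 4: trigger a heal, then re-check the pending entries
gluster volume heal "$VOL"
gluster volume heal "$VOL" info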
2013 Jun 23
1
Split brain files show no filename/path with gfid
Hi I have been running 4 nodes in a distributed-replicated setup on gluster 3.3.1 since January. Each node has 10TB of storage, giving a total of 20TB; for 2-3 years before that they were running on previous versions of gluster. Recently we had issues with the backend storage (ext4) on one of the nodes going read-only; that's now resolved, and I have run the following and had errors. gluster
2017 Oct 23
2
gfid entries in volume heal info that do not heal
...em. I accidentally copied over the .glusterfs folder from the working side (replica 2 only for now - adding arbiter node as soon as I can get this one cleaned up). I've run the methods from "http://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/" with no results using random GFIDs. A full systemic run using the script from method 3 crashes with "too many nested links" error (or something similar). When I run gluster volume heal volname info, I get 700K+ GFIDs. Oh. gluster 3.8.4 on Centos 7.3 Should I just remove the contents of the .glusterfs folder on both and r...
2017 Oct 24
3
gfid entries in volume heal info that do not heal
...terfs folder from the working side (replica 2 only for now - adding arbiter node as soon as I can get this one cleaned up). I've run the methods from "http://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/" with no results using random GFIDs. A full systemic run using the script from method 3 crashes with "too many nested links" error (or something similar). When I run gluster volume heal volname info, I get 700K+ GFIDs. Oh. gluster 3.8.4 on Centos 7.3 Should I just remov...
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...over the .glusterfs folder from the working side (replica 2 only for now - adding arbiter node as soon as I can get this one cleaned up). I've run the methods from "http://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/" with no results using random GFIDs. A full systemic run using the script from method 3 crashes with "too many nested links" error (or something similar). When I run gluster volume heal volname info, I get 700K+ GFIDs. Oh. gluster 3.8.4 on Centos 7.3 Should I just remove the contents of th...
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only on the brick that was live during the outage and concurrent file copy-in. The brick that was down at that time has no GFIDs that are not also on the up brick. As the bricks are 10TB, the find is going to be a long-running process. I'm running...
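One way to produce that kind of GFID diff without a per-file find is to list the .glusterfs entries on each brick and compare the sorted sets; a sketch, with /exp/b1/gv0 standing in for the real brick path and the output file names made up:

# On each node: collect the GFID basenames (files and symlinks two levels
# below .glusterfs), sorted for comparison.
find /exp/b1/gv0/.glusterfs -mindepth 3 -maxdepth 3 \( -type f -o -type l \) -printf '%f\n' | sort > /tmp/gfids-$(hostname).txt

# After copying one list next to the other: GFIDs present only on the first brick.
comm -23 /tmp/gfids-node1.txt /tmp/gfids-node2.txt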
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...w - adding arbiter node as soon as I can get this one cleaned up). I've run the methods from "http://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/" with no results using random GFIDs. A full systemic run using the script from method 3 crashes with "too many nested links" error (or something similar). When I run gluster volume heal volname info, I get 700K+ GFIDs. Oh. ...
2017 Jul 07
2
I/O error for one folder within the mountpoint
Hi Ravi, thanks for your answer, sure there you go: # gluster volume heal applicatif info Brick ipvr7.xxx:/mnt/gluster-applicatif/brick <gfid:e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed> <gfid:f8030467-b7a3-4744-a945-ff0b532e9401> <gfid:def47b0b-b77e-4f0e-a402-b83c0f2d354b> <gfid:46f76502-b1d5-43af-8c42-3d833e86eb44> <gfid:d27a71d2-6d53-413d-b88c-33edea202cc2>
2017 Jul 07
2
I/O error for one folder within the mountpoint
...On 07/07/2017 at 11:54, Ravishankar N wrote: > What does the mount log say when you get the EIO error on snooper? > Check if there is a gfid mismatch on the snooper directory or the files > under it for all 3 bricks. In any case the mount log or the > glustershd.log of the 3 nodes for the gfids you listed below should > give you some idea of why the files aren't healed. > Thanks. > > On 07/07/2017 03:10 PM, Florian Leleu wrote: >> >> Hi Ravi, >> >> thanks for your answer, sure there you go: >> >> # gluster volume heal applicatif info ...
2017 Jul 07
0
I/O error for one folder within the mountpoint
What does the mount log say when you get the EIO error on snooper? Check if there is a gfid mismatch on the snooper directory or the files under it, on all 3 bricks. In any case, the mount log or the glustershd.log of the 3 nodes for the gfids you listed below should give you some idea of why the files aren't healed. Thanks. On 07/07/2017 03:10 PM, Florian Leleu wrote: > > Hi Ravi, > > thanks for your answer, sure there you go: > > # gluster volume heal applicatif info > Brick ipvr7.xxx:/mnt/gluster-applicatif/...
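A quick way to check for that kind of directory gfid mismatch is to compare the trusted.gfid xattr of the directory on all three bricks; the three values should be identical. A sketch, with the location of snooper under the brick left as a placeholder:

# Run on each of the three nodes; any difference indicates a gfid split brain.
getfattr -n trusted.gfid -e hex /mnt/gluster-applicatif/brick/path/to/snooper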
2017 Jul 07
0
I/O error for one folder within the mountpoint
...can I fix that? If that helps, I don't mind deleting the whole snooper folder, I have a backup. The steps listed in "Fixing Directory entry split-brain:" of https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/ should give you an idea. They are for files whose gfids mismatch, but the steps are similar for directories too. If the contents of snooper are the same on all bricks, you could also try directly deleting the directory from one of the bricks and immediately doing an `ls snooper` from the mount to trigger heals to recreate the entries. Hope this help...
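A sketch of that last suggestion, with brick path, mount point, and the directory's location all placeholders, and on the assumption that the copy being deleted is expendable (keep the backup). For a directory, the .glusterfs entry is a symlink named after its GFID, so it is removed along with the directory itself:

BRICK=/mnt/gluster-applicatif/brick   # hypothetical: brick whose copy is discarded
DIR=path/to/snooper                   # hypothetical: directory path relative to the brick
MNT=/mnt/applicatif                   # hypothetical: client mount of the volume

# Read the directory's GFID from its trusted.gfid xattr and re-insert the dashes.
raw=$(getfattr -n trusted.gfid -e hex "$BRICK/$DIR" | awk -F= '/^trusted.gfid/ {print $2}')
hex=${raw#0x}
gfid="${hex:0:8}-${hex:8:4}-${hex:12:4}-${hex:16:4}-${hex:20:12}"

# Remove the directory and its .glusterfs symlink from this one brick only.
rm -rf "$BRICK/$DIR"
rm -f "$BRICK/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"

# From the client mount, list the directory to trigger the entry heal.
ls "$MNT/$DIR"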
2017 Dec 19
3
How to make sure self-heal backlog is empty ?
Hello list, I'm not sure what to look for here. I'm not sure whether what I'm seeing is the actual "backlog" (which we need to make sure is empty while performing a rolling upgrade, before going to the next node). How can I tell, while reading this, whether it's okay to reboot / upgrade my next node in the pool? Here is what I do for checking: for i in `gluster volume list`; do
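The excerpt cuts off mid-loop; one plausible shape for such a check (a sketch, not necessarily the poster's script) is to grep the entry counts out of heal info for every volume and treat anything non-zero as a backlog that still needs to drain:

for i in $(gluster volume list); do
    echo "== $i =="
    # "Number of entries: N" is printed once per brick; all should read 0
    gluster volume heal "$i" info | grep -i "number of entries"
done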
2017 Nov 06
0
gfid entries in volume heal info that do not heal
... I've run the methods from "http://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/" with no results using random GFIDs. A full systemic run using the script from method 3 crashes with "too many nested links" error (or something similar). Whe...
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
..."Ian Halliday" <ihalliday at ndevix.com>; "gluster-user" <gluster-users at gluster.org>; "Nithya Balachandran" <nbalacha at redhat.com> Sent: 3/26/2018 2:37:21 AM Subject: Re: [Gluster-users] Sharding problem - multiple shard copies with mismatching gfids >Ian, > >Do you've a reproducer for this bug? If not a specific one, a general >outline of what operations where done on the file will help. > >regards, >Raghavendra > >On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa ><rgowdapp at redhat.com> wrote:...
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
..." <ihalliday at ndevix.com>; "gluster-user" <gluster-users at gluster.org>; "Nithya Balachandran" <nbalacha at redhat.com> > Sent: 3/26/2018 2:37:21 AM > Subject: Re: [Gluster-users] Sharding problem - multiple shard copies with > mismatching gfids > > Ian, > > Do you have a reproducer for this bug? If not a specific one, a general > outline of what operations were done on the file will help. > > regards, > Raghavendra > > On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com>...