similar to: Bitrot - Restoring bad file

Displaying 20 results from an estimated 6000 matches similar to: "Bitrot - Restoring bad file"

2018 Apr 18
0
Bitrot - Restoring bad file
On 04/17/2018 06:25 PM, Omar Kohl wrote:
> Hi,
>
> I have a question regarding bitrot detection.
>
> Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
>
> "gluster volume bitrot VOLNAME status" gets me the
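
For context, the restore procedure that manual describes amounts to removing the corrupted copy from its brick and healing from a good replica. A minimal sketch, assuming a replicated volume named VOLNAME and placeholder brick and gfid paths:

    # list the files the scrubber has flagged, with their GFIDs
    gluster volume bitrot VOLNAME status

    # on the brick holding the bad copy, remove the file and its
    # .glusterfs hardlink (both paths below are placeholders)
    rm /bricks/brick1/path/to/bad-file
    rm /bricks/brick1/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>

    # trigger a heal so the good replica is copied back
    gluster volume heal VOLNAME
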
2017 Sep 25
2
how to verify bitrot signed file manually?
Resending mail. On Fri, Sep 22, 2017 at 5:30 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> Ok, from the bitrot code I figured out gluster uses the sha256 hashing algo.
>
> Now coming to the problem: during a scrub run in my cluster some of my files
> were marked as bad on a few sets of nodes.
> I just wanted to confirm the bad files. So, I have used the "sha256sum" tool in
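
A minimal sketch of that manual check, assuming the file is read directly on the brick and that, as commonly reported for signature v2, the sha256 digest is the trailing 64 hex characters of the xattr value (an assumption worth verifying on your gluster version):

    # read the stored bitrot signature from the brick copy
    getfattr -n trusted.bit-rot.signature -e hex /bricks/brick1/path/to/file

    # compute the checksum to compare against the signature's digest
    sha256sum /bricks/brick1/path/to/file
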
2017 Sep 22
0
how to verify bitrot signed file manually?
Ok, from the bitrot code I figured out that gluster uses the sha256 hashing algorithm. Now coming to the problem: during a scrub run in my cluster some of my files were marked as bad on a few sets of nodes. I just wanted to confirm the bad files, so I used the "sha256sum" tool in Linux to manually get the file hashes. Here is the result: file-1 and file-2 were marked as bad by the scrub, and file-3 is healthy. file-1 sha256
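
One way to script that comparison over several files; a sketch assuming placeholder brick paths and the file names from this message:

    for f in file-1 file-2 file-3; do
      echo "== $f =="
      sha256sum "/bricks/brick1/data/$f"
      getfattr -n trusted.bit-rot.signature -e hex "/bricks/brick1/data/$f" 2>/dev/null
    done
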
2017 Oct 03
1
how to verify bitrot signed file manually?
My volume is a distributed disperse volume (8+2 EC). file1 and file2 are different files lying on the same brick. I am able to read the files from the mount point without any issue because, with EC, reads are served from the available blocks on the other nodes. My question is: "file1"'s sha256 value matches its bitrot signature value, but it is still marked as bad by the scrubber daemon. Why is that? On Fri, Sep
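
To see what the scrubber itself recorded, the bit-rot xattrs on the brick copy can be dumped; a sketch assuming a placeholder brick path (trusted.bit-rot.bad-file is, to the best of my knowledge, the marker the bit-rot stub checks):

    # dump all bit-rot related xattrs on the suspect copy
    getfattr -d -e hex -m 'trusted.bit-rot' /bricks/brick1/path/to/file1

    # check specifically for the bad-object marker
    getfattr -n trusted.bit-rot.bad-file -e hex /bricks/brick1/path/to/file1
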
2017 Sep 29
1
how to verify bitrot signed file manually?
Hi Amudhan, Sorry for the late response, as I was busy with other things. You are right, bitrot uses sha256 for the checksum. If file-1 and file-2 are marked bad, I/O on them should error out with EIO. If that is not happening, we need to look further into it. But what are the contents of file-1 and file-2 on the replica bricks? Are they matching? Thanks and Regards, Kotresh HR On Mon, Sep 25,
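
Comparing the replica copies directly is straightforward; a sketch assuming placeholder hostnames and brick paths:

    # run the same checksum against each replica brick; a mismatch
    # identifies the corrupted copy
    for h in server1 server2; do
      echo "== $h =="
      ssh "$h" sha256sum /bricks/brick1/path/to/file-1
    done
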
2017 Sep 21
2
how to verify bitrot signed file manually?
Hi, I have a file on my brick which was signed by bitrot, and later when running a scrub it was marked as bad. Now I want to verify the file again manually, just to clarify my doubt. How can I do this? regards Amudhan
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it. Like this:

# getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
# file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e
trusted.gfid=0x00462be83e6149318bdadae1645c639e
trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
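
The trusted.gfid2path value is hex-encoded ASCII of the form <parent-gfid>/<basename>, so it can be decoded directly; a sketch using the value shown above:

    # decode the gfid2path payload; this should print
    # 02c3790c-8f7b-4d6e-94d6-96912190d111/filelocking.py
    echo 30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079 | xxd -r -p; echo
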
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
Good morning, hope I got it right... using:
https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3.1/html/administration_guide/ch27s02

mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata

gfid 1:
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb
getfattr: Removing leading '/' from absolute path
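
The same lookup scales to a whole list of gfids; a sketch assuming a hypothetical gfids.txt (one gfid per line) and the aux-gfid mount from above:

    # resolve each gfid to its backend path(s) via the aux-gfid mount
    while read -r g; do
      getfattr -n trusted.glusterfs.pathinfo -e text "/mnt/workdata/.gfid/$g"
    done < gfids.txt
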
2024 Jan 24
1
Upgrade 10.4 -> 11.1 making problems
Hi, Can you find and check the files with gfids:
60465723-5dc0-4ebe-aced-9f2c12e52642
faf59566-10f5-4ddd-8b0c-a87bc6a334fb
Use the 'getfattr -d -e hex -m. ' command from https://docs.gluster.org/en/main/Troubleshooting/resolving-splitbrain/#analysis-of-the-output .
Best Regards, Strahil Nikolov

On Sat, Jan 20, 2024 at 9:44, Hu Bert <revirii at googlemail.com> wrote: Good morning,
2017 Jun 19
2
total outage - almost
Hi, we use a bunch of replicated gluster volumes as a backend for our backup. Yesterday I noticed that some synthetic backups failed because of I/O errors. Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads of I/O errors. The brick log file shows the below errors:

[2017-06-19 13:42:33.554875] E [MSGID: 116020] [bit-rot-stub.c:566:br_stub_check_bad_object]
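
Those br_stub_check_bad_object errors indicate reads hitting objects stamped as bad. A hedged sketch for enumerating such objects on a brick, assuming a placeholder brick path and that trusted.bit-rot.bad-file is the marker the stub sets:

    # print xattrs only for files carrying the bad-object marker;
    # files without it just produce (suppressed) errors
    find /bricks/brick1 -path '*/.glusterfs' -prune -o -type f \
      -exec getfattr -n trusted.bit-rot.bad-file -e hex {} \; 2>/dev/null
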
2017 Jun 19
0
total outage - almost
Hi, I checked the attributes of one of the files with I/O errors:

root at chastcvtprd04:~# getfattr -d -e hex -m - /data/glusterfs/Server_Standard/1I-1-14/brick/Server_Standard/CV_MAGNETIC/V_1050932/CHUNK_11126559/SFILE_CONTAINER_014
getfattr: Removing leading '/' from absolute path names
# file:
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim, Can you check whether the same hardlinks are present on both the bricks and whether both of them have a link count of 2? If the link count is 2, then "find <brickpath> -samefile <brickpath>/.glusterfs/<first two chars of gfid>/<next two chars of gfid>/<full gfid>" should give you the file path. Regards, Karthik On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
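
Spelled out with the gfid from this thread; a sketch assuming the brick path (taken from a later message in the thread) is /exp/b1/gv0:

    gfid=108694db-c039-4b7c-bd3d-ad6a15d811a2
    brick=/exp/b1/gv0

    # link count of the .glusterfs hardlink (expecting 2)
    stat -c '%h' "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"

    # resolve the gfid back to its user-visible path on the brick
    find "$brick" -samefile "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
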
2017 Oct 19
2
gfid entries in volume heal info that do not heal
I've been following this particular thread as I have a similar issue (RAID6 array failed out with 3 dead drives at once while a 12 TB load was being copied into one mounted space - what a mess). I have >700K GFID entries that have no path data. Example:

getfattr -d -e hex -m . .glusterfs/00/00/0000a5ef-5af7-401b-84b5-ff2a51c10421
# file: .glusterfs/00/00/0000a5ef-5af7-401b-84b5-
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!

From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re:
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data that supplies the path to the original. I have the inode from stat. Looking now to dig out the path/filename with xfs_db on the specific inodes individually. Is the hash computed from the filename alone or from <path>/filename, and if the latter, relative to where: /, or the path from the top of the brick? On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt, Can you also check the link count in the stat output of those hardlink entries in the .glusterfs folder on the bricks? If the link count is 1 on all the bricks for those entries, then they are orphaned entries and you can delete those hardlinks. To be on the safer side, have a backup before deleting any of the entries. Regards, Karthik On Fri, Oct 20, 2017 at 3:18 AM, Jim
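
A hedged sketch of that check, assuming a placeholder brick path and that gfid hardlinks live only under the two-hex-digit subdirectories of .glusterfs:

    # list regular files in .glusterfs with link count 1 (orphan candidates)
    find /bricks/brick1/.glusterfs/[0-9a-f][0-9a-f] -type f -links 1 -print

    # back them up before deleting anything, as advised above
    find /bricks/brick1/.glusterfs/[0-9a-f][0-9a-f] -type f -links 1 \
      | tar -czf /root/orphan-gfids.tar.gz -T -
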
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only on the brick that was live during the outage and concurrent file copy-in. The brick that was down at that time has no GFIDs that are not also on the up brick. As the bricks are 10TB, the find is going to be a long-running process. I'm running several finds at once with GNU parallel, but it will still take some time.
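
One way to drive those lookups with GNU parallel; a sketch assuming a hypothetical gfids.txt (one gfid per line), a placeholder brick path, and a job count tuned to the disks:

    # emit one find command per gfid and run eight of them at a time
    while read -r g; do
      echo "find /bricks/brick1 -samefile /bricks/brick1/.glusterfs/${g:0:2}/${g:2:2}/$g"
    done < gfids.txt | parallel -j 8
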
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.

>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.

[root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file:
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend.

On the bricks of the first replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2

On the fourth replica set:
getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3

Also run the "gluster volume
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to following up on this. Unfortunately, I had already copied and shipped the data to the second datacenter before copying the GFIDs, so I stumbled before the first hurdle! I have been using the scripts in extras/geo-rep provided for an earlier version upgrade. With a bit of tinkering, these have given me a file