Displaying 20 results from an estimated 447 matches for "bitrotting".
2018 Apr 17
2
Bitrot - Restoring bad file
Hi,
I have a question regarding bitrot detection.
Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
"gluster volume bitrot VOLNAME status" gets me the GFIDs that are corrupt and on which Host this happens.
As far as I can tell
2017 Sep 25
2
how to verify bitrot signed file manually?
resending mail.
On Fri, Sep 22, 2017 at 5:30 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> ok, from the bitrot code I figured out that gluster uses the sha256 hashing algo.
>
>
> Now coming to the problem: during a scrub run in my cluster, some of my files
> were marked as bad on a few sets of nodes.
> I just wanted to confirm the bad file, so I have used the "sha256sum" tool in
>
2018 Apr 18
0
Bitrot - Restoring bad file
On 04/17/2018 06:25 PM, Omar Kohl wrote:
> Hi,
>
> I have a question regarding bitrot detection.
>
> Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
>
> "gluster volume bitrot VOLNAME status" gets me the
2017 Oct 03
1
how to verify bitrot signed file manually?
my volume is a distributed disperse volume, 8+2 EC.
file1 and file2 are different files lying in the same brick. I am able to read
the file from the mount point without any issue because, thanks to EC, it reads
the rest of the available blocks on other nodes.
my question is: "file1"'s sha256 value matches the bitrot signature value, but
it is still marked as bad by the scrubber daemon. Why is that?
On Fri, Sep
2017 Sep 21
2
how to verify bitrot signed file manually?
Hi,
I have a file in my brick which was signed by bitrot, and later when
running a scrub it was marked as bad.
Now I want to verify the file again manually, just to clarify my doubt.
How can I do this?
regards
Amudhan
2017 Sep 22
0
how to verify bitrot signed file manually?
ok, from the bitrot code I figured out that gluster uses the sha256 hashing algo.
Now coming to the problem: during a scrub run in my cluster, some of my files
were marked as bad on a few sets of nodes.
I just wanted to confirm the bad file, so I have used the "sha256sum" tool in
Linux to manually get the file hash.
here is the result.
file-1, file-2 marked as bad by scrub and file-3 is healthy.
file-1 sha256
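For reference, the manual check described in this thread can be sketched like this (the file path and contents below are illustrative stand-ins, not from the original cluster):

```shell
# Create an illustrative file standing in for the brick copy of file-1.
printf 'hello' > /tmp/file-1

# Hash it the same way the poster did with sha256sum.
sha256sum /tmp/file-1

# On a real brick (as root) the stored signature could then be read for
# comparison; commented out because it needs an actual gluster brick:
# getfattr -n trusted.bit-rot.signature -e hex /path/to/brick/file-1
```

If the freshly computed hash differs from the digest recorded in the signature, the copy on that brick has changed since it was signed.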
2017 Sep 29
1
how to verify bitrot signed file manually?
Hi Amudhan,
Sorry for the late response, as I was busy with other things. You are right,
bitrot uses sha256 for the checksum.
If file-1 and file-2 are marked bad, the I/O should error out with EIO.
If that is not happening, we need to look further into it. But what are the
file contents of file-1 and file-2 on the replica bricks? Are they
matching?
Thanks and Regards,
Kotresh HR
On Mon, Sep 25,
2018 Apr 18
1
Bitrot strange behavior
Hi Sweta,
Thanks, this raises some more questions:
1. What is the reason for delaying signature creation?
2. As the same file (replicated or dispersed) having different signatures across bricks is by definition an error, it would be good to detect it during a scrub, or with a different tool. Is something like this planned?
Cheers
Cédric Lemarchand
> On 18 Apr 2018, at 07:53, Sweta
2007 Apr 15
3
Bitrot and panics
IIRC, uncorrectable bitrot, even in a nonessential file, detected by ZFS used to cause a kernel panic.
Bug ID 4924238 was closed with the claim that bitrot-induced panics are not a bug, but the description did mention an open bug ID 4879357, which suggests that it's considered a bug after all.
Can somebody clarify the intended behavior? For example, if I'm running Solaris in a VM,
2018 Apr 16
2
Bitrot strange behavior
Hello,
I am playing around with the bitrot feature and have some questions:
1. When a file is created, the "trusted.bit-rot.signature" attribute
seems to be created only approximately 120 seconds after the file's
creation (the cluster is idle and there is only one file living on it). Why?
Is there a way to have this attribute generated at the same time as
the file creation?
2. corrupting a file
2017 Nov 06
0
how to verify bitrot signed file manually?
Any update?
On Fri, Oct 13, 2017 at 1:14 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> any update?
>
> why is it marked bad?
>
> Any way to find out what happened to the file?
>
>
> On Tue, Oct 3, 2017 at 12:44 PM, Amudhan P <amudhan83 at gmail.com> wrote:
>
>>
>> my volume is distributed disperse volume 8+2 EC.
>> file1 and file2 are
2018 Apr 18
0
Bitrot strange behavior
Hi Cedric,
Any file is picked up for signing by the bitd process after the
predetermined wait of 120 seconds. This default value is captured in the
volume option 'features.expiry-time' and is configurable - in your case,
it can be set to 0 or 1.
Point 2 is correct. A file corrupted before the bitrot signature is
generated will not be successfully detected by the scrubber. That would
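Based on the volume option named above, tuning the delay would look something like this (a sketch assuming the stock gluster CLI; VOLNAME is a placeholder):

```shell
# Inspect the current signing delay (defaults to 120 seconds):
gluster volume get VOLNAME features.expiry-time

# Shorten it so new files are signed almost immediately:
gluster volume set VOLNAME features.expiry-time 1
```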
2017 Jun 19
2
total outage - almost
Hi,
we use a bunch of replicated gluster volumes as a backend for our
backup. Yesterday I noticed that some synthetic backups failed because
of I/O errors.
Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads
of I/O errors.
The brick log file shows the below errors
[2017-06-19 13:42:33.554875] E [MSGID: 116020]
[bit-rot-stub.c:566:br_stub_check_bad_object]
2017 Jun 19
0
total outage - almost
Hi,
I checked the attributes of one of the files with I/O errors
root at chastcvtprd04:~# getfattr -d -e hex -m -
/data/glusterfs/Server_Standard/1I-1-14/brick/Server_Standard/CV_MAGNETIC/V_1050932/CHUNK_11126559/SFILE_CONTAINER_014
getfattr: Removing leading '/' from absolute path names
# file:
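A small sketch of how one might compare such a hex xattr dump against a freshly computed checksum. The header bytes below are made up, and treating the trailing 32 bytes of `trusted.bit-rot.signature` as the sha256 digest is an assumption for illustration, not something confirmed from the gluster sources:

```shell
# Compute the checksum of (illustrative) file contents.
digest=$(printf 'some file contents' | sha256sum | cut -d' ' -f1)

# Fabricated xattr dump: a made-up 6-byte header followed by the digest.
# (Assumption: the digest occupies the trailing 32 bytes of the blob.)
xattr_value="0x020100000000${digest}"

# Extract the trailing 64 hex characters (32 bytes) and compare.
tail_digest=$(printf '%s' "$xattr_value" | tail -c 64)
[ "$tail_digest" = "$digest" ] && echo "digest matches stored signature"
```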
2015 Nov 04
3
Nouveau for FreeBSD
On 04/11/15 09:08, cbergstrom at pathscale.com wrote:
> Is anyone actually and or actively working on this?
> Github.com/pathscale/pscnv is totally bitrot but waaay more portable
> base. Nouveau made hard Linux assumptions that will be difficult to
> overcome afaik.
As pointed out by Ilia, this is not true anymore. Nouveau can also
partially run in the userspace, the hard
2023 Jan 19
1
really large number of skipped files after a scrub
Hi,
Just to follow up on my first observation from the email from December:
automatic scheduled scrubs that do not happen. We have now upgraded glusterfs
from 7.4 to 10.1, and we see that the automated scrubs ARE running now.
Not sure why they didn't run in 7.4, but issue solved. :-)
MJ
On Mon, 12 Dec 2022 at 13:38, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb at gmail.com>
wrote:
> Hi,
>
> I
2011 Mar 31
3
[LLVMdev] LiveValues removal
I've read that LiveValues has been removed from trunk. Did it bitrot, or
was it simply removed because a replacement is available?
If it's the former, what caused the bitrotting? If it's the latter,
what's the replacement? (I've found LiveVariables, but I'm not sure it
can be used in a ModulePass.)
b.r.
--
Carlo Alberto Ferraris <cafxx at strayorange.com
<mailto:cafxx at strayorange.com>>
+39 333 7643 235 XMPP <xmpp:cafxx at strayorange.co...
2015 Nov 04
3
Nouveau for FreeBSD
On 04/11/15 10:38, C Bergström wrote:
> On Wed, Nov 4, 2015 at 3:33 PM, Martin Peres <martin.peres at free.fr> wrote:
>> On 04/11/15 09:08, cbergstrom at pathscale.com wrote:
>>
>> Is anyone actually and or actively working on this?
>> Github.com/pathscale/pscnv is totally bitrot but waaay more portable base.
>> Nouveau made hard Linux assumptions that will
2003 May 14
2
[Bug 188] pam_chauthtok() is called too late
http://bugzilla.mindrot.org/show_bug.cgi?id=188
djm at mindrot.org changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |RESOLVED
Resolution| |WONTFIX
------- Additional Comments From djm at mindrot.org 2003-05-14 22:32
2008 Jun 16
1
[LLVMdev] [llvm-announce] llvm and simplescalar
It used to. alphasim (the validated alpha model based on simplescalar)
is better, though. Also, since no one has needed to run simplescalar
experiments, the alpha backend has bitrotted some. The last version
that I know worked with most of SPEC was llvm 1.8 or so. You need to
get or write an elf64 loader and fix a couple of instruction
implementations in simplescalar to get it to run linux binaries