
Displaying 20 results from an estimated 2000 matches similar to: "[blog post] Trying to corrupt data in a ZFS mirror"

2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails of the logs. The logs are attached to this mail and here is the other information:

# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
2013 Jun 17
0
gluster client timeouts / found conflict
Hi list, recently I've experienced more and more input/output errors from my most write-heavy gluster filesystem. The logfile on the gluster servers shows nothing, but the client(s) that get the input/output errors (and timeouts) will, as far as I can tell, get errors such as: [2013-06-14 15:55:56] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/369/60702093) inode (ptr=0x1efd440,
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
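The `.glusterfs/12/67/…` path in that log follows the GlusterFS backend naming scheme: every file is hard-linked under the brick at `.glusterfs/<first two hex chars of the gfid>/<next two>/<full gfid>`. A minimal sketch of that mapping, using the gfid and brick path from the log above (`gfid_to_path` is a made-up helper, not a gluster tool):

```shell
# Compute where a given gfid lives under a brick's .glusterfs directory.
gfid_to_path() {
    gfid=$1; brick=$2
    p1=$(printf '%s' "$gfid" | cut -c1-2)   # first two hex chars
    p2=$(printf '%s' "$gfid" | cut -c3-4)   # next two hex chars
    printf '%s/.glusterfs/%s/%s/%s\n' "$brick" "$p1" "$p2" "$gfid"
}

gfid_to_path 126759f6-8364-453c-9a9c-d9ed39198b7a /data/myvol-private/brick
# → /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a
```

This is handy when a log names only a gfid and you want to `stat` the backend hard link directly on the brick.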
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
2017 Nov 07
0
error logged in fuse-mount log file
Hi, I am using glusterfs 3.10.1 and I am seeing the below message in the fuse-mount log file. What does this error mean? Should I worry about this, and how do I resolve it? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid
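A first step with recurring warnings like this one is to see which MSGIDs dominate the fuse-mount log. A small self-contained sketch (a real log lives under /var/log/glusterfs/; a two-line sample stands in here so the pipeline runs as-is):

```shell
# Build a tiny sample log in the format gluster uses.
cat > sample-fuse.log <<'EOF'
[2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed
[2017-11-07 12:01:02.000001] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed
EOF

# Tally warning lines by MSGID, most frequent first.
awk -F'MSGID: ' '/ W \[/ {split($2, a, "]"); n[a[1]]++}
                 END {for (m in n) print n[m], m}' sample-fuse.log | sort -rn
# → 2 109005

rm -f sample-fuse.log
```

Point the same awk at the real fuse-mount log to see at a glance whether one message (and hence one subsystem) accounts for the growth.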
2017 Nov 10
0
Error logged in fuse-mount log file
Hi, Comments inline. Regards, Nithya On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: > Resending mail from another id; not sure whether the mail reached the mailing list. > > > ---------- Forwarded message ---------- > From: *Amudhan P* <amudhan83 at gmail.com> > Date: Tue, Nov 7, 2017 at 6:43 PM > Subject: error logged in fuse-mount log
2018 Apr 05
0
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii <archon810 at gmail.com> wrote: > Hi, > > I noticed when I run gluster volume heal data info, the following message > shows up in the log, along with other stuff: > > [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory >> selfheal failed: Unable to form layout for directory / > > > I'm
2017 Nov 09
2
Error logged in fuse-mount log file
Resending mail from another id; not sure whether the mail reached the mailing list. ---------- Forwarded message ---------- From: Amudhan P <amudhan83 at gmail.com> Date: Tue, Nov 7, 2017 at 6:43 PM Subject: error logged in fuse-mount log file To: Gluster Users <gluster-users at gluster.org> Hi, I am using
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote: > Again, thanks, that worked and I now have no more unsynced files. > > You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. I don't think there will be another 3.12 release. Adding Karthik to see
2018 Jan 17
1
Gluster endless heal
Hi, I have an issue with Gluster 3.8.14. The cluster is 4 nodes with replica count 2; one of the nodes went offline for around 15 minutes, and when it came back online, self-heal triggered and just did not stop afterward. It's been running for 3 days now, maxing out the bricks' utilization without actually healing anything. The bricks are all SSDs, and the logs of the source node are spamming with
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File:
2018 Apr 05
2
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
Hi, I noticed when I run gluster volume heal data info, the following message shows up in the log, along with other stuff: [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal > failed: Unable to form layout for directory / I'm seeing it on Gluster 4.0.1 and 3.13.2. Here's the full log after running heal info:
2017 Jul 21
1
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > But it does say something. All these gfids of completed heals in the log > below are for the ones that you have given the getfattr output of. So > what is likely happening is there is an intermittent connection problem > between your mount and the brick process, leading to pending heals again >
2017 Nov 13
0
Error logged in fuse-mount log file
Adding Ashish. Hi Amudhan, Can you check the gfids for every dir in that hierarchy? Maybe one of the parent dirs has a gfid mismatch. Regards, Nithya On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote: > Hi Nithya, > > I have checked the gfid in all the bricks in the disperse set for the folder. It > is all the same; there is no difference. > > regards >
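Checking "every dir in that hierarchy" means comparing `trusted.gfid` (e.g. `getfattr -n trusted.gfid -e hex <brick>/<dir>`, as root on each brick) for each ancestor of the failing path. A small sketch that just enumerates the ancestors to check, using the deep path from this thread; `walk_up` is a hypothetical helper:

```shell
# Print a path and each of its parent directories up to (not including) /.
walk_up() {
    p=$1
    while [ "$p" != "/" ] && [ -n "$p" ]; do
        printf '%s\n' "$p"
        p=$(dirname "$p")
    done
}

walk_up /fol1/fol2/fol3
# → /fol1/fol2/fol3
#   /fol1/fol2
#   /fol1
```

Feeding each emitted path into a getfattr loop per brick makes a parent-dir gfid mismatch easy to spot by eye.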
2017 Nov 19
0
gluster share as home
Hi Gluster Group, I've been using gluster as a storage back end for oVirt for some years now without the slightest hitch. Encouraged by this, I wanted to switch our home share from NFS over to a replica 3 gluster volume as well. Since small-file performance was not particularly good, I applied all the performance-enhancing settings I could find in the gluster blog and on other sites. Those
2017 Jul 27
0
GFID is null after adding large amounts of data
Hi Gluster Community, we are seeing some problems when adding multiple terabytes of data to a 2-node replicated GlusterFS installation. The version is 3.8.11 on CentOS 7. The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware. After a restart of node-1 we see that the log files are growing to multiple gigabytes a day. Also there seem to be problems
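While hunting the root cause, rotation keeps multi-gigabyte-per-day logs from filling the disk. A sketch of a logrotate policy, assuming the default /var/log/glusterfs location (many distro packages already ship a glusterfs logrotate file; adjust that one rather than adding a duplicate):

```
/var/log/glusterfs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` avoids having to signal the gluster daemons to reopen their log files after each rotation, at the cost of possibly losing a few lines during the copy.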
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question, but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
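The mechanics behind that truncated command are two standard tools: inspect pending `trusted.afr.*` xattrs with getfattr and drop the stale one with `setfattr -x`, run as root directly on the brick path. A self-contained demo of those two commands using the unprivileged `user.*` namespace on a scratch file, since `trusted.*` needs root and a real brick (the xattr name and value here are invented for illustration):

```shell
f=$(mktemp ./afr-demo.XXXXXX)

# On a real brick this would be trusted.afr.<volume>-client-N, maintained
# by AFR; we plant a user.* stand-in so the demo runs without root.
setfattr -n user.afr.demo-client-0 -v 0x000000010000000000000000 "$f"
getfattr -d -m '^user\.afr' "$f"          # shows the pending-style xattr

setfattr -x user.afr.demo-client-0 "$f"   # remove it, as in split-brain cleanup
getfattr -d -m '^user\.afr' "$f"          # prints nothing now

rm -f "$f"
```

After removing the stale xattr on the accused brick, `gluster volume heal <vol> info` should show the entry drain on the next heal pass.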
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone: We have a glusterfs cluster, version 3.2.7. The volume info is as below:

Volume Name: gfs1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 94 x 3 = 282
Transport-type: tcp

We native-mount the volume on all nodes. When we access the file "/XMTEXT/gfs1_000/000/000/095" on one node, the error is split brain, while we can access the same file on
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again, thanks, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2017 Nov 13
2
Error logged in fuse-mount log file
Hi Nithya, I have checked the gfid in all the bricks in the disperse set for the folder. It is all the same; there is no difference. regards Amudhan P On Fri, Nov 10, 2017 at 9:02 AM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > Comments inline. > > Regards, > Nithya > > On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: >