Displaying 20 results from an estimated 8000 matches similar to: "Split-brain scenario"
2008 Aug 15
6
Add/remove new server volumes on the fly?
Hi list,
1. While learning GlusterFS, I was wondering if it's possible to add
server volumes to increase the space capacity of my cluster "on the FLY"?
I mean, a hot upgrade.
2. Second, when using a "file replicating strategy" (scheduler), is it
possible to remove a server node without stopping the whole cluster
(e.g. hardware maintenance reasons, adding more disk/RAM to the
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
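For context, a minimal sketch of that xattr-based workflow, assuming a replica 2 volume named "vol" with the file present as /gfs/brick-a/a and /gfs/brick-b/a (brick paths follow the example above; adapt them to your setup, and compare with the linked guide before changing anything):

# getfattr -d -m . -e hex /gfs/brick-a/a
# getfattr -d -m . -e hex /gfs/brick-b/a
(in a data split-brain each brick's trusted.afr.vol-client-* counters accuse the other brick)
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000000000000 /gfs/brick-b/a
(to keep brick-a's copy, zero the accusation against brick-a that is stored on brick-b)
# gluster volume heal vol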
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed but they are not being healed automatically.
All nodes were always online and there
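If healing never happens on its own, a quick sanity check looks roughly like this (hedged sketch; the volume name is taken from later messages in this thread):

# gluster volume heal myvol-private info
# gluster volume heal myvol-private info split-brain
# gluster volume heal myvol-private
(the last command triggers an index heal; "gluster volume heal myvol-private full" forces a full crawl)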
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
2010 Nov 11
1
Possible split-brain
Hi all,
I have 4 glusterd servers running a single glusterfs volume. The volume was created using the gluster command line, with no changes from default. The same machines all mount the volume using the native glusterfs client:
[root at localhost ~]# gluster volume create datastore replica 2 transport tcp 192.168.253.1:/glusterfs/primary 192.168.253.3:/glusterfs/secondary
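For reference, the matching native-client mount on each machine would look something like this (the mount point here is only an assumption):

# mount -t glusterfs 192.168.253.1:/datastore /mnt/datastore
(or the equivalent fstab line: 192.168.253.1:/datastore /mnt/datastore glusterfs defaults,_netdev 0 0)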
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
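A hedged sketch of that step (the file path is a placeholder; remove only the trusted.afr attributes that getfattr actually shows on the 3rd-node brick):

# getfattr -d -m . -e hex /data/myvol-private/brick/<path-to-file>
# setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/<path-to-file>
# setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/<path-to-file>
# gluster volume heal myvol-private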
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13.
------- Original Message -------
On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote:
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
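For completeness, the checks referred to above amount to roughly the following on each node's brick, plus one stat on a client (paths and mount point are shortened placeholders):

# stat /data/myvol-private/brick/<path-to-problematicfile>
# getfattr -d -m . -e hex /data/myvol-private/brick/<path-to-problematicfile>
(and on the FUSE mount)
# stat /mnt/myvol-private/<path-to-problematicfile>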
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
2017 Jun 15
2
Interesting split-brain...
I am new to gluster but already like it. I did maintenance last week
where I shut down both nodes (one after the other). I had many files that
needed to be healed after that. Everything worked well, except for 1 file.
It is in split-brain, with 2 different GFIDs. I read the documentation but
it only covers the cases where the GFID is the same on both bricks. BTW, I
am running Gluster 3.10.
Here
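When the two copies carry different GFIDs, a quick way to confirm it (hedged sketch; brick paths are placeholders) is to read the trusted.gfid xattr of the file on each brick and compare:

# getfattr -n trusted.gfid -e hex /<brick1-path>/path/to/file
# getfattr -n trusted.gfid -e hex /<brick2-path>/path/to/file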
2017 Jun 15
0
Interesting split-brain...
Hi Ludwig,
There is no way to resolve gfid split-brains with a type mismatch automatically. You have
to do it manually by following the steps in [1].
In case of a type mismatch it is recommended to resolve it manually. But for
a gfid mismatch only, in 3.11 we have a way to
resolve it by using the *favorite-child-policy* option.
Since the file is not important, you can go with deleting it.
[1]
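For the 3.11+ gfid-mismatch case mentioned above, the policy is set per volume; a hedged example (the volume name is a placeholder, and "mtime" is just one of the accepted policies) would be:

# gluster volume set <volume-name> cluster.favorite-child-policy mtime
# gluster volume heal <volume-name>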
2018 Jan 24
1
Split brain directory
Hello,
I'm trying to fix an issue with a directory split-brain on gluster 3.10.3. The
effect is that a specific file in this split directory randomly becomes
unavailable on some clients.
I have gathered all the informations on this gist:
https://gist.githubusercontent.com/lucagervasi/534e0024d349933eef44615fa8a5c374/raw/52ff8dd6a9cc8ba09b7f258aa85743d2854f9acc/splitinfo.txt
I discovered the
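A first check for a directory in this state (hedged; the brick path and volume name are placeholders) is to compare its afr changelog xattrs on every brick and ask the CLI what it sees:

# getfattr -d -m . -e hex /<brick-path>/<split-directory>
# gluster volume heal <volume-name> info split-brain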
2013 Dec 09
0
Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)
Hello,
I'm trying to build a replica volume on two servers.
The servers are blade6 and blade7 (there is another peer, blade1, but with
no volumes).
The volume seems ok, but I cannot mount it from NFS.
Here are some logs:
[root@blade6 stor1]# df -h
/dev/mapper/gluster_stor1 882G 200M 837G 1% /gluster/stor1
[root@blade7 stor1]# df -h
/dev/mapper/gluster_fast
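Since Gluster's built-in NFS server only speaks NFSv3 over TCP, the mount attempt would have to look roughly like this (hedged; the volume name and mount point are placeholders, and the NFS Server process must show as Online in volume status first):

# gluster volume status <volume-name>
# mount -t nfs -o vers=3,mountproto=tcp blade6:/<volume-name> /mnt/test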
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that
will help to understand more about these scenarios and why gluster
recommends replica 3 or arbiter volumes.
Regards
Rafi KC
On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote:
> Hi Ludwig,
>
> There is no way to resolve gfid split-brains with type mismatch. You
> have to do it manually by following the steps in [1].
2011 Aug 21
2
Fixing split brain
Hi
Consider the typical split brain situation: reading from the file gets EIO,
logs say:
[2011-08-21 13:38:54.607590] W [afr-open.c:168:afr_open]
0-gfs-replicate-0: failed to open as split brain seen, returning EIO
[2011-08-21 13:38:54.607895] W [fuse-bridge.c:585:fuse_fd_cbk]
0-glusterfs-fuse: 1371456: OPEN()
/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub => -1
(Input/output
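Before deciding which copy to keep, it usually helps to compare the two bricks' copies of the affected file directly (hedged sketch; <brick-path> is a placeholder for each replica's brick root):

# getfattr -d -m . -e hex <brick-path>/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
# stat <brick-path>/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
# md5sum <brick-path>/manu/netbsd/usr/src/gnu/dist/groff/doc/Makefile.sub
(run on each replica and compare sizes, timestamps and checksums)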
2017 Sep 27
2
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
HI gluster experts,
I have met a tough problem with a "split-brain" issue. Sometimes, after a hard reboot, we find some files in split-brain; however, neither the files nor their parent directory show up in the command "gluster volume heal <volume-name> info", and there is also no entry in the .glusterfs/indices/xattrop directory. Can you help shed some light on this issue? Thanks!
Following is some info from
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey,
Can you give us the volume info output for this volume?
Why are you not able to get the xattrs from the arbiter brick? It is the same
way as you do it on the data bricks.
The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in
the getxattr outputs you have provided.
Did you do a remove-brick and add-brick any time? Otherwise it will be
trusted.afr.virt_images-client-{0,1,2} usually.
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi,
To resolve the gfid split-brain you can follow the steps at [1].
Since we don't have the pending markers set on the files, they are not showing
in the heal info.
To debug this issue, we need some more data from you. Could you provide these
things?
1. volume info
2. mount log
3. brick logs
4. shd log
May I also know which version of gluster you are running? From the info you
have provided it
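For anyone gathering the data listed above, on a default install the pieces usually live here (paths are assumptions for a standard package install):

# gluster --version
# gluster volume info <volume-name>
/var/log/glusterfs/<mount-point-with-slashes-as-dashes>.log   (mount/client log)
/var/log/glusterfs/bricks/                                    (brick logs)
/var/log/glusterfs/glustershd.log                             (self-heal daemon log)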