Displaying 20 results from an estimated 40000 matches similar to: "Avoid Split-brain and other stuff"
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
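As a rough sketch of that workflow (the brick path and file are the placeholders from the example above, and the volume name is illustrative), the changelog xattrs are usually dumped in hex first so you can see which bricks blame which, and a heal is triggered once they have been adjusted:
# getfattr -d -m . -e hex /gfs/brick-b/a
# gluster volume heal <volname>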
2018 Apr 27
3
How to set up a 4 way gluster file system
Hi,
I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to
set these up in a raid 10, which will give me 2TB usable. So mirrored and
concatenated?
The command I am running is as per the documents, but I get a warning error;
how do I get this to proceed, please, as the documents do not say.
gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
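For what it's worth, a "raid10"-like layout over four servers would be a 2 x 2 distributed-replicate volume, created with a command along these lines (the glusterp2-glusterp4 hostnames and brick paths are assumed to follow the same pattern as above):
# gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
With replica 2, gluster prompts with a warning about the split-brain risk, which is presumably the warning mentioned above; the replies elsewhere in this listing suggest replica 3 or an arbiter instead.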
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik,
Thanks for providing the required outputs. See my replies inline.
On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote:
> Hi Karthik and Ben,
>
> I'll try and reply to you inline.
>
> On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com>
> wrote:
> > Hey,
> >
> > Can you give us the
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey,
Can you give us the volume info output for this volume?
Why are you not able to get the xattrs from the arbiter brick? You get them the same
way as you do on the data bricks.
The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in
the getxattr outputs you have provided.
Did you do a remove-brick and add-brick at any time? Otherwise they would usually be
trusted.afr.virt_images-client-{0,1,2}.
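A quick sketch of reading those changelog xattrs directly from the arbiter brick (the brick path is taken from the volume info above; <file> is a placeholder):
# getfattr -d -m . -e hex /data/virt_images/brick/<file>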
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi,
I have 4 nodes, so a quorum would be 3 of 4. The question is, I suppose, why does
the documentation give this command as an example without qualifying it?
So am I running the wrong command? I want a "raid10".
On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hi,
>
> With replica 2 volumes one can easily end up in split-brains if there are
2017 Jul 31
3
gluster volume 3.10.4 hangs
Hi folks,
I'm running a simple gluster setup with a single volume replicated across two servers, as follows:
Volume Name: gv0
Type: Replicate
Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sst0:/var/glusterfs
Brick2: sst2:/var/glusterfs
Options Reconfigured:
cluster.self-heal-daemon: enable
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben,
I'll try and reply to you inline.
On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hey,
>
> Can you give us the volume info output for this volume?
# gluster volume info virt_images
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks:
2018 Apr 27
0
How to set up a 4 way gluster file system
Hi,
With replica 2 volumes one can easily end up in split-brain if there are
frequent disconnects and high I/O going on.
If you use replica 3 or arbiter volumes, they will guard you by using the
quorum mechanism, giving you both consistency and availability.
But in replica 2 volumes, quorum does not make sense, since it needs both
nodes up to guarantee consistency, which costs availability.
If
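As a sketch, the arbiter configuration recommended here is created along these lines (volume name, hostnames, and brick paths are placeholders):
# gluster volume create <volname> replica 3 arbiter 1 host1:/bricks/brick1 host2:/bricks/brick1 host3:/bricks/brick1
The third brick then stores only file names and metadata, so quorum can be kept without a third full copy of the data.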
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
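For reference, the client-quorum behaviour being quoted is controlled by the cluster.quorum-type volume option; a minimal sketch of checking and changing it (the volume name is a placeholder):
# gluster volume get <volname> cluster.quorum-type
# gluster volume set <volname> cluster.quorum-type auto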
2017 Aug 25
2
GlusterFS as virtual machine storage
On 23-08-2017 18:51, Gionatan Danti wrote:
> On 23-08-2017 18:14, Pavel Szalbot wrote:
>> Hi, after many VM crashes during upgrades of Gluster, losing network
>> connectivity on one node etc. I would advise running replica 2 with
>> arbiter.
>
> Hi Pavel, this is bad news :(
> So, in your case at least, Gluster was not stable? Something as simple
> as an
2017 Aug 25
0
GlusterFS as virtual machine storage
On 25-08-2017 08:32, Gionatan Danti wrote:
> Hi all,
> any other advice from those who use (or do not use) Gluster as a replicated
> VM backend?
>
> Thanks.
Sorry, I was not seeing messages because I was not subscribed to the
list; I was reading it from the web.
So it seems that Pavel and WK have vastly different experience with
Gluster. Any plausible cause for that difference?
> WK
2017 Aug 25
2
GlusterFS as virtual machine storage
On 8/25/2017 12:56 AM, Gionatan Danti wrote:
>
>
>> WK wrote:
>> 2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2
>> node with a VM
>
> This is true even if I manage locking at application level (via
> virlock or sanlock)?
We ran Rep2 for years on 3.4. It does work if you are really, really
careful. But in a crash on one side, you might
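A minimal sketch of the "third node" route suggested here, converting an existing replica 2 volume to replica 3 with an arbiter (the hostname, brick path, and volume name are placeholders, and this assumes a Gluster release with arbiter support, 3.8 or later):
# gluster volume add-brick <volname> replica 3 arbiter 1 arbiterhost:/bricks/arbiter/<volname>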
2012 Nov 26
1
Heal not working
Hi,
I have a volume of 12 bricks with 3x replication (no stripe). We had to take one server down (2 bricks per server, but configured such that the first brick comes from every server, then the second brick from every server, so no server should appear more than once in any replica group) for maintenance. The server was down for 40 minutes, and after it came up I saw that gluster volume heal home0
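For checking on the heal in that situation, the usual commands are roughly as follows (the volume name home0 is taken from the snippet above):
# gluster volume heal home0 info
# gluster volume heal home0 info split-brain
# gluster volume heal home0 full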
2017 Sep 22
2
AFR: Fail lookups when quorum not met
Hello,
In AFR we currently allow look-ups to pass through without taking into
account whether the lookup is served from the good or bad brick. We
always serve from the good brick whenever possible, but if there is
none, we just serve the lookup from one of the bricks that we got a
positive reply from.
We found a bug [1] due to this behavior where the iatt values returned
in the lookup call
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi,
Some questions:
-Did you by any chance change the cluster.quorum-type option from the
default values?
-Is filename.shareKey supposed to be an empty file? It looks like the file
was fallocated with the keep-size option but never written to. (On the 2
data bricks, stat output shows Size = 0 but non-zero Blocks, and yet a
'regular empty file'.)
-Do you have some sort of a
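A sketch of how that information is typically gathered, run against the file on each brick (the brick path is a placeholder):
# stat /data/<volname>/brick/<path>/filename.shareKey
# getfattr -d -m . -e hex /data/<volname>/brick/<path>/filename.shareKey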
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster.
NODE 1:
File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
Size: 0 Blocks: 38 IO Block: 131072 regular
2017 Oct 26
0
not healing one file
Hi Richard,
Thanks for the information. As you said, there is a gfid mismatch for the
file.
On brick-1 & brick-2 the gfids are the same & on brick-3 the gfid is different.
This is not considered a split-brain because we have two good copies here.
Gluster 3.10 does not have a method to resolve this situation other than
manual intervention [1]. Basically, what you need to do is remove the
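Following the manual procedure referenced as [1], the usual sketch is to remove both the bad copy and its gfid hard link under .glusterfs on the brick holding the mismatching gfid, then trigger a heal (the paths and the gfid shown are placeholders):
# rm /data/brick/<path>/<file>
# rm /data/brick/.glusterfs/ab/cd/abcdef01-2345-6789-abcd-ef0123456789
# gluster volume heal <volname>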
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
which states that "In a replica 2 volume... If we set the client-quorum
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi,
Please find below the answers to your questions:
1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume:
Option Value
------ -----
cluster.quorum-type none
2) The .shareKey