similar to: Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware

Displaying 20 results from an estimated 3000 matches similar to: "Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware"

2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split-brain on replica 2: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html It should be pretty much the same for replica 3; you change the xattrs with something like: # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a When I try to decide which
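A minimal sketch of that manual approach, assuming a file at the relative path "a" inside each brick (brick paths and the client index below are illustrative, not taken from the thread; which xattr to reset, and on which brick, depends on which copy you keep, per the guide linked above):

    # Inspect the AFR changelog xattrs on each brick to see which copy blames which
    getfattr -d -m . -e hex /gfs/brick-a/a
    getfattr -d -m . -e hex /gfs/brick-b/a

    # Clear the pending counters for the copy you decided to discard, then trigger a heal
    setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000000000000 /gfs/brick-b/a
    gluster volume heal vol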
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik, Thanks for providing the required outputs. See my replies inline. On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote: > Hi Karthik and Ben, > > I'll try and reply to you inline. > > On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> > wrote: > > Hey, > > > > Can you give us the
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey, Can you give us the volume info output for this volume? Why are you not able to get the xattrs from the arbiter brick? You get them the same way as on the data bricks. The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in the getxattr outputs you have provided. Did you do a remove-brick and add-brick at any time? Otherwise they would usually be trusted.afr.virt_images-client-{0,1,2}.
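For reference, reading the changelog xattrs on the arbiter is the same command as on the data bricks; a sketch with made-up brick paths (run against the path inside the brick, not the FUSE mount):

    # On the arbiter node
    getfattr -d -m . -e hex /bricks/arbiter/virt_images/path/to/file.img
    # On a data node
    getfattr -d -m . -e hex /bricks/data/virt_images/path/to/file.img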
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben, I'll try and reply to you inline. On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hey, > > Can you give us the volume info output for this volume? # gluster volume info virt_images Volume Name: virt_images Type: Replicate Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594 Status: Started Snapshot Count: 2 Number of Bricks:
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik, Thanks for the info. Maybe the documentation should be updated to explain the different AFR versions; I know I was confused. Also, when looking at the changelogs from my three bricks before fixing: Brick 1: trusted.afr.virt_images-client-1=0x000002280000000000000000 trusted.afr.virt_images-client-3=0x000000000000000000000000 Brick 2:
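As a side note on reading those values (this decoding is general AFR behaviour, not something stated in the snippet): each trusted.afr.* value packs three 32-bit big-endian counters, for pending data, metadata and entry operations, in that order. So 0x000002280000000000000000 above means 0x228 = 552 pending data operations and nothing pending for metadata or entries. A hypothetical way to dump them:

    # Dump all AFR changelog xattrs in hex for a file on a brick
    # (path is a placeholder; run against the brick path, not the mount point)
    getfattr -d -m . -e hex /bricks/brick1/virt_images/vm.img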
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey Henrik, Good to know that the issue got resolved. I will try to answer some of the questions you have. - The time taken to heal a file depends on its size. That's why you were seeing some delay in getting everything back to normal in the heal info output. - You did not hit the split-brain situation. In a split-brain, all the bricks blame each other. But in your case the
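A quick way to check both conditions mentioned above (a sketch, using the virt_images volume name from earlier in the thread):

    # Files that still need healing; entries disappear as heals complete
    gluster volume heal virt_images info

    # Files that are actually in split-brain (copies blaming each other)
    gluster volume heal virt_images info split-brain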
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote: Hey Alex, With two nodes the setup works, but both sides go down when one node is missing. Still, I set the two params below to none and that solved my issue: cluster.quorum-type: none cluster.server-quorum-type: none Thank you for that. Cheers, Tom > Hi, > > You need 3 nodes at least to have quorum enabled. In 2 node setup you > need to
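For completeness, those two options can be set and verified from the CLI; a sketch assuming the volume is called gv01 (note this trades split-brain protection for availability when one node is down):

    # Disable client-side and server-side quorum on a 2-node replica
    gluster volume set gv01 cluster.quorum-type none
    gluster volume set gv01 cluster.server-quorum-type none

    # Confirm the values
    gluster volume get gv01 cluster.quorum-type
    gluster volume get gv01 cluster.server-quorum-type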
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote: > On 4/9/2018 2:45 AM, Alex K wrote: > Hey Alex, > > With two nodes, the setup works but both sides go down when one node is > missing. Still I set the below two params to none and that solved my issue: > > cluster.quorum-type: none > cluster.server-quorum-type: none > > yes this disables
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All, In a two-node glusterfs setup with one node down, I can't use the second node to mount the volume. I understand this is expected behaviour? Is there any way to allow the secondary node to function and then replicate what changed to the first (primary) when it's back online? Or should I just go for a third node to allow for this? Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi, You need at least 3 nodes to have quorum enabled. In a 2-node setup you need to disable quorum so as to still be able to use the volume when one of the nodes goes down. On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote: > Hey All, > > In a two node glusterfs setup, with one node down, can't use the second > node to mount the volume. I understand this is
2017 Nov 15
1
unable to remove brick, please help
Hi, I am trying to remove a brick from a server which is no longer part of the gluster pool, but I keep running into errors for which I cannot find answers on Google. [root at virt2 ~]# gluster peer status Number of Peers: 3 Hostname: srv1 Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb State: Peer in Cluster (Disconnected) Hostname: srv3 Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825 State: Peer in
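The usual sequence for this situation is roughly the following; a sketch with placeholder names, since the volume name, replica count and brick path are not shown in the snippet:

    # Drop the dead server's brick, reducing the replica count accordingly
    gluster volume remove-brick myvol replica 2 srv1:/bricks/myvol/brick force

    # Then remove the disconnected peer from the pool and re-check
    gluster peer detach srv1
    gluster peer status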
2010 Sep 24
2
grep contents of file on remote server
Hello, I am attempting to grep the contents of a key file I have SCP'd to a remote server. I am able to cat it: [bluethundr at LBSD2:~]$:ssh root at sum1 cat /root/id_rsa.pub root at lcent01.summitnjhome.com's password: ssh-rsa
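The remote grep itself is straightforward; a minimal sketch assuming SSH access to the same host and path as in the snippet (the search pattern here is made up):

    # Quote the remote command so the pattern is handled by the remote shell
    ssh root@sum1 "grep 'ssh-rsa' /root/id_rsa.pub"

    # Or pipe the remote file through a local grep
    ssh root@sum1 cat /root/id_rsa.pub | grep 'ssh-rsa'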
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi, Thank you for the answer and sorry for the delay: 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: 1. What does the glustershd.log say on all 3 nodes when you run the > command? Does it complain anything about these files? > No, glustershd.log is clean, no extra logs after the command on all 3 nodes > 2. Are these 12 files also present in the 3rd data brick?
2013 Nov 29
1
Self heal problem
Hi, I have a glusterfs volume replicated on three nodes. I am planning to use the volume as storage for VMware ESXi machines using NFS. The reason for using three nodes is to be able to configure quorum and avoid split-brains. However, during my initial testing, when I intentionally and gracefully restarted the node "ned", a split-brain/self-heal error occurred. The log on "todd"
2018 Feb 05
0
[ovirt-users] VM paused due unknown storage error
Adding gluster-users. On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan <kmisak at gmail.com> wrote: > Hi, > > here is the output from virt3 - problematic host: > > [root at virt3 ~]# gluster volume status > Status of volume: data > Gluster process TCP Port RDMA Port Online > Pid >
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote: > Hi, > > Thank you for the answer and sorry for delay: > > 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com > <mailto:ravishankar at redhat.com>>: > > 1. What does the glustershd.log say on all 3 nodes when you run > the command? Does it complain anything about these files? > > >
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start > engine force`, then launch the heal command like you did earlier and see if > heals
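Spelled out, the sequence suggested in that reply looks roughly like this (a sketch; "engine" is the volume name from the thread):

    # Restart any brick / self-heal daemon processes that are down for the volume
    gluster volume start engine force

    # Check that the self-heal daemon shows as online on all three nodes
    gluster volume status engine

    # Re-trigger the heal and watch the pending entries
    gluster volume heal engine
    gluster volume heal engine info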
2017 Sep 22
2
AFR: Fail lookups when quorum not met
Hello, In AFR we currently allow look-ups to pass through without taking into account whether the lookup is served from the good or bad brick. We always serve from the good brick whenever possible, but if there is none, we just serve the lookup from one of the bricks that we got a positive reply from. We found a bug? [1] due to this behavior where the iatt values returned in the lookup call
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi, Please find below the answers to your questions 1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume: Option Value ------ ----- cluster.quorum-type none 2) The .shareKey
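For anyone following along, that value can be read back with volume get; a sketch, where "myvol" stands in for the volume name not shown in the snippet:

    # Show the effective client-quorum setting for the volume
    gluster volume get myvol cluster.quorum-type

    # Or list everything and filter for quorum-related options
    gluster volume get myvol all | grep quorum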
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/21/2017 11:41 PM, yayo (j) wrote: > Hi, > > Sorry for following up again, but, checking the ovirt interface, I've > found that ovirt reports the "engine" volume as an "arbiter" > configuration and the "data" volume as a fully replicated volume. Check > these screenshots: This is probably some refresh bug in the UI; Sahina might be able to tell