search for: quorums

Displaying 20 results from an estimated 389 matches for "quorums".

2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben, I'll try and reply to you inline. On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hey, > > Can you give us the volume info output for this volume? # gluster volume info virt_images Volume Name: virt_images Type: Replicate Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594 Status: Started Snapshot Count: 2 Number of Bricks:
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik, Thanks for providing the required outputs. See my replies inline. On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote: > Hi Karthik and Ben, > > I'll try and reply to you inline. > > On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> > wrote: > > Hey, > > > > Can you give us the
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey, Can you give us the volume info output for this volume? Why are you not able to get the xattrs from arbiter brick? It is the same way as you do it on data bricks. The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in the getxattr outputs you have provided. Did you do a remove-brick and add-brick any time? Otherwise it will be trusted.afr.virt_images-client-{0,1,2} usually.
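For reference, the changelog xattrs mentioned above can be dumped straight from a brick's backend path with getfattr; the brick path and file name below are placeholders, not taken from the thread:

# getfattr -d -m . -e hex /path/to/brick/virt_images/image0.qcow2

Running the same command on every brick of the replica, arbiter included, gives the per-brick changelog values referred to in the replies; the arbiter stores these xattrs even though it holds no file data.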
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik, Thanks for the info. Maybe the documentation should be updated to explain the different AFR versions, I know I was confused. Also, when looking at the changelogs from my three bricks before fixing: Brick 1: trusted.afr.virt_images-client-1=0x000002280000000000000000 trusted.afr.virt_images-client-3=0x000000000000000000000000 Brick 2:
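As a rough guide to reading those values (this decoding is an assumption based on AFR's usual layout of three big-endian 32-bit counters, and is not stated in the thread itself):

trusted.afr.virt_images-client-1 = 0x 00000228 00000000 00000000
                                      (data)   (metadata) (entry)

A non-zero first field means the brick holding the xattr has pending data operations recorded against the brick that client-1 maps to (0x228 = 552 operations); an all-zero value means nothing is pending against that brick.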
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey Henrik, Good to know that the issue got resolved. I will try to answer some of the questions you have. - The time taken to heal the file depends on its size. That's why you were seeing some delay in getting everything back to normal in the heal info output. - You did not hit the split-brain situation. In split-brain all the bricks will be blaming the other bricks. But in your case the
2018 Feb 26
2
Quorum in distributed-replicate volume
...rst > brick of the particular replica subvol to be up to perform the fop. > > In replica 2 volumes you can end up in split-brains. How would that happen if bricks which are not in (cluster-wide) quorum refuse to accept writes? I'm not seeing the reason for using individual subvolume quorums instead of full-volume quorum. > It would be great if you can consider configuring an arbiter or > replica 3 volume. I can. My bricks are 2x850G and 4x11T, so I can repurpose the small bricks as arbiters with minimal effect on capacity. What would be the sequence of commands needed to: 1...
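A sketch of the arbiter conversion being discussed, assuming a volume named myvol and placeholder host/brick paths (one arbiter brick is needed per replica pair, so a 3x2 volume takes three; verify the exact add-brick syntax against your Gluster version before running anything):

# gluster volume add-brick myvol replica 3 arbiter 1 arb1:/bricks/arb-sub1 arb1:/bricks/arb-sub2 arb2:/bricks/arb-sub3
# gluster volume info myvol
# gluster volume heal myvol info

The new arbiter bricks are assigned to the existing replica pairs in order, and self-heal then populates them with metadata for the existing files.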
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2, expecting that all active bricks would be usable so long as a quorum of at least 4 live bricks is maintained. However, I have just found http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ Which states that "In a replica 2 volume... If we set the client-quorum
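The client-quorum behaviour quoted there is driven by standard volume options; the values shown here only illustrate where it is configured, not a recommendation for this particular layout (myvol is a placeholder name):

# gluster volume get myvol all | grep quorum
# gluster volume set myvol cluster.quorum-type auto
# gluster volume set myvol cluster.server-quorum-type server

With cluster.quorum-type auto on a replica 2 pair, writes require the first brick of that pair to be up, which is the per-subvolume behaviour the replies in this thread describe.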
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. > The client quorum is configured per replica sub volume and not for the entire volume. Since you have a
2018 Feb 27
0
Quorum in distributed-replicate volume
...ular replica subvol to be up to perform the fop. > > > > In replica 2 volumes you can end up in split-brains. > > How would that happen if bricks which are not in (cluster-wide) quorum > refuse to accept writes? I'm not seeing the reason for using individual > subvolume quorums instead of full-volume quorum. > Split brains happen within the replica pair. I will try to explain how you can end up in split-brain even with cluster-wide quorum: Let's say you have a 6 brick (replica 2) volume and you always have at least quorum number of bricks up & running. Bricks 1 &...
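To make the truncated explanation concrete, here is one possible sequence for a 6 brick replica 2 volume with subvolumes {B1,B2}, {B3,B4}, {B5,B6}; the scenario is illustrative, not taken verbatim from the thread:

1. B2 goes down; a file on the {B1,B2} pair is written, so only B1 has the update. Cluster-wide quorum (5 of 6 bricks) still holds.
2. B2 comes back but B1 goes down before self-heal copies the change over; the same file is written again, so only B2 has the second update. Quorum still holds.
3. Both bricks come up: B1 and B2 each blame the other for that file, which is split-brain, even though at least 5 of 6 bricks were up at every step.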
2005 Apr 17
2
Quorum error
Had a problem starting Oracle after expanding an EMC Metalun. We get the following errors: >WARNING: OemInit2: Opened file(/oradata/dbf/quorum.dbf 8), tid = main:1024 file = oem.c, line = 491 {Sun Apr 17 10:33:41 2005 } >ERROR: ReadOthersDskInfo(): ReadFile(/oradata/dbf/quorum.dbf) failed(5) - (0) bytes read, tid = main:1024 file = oem.c, line = 1396 {Sun Apr 17 10:33:41 2005 }
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
Hello All, I have installed Oracle Cluster Manager on linux x86-64 nit. I am using the ocfs file system for the quorum file. But I am getting the following error. Please see the ocfs configuration below. I would appreciate it if someone could help me to understand if I am doing something wrong. Thanks in advance. --------------------------------------------------cm.log file ---------------------------- oracm,
2006 Jan 09
0
[PATCH 01/11] ocfs2: event-driven quorum
This patch separates o2net and o2quo from knowing about one another as much as possible. This is the first in a series of patches that will allow userspace cluster interaction. Quorum is separated out first, and will ultimately only be associated with the disk heartbeat as a separate module. To do so, this patch performs the following changes: * o2hb_notify() is added to handle injection of
2017 Jun 10
4
How to remove dead peer, osrry urgent again :(
Since my node died on Friday I have a dead peer (vna) that needs to be removed. I had major issues this morning that I haven't resolved yet, with all VMs going offline when I rebooted a node, which I *hope* was due to quorum issues as I now have four peers in the cluster, one dead, three live. Confidence level is not high. -- Lindsay Mathieson
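For the archive, removing a dead peer usually looks roughly like this (a sketch only; any bricks the dead node still hosts have to be removed or replaced in their volumes first, for example with gluster volume replace-brick ... commit force):

# gluster peer status
# gluster peer detach vna force

Detaching the peer does not bring back the replica bricks it hosted, so any quorum impact from those missing bricks remains until they are replaced.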
2017 Jul 04
2
I need a sanity check.
2006 Oct 13
1
Cluster Quorum Question/Problem
Greetings all, I am in need of professional insight. I have a 2 node cluster running CentOS, mysql, apache, etc. I have on each system a fiber HBA connected to a fiber SAN. Each system shows the devices sdb and sdc for each of the connections on the HBA. I have sdc1 mounted on both machines as /quorum. When I write to /quorum from one of the nodes, the file doesn't show up on the
2017 Sep 22
2
AFR: Fail lookups when quorum not met
Hello, In AFR we currently allow look-ups to pass through without taking into account whether the lookup is served from the good or bad brick. We always serve from the good brick whenever possible, but if there is none, we just serve the lookup from one of the bricks that we got a positive reply from. We found a bug? [1] due to this behavior where the iatt values returned in the lookup call
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster > wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > > It would be great if you can consider configuring an arbiter or > > > replica 3 volume. > >
2007 Aug 09
0
About quorum and fencing
Hi, In the ocfs2 FAQ, it is written: "A node has quorum when: * it sees an odd number of heartbeating nodes and has network connectivity to more than half of them. OR, * it sees an even number of heartbeating nodes and has network connectivity to at least half of them *and* has connectivity to the heartbeating node with the lowest node
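Read literally, the rule works out as follows for small clusters (N = heartbeating nodes a given node can see; this is just arithmetic on the quoted FAQ text):

N = 2 (even): connectivity to at least 1 of them, and to the node with the lowest node number
N = 3 (odd): connectivity to more than half, i.e. at least 2 of them
N = 4 (even): connectivity to at least 2 of them, and to the node with the lowest node number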
2017 Oct 09
0
[Gluster-devel] AFR: Fail lookups when quorum not met
On 09/22/2017 07:27 PM, Niels de Vos wrote: > On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote: >> Hello, >> >> In AFR we currently allow look-ups to pass through without taking into >> account whether the lookup is served from the good or bad brick. We always >> serve from the good brick whenever possible, but if there is none, we just >> serve
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html It should be pretty much the same for replica 3; you change the xattrs with something like: # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a When I try to decide which
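Before editing any xattrs it is worth capturing the current state from each data brick, along the lines of (the brick paths are placeholders patterned on the example above):

# getfattr -d -m . -e hex /gfs/brick-a/a
# getfattr -d -m . -e hex /gfs/brick-b/a

and, once the xattrs have been adjusted as the linked guide describes, letting AFR finish the repair with:

# gluster volume heal <volname>
# gluster volume heal <volname> info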