Displaying 20 results from an estimated 2000 matches similar to: "Clarification on cluster quorum"
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave,
On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote:
> I've configured 6 bricks as distributed-replicated with replica 2,
> expecting that all active bricks would be usable so long as a quorum of
> at least 4 live bricks is maintained.
>
The client quorum is configured per replica subvolume, not for the
entire volume.
Since you have a
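For reference, client quorum is tuned through the volume-set interface and
is then enforced per replica subvolume; a minimal sketch, assuming a volume
named gv0 (the name is hypothetical):

    # "auto" requires a majority of bricks in each replica set to be up
    # (in replica 2, the first brick of the pair breaks the tie)
    gluster volume set gv0 cluster.quorum-type auto
    # confirm the effective setting
    gluster volume get gv0 cluster.quorum-type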
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2007 Aug 09
0
About quorum and fencing
Hi,
In the ocfs2 FAQ, it is written:
"A node has quorum when:
* it sees an odd number of heartbeating nodes and has network
connectivity to more than half of them.
OR,
* it sees an even number of heartbeating nodes and has network
connectivity to at least half of them *and* has connectivity to
the heartbeating node with the lowest node
2017 Oct 09
0
[Gluster-devel] AFR: Fail lookups when quorum not met
On 09/22/2017 07:27 PM, Niels de Vos wrote:
> On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote:
>> Hello,
>>
>> In AFR we currently allow look-ups to pass through without taking into
>> account whether the lookup is served from the good or bad brick. We always
>> serve from the good brick whenever possible, but if there is none, we just
>> serve
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
2006 Oct 13
1
Cluster Quorum Question/Problem
Greetings all,
I am in need of professional insight. I have a 2 node cluster running
CentOS, mysql, apache, etc. I have on each system a fiber HBA connected to
a fiber SAN. Each system shows the devices sdb and sdc for each of the
connections on the HBA. I have sdc1 mounted on both machines as /quorum.
When I write to /quorum from one of the nodes, the file doesn't show up
on the
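A shared LUN is only safe to mount from two nodes at once when it carries a
cluster-aware filesystem (GFS, OCFS2, and so on); an ordinary local
filesystem mounted on both nodes behaves much as described above. A quick
check, assuming /dev/sdc1 is the shared partition:

    # show which filesystem is actually on the shared partition
    blkid /dev/sdc1
    # see how it is mounted on each node
    mount | grep /quorum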
2012 May 24
0
Is it possible to use quorum for CTDB to prevent split-brain and remove the lockfile in the cluster file system
Hello list,
We know that CTDB uses a lockfile in the cluster file system to prevent
split-brain.
It is a really good design when all nodes in the cluster can mount the
cluster file system (e.g. GPFS/GFS/GlusterFS) and CTDB can work happily
under this assumption.
However, when split-brain happens, the disconnected private network
usually violates this assumption.
For example, we have four nodes (A, B,
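For context, the lockfile being discussed is CTDB's recovery lock, which the
recovery master takes on the cluster filesystem; a hedged sketch of where it
is configured (the path is hypothetical):

    # older releases: /etc/sysconfig/ctdb (or /etc/default/ctdb)
    CTDB_RECOVERY_LOCK="/gluster/lock/ctdb.lock"
    # newer releases use the [cluster] section of /etc/ctdb/ctdb.conf:
    #   recovery lock = /gluster/lock/ctdb.lock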
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
>
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
which states that "In a replica 2 volume... If we set the client-quorum
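For context, the layout described above is created by listing the bricks in
replica-pair order; a sketch with hypothetical host and brick names:

    # bricks are grouped into replica pairs in the order given:
    # (h1,h2) (h3,h4) (h5,h6) -> distribute count 3, replica 2
    gluster volume create gv0 replica 2 \
        h1:/bricks/b1 h2:/bricks/b1 \
        h3:/bricks/b1 h4:/bricks/b1 \
        h5:/bricks/b1 h6:/bricks/b1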
2005 Apr 17
2
Quorum error
Had a problem starting Oracle after expanding an EMC Metalun. We get the
following errors:
>WARNING: OemInit2: Opened file(/oradata/dbf/quorum.dbf 8), tid =
main:1024 file = oem.c, line = 491 {Sun Apr 17 10:33:41 2005 }
>ERROR: ReadOthersDskInfo(): ReadFile(/oradata/dbf/quorum.dbf)
failed(5) - (0) bytes read, tid = main:1024 file = oem.c, line = 1396
{Sun Apr 17 10:33:41 2005 }
2006 Jan 09
0
[PATCH 01/11] ocfs2: event-driven quorum
This patch separates o2net and o2quo from knowing about one another as much
as possible. This is the first in a series of patches that will allow
userspace cluster interaction. Quorum is separated out first, and will
ultimately only be associated with the disk heartbeat as a separate module.
To do so, this patch performs the following changes:
* o2hb_notify() is added to handle injection of
2017 Sep 22
2
AFR: Fail lookups when quorum not met
Hello,
In AFR we currently allow look-ups to pass through without taking into
account whether the lookup is served from the good or bad brick. We
always serve from the good brick whenever possible, but if there is
none, we just serve the lookup from one of the bricks that we got a
positive reply from.
We found a bug [1] due to this behavior where the iatt values returned
in the lookup call
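Whether quorum is "met" for a replica subvolume is decided by the
client-quorum options; a hedged reminder of the relevant knobs (the volume
name is hypothetical):

    # "auto": a majority of bricks (first brick breaks ties in replica 2)
    # "fixed": an explicit number of bricks, given by quorum-count
    gluster volume set gv0 cluster.quorum-type fixed
    gluster volume set gv0 cluster.quorum-count 2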
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> I will try to explain how you can end up in split-brain even with cluster
> wide quorum:
Yep, the explanation made sense. I hadn't considered the possibility of
alternating outages. Thanks!
> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
2007 Aug 07
0
Quorum and Fencing with user mode heartbeat
Hi all,
I read the FAQ, especially the questions 75-84 about Quorum and Fencing.
I want to use OCFS2 with Heartbeat V2 and heartbeat_mode 'user'.
What I missed in the FAQ is an explanation of what role in the whole OCFS2
system is played by HAv2 (or other cluster software)
when using heartbeat_mode 'user'.
1) When is disk heartbeating started? (Mount of device?)
2) When is
2018 May 22
1
[SOLVED] [Nfs-ganesha-support] volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
Appears I solved this one and NFS mounts now work on all my clients. No
issues since fixing it a few hours back.
RESOLUTION
Auditd is to blame for the trouble. Noticed this in the logs on 2 of
the 3 NFS servers (nfs01, nfs02, nfs03):
type=AVC msg=audit(1526965320.850:4094): avc: denied { write } for
pid=8714 comm="ganesha.nfsd" name="nfs_0"
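The lines above are SELinux AVC denials recorded by auditd; a common way to
inspect them and, if the access is legitimate, generate a local policy
module (the module name is hypothetical):

    # list recent denials for the Ganesha daemon
    ausearch -m avc -c ganesha.nfsd -ts recent
    # build and load a local policy module covering those denials
    ausearch -m avc -c ganesha.nfsd | audit2allow -M ganesha_local
    semodule -i ganesha_local.pp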
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you
need to disable quorum so that you can still use the volume when one of
the nodes goes down.
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, I can't use the second
> node to mount the volume. I understand this is
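A hedged sketch of the two quorum options usually meant here, using the
volume name gv01 from the subject line:

    # server-side quorum, enforced by glusterd
    gluster volume set gv01 cluster.server-quorum-type none
    # client-side quorum, enforced by AFR on the clients
    gluster volume set gv01 cluster.quorum-type none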
2018 Jul 17
0
best practices for migrating to new dovecot version
Hi,
> On 17 July 2018 at 22:01 Shawn Heisey <elyograg at elyograg.org> wrote:
>
>
> I have a machine with my mail service on it running Debian Squeeze. The
> version of dovecot on that server is 1.2.15.
>
> I'm building a new server with Ubuntu 18.04 (started with 16.04,
> upgraded in place), where the dovecot version is 2.2.33. I plan to
> migrate all
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of the same size as the data bricks, if
> you
> > > > can configure three more arbiter bricks
> > > > based on the guidelines in the doc [1], you can do it live and
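For reference, converting an existing replica-2 distributed-replicate
volume to arbiter is normally done with add-brick, one arbiter brick per
replica pair; a sketch with hypothetical host and paths:

    # three replica pairs -> three arbiter bricks, added live
    gluster volume add-brick gv0 replica 3 arbiter 1 \
        arb1:/bricks/arb1 arb1:/bricks/arb2 arb1:/bricks/arb3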
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two node glusterfs setup, with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Any way to allow the secondary node to function and then replicate what
changed to the first (primary) when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm