similar to: arbiter node on client?

Displaying 20 results from an estimated 8000 matches similar to: "arbiter node on client?"

2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote: > Is it possible to add an arbiter node on the client? I've been running in that configuration for a couple of months now with no problems. I have 6 data + 3 arbiter bricks hosting VM disk images, and all three of my arbiter bricks are on one of the KVM hosts. > Can I use multiple arbiters for the same volume? For example,
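For reference, arbiter bricks are ordinary bricks in a replica 3 arbiter 1 volume, so nothing stops them from living on a peer that is also a client, such as a KVM host. A minimal sketch, with hypothetical hostnames and brick paths:

    # server1/server2 hold the data; kvm1 (also a client) holds the arbiter.
    # Every third brick listed becomes the arbiter of its replica set.
    gluster peer probe kvm1
    gluster volume create vmvol replica 3 arbiter 1 \
        server1:/bricks/vmvol server2:/bricks/vmvol kvm1:/bricks/vmvol-arb
    gluster volume start vmvol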
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
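The option under discussion is cluster.quorum-type; a sketch of inspecting and setting it, with a hypothetical volume name:

    # 'auto' requires a majority of each replica set (and, in replica 2,
    # the first brick); 'fixed' uses cluster.quorum-count instead.
    gluster volume get myvol cluster.quorum-type
    gluster volume set myvol cluster.quorum-type auto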
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster > wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > > It would be great if you can consider configuring an arbiter or > > > replica 3 volume. > >
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > Since arbiter bricks need not be of same size as the data bricks, if you > > > can configure three more arbiter bricks > > > based on the guidelines in the doc [1], you can do it live and you will > > > have the distribution count also unchanged. > > > > I can probably find
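The live conversion discussed here maps to a single add-brick call; a sketch assuming a 3 x 2 distributed-replicate volume and one hypothetical arbiter host:

    # One arbiter brick per replica pair, listed in subvolume order;
    # arbiter bricks can be far smaller than the data bricks.
    gluster volume add-brick myvol replica 3 arbiter 1 \
        arb1:/bricks/arb-sub0 arb1:/bricks/arb-sub1 arb1:/bricks/arb-sub2
    gluster volume heal myvol info    # watch the metadata heal complete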
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status of the second brick. If only the second brick is up,
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > If you want to use the first two bricks as arbiter, then you need to be > aware of the following things: > - Your distribution count will be decreased to 2. What's the significance of this? I'm trying to find documentation on distribution counts in gluster, but my google-fu is failing me. > - Your data on
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be > > aware of the following things: > > - Your distribution count will be decreased to 2. > > What's the significance of this? I'm
2017 Oct 13
1
small files performance
Where did you read 2K IOPS? Each disk can do only about 75 IOPS since I'm using SATA disks, so getting anywhere close to 2000 is impossible. On 13 Oct 2017 at 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote: > Depends what you need. > 2K IOPS for small file writes is not a bad result. > In my case I had a system that was just poorly written and it was >
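Disagreements like this are easier to settle with a stated measurement method; a hedged example of a small-file random-write test using fio (mount point hypothetical):

    # 4K synchronous random writes approximate a small-file workload.
    fio --name=smallwrite --directory=/mnt/glustervol \
        --rw=randwrite --bs=4k --size=256M --numjobs=4 \
        --fsync=1 --group_reporting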
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2, expecting that all active bricks would be usable so long as a quorum of at least 4 live bricks is maintained. However, I have just found http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ which states that "In a replica 2 volume... If we set the client-quorum
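For context, the layout described would have been created with something like the command below (hostnames hypothetical). Brick order determines the replica pairs, and client quorum applies within each pair rather than across all six bricks:

    # Pairs (h1,h2), (h3,h4), (h5,h6) each form one replica 2 subvolume.
    gluster volume create myvol replica 2 \
        h1:/bricks/b h2:/bricks/b \
        h3:/bricks/b h4:/bricks/b \
        h5:/bricks/b h6:/bricks/b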
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > > I will try to explain how you can end up in split-brain even with cluster > > wide quorum: > > Yep, the explanation made sense. I hadn't considered the possibility of > alternating outages. Thanks! > >
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. > The client quorum is configured per replica subvolume and not for the entire volume. Since you have a
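A quick way to see the grouping is gluster volume info; the annotated shape below is illustrative, not verbatim output:

    gluster volume info myvol
    # Type: Distributed-Replicate
    # Number of Bricks: 3 x 2 = 6
    # Bricks are listed in replica-set order: Brick1+Brick2 form the first
    # subvolume, Brick3+Brick4 the second, Brick5+Brick6 the third.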
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > > Since arbiter bricks need not be of same size as the data bricks, if > you > > > > can configure three more arbiter bricks > > > > based on the guidelines in the doc [1], you can do it live and
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi, I have 4 nodes, so a quorum would be 3 of 4. The question is, I suppose, why does the documentation give this command as an example without qualifying it? So am I running the wrong command? I want a "raid10". On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi, > > With replica 2 volumes one can easily end up in split-brains if there are
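For what it's worth, a "raid10"-style layout on 4 nodes is a 2 x 2 distributed-replicate volume, as sketched below with hypothetical hostnames; recent gluster versions warn at creation time that plain replica 2 is prone to split-brain, which is the caveat raised in the reply:

    # Two replica 2 pairs, distributed: (n1,n2) and (n3,n4).
    gluster volume create myvol replica 2 \
        n1:/bricks/b n2:/bricks/b n3:/bricks/b n4:/bricks/b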
2017 Jun 29
4
How to shutdown a node properly ?
Doesn't the init.d/systemd script kill gluster automatically on reboot/shutdown? On 29 Jun 2017 at 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/29/2017 08:31 PM, Renaud Fortier wrote: > > Hi, > > Every time I shut down a node, I lose access (from clients) to the volumes > for 42 seconds (network.ping-timeout). Is there a special way to
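The 42-second window is the default value of network.ping-timeout. It can be lowered, though the documentation cautions that reconnects are expensive, so very small values are discouraged; a sketch with a hypothetical volume name:

    gluster volume get myvol network.ping-timeout    # default: 42
    gluster volume set myvol network.ping-timeout 10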
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: > Doesn't the init.d/systemd script kill gluster automatically on > reboot/shutdown? > > Sounds less like an issue with how it's shut down than with how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node being shut down just fine as long as
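One mounting detail that matters here: the server named in the mount command is only contacted to fetch the volfile, and the fuse client can be given fallbacks for that step. A hedged example with hypothetical names:

    # After mounting, the client talks to all bricks directly, so losing
    # the named server does not take the mount down.
    mount -t glusterfs -o backup-volfile-servers=server2:server3 \
        server1:/myvol /mnt/myvol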
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the systemd script to kill gluster (service > glusterfs-server stop) only glusterd is killed. Then I guess the > shutdown doesn't kill everything! > Killing glusterd does not kill other gluster processes. When you shut down a node, everything obviously gets killed but the client does not get notified
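In other words, stopping the management daemon is not enough; brick and self-heal processes keep running and must be stopped separately. A sketch assuming a systemd-based node (the helper script path varies by package and version):

    systemctl stop glusterd    # stops only the management daemon
    # Many packages ship a helper for the rest, e.g.:
    #   /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
    # or stop the remaining processes by hand:
    pkill glusterfsd           # brick processes
    pkill glusterfs            # self-heal daemon, fuse clients, etc.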
2017 Jun 29
0
Arbiter node as VM
As long as the VM isn't hosted on one of the two Gluster nodes, that's perfectly fine. One of my smaller clusters uses the same setup. As for your other questions, as long as it supports Unix file permissions, Gluster doesn't care what filesystem you use. Mix & match as you wish. Just try to keep matching Gluster versions across your nodes. On 29 June 2017 at 16:10, mabi
2017 Jun 29
2
Arbiter node as VM
Hello, I have a replica 2 GlusterFS 3.8.11 cluster on 2 Debian 8 physical servers using ZFS as the filesystem. Now, in order to avoid a split-brain situation, I would like to add a third node as an arbiter. Regarding the arbiter node I have a few questions: - can the arbiter node be a virtual machine? (I am planning to use Xen as the hypervisor) - can I use ext4 as the file system on my arbiter? or does it need
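Assuming the questions above get sorted out, the conversion itself is a single command once the arbiter node has been probed as a peer; a sketch with hypothetical names, for recent gluster versions:

    gluster peer probe arbiter1
    # The brick added with 'arbiter 1' stores metadata only.
    gluster volume add-brick myvol replica 3 arbiter 1 arbiter1:/bricks/arb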
2017 Dec 11
0
How large the Arbiter node?
Hi, there is a good suggestion here: http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing Since the arbiter brick does not store file data, its disk usage will be considerably less than the other bricks of the replica. The sizing of
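The linked guideline works out to roughly 4 KB of metadata per file, so the arbiter is sized by file count rather than data volume; a worked example under assumed numbers:

    # arbiter size ~= 4 KB * (largest data brick size / average file size)
    # e.g. a 1 TB data brick with ~1 MB average files holds ~1 million files:
    #   4 KB * (1 TB / 1 MB) = 4 KB * 1,000,000 ~= 4 GB arbiter brick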
2018 Jan 29
2
Replacing a third data node with an arbiter one
Thank you for that; however, I have a problem. On 26/01/2018 at 02:35, Ravishankar N wrote: > Yes, you would need to reduce it to replica 2 and then convert it to > arbiter. > 1. Ensure there are no pending heals, i.e. heal info shows zero entries. > 2. gluster volume remove-brick thedude replica 2 > ngluster-3.network.hoggins.fr:/export/brick/thedude force > 3. gluster volume
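The quoted recipe is cut off at step 3; the usual continuation (hedged, so verify against current docs before running) is to clean the removed brick and re-add it as the arbiter:

    # 1. Confirm there are no pending heals:
    gluster volume heal thedude info
    # 2. Drop to replica 2 (the command quoted in the thread):
    gluster volume remove-brick thedude replica 2 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude force
    # 3. Presumably: wipe the old brick directory, then re-add it as arbiter:
    gluster volume add-brick thedude replica 3 arbiter 1 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude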