Displaying 20 results from an estimated 8000 matches similar to: "arbiter node on client?"
2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote:
> Is it possible to add an arbiter node on the client?
I've been running in that configuration for a couple months now with no
problems. I have 6 data + 3 arbiter bricks hosting VM disk images and
all three of my arbiter bricks are on one of the kvm hosts.
> Can I use multiple arbiters for the same volume? For example,
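For illustration only, a layout like the one described (6 data bricks plus 3 arbiter bricks, all arbiters on one kvm host) could be created roughly like this; the hostnames and brick paths are hypothetical, and gluster may insist on 'force' when several bricks of one volume share a server:
  gluster volume create vmdisks replica 3 arbiter 1 \
      data1:/bricks/b1 data2:/bricks/b1 kvm1:/bricks/arb1 \
      data3:/bricks/b2 data4:/bricks/b2 kvm1:/bricks/arb2 \
      data5:/bricks/b3 data6:/bricks/b3 kvm1:/bricks/arb3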
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
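The client-quorum behaviour quoted above is controlled per volume; setting it looks like the following (volume name hypothetical):
  gluster volume set myvol cluster.quorum-type auto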
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> I will try to explain how you can end up in split-brain even with cluster
> wide quorum:
Yep, the explanation made sense. I hadn't considered the possibility of
alternating outages. Thanks!
> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
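A concrete sketch of the alternating-outage case: bricks A and B form a replica 2 pair; A goes down and a write to file X lands only on B; A returns but the heal has not completed when B goes down, and a second write to X lands only on A. Each brick now holds changes the other is missing, neither copy can be picked automatically, and that is the split-brain an arbiter or replica 3 volume prevents.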
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > Since arbiter bricks need not be of the same size as the data bricks, if you
> > > can configure three more arbiter bricks
> > > based on the guidelines in the doc [1], you can do it live and the
> > > distribution count will also remain unchanged.
> >
> > I can probably find
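As a sketch of that conversion (hostnames and paths hypothetical), a 3 x 2 distributed-replicate volume can be extended live to 3 x (2 + 1) with a single add-brick, one arbiter per existing replica pair:
  gluster volume add-brick myvol replica 3 arbiter 1 \
      hostA:/bricks/arb1 hostB:/bricks/arb2 hostC:/bricks/arb3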
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.
> - Your data on
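For reference, the distribution count is the first number in the brick layout reported by gluster volume info, e.g. "Number of Bricks: 3 x 2 = 6" for three replica pairs, or "3 x (2 + 1) = 9" with arbiters. Files are hashed across that many replica subvolumes, so a distribution count of 2 means new files are spread over only two replica sets.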
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm
2017 Oct 13
1
small files performance
Where did you read 2k IOPS?
Each disk is able to do about 75 IOPS as I'm using SATA disks, so getting
even close to 2000 is impossible.
On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote:
> Depends what you need.
> 2K iops for small file writes is not a bad result.
> In my case I had a system that was just poorly written and it was
>
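A rough worked example (the disk count is assumed, not taken from the thread): six SATA spindles at about 75 random IOPS each give roughly 450 IOPS in aggregate, and with replica 2 every small-file write lands on two bricks, so the sustainable write rate is nearer 200-225 IOPS before gluster's own metadata round trips. 2000 IOPS is simply out of reach on that class of hardware.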
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
Which states that "In a replica 2 volume... If we set the client-quorum
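A volume like the one described (6 bricks, replica 2, i.e. a 3 x 2 distributed-replicate) would have been created with something along these lines; hostnames and paths are hypothetical:
  gluster volume create myvol replica 2 \
      node1:/bricks/b1 node2:/bricks/b1 \
      node3:/bricks/b2 node4:/bricks/b2 \
      node5:/bricks/b3 node6:/bricks/b3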
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
>
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave,
On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote:
> I've configured 6 bricks as distributed-replicated with replica 2,
> expecting that all active bricks would be usable so long as a quorum of
> at least 4 live bricks is maintained.
>
The client quorum is configured per replica subvolume and not for the
entire volume.
Since you have a
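In other words, with a 3 x 2 layout the pairs (brick1,brick2), (brick3,brick4) and (brick5,brick6) each form their own replica subvolume, and client quorum is evaluated inside each pair. Losing both brick3 and brick4 takes that subvolume offline even though 4 of the 6 bricks are still up, so a volume-wide "4 of 6" quorum never comes into play.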
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of the same size as the data bricks, if you
> > > > can configure three more arbiter bricks
> > > > based on the guidelines in the doc [1], you can do it live and
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi,
I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is why
the documentation gives this command as an example without qualifying it.
So am I running the wrong command? I want a "raid10".
On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hi,
>
> With replica 2 volumes one can easily end up in split-brains if there are
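For reference, the "raid10"-style layout on 4 nodes is a 2 x 2 distributed-replicate; a sketch with hypothetical hostnames and paths:
  gluster volume create myvol replica 2 \
      node1:/bricks/b1 node2:/bricks/b1 \
      node3:/bricks/b2 node4:/bricks/b2
As the reply warns, plain replica 2 remains split-brain prone, so the usual advice is replica 3 or replica 2 plus an arbiter.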
2017 Jun 29
4
How to shutdown a node properly ?
Doesn't the init.d/systemd script kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
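The 42 seconds is the default network.ping-timeout: if the brick processes are stopped cleanly before the reboot, clients are told the bricks are going away and do not sit out the timeout. A hedged sketch (the script path may differ by distribution, and the volume name is hypothetical):
  /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh   # before shutting the node down
  gluster volume set myvol network.ping-timeout 10             # optionally shorten the wait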
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Doesn't the init.d/systemd script kill gluster automatically on
> reboot/shutdown?
>
Sounds less like an issue with how it's shut down and more like an issue with
how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node
being shut down just fine as long as
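On the client side, a fuse mount can name backup volfile servers so the mount itself does not depend on any single node; a hedged example (hostnames hypothetical):
  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/myvol /mnt/myvol
This only affects fetching the volume file at mount time; once mounted, the fuse client already talks to all bricks directly.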
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't
need much space for the arbiters (10-15GB with 95 maxpct is enough for each
"set"), but you do need a 3rd system; otherwise, when the node that holds both
the data brick and the arbiter brick fails (2-node scenario), that "set" will
be unavailable.
If you do have a 3rd host, I think the command would be: gluster
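The maxpct figure refers to the XFS inode-space limit: an arbiter brick stores only file names and metadata, so a small filesystem is typically formatted with most of its space available for inodes, for example (device name hypothetical):
  mkfs.xfs -i size=512,maxpct=95 /dev/sdX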
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Ok.
I have a 3rd host with Debian 12 installed and Gluster v11. The name of the
host is arbiter!
I already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
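For completeness, a host is normally added to the trusted pool by probing it from one of the existing nodes, e.g.:
  gluster peer probe arbiter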
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop) only glusterd is killed. Then I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill other gluster processes.
When you shut down a node, everything obviously gets killed, but the
client does not get notified
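Stopping the management service alone leaves the brick and self-heal daemons running, which is easy to verify; a hedged sketch of checking a node:
  service glusterfs-server stop   # stops only the glusterd management daemon
  pgrep -af glusterfsd            # per-brick server processes are still running
Taking the bricks down cleanly, so clients are notified instead of waiting out ping-timeout, means stopping those processes too, e.g. with the stop-all script mentioned earlier in this listing.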
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
But if I change replica 2 arbiter 1 to replica 3 arbiter 1
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
So I went ahead and used force (the force is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at the
end of the command
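As the message itself says, the command goes through when 'force' is appended; a sketch of that, with the caveat that all three arbiters then sit on the single host named arbiter:
  gluster volume add-brick VMS replica 3 arbiter 1 \
      arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force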