Displaying 6 results from an estimated 6 matches for "4x11t".
2018 Feb 26 - 2 - Quorum in distributed-replicate volume
...bricks which are not in (cluster-wide) quorum
refuse to accept writes? I'm not seeing the reason for using individual
subvolume quorums instead of full-volume quorum.
> It would be great if you can consider configuring an arbiter or
> replica 3 volume.
I can. My bricks are 2x850G and 4x11T, so I can repurpose the small
bricks as arbiters with minimal effect on capacity. What would be the
sequence of commands needed to:
1) Move all data off of bricks 1 & 2
2) Remove that replica from the cluster
3) Re-add those two bricks as arbiters
(And did I miss any additional steps?)
Unfo...
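A rough sketch of the Gluster CLI sequence being asked about, assuming a volume named "myvol" and placeholder host/brick paths (adapt these to the real layout and verify against the current Gluster docs before running anything):

  # 1) Migrate data off the replica pair formed by bricks 1 & 2
  gluster volume remove-brick myvol host1:/bricks/850g host2:/bricks/850g start
  gluster volume remove-brick myvol host1:/bricks/850g host2:/bricks/850g status   # wait until "completed"
  # 2) Drop that replica pair from the volume once migration has finished
  gluster volume remove-brick myvol host1:/bricks/850g host2:/bricks/850g commit
  # 3) Wipe the freed bricks, then re-add them as arbiters (one per remaining
  #    replica pair), converting replica 2 to replica 3 with arbiter 1
  gluster volume add-brick myvol replica 3 arbiter 1 host1:/bricks/arb host2:/bricks/arb
  gluster volume heal myvol   # let self-heal populate the new arbiter bricks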
2018 Feb 27 - 0 - Quorum in distributed-replicate volume
...file, quorum is met, and now brick 1 says brick 2
is bad
- When both bricks 1 & 2 are up, each blames the other brick -
*split-brain*
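As an aside, the split-brain state described above can be listed with the standard heal commands; "myvol" is a placeholder volume name:

  gluster volume heal myvol info               # entries pending heal on each brick
  gluster volume heal myvol info split-brain   # entries that are in split-brain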
>
> > It would be great if you can consider configuring an arbiter or
> > replica 3 volume.
>
> I can. My bricks are 2x850G and 4x11T, so I can repurpose the small
> bricks as arbiters with minimal effect on capacity. What would be the
> sequence of commands needed to:
>
> 1) Move all data off of bricks 1 & 2
> 2) Remove that replica from the cluster
> 3) Re-add those two bricks as arbiters
>
>
(And d...
2018 Feb 27 - 2 - Quorum in distributed-replicate volume
...cluster
> wide quorum:
Yep, the explanation made sense. I hadn't considered the possibility of
alternating outages. Thanks!
> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
> > I can. My bricks are 2x850G and 4x11T, so I can repurpose the small
> > bricks as arbiters with minimal effect on capacity. What would be the
> > sequence of commands needed to:
> >
> > 1) Move all data off of bricks 1 & 2
> > 2) Remove that replica from the cluster
> > 3) Re-add those two brick...
2018 Feb 26 - 0 - Quorum in distributed-replicate volume
Hi Dave,
On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote:
> I've configured 6 bricks as distributed-replicated with replica 2,
> expecting that all active bricks would be usable so long as a quorum of
> at least 4 live bricks is maintained.
>
The client quorum is configured per replica subvolume, not for the
entire volume.
Since you have a
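For illustration, client quorum is controlled through per-volume options that are evaluated separately for each replica subvolume; a minimal sketch with a placeholder volume name "myvol":

  # "auto" requires a majority of each replica set to be up
  # (with replica 2 this effectively means the first brick of the pair must be up)
  gluster volume set myvol cluster.quorum-type auto
  # alternatively, a fixed number of bricks per replica set:
  gluster volume set myvol cluster.quorum-type fixed
  gluster volume set myvol cluster.quorum-count 1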
2018 Feb 27 - 0 - Quorum in distributed-replicate volume
...p, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
> > > > It would be great if you can consider configuring an arbiter or
> > > > replica 3 volume.
> > >
> > > I can. My bricks are 2x850G and 4x11T, so I can repurpose the small
> > > bricks as arbiters with minimal effect on capacity. What would be the
> > > sequence of commands needed to:
> > >
> > > 1) Move all data off of bricks 1 & 2
> > > 2) Remove that replica from the cluster
> >...
2018 Feb 26 - 2 - Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
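For context, a 6-brick distributed-replicate volume with replica 2 (three replica pairs) would typically be created along these lines; host names and brick paths here are placeholders:

  # consecutive bricks form the replica pairs: (b1,b2) (b3,b4) (b5,b6)
  gluster volume create myvol replica 2 \
      host1:/bricks/b1 host2:/bricks/b2 \
      host3:/bricks/b3 host4:/bricks/b4 \
      host5:/bricks/b5 host6:/bricks/b6
  gluster volume start myvol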
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
which states that "In a replica 2 volume... If we set the client-quorum