> Type: Distributed-Replicate
> Number of Bricks: 2 x 2 = 4

With that setup, you lose quorum if you lose any one node.
Brick 1 replicates to brick 2, and brick 3 replicates to brick 4. If any one of those goes down, quorum falls below 51%, which locks the brick under the default settings.

If you've only got four servers to play with, I suggest you move to replica 3 arbiter 1. Put the arbiter for servers 1 & 2 on server 3, and the arbiter for servers 3 & 4 on server 1.

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
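A minimal sketch of that layout on the CLI, assuming hypothetical hostnames and brick paths (adapt to your own environment):

  # 2 x (2 + 1) = 6 bricks; the third brick of each set is the arbiter.
  # Set 1: data on server1 & server2, arbiter on server3.
  # Set 2: data on server3 & server4, arbiter on server1.
  gluster volume create myvol replica 3 arbiter 1 \
      server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb1 \
      server3:/bricks/b2 server4:/bricks/b2 server1:/bricks/arb2
  gluster volume start myvol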
I see. So if I switch to a different cluster quorum model, I may get split-brain, which would need manual intervention should a node go missing?

Adding an arbiter involves creating a volume from scratch? You can't just add it to an existing volume.

On Fri, Jan 20, 2017 at 2:52 PM, Gambit15 <dougti+gluster at gmail.com> wrote:

>> Type: Distributed-Replicate
>> Number of Bricks: 2 x 2 = 4
>
> With that setup, you lose quorum if you lose any one node.
> Brick 1 replicates to brick 2, and brick 3 replicates to brick 4. If any
> one of those goes down, quorum falls below 51%, which locks the brick
> under the default settings.
>
> If you've only got four servers to play with, I suggest you move to
> replica 3 arbiter 1. Put the arbiter for servers 1 & 2 on server 3, and
> the arbiter for servers 3 & 4 on server 1.
>
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
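On those two questions, a sketch of the relevant commands, assuming a recent Gluster release, a volume named "myvol", and the server layout suggested above (verify against your version's documentation before running either):

  # Convert the existing 2 x 2 volume to arbiter by adding one arbiter
  # brick per replica pair; recent releases allow this on an existing
  # volume without recreating it:
  gluster volume add-brick myvol replica 3 arbiter 1 \
      server3:/bricks/arb1 server1:/bricks/arb2

  # Client-side quorum is tunable per volume; "auto" requires a majority
  # of bricks in each replica set to be up before writes are allowed:
  gluster volume set myvol cluster.quorum-type auto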
On 21/01/2017 6:52 AM, Gambit15 wrote:

> With that setup, you lose quorum if you lose any one node.
> Brick 1 replicates to brick 2, and brick 3 replicates to brick 4. If
> any one of those goes down, quorum falls below 51%, which locks the
> brick under the default settings.

This, I think, highlights one of Gluster's few weaknesses: the inflexibility of brick layout. It would be really nice if you could arbitrarily add bricks to distributed-replicate volumes and have files be evenly distributed among them as a whole. This would work particularly well with sharded volumes. Unfortunately, I suspect this would need some sort of meta server.

--
Lindsay Mathieson
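To illustrate the inflexibility: a distributed-replicate volume can currently only grow by whole replica sets, followed by a rebalance. A sketch with hypothetical hosts, assuming a replica-2 distributed-replicate volume named "myvol":

  # Bricks must be added in multiples of the replica count,
  # i.e. a full new pair for replica 2, then rebalanced:
  gluster volume add-brick myvol server5:/bricks/b3 server6:/bricks/b3
  gluster volume rebalance myvol start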