Hi everyone!

I need a sanity check on our Server Quorum Ratio settings to ensure maximum uptime for our virtual machines. I'd like to modify them slightly, but I'd rather not experiment on live servers to find out whether it works; I think the theory is sound.

We have a Gluster array of 3 servers containing two Replicate bricks.

Brick 1 is a 1x3 arrangement, replicated on all three servers. The quorum ratio is set to 51%, so that if any one Gluster server goes down, the brick stays in read/write mode and the broken server updates itself when it comes back online. The clients won't notice a thing, and a split-brain condition still can't occur.

Brick 2 is a 1x2 arrangement, replicated across only two servers. The quorum ratio is currently also set to 51%, but my understanding is that if one of the servers hosting this brick goes down, it will go into read-only mode, which would probably be disruptive to the VMs we host on it.

My understanding is that since there are three servers in the array, I should be able to set the quorum ratio on Brick 2 to 50% and the array will still prevent a split-brain from occurring, because the other two servers will know which one is offline.

The alternative, of course, is to simply flesh out Brick 2 with a third disk. However, I've heard that 1x2 replication is faster than 1x3, and we'd prefer the extra speed for this task.
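The counting behind the reasoning above can be sketched in plain shell arithmetic. This is only one plausible reading of the ratio (quorum is met when at least ratio% of peers are alive, rounding up), not an authoritative statement of Gluster's internal check:

```shell
# Hypothetical sketch: minimum number of peers that must stay up for a
# given server-quorum-ratio, read as "at least ratio% alive" with
# ceiling division. min_alive TOTAL RATIO
min_alive() {
  echo $(( ($1 * $2 + 99) / 100 ))
}

min_alive 3 51   # -> 2: one of three servers may fail
min_alive 3 50   # -> 2: still two of three, because of rounding up
min_alive 2 51   # -> 2: in a two-server pool, any failure breaks quorum
```

Note that under this reading, a two-server pool can never tolerate a failure at a majority ratio, which is what motivates the third (arbiter) brick suggested in the replies.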
An arbiter brick is what you need.

Sent from my Windows 10 phone

From: Ernie Dunbar
Sent: Wednesday, 5 July 2017 4:28 AM
To: Gluster-users
Subject: [Gluster-users] I need a sanity check.

[quoted message trimmed]
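A hedged sketch of what adding an arbiter to the existing replica-2 volume could look like. The volume name, hostname, and brick path are invented for illustration; the command form follows the documented replica-2-to-arbiter conversion:

```shell
# Convert a replica-2 volume to replica 3 with one arbiter, so a single
# node failure neither blocks writes nor risks split-brain. The arbiter
# brick stores only file metadata, so it needs little disk space and
# adds little replication traffic.
gluster volume add-brick vm-store replica 3 arbiter 1 \
    server3:/bricks/vm-store-arbiter

# Verify: the volume should now report "Number of Bricks: 1 x (2 + 1) = 3"
gluster volume info vm-store
```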
You are confusing volumes with bricks. You do not have a "Replicate brick"; you have one 1x3 volume composed of 3 bricks, and one 1x2 volume made up of 2 bricks. You need to understand the difference between a volume and a brick.

You also need to be aware of the difference between server quorum and client quorum. For client quorum you need three bricks; the third can be an arbiter brick, however.

Krist

On 4 July 2017 at 19:28, Ernie Dunbar <maillist at lightspeed.ca> wrote:
> [quoted message trimmed]

--
Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement

Krist van Besien
Senior Architect, RHCE, RHCSA Open Stack
Red Hat Switzerland S.A. <https://www.redhat.com>
krist at redhat.com
M: +41-79-5936260
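The two quorum mechanisms distinguished above map onto different volume options. A sketch, with an illustrative volume name (client quorum is set per volume; the server quorum ratio is a pool-wide setting applied with `all`):

```shell
# Client quorum: writes require a majority of the bricks in each
# replica set to be reachable from the client.
gluster volume set vm-store cluster.quorum-type auto

# Server quorum: bricks are taken down when too few peers in the
# trusted pool can see each other.
gluster volume set vm-store cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%
```

With `cluster.quorum-type auto` on a plain replica-2 volume, losing either brick makes the volume read-only, which is why the replica set itself needs a third (possibly arbiter) brick.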