Hello,

I've a few questions about conflict resolution in a net-split scenario:

1. What are the default values for cluster.server-quorum-type and cluster.server-quorum-ratio? (At the moment "gluster volume info gvol0" does not report either.)

2. If there are 3 mirrored nodes and cluster.server-quorum-ratio is 50, and node1 and node2 are net-split from node3, then am I right in thinking that the volume on node3 will automatically shut down and prevent access, thus preventing a conflict?

3. If there are 2 mirrored nodes and a net-split happens and cluster.server-quorum-ratio is 50, then:
a) If existing file A is changed on node1 and node2, then the file will enter a net-split state, right?
b) If existing file B is changed on node1 but not node2, then will the file enter a net-split state?
c) If new file C is written on node1 but not node2, then will the file enter a net-split state?

4. Is the outcome of conflict resolution at a file level the same whether node3 is a full replica or just an arbiter?

Thank you very much for any advice,

--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
Hi David,

You can obtain the values via the 'gluster volume get' command; see 'gluster volume get help'. The defaults are also defined and can be seen via 'gluster volume set help'. I think that the quorum ratio in replica 3 is '51'.

With cluster.server-quorum-type set to 'server', the TSP nodes form a quorum, and when node3 is disconnected from the majority (the quorum), the brick on node3 will shut down until quorum is restored.

For point 3, you need to check cluster.quorum-type. If it's set to 'auto' mode, only the first brick will allow writes (I think we are talking about replica 2 here). Yet, if you configured the volume so that both bricks stay operational (cluster.quorum-type = fixed, cluster.quorum-count = 1, volume is replica 2), then:
a) yes
b) no, it should just heal
c) no, it should just heal

The conflict protection with an arbiter is the same (both the arbiter and a full data brick use extended file attributes). If a file was not properly updated on node2, both other bricks (node1 + node3) will 'blame' node2, and thus the heal daemon (if enabled) will try to heal that file.

Best Regards,
Strahil Nikolov

On Wed, Oct 20, 2021 at 3:52, David Cunningham <dcunningham at voisonics.com> wrote:

> Hello,
>
> I've a few questions about conflict resolution in a net-split scenario:
>
> 1. What are the default values for cluster.server-quorum-type and
> cluster.server-quorum-ratio? (At the moment "gluster volume info gvol0"
> does not report either.)
>
> 2. If there are 3 mirrored nodes and cluster.server-quorum-ratio is 50,
> and node1 and node2 are net-split from node3, then am I right in thinking
> that the volume on node3 will automatically shut down and prevent access,
> thus preventing a conflict?
>
> 3. If there are 2 mirrored nodes and a net-split happens and
> cluster.server-quorum-ratio is 50, then:
> a) If existing file A is changed on node1 and node2, then the file will
> enter a net-split state, right?
> b) If existing file B is changed on node1 but not node2, then will the
> file enter a net-split state?
> c) If new file C is written on node1 but not node2, then will the file
> enter a net-split state?
>
> 4. Is the outcome of conflict resolution at a file level the same whether
> node3 is a full replica or just an arbiter?
>
> Thank you very much for any advice,
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
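To put Strahil's pointers in one place, the commands might look roughly like the following sketch. 'gvol0' stands in for the volume name from the original question, and the two 'set' commands apply the fixed-quorum replica 2 configuration he describes, which trades split-brain protection for availability:

    # Show the current (or default) value for this volume:
    $ gluster volume get gvol0 cluster.server-quorum-type
    # cluster.server-quorum-ratio is a cluster-wide option, so query "all":
    $ gluster volume get all cluster.server-quorum-ratio
    # List every option together with its default value:
    $ gluster volume set help

    # Allow writes on either brick of a replica 2 volume
    # (the "fixed" client-quorum setup described above):
    $ gluster volume set gvol0 cluster.quorum-type fixed
    $ gluster volume set gvol0 cluster.quorum-count 1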
Hi David,

On Wed, Oct 20, 2021 at 6:23 AM David Cunningham <dcunningham at voisonics.com> wrote:

> Hello,
>
> I've a few questions about conflict resolution in a net-split scenario:
>
> 1. What are the default values for cluster.server-quorum-type and
> cluster.server-quorum-ratio? (At the moment "gluster volume info gvol0"
> does not report either.)

$ gluster volume get gvol0 cluster.server-quorum-type
$ gluster volume get all cluster.server-quorum-ratio

> 2. If there are 3 mirrored nodes and cluster.server-quorum-ratio is 50,
> and node1 and node2 are net-split from node3, then am I right in thinking
> that the volume on node3 will automatically shut down and prevent access,
> thus preventing a conflict?

Yes, the glusterd on node3 will kill the brick processes on that node.

> 3. If there are 2 mirrored nodes and a net-split happens and
> cluster.server-quorum-ratio is 50, then:
> a) If existing file A is changed on node1 and node2, then the file will
> enter a net-split state, right?

Yes; for 2-node setups, you must set the ratio to 51% to avoid this. I'm assuming that the 'changing' you refer to is happening via FUSE mounts on each of these nodes.

> b) If existing file B is changed on node1 but not node2, then will the
> file enter a net-split state?

No.

> c) If new file C is written on node1 but not node2, then will the file
> enter a net-split state?

No.

> 4. Is the outcome of conflict resolution at a file level the same whether
> node3 is a full replica or just an arbiter?

The server quorum feature is not related to replication. It really is a glusterd thing and doesn't really help prevent split-brains, because split-brains are caused by failed I/Os from clients, and the clients can be mounted even outside the storage pool. What you need to ensure is that cluster.quorum-type (which is 'client' quorum) is set to 'auto' for replica 3 and arbiter. It already is by default. See https://docs.gluster.org/en/latest/Administrator-Guide/arbiter-volumes-and-quorum/#split-brains-in-replica-volumes for more info.

-Ravi

> Thank you very much for any advice,
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
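Following Ravi's advice, verifying client quorum and checking whether any files actually ended up in split-brain might look like this (again a sketch, with 'gvol0' as the volume name from the question):

    # Confirm client quorum is 'auto' (the default for replica 3 and arbiter):
    $ gluster volume get gvol0 cluster.quorum-type
    # List any files currently in split-brain on this volume:
    $ gluster volume heal gvol0 info split-brain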