Hi,
Comments inline.
On Tue, Apr 18, 2017 at 1:11 AM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hi,
>
>
> We have a replica 2 volume and we have an issue with setting a proper
> quorum.
>
> The volumes are used as datastores for VMware/oVirt; the current settings
> for the quorum are:
>
>
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.server-quorum-ratio: 51%
>
>
> Losing the first node, which hosts the first bricks, will take the storage
> domain in oVirt offline, but the FUSE mount point still works fine
> "read/write".
>
This is not possible. When client quorum is set to auto and the replica
count is 2, the first node must be up for reads/writes to happen from the
mount. Maybe you are missing something here.
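You can double-check what the client quorum options are actually set to on
the volume. For example (the volume name "vmstore" below is just a
placeholder; substitute your own):

    # show the effective client-quorum options for the volume
    gluster volume get vmstore cluster.quorum-type
    gluster volume get vmstore cluster.quorum-count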
> Losing the second node, or any other node that hosts only the second
> bricks of the replica pairs, will not affect the oVirt storage domain,
> i.e. the 2nd or 4th nodes.
>
Since the server-quorum-ratio is set to 51%, this is also not possible.
Can you share the volume info here?
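The output of the following would help (again, "vmstore" is a placeholder
for your volume name):

    # full volume configuration and brick layout
    gluster volume info vmstore
    # which bricks/nodes are currently online
    gluster volume status vmstore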
> As I understand it, losing the first brick in a replica 2 volume will
> render the volume read-only
>
Yes, you are correct; losing the first brick will make the volume
read-only.
> But then how does the FUSE mount work in read/write?
> Also, can we add an arbiter node to the current replica 2 volume without
> losing data? If yes, does the rebalance bug "Bug 1440635" affect this
> process?
>
Yes, you can add an arbiter brick without losing data, and bug 1440635 will
not affect that, since only the metadata needs to be replicated on the
arbiter brick.
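As a sketch, assuming a volume named "vmstore" and a new arbiter host
"arbiter1" with a brick at /bricks/vmstore/arb (all placeholders; adjust to
your setup, and note that with a distributed-replicate volume you add one
arbiter brick per replica pair):

    # convert replica 2 to replica 2 + arbiter (replica 3 arbiter 1)
    gluster volume add-brick vmstore replica 3 arbiter 1 \
        arbiter1:/bricks/vmstore/arb
    # watch the metadata being healed onto the new arbiter brick
    gluster volume heal vmstore info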
> And what happens if we set "cluster.quorum-type: none" and the first node
> goes offline?
>
If you set the quorum-type to none in a replica 2 volume, you will be able
to read/write even when only one brick is up (at the cost of risking
split-brain).
For an arbiter volume, quorum-type is auto by default, and that is the
recommended setting.
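If you still want to test it, it is a single volume-set command ("vmstore"
is again a placeholder), but be aware that in a plain replica 2 volume this
allows writes with only one brick up and therefore opens the door to
split-brain:

    # disable client-side quorum enforcement (risky for VM images)
    gluster volume set vmstore cluster.quorum-type none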
HTH,
Karthik
>
> Thank you.
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>