I'm using libgfapi. Gluster version is 3.8.1-1. Here is the volume info:

Volume Name: virt0
Type: Distributed-Replicate
Volume ID: fb9f428e-b1b5-4136-8b59-19d680237302
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: elkpinfglt01:/vol/virt/virt0
Brick2: elkpinfglt02:/vol/virt/virt0
Brick3: elkpinfglt03:/vol/virt/virt0
Brick4: elkpinfglt04:/vol/virt/virt0
Options Reconfigured:
server.event-threads: 8
client.event-threads: 8
performance.client-io-threads: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 500
storage.owner-uid: 500
server.allow-insecure: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: enable

On Fri, Jan 20, 2017 at 2:23 PM, Kevin Lemonnier <lemonnierk at ulrar.net> wrote:
>> So, is the setup wrong or does gluster not provide high availability?
>
> How exactly is it setup ?
> libgfapi ? fuse ? NFS mount ?
>
> It should work, we're using proxmox at work (which uses KVM) with gluster
> and it does work well. What version of gluster are you using ?
>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
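Since the VMs go through libgfapi, one quick sanity check (assuming your qemu-img build has gluster support; the image name below is just a placeholder) is whether each hypervisor can reach the volume over gfapi directly, and from more than one volfile server:

    # Hypothetical check: open an image on the volume through libgfapi
    # (via qemu-img), bypassing any FUSE mount. vm01.qcow2 is a placeholder.
    qemu-img info gluster://elkpinfglt01/virt0/vm01.qcow2

    # Same check against a second server, to confirm the client can fetch
    # the volfile from more than one node.
    qemu-img info gluster://elkpinfglt02/virt0/vm01.qcow2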
> > Type: Distributed-Replicate
> > Number of Bricks: 2 x 2 = 4

With that setup, you lose quorum if you lose any one node. Brick 1 replicates to brick 2, and brick 3 replicates to brick 4. If either brick in a pair goes down, quorum for that pair falls below 51%, which blocks writes to it under the default settings.

If you've only got 4 servers to play with, I suggest you move to replica 3 arbiter 1. Put the arbiter for servers 1 & 2 on server 3, and the arbiter for servers 3 & 4 on server 1.

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
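For what it's worth, a rough sketch of what that replica 3 arbiter 1 layout could look like as a create command (the volume name and brick paths are placeholders, and the arbiter bricks must be new, empty directories; this is not a drop-in conversion of the existing volume, so data would have to be migrated):

    # Subvolume 1: data on servers 1 & 2, arbiter on server 3
    # Subvolume 2: data on servers 3 & 4, arbiter on server 1
    gluster volume create virt1 replica 3 arbiter 1 \
        elkpinfglt01:/vol/virt/virt1 elkpinfglt02:/vol/virt/virt1 elkpinfglt03:/vol/virt/arb1 \
        elkpinfglt03:/vol/virt/virt1 elkpinfglt04:/vol/virt/virt1 elkpinfglt01:/vol/virt/arb2

With that layout any single server can go down and each replica set still has 2 of its 3 bricks up, so client quorum holds.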