2017 Aug 25
4
GlusterFS as virtual machine storage
> This is true even if I manage locking at application level (via virlock
> or sanlock)?
Yes. Gluster has its own quorum; you can disable it, but that's just a
recipe for disaster.
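For reference, these are the standard Gluster quorum options being discussed; the volume name "gv0" is a placeholder. This is a sketch of the settings, not a recommendation, and the last (commented-out) line is exactly the "recipe for disaster" the post warns against on a two-node replica.

```shell
# Server-side quorum: glusterd stops bricks if too few peers are reachable
gluster volume set gv0 cluster.server-quorum-type server

# Client-side quorum: writes require a majority of replica bricks ("auto")
gluster volume set gv0 cluster.quorum-type auto

# Disabling client quorum (what the post advises against -- risks split-brain):
# gluster volume set gv0 cluster.quorum-type none
```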
> Also, on a two-node setup it is *guaranteed* for updates to one node to
> put offline the whole volume?
I think so, but I never took the chance so who knows.
> On the other hand, a 3-way
2017 Aug 25
0
GlusterFS as virtual machine storage
On 25/08/2017 6:50 PM, lemonnierk at ulrar.net wrote:
> Free from a lot of problems, but apparently not as good as a replica 3
> volume. I can't comment on arbiter, I only have replica 3 clusters. I
> can tell you that my colleagues setting up 2 nodes clusters have _a lot_
> of problems.
I run Replica 3 VM hosting (gfapi) via a 3 node proxmox cluster. Have
done a lot of rolling node updates, power failures etc, never had a
problem. Performance is better than any other DFS I've tried (Ceph,
lizard/moose). Never did get DRBD working.
nb: ZFS Bricks, w...