Hi Terry,
There is no constraint on the number of nodes for erasure coded volumes.
However, there are some suggestions to keep in mind.
If you have a 4+2 configuration, that means you can lose a maximum of 2 bricks
at a time without losing access to your volume for IO.
These bricks may fail because of a node crash or a node disconnection. That is
why it is always good to have all 6 bricks on 6 different nodes. If you have 3
bricks on one node and this node goes down, then you
will lose the volume and it will be inaccessible.
So just keep in mind that no single node failure should take down more bricks
than your redundancy count.
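For example, a 4+2 dispersed volume could be created with exactly one brick on
each of 6 nodes, so any single node failure costs only 1 brick, well within the
redundancy of 2. The hostnames and brick paths below are just placeholders:

    # one brick per node: a node crash never removes more than 1 brick
    gluster volume create ecvol disperse 6 redundancy 2 \
        server1:/bricks/ecvol server2:/bricks/ecvol server3:/bricks/ecvol \
        server4:/bricks/ecvol server5:/bricks/ecvol server6:/bricks/ecvol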
----
Ashish
----- Original Message -----
From: "Terry McGuire" <tmcguire at ualberta.ca>
To: gluster-users at gluster.org
Sent: Wednesday, March 29, 2017 11:59:32 PM
Subject: [Gluster-users] Node count constraints with EC?
Hello list. Newbie question: I'm building a low-performance/low-cost storage
service with a starting size of about 500TB, and want to use Gluster with
erasure coding. I'm considering subvolumes of maybe 4+2, or 8+3 or 8+4. I was
thinking I'd spread these over 4 nodes, and add single nodes over time, with
subvolumes rearranged over new nodes to maintain protection from whole node
failures.
However, reading through some RedHat-provided documentation, they seem to
suggest that node counts should be a multiple of 3, 6 or 12, depending on
subvolume config. Is this actually a requirement, or is it only a suggestion for
best performance or something?
Can anyone comment on node count constraints with erasure coded subvolumes?
Thanks in advance for anyone's reply,
Terry
_____________________________
Terry McGuire
Information Services and Technology (IST)
University of Alberta
Edmonton, Alberta, Canada T6G 2H1
Phone: 780-492-9422