aaron at ajserver.com
2015-Sep-14 17:47 UTC
[Gluster-users] Keeping it Simple replication for HA
Gluster users,

I am looking to implement GlusterFS on my network for large, expandable, and redundant storage.

I have 5 servers with 1 brick each. All I want is simple replication that requires at least 3 of the 5 bricks to have a copy of the data, so I can lose any 2 bricks without data loss. I have tried replica 3 with 5 bricks, but it complains that my number of bricks must be a multiple of my replica count.

Is there a simple replication method like replica=3 with 5 bricks in GlusterFS? Is there a better or different technology I can use for this?

Thanks,
Aaron
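P.S. In case it helps, the create command I tried looked roughly like this (hostnames and brick paths below are just placeholders):

    # fails: 5 bricks is not a multiple of the replica count (3)
    gluster volume create gv0 replica 3 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 \
        server4:/data/brick1 server5:/data/brick1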
On 09/14/2015 10:47 AM, aaron at ajserver.com wrote:
> simple replication that requires at least 3 of the 5 bricks have a copy
> of the data so I can lose any 2 bricks without data loss.

This is not possible with GlusterFS; you have to specify up front which brick holds which replica.

> I have tried replica 3 with 5 bricks but it seems to complain that my
> # of bricks must be a multiple of my replica count.

The old docs say "should", but I guess the new software says "must".

> Is there a better or different technology I can use for this?

I think CephFS will allow for "at least 3 of 5", but I don't know the details.

Regards,
--
Alex Chekholko chekh at stanford.edu
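P.S. To illustrate the fixed placement: with replica 3 you would need 3, 6, 9, ... bricks. A 6-brick create (placeholder hostnames again) gives a 2x3 distributed-replicate volume, where bricks 1-3 form one replica set, bricks 4-6 the other, and each file lives entirely in one set:

    # accepted: 6 bricks = 2 replica sets of 3, fixed at create time
    gluster volume create gv0 replica 3 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 \
        server4:/data/brick1 server5:/data/brick1 server6:/data/brick1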
On Tue, Sep 15, 2015 at 5:47 AM, <aaron at ajserver.com> wrote:
> I have 5 servers with 1 brick each. All I want is a simple replication
> that requires at least 3 of the 5 bricks have a copy of the data so I
> can lose any 2 bricks without data loss.

Have you considered the disperse volume? We'd normally advocate 6 servers for a +2 redundancy factor, though.

Paul C
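P.S. Roughly something like this (untested sketch; hostnames and brick paths are placeholders):

    # dispersed (erasure-coded) volume: 3 data + 2 redundancy across 5 bricks,
    # so any 2 bricks can fail without losing data
    gluster volume create gv0 disperse 5 redundancy 2 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 \
        server4:/data/brick1 server5:/data/brick1
    gluster volume start gv0

Note that a dispersed volume erasure-codes the data rather than keeping full copies, so you get the "any 2 can fail" property at roughly 5/3x the raw data size instead of the 3x that full replication would cost.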