On 24 Apr 2017 9:40 AM, "Ashish Pandey" <aspandey at redhat.com>
wrote:
There is a difference between servers and bricks which we should understand.
When we say m+n = 6+2, we are talking about bricks.
The total number of bricks is m+n = 8.
Now, these bricks can be anywhere, on any server; the only requirement is
that the server is part of the cluster.
You can have all 8 bricks on one server or on 8 different servers.
So there is no *restriction* on the number of servers when you add bricks.
However, the number of bricks you want to add must be a multiple of the
configuration you have.
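To make the multiple-of-configuration rule concrete, here is a minimal sketch; the volume name, server names, and brick paths are hypothetical, not from the thread:

```shell
# Create a 6+2 dispersed volume: 8 bricks total, 2 of redundancy.
# (myvol, server1..server8 and the brick paths are assumed names.)
gluster volume create myvol disperse 8 redundancy 2 \
    server{1..8}:/bricks/brick1

# Expanding the volume means adding a whole new 8-brick disperse
# subvolume, i.e. bricks must be added in multiples of 8:
gluster volume add-brick myvol server{1..8}:/bricks/brick2

# Spread existing data across the new bricks:
gluster volume rebalance myvol start
```

Attempting to add fewer bricks than a full disperse set (e.g. a single brick) is rejected by the CLI, which is exactly the constraint discussed above.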
This is clear, but it doesn't change the result.
Since no one uses gluster to replicate data while losing redundancy (it
would be nonsense), adding bricks means adding servers.
If our servers are already full, with no more slots available for adding
disks, the only solution is to add 8 more servers (at least 1 brick per
server).
In your case it would be 8, 16, 24, ...
"can I add a single node moving from 6:2 to 7:2 and so on ?"
You cannot turn a 6+2 volume into a 7+2 volume. You cannot change the
*configuration* of an existing volume.
You can only add bricks in multiples to increase the storage capacity.
Yes, and this is the worst thing about gluster: the almost total lack of
flexibility. The bigger the cluster, the higher the cost to maintain or
expand it.
If you start with a 6:2 using commodity hardware, you are screwed: your
next upgrade will be 8 servers with 1 disk/brick each.
Yes, gluster doesn't use any metadata server, but I would really prefer to
add 2 metadata servers and 1 storage server when needed than to avoid
metadata servers and be forced to add a bunch of servers every time.
More servers mean higher power costs, more hardware that can fail, and so
on.
Let's assume a replica 3 cluster.
If I need to add 2 TB more, I have to add 3 servers with 2 TB each.
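The arithmetic behind that claim can be sketched in a few lines of shell (a toy model; the sizes are illustrative, not measurements from the thread):

```shell
# In a replica N volume every byte is stored N times, so
# usable space = raw space / replica count.
replica=3
brick_tb=2

# 3 servers with one 2 TB brick each (one replica set):
bricks=3
echo $(( bricks * brick_tb / replica ))   # 2 TB usable from 6 TB raw

# Growing by another 2 TB usable needs 3 more 2 TB bricks,
# i.e. a whole new replica set and 6 TB of extra raw disk:
bricks=6
echo $(( bricks * brick_tb / replica ))   # 4 TB usable from 12 TB raw
```

This is why, with one brick per server, every capacity increment costs a full replica set of servers rather than a single machine.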
Ceph, Lizard, Moose and others allow adding a single server/disk; they
then rebalance data across the cluster, freeing up used space by making
use of the new disk.
I thought this lack of flexibility had been addressed in some way in the
latest version...