No, you proposed a wish. A feature needs described behavior, certainly a
lot more than "it should just know what I want it to do".
I'm done. You can continue to feel entitled here on the mailing list.
I'll just set my filters to bitbucket anything from you.
On 04/29/2017 01:00 PM, Gandalf Corvotempesta wrote:
> I repeat: I've just proposed a feature.
> I'm not a C developer and I don't know gluster internals, so I can't
> provide details.
>
> I've just asked if simplifying the add-brick process is something that
> developers are interested in adding.
>
> On 29 Apr 2017 9:34 PM, "Joe Julian" <joe at julianfamily.org
> <mailto:joe at julianfamily.org>> wrote:
>
>     What I said publicly in another email ... but so as not to call out my
>     perception of your behavior publicly, I'd also like to say:
>
>     Acting adversarial doesn't make anybody want to help, especially
>     not me, and I'm the user community's biggest proponent.
>
> On April 29, 2017 11:08:45 AM PDT, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com
> <mailto:gandalf.corvotempesta at gmail.com>> wrote:
>
>         Mine was a suggestion.
>         Feel free to ignore what gluster users have to say and
>         keep going your own way.
>
>         Usually, open source projects tend to follow user suggestions.
>
>         On 29 Apr 2017 5:32 PM, "Joe Julian" <joe at julianfamily.org
>         <mailto:joe at julianfamily.org>> wrote:
>
> Since this is an open source community project, not a
> company product, feature requests like these are welcome,
>             but would be more welcome with either code or at least a
>             well-described method. Broad asks like these are of little
> value, imho.
>
>
> On 04/29/2017 07:12 AM, Gandalf Corvotempesta wrote:
>
> Anyway, the proposed workaround:
>
https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
> won't work with just a single volume made up of 2
> replicated bricks.
>                 If I have a replica 2 volume with server1:brick1 and
>                 server2:brick1, how can I add server3:brick1?
>                 I don't have any bricks to "replace".
>
>                 This is something I would like to see implemented in
>                 gluster.
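For what it's worth, today's CLI can already cover the three-server case in one way: changing the replica count while adding the brick. That trades capacity for redundancy rather than keeping replica 2 — a sketch only, with a hypothetical brick path:

```shell
# Turn the existing 1x2 (replica 2) volume into 1x3 (replica 3):
# the single new brick on server3 receives a full third copy of the
# data, so you gain redundancy but no extra capacity.
gluster volume add-brick gv0 replica 3 server3:/data/brick1
```

This answers "how do I add server3:brick1" literally, but not the capacity-expansion case the thread is about, which still needs the multi-step shuffle from the linked blog post.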
>
> 2017-04-29 16:08 GMT+02:00 Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com
> <mailto:gandalf.corvotempesta at gmail.com>>:
>
>                     2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri
>                     <pkarampu at redhat.com <mailto:pkarampu at redhat.com>>:
>
>                         Are you suggesting this process be made easier
>                         through commands, rather than having
>                         administrators figure out how to place
>                         the data?
>
> [1]
>
http://lists.gluster.org/pipermail/gluster-users/2016-July/027431.html
>
>                     Admins should always have the ability to choose
>                     where to place data, but something
>                     easier should be added, as in any other SDS.
>
> Something like:
>
> gluster volume add-brick gv0 new_brick
>
>                     If gv0 is a replicated volume, add-brick should
>                     automatically add the new brick and rebalance the
>                     data, still keeping the
>                     required redundancy level.
>
>                     In case the admin would like to set a custom placement
>                     for data, they should pass a "force" argument or something
>                     similar.
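The manual procedure such a command would have to automate is the chained-layout shuffle from the blog post linked above. For a replica 2 volume growing from two servers to three, one possible sequence — a sketch only; the brick paths are illustrative and new brick directories must be empty — is roughly:

```shell
# Before: gv0 = replica 2, one pair (server1:/data/brick1, server2:/data/brick1)

# 1. Move one copy of the existing pair onto the new server:
gluster volume replace-brick gv0 \
    server2:/data/brick1 server3:/data/brick1 commit force

# 2. Add two fresh pairs so each server carries two bricks and
#    no pair has both copies on the same server:
gluster volume add-brick gv0 \
    server1:/data/brick2 server2:/data/brick2 \
    server2:/data/brick3 server3:/data/brick2

# 3. Spread the existing data across the new pairs:
gluster volume rebalance gv0 start
```

Each of these steps is a place an automatic placement engine could silently go wrong, which is presumably part of why the CLI currently leaves the choice to the administrator.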
>
>                     tl;dr: by default, gluster should preserve data
>                     redundancy, allowing
>                     users to add single bricks without having to think about
>                     how to place data.
>                     This would make gluster far easier to manage and
>                     much less error prone,
>                     thus increasing the resiliency of the whole cluster.
>                     After all, if you have a replicated volume, it's
>                     obvious that you want
>                     your data to be replicated, and gluster should
>                     manage this on its own.
>
>                     Is this something you are planning or considering
>                     implementing?
>                     I know that the lack of a metadata server (this is a
>                     HUGE advantage for
>                     gluster) means less flexibility, but as there is a
>                     manual workaround
>                     for adding
>                     single bricks, gluster should be able to handle
>                     this automatically.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> <mailto:Gluster-users at gluster.org>
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
<http://lists.gluster.org/mailman/listinfo/gluster-users>
>
>
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>