On Sun, Apr 30, 2017 at 1:43 PM, <lemonnierk at ulrar.net> wrote:
> > So I was a little bit lucky. If I had had the full hardware setup in
> > place, I probably would have been fired after causing data loss by using
> > software marked as stable.
>
> Yes, we lost our data last year to this bug, and it wasn't a test cluster.
> We still hear about it from our clients to this day.
>
> > It is known that this feature causes data loss, and there is no notice or
> > warning about it in the official docs.
> >
>
> I was (I believe) the first one to run into the bug; it happens, and I knew
> it was a risk when installing gluster.
> But since then I haven't seen any warnings anywhere except here. I agree
> with you that it should be mentioned in big bold letters on the site.
>
After discussion with the 3.10 release maintainer, this was added to the
release notes of 3.10.1:
https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.1.md
But you are right in the sense that this much documentation alone doesn't do
the issue justice.
>
> Might even be worth adding a warning directly in the CLI when trying to
> add bricks if sharding is enabled, to make sure no one will destroy a
> whole cluster because of a known bug.
>
Do you want to raise a bug against the 'distribute' component? If you don't
have the time, let me know and I will do the needful.
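
For illustration, a rough sketch of what such a guard could look like. The
warning text and prompt below are hypothetical, not current gluster
behavior; only the 'volume get' check is an existing command:

    # check whether sharding is enabled on the volume
    $ gluster volume get gv0 features.shard
    Option                                  Value
    ------                                  -----
    features.shard                          on

    # proposed behavior: add-brick asks for confirmation on sharded volumes
    $ gluster volume add-brick gv0 server3:/bricks/b1 server4:/bricks/b1
    WARNING: features.shard is enabled on gv0. add-brick followed by
    rebalance is known to corrupt data on sharded volumes (see the 3.10.1
    release notes). Do you really want to continue? (y/n)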
>
>
> > On 30 Apr 2017 at 12:14 AM, <lemonnierk at ulrar.net> wrote:
> >
> > > I have to agree though, you keep acting like a customer.
> > > If you don't like what the developers focus on, you are free to
> > > try and offer a bounty to motivate someone to look at what you want,
> > > or even better: go and buy a license for one of gluster's commercial
> > > alternatives.
> > >
> > >
> > > On Sat, Apr 29, 2017 at 11:43:54PM +0200, Gandalf Corvotempesta wrote:
> > > > I'm pretty sure that I'll be able to sleep well even after your block.
> > > >
> > > > Il 29 apr 2017 11:28 PM, "Joe Julian" <joe at
julianfamily.org> ha
> scritto:
> > > >
> > > > > No, you proposed a wish. A feature needs described behavior, certainly
> > > > > a lot more than "it should just know what I want it to do".
> > > > >
> > > > > I'm done. You can continue to feel entitled here on the mailing list.
> > > > > I'll just set my filters to bitbucket anything from you.
> > > > >
> > > > > On 04/29/2017 01:00 PM, Gandalf Corvotempesta wrote:
> > > > >
> > > > > I repeat: I've just proposed a feature.
> > > > > I'm not a C developer and I don't know gluster internals, so I can't
> > > > > provide details.
> > > > >
> > > > > I've just asked if simplifying the add-brick process is something
> > > > > that the developers are interested in adding.
> > > > >
> > > > > Il 29 apr 2017 9:34 PM, "Joe Julian" <joe
at julianfamily.org> ha
> > > scritto:
> > > > >
> > > > >> What I said publicly in another email... but so as not to call out
> > > > >> my perception of your behavior publicly, I'd also like to say:
> > > > >>
> > > > >> Acting adversarial doesn't make anybody want to help, especially
> > > > >> not me, and I'm the user community's biggest proponent.
> > > > >>
> > > > >> On April 29, 2017 11:08:45 AM PDT, Gandalf Corvotempesta
> > > > >> <gandalf.corvotempesta at gmail.com> wrote:
> > > > >>>
> > > > >>> Mine was a suggestion.
> > > > >>> Feel free to ignore what gluster users have to say and still keep
> > > > >>> going your own way.
> > > > >>>
> > > > >>> Usually, open source projects tend to follow their users' suggestions.
> > > > >>>
> > > > >>> Il 29 apr 2017 5:32 PM, "Joe Julian"
<joe at julianfamily.org> ha
> > > scritto:
> > > > >>>
> > > > >>>> Since this is an open source community project, not a company
> > > > >>>> product, feature requests like these are welcome, but would be more
> > > > >>>> welcome with either code or at least a well-described method. Broad
> > > > >>>> asks like these are of little value, imho.
> > > > >>>>
> > > > >>>>
> > > > >>>> On 04/29/2017 07:12 AM, Gandalf Corvotempesta wrote:
> > > > >>>>
> > > > >>>>> Anyway, the proposed workaround:
> > > > >>>>> https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
> > > > >>>>> won't work with just a single volume made up of 2 replicated bricks.
> > > > >>>>> If I have a replica 2 volume with server1:brick1 and server2:brick1,
> > > > >>>>> how can I add server3:brick1?
> > > > >>>>> I don't have any bricks to "replace".
> > > > >>>>>
> > > > >>>>> This is something I would like to see implemented in gluster.
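> > > > >>>>>
> > > > >>>>> As far as I can tell, the closest the current CLI gets is raising
> > > > >>>>> the replica count instead, which adds redundancy rather than
> > > > >>>>> capacity (the brick path here is made up for the example):
> > > > >>>>>
> > > > >>>>>     gluster volume add-brick gv0 replica 3 server3:/bricks/brick1
> > > > >>>>>
> > > > >>>>> That turns the 1x2 volume into a 1x3 one: a third copy of the same
> > > > >>>>> data, not more space.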
> > > > >>>>>
> > > > >>>>> 2017-04-29 16:08 GMT+02:00 Gandalf Corvotempesta
> > > > >>>>> <gandalf.corvotempesta at gmail.com>:
> > > > >>>>>
> > > > >>>>>> 2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri
> > > > >>>>>> <pkarampu at redhat.com>:
> > > > >>>>>>
> > > > >>>>>>> Are you suggesting this process be made easier through commands,
> > > > >>>>>>> rather than having administrators figure out how to place the data?
> > > > >>>>>>>
> > > > >>>>>>> [1] http://lists.gluster.org/pipermail/gluster-users/2016-July/027431.html
> > > > >>>>>>>
> > > > >>>>>> The admin should always have the ability to choose where to place
> > > > >>>>>> data, but something easier should be added, like in any other SDS.
> > > > >>>>>>
> > > > >>>>>> Something like:
> > > > >>>>>>
> > > > >>>>>> gluster volume add-brick gv0 new_brick
> > > > >>>>>>
> > > > >>>>>> If gv0 is a replicated volume, add-brick should automatically add
> > > > >>>>>> the new brick and rebalance the data, still keeping the required
> > > > >>>>>> redundancy level.
> > > > >>>>>>
> > > > >>>>>> In case the admin would like to set a custom placement for the
> > > > >>>>>> data, they should specify a "force" argument or something similar.
> > > > >>>>>>
> > > > >>>>>> tl;dr: by default, gluster should preserve data redundancy,
> > > > >>>>>> allowing users to add single bricks without having to think about
> > > > >>>>>> how to place the data. This would make gluster way easier to manage
> > > > >>>>>> and much less error prone, thus increasing the resiliency of the
> > > > >>>>>> whole cluster.
> > > > >>>>>> After all, if you have a replicated volume, it is obvious that you
> > > > >>>>>> want your data to be replicated, and gluster should manage this on
> > > > >>>>>> its own.
> > > > >>>>>>
> > > > >>>>>> Is this something you are planning or considering implementing?
> > > > >>>>>> I know that the lack of a metadata server (a HUGE advantage for
> > > > >>>>>> gluster) means less flexibility, but as there is a manual
> > > > >>>>>> workaround for adding single bricks, gluster should be able to
> > > > >>>>>> handle this automatically.
> > > > >>>>>>
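> > > > >>>>>> For comparison, a sketch of how the same expansion is done by hand
> > > > >>>>>> today: on a replica 2 volume you add bricks in multiples of the
> > > > >>>>>> replica count, then trigger a rebalance (hostnames and brick paths
> > > > >>>>>> here are made up):
> > > > >>>>>>
> > > > >>>>>>     # add a new replica pair, then move existing data onto it
> > > > >>>>>>     gluster volume add-brick gv0 server3:/bricks/brick1 server4:/bricks/brick1
> > > > >>>>>>     gluster volume rebalance gv0 start
> > > > >>>>>>     gluster volume rebalance gv0 status
> > > > >>>>>>
> > > > >>>>>> Note this is the very add-brick-then-rebalance sequence that
> > > > >>>>>> triggers the sharding corruption discussed above.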
> > > > >>>>>
> > > > >>>
> > > > >> --
> > > > >> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > > > >>
> > > > >
> > > > >
> > >
> > >
>
>
--
Pranith