On Sun, Apr 30, 2017 at 2:04 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:

> 2017-04-30 10:13 GMT+02:00 <lemonnierk at ulrar.net>:
> > I was (I believe) the first one to run into the bug; it happens, and I
> > knew it was a risk when installing gluster.
>
> I know.
>
> > But since then I haven't seen any warnings anywhere except here. I agree
> > with you that it should be mentioned in big bold letters on the site.
> >
> > It might even be worth adding a warning directly on the CLI when trying
> > to add bricks if sharding is enabled, to make sure nobody destroys a
> > whole cluster because of a known bug.
>
> Exactly. This is making me angry.
>
> Even $BigVendor usually releases a security bulletin, for example:
> https://support.citrix.com/article/CTX214305
> https://support.citrix.com/article/CTX214768
>
> Immediately after those bugs were discovered, a report was made available
> (on the official website, not on a mailing list) telling users which
> operations should be avoided until a fix was ready.
>
> Gluster doesn't do this. There is a huge bug that isn't referenced in the
> official docs.
>
> I'm not acting like a customer; I'm just asking for some transparency.
>
> Even if this is an open source project, nobody should play with user data.
> This bug (or, better, these bugs) have been known for some time, and there
> is NOT A WORD about them in any official docs or on the website.
>
> This is not a rare bug: it *always* loses data when VMs and sharding are
> combined with a rebalance. The feature should be disabled, or users should
> be warned somewhere on the website, instead of all of them being forced to
> dig through mailing list archives.
>
> Anyway, I've just asked for a feature: simplifying the add-brick process.
> The Gluster devs are free to ignore it, but if they are interested in
> something similar, I'm willing to provide more info (as far as I can; I'm
> not a developer).
>
> I really love gluster. The lack of a metadata server is awesome, and files
> stored "verbatim", with no alteration, is amazing (almost all SDS alters
> files when storing them on disk). But being forced to add bricks in
> multiples of the replica count makes gluster very expensive. (Yes, there
> is a workaround with multiple steps, but it is error-prone; that is why
> I'm asking to simplify this phase by allowing users to add a single brick
> to a replica X volume, with automatic member replacement and rebalance.)

IMHO it is difficult to implement what you are asking for without a metadata
server which stores where each replica is kept.
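For reference, the multi-step workaround looks roughly like this today (a
sketch only; the hostnames, brick paths and the 2x3 "myvol" layout are
invented, and the exact sequence depends on the existing layout):

  # free one brick on an existing server by moving it to the new server
  gluster volume replace-brick myvol node1:/bricks/b2 node4:/bricks/b1 commit force
  gluster volume heal myvol info     # wait for self-heal to finish first
  # repeat replace-brick/heal until a full replica set's worth of bricks
  # is free, then add those back as a new replica set...
  gluster volume add-brick myvol replica 3 node1:/bricks/b2 node2:/bricks/b3 node4:/bricks/b2
  # ...and rebalance, which is the very step the sharding bug makes dangerous
  gluster volume rebalance myvol start

--
Pranith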
On Mon, May 1, 2017 at 9:53 PM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote:

> On Sun, Apr 30, 2017 at 2:04 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> [...] I'm asking to simplify this phase by allowing users to add a single
>> brick to a replica X volume, with automatic member replacement and
>> rebalance.
>
> IMHO it is difficult to implement what you are asking for without a
> metadata server which stores where each replica is kept.

Another way is probably loading replicate on top of distribute, but that is
an architectural change and would need a lot of testing and fixing of corner
cases. I don't think it would be any easier to get done.
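Very roughly, the two graph shapes (a simplified sketch with two replica
pairs; real volume graphs contain many more xlators):

  today: distribute over replicate      proposed: replicate over distribute

           DHT                                     AFR
          /   \                                   /   \
       AFR     AFR                             DHT     DHT
       / \     / \                             / \     / \
     b1   b2 b3   b4                         b1   b3 b2   b4

With replicate on top, each distribute subvolume could in principle grow by a
single brick at a time.

--
Pranith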
2017-05-01 18:23 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:

> IMHO it is difficult to implement what you are asking for without a
> metadata server which stores where each replica is kept.

Can't you distribute a sort of file mapping to each node? AFAIK gluster
already stores some metadata in the cluster; what is missing is a mapping
between each file/shard and its brick. Maybe a simple DB (just as an idea:
sqlite, berkeleydb, ...) stored in a fixed location on gluster itself and
replicated across the nodes.
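Just to make the idea concrete, something like this (the schema, the file
name and the example paths are all invented; nothing like it exists in
gluster today):

  # one table mapping each file/shard to the bricks holding its replicas
  sqlite3 shardmap.db "
    CREATE TABLE IF NOT EXISTS replica_map (
      file_path TEXT NOT NULL,   -- e.g. /.shard/<gfid>.42
      brick     TEXT NOT NULL,   -- e.g. node2:/bricks/b1
      PRIMARY KEY (file_path, brick)
    );"
  # adding a single brick would then be an update of some rows plus a data
  # move, instead of being constrained by a fixed hash layout:
  sqlite3 shardmap.db "
    UPDATE replica_map SET brick = 'node4:/bricks/b1'
     WHERE brick = 'node1:/bricks/b2';"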