Kevin Lemonnier
2016-Nov-12 09:21 UTC
[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks
> Don't get me wrong, but I'm seeing too many "critical" issues like file
> corruptions, crashes or similar recently.
> Is gluster ready for production?
> I'm scared about placing our production VMs (more or less 80) on gluster;
> in case of corruption I'll lose everything.

We've had a lot of problems in the past, but at least for us 3.7.12 (and
3.7.15) seems to be working pretty well as long as you don't add bricks. We
started doing multiple little clusters and abandoned the idea of one big
cluster, and have had no issues since :)

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
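For context, "adding bricks" here means the usual volume-expansion sequence,
roughly the commands below; the volume name and new hosts are made-up
examples, not the exact setup Kevin used:

    # Expansion sequence reported to corrupt sharded VM images on 3.7.x.
    # "vmstore" and the gluster4-6 host/brick paths are hypothetical.
    gluster volume add-brick vmstore replica 3 \
        gluster4:/data/brick gluster5:/data/brick gluster6:/data/brick
    gluster volume rebalance vmstore start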
Gandalf Corvotempesta
2016-Nov-12 10:58 UTC
[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks
On 12 Nov 2016 10:21, "Kevin Lemonnier" <lemonnierk at ulrar.net> wrote:
> We've had a lot of problems in the past, but at least for us 3.7.12 (and 3.7.15)
> seems to be working pretty well as long as you don't add bricks. We started doing
> multiple little clusters and abandoned the idea of one big cluster, had no
> issues since :)

Well, adding bricks could be useful... :) Having to create multiple clusters
is not a solution and is much more expensive. And if you corrupt data in a
single cluster, you still have issues.

I think it would be better to add fewer features and focus more on stability.
In software-defined storage, stability and consistency are the most important
things. I'm also subscribed to the moosefs and lizardfs mailing lists, and I
don't recall a single data corruption/data loss event there.

In gluster, after some days of testing, I've found a huge data corruption
issue that is still unfixed on Bugzilla: if you change the shard size on a
populated cluster, you break all existing data. Try to do this on a cluster
with working VMs and see what happens... a single CLI command breaks
everything, and it is still unfixed.
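To make that concrete, the single CLI command in question is presumably the
shard-block-size volume option; the volume name below is a made-up example:

    # "vmstore" is a hypothetical volume with sharding enabled and VM
    # images already written at the default 64MB shard size.
    gluster volume get vmstore features.shard-block-size

    # The single command being described: changing the shard size on a
    # populated volume, which reportedly corrupts all existing sharded
    # files rather than being refused or applied only to new files.
    gluster volume set vmstore features.shard-block-size 512MB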
Gandalf Corvotempesta
2016-Nov-12 16:40 UTC
[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks
2016-11-12 10:21 GMT+01:00 Kevin Lemonnier <lemonnierk at ulrar.net>:
> We've had a lot of problems in the past, but at least for us 3.7.12 (and 3.7.15)
> seems to be working pretty well as long as you don't add bricks. We started doing
> multiple little clusters and abandoned the idea of one big cluster, had no
> issues since :)

I was thinking about this. If you meant creating multiple volumes, OK, but
having to create multiple clusters is a bad idea. Gluster performs better
with multiple nodes; if you have to split the infrastructure (and nodes)
into multiple clusters, you'll hurt performance.

It's like creating multiple enterprise SANs to avoid data corruption. Data
corruption must be addressed with firmware/software updates, not by creating
multiple storage systems. If you create two gluster storages, you'll just
get the same issues in two places. It's a bad workaround, not a solution.
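For what it's worth, the "multiple volumes on one cluster" alternative looks
roughly like this; all hostnames, volume names, and brick paths are invented
for illustration:

    # One trusted pool, several independent replica-3 volumes, so that
    # a problem on one volume doesn't touch the others. Run from
    # gluster1; gluster2/gluster3 and the paths are hypothetical.
    gluster peer probe gluster2
    gluster peer probe gluster3

    gluster volume create vms1 replica 3 \
        gluster1:/data/brick-vms1 gluster2:/data/brick-vms1 gluster3:/data/brick-vms1
    gluster volume create vms2 replica 3 \
        gluster1:/data/brick-vms2 gluster2:/data/brick-vms2 gluster3:/data/brick-vms2

    gluster volume start vms1
    gluster volume start vms2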