On 26-08-2017 01:13, WK wrote:
> Big +1 on what Kevin just said. Just avoiding the problem is the
> best strategy.

Ok, never run Gluster with anything less than replica 2 + arbiter ;)

> However, for the record, and if you really, really want to get deep
> into the weeds on the subject, the Gluster people have docs on
> split-brain recovery:
>
> https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/
>
> and if you Google the topic, there are a lot of other blog posts,
> emails, etc. that discuss it.
>
> I'd recommend reviewing those as well to wrap your head around what
> is going on.

I'll surely take a look at the documentation. I have the "bad" habit
of not putting into production anything I know how to repair/cope
with.

Thanks.

--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
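A quick pointer for readers following the docs link above: the first
diagnostic step the docs walk through is listing which files are
pending heal or actually in split-brain. A minimal sketch, with "data"
as a placeholder volume name:

    # files/gfids with pending heals on the volume
    gluster volume heal data info

    # only the entries that are in split-brain
    gluster volume heal data info split-brain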
On 26-08-2017 07:38, Gionatan Danti wrote:
> I'll surely take a look at the documentation. I have the "bad" habit
> of not putting into production anything I know how to repair/cope
> with.
>
> Thanks.

Mmmm, this should read as:

"I have the "bad" habit of not putting into production anything I do
NOT know how to repair/cope with"

Really :D

Thanks.

--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
Everton Brogliatto
2017-Aug-30 01:57 UTC
[Gluster-users] GlusterFS as virtual machine storage
Ciao Gionatan,

I run Gluster 3.10.x (replica 3 arbiter or 2 + 1 arbiter) to provide
storage for oVirt 4.x and I have had no major issues so far. I have
done online upgrades a couple of times and been through power losses,
maintenance, etc. with no issues. Overall, it is very resilient.

An important thing to keep in mind is your network. I run the Gluster
nodes on a redundant network using bonding mode 1, and I have performed
maintenance on my switches, bringing one of them offline at a time,
without causing problems in my Gluster setup or in my running VMs (a
sketch of such a bond configuration is at the end of this mail).

Gluster's recommendation is to enable jumbo frames across the
subnet/servers/switches you use for Gluster operations. Your switches
must support an MTU of at least 9000 + 208 (an end-to-end MTU test is
sketched at the end of this mail).

There were two occasions where I purposely caused a split-brain
situation, and I was able to heal the files manually (example heal
commands at the end of this mail).

Volume performance tuning can make a significant difference in Gluster.
As others have mentioned previously, sharding is recommended when
running VMs, as it splits big files into smaller pieces, making it
easier for healing to occur. When you enable sharding, the default
shard block size is 4MB, which will significantly reduce your write
speeds. oVirt recommends a shard block size of 512MB. The volume
options you are looking for here are:

features.shard on
features.shard-block-size 512MB

(These are spelled out as commands at the end of this mail.)

I had an experimental replica 2 setup on an older version of Gluster a
few years ago, and it was unstable, corrupted data, and crashed many
times. Do not use replica 2. As others have already said, the minimum
is replica 2 + 1 arbiter (an example create command is at the end of
this mail).

If you have any questions that I can perhaps help with, drop me an
email.

Regards,

Everton Brogliatto

On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti <g.danti at assyoma.it>
wrote:

> On 26-08-2017 07:38, Gionatan Danti wrote:
>
>> I'll surely take a look at the documentation. I have the "bad" habit
>> of not putting into production anything I know how to repair/cope
>> with.
>>
>> Thanks.
>
> Mmmm, this should read as:
>
> "I have the "bad" habit of not putting into production anything I do
> NOT know how to repair/cope with"
>
> Really :D
>
> Thanks.
>
> --
> Danti Gionatan
> Technical Support
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
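P.S. A few sketches of the settings discussed above. First, bonding
mode 1 (active-backup). This is a minimal example in RHEL/CentOS 7
ifcfg style, a common base for oVirt hosts; the interface names and
addresses are placeholders, not taken from the setup described above:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=1 miimon=100"   # mode 1 = active-backup failover
    BOOTPROTO=none
    IPADDR=10.0.0.11
    PREFIX=24
    MTU=9000
    ONBOOT=yes

    # each physical NIC is enslaved to the bond, e.g. ifcfg-eth0
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    MTU=9000
    ONBOOT=yes

With mode 1, only one link carries traffic at a time, which is why one
switch can be taken offline without disturbing Gluster or the VMs.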
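To verify that jumbo frames actually work end to end (setting hosts to
MTU 9000 does not help if a switch in the path drops large frames), a
quick test against a peer node, here assumed to be 10.0.0.12:

    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
    # -M do forbids fragmentation, so the ping fails if any hop
    # cannot pass 9000-byte frames
    ping -M do -s 8972 -c 3 10.0.0.12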
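The manual split-brain healing mentioned above can be done with the
gluster CLI's resolution policies, documented in the split-brain guide
linked earlier in the thread. Volume name and file path below are
placeholders:

    # keep the copy with the most recent modification time
    gluster volume heal data split-brain latest-mtime /vm/disk01.img

    # or keep the bigger copy
    gluster volume heal data split-brain bigger-file /vm/disk01.img

    # or explicitly choose the winning brick for that file
    gluster volume heal data split-brain source-brick \
        node1:/bricks/data /vm/disk01.img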
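The two shard options, spelled out as commands on a hypothetical volume
named "data". Note that the block size only applies to files created
after it is set, so set it before copying VM images onto the volume:

    gluster volume set data features.shard on
    gluster volume set data features.shard-block-size 512MB

    # verify
    gluster volume get data features.shard-block-size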
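Finally, the "replica 2 + 1 arbiter" layout is created by passing an
arbiter count to the create command (hosts and brick paths are
placeholders). The arbiter brick stores only metadata, so it needs far
less space than the data bricks, yet it still breaks the tie that
causes split-brain:

    gluster volume create data replica 3 arbiter 1 \
        node1:/bricks/data node2:/bricks/data node3:/bricks/arbiter
    gluster volume start data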