It's a given, but test it well before going into production. People have
occasionally had problems with corruption when converting to shards. In my
initial tests, enabling sharding took our I/O down to 15Kbps from 300Mbps
without.

> data-self-heal-algorithm full

That could be painful. Any particular reason you've chosen full?

> All Bricks 1TB SSD
> Image Sizes - Up to 300GB

If your images easily fit within the bricks, why do you need sharding in
the first place? It adds an extra layer of complexity & removes the cool
feature of having entire files on each brick, making DR & things a lot
easier.

Doug

On 20 January 2017 at 00:11, Gustave Dahl <gustave at dahlfamily.net> wrote:

> I am looking for guidance on the recommended settings as I convert to
> shards. I have read most of the list back through last year, and I think
> the conclusion I came to was to keep it simple.
>
> One: It may take months to convert my current VM images to shards; do you
> see any issues with this? My priority is to make sure future images are
> distributed as shards.
>
> Two: Settings. My intent is to set them as follows, based on guidance on
> the Red Hat site and what I have been reading here. Do these look okay?
> Additional suggestions?
>
> Modified Settings
> ====================
> features.shard              enable
> features.shard-block-size   512MB
> data-self-heal-algorithm    full
>
> Current Hardware
> ====================
> Hyper-converged. VMs running Gluster nodes.
> Currently across three servers. Distributed-Replicate - all bricks 1TB SSD
> Network - 10GB connections
> Image sizes - up to 300GB
>
> Current Gluster Version
> ======================
> 3.8.4
>
> Current Settings
> ====================
> Type: Distributed-Replicate
> Number of Bricks: 4 x 3 = 12
> Transport-type: tcp
> Options Reconfigured:
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> server.allow-insecure: on
> performance.readdir-ahead: on
> performance.cache-size: 1GB
> performance.io-thread-count: 64
> nfs.disable: on
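For reference, the "Modified Settings" quoted above correspond to per-volume
options applied with the gluster CLI. A minimal sketch, assuming a volume
named gv0 (the volume name is a placeholder; note that the self-heal option's
full name is cluster.data-self-heal-algorithm, and sharding only applies to
files created after it is enabled):

    # Enable sharding and choose the shard size (only affects newly created files)
    gluster volume set gv0 features.shard enable
    gluster volume set gv0 features.shard-block-size 512MB

    # Use the full self-heal algorithm rather than the default
    gluster volume set gv0 cluster.data-self-heal-algorithm full

    # Read back the effective values to confirm
    gluster volume get gv0 features.shard
    gluster volume get gv0 features.shard-block-size
    gluster volume get gv0 cluster.data-self-heal-algorithm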
> data-self-heal-algorithm full

There was a bug in the default algorithm, at least for VM hosting, not that
long ago. I'm not sure if it was fixed, but I know we were told here to use
full instead; I'm guessing that's why he's using it too.

> If your images easily fit within the bricks, why do you need sharding in
> the first place? It adds an extra layer of complexity & removes the cool
> feature of having entire files on each brick, making DR & things a lot
> easier.

Because healing a VM disk without sharding freezes it for the duration of
the heal, possibly hours depending on the size. That's just not acceptable.
Unless that's related to the bug in the heal algorithm and it's been fixed?
Not sure.

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
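If you want to see which self-heal algorithm a volume is actually using, the
effective value can be read back with the CLI. A short sketch, again with gv0
as a placeholder volume name (the accepted values are diff, full and reset):

    # Effective value on this volume, including the default if never set
    gluster volume get gv0 cluster.data-self-heal-algorithm

    # Option description shipped with the installed Gluster version
    gluster volume set help | grep -A2 data-self-heal-algorithm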
Lindsay Mathieson
2017-Jan-20 14:40 UTC
[Gluster-users] Convert to Shard - Setting Guidance
On 21/01/2017 12:07 AM, Gambit15 wrote:

> If your images easily fit within the bricks, why do you need sharding
> in the first place? It adds an extra layer of complexity & removes the
> cool feature of having entire files on each brick, making DR & things
> a lot easier.

Because healing with large VM images completes orders of magnitude faster
and consumes far less bandwidth/CPU/disk I/O.

--
Lindsay Mathieson
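To see the per-shard healing described above in practice, heal activity can
be watched while a brick catches up. A small sketch, with gv0 again standing
in for the real volume name:

    # List the files/shards each brick still needs to heal
    gluster volume heal gv0 info

    # Summary count of entries pending heal per brick
    gluster volume heal gv0 statistics heal-count

With sharding enabled, only the shards touched while a brick was down show up
here, rather than the whole VM image.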