I am looking for guidance on the recommended settings as I convert to
shards. I have read most of the list back through last year, and I think
the conclusion I came to was to keep it simple.

One: It may take months to convert my current VM images to shards. Do you
see any issues with this? My priority is to make sure future images are
distributed as shards.

Two: Settings. My intent is to set the following, based on guidance on the
Red Hat site and what I have been reading here. Do these look okay? Any
additional suggestions?

Modified Settings
====================
features.shard               enable
features.shard-block-size    512MB
data-self-heal-algorithm     full

Current Hardware
====================
Hyper-converged; VMs running on the Gluster nodes
Currently across three servers. Distributed-Replicate - all bricks 1TB SSD
Network - 10GbE connections
Image sizes - up to 300GB

Current Gluster Version
======================
3.8.4

Current Settings
====================
Type: Distributed-Replicate
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
server.allow-insecure: on
performance.readdir-ahead: on
performance.cache-size: 1GB
performance.io-thread-count: 64
nfs.disable: on
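For reference, a minimal sketch of how these options would be applied with
the gluster CLI (the volume name "myvol" is a placeholder; note that the
self-heal option's full key is cluster.data-self-heal-algorithm):

    # enable sharding -- only files created after this point get sharded
    gluster volume set myvol features.shard enable
    gluster volume set myvol features.shard-block-size 512MB
    # heal whole files/shards rather than computing diffs
    gluster volume set myvol cluster.data-self-heal-algorithm full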
> One: It may take months to convert my current VM images to shards. Do you
> see any issues with this? My priority is to make sure future images are
> distributed as shards.

You should be able to do that while your VMs are running. I guess it
depends on your hypervisor, but with KVM just moving the disk to a new
filename while the VM is running should be enough, as it'll create a new
file and copy the data, thus creating the shards. But it'll take a while
for sure.

> Two: Settings. My intent is to set the following, based on guidance on the
> Red Hat site and what I have been reading here. Do these look okay? Any
> additional suggestions?
>
> Modified Settings
> ====================
> features.shard               enable
> features.shard-block-size    512MB

That seems huge to me, especially if you're going for the full heal
algorithm. We're using 64MB and we're quite happy with it. But we don't
have 10GbE connections between the servers, so maybe you're fine.

> data-self-heal-algorithm     full
> [remaining hardware and settings details snipped]

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
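A sketch of the "move the disk while the VM runs" approach Kevin describes,
assuming the guests are managed through libvirt (the domain name "vm1",
disk target "vda", and the destination path are placeholders; older
libvirt versions require a transient domain for this):

    # live-copy the running guest's disk to a new file on the volume;
    # the new file is written through the shard translator, so it ends
    # up sharded, and --pivot switches the guest onto it when done
    virsh blockcopy vm1 vda /gluster/vms/vm1-disk0.img --wait --verbose --pivot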
It's a given, but test it well before going into production. People have
occasionally had problems with corruption when converting to shards. In my
initial tests, enabling sharding took our I/O down to 15Kbps from 300Mbps
without.

> data-self-heal-algorithm full

That could be painful. Any particular reason you've chosen full?

> All Bricks 1TB SSD
> Image Sizes - Up to 300GB

If your images easily fit within the bricks, why do you need sharding in
the first place? It adds an extra layer of complexity and removes the cool
feature of having entire files on each brick, which makes DR and other
things a lot easier.

Doug

On 20 January 2017 at 00:11, Gustave Dahl <gustave at dahlfamily.net> wrote:
> I am looking for guidance on the recommended settings as I convert to
> shards. [original message quoted in full snipped]
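Following Doug's advice to test before production, one way to confirm that
a freshly written image really was sharded is to inspect it on a brick
(the brick and file paths below are placeholders):

    # the shard block size is recorded as an xattr on the base file
    getfattr -d -m . -e hex /bricks/brick1/vms/test.img
    # look for trusted.glusterfs.shard.block-size in the output;
    # the shard pieces themselves live under the hidden .shard directory
    ls /bricks/brick1/.shard | head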
Lindsay Mathieson
2017-Jan-20 14:44 UTC
[Gluster-users] Convert to Shard - Setting Guidance
On 20/01/2017 1:11 PM, Gustave Dahl wrote:
> One: It may take months to convert my current VM images to shards. Do
> you see any issues with this? My priority is to make sure future
> images are distributed as shards.
>
> Two: Settings. My intent is to set the following, based on guidance on
> the Red Hat site and what I have been reading here. Do these look
> okay? Any additional suggestions?

They look good to me, except I would go with a smaller shard size, but
like Kevin, I don't have 10G ethernet or SSD bricks (jellus). You do get
slightly better write performance with the larger shard sizes.

One question - how do you plan to convert the VMs?
- set up a new volume and copy the VM images to that?
- or change the shard setting in place? (I don't think that would work)

--
Lindsay Mathieson
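On Lindsay's in-place question: enabling features.shard only affects files
created after the option is set, so existing images stay whole until they
are rewritten. A sketch of the copy-in-place approach (paths are
placeholders; the VM must be powered off first, or live-copied as Kevin
described):

    # a fresh copy is written through the shard translator, so the copy
    # ends up sharded while the original was not; the rename swaps it in
    cp /gluster/vms/big.img /gluster/vms/big.img.new
    mv /gluster/vms/big.img.new /gluster/vms/big.img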