Hey everyone!
I have deployed Gluster on 3 nodes, each with 4 SSDs and a 10Gb Ethernet connection.

The storage is configured with 3 Gluster volumes; every volume has 12 bricks (4 bricks on each server, 1 per SSD).

With 'features.shard' off, my write speed (measured with the 'dd' command; an illustrative invocation is below) is approximately 250 MB/s, and with the feature on the write speed drops to around 130 MB/s.

--------- gluster version 3.8.13 --------

Volume Name: data
Number of Bricks: 4 x 3 = 12
Bricks:
Brick1: server1:/brick/data1
Brick2: server1:/brick/data2
Brick3: server1:/brick/data3
Brick4: server1:/brick/data4
Brick5: server2:/brick/data1
.
.
.
Options Reconfigured:
performance.strict-o-direct: off
cluster.nufa: off
features.shard-block-size: 512MB
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: on
performance.readdir-ahead: on

Any idea how to improve my performance?
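An illustrative dd invocation of the kind used (the mount path and sizes are examples, not the exact command):

    # write 4 GB through the FUSE mount with O_DIRECT and a final fsync,
    # keeping the client page cache out of the measurement
    dd if=/dev/zero of=/mnt/glusterfs/data/testfile bs=1M count=4096 oflag=direct conv=fsync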
Hi,

Speaking from shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of each write - creating the shard, writing to it, and updating the aggregated file size - all of which require one network call each, and these get blown up into many more network calls once they reach AFR (replicate). It also means that, with preallocated images, performance with and without shard should be the same.

Also, could you enable client-io-threads and see if that improves performance? (A sketch of both suggestions is below, after the quoted message.)

There's a patch that is part of 3.11.1 which has been found to improve performance for VM workloads in our testing - https://review.gluster.org/#/c/17391/ - so you could give that version a try.

-Krutika

On Mon, Sep 4, 2017 at 7:48 PM, Roei G <ganor.roei98 at gmail.com> wrote:
> [snip]
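A sketch of the two suggestions above, assuming a volume named 'data' FUSE-mounted at /mnt/glusterfs/data (both names are illustrative; adjust to your setup):

    # enable client-side I/O threads on the volume
    gluster volume set data performance.client-io-threads on

    # preallocate a 100G raw image so shard doesn't have to create and
    # extend shards on first write; fallocate should work over the FUSE mount
    fallocate -l 100G /mnt/glusterfs/data/vm1.img

    # or, if creating the image with qemu-img:
    qemu-img create -f raw -o preallocation=full /mnt/glusterfs/data/vm1.img 100G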
From my understanding of gluster, that is to be expected: instead of having to stat a single file without sharding, you now have to stat multiple files when you shard. Remember that gluster is not so great at dealing with "lots" of files, so if you have a single 100GB file/image stored in gluster and it gets sharded into 512MB pieces, you now have to stat ~195 files instead of one. The more files you have to stat, the slower gluster gets, especially in replica setups, since my understanding is that each file has to be stat'ed on every brick.

On the other hand, if you have a single file and one of your nodes reboots, I/O to the whole 100GB is stopped while healing if you are *not* using sharding, so you have to wait for the whole 100GB to be synced. It is a trade-off you will have to evaluate: single-file throughput vs. performance while healing.

If you are storing VM images, you may want to look into applying the gluster settings for VMs (see the example command after the quoted message below):
https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example

This may help improve performance, but I think you will still get higher throughput with a single file than with shards, whereas healing will be faster with sharding, since you only heal the modified shards.

Also be careful: once sharding is enabled and you have sharded files, disabling it will corrupt your sharded VMs.

Diego

On Sun, Sep 3, 2017 at 12:22 PM, Roei G <ganor.roei98 at gmail.com> wrote:
> [snip]
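For example, assuming your volume is named 'data': builds that package the group file can apply the whole set in one command (it reads /var/lib/glusterd/groups/virt on the servers); otherwise, set the options listed in group-virt.example one at a time:

    # apply the packaged 'virt' option group in one shot
    gluster volume set data group virt

    # or set individual options from the file, e.g.:
    gluster volume set data performance.quick-read off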