Looks good mostly.

You can also turn on performance.stat-prefetch, and also set
client.event-threads and server.event-threads to 4.

And if your bricks are on SSDs, then you could also enable
performance.client-io-threads.

And if your bricks and hypervisors are on the same set of machines
(hyperconverged), then you can turn off cluster.choose-local and see if
it helps read performance.

Do let us know what helped and what didn't.

-Krutika

On Thu, Apr 18, 2019 at 1:05 PM <lemonnierk at ulrar.net> wrote:
> Hi,
>
> We've been using the same settings, found in an old email here, since
> v3.7 of gluster for our VM hosting volumes. They've been working fine,
> but since we've just installed a v6 for testing, I figured there might
> be new settings I should be aware of.
>
> So for access through libgfapi (qemu), for VM hard drives, is that
> still optimal and recommended?
>
> Volume Name: glusterfs
> Type: Replicate
> Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: ips1adm.X:/mnt/glusterfs/brick
> Brick2: ips2adm.X:/mnt/glusterfs/brick
> Brick3: ips3adm.X:/mnt/glusterfs/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> features.shard: on
> features.shard-block-size: 64MB
> cluster.data-self-heal-algorithm: full
> network.ping-timeout: 30
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> Thanks!
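Concretely, assuming the volume name "glusterfs" from the info quoted
above, those suggestions would map onto gluster CLI calls along these
lines (a sketch; worth double-checking the option names against
"gluster volume set help" on your release before applying):

    gluster volume set glusterfs performance.stat-prefetch on
    gluster volume set glusterfs client.event-threads 4
    gluster volume set glusterfs server.event-threads 4
    # only if the bricks are on SSDs:
    gluster volume set glusterfs performance.client-io-threads on
    # only if hyperconverged:
    gluster volume set glusterfs cluster.choose-local off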
On Fri, Apr 19, 2019 at 06:47:49AM +0530, Krutika Dhananjay wrote:
> Looks good mostly.
> You can also turn on performance.stat-prefetch, and also set

Ah, the corruption bug has been fixed, I missed that. Great!

> client.event-threads and server.event-threads to 4.

I didn't realize that would also apply to libgfapi? Good to know,
thanks.

> And if your bricks are on SSDs, then you could also enable
> performance.client-io-threads.

I'm surprised by that, as the doc says "This feature is not recommended
for distributed, replicated or distributed-replicated volumes." Since
this volume is just a replica 3, shouldn't this stay off? The disks are
all NVMe, which I assume would count as SSD.

> And if your bricks and hypervisors are on the same set of machines
> (hyperconverged), then you can turn off cluster.choose-local and see
> if it helps read performance.

Thanks, we'll give those a try!
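One way to tell whether toggling cluster.choose-local (or any of the
other options) actually helps read performance is GlusterFS's built-in
profiler; this volume already has diagnostics.count-fop-hits and
diagnostics.latency-measurement enabled, which is what the profiler
builds on. A rough sketch:

    gluster volume profile glusterfs start
    # run the usual VM read workload for a while, then:
    gluster volume profile glusterfs info
    gluster volume profile glusterfs stop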