Strahil
2019-Apr-22 04:27 UTC
[Gluster-users] Gluster 5.6 slow read despite fast local brick
Hello Community,

I have been left with the impression that FUSE mounts read from both local and remote bricks, is that right?

I'm running oVirt as a hyperconverged setup, and despite my slow network (currently 1 Gbit/s, to be expanded soon) I was expecting that at least reads from the local brick would be fast. Yet I can't get past 250 MB/s, while the two data bricks are NVMe devices with much higher capabilities.

Is there something I can do about that? Maybe change cluster.choose-local, as I don't see it set on my other volumes? What are the risks associated with that?

Volume Name: data_fast
Type: Replicate
Volume ID: b78aa52a-4c49-407d-bfd8-fdffb2a3610a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
network.ping-timeout: 30
cluster.enable-shared-storage: enable

Best Regards,
Strahil Nikolov
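P.S. For completeness, this is roughly how I check and toggle the option and how I measure the read speed. The commands below are only a sketch: <test_file> is a placeholder and the FUSE mount point depends on your setup (mine is created by oVirt).

# show the current value of the option on this volume
gluster volume get data_fast cluster.choose-local

# prefer the local brick for reads (this is the change I am asking about)
gluster volume set data_fast cluster.choose-local on

# sequential read directly from the NVMe brick, bypassing the page cache
dd if=/gluster_bricks/data_fast/data_fast/<test_file> of=/dev/null bs=1M iflag=direct

# the same file read through the FUSE mount for comparison
dd if=<fuse_mount_point>/<test_file> of=/dev/null bs=1M iflag=direct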
Strahil Nikolov
2019-Apr-22 14:18 UTC
[Gluster-users] Gluster 5.6 slow read despite fast local brick
As I had the option to rebuild the volume, I did so, and it still reads quite a bit slower than before the 5.6 upgrade. I have set cluster.choose-local to 'on', but performance is still the same.

Volume Name: data_fast
Type: Replicate
Volume ID: 888a32ea-9b5c-4001-a9c5-8bc7ee0bddce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/data_fast/data_fast
Brick2: ovirt2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
cluster.choose-local: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Any issues expected when downgrading the version?

Best Regards,
Strahil Nikolov
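P.S. To verify whether reads actually hit the local brick or go over the network, I intend to look at the per-brick counters while the test runs. Rough sketch (profiling adds some overhead, so it stays on only for the duration of the test):

# start collecting per-brick I/O statistics
gluster volume profile data_fast start

# run the read test on the FUSE mount, then dump the counters
gluster volume profile data_fast info

# stop collecting when done
gluster volume profile data_fast stop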