Hello,

I'm running oVirt with GlusterFS as storage, on a 10 Gb network.

Gluster version: glusterfs 6.10

Configuration:

# gluster volume info data2

Volume Name: data2
Type: Distribute
Volume ID: 3fc4d067-f845-47bc-beae-2be0106116b9
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: engine.pt.ags.corp:/storage/brick1
Options Reconfigured:
performance.client-io-threads: off
server.event-threads: 2
client.event-threads: 4
cluster.choose-local: on
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on

On the same disks I ran these tests:

On the NFS storage:

# dd if=/dev/zero of=test5.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.70022 s, 632 MB/s

# dd if=/dev/zero of=test5.img bs=5G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.94035 s, 730 MB/s

On the Gluster storage:

# dd if=/dev/zero of=test5.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 2.22618 s, 482 MB/s

# dd if=/dev/zero of=test5.img bs=5G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 4.10337 s, 523 MB/s

Does this mean that NFS is faster than Gluster?
How can I improve Gluster performance?

Thanks

--
Com os melhores cumprimentos | Kind Regards | Meilleures salutations | Met vriendelijke groeten,

José Ferradeira
T.: +351 214 261 698
---------------------------------------------
Logicworks Tecnologias de Informática
http://www.logicworks.pt
www.acloud.pt
hosting and virtualization services
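One detail worth noting about the bs=5G runs: on Linux, a single dd read/write is capped at 2147479552 bytes (0x7FFFF000), which is exactly the byte count both "bs=5G count=1" tests report, so they measured roughly 2.1 GB of writes rather than 5 GiB. A minimal variant that writes the full 5 GiB in 1 GiB chunks, keeping the same dsync behaviour, would be:

# Write 5 x 1 GiB blocks, syncing each block to disk as in the tests above;
# growing count instead of bs stays under the per-call 2147479552-byte cap.
dd if=/dev/zero of=test5.img bs=1G count=5 oflag=dsync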
Hello,

I can see that you are using a single brick for the volume. To get the most out of a distributed volume, increase the number of bricks and see whether there is any improvement.

Also, GlusterFS ships an optimization profile for oVirt environments; please run "gluster volume set $VOLNAME group virt". This document may also help:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_virtualization/3.6/html/administration_guide/sect-using_red_hat_gluster_storage_as_a_storage_domain

Regards,
Vinayak

On Fri, Apr 9, 2021 at 12:58 AM José Ferradeira <jf at logicworks.pt> wrote:
> Does this mean that NFS is faster than Gluster?
> How can I improve Gluster performance?
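As a sketch of the suggestion above, assuming the volume name data2 from the original post and a default glusterd installation (where the option groups live under /var/lib/glusterd/groups), applying the virt profile and checking the result would look roughly like this:

# Apply the "virt" option group, the preset tuned for oVirt/VM image workloads
gluster volume set data2 group virt

# Verify which options are now listed under "Options Reconfigured"
gluster volume info data2

The group is applied as a batch of individual volume options, so the group file on the server can be reviewed option by option before applying it.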
Yes, FUSE is slower, as it runs in user space, which leads to more system calls than when using NFS. Amar explained this in
https://lists.gluster.org/pipermail/gluster-users/2020-November/038996.html

Yet Gluster (with replication) can tolerate a single node failure, while NFS requires cluster software like corosync/pacemaker plus a shared storage device.

First of all, start testing with a real-world workload. Next, take profile data from the server and the client, and verify that the client is connected to all bricks. Then, if you use hardware RAID, ensure that LVM and XFS are aligned properly. Another idea is to test different settings for the client and server event threads (more is not always better).

Best Regards,
Strahil Nikolov

On Thursday, April 8, 2021, 22:28:10 GMT+3, José Ferradeira <jf at logicworks.pt> wrote:
> Does this mean that NFS is faster than Gluster?
> How can I improve Gluster performance?
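A minimal sketch of the profiling and connectivity checks suggested above, again assuming the volume name data2 from the original post and the standard gluster CLI run on one of the storage nodes:

# Start per-brick I/O statistics collection for the volume
gluster volume profile data2 start

# Run the real workload, then dump per-FOP latency and throughput statistics
gluster volume profile data2 info

# Stop collecting once the data has been gathered
gluster volume profile data2 stop

# List the clients connected to each brick, to confirm the FUSE mount
# is connected to every brick of the volume
gluster volume status data2 clients

The profile output breaks latency down per file operation and per brick, which usually makes it clearer whether the time is being spent on the network, on the brick, or on the client side.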