Strahil Nikolov
2022-Mar-05 22:37 UTC
[Gluster-users] Create more would increase performance?
Sharding was created for virtualization, and it provides better performance in distributed-replicated volumes, as each shard is "placed" on top of DHT. This way, when the VM reads a large file (which spans several shards), each shard can be read from a different brick -> speeding up the read.

Also, you can explore libgfapi, which, despite its drawbacks, brings a lot of performance (at least based on several reports on the oVirt list).

Overall, more subvolumes (replica sets) will bring better performance (most probably you will feel it in the reads), and with libgfapi the performance can get even better.

Best Regards,
Strahil Nikolov

On Sun, Mar 6, 2022 at 0:23, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

> Hi
>
> I'm working with kvm/qemu virtualization here.
> I have already activated the virt group.
> However, I am considering making some changes.
> Mostly it works with really big files.
>
> On Sat, Mar 5, 2022 at 18:21, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
> It depends. What kind of workload do you have?
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Mar 5, 2022 at 17:22, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
>
> Hi there.
> Usually I create one gluster volume with one brick, /mnt/data. If I create more than one brick, like server1:/data1 server1:/data2 ..., would this increase overall performance?
> Thanks
> ---
> Gilberto Nunes Ferreira
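To make the "more subvolumes" point concrete, the sketch below shows one way to create a distributed-replicated volume with two replica-3 subvolumes and turn sharding on explicitly. The volume name (vmstore), server names, and brick paths are placeholders, not taken from this thread; the virt group already enables sharding, so the explicit settings are shown only for illustration.

    # Two replica-3 subvolumes (replica sets): a 2x3 distributed-replicated volume.
    # DHT spreads files, and with sharding the individual shards, across both sets.
    gluster volume create vmstore replica 3 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2

    # Sharding, as enabled by the virt group (shown here explicitly):
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB

    gluster volume start vmstore

With two replica sets, the shards of a single large VM image land on both sets, which is what spreads reads of that one file across more bricks.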
Gilberto Ferreira
2022-Mar-05 22:49 UTC
[Gluster-users] Create more would increase performance?
That's nice to hear.
Regarding libgfapi, you mean this:

# gluster volume set VOL_NAME server.allow-insecure on

Can you point me to some docs about creating gluster subvols?
Thanks

On Sat, Mar 5, 2022 at 19:37, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

> Sharding was created for virtualization, and it provides better performance
> in distributed-replicated volumes, as each shard is "placed" on top of
> DHT. This way, when the VM reads a large file (which spans several shards),
> each shard can be read from a different brick -> speeding up the read.
>
> Also, you can explore libgfapi, which, despite its drawbacks, brings a
> lot of performance (at least based on several reports on the oVirt list).
>
> Overall, more subvolumes (replica sets) will bring better performance
> (most probably you will feel it in the reads), and with libgfapi the
> performance can get even better.
>
> Best Regards,
> Strahil Nikolov
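On the libgfapi point: server.allow-insecure is one half of the usual setup. A hedged sketch follows; the volume name vmstore and the image path are placeholders, and the QEMU side assumes a build with GlusterFS support.

    # 1) On the volume:
    gluster volume set vmstore server.allow-insecure on

    # 2) In /etc/glusterfs/glusterd.vol on each server, add the line below,
    #    then restart glusterd:
    #      option rpc-auth-allow-insecure on

    # 3) QEMU/qemu-img can then reach the image over libgfapi instead of FUSE:
    qemu-img info gluster://server1/vmstore/vm1.qcow2

As for subvolumes, there is nothing separate to create: a distributed-replicated volume (see the sketch after Strahil's first message) already consists of replica-set subvolumes, and the volume types are covered in the official docs at https://docs.gluster.org/.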