Strahil Nikolov
2020-Jun-29 03:52 UTC
[Gluster-users] Latest NFS-Ganesha Gluster Integration docs
Last time I did storhaug + NFS-Ganesha I used https://github.com/gluster/storhaug/wiki .

I guess you can set up NFS-Ganesha without HA and check the performance before proceeding further.

Have you tuned your I/O scheduler, tuned profile, aligned your PVs, etc.? There are a lot of things that can improve your Gluster performance.

Also, you can check the settings in /var/lib/glusterd/groups/virt. These settings are used by oVirt/RHV and are the optimal settings for virtualization.

P.S.: Red Hat supports Hyperconverged Infrastructure with 512MB shards, while the default shard size is 64MB. You can test a bigger shard size on another volume.

Best Regards,
Strahil Nikolov

On 29 June 2020 at 5:00:22 GMT+03:00, "wkmail at bneit.com" <wkmail at bneit.com> wrote:
>For many years, we have maintained a number of standalone,
>hyperconverged Gluster/Libvirt clusters (Replica 2 + Arbiter) using
>FUSE mounts and sharding.
>
>Performance has been mostly acceptable. The clusters have high
>availability and we have had zero problems over the years, as long as
>we do green-field upgrades to new major versions.
>
>That is the beauty of Gluster in our opinion: it is easy to set up and
>use, and it simply works without much thought.
>
>As we ask more of our VMs, we see that disk I/O is sometimes a
>bottleneck, and I am looking at improving things.
>
>On this list I keep seeing comments that VM performance is better on
>NFS, and a general dissatisfaction with FUSE. So we are looking to see
>for ourselves whether NFS would be an improvement.
>
>We looked into gfapi, but our hosts are mostly Ubuntu and gfapi is not
>built in. We would prefer to stay with stock components for the
>critical underbelly of the infrastructure.
>
>So in looking into NFS, I find instructions for NFS-Ganesha as a
>standalone service, and I see mentions of Gluster drivers, but then I
>see mentions of StorHaug, which seems to still be a work in progress.
>I also see pacemaker-type failover instructions.
>
>Is there a set of instructions for replacing our FUSE mount setup with
>NFS-Ganesha, and for how HA is achieved (with or without StorHaug)?
>
>Any advice in this area would be appreciated.
>
>-wk
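For reference, the tuning steps Strahil mentions map to a handful of commands. A minimal sketch, assuming an existing replica volume named gv0, a separate empty test volume named testvol, and a brick disk at /dev/sdb (all names are illustrative):

    # Apply the oVirt/RHV virtualization settings from
    # /var/lib/glusterd/groups/virt in one step:
    gluster volume set gv0 group virt

    # Check the shard size currently in effect (64MB is the default
    # set by the virt group):
    gluster volume get gv0 features.shard-block-size

    # On a new, still-empty test volume, try the larger shard size
    # Red Hat documents for hyperconverged setups:
    gluster volume set testvol features.shard-block-size 512MB

    # Host-side tuning: a tuned profile suited to a KVM host, and a
    # simple I/O scheduler on the brick disk:
    tuned-adm profile virtual-host
    echo noop > /sys/block/sdb/queue/scheduler   # "none" on multi-queue kernels

    # Align the PV when building the brick LVM stack; the 256K value
    # here is an example and should match your RAID geometry:
    pvcreate --dataalignment 256K /dev/sdb

Note that features.shard-block-size only applies to files created after the change, so it is best experimented with on a volume that does not yet hold VM images.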
On 6/28/2020 8:52 PM, Strahil Nikolov wrote:
> Last time I did storhaug + NFS-Ganesha I used https://github.com/gluster/storhaug/wiki .

Well, that certainly helps, but since I have no experience with Samba, I guess I have to learn about CTDB.

What I see are lots of layers here. Even a simple graphic would help, but I guess I will just have to soldier through it.

> I guess you can set up NFS-Ganesha without HA and check the performance before proceeding further.

Yes, I set up a simple single-node NFS-Ganesha and have begun to play with that using an XFS store. Pretty straightforward.

The next step would be to use the Gluster storage driver, and then figure out the HA part of StorHaug/CTDB and how well it can be run in a hyperconverged scenario. Not exactly like the QuickStart in the Gluster docs, though <grin>.

> Have you tuned your I/O scheduler, tuned profile, aligned your PVs, etc.? There are a lot of things that can improve your Gluster performance.

Yes, we have been doing this a while (since Gluster 3.3) and do tuning. Again, our Gluster performance isn't 'bad' from our perspective. We are just looking to see if there are noticeable gains to be made with NFS vs. the FUSE mount. I suppose if we hadn't seen so many complaints about FUSE on the mailing list we wouldn't have thought much about it <grin>.

Of course, with lots of small files we have always used MooseFS (since 1.6), as that is Gluster's weakness. They make a good combination of tools.

> Also, you can check the settings in /var/lib/glusterd/groups/virt. These settings are used by oVirt/RHV and are the optimal settings for virtualization.

Yes, we always enable the virt settings and they make a big difference.

> P.S.: Red Hat supports Hyperconverged Infrastructure with 512MB shards, while the default shard size is 64MB. You can test a bigger shard size on another volume.

Yes, we noticed a while back that there was a discrepancy between the Red Hat docs saying bigger shards are better (i.e. 512MB) and the 64MB in the virt group. We have played with different settings but didn't really notice much of a difference. You get a smaller number of heals, but they are bigger and take longer to sync.

Does anyone know why the difference, and the reasoning involved?

-WK
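For anyone picking this thread up later: the "Gluster storage driver" step wk describes is NFS-Ganesha's GLUSTER FSAL, which reaches the volume over libgfapi instead of a local XFS path. A minimal single-node sketch, with the volume name gv0 and hostname gluster1 as illustrative placeholders:

    # /etc/ganesha/ganesha.conf -- single-node export of a Gluster
    # volume via the GLUSTER FSAL (no HA yet; storhaug adds that later)
    EXPORT {
        Export_Id = 1;                # unique ID per export
        Path = "/gv0";                # volume root, as seen by libgfapi
        Pseudo = "/gv0";              # NFSv4 pseudo-filesystem path
        Access_Type = RW;
        Squash = No_root_squash;      # VM images are typically root-owned
        SecType = "sys";
        FSAL {
            Name = GLUSTER;
            Hostname = "gluster1";    # any node in the trusted pool
            Volume = "gv0";
        }
    }

A hypervisor would then mount it over stock NFS, with no Gluster components needed on the client side:

    mount -t nfs -o vers=4.1 gluster1:/gv0 /mnt/vmstore

The libgfapi dependency lives in the nfs-ganesha-gluster package on the server, which sidesteps the concern about gfapi support in the stock Ubuntu hypervisor stack.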