Shreyansh Shah
2020-Sep-30 15:22 UTC
[Gluster-users] Description of performance.cache-size
Hi Strahil,

Thanks for taking the time to help me. This is not a hyperconverged setup. We have 7 nodes with 2 bricks on each node, i.e. a 14-brick distributed setup. The host on which I saw the increased RAM usage is a client running glusterfs client version 5.10.

On Wed, Sep 30, 2020 at 8:42 PM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> Sadly I can't help much here.
>
> Is this a hyperconverged setup (host is also a client)?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 29 September 2020 at 18:29:20 GMT+3, Shreyansh Shah <shreyansh.shah at alpha-grep.com> wrote:
>
> Hi All,
> Can anyone help me out with this?
>
> On Tue, Sep 22, 2020 at 2:59 PM Shreyansh Shah <shreyansh.shah at alpha-grep.com> wrote:
> > Hi,
> > We are using distributed gluster version 5.10 (7 nodes with 2 bricks per node, i.e. 14 bricks total).
> >
> > We have set the performance.cache-size parameter to 8GB on the server. We assumed that this parameter indicates the amount of RAM that will be used on the client machine (i.e. up to 8 GB of RAM for data caching at clients). But we observed that on one machine the RAM usage of the glusterfs process was around 17GB.
> >
> > So we want to know whether our understanding of the parameter is correct, or whether there is something else we have missed.
> >
> > Below are the options configured on the glusterfs server; please advise if we can add/tune some parameters to extract more performance.
> > storage.health-check-interval: 10
> > performance.client-io-threads: on
> > performance.cache-refresh-timeout: 60
> > performance.cache-size: 8GB
> > transport.address-family: inet
> > nfs.disable: on
> > server.keepalive-time: 60
> > client.keepalive-time: 60
> > network.ping-timeout: 90
> >
> > --
> > Regards,
> > Shreyansh Shah
>
> --
> Regards,
> Shreyansh Shah
> ________
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

--
Regards,
Shreyansh Shah
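One way to sanity-check the cache-size assumption is to watch the client process's resident memory over time rather than relying on a single observation. A minimal sketch that reads VmRSS from /proc on Linux; the PID used below is a placeholder (the running script's own), and in practice you would substitute the glusterfs client PID, e.g. from `pgrep -f glusterfs`:

```python
import os

def rss_kib(pid: int) -> int:
    """Return a process's resident set size (VmRSS) in KiB, read from /proc."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    raise ValueError(f"no VmRSS entry for pid {pid}")

# performance.cache-size: 8GB from the volume options above, in KiB
CACHE_SIZE_KIB = 8 * 1024 * 1024

# Placeholder PID: substitute the glusterfs client's PID in practice.
pid = os.getpid()
print(f"pid {pid} RSS: {rss_kib(pid)} KiB (cache-size budget: {CACHE_SIZE_KIB} KiB)")
```

Sampling this periodically (e.g. from cron) would show whether the client's RSS grows steadily past the configured cache size, which is the pattern described in the thread.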
Strahil Nikolov
2020-Sep-30 15:29 UTC
[Gluster-users] Description of performance.cache-size
Hm... Can you check the cluster op-version via:

gluster volume get all cluster.op-version

and the max version:

gluster volume get all cluster.max-op-version

If you restart the client (umount and then mount), do you have the same memory usage?

In your case the client is 5.10, so you can try updating it to 5.11 (if the Gluster cluster is on 5.11 or higher) and monitor it closely.

Best Regards,
Strahil Nikolov

On Wednesday, 30 September 2020 at 18:22:33 GMT+3, Shreyansh Shah <shreyansh.shah at alpha-grep.com> wrote:

> Hi Strahil,
> Thanks for taking the time to help me. This is not a hyperconverged setup. We have 7 nodes with 2 bricks on each node, i.e. a 14-brick distributed setup. The host on which I saw the increased RAM usage is a client running glusterfs client version 5.10.
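The two `gluster volume get` checks above each print an "Option / Value" row, and the point of running both is to see whether the cluster's working op-version lags behind the maximum it supports. A quick sketch of that comparison; the numeric values here are made-up samples, not output from a live cluster, and the real commands are shown only in the comments:

```shell
# The checks suggested above (run on any node of the Gluster cluster):
#   gluster volume get all cluster.op-version
#   gluster volume get all cluster.max-op-version
# Sample rows stand in for their output below.
op=$(echo "cluster.op-version 50400" | awk '{print $2}')
max=$(echo "cluster.max-op-version 50400" | awk '{print $2}')
if [ "$op" -lt "$max" ]; then
  echo "op-version $op can be raised to $max"
else
  echo "op-version already at maximum ($op)"
fi
```

If the working op-version is below the maximum, it can be raised with `gluster volume set all cluster.op-version <max>` once all peers and clients are on a matching release.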