Hi all,

Are there any scale limitations in terms of how many nodes can be in a single Gluster cluster, or how much storage capacity can be managed in a single cluster? What are some of the large deployments out there that you know of?

Thanks,
Mayur
Hi,

After ~2500 bricks it takes too long for the bricks to come back online after a reboot, so I consider ~2500 bricks an upper limit per cluster. I have two 40-node, 19 PiB clusters. Each has a single large EC (erasure-coded) volume and is used for backup/archive purposes.

On Tue, Oct 31, 2017 at 12:51 AM, Mayur Dewaikar <mdewaikar at commvault.com> wrote:
> Are there any scale limitations in terms of how many nodes can be in a
> single Gluster cluster or how much storage capacity can be managed in a
> single cluster? What are some of the large deployments out there that you
> know of?
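[As an aside, a quick way to watch how long bricks take to come back after a reboot is to poll the CLI. Below is a minimal Python sketch; the volume name is hypothetical, and it assumes the `--xml` output of `gluster volume status` reports one <node> element with a <status> child per brick. Field names may differ between releases, and on some versions auxiliary daemons (self-heal, NFS) may be counted as well, so treat this as a rough progress indicator only.]

```python
import subprocess
import time
import xml.etree.ElementTree as ET

VOLUME = "bigvol"  # hypothetical volume name


def online_bricks(volume):
    """Return (online, total) counts parsed from the CLI's XML output."""
    out = subprocess.run(
        ["gluster", "volume", "status", volume, "--xml"],
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(out)
    # Assumption: each brick shows up as a <node> element whose <status>
    # child is "1" when the brick process is online.
    statuses = [n.findtext("status") for n in root.iter("node")]
    return sum(1 for s in statuses if s == "1"), len(statuses)


while True:
    online, total = online_bricks(VOLUME)
    print(f"{online}/{total} bricks online")
    if total and online == total:
        break
    time.sleep(10)
```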
On Tue, 31 Oct 2017 at 03:32, Mayur Dewaikar <mdewaikar at commvault.com> wrote:
> Are there any scale limitations in terms of how many nodes can be in a
> single Gluster cluster or how much storage capacity can be managed in a
> single cluster? What are some of the large deployments out there that you
> know of?

The current design of GlusterD is not capable of handling very large numbers of nodes in a cluster, especially in node restart/reboot conditions. We have heard of deployments with ~100-150 nodes where things are stable, but in node-reboot scenarios some special tweaking of parameters such as network.listen-backlog is required so that the TCP accept queue does not overflow, which would cause the connections between the bricks and glusterd to fail. The GlusterD2 project will address this aspect of the problem.

Also, since the directory layout is replicated on all the bricks of a volume, mkdir, unlink, and other directory operations are costly, and with a larger number of bricks this hurts latency. We're also working on a project called RIO to address this issue.

--
- Atin (atinm)
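[For readers who want to try the tweak mentioned above, here is a minimal sketch of how it might be applied. The volume name and backlog value are hypothetical, and the exact option name and its default can vary between Gluster releases, so check `gluster volume set help` on your version before applying it.]

```python
import subprocess

VOLUME = "bigvol"   # hypothetical volume name
BACKLOG = 1024      # assumed value; pick something larger than your default

# Raise the accept backlog used by the brick/glusterd listening sockets.
# The option name is taken from the thread above; verify it exists on your
# release before relying on it.
subprocess.run(
    ["gluster", "volume", "set", VOLUME, "network.listen-backlog", str(BACKLOG)],
    check=True,
)

# The kernel silently caps listen() backlogs at net.core.somaxconn, so raise
# that as well (run as root, and persist it via sysctl.conf if it helps).
subprocess.run(["sysctl", "-w", f"net.core.somaxconn={BACKLOG}"], check=True)
```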
Hi All,

Thanks for the responses. I am mainly curious about the performance impact of metadata updates on read/write workloads as the number of nodes increases. Any commentary on the performance impact for the various read/write, random/sequential I/O scenarios as the scale increases? We are not particularly worried about the restart/reboot condition, as that is an edge case for us.

Thanks,
Mayur
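[To put rough numbers on the directory-operation cost described above, a small microbenchmark against a FUSE mount of the volume is often more telling than general guidance. Below is a minimal sketch; the mount path and directory count are hypothetical, and absolute numbers will depend on volume layout, brick count, and network. Tools such as mdtest or smallfile are also commonly used for this kind of metadata benchmarking.]

```python
import os
import time
import uuid

MOUNT = "/mnt/glustervol"   # hypothetical FUSE mount point of the volume
COUNT = 1000                # number of directories to create and remove

# Work in a uniquely named scratch directory so the run is easy to clean up.
base = os.path.join(MOUNT, f"mdtest-{uuid.uuid4().hex}")
os.mkdir(base)

start = time.monotonic()
for i in range(COUNT):
    os.mkdir(os.path.join(base, f"d{i}"))
mkdir_s = time.monotonic() - start

start = time.monotonic()
for i in range(COUNT):
    os.rmdir(os.path.join(base, f"d{i}"))
rmdir_s = time.monotonic() - start

os.rmdir(base)
print(f"mkdir: {COUNT / mkdir_s:.0f} ops/s, rmdir: {COUNT / rmdir_s:.0f} ops/s")
```

[Comparing the ops/s figures on a small volume versus one with many more bricks gives a concrete feel for how directory-operation latency scales.]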