Hi,

We are evaluating the Dell DSS7000 chassis with 90 disks.
Has anyone used that many bricks per server?
Any suggestions or advice?

Thanks,
Serkan
We have 12 on order. The DSS7000 actually has two nodes in the chassis, and each node accesses 45 bricks. We will be using an erasure coding scheme, probably 24:3 or 24:4; we have not yet sat down and really thought through the exact scheme we will use.

On 15 February 2017 at 14:04, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
>
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?
>
> Thanks,
> Serkan
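For comparison, here is a quick back-of-the-envelope sketch (not from the thread; assumptions are the two schemes mentioned, written as data:redundancy fragments per stripe) of what each choice costs in usable capacity. In Gluster's disperse terms, 24:3 would correspond to a 27-brick disperse set with redundancy 3.

```python
def ec_efficiency(data: int, redundancy: int) -> float:
    """Fraction of raw capacity that is usable under a data:redundancy scheme."""
    return data / (data + redundancy)

# The two candidate schemes from the message above.
for data, redundancy in [(24, 3), (24, 4)]:
    eff = ec_efficiency(data, redundancy)
    print(f"{data}:{redundancy} -> {eff:.1%} usable, "
          f"tolerates {redundancy} brick failures per set")
# 24:3 -> 88.9% usable; 24:4 -> 85.7% usable
```

So the extra redundancy fragment in 24:4 costs roughly three points of usable capacity in exchange for tolerating one more brick failure per set.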
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?

90 disks per server is a lot. In particular, it might be out of balance with other characteristics of the machine: number of cores, amount of memory, network or even bus bandwidth. Most people who put that many disks in a server use some sort of RAID (hardware or software) to combine them into a smaller number of physical volumes on top of which filesystems and such can be built. If you can't do that, or don't want to, you're in poorly explored territory.

My suggestion would be to try running with 90 bricks. It might work fine, or you might run into various kinds of contention:

(1) Excessive context switching would indicate not enough CPU.
(2) Excessive page faults would indicate not enough memory.
(3) Maxed-out network ports . . . well, you can figure that one out. ;)

If (2) applies, you might want to try brick multiplexing. This is a new feature in 3.10 that can reduce memory consumption by more than 2x in many cases by putting multiple bricks into a single process (instead of one process per brick). It also drastically reduces the number of ports you'll need, since the single process needs only one port in total instead of one per brick. In terms of CPU usage or performance, the gains are far more modest; work in that area is still ongoing, as is work on multiplexing in general. If you want to help us get it all right, you can enable multiplexing like this:

gluster volume set all cluster.brick-multiplex on

If multiplexing doesn't help for you, speak up and maybe we can make it better, or perhaps come up with other things to try. Good luck!
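For points (1) and (2) above, a minimal Linux-only sketch (my own illustration, assuming /proc is mounted; it is not a Gluster tool) of how you might sample the system-wide context-switch and page-fault rates while the bricks are under load:

```python
import time

def read_counter(path: str, key: str) -> int:
    """Return the value of a 'key value' line in a /proc stats file."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == key:
                return int(fields[1])
    raise KeyError(f"{key} not found in {path}")

def rates(interval: float = 1.0) -> tuple[float, float]:
    """Context switches/sec and page faults/sec averaged over `interval` seconds."""
    ctxt0 = read_counter("/proc/stat", "ctxt")      # cumulative context switches
    flt0 = read_counter("/proc/vmstat", "pgfault")  # cumulative page faults
    time.sleep(interval)
    ctxt1 = read_counter("/proc/stat", "ctxt")
    flt1 = read_counter("/proc/vmstat", "pgfault")
    return (ctxt1 - ctxt0) / interval, (flt1 - flt0) / interval

if __name__ == "__main__":
    cs, pf = rates()
    print(f"context switches/sec: {cs:,.0f}  (persistently high -> point 1, CPU)")
    print(f"page faults/sec:      {pf:,.0f}  (persistently high -> point 2, memory)")
```

Tools like vmstat or sar report the same counters; the point is to watch them with all 90 bricks active and compare against a lighter configuration.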
I wouldn't do that kind of per-server density for anything but cold storage. Putting that many eggs in one basket increases the potential for catastrophic failure.

On February 15, 2017 11:04:16 AM PST, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
>
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?
>
> Thanks,
> Serkan

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.