Next year I'll start with our first production cluster. I'll put many VM images on it (XenServer, Proxmox, ...).

Currently I have 3 SuperMicro 6028R-E1CR12T servers to be used as storage nodes, and I'll add 2 more 10GBase-T cards to each.

The primary goal is MAXIMUM data redundancy and protection. We can live with less-than-top-notch performance, but data protection must be assured at all times and in all conditions.

Some questions:

1) Should I create any RAID on each server? If yes, which level and how many arrays?
   - 4x RAID-5 with 3 disks each? With this I'd create 4 bricks per server.
   - 6x RAID-1 with 2 disks each? With this I'd create 6 bricks per server (but the wasted space is too high).
   - 2x RAID-6 with 6 disks each? With this I'd create 2 bricks per server.
   (A possible volume layout for the RAID-6 option is sketched after this message.)

2) Should I use standard replication (replica 3) or EC (erasure coding / disperse)?

3) Our servers have 2 SSDs in the back. Can I use these for tiering?

4) Our servers have 1 SSD inside (not hot-plug) for the OS. What would happen if this SSD crashes? Is Gluster able to recover the whole failed node?
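As a rough illustration of how the RAID-6 option could map onto a Gluster volume, here is a minimal sketch. It assumes three nodes named node1, node2 and node3, each exposing its two RAID-6 arrays as XFS brick filesystems mounted at /bricks/r6a and /bricks/r6b; all hostnames, device names and paths are hypothetical, not taken from the thread.

    # On each node: format the two RAID-6 virtual disks and mount them as brick filesystems
    mkfs.xfs -i size=512 /dev/sda        # device name for the first RAID-6 array is an assumption
    mkdir -p /bricks/r6a /bricks/r6b
    mount /dev/sda /bricks/r6a           # repeat for the second array on /bricks/r6b

    # From any one node: create a 2 x 3 distributed-replicated volume
    # (bricks 1-3 form the first replica set, bricks 4-6 the second)
    gluster volume create vmstore replica 3 \
        node1:/bricks/r6a/brick node2:/bricks/r6a/brick node3:/bricks/r6a/brick \
        node1:/bricks/r6b/brick node2:/bricks/r6b/brick node3:/bricks/r6b/brick

    # Apply the predefined option group for VM image workloads, then start the volume
    gluster volume set vmstore group virt
    gluster volume start vmstore

In this layout every node holds a full copy of all data, so an entire node (or any two disks behind a single RAID-6 array) can fail without data loss.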
No one?

On 16 Sep 2016 at 18:53, "Gandalf Corvotempesta" <gandalf.corvotempesta at gmail.com> wrote:
> Next year I'll start with our first production cluster.
> I'll put many VM images on it (XenServer, Proxmox, ...).
> [...]
On Fri, Sep 16, 2016 at 10:23 PM, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:

> The primary goal is MAXIMUM data redundancy and protection.
> We can live with less-than-top-notch performance, but data protection
> must be assured at all times and in all conditions.

Are you willing to take VM snapshots at the right times?

> 1) Should I create any RAID on each server? If yes, which level and how many arrays?
> [...]
>
> 2) Should I use standard replication (replica 3) or EC (erasure coding / disperse)?

Please don't use EC for random-write workloads; at the moment it is better suited to workloads that extend files (sequential/append-style writes). Replica 3, or replica 3 with an arbiter, is what I would suggest.

> 3) Our servers have 2 SSDs in the back. Can I use these for tiering?
>
> 4) Our servers have 1 SSD inside (not hot-plug) for the OS. What would
> happen if this SSD crashes? Is Gluster able to recover the whole failed node?

--
Pranith
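For reference, a minimal sketch of the arbiter variant mentioned above, again using the hypothetical hostnames node1..node3 and brick paths from the earlier sketch. With an arbiter, the third brick in each replica set stores only file names and metadata rather than a third full copy of the data.

    # Replica 3 with arbiter: two full data copies plus a metadata-only arbiter brick
    gluster volume create vmstore replica 3 arbiter 1 \
        node1:/bricks/b1/brick node2:/bricks/b1/brick node3:/bricks/b1/arbiter

    gluster volume set vmstore group virt
    gluster volume start vmstore

The trade-off is the one implied in the reply: the arbiter saves the capacity of a third data copy while still protecting against split-brain, but only two nodes actually hold the data, so plain replica 3 gives the stronger redundancy asked for in the original message.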