Hi,
our old setup is not really comparable, but I thought I'd drop a few
lines... We once had a Distributed-Replicate setup with 4 x 3 = 12
disks (10 TB HDDs), simple JBOD, every disk == one brick. It ran
pretty well until one of the disks died. The restore (reset-brick)
took about a month, because the application generates quite high I/O
and therefore slows down the volume and the disks (and with them the
heal).
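For context, the replacement itself was the standard reset-brick flow -
roughly the following sketch (volume name and brick path are placeholders,
not our real layout):

    # take the dead brick out of the volume
    gluster volume reset-brick myvol server1:/bricks/disk3 start
    # physically swap the disk, recreate the filesystem, mount it again,
    # then put the (now empty) brick back and let the self-heal refill it
    gluster volume reset-brick myvol server1:/bricks/disk3 server1:/bricks/disk3 commit force
    # the heal is the part that took about a month under load
    gluster volume heal myvol info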
Next step: take servers with 10 x 10 TB disks each and build a
software RAID 10; RAID array == brick, replicate volume (1 x 3 = 3).
When a disk fails, you only have to rebuild the SW RAID, which takes
about 3-4 days, plus the periodic redundancy checks. That was way
better than the JBOD/reset-brick scenario before, but still not
optimal. Upcoming step: build a Distributed-Replicate volume with
lots of SSDs (maybe again with a RAID underneath).
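Roughly what one of those servers looks like, as a sketch (device names,
volume name and mount points are made up for illustration):

    # software RAID 10 across the ten disks
    mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]
    mkfs.xfs /dev/md0
    mount /dev/md0 /bricks/raid10
    # the whole array is one brick; plain 1 x 3 replica across the servers
    gluster volume create myvol replica 3 \
        srv1:/bricks/raid10/brick srv2:/bricks/raid10/brick srv3:/bricks/raid10/brick
    # after a disk failure only md rebuilds (the 3-4 days above);
    # Gluster itself never notices
    mdadm /dev/md0 --fail /dev/sdX --remove /dev/sdX
    mdadm /dev/md0 --add /dev/sdY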
tl;dr what I wanted to say: we waste a lot of disks either way. It
simply depends on which setup you have and on how you handle the
situation when one of the disks fails - and it will! ;-(
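Just to put rough numbers on that for the RAID 10 setup above: 3 servers
x 10 x 10 TB = 300 TB raw, RAID 10 leaves about 50 TB per server, and
with replica 3 all three bricks hold the same data - so roughly 50 TB
usable out of 300 TB raw, about 1/6 of the capacity.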
regards
Hubert
On Tue, 14 Jan 2020 at 12:36, Markus Kern <gluster at military.de> wrote:
>
> Greetings again!
>
> After reading RedHat documentation regarding optimizing Gluster storage
> another question comes to my mind:
>
> Let's presume that I want to go the distributed dispersed volume way.
> Three nodes with two bricks each.
> According to Red Hat's recommendation, I should use RAID 6 as the
> underlying RAID for my planned workload.
> I am frightened by that "waste" of disks in such a case:
> When each brick is a RAID6, I would "lose" two disks per brick - 12
> lost disks in total.
> In addition to this, a distributed dispersed volume adds another layer
> of lost disk space.
>
> Am I wrong here? Maybe I misunderstood the recommendations?
>
> Markus