Thank you Niels for your input; that definitely makes me more curious... Now let
me tell you a bit more about my intended setup. The first major difference is
that I will not be using XFS but ZFS. The second major difference is that I will
not be using any hardware RAID card but a single HBA (LSI 3008 chip). My
intended ZFS setup would consist of one ZFS pool per node. This pool will have
3 virtual devices (vdevs) of 12 disks each (6 TB per disk), each using RAIDZ-2
(roughly equivalent to RAID 6) for integrity. This gives me a total of 36 disks,
i.e. 216 TB of raw capacity, or about 180 TB usable once the two parity disks
per vdev are subtracted.
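For reference, a pool with that layout could be created roughly like this (a sketch only; the pool name "tank" and the /dev/sdX device names are hypothetical placeholders for the 36 disks behind the HBA):

```shell
# Sketch: one pool per node, made of 3 RAIDZ-2 vdevs of 12 disks each.
# Device names are assumptions; in practice /dev/disk/by-id paths are safer.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
  raidz2 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx \
  raidz2 sdy sdz sdaa sdab sdac sdad sdae sdaf sdag sdah sdai sdaj
```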
I will then create one big 180 TB ZFS dataset (virtual device, file system, or
whatever you want to call it) for my GlusterFS brick. As mentioned, I could
also have two bricks by creating two ZFS datasets of around 90 TB each. But
since everything sits behind the same HBA and the same ZFS pool, there would be
no gain in performance or availability from the ZFS side.
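The two layouts could be sketched like this (the dataset names and quota values are assumptions; either way the datasets share the same pool, vdevs, and HBA):

```shell
# Option 1: one big dataset backing a single brick
zfs create tank/brick1

# Option 2: two datasets backing two bricks, capped at ~90 TB each via quotas
zfs create -o quota=90T tank/brick1
zfs create -o quota=90T tank/brick2
```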
On the other hand, you mention in your mail that having two bricks per node
means having two glusterfsd processes running, which allows me to handle more
clients. Can you tell me more about that? Will I also see any general
performance gain, for example in terms of MB/s throughput? And are there any
disadvantages to running two bricks on the same node, especially in my case?
On Saturday, February 7, 2015 10:24 AM, Niels de Vos <ndevos at
redhat.com> wrote:
On Fri, Feb 06, 2015 at 05:06:38PM +0000, ML mail wrote:
> Hello,
>
> I read in the Gluster Getting Started leaflet
>
(https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf)
> that the max recommended brick size should be 100 TB.
>
> Once my storage server nodes filled up with disks they will have in
> total 192 TB of storage space, does this mean I should create two
> bricks per storage server node?
>
> Note here that these two bricks would still be on the same controller
> so I don't really see the point or advantage of having two 100 TB
> bricks instead of one single brick of 200 TB per node. But maybe
> someone can explain the rationale here?
This is based on the recommendation that RHEL has for maximum size of
XFS filesystems. They might have adjusted the size with more recent
releases, though.
However, having multiple bricks per server can help with other things
too. Multiple processes (one per brick) could handle more clients at the
same time. Depending on how you configure your RAID for the bricks, you
could possibly reduce the performance loss while a RAID set gets rebuilt
after a disk loss.
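To illustrate the two-bricks-per-server idea (the volume name, hostnames, and brick paths below are hypothetical), a distributed volume spanning two bricks on each of two nodes could be created with:

```shell
# Hypothetical: two bricks per server, so each node runs two glusterfsd
# processes, one serving each brick directory.
gluster volume create bigvol \
  server1:/tank/brick1 server1:/tank/brick2 \
  server2:/tank/brick1 server2:/tank/brick2
gluster volume start bigvol
```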
Best practice seems to be to use 12 disks per RAID set; mostly RAID10 or
RAID6 is advised.
HTH,
Niels