I have a similar system with 4 nodes and 2 bricks per node, where
each brick is a single large filesystem on a 24 x 4TB RAID 6 array.
The computers are all on QDR Infiniband, with Gluster running over
IPoIB. A cluster of Infiniband clients accesses the data on the
servers. I can only get about 1.0 to 1.2 GB/s throughput with my
system, though. Can you tell us the peak throughput that you are
getting? I just don't have a sense of what I should expect from
my system. A similar Lustre setup could achieve 2-3 GB/s, which
I attributed to the fact that it didn't use IPoIB, but instead used
RDMA. I'd really like to know if I am wrong here and whether there is
some configuration I can tweak to make things faster.
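
For reference, here's roughly what I've been running to sanity-check
things (the volume name "data" and host "server1" are placeholders):

  # confirm which transport the volume was created with
  gluster volume info data | grep -i transport

  # raw IPoIB bandwidth, client to server (iperf -s running on server1)
  iperf -c server1

  # native mount over RDMA -- my understanding is the volume must have
  # been created with "transport tcp,rdma" for this to work
  mount -t glusterfs -o transport=rdma server1:/data /mnt/data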
Andy
On Dec 7, 2014, at 8:43 PM, Franco Broi <franco.broi at iongeo.com> wrote:
> On Fri, 2014-12-05 at 14:22 +0000, Kiebzak, Jason M. wrote:
>> May I ask why you chose to go with 4 separate bricks per server
>> rather than one large brick per server?
>
> Each brick is a JBOD with 16 disks running RAIDZ2. It just seemed more
> logical to keep the bricks and ZFS filesystems confined to physical
> hardware units, i.e. I could disconnect a brick and move it to another
> server.
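>
> For what it's worth, each pool was built along these lines (device
> names and the ashift setting are just illustrative):
>
>   zpool create -o ashift=12 brick1 raidz2 \
>     sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp sdq
>   zfs create brick1/gvol    # one ZFS filesystem per brick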
>
>>
>> Thanks
>> Jason
>>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org
>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Franco Broi
>> Sent: Thursday, December 04, 2014 7:56 PM
>> To: gluster-users at gluster.org
>> Subject: [Gluster-users] A year's worth of Gluster
>>
>>
>> 1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each
>> server has 10Gbit Ethernet.
>>
>> Each brick is a ZoL RAIDZ2 pool with a single filesystem.
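>>
>> For the record, the volume was created along these lines (host names
>> and brick paths are placeholders; plain distribute, default tcp
>> transport):
>>
>>   # bash brace expansion yields all 16 server:brick pairs
>>   gluster volume create data server{1..4}:/brick{1..4}/gvol
>>   gluster volume start data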