On Mon, Feb 18, 2019 at 11:23 PM Lindolfo Meira <meira at cesup.ufrgs.br>
wrote:
> We're running some benchmarks on a striped glusterfs volume.
>
>
Hi Lindolfo,
We no longer support Stripe, and we are planning to remove it from the build
by glusterfs-6.0 (i.e., the next release). See if you can use 'Shard' for this
use case.
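For example, a rough sketch of what that could look like (this assumes you
recreate gfs0 as a plain distributed volume over the same six bricks; the
shard block size below is just an illustrative value, tune it for your
workload):

  # create a regular distributed volume instead of a striped one
  gluster volume create gfs0 pfs01-ib:/mnt/data pfs02-ib:/mnt/data \
      pfs03-ib:/mnt/data pfs04-ib:/mnt/data pfs05-ib:/mnt/data pfs06-ib:/mnt/data
  # enable sharding so large files are split into fixed-size pieces
  gluster volume set gfs0 features.shard on
  gluster volume set gfs0 features.shard-block-size 64MB
  gluster volume start gfs0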
> We have 6 identical servers acting as bricks. Measured link speed between
> these servers is 3.36GB/s. Link speed between clients of the parallel file
> system and its servers is also 3.36GB/s. So we're expecting this system to
> have a write performance of around 20.16GB/s (6 times 3.36GB/s) minus some
> write overhead.
>
> If we write to the system from a single client, we manage to write at
> around 3.36GB/s. That's okay, because we're limited by the max throughput
> of that client's network adapter. But when we account for that and write
> from 6 or more clients, we can never get past 11GB/s. Is that right? Is
> this really the overhead to be expected? We'd appreciate any inputs.
>
>
Lame question: can the disks themselves deliver more than 11GB/s in aggregate?
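If you haven't measured that yet, a quick per-brick check (assuming /mnt/data
is the brick mount point; this writes a 10GiB test file, remove it afterwards)
would be something like:

  # direct, synced sequential write straight to the brick filesystem
  dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=10240 oflag=direct conv=fdatasync
  rm /mnt/data/ddtest

Multiplying the per-brick number by six gives the theoretical aggregate to
compare against.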
Please also collect the output of `gluster volume profile gfs0 info`; that can
give us more information.
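In case it helps, the usual sequence (volume name taken from your volume info
below) is:

  # enable profiling on the volume
  gluster volume profile gfs0 start
  # ... run the benchmark from the clients ...
  # then collect the per-brick latency and throughput stats
  gluster volume profile gfs0 info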
> Output of gluster volume info:
>
> Volume Name: gfs0
> Type: Stripe
> Volume ID: 2ca3dd45-6209-43ff-a164-7f2694097c64
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 6 = 6
> Transport-type: tcp
> Bricks:
> Brick1: pfs01-ib:/mnt/data
> Brick2: pfs02-ib:/mnt/data
> Brick3: pfs03-ib:/mnt/data
> Brick4: pfs04-ib:/mnt/data
> Brick5: pfs05-ib:/mnt/data
> Brick6: pfs06-ib:/mnt/data
> Options Reconfigured:
> cluster.stripe-block-size: 128KB
> performance.cache-size: 32MB
> performance.write-behind-window-size: 1MB
> performance.strict-write-ordering: off
> performance.strict-o-direct: off
> performance.stat-prefetch: off
> server.event-threads: 4
> client.event-threads: 2
> performance.io-thread-count: 16
> transport.address-family: inet
> nfs.disable: on
> cluster.localtime-logging: enable
>
>
>