Lindsay-
What's your CPU and disk layout for those? You're close to what I'm running,
curious how it compares.
My prod cluster:
3x E5-2609 @ 1.9GHz, 6 cores, 32GB RAM, 2x 10G network, partitions of 2x
Samsung 850 Pro used for ZFS cache, no ZIL
2x 9 x 1TB drives in a straight ZFS stripe
1x 8 x 2TB drives in a straight ZFS stripe
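In case it helps compare notes, the SSD cache setup above amounts to something like this; pool name and partition paths below are hypothetical, substitute your own:

```shell
# Attach SSD partitions as L2ARC read cache for an existing pool.
# No separate log (ZIL/SLOG) device is added.
# Pool name "tank" and device paths are placeholders.
zpool add tank cache /dev/sda4 /dev/sdb4

# Verify the cache vdevs show up under the "cache" section.
zpool status tank
```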
I use lz4 compression on my stores. The underlying storage seems to be capable
of ~400MB/s writes and 1-1.5GB/s reads, although the pair of 850s I'm caching on
probably maxes out around 1.2GB/s.
I found that raising the event thread counts improved heal performance
significantly. I hadn't had the opportunity to convert to shards yet, but
everything is finally up to 3.7.14, so I'm actually starting that today.
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
nfs.drc: off
server.event-threads: 3
client.event-threads: 8
performance.io-thread-count: 32
performance.low-prio-threads: 32
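For reference, those options get applied per volume with `gluster volume set`; a minimal sketch, assuming a volume named "vmstore" (the name is just a placeholder):

```shell
# Apply the tuning options listed above to one volume.
# "vmstore" is a hypothetical volume name - substitute your own.
gluster volume set vmstore performance.quick-read off
gluster volume set vmstore performance.read-ahead off
gluster volume set vmstore performance.io-cache off
gluster volume set vmstore performance.stat-prefetch on
gluster volume set vmstore cluster.eager-lock enable
gluster volume set vmstore network.remote-dio enable
gluster volume set vmstore nfs.drc off
gluster volume set vmstore server.event-threads 3
gluster volume set vmstore client.event-threads 8
gluster volume set vmstore performance.io-thread-count 32
gluster volume set vmstore performance.low-prio-threads 32

# Confirm the settings took effect.
gluster volume info vmstore
```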
I use oVirt, so no gfapi yet. I haven't done any hardcore benchmarking, but I
seem to be able to sustain 200+MB/s sequential writes from VMs, although things
get a bit chunky if there's a lot of random writing going on across the cluster.
It's holding up ~70 VMs without too much trouble.
What are you doing to benchmark your IO?
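For my part, when I do spot-check from inside a VM it's along these lines; the file path, size, and runtime below are placeholders, not a recommendation:

```shell
# Sequential write throughput test from inside a VM.
# Path, size, and runtime are placeholders - adjust to suit.
fio --name=seq-write --rw=write --bs=1M --size=4g --runtime=60 \
    --direct=1 --ioengine=libaio --filename=/tmp/fio-test

# Random write IOPS test (small blocks, deeper queue).
fio --name=rand-write --rw=randwrite --bs=4k --iodepth=32 --size=4g \
    --runtime=60 --direct=1 --ioengine=libaio --filename=/tmp/fio-test

rm -f /tmp/fio-test
```

Using --direct=1 bypasses the guest page cache, so the numbers reflect the storage path rather than guest RAM.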
-Darrell
> On Nov 3, 2016, at 11:06 PM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:
>
> On 4 November 2016 at 03:38, Gambit15 <dougti+gluster at gmail.com> wrote:
>> There are lots of factors involved. Can you describe your setup & use case
>> a little more?
>
>
> Replica 3 Cluster. Individual Bricks are RAIDZ10 (zfs) that can manage
> 450 MB/s write, 1.2GB/s Read.
> - 2 * 1GB Bond, Balance-alb
> - 64 MB Shards
> - KVM VM Hosting via gfapi
>
> Looking at improving the IOPS for the VM's
>
> --
> Lindsay
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users