Lindsay-

What's your CPU and disk layout for those? You're close to what I'm running, curious how it compares.

My prod cluster:
3x E5-2609 @ 1.9G, 6 core, 32G RAM, 2x10G network, parts of 2x Samsung 850 Pro used for zfs cache, no zil
2x 9 x 1G drives in straight zfs stripe
1x 8 x 2G drives in straight zfs stripe

I use lz4 compression on my stores. The underlying storage seems to be capable of ~400MB/s writes and 1-1.5GB/s reads, although the pair of 850s I'm caching on probably max out around 1.2GB/s.

I found that event threads improved heal performance significantly. I hadn't had the opportunity to convert to shards yet, but everything is finally up to 3.7.14, so I'm starting that today, actually.

performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
nfs.drc: off
server.event-threads: 3
client.event-threads: 8
performance.io-thread-count: 32
performance.low-prio-threads: 32

I use oVirt, so no gfapi yet. Haven't done any hard-core benchmarking, but I seem to be able to sustain 200+MB/s sequential writes from VMs, although things get a bit chunky if there's a lot of random writing going on across the cluster. Holding up ~70 VMs without too much trouble.

What are you doing to benchmark your IO?

  -Darrell

> On Nov 3, 2016, at 11:06 PM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:
> 
> On 4 November 2016 at 03:38, Gambit15 <dougti+gluster at gmail.com> wrote:
>> There are lots of factors involved. Can you describe your setup & use case a
>> little more?
> 
> Replica 3 Cluster. Individual Bricks are RAIDZ10 (zfs) that can manage
> 450 MB/s write, 1.2GB/s Read.
> - 2 * 1GB Bond, Balance-alb
> - 64 MB Shards
> - KVM VM Hosting via gfapi
> 
> Looking at improving the IOPS for the VM's
> 
> --
> Lindsay
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
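
[For reference, the tuning options listed above are per-volume settings applied with "gluster volume set". The sketch below assumes a hypothetical volume named gv0; substitute your own volume name. The last two lines show the shard conversion mentioned above; sharding only takes effect for files written after it is enabled, so existing VM images need to be copied back onto the volume to become sharded.]

    gluster volume set gv0 performance.quick-read off
    gluster volume set gv0 performance.read-ahead off
    gluster volume set gv0 performance.io-cache off
    gluster volume set gv0 performance.stat-prefetch on
    gluster volume set gv0 cluster.eager-lock enable
    gluster volume set gv0 network.remote-dio enable
    gluster volume set gv0 nfs.drc off
    gluster volume set gv0 server.event-threads 3
    gluster volume set gv0 client.event-threads 8
    gluster volume set gv0 performance.io-thread-count 32
    gluster volume set gv0 performance.low-prio-threads 32

    # lz4 is a ZFS property on the backing pool, not a Gluster option
    # (pool name "tank" is an assumption here):
    zfs set compression=lz4 tank

    # Shard conversion mentioned above:
    gluster volume set gv0 features.shard on
    gluster volume set gv0 features.shard-block-size 64MB
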
On 5/11/2016 1:30 AM, Darrell Budic wrote:
> What's your CPU and disk layout for those? You're close to what I'm running, curious how it compares.

All my nodes are running RAIDZ10. I have a 5GB SSD slog partition and a 100GB cache partition.

The cache is hardly used; I think you'll find with a VM workload you're only getting around 4% hit rates. You're better off using the SSD for slog, it improves sync writes considerably.

I tried the Samsung 850 Pro and found them pretty bad in practice. Their sustained sequential writes were atrocious in production and their lifetime very limited. Gluster/VM usage results in very high writes; ours all packed it in under a year. We have Kingston HyperX somethings :) they have a TBW of 300TB, which is much better, and their uncompressed write speed is very high.

> What are you doing to benchmark your IO?

bonnie++ on the ZFS pools and CrystalDiskMark in the VM's, plus test real world workloads.

VNA:
- 2 * Xeon E5-2660 2.2 GHz
- 64GB RAM
- 2*1G balance-alb (Gluster Network)
- 1G Public network

  pool: tank
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          mirror-0                                        ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901    ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX81E81AFWJ4    ONLINE       0     0     0
          mirror-1                                        ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV240    ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV027    ONLINE       0     0     0
          mirror-2                                        ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU903    ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WXB1E81EFFT2    ONLINE       0     0     0
          mirror-4                                        ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1UFDFKA      ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7ZKLK52      ONLINE       0     0     0
        logs
          ata-KINGSTON_SHSS37A240G_50026B7267031966-part1 ONLINE       0     0     0
        cache
          ata-KINGSTON_SHSS37A240G_50026B7267031966-part2 ONLINE       0     0     0

VNB, VNG:
- Xeon E5-2620 2GHz
- 64GB RAM
- 2*1G balance-alb (Gluster Network)
- 1G Public network

  pool: tank
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          mirror-0                                        ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N2874892      ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR8C2      ONLINE       0     0     0
          mirror-1                                        ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR3Y0      ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TKR84T      ONLINE       0     0     0
        logs
          ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part1 ONLINE       0     0     0
        cache
          ata-KINGSTON_SHSS37A240G_50026B7266074B8A-part2 ONLINE       0     0     0

> My prod cluster:
> 3x E5-2609 @ 1.9G, 6 core, 32G RAM, 2x10G network, parts of 2x Samsung 850 Pro used for zfs cache, no zil
> 2x 9 x 1G drives in straight zfs stripe
> 1x 8 x 2G drives in straight zfs stripe
> 
> I use lz4 compression on my stores. The underlying storage seems to be capable of ~400MB/s writes and 1-1.5GB/s reads, although the pair of 850s I'm caching on probably max out around 1.2GB/s.

-- 
Lindsay Mathieson
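
[For anyone reproducing the layout above: the slog and cache partitions shown in the pool listings are attached as separate vdev types, and the pool-level bonnie++ run needs a directory on the pool to write into. A rough sketch, assuming the pool is named tank and mounted at /tank; the device names are taken from the VNA pool above.]

    # Attach SSD partitions as slog (sync-write log) and L2ARC cache vdevs
    zpool add tank log   /dev/disk/by-id/ata-KINGSTON_SHSS37A240G_50026B7267031966-part1
    zpool add tank cache /dev/disk/by-id/ata-KINGSTON_SHSS37A240G_50026B7267031966-part2

    # Benchmark the raw pool; the test size should be well above RAM
    # (here 2x the 64GB above) so the ARC doesn't skew the result
    mkdir -p /tank/bench
    bonnie++ -d /tank/bench -s 128g -u root
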