Lindsay Mathieson
2016-Jan-12 03:22 UTC
[Gluster-users] High I/O And Processor Utilization
On 11/01/16 15:37, Krutika Dhananjay wrote:
> Kyle,
>
> Based on the testing we have done from our end, we've found that 512MB
> is a good number that is neither too big nor too small,
> and provides good performance both on the IO side and with respect to
> self-heal.

Hi Krutika,

I experimented a lot with different chunk sizes and didn't find all that
much difference between 4MB and 1GB.

But benchmarks are tricky things - I used CrystalDiskMark inside a VM,
which is probably not the best assessment. And two of the bricks on my
replica 3 are very slow, just test drives, not production. So I guess
that would affect things :)

These are my current settings - what do you use?

Volume Name: datastore1
Type: Replicate
Volume ID: 1261175d-64e1-48b1-9158-c32802cc09f0
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/vmdata/datastore1
Brick2: vng.proxmox.softlog:/vmdata/datastore1
Brick3: vna.proxmox.softlog:/vmdata/datastore1
Options Reconfigured:
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: off
performance.strict-write-ordering: on
performance.write-behind: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
performance.cache-refresh-timeout: 4
performance.io-thread-count: 32
cluster.server-quorum-type: server
cluster.quorum-type: auto
client.event-threads: 4
server.event-threads: 4
cluster.self-heal-window-size: 256
features.shard-block-size: 512MB
features.shard: on
performance.readdir-ahead: off

--
Lindsay Mathieson
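[For readers following the thread: options like those listed above are
applied per volume with the gluster CLI. A minimal sketch, assuming the
same volume name (datastore1) as in the listing:

    # set sharding options on an existing volume
    gluster volume set datastore1 features.shard on
    gluster volume set datastore1 features.shard-block-size 512MB
    gluster volume set datastore1 cluster.self-heal-window-size 256

    # verify the reconfigured options
    gluster volume info datastore1

Note that features.shard-block-size only affects files created after the
change; files already on the volume keep the shard size they were
written with.]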
Pranith Kumar Karampuri
2016-Jan-12 03:32 UTC
[Gluster-users] High I/O And Processor Utilization
On 01/12/2016 08:52 AM, Lindsay Mathieson wrote:
> On 11/01/16 15:37, Krutika Dhananjay wrote:
>> Kyle,
>>
>> Based on the testing we have done from our end, we've found that
>> 512MB is a good number that is neither too big nor too small,
>> and provides good performance both on the IO side and with respect to
>> self-heal.
>
> Hi Krutika,
>
> I experimented a lot with different chunk sizes and didn't find all
> that much difference between 4MB and 1GB.
>
> But benchmarks are tricky things - I used CrystalDiskMark inside a
> VM, which is probably not the best assessment. And two of the bricks
> on my replica 3 are very slow, just test drives, not production. So I
> guess that would affect things :)
>
> These are my current settings - what do you use?
>
> [volume info snipped; see previous message]

Most of these tests are done by Paul Cuzner (CCed).

Pranith
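[An aside on the benchmarking point raised in the thread: fio is a
common command-line alternative to CrystalDiskMark for testing inside a
VM. A minimal sketch; the job parameters here are illustrative, not a
recommendation:

    # mixed 4k random read/write against a 1G test file
    fio --name=randrw-test --filename=/tmp/fio-test.dat --size=1G \
        --rw=randrw --bs=4k --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting

--direct=1 bypasses the guest page cache, which helps isolate the
Gluster volume's behaviour from caching effects inside the VM.]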