Gandalf Corvotempesta
2016-Jul-07 11:52 UTC
[Gluster-users] New cluster - first experience
2016-07-07 13:22 GMT+02:00 Lindsay Mathieson <lindsay.mathieson at gmail.com>:
> Yes. However maildir involves many tens of thousands of small & large files,
> I *think* that Gluster's performance isn't the best with very large numbers
> of files in a dir, but hopefully someone else with more experience can chime
> in on that.

Performance for maildir should not be that important, I think. I could also
create a VM with the mail server; in that case I would put the VM image on
Gluster rather than the plain maildir, but I prefer the first solution.

> That does sound slow - how big was the tar file? what is your network speed
> and setup?

I've used the 4.7-rc6 tarball from here: https://www.kernel.org/
Gigabit network.

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 2a36dc0f-1d9b-469c-82de-9d8d98321b83
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 77.95.175.112:/export/sdb1/brick
Brick2: 77.95.175.113:/export/sdb1/brick
Brick3: 77.95.175.114:/export/sdb1/brick
Options Reconfigured:
features.shard: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

> Not sure. Disperse Replicated vol maybe?

Is disperse based on erasure coding? I've read that erasure coding stores the
encoded file; I don't like storing encoded files, because in case of issues
the encoding could lead to a mess.

> If you're not using a dual or better bonded connection on replica 3 then
> your write speeds will be limited to 1Gb/3 max.

Ok.

> Are your clients on the storage nodes or are they dedicated?

I'm using a dedicated client.
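For reference, a replica-3 volume with the brick layout shown above would
typically be created along these lines; this is only a sketch reconstructed
from the volume info, not necessarily the exact commands used here:

  gluster volume create gv0 replica 3 transport tcp \
      77.95.175.112:/export/sdb1/brick \
      77.95.175.113:/export/sdb1/brick \
      77.95.175.114:/export/sdb1/brick
  gluster volume set gv0 features.shard on
  gluster volume set gv0 nfs.disable on
  gluster volume start gv0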
On 7/07/2016 9:52 PM, Gandalf Corvotempesta wrote:
> Volume Name: gv0
> Type: Replicate
> Volume ID: 2a36dc0f-1d9b-469c-82de-9d8d98321b83
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 77.95.175.112:/export/sdb1/brick
> Brick2: 77.95.175.113:/export/sdb1/brick
> Brick3: 77.95.175.114:/export/sdb1/brick
> Options Reconfigured:
> features.shard: on
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on

The default shard size is 4MB; I'd tend towards a larger one, which improves
write speed. For my VM cluster I use a shard size of 64MB.

NB: to change the shard size you should recreate the volume.

--
Lindsay Mathieson
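For reference, the shard size is controlled by the features.shard-block-size
volume option; on a freshly created volume the 64MB value mentioned above
would be set with something like:

  gluster volume set gv0 features.shard-block-size 64MB

Shards already written at the old size are not rewritten when the option
changes, which is presumably why recreating the volume is recommended.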