jayakrishnan mm
2016-Aug-22 04:39 UTC
[Gluster-users] Disperse volume performance improvement using client-io-threads
Hi,
Glusterfs Ver: 3.7.6

1) Currently I am using a disperse volume and its performance is not satisfactory. So I tried "sudo gluster v set ec-vol performance.client-io-threads on" (as per https://bugzilla.redhat.com/show_bug.cgi?id=1349953) on the volume. But I am getting the error:

Connection failed. Please check if gluster daemon is operational.

volume info:

Volume Name: ec-vol
Type: Disperse
Volume ID: 66d9f9e9-8bbd-46d8-b491-5b3e48219ee1
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: jk:/home/jk/gluster/brick1
Brick2: jk:/home/jk/gluster/brick2
Brick3: jk:/home/jk/gluster/brick3
Brick4: jk:/home/jk/gluster/brick4
Brick5: jk:/home/jk/gluster/brick5
Brick6: jk:/home/jk/gluster/brick6
Options Reconfigured:
performance.readdir-ahead: on

Attached the log. How can I resolve this error?

2) If I replace the erasure-code algorithm in EC with a simpler one, will it give better performance? What is the real bottleneck?

Best Regards
JK

(Attachment: usr-local-etc-glusterfs-glusterd.vol.log-3, 8520 bytes)
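The command sequence above can be sketched as a short shell session (a hedged sketch, not from the original post: the volume name and option come from the message, while the daemon-status checks are assumptions about a typical GlusterFS 3.7 setup — the "Connection failed" error usually means the CLI cannot reach glusterd):

```shell
# First check that the management daemon is actually running;
# the gluster CLI talks to glusterd, not to the bricks directly.
sudo systemctl status glusterd   # or: pgrep -x glusterd

# Enable client-side io-threads on the disperse volume
sudo gluster volume set ec-vol performance.client-io-threads on

# Confirm the option took effect
sudo gluster volume get ec-vol performance.client-io-threads
```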
Atin Mukherjee
2016-Aug-22 05:02 UTC
[Gluster-users] [Gluster-devel] Disperse volume performance improvement using client-io-threads
On Mon, Aug 22, 2016 at 10:09 AM, jayakrishnan mm <jayakrishnan.mm at gmail.com> wrote:

> Hi,
> Glusterfs Ver: 3.7.6
>
> 1) Currently I am using disperse volume and its performance is not
> satisfactory.
> So I tried "sudo gluster v set ec-vol performance.client-io-threads on"
> (as per https://bugzilla.redhat.com/show_bug.cgi?id=1349953)
>
> on the volume. But I am getting
> Connection failed. Please check if gluster daemon is operational. error.

Ideally, this indicates that the CLI is not able to talk to GlusterD because the latter could be down. Are you sure GlusterD was up by the time you executed this command? If yes, please attach the cmd_history and cli logs.

> volume info
>
> Volume Name: ec-vol
> Type: Disperse
> Volume ID: 66d9f9e9-8bbd-46d8-b491-5b3e48219ee1
> Status: Started
> Number of Bricks: 1 x (4 + 2) = 6
> Transport-type: tcp
> Bricks:
> Brick1: jk:/home/jk/gluster/brick1
> Brick2: jk:/home/jk/gluster/brick2
> Brick3: jk:/home/jk/gluster/brick3
> Brick4: jk:/home/jk/gluster/brick4
> Brick5: jk:/home/jk/gluster/brick5
> Brick6: jk:/home/jk/gluster/brick6
> Options Reconfigured:
> performance.readdir-ahead: on
>
> Attached the log.
>
> How to resolve this error?
>
> 2) If I replace the Erasure code algorithm in the EC with a simpler one,
> will it give a better performance? What is the real bottleneck?
>
> Best Regards
> JK
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

--
--Atin
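The diagnosis steps Atin suggests can be sketched as follows (a hedged sketch: the log paths assume a default package install under /var/log/glusterfs — a source build, as the attached log's filename suggests, may instead log under /usr/local/var/log/glusterfs):

```shell
# 1) Verify glusterd was up when the command was run
pgrep -x glusterd || echo "glusterd is not running"

# 2) Collect the logs Atin asks for: the CLI command history
#    and the CLI log itself (adjust the prefix for source builds)
tail -n 50 /var/log/glusterfs/cmd_history.log
tail -n 50 /var/log/glusterfs/cli.log
```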