Geoffrey Letessier
2015-Aug-13 12:09 UTC
[Gluster-users] Lower IO performance with 3.7.x than with 3.5.x
Hi,

Previously we used GlusterFS v3.5.3 and we could achieve around 1.1 GB/s for read/write on a plain distributed volume. Now, since I upgraded GlusterFS to 3.7.3, I can only achieve around 700 MB/s (max).

Here is some additional information concerning my volume:

# gluster volume info vol_workdir_amd

Volume Name: vol_workdir_amd
Type: Distribute
Volume ID: 42d8f9fa-3b24-414f-a275-dadcfdd8d3db
Status: Started
Number of Bricks: 4
Transport-type: tcp,rdma
Bricks:
Brick1: ib-storage1:/export/brick_workdir/brick1/data
Brick2: ib-storage3:/export/brick_workdir/brick1/data
Brick3: ib-storage1:/export/brick_workdir/brick2/data
Brick4: ib-storage3:/export/brick_workdir/brick2/data
Options Reconfigured:
client.event-threads: 4
performance.readdir-ahead: on
auth.allow: 10.0.*
cluster.min-free-disk: 5%
performance.cache-size: 1GB
performance.io-thread-count: 32
diagnostics.brick-log-level: CRITICAL
nfs.disable: on
performance.read-ahead: on

and the status is OK for all bricks.

Thanks in advance,
Geoffrey
------------------------------------------------------
Geoffrey Letessier
IT manager & systems engineer
UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at ibpc.fr
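For context, throughput figures like the 1.1 GB/s and 700 MB/s above are commonly measured with a sequential `dd` run on the client's FUSE mount. The sketch below shows one way to do that; the mount point `/mnt/vol_workdir_amd` and the file size are assumptions, not details taken from the thread.

```shell
#!/bin/sh
# Minimal sequential-write throughput check on a GlusterFS FUSE mount.
# MOUNT is a hypothetical client mount point -- adjust for your setup.
MOUNT=/mnt/vol_workdir_amd

# Convert a byte count and elapsed seconds into whole MB/s.
throughput_mbs() {
    awk -v bytes="$1" -v secs="$2" 'BEGIN { printf "%d\n", bytes / secs / 1000000 }'
}

if [ -d "$MOUNT" ]; then
    # Write 4 GiB and fsync at the end (conv=fsync) so the page cache
    # does not inflate the number, then time the run.
    start=$(date +%s)
    dd if=/dev/zero of="$MOUNT/bench.tmp" bs=1M count=4096 conv=fsync 2>/dev/null
    end=$(date +%s)
    elapsed=$((end - start)); [ "$elapsed" -gt 0 ] || elapsed=1
    size=$((4096 * 1024 * 1024))
    echo "write: $(throughput_mbs "$size" "$elapsed") MB/s"
    rm -f "$MOUNT/bench.tmp"
fi

# Example of the helper alone: 1.1 GB written in 1 s -> 1100 MB/s
throughput_mbs 1100000000 1
```

Running the same command on both the 3.5.3 and 3.7.3 setups (same bricks, same client) gives a like-for-like comparison of the regression.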
Krutika Dhananjay
2015-Aug-14 07:30 UTC
[Gluster-users] Lower IO performance with 3.7.x than with 3.5.x
CC'ing distribute component owners.

-Krutika

----- Original Message -----
> From: "Geoffrey Letessier" <geoffrey.letessier at cnrs.fr>
> To: gluster-users at gluster.org
> Sent: Thursday, August 13, 2015 5:39:47 PM
> Subject: [Gluster-users] Lower IO performance with 3.7.x than with 3.5.x
>
> Hi,
>
> Previously we used GlusterFS v3.5.3 and we could achieve around 1.1 GB/s for
> read/write on a plain distributed volume. Now, since I upgraded GlusterFS to
> 3.7.3, I can only achieve around 700 MB/s (max).
>
> Here is some additional information concerning my volume:
>
> # gluster volume info vol_workdir_amd
>
> Volume Name: vol_workdir_amd
> Type: Distribute
> Volume ID: 42d8f9fa-3b24-414f-a275-dadcfdd8d3db
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: ib-storage1:/export/brick_workdir/brick1/data
> Brick2: ib-storage3:/export/brick_workdir/brick1/data
> Brick3: ib-storage1:/export/brick_workdir/brick2/data
> Brick4: ib-storage3:/export/brick_workdir/brick2/data
> Options Reconfigured:
> client.event-threads: 4
> performance.readdir-ahead: on
> auth.allow: 10.0.*
> cluster.min-free-disk: 5%
> performance.cache-size: 1GB
> performance.io-thread-count: 32
> diagnostics.brick-log-level: CRITICAL
> nfs.disable: on
> performance.read-ahead: on
>
> and the status is OK for all bricks.
>
> Thanks in advance,
> Geoffrey
> ------------------------------------------------------
> Geoffrey Letessier
> IT manager & systems engineer
> UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
> Institut de Biologie Physico-Chimique
> 13, rue Pierre et Marie Curie - 75005 Paris
> Tel: 01 58 41 50 93 - eMail: geoffrey.letessier at ibpc.fr
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users