Hello,

I have GlusterFS 3.8.8 and I am using IB RDMA. I have noticed that
during writes or reads the throughput doesn't seem consistent for the
same workload (fio command). Sometimes I get higher throughput;
sometimes it quickly drops to half and stays there.

I cannot predict consistent behavior each time I run the same
workload, and the time to complete varies. Is there a log file or
anything similar I can look into to understand this behavior? I am a
single client (FUSE) running 32 threads at a 1 MB block size, writing
or reading 200 GB of files randomly with direct I/O.

--
Deepak
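For reference, the described workload corresponds roughly to the fio
invocation below; the mount path, job name, and per-job size are
assumptions, since the exact command was not posted.

    # Hypothetical fio job matching the described workload: 32 jobs,
    # 1 MB blocks, random writes, direct I/O, ~200 GB in total
    # (6400 MB per job). Mount path and job name are placeholders.
    fio --name=gluster-test \
        --directory=/mnt/glusterfs \
        --rw=randwrite \
        --bs=1m \
        --direct=1 \
        --numjobs=32 \
        --size=6400m \
        --group_reporting

    # For the read case, rerun with --rw=randread over the same files.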
----- Original Message -----
> From: "Deepak Naidu" <dnaidu at nvidia.com>
> To: gluster-users at gluster.org
> Sent: Wednesday, February 22, 2017 3:36:22 PM
> Subject: [Gluster-users] GlusterFS throughput inconsistent
>
> I have GlusterFS 3.8.8 and I am using IB RDMA. I have noticed that
> during writes or reads the throughput doesn't seem consistent for the
> same workload (fio command). Sometimes I get higher throughput;
> sometimes it quickly drops to half and stays there.

This is strange. If it were me, I would create a volume with the TCP
transport instead of RDMA and see if you can reproduce the problem
there.

> I cannot predict consistent behavior each time I run the same
> workload, and the time to complete varies. [...] I am a single
> client (FUSE) running 32 threads at a 1 MB block size, writing or
> reading 200 GB of files randomly with direct I/O.

Seeing up to +/- 5-10% variance between runs is normal (closer to 5%;
10% is a little high). I haven't seen a 50% drop like you mention
above, which is why I wonder whether this is reproducible on a TCP
transport volume. Can you provide some fio output from a few runs for
us to have a look at?

-b
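For reference, a TCP-transport test volume can be created roughly as
follows; the volume name, hostnames, and brick paths are placeholders,
not taken from this thread.

    # Hypothetical two-brick test volume using the TCP transport
    # instead of RDMA; names and paths are placeholders.
    gluster volume create tcpvol transport tcp \
        server1:/bricks/brick1/tcpvol server2:/bricks/brick2/tcpvol
    gluster volume start tcpvol

    # Mount it with the FUSE client and rerun the same fio workload.
    mount -t glusterfs server1:/tcpvol /mnt/tcpvol

If throughput is steady over TCP but erratic over RDMA, that points at
the RDMA transport or the IB fabric rather than the volume layout.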
The only log available for a client process is located at
/var/log/glusterfs/<mnt_path>. You can also check whether there is
anything in the brick logs.

Regards,
Rafi KC

On 02/23/2017 02:06 AM, Deepak Naidu wrote:
> Is there a log file or anything similar I can look into to
> understand this behavior?
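As a concrete illustration, assuming a mount at /mnt/glusterfs (a
placeholder), the logs could be checked like this; the slash-to-dash
log-file naming is the usual FUSE-client convention, so verify the
exact name on your system.

    # Client-side FUSE log: the file name is derived from the mount
    # path, with slashes replaced by dashes (path is a placeholder).
    less /var/log/glusterfs/mnt-glusterfs.log

    # On each server, brick logs live under /var/log/glusterfs/bricks/.
    # Gluster log lines carry a severity letter (" W " / " E ");
    # scan for warnings and errors:
    grep -E " [WE] " /var/log/glusterfs/bricks/*.log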