Hi,

I have a small (5 node) compute cluster running an MPI job that needs to read from a few 20 GB files. These files are stored in a gluster volume composed of bricks from four storage nodes. I originally made the volume with a replica count of 2, but in an attempt to improve read performance, I recreated it with a replica count of 4.

However, reading turned out to be even slower! I checked disk access with iotop, and for one file, only two of the bricks are being read from. For another file, only _one_ brick is accessed! The output of "gluster volume top read" shows the same thing.

What might be the problem? There are 40 threads across the cluster trying to read at the same time; shouldn't reads be automatically load-balanced across the bricks?

Thanks,
Ray
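P.S. In case it helps, these are roughly the commands I am using to watch per-brick reads. "gv0" is just a placeholder for my volume name:

    # per-brick read counts as reported by gluster
    gluster volume top gv0 read list-cnt 10

    # live per-process disk I/O on each storage node
    iotop -o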
Pranith Kumar Karampuri
2014-Aug-11 04:53 UTC
[Gluster-users] Reading not distributed across bricks
Hi Ray,

Reads are not load-balanced across the replicas. For each file, the read is served from the brick that responds fastest at that moment, so the other copies of that file are not read from.

Pranith
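P.S. If you want reads from different client processes to be spread over the replicas rather than concentrating on one brick, one knob you could experiment with is AFR's read-hash-mode. This is only a suggestion to try; "gv0" is a placeholder volume name, and the available values depend on your GlusterFS version:

    # cluster.read-hash-mode controls how AFR picks which replica serves reads:
    #   1: hash on the file's GFID -> every client reads a given file from the same brick
    #   2: hash on the file's GFID and the client PID -> different processes can read
    #      the same file from different replicas
    gluster volume set gv0 cluster.read-hash-mode 2

    # reconfigured options show up in the volume info output
    gluster volume info gv0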