Mohammed Rafi K C
2016-Sep-03 07:07 UTC
[Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD
Hi Benjamin,

Can you tell us more about your workload: the file sizes, the sizes of both the hot and cold storage, how the files were created (before or after attaching the tier), how long you have been using the tiered volume, etc.

Regards,
Rafi KC

On 09/03/2016 09:16 AM, Benjamin Kingston wrote:
> Hello all,
>
> I've discovered an issue in my lab that went unnoticed until recently, or just came about with the latest CentOS release.
>
> When the SSD hot tier is enabled, reads from the volume run at 2MB/s; after detaching AND committing the detach, reading the same file to /dev/null runs at 150MB/s.
>
> If I copy the file to the hot bricks directly, the write is 150MB/s and the read is 500MB/s on the first pass, then 4GB/s on subsequent reads (filesystem RAM caching) to /dev/null.
>
> Just enabling tiered storage drops performance to ~10-20 IOPS and 2-10MB/s, even for a volume mounted on the local node.
>
> I don't see any major log issues. I detached and ran a fix-layout, but the problem persists when re-enabling the tier.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
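The read figures quoted above come from streaming a file to /dev/null with dd. A minimal sketch of that measurement, assuming a hypothetical file path — on a real setup you would point TESTFILE at a file on the Gluster mount (e.g. somewhere under /mnt/glustervol) and drop the page cache first so you measure the volume, not RAM:

```shell
# Hedged sketch of the read-throughput test described in the thread.
# TESTFILE here is a temporary local file so the sketch is self-contained;
# substitute a file on the Gluster mount to reproduce the reported numbers.
TESTFILE="$(mktemp)"

# Create a 64MB file of random data (the thread used a 10GB file from
# /dev/urandom; 64MB keeps the sketch quick).
dd if=/dev/urandom of="$TESTFILE" bs=1M count=64 status=none

# To avoid measuring the filesystem RAM cache on repeat reads (the 4GB/s
# figure above), drop caches first (requires root):
#   sync; echo 3 > /proc/sys/vm/drop_caches

# Sequential read to /dev/null; dd prints the throughput on stderr.
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

With the hot tier attached this read reportedly collapses to ~2MB/s; after a detach and commit it returns to 150MB/s.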
Benjamin Kingston
2016-Sep-03 16:04 UTC
[Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD
Hello Rafi,

Thanks for the reply; please see my answers below.

On Sat, Sep 3, 2016 at 12:07 AM, Mohammed Rafi K C <rkavunga at redhat.com> wrote:
> Hi Benjamin,
>
> Can you tell us more about your work-load like the file size

Files range from a 10GB test file generated from /dev/urandom, through many 100MB folder-separated files, down to smaller images and text files. This is a lab, so there is typically no load; while testing this issue the only file being accessed was the 10GB test file.

> , size of both hot and cold storage

Hot storage is a 315GB portion of a 512GB SSD. Cold storage is a replica made up of 3 bricks on two nodes, totaling 17TB.

> how the files are created (after attaching the tier or before attaching
> the tier)

I've seen the performance issue both with files created before the hot tier was attached (the 10GB test file) and with files copied to the volume after attachment (the 100MB files).

> , how long have you been using the tier volume, etc.

At the moment I am not using the hot tier, after discovering the performance loss. I originally turned it on after learning of the feature a month or so ago. I may actually have had this issue since then: I had been troubleshooting a network throughput issue that was contributing to 4MB/s writes to the volume. I recently resolved that issue, but the 4MB/s writes were still observed until I removed the hot tier. Unfortunate timing on both issues.

> Regards
>
> Rafi KC
>
> On 09/03/2016 09:16 AM, Benjamin Kingston wrote:
> [...]
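For reference, the detach-and-commit and fix-layout steps mentioned in the thread correspond to the following gluster CLI sequence. This is a sketch under assumptions: `tiervol` is a hypothetical volume name, `node1:/ssd/brick1` and `node2:/ssd/brick1` are hypothetical SSD brick paths, and exact syntax varies slightly across GlusterFS 3.7/3.8 releases — check `gluster volume tier help` on your version.

```shell
# Detach the hot tier: start migrating data off the SSD bricks,
# watch progress, then make the detach permanent with a commit.
gluster volume tier tiervol detach start
gluster volume tier tiervol detach status
gluster volume tier tiervol detach commit

# Rebuild the DHT layout across the remaining (cold) bricks.
gluster volume rebalance tiervol fix-layout start

# Re-attach the SSD bricks as a replicated hot tier later, if desired.
gluster volume tier tiervol attach replica 2 node1:/ssd/brick1 node2:/ssd/brick1
```

Per the report above, read throughput recovers to 150MB/s only after both the detach is committed and the tier remains unattached; re-attaching brings the slowdown back.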