Mohammed Rafi K C
2016-Sep-03 18:41 UTC
[Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD
On 09/03/2016 09:34 PM, Benjamin Kingston wrote:
> Hello Rafi,
>
> Thanks for the reply, please see my answers below.
>
> On Sat, Sep 3, 2016 at 12:07 AM, Mohammed Rafi K C
> <rkavunga at redhat.com> wrote:
>
>     Hi Benjamin,
>
>     Can you tell us more about your work-load, like the file size
>
> Files range from a 10GB test file generated from /dev/urandom, through
> many 100MB folder-separated files, down to smaller images and text files.
> This is a lab, so typically there is no load; in the case of my testing
> this issue the only file being accessed was the 10GB test file.
>
>     , size of both hot and cold storage
>
> Hot storage is a 315GB portion of a 512GB SSD.
> Cold storage is a replica made up of 3 bricks on two nodes, totaling 17TB.
>
>     how the files are created (after attaching the tier or before
>     attaching the tier)
>
> I've experienced performance issues both with files created before the
> hot tier was attached (the 10GB test file) and with files copied to the
> volume after attachment (the 100MB files).

Files created before attaching the hot tier will stay on the cold brick
until they get heated and migrated completely. During this time interval we
won't get the benefit of hot storage, since the files are served from the
cold brick.

For small files (on the order of KBs), we are seeing some performance
issues, mostly with EC (disperse) volumes.

Follow-up questions:
What is the volume type, and which version of glusterfs?
Does a reread give equal performance when it hits the server?
Have you done any renames on those files?

Regards
Rafi KC

>     , how long have you been using the tier volume, etc.
>
> At the moment I am not using the hot tier, after discovering the
> performance loss. I originally turned it on after learning of the feature
> a month or so ago. I feel like I may have actually had this issue since
> then, because I have been troubleshooting a network throughput issue that
> was contributing to a 4MB/s write to the volume. I recently resolved that
> issue, and the 4MB/s write was still observed until I removed the hot
> tier.
>
> Unfortunate timing on both issues.
>
>     Regards
>     Rafi KC
>
>     On 09/03/2016 09:16 AM, Benjamin Kingston wrote:
>>     Hello all,
>>
>>     I've discovered an issue in my lab that went unnoticed until
>>     recently, or just came about with the latest CentOS release.
>>
>>     When the SSD hot tier is enabled, reads from the volume run at
>>     2MB/s; after detaching AND committing, reads of the same file run
>>     at 150MB/s to /dev/null.
>>
>>     If I copy the file to the hot bricks directly, the write is 150MB/s
>>     and the read is 500MB/s on the first pass, then 4GB/s on subsequent
>>     reads (filesystem RAM caching) to /dev/null.
>>
>>     Just enabling tier storage takes the performance to ~10-20 IOPS and
>>     2-10MB/s, even for a volume mounted on the local node.
>>
>>     I don't see any major log issues, and I detached and did a
>>     fix-layout, but the problem persists when re-enabling the tier.
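The details asked for above can be gathered with the stock gluster CLI. This
is only a sketch assuming the 3.8-era command syntax, with <VOLNAME> standing
in for the actual volume name:

    # gluster --version
    # gluster volume info <VOLNAME>          # reports the volume type and lists the hot/cold tier bricks
    # gluster volume tier <VOLNAME> status   # promotion/demotion counters for the tier daemon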
Benjamin Kingston
2016-Sep-04 16:40 UTC
[Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD
Thanks for the help, see below:

On Sat, Sep 3, 2016 at 11:41 AM, Mohammed Rafi K C <rkavunga at redhat.com>
wrote:

> Files created before attaching the hot tier will stay on the cold brick
> until they get heated and migrated completely. During this time interval
> we won't get the benefit of hot storage, since the files are served from
> the cold brick.

The issue I'm having is that when the hot tier is added, reads from the
volume drop from 150MB/s (no tier) to 2-10MB/s (tier added). I experienced
this for a while even after the hot brick reached 90% capacity, so many
files had been promoted. I detached the hot brick and verified there weren't
any files left over, and the performance was still affected. Only after I
committed the detach did throughput return to 150MB/s. Reattaching the hot
tier immediately caused the issue to return until the tier was again
detach-committed.

> For small files (on the order of KBs), we are seeing some performance
> issues, mostly with EC (disperse) volumes.
>
> Follow-up questions:
> What is the volume type, and which version of glusterfs?

The volume is a 2x replica on btrfs bricks that are LUKS encrypted, with a
2048K GPT offset for the first partition. Gluster 3.8.3, built Aug 22,
running on CentOS.

> Does a reread give equal performance when it hits the server?

Reading any file with the hot tier attached results in 2-10MB/s (volume
mounted on the node via localhost6:/volume); reading it again any number of
times does not improve performance.

> Have you done any renames on those files?

Some files have been renamed, however the 10GB test file has not; it still
has its original name and is the original copy.
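For reference, the behaviour described above can be reproduced roughly as
follows. This is only a sketch: the mount point, brick paths, and replica
count for the hot tier are placeholders, not the reporter's actual layout.

    # mount -t glusterfs localhost6:/<VOLNAME> /mnt/<VOLNAME>
    # echo 3 > /proc/sys/vm/drop_caches                    # drop the page cache so dd measures the volume, not RAM
    # dd if=/mnt/<VOLNAME>/testfile of=/dev/null bs=1M     # baseline read, no tier (~150MB/s reported)
    # gluster volume tier <VOLNAME> attach replica 2 node1:/ssd/brick node2:/ssd/brick
    # echo 3 > /proc/sys/vm/drop_caches
    # dd if=/mnt/<VOLNAME>/testfile of=/dev/null bs=1M     # read with tier attached (2-10MB/s reported)
    # gluster volume tier <VOLNAME> detach start
    # gluster volume tier <VOLNAME> detach status          # repeat until the detach shows completed
    # gluster volume tier <VOLNAME> detach commit
    # echo 3 > /proc/sys/vm/drop_caches
    # dd if=/mnt/<VOLNAME>/testfile of=/dev/null bs=1M     # read after detach-commit (back to ~150MB/s reported)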