Pranith Kumar Karampuri
2015-Dec-21 13:25 UTC
[Gluster-users] gluster volume traffic shaping / throttling
On 12/21/2015 02:26 PM, Mateusz Zajakala wrote:
> Pranith,
>
> That sounds good. An option to configure the relative priorities of READ
> and WRITE operations would probably be sufficient in my case (as long as
> it really affects the operations' throughput).
>
> But what if you wanted to differentiate write/read priorities for
> different clients? Maybe it's worth thinking about some way to enable
> this?

Oh, these options will be on the server side, so all clients will see the
same priorities. That is probably the reason why the original developer
didn't allow these priorities to be changed. I am not sure how we can make
this configurable in a generic way. Any thoughts?

Pranith

> Thanks!
> Mat
>
> On Mon, Dec 21, 2015 at 5:00 AM, Pranith Kumar Karampuri
> <pkarampu at redhat.com <mailto:pkarampu at redhat.com>> wrote:
>
> On 12/21/2015 02:28 AM, Mateusz Zajakala wrote:
>> Hi,
>>
>> I have a question about ways to control/shape I/O traffic to Gluster
>> volumes.
>>
>> I have the following setup: Gluster 3.7.6, a distributed/disperse volume
>> (20 HDD bricks, disperse 5, redundancy 1), mount points via GlusterFS
>> FUSE.
>>
>> I have multiple read sessions (hundreds of clients sequentially reading
>> large >1 GB files) and multiple write sessions writing such files. While
>> I care that read sessions proceed at the maximum speed I can get from my
>> HDDs, I can live with write (archiving) sessions giving way and
>> proceeding more slowly.
>>
>> Is there a way to throttle write sessions? Ideally I'd like them to have
>> lower priority than read sessions, but also not be limited when there
>> are no read sessions at the moment. It seems like I need an "ionice"
>> counterpart for GlusterFS.
>>
>> Does it exist? I was wondering if this could be achieved by tweaking
>> "ionice" values on the client side for writes and reads, but since
>> clients only use the GlusterFS FUSE mount point, I don't think that
>> would work... ?
>
> hi Mat,
> I just checked the io-threads code (this feature decides the priority of
> different operations, and multiple threads execute the operations). Both
> READ and WRITE are at the same priority. Maybe we can add an option to
> configure this? Would you like that? I will discuss this further on
> gluster-devel to see what others have to say before coming to a
> conclusion about how to go about this. Your feedback is much welcome.
>
> Pranith
>
>> Thanks
>> Mat
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>> http://www.gluster.org/mailman/listinfo/gluster-users
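For context on the io-threads discussion above, here is a minimal,
self-contained C sketch of the kind of fop-to-priority mapping and
configurable WRITE priority being proposed. It is illustrative only and not
the actual GlusterFS io-threads source; the type names, the priority
classes, and the "write_pri" tunable are assumptions made for the example.

/* Illustrative sketch only -- not the actual GlusterFS io-threads source.
 * It shows the idea under discussion: each file operation (fop) maps to a
 * priority class, READ and WRITE currently share one class, and a
 * hypothetical server-side tunable would let WRITE be demoted below READ. */

#include <stdio.h>

typedef enum {
    PRI_HI = 0,   /* metadata ops such as LOOKUP, STAT */
    PRI_NORMAL,   /* data ops: READ and WRITE share this class today */
    PRI_LO,
    PRI_LEAST
} fop_pri_t;

typedef enum { FOP_LOOKUP, FOP_STAT, FOP_READ, FOP_WRITE } fop_t;

/* Hypothetical tunable; the default keeps today's behaviour (WRITE == READ). */
static fop_pri_t write_pri = PRI_NORMAL;

static fop_pri_t
get_fop_priority(fop_t fop)
{
    switch (fop) {
    case FOP_LOOKUP:
    case FOP_STAT:
        return PRI_HI;
    case FOP_READ:
        return PRI_NORMAL;
    case FOP_WRITE:
        return write_pri;   /* configurable instead of hard-coded */
    default:
        return PRI_NORMAL;
    }
}

int
main(void)
{
    write_pri = PRI_LO;   /* e.g. an admin lowers write priority on the server */
    printf("READ  -> priority class %d\n", get_fop_priority(FOP_READ));
    printf("WRITE -> priority class %d\n", get_fop_priority(FOP_WRITE));
    return 0;
}

With write_pri set to PRI_LO, worker threads would drain the normal-priority
(READ) queue ahead of WRITEs, while writes could still proceed at full speed
whenever no reads are queued - roughly the behaviour asked for in the
original question.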
Vijay Bellur
2015-Dec-22 04:18 UTC
[Gluster-users] gluster volume traffic shaping / throttling
On 12/21/2015 08:25 AM, Pranith Kumar Karampuri wrote:
> On 12/21/2015 02:26 PM, Mateusz Zajakala wrote:
>> Pranith,
>>
>> That sounds good. An option to configure the relative priorities of READ
>> and WRITE operations would probably be sufficient in my case (as long as
>> it really affects the operations' throughput).
>>
>> But what if you wanted to differentiate write/read priorities for
>> different clients? Maybe it's worth thinking about some way to enable
>> this?
>
> Oh, these options will be on the server side, so all clients will see the
> same priorities. That is probably the reason why the original developer
> didn't allow these priorities to be changed. I am not sure how we can make
> this configurable in a generic way. Any thoughts?

All clients observing the same set of priorities does address the initial
problem discussed here. The second one - configurable priorities based on a
client's identity - is something we intend to address as part of quality of
service in Gluster.next releases.

Mateusz - are there other use cases related to traffic shaping/throttling
that would interest you?

Thanks,
Vijay