It seems Andreas Klauer's fairnat has experimental support for using HTB's
MPU and overhead options. From fairnat.config:

    # Use MPU for HTB. From the LARTC Howto on MPU:
    # "A zero-sized packet does not use zero bandwidth. For ethernet, no packet
    # uses less than 64 bytes. The Minimum Packet Unit determines the minimal
    # token usage for a packet."
    HTB_MPU=0
    # HTB_MPU=64   # Ethernet
    # HTB_MPU=106  # According to Andy Furniss, this value is suited for DSL users

I imagine that 106 value is a reference to this post:

http://mailman.ds9a.nl/pipermail/lartc/2004q2/012369.html

The patch seems to be available here:

http://luxik.cdi.cz/~devik/qos/htb/v3/htb_tc_overhead.diff

In any case, I applied the patch to `tc` and recompiled. The resulting binary
let me set 'mpu' when using HTB, so I set it to 106 as suggested above. As
far as I can tell, nothing changed. Should there be some notable outcome from
setting this parameter, as I suspect there should be, or should I be using
some other value?

Was there an HTB component to this patch as well? I patched `tc`, but not HTB
in my 2.6.6 kernel. I wasn't able to locate a kernel patch for this; is there
one?

Here's the actual configuration:

    tc qdisc add dev eth0 root handle 1: htb default 90
    tc class add dev eth0 parent 1: classid 1:1 htb rate 160kbit ceil \
        160kbit mpu 106
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit ceil \
        64kbit mpu 106 prio 0
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 96kbit ceil \
        160kbit mpu 106 prio 1
    tc class add dev eth0 parent 1:1 classid 1:50 htb rate 8kbit ceil \
        160kbit mpu 106 prio 1
    tc class add dev eth0 parent 1:1 classid 1:90 htb rate 8kbit ceil \
        160kbit mpu 106 prio 1
    tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 20
    tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 20
    tc qdisc add dev eth0 parent 1:50 handle 50: sfq perturb 20
    tc qdisc add dev eth0 parent 1:90 handle 90: sfq perturb 20

My connection is an ADSL line.
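As a rough sketch of what the mpu and overhead options are meant to do to
HTB's accounted packet size (the function name is illustrative, not taken
from the patch or kernel source):

```c
#include <assert.h>

/* Rough sketch of mpu/overhead accounting: charge a fixed per-packet
 * overhead, and never charge less than mpu bytes for any packet.
 * Illustrative only, not the actual HTB/tc implementation. */
unsigned accounted_size(unsigned len, unsigned mpu, unsigned overhead)
{
    len += overhead;               /* fixed framing cost per packet */
    return len < mpu ? mpu : len;  /* floor at the minimum packet unit */
}
```

With mpu 106, a 40-byte ACK would be charged as 106 bytes, while a full-size
1500-byte packet is unaffected.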
When the link is saturated with a large quantity of small UDP packets (~100
bytes each), I find the modem begins to queue locally when I use a rate of
190kbit for my parent class, so I was forced to drop to 160kbit. That seems
symptomatic of HTB not knowing the true cost of sending a packet across the
ADSL link, which matters most when there are many small packets.

It's my suspicion that the MPU and overhead options for HTB would help
resolve this and let me resume using 190kbit instead of 160kbit for the
outermost parent class.

Is my suspicion correct?

Thanks.

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/
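To put rough numbers on that suspicion: if the ADSL link carries traffic as
AAL5 over ATM (48 payload bytes per 53-byte cell), a ~100-byte packet costs
noticeably more on the wire than its IP length. The 8-byte AAL5 trailer
below is an assumption, and real encapsulation overhead varies with
PPPoA/PPPoE; the function name is illustrative:

```c
#include <assert.h>

/* Approximate on-the-wire cost of a packet carried over ATM/AAL5.
 * Assumes an 8-byte AAL5 trailer and no further encapsulation; real
 * setups (PPPoA, PPPoE, RFC 2684 bridging) add more overhead. */
unsigned atm_wire_bytes(unsigned ip_len)
{
    unsigned payload = ip_len + 8;        /* AAL5 trailer (assumed)  */
    unsigned cells = (payload + 47) / 48; /* round up to whole cells */
    return cells * 53;                    /* each cell is 53 bytes   */
}
```

Under these assumptions a 100-byte packet occupies 3 cells, i.e. 159 bytes
on the wire, roughly 60% more than the IP length HTB sees. That would be
consistent with having to shape at 160kbit where the nominal line rate
suggests 190kbit.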
> I imagine that 106 value is a reference to this post:
>
> http://mailman.ds9a.nl/pipermail/lartc/2004q2/012369.html
>
> [...]
>
> It's my suspicion that the MPU and overhead options for HTB would assist in
> resolving this and enable me to resume using 190kbit instead of 160kbit for
> the outermost parent class.
>
> Is my suspicion correct?

Read the follow-ups to that post as well. Basically it's only an
approximation. The "MPU" is basically pointing out that your ADSL stream is
encapsulated in an ATM stream. ATM uses fixed size 64 byte packets. You need
at least 2 of these, hence the 108 figure for MPU. Now you also need to
estimate the overhead, which is going to be the size of the header on those
ATM packets.

However, that still leaves the "wasted space" at the end of small packets
(e.g. for those that take up 2.5 ATM cells, how much does the 0.5 take up?).

I suggested a crude way to tweak that patch (it's easy to see how it works if
you look at the relevant lines in the orig file). However, I don't even have
a working QoS system, so I haven't even compiled it! Look up the specs for
ATM, though, and you should be able to tweak that suggested line change and
get something.

I for one would be really interested to hear if it solves the problem!

Ed W
On Monday 17 May 2004 17:23, Ed Wildgoose wrote:
<snip>
> Read the follow-ups to that post as well. Basically it's only an
> approximation. The "MPU" is basically pointing out that your ADSL
> stream is encapsulated in an ATM stream. ATM uses fixed size 64 byte
> packets. You need at least 2 of these, hence the 108 figure for MPU.
> Now you also need to estimate overhead which is going to be the size of
> the header on those ATM packets.

Now I'm confused. Is it 53 bytes or 64 bytes?

http://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html

> However, that still leaves the "wasted space" on the end of small
> packets (eg those that take up 2.5 ATM cells, how much does the 0.5 take
> up).
>
> I suggested a crude way to tweak that patch (easy to see how it works if
> you look at the relevant lines in the orig file). However, I don't even
> have a working QoS system so I haven't even compiled it! Look up the
> specs for ATM though and you should be able to tweak that suggested line
> change and get something.

So the patch is supposed to increase the accounted cost of dequeuing packets,
then, provided you know what numbers to use?

> I for one would be really interested to hear if it solves the problem!
>
> Ed W
Jason Boxman wrote:

> On Monday 17 May 2004 17:23, Ed Wildgoose wrote:
> <snip>
>
>> Read the follow-ups to that post as well. Basically it's only an
>> approximation. The "MPU" is basically pointing out that your ADSL
>> stream is encapsulated in an ATM stream. ATM uses fixed size 64 byte
>> packets. You need at least 2 of these, hence the 108 figure for MPU.
>> Now you also need to estimate overhead which is going to be the size of
>> the header on those ATM packets.
>
> Now I'm confused. Is it 53 bytes or 64 bytes?
>
> http://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html

You are right. Something happened and I somehow failed to divide 106 by 2 and
get 53... I have been doing a load of code using 2^n all day, and 32/64, etc.
were really on my mind just then. Sorry.

>> However, that still leaves the "wasted space" on the end of small
>> packets (eg those that take up 2.5 ATM cells, how much does the 0.5 take
>> up).
>>
>> I suggested a crude way to tweak that patch (easy to see how it works if
>> you look at the relevant lines in the orig file). However, I don't even
>> have a working QoS system so I haven't even compiled it! Look up the
>> specs for ATM though and you should be able to tweak that suggested line
>> change and get something.
>
> So the patch is supposed to increase the cost of dequeuing packets, then,
> provided you know what numbers to use?

Well, I haven't taken the time to trace that code, but from a 10-second look
it appears to simply accumulate the size of incoming packets based on the
actual size of the data. So I simply suggested dividing by 53, rounding up,
then adding on the "overhead" on a per-packet basis, rather than a
per-data-block basis.

Actually, having looked at your ADSL HOWTO link, the best calculation would
be to simply divide the amount of data by 48 (the data payload of an ATM
cell), then round up (since 0.5 cells means needing 1 whole cell).
Then multiply this number by 53 (the size of an ATM cell including its
header). This gives the exact amount of bandwidth used. I would code this as:

    size = ( (int)((datasize-1)/48) + 1 ) * 53

You could hardcode something similar into your tc and see if it helps (just
remove the MPU and overhead code added by the existing patch).

If you are scared of looking at code, don't be. It really isn't as scary as
it might look!

Good luck. Interested to hear if it works...

Ed W
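Ed's one-liner, wrapped as a standalone function so it can be sanity-checked
(the function name is mine, not from the patch). Note it covers only the
cell rounding; the per-packet encapsulation overhead he mentions would still
need to be added to datasize before calling it:

```c
#include <assert.h>

/* Ed's suggested ATM cost calculation: data rides in 48-byte cell
 * payloads, and every whole or partial cell costs 53 bytes on the
 * wire. Equivalent to ceil(datasize / 48) * 53 for datasize >= 1. */
int atm_cost(int datasize)
{
    return ((datasize - 1) / 48 + 1) * 53;
}
```

So 48 bytes of data cost one cell (53 bytes), while 49 bytes already cost
two cells (106 bytes), which is exactly the small-packet penalty discussed
above.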
On Tuesday 18 May 2004 08:38, Ed Wildgoose wrote:
> I would code this as:
>
> size = ( (int)((datasize-1)/48) + 1 ) * 53
>
> You could hardcode something similar into your tc and see if it helps
> (just remove the MPU and overhead code added by the existing patch).

How does modifying the tc code affect the way rates are calculated and
limited in the kernel? Isn't it just a userspace tool to create qdisc /
class structures and read statistics?

Andreas
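For what it's worth, one reason a userspace-only patch can change kernel
behaviour: for rate-based qdiscs, tc precomputes a 256-entry table mapping
packet-size buckets to transmission times and hands it to the kernel, which
then only does table lookups. The sketch below is modelled loosely on that
idea (invented names and abstract time units, not the real iproute2 or
kernel code):

```c
#include <assert.h>

#define CELL_LOG   3   /* each bucket covers 1 << 3 = 8 bytes */
#define TABLE_SIZE 256

/* Ticks needed to send `bytes` at `bytes_per_tick`, rounded up. */
unsigned tx_time(unsigned bytes, unsigned bytes_per_tick)
{
    return (bytes + bytes_per_tick - 1) / bytes_per_tick;
}

/* Userspace builds the whole table, so anything folded in here (mpu,
 * overhead, ATM cell rounding) changes what the kernel charges per
 * packet without touching kernel code at all. */
void build_rtable(unsigned rtab[TABLE_SIZE], unsigned mpu,
                  unsigned bytes_per_tick)
{
    for (unsigned i = 0; i < TABLE_SIZE; i++) {
        unsigned sz = (i + 1) << CELL_LOG; /* bucket's max packet size */
        if (sz < mpu)
            sz = mpu;                      /* floor at mpu */
        rtab[i] = tx_time(sz, bytes_per_tick);
    }
}
```

Under this model, an mpu of 106 raises the cost of the small-packet buckets
while leaving large-packet buckets untouched, which is why patching only
`tc` can still alter shaping.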
Andreas Klauer wrote:

> On Tuesday 18 May 2004 08:38, Ed Wildgoose wrote:
>
>> I would code this as:
>>
>> size = ( (int)((datasize-1)/48) + 1 ) * 53
>>
>> You could hardcode something similar into your tc and see if it helps
>> (just remove the MPU and overhead code added by the existing patch).
>
> How does modifying the tc code affect the way rates are calculated and
> limited in the kernel? Isn't it just a userspace tool to create qdisc /
> class structures and read statistics?

Dunno, I haven't had time to read through the code much. It started because
someone earlier in this thread pointed out that there was a patch available
on the tc website to better handle overhead and MPU. I just altered the
patch in a different way based on what looked fairly obvious.

However, I notice that tc is noted to "have a full implementation of HTB
inside it". Perhaps there are two HTB implementations kicking around? If I
get a chance I will have a poke around in the code. If the flow is this
straightforward in most of the kernel modules, then it looks pretty
straightforward to implement some options to control padding packets to
simulate the underlying protocol. However, since no one else has done it, I
doubt it is so...

Ed W