Hi.

I wrote in a reply to a mail on here recently that you can't set mpu
(minimum packet unit) on HTB as you can on CBQ.

I've just noticed that there is a patch on devik's site which does mpu
and overhead.

http://luxik.cdi.cz/~devik/qos/htb/

For dsl users mpu is, for practical purposes, going to be 106 -
overhead is still variable though, depending on packet size.

Having these should let you push upstream bandwidth rates a bit closer
to the limit.

Andy.
On Thursday 13 May 2004 15:54, Andy Furniss wrote:
> I've just noticed that there is a patch on devik's site which does mpu
> and overhead.
>
> http://luxik.cdi.cz/~devik/qos/htb/

Great, all the gems are hidden in the Changelog. ;-)

Direct link: http://luxik.cdi.cz/~devik/qos/htb/v3/htb_tc_overhead.diff

I'll give it a try. Thanks for the hint.

Andreas
On Thursday 13 May 2004 16:38, Andreas Klauer wrote:
> On Thursday 13 May 2004 15:54, Andy Furniss wrote:
> > I've just noticed that there is a patch on devik's site which does mpu
> > and overhead.
>
> I'll give it a try. Thanks for the hint.

Well, patching was a little difficult... it didn't like the Debian patch,
and I didn't succeed in joining the two patches together because of the
weird inject stuff. But anyway, it seems to work and looks useful, so I
added it to the "Hacks" section of my Fair NAT script, together with a
patched binary.

Andreas
On Thursday 13 May 2004 13:28, Andreas Klauer wrote:
> On Thursday 13 May 2004 16:38, Andreas Klauer wrote:
> > On Thursday 13 May 2004 15:54, Andy Furniss wrote:
> > > I've just noticed that there is a patch on devik's site which does
> > > mpu and overhead.
> >
> > I'll give it a try. Thanks for the hint.
>
> Well, patching was a little difficult... it didn't like the Debian
> patch, and I didn't succeed in joining the two patches together because
> of the weird inject stuff. But anyway, it seems to work and looks
> useful, so I added it to the "Hacks" section of my Fair NAT script,
> together with a patched binary.

Nifty.

But how do you determine what your minimum packet unit (MPU) is? How
about overhead for a PPPoE connection?

With shaping I can max my upstream and still maintain ~120ms ping times,
but I'd like to get it down to around ~70ms.

> Andreas
Andy Furniss wrote:
> Hi.
>
> I wrote in a reply to a mail on here recently that you can't set mpu
> (minimum packet unit) on HTB as you can on CBQ.
>
> I've just noticed that there is a patch on devik's site which does mpu
> and overhead.
>
> http://luxik.cdi.cz/~devik/qos/htb/
>
> For dsl users mpu is, for practical purposes, going to be 106 -
> overhead is still variable though, depending on packet size.
>
> Having these should let you push upstream bandwidth rates a bit closer
> to the limit.

What about changing that patch a little (bear in mind I don't understand
how it works though)?

It appears that you could change the patch in tc/core, in fn
tc_calc_rtable, from:

+	if (overhead)
+		sz += overhead;

to something like:

+	if (overhead)
+		sz += (((sz-1)/mpu)+1) * overhead;

That little calculation is trying to turn the mpu into a packet size,
work out how many packets would be required for the size (sz) of data,
and apply the overhead per packet. You would then set mpu to be the ATM
packet size, i.e. 54.

To be honest though, this packing of the params into a single var seems
unnecessary. The function tc_calc_rtable is only obviously used in the
tc code, and it could easily be changed to have a prototype with an
extra param. I would have to have a flick through the rest of the code,
but it might be quite easy to add per-packet overhead to the cbq code in
the same way, and also to whatever m_police is.

Can someone with a working setup try this out and see if it helps?

Ed W
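P.S. To make the arithmetic concrete, here is a minimal standalone
sketch of what that change would compute - this is not the real tc
source, and while the mpu of 54 is the value suggested above, the
overhead of 5 (one ATM cell header) is just my assumption for
illustration:

#include <stdio.h>

static unsigned adjusted_sz(unsigned sz, unsigned mpu, unsigned overhead)
{
	if (overhead)
		/* charge 'overhead' once per started mpu-sized chunk of sz */
		sz += (((sz - 1) / mpu) + 1) * overhead;
	return sz;
}

int main(void)
{
	unsigned sizes[] = { 40, 100, 576, 1500 };
	unsigned i;

	for (i = 0; i < 4; i++)		/* e.g. 1500 -> 1640 */
		printf("%4u -> %4u\n", sizes[i], adjusted_sz(sizes[i], 54, 5));
	return 0;
}

Note it only charges a per-chunk header; the padding of the last,
partially filled cell still isn't counted.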
Ed Wildgoose wrote:
> Andy Furniss wrote:
>
>> I've just noticed that there is a patch on devik's site which does mpu
>> and overhead.
>>
>> http://luxik.cdi.cz/~devik/qos/htb/
>>
>> For dsl users mpu is, for practical purposes, going to be 106 -
>> overhead is still variable though, depending on packet size.
>>
>> Having these should let you push upstream bandwidth rates a bit closer
>> to the limit.
>
> What about changing that patch a little (bear in mind I don't understand
> how it works though)?
>
> It appears that you could change the patch in tc/core, in fn
> tc_calc_rtable, from:
>
> +	if (overhead)
> +		sz += overhead;
>
> to something like:
>
> +	if (overhead)
> +		sz += (((sz-1)/mpu)+1) * overhead;
>
> That little calculation is trying to turn the mpu into a packet size,
> work out how many packets would be required for the size (sz) of data,
> and apply the overhead per packet. You would then set mpu to be the ATM
> packet size, i.e. 54.
>
> <snip>
>
> Can someone with a working setup try this out and see if it helps?

The patch author has mailed with a similar suggestion, so there may be
something new soon.

People will need to work out their ppp overhead first. I know mine for
pppoa/vc mux in the UK - it's 10 (the RFC says 9 or 10) - so it's lucky
my modem gives a cell count and I can tell easily. I don't know how many
variants there are, or what figures the others use.

Andy.
Jason Boxman wrote:
> Nifty.
>
> But how do you determine what your minimum packet unit (MPU) is? How
> about overhead for a PPPoE connection?

If you can get a cell count from your modem you can work it out with
ping. I don't know what your pppoe is.

> With shaping I can max my upstream and still maintain ~120ms ping
> times, but I'd like to get it down to around ~70ms.

Your upstream worst case depends on your bitrate and your MTU. If it's
128k you add about 90ms, at 256k about 45ms, for 1500b packets. What's
yours?

Andy.
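P.S. Those figures are just the serialization delay of one full-size
packet, delay = packet bits / link rate:

	1500 bytes * 8 = 12000 bits
	12000 bits / 128000 bit/s =~ 94 ms
	12000 bits / 256000 bit/s =~ 47 ms

(On ATM it's a touch worse - 1500 bytes of IP becomes 32 cells, i.e.
1696 bytes on the wire.)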
On Friday 14 May 2004 05:55, Andy Furniss wrote:
<snip>
> If you can get a cell count from your modem you can work it out with
> ping. I don't know what your pppoe is.

I can probably get my USB Stingray out of the closet and hook it up. I
think the Windows diagnostic utility for it actually included some stuff
about frames and cell sizes. The browser-based diagnostics on my Westell
don't include anything interesting.

> Your upstream worst case depends on your bitrate and your MTU. If it's
> 128k you add about 90ms, at 256k about 45ms, for 1500b packets. What's
> yours?

My upstream is supposedly 256Kbps. I am running the ADSL modem in
pass-through mode, so it gives my Linux router the live IP. When I did
PPPoE internally I had an MTU of 1492 and used the RP-PPPoE daemon.

> Andy.

--
Jason Boxman
Perl Programmer / *NIX Systems Administrator
Shimberg Center for Affordable Housing | University of Florida
http://edseek.com/ - Linux and FOSS stuff
Jason Boxman wrote:
> I can probably get my USB Stingray out of the closet and hook it up. I
> think the Windows diagnostic utility for it actually included some
> stuff about frames and cell sizes. The browser-based diagnostics on my
> Westell don't include anything interesting.
>
> My upstream is supposedly 256Kbps. I am running the ADSL modem in
> pass-through mode, so it gives my Linux router the live IP. When I did
> PPPoE internally I had an MTU of 1492 and used the RP-PPPoE daemon.

Could be this then -

You can make HTB more accurate by setting HTB_HYSTERESIS to 0 in
net/sched/sch_htb.c.

To save time - if you built HTB as a module, you can probably (well, it
worked for me) get away with editing htb.c, doing

make SUBDIRS=net/sched modules

and replacing /lib/modules/[kversion]/kernel/net/sched/htb.o with the
new htb.o from your source tree.

If you are doing it live, stop shaping and check with lsmod that
modprobe -r gets rid of the old htb.o (do it again if it's still there),
then reload your shaping scripts.

Andy.
Andy Furniss wrote:
> You can make HTB more accurate by setting HTB_HYSTERESIS to 0 in
> net/sched/sch_htb.c.
>
> To save time - if you built HTB as a module, you can probably (well, it
> worked for me) get away with editing htb.c, doing
>
> make SUBDIRS=net/sched modules
>
> and replacing /lib/modules/[kversion]/kernel/net/sched/htb.o with the
> new htb.o from your source tree.
>
> If you are doing it live, stop shaping and check with lsmod that
> modprobe -r gets rid of the old htb.o (do it again if it's still
> there), then reload your shaping scripts.

Oops - htb.c and htb.o above should read sch_htb.c and sch_htb.o.

Andy.
On Monday 17 May 2004 18:36, Andy Furniss wrote:
<snip>
> Could be this then -
>
> You can make HTB more accurate by setting HTB_HYSTERESIS to 0 in
> net/sched/sch_htb.c.

I have been messing with producing graphs with SNMP, so I only just did
this. I was hoping to get before and after graphs to verify any changes,
but I finally just did it.

Lucky for me, my ADSL line died tonight, so when it came back up I was
able to see my ping on a completely idle link. Using HTB with
HTB_HYSTERESIS set to 0 appears to have greatly reduced my ping time. It
still skips up more often than when the link is completely idle, but it
appears to be (without any graphs to verify) a marked improvement. (Now
I see mostly 75ms and an occasional 145ms instead of the complete
reverse.)

68/70/75 ms (min/avg/max) over 20 ICMP packets when idle.
68.3/91.6/215.7 ms over 323 ICMP packets at 85% utilization.

From the comments in sch_htb.c I take it I just traded speed for
accuracy in some of HTB's calculations, which on such a slow link is
probably not an issue?

<snip>
> Andy.

Thanks!

--
Jason Boxman
Perl Programmer / *NIX Systems Administrator
Shimberg Center for Affordable Housing | University of Florida
http://edseek.com/ - Linux and FOSS stuff
Jason Boxman wrote:
> On Monday 17 May 2004 18:36, Andy Furniss wrote:
> <snip>
>> Could be this then -
>>
>> You can make HTB more accurate by setting HTB_HYSTERESIS to 0 in
>> net/sched/sch_htb.c.
>
> I have been messing with producing graphs with SNMP, so I only just did
> this. I was hoping to get before and after graphs to verify any
> changes, but I finally just did it.

I was just thinking about making ping graphs with sed/xplot myself. Is
it easy with SNMP?

> Lucky for me, my ADSL line died tonight, so when it came back up I was
> able to see my ping on a completely idle link. Using HTB with
> HTB_HYSTERESIS set to 0 appears to have greatly reduced my ping time.
> It still skips up more often than when the link is completely idle, but
> it appears to be (without any graphs to verify) a marked improvement.
> (Now I see mostly 75ms and an occasional 145ms instead of the complete
> reverse.)

Good - there is another timing tweak I should have mentioned, which you
may or may not be able to use:

http://www.docum.org/stef.coene/qos/faq/cache/40.html

I use this - but I still notice some things (not TC related) use 100Hz.
When I finally finish my LFS setups I am going to try and tweak this as
well.

> 68/70/75 ms (min/avg/max) over 20 ICMP packets when idle.
> 68.3/91.6/215.7 ms over 323 ICMP packets at 85% utilization.

Assuming there is only upstream traffic for the test, that still seems
high - but then I don't know what you're pinging - my first hop is
usually OK.

What is the best min you can get pinging your first hop with an empty
line, with and without traffic control in use? TC itself doesn't seem to
affect my best empty-line rates. At 85% you should see a max of around
70-80, assuming 25ms baseline pings.

> From the comments in sch_htb.c I take it I just traded speed for
> accuracy in some of HTB's calculations, which on such a slow link is
> probably not an issue?

Yes.

Andy.
[...]
> I've just noticed that there is a patch on devik's site which does mpu
> and overhead.
>
> For dsl users mpu is, for practical purposes, going to be 106 -
> overhead is still variable though, depending on packet size.
>
> Having these should let you push upstream bandwidth rates a bit closer
> to the limit.

Hmm, now that I'm trying to use HFSC, I'd love to see a similar feature
there. :)
On Friday 14 May 2004 03:05, Ed Wildgoose wrote:
<snip>
> It appears that you could change the patch in tc/core, in fn
> tc_calc_rtable, from:
>
> +	if (overhead)
> +		sz += overhead;
>
> to something like:
>
> +	if (overhead)
> +		sz += (((sz-1)/mpu)+1) * overhead;

I did that and recompiled iproute2. I kicked my rate up to my actual
connection, 256Kbps, and I was nailed as usual. No measurable change
using the above with an mpu of 54 for each class. Nothing changed at my
handicapped rate of 160kbit either.

tc qdisc add dev eth0 root handle 1: htb default 90
tc class add dev eth0 parent 1: classid 1:1 htb rate 160kbit ceil 160kbit \
  mpu 54
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit ceil 64kbit \
  mpu 54 prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 80kbit ceil 160kbit \
  mpu 54 prio 1
tc class add dev eth0 parent 1:1 classid 1:50 htb rate 8kbit ceil 160kbit \
  mpu 54 prio 1
tc class add dev eth0 parent 1:1 classid 1:90 htb rate 8kbit ceil 160kbit \
  mpu 54 prio 1

<snip>
> Can someone with a working setup try this out and see if it helps?

No joy. I had more success modifying the HTB_HYSTERESIS compile-time
option.

What would be nice is something that would calculate the actual
PPPo(E|A) overhead on the fly at runtime and schedule accordingly.

After all, this whole [your rate] * 0.8/0.75/0.65 (I'm stuck with the
latter value) business is kind of a hack. If a scheduler existed that
understood the packets were ATM'd and the overhead imposed therein, you
could simply specify your rate as what it really is. By using a fraction
of your actual egress bandwidth you're configuring for the worst-case
scenario. In reality, depending on your traffic, I think you can
approach your actual rate more closely.

(The classical example being an unloaded TCP ACK costing you two ATM
cells and essentially wasting an entire ATM cell. But in some situations
your traffic might be mostly large IP packets, and then your wasted
overhead is greatly reduced...)

Anyway, is there any known work on such a scheduler? I'd be interested
in beta testing anything under development.

> Ed W
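P.S. To put rough numbers on the ACK example (assuming the 10-byte
pppoa/vc-mux overhead Andy quoted; PPPoE will differ):

	40-byte TCP ACK + 10 overhead = 50 bytes
	  -> 2 cells = 106 bytes on the wire
	     (the second cell carries only 2 useful bytes)
	1500-byte packet + 10 = 1510 bytes
	  -> 32 cells = 1696 bytes, only ~13% overhead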
Jason Boxman wrote:
> On Friday 14 May 2004 03:05, Ed Wildgoose wrote:
> <snip>
>> It appears that you could change the patch in tc/core, in fn
>> tc_calc_rtable, from:
>>
>> +	if (overhead)
>> +		sz += overhead;
>>
>> to something like:
>>
>> +	if (overhead)
>> +		sz += (((sz-1)/mpu)+1) * overhead;
>
> I did that and recompiled iproute2. I kicked my rate up to my actual
> connection, 256Kbps, and I was nailed as usual. No measurable change
> using the above with an mpu of 54 for each class. Nothing changed at my
> handicapped rate of 160kbit either.

I think that calculation needs to be changed so that the divisor "mpu"
becomes 48, and the overhead 5.

You could change the whole size calculation to be this instead (i.e. no
if):

sz = (((sz-1)/48) + 1) * 53;

Note I don't have the code in front of me, so you may need to tweak that
a bit. The idea though is that you get 48 data bytes in each ATM cell,
hence we work out how many cells are required. Then we multiply by 53,
which is the actual size of an ATM cell. Clear as mud?

> What would be nice is something that would calculate the actual
> PPPo(E|A) overhead on the fly at runtime and schedule accordingly.

That's what it ought to do... Please try this alteration and see if it
works any better. (Note: I think that MPU will need to be 48 for the
purposes of this code? Check my logic, but setting it to 48 is a little
low; the above calculation will then kick in and change it to 53, which
is your real min packet size. Otherwise we will double count.)

Interested to hear if this works...

Ed W
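P.S. Here is that idea as a standalone sketch you can compile and
eyeball - not the tc source, just the arithmetic, with a link-layer
overhead added before the cell rounding (10 is the pppoa/vc-mux figure
Andy mentioned; substitute your own):

#include <stdio.h>

/* Bytes actually sent on the ATM wire for an sz-byte IP packet:
   add the link-layer overhead, then round up to whole 48-byte cell
   payloads at 53 bytes per cell. */
static unsigned atm_wire_size(unsigned sz, unsigned ppp_overhead)
{
	sz += ppp_overhead;
	return (((sz - 1) / 48) + 1) * 53;
}

int main(void)
{
	unsigned sizes[] = { 40, 100, 576, 1500 };
	unsigned i;

	for (i = 0; i < 4; i++)
		printf("%4u -> %4u\n", sizes[i], atm_wire_size(sizes[i], 10));
	return 0;
}

That prints 40 -> 106, 100 -> 159, 576 -> 689 and 1500 -> 1696. Note
40 -> 106: a bare TCP ACK already fills two cells, which is where the
practical mpu of 106 at the top of this thread comes from.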
Jason Boxman wrote:
> I did that and recompiled iproute2. I kicked my rate up to my actual
> connection, 256Kbps, and I was nailed as usual. No measurable change
> using the above with an mpu of 54 for each class. Nothing changed at my
> handicapped rate of 160kbit either.
>
> <snip>
>
> No joy. I had more success modifying the HTB_HYSTERESIS compile-time
> option.
>
> What would be nice is something that would calculate the actual
> PPPo(E|A) overhead on the fly at runtime and schedule accordingly.
>
> After all, this whole [your rate] * 0.8/0.75/0.65 (I'm stuck with the
> latter value) business is kind of a hack. If a scheduler existed that
> understood the packets were ATM'd and the overhead imposed therein, you
> could simply specify your rate as what it really is. By using a
> fraction of your actual egress bandwidth you're configuring for the
> worst-case scenario. In reality, depending on your traffic, I think you
> can approach your actual rate more closely.
>
> (The classical example being an unloaded TCP ACK costing you two ATM
> cells and essentially wasting an entire ATM cell. But in some
> situations your traffic might be mostly large IP packets, and then your
> wasted overhead is greatly reduced...)
>
> Anyway, is there any known work on such a scheduler? I'd be interested
> in beta testing anything under development.

Reading your other post, I see your small traffic is ~100b - this would
use three cells, so as a temporary kludge you could set your mpu to 159
(three cells at 53 bytes each) and see how it goes.

AFAIK the author of the HTB patch is looking into modifying it to do the
sums properly for DSL. There isn't one answer, though - Ed's formula is
fine for the cells bit, but before that you need to add a ppp overhead
to the IP packet length, and this varies with pppoa+vc mux/pppoe/bridged
pppoe and probably other varieties of dsl implementations.

Andy.
On Friday 28 May 2004 14:54, Andy Furniss wrote:
<snip>
> Reading your other post, I see your small traffic is ~100b - this would
> use three cells, so as a temporary kludge you could set your mpu to 159
> (three cells at 53 bytes each) and see how it goes.
>
> AFAIK the author of the HTB patch is looking into modifying it to do
> the sums properly for DSL. There isn't one answer, though - Ed's
> formula is fine for the cells bit, but before that you need to add a
> ppp overhead to the IP packet length, and this varies with pppoa+vc
> mux/pppoe/bridged pppoe and probably other varieties of dsl
> implementations.

But there's no tried and true method of determining that information?

You mention at least three methods of mangling PPP with Ethernet/ATM.
And the overhead of each kind of setup would also vary depending on the
specifics of that setup? (I.e., knowing you have bridged PPPoE doesn't
instantly mean you have an overhead of, say, 123.)

Sounds particularly complicated.

But the overhead would be a fixed cost, no? If that is the case, you can
play whack-a-mole with it until you find a 'good' number. But, as I see
it, without a realtime ATM cost scheduler, even if I figure out my true
'overhead' it won't make much difference.

Thoughts, anyone?
> Reading your other post, I see your small traffic is ~100b - this would
> use three cells, so as a temporary kludge you could set your mpu to 159
> (three cells at 53 bytes each) and see how it goes.
>
> AFAIK the author of the HTB patch is looking into modifying it to do
> the sums properly for DSL. There isn't one answer, though - Ed's
> formula is fine for the cells bit, but before that you need to add a
> ppp overhead to the IP packet length, and this varies with pppoa+vc
> mux/pppoe/bridged pppoe and probably other varieties of dsl
> implementations.

I think he said that he is on BT ATM-based adsl? Can we perhaps tweak
that formula (which is already hardwired) and try to get him something
useful? It sounds like it would be a good vindication for the technique,
and if it works then we can retrofit it to some modular params which
work for more people. Can't be any worse than the current patch, which
already doesn't completely help most adsl users...

In other words, how would I calc the overhead for BT's ppp system?
Happy to help write the patch if you can supply the info.

Ed W
Jason Boxman wrote:
> But there's no tried and true method of determining that information?
>
> You mention at least three methods of mangling PPP with Ethernet/ATM.
> And the overhead of each kind of setup would also vary depending on the
> specifics of that setup? (I.e., knowing you have bridged PPPoE doesn't
> instantly mean you have an overhead of, say, 123.)
>
> Sounds particularly complicated.
>
> But the overhead would be a fixed cost, no? If that is the case, you
> can play whack-a-mole with it until you find a 'good' number. But, as I
> see it, without a realtime ATM cost scheduler, even if I figure out my
> true 'overhead' it won't make much difference.
>
> Thoughts, anyone?

You can find it by experimentation - if you get a cell count from your
modem it's easy. If you are on BT in the UK using pppoa/vc mux it's 10
(you can't even look that up - the RFC says 9 or 10).

ping -s 10 uses 1 cell, -s 11 uses 2.

10 data + 20 IP + 8 ICMP = 38 bytes; the ATM cell data size is 48, so
the ppp overhead is 10.

Like Ed, I haven't really looked at the code - but I will eventually, if
it doesn't get done by anyone else first. :-)

Andy.
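P.S. If you want the sum as code, the idea is just this (hypothetical
helper, my naming - and it only works when a ping packet plus overhead
can fit in a single cell at all, i.e. overhead <= 20):

/* s_max is the largest "ping -s" payload whose packet the modem
   still counts as one ATM cell */
unsigned ppp_overhead(unsigned s_max)
{
	/* 20-byte IP header + 8-byte ICMP header = 28 bytes */
	return 48 - (s_max + 28);
}
/* my numbers above: ppp_overhead(10) == 48 - 38 == 10 */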
Ed Wildgoose wrote:
>> Reading your other post, I see your small traffic is ~100b - this
>> would use three cells, so as a temporary kludge you could set your
>> mpu to 159 (three cells at 53 bytes each) and see how it goes.
>>
>> AFAIK the author of the HTB patch is looking into modifying it to do
>> the sums properly for DSL. There isn't one answer, though - Ed's
>> formula is fine for the cells bit, but before that you need to add a
>> ppp overhead to the IP packet length, and this varies with pppoa+vc
>> mux/pppoe/bridged pppoe and probably other varieties of dsl
>> implementations.
>
> I think he said that he is on BT ATM-based adsl? Can we perhaps tweak
> that formula (which is already hardwired) and try to get him something
> useful? It sounds like it would be a good vindication for the
> technique, and if it works then we can retrofit it to some modular
> params which work for more people. Can't be any worse than the current
> patch, which already doesn't completely help most adsl users...
>
> In other words, how would I calc the overhead for BT's ppp system?
> Happy to help write the patch if you can supply the info.

See my post to Jason - I think it should be doable. I was just waiting
to see if it got put into the patch, as the author knows the code and
has done most of the work already.

Andy.