Ow Mun Heng
2004-Jul-09 19:11 UTC
RED/GRED implementation for InBound Traffic Control (from ISP)
Hi all,

Can anyone show me pointers on how to get this implemented on a Linux box with tc rules? I'd also like to know just how efficient this algorithm is. AFAIK, inbound traffic control can't really be achieved without losing bandwidth. In one of the HOWTOs I've read, a guy had to limit his downstream speed to 1/2 his bandwidth to actually control it, and he said that the only way to control inbound traffic efficiently is TCP window shaping, of which there is no OSS implementation.

Can anyone please shed light on this?

FWIW, this discussion was in:
http://my-opensource.org/lists/myoss/2004-07/msg00051.html
http://my-opensource.org/lists/myoss/2004-06/msg00167.html
http://www.redhat.com/archives/fedora-list/2004-July/msg01492.html

Thanks
--
Ow Mun Heng
Fedora GNU/Linux Core 2 (Tettnang) on D600 1.4Ghz CPU
kernel 2.6.7-2.jul1-interactive
Neuromancer 12:06:38 up 3:13, 3 users, load average: 1.80, 1.23, 1.41
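P.S. To make the question concrete, the kind of thing I imagine is the sketch below: RED as a leaf qdisc under an HTB class on the LAN-facing interface, so downloads get queued on their way out to the LAN. It's untested, and the device name and rates are made up:

#!/bin/sh
# Untested sketch: shape downloads by queueing them on the way *out*
# to the LAN, since we cannot queue packets on the ISP's side of the
# link. Device name and rates are examples only.

LAN=eth1          # interface facing the LAN
RATE=460kbit      # a bit below the real 512kbit/s downlink

# HTB enforces the rate limit...
tc qdisc add dev $LAN root handle 1: htb default 10
tc class add dev $LAN parent 1: classid 1:10 htb rate $RATE ceil $RATE

# ...and RED sits underneath it, dropping packets early and at random
# as the queue builds up, instead of tail-dropping only when full.
tc qdisc add dev $LAN parent 1:10 handle 10: red \
    limit 60000 min 15000 max 45000 avpkt 1000 \
    burst 25 bandwidth $RATE probability 0.02

Is that anywhere close to right?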
Ed Wildgoose
2004-Jul-10 08:17 UTC
Re: RED/GRED implementation for InBound Traffic Control (from ISP)
>> So the solution is to throttle incoming to 99.9% of total incoming
>> bandwidth. Well, actually since you have no control over who can send
>> you data, this only works in steady state. So perhaps you should make
>> it 95% or 90%. It depends whether you mind there being the odd blip
>> where someone starts sending you traffic, but it takes a second or so
>> while you instruct other senders to slow down. In the meantime you will
>> be overloaded.
>
> And how does RED/GRED solve that, or are you not addressing that?

RED and GRED are used in conjunction with something that queues packets and releases them slowly. So you can use IMQ on the incoming stream and then HTB etc., or do it on the outgoing stream. The idea is that you could temporarily let the incoming queue get really, really large until the sender fills their send window with data, and then perhaps this might throttle the send speed (iffy); or you could look at your queue and, when it gets beyond a certain size, (wastefully) drop some of the packets, which because of the way TCP works means that the sender slows down. The "R" in these two algorithms means that packets get dropped randomly, as opposed to, say, waiting for the queue to fill up and then dropping any incoming packets until it clears down a bit - the theory is that this is fairer ("drop randomly" instead of "drop most recent").

> My understanding when talking to this guy and all the stuff which I've
> read seems to point to RED being good at handling this sort of thing.
> (But then again, it's not as good as TCP window shaping, which,
> incidentally, all I've heard/read says is good; but whether or not it
> drops packets (or how it compares to RED/GRED), I have no idea.)

You need to read how TCP works. Senders have a variable-sized output buffer (the window) and keep that amount of data "in transit" at any one time. Once the window fills up, i.e. there is a load of data in transit, they pause until they get some acknowledgements that the data was received. TCP also changes the size of this window based on whether packet drops occur, and in fact the whole point of a load of clever TCP algorithms is to find the optimal window size so that we don't overload the receiver, but still keep the net link in full use.

Simple throttling algorithms just drop a few packets to encourage the sender to slow down. However, this is wasteful, because you already downloaded those packets, then threw them away, and then you clearly have to download them again! Fiddling with the window size instead is obviously going to be complicated... no one has written anything free yet.

>> In this case you pay
>> for NOT 512Kbit/s of IP bandwidth, but 512Kbit/s of ATM bandwidth. And
>> unfortunately the relationship between the two is slightly complicated.
>
> I have no idea what's the difference actually.

Well, read the rest of the very clear flipping email that I took 20 mins to write!!!!!

>> To save you the headache of worrying about those calculations, consider
>> sending a 49 byte packet. It will clearly need to be split into two 48
>> byte packets (yes?),
>
> 1st packet = 48 bytes
> 2nd packet = 1 byte
> YES?

No, the second packet = 48 bytes too!!! It only has 1 byte of data in it; the rest is blank. ATM ***only*** sends data in 53 byte cells - 48 bytes of data + a 5 byte header. So if your MTU is a multiple of 48 then you will waste very few cells; otherwise you will have some wastage.
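To put rough numbers on it, the arithmetic is just the sketch below. Note that it uses the simplified model above; real AAL5 framing also adds a trailer and providers differ, so treat the figures as illustrative:

#!/bin/sh
# Rough ATM cell cost of one IP packet: ceil(bytes/48) cells of
# 53 bytes each. Simplified model -- real AAL5 framing also adds
# an 8 byte trailer, which this deliberately ignores.
ip_bytes=${1:?usage: $0 <ip-packet-size-in-bytes>}
cells=$(( (ip_bytes + 47) / 48 ))
wire=$(( cells * 53 ))
echo "$ip_bytes byte IP packet -> $cells cells -> $wire bytes on the wire"

For example, a 49 byte packet costs 2 cells = 106 bytes on the wire, so more than half the link capacity it consumes is wasted.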
If you have a P2P app which sends data in random, probably small-sized packets, then frequently they won't be a multiple of 48, and the wastage will be large compared with the size of the IP packet being sent... However, the kernel is throttling based on the IP bandwidth consumed, whereas you might already have overloaded your link despite the kernel thinking it's only 3/4 full.

The solution is to enhance the kernel's calculation of the rate on the ADSL line so that it knows it is different from the rate used on an ethernet connection. However, every ADSL provider does it slightly differently, and it's not easy to find the correct calculation...

>> then each packet has a 5 byte header = 53 bytes
>
> 1st packet = 48+5 = 53 bytes
> 2nd packet = 1+5 = 6 bytes

Nope... see above. (Or search the net for ADSL QOS and ATM. There are plenty of references and a really good HOWTO.)

> huh?? I take it that you're saying the maximum/minimum for each packet is
> 53 bytes (yes?)

...you're getting it!

>> So big FTP transfers with large IP packets don't waste too much, but if
>> you have a load of SSH users, or some P2P users, or something else which
>> spits out tons of small packets, then the IP bandwidth might be loads
>> less than the ADSL bandwidth, hence some people really throttle back to
>> be sure they have control of the inbound connection
>
> That's what I want actually. The (or rather my) holy grail, and without
> severely limiting my inbound traffic. (50%?? Man, I'm not gonna waste
> 50% of what I'm paying for. It's like buying a Big Mac and only getting
> the buns minus the patties.)

Well, some people prefer to avoid any blips in their latency rather than worrying about some wasted bandwidth. Different needs, that's all. (Some people buy a Big Mac and throw out the gherkin as well...)

However, without a clever patch there was previously no other way to limit the download link. Remember, they *weren't* limiting themselves to 50% of what they paid for; what they were doing was putting 50% of the magic number in the script. Because of the size of the packets in transit they were actually consuming 100% of the link, but only registering as using 50% of the equivalent ethernet bandwidth... People fiddle around and determine this number empirically based on the type of data they receive... Look at it another way: depending on the type of data you transmit, e.g. P2P, it can consume (waste) up to 50% of the bandwidth in useless ATM cells...

>> Clear as mud?
>
> I didn't know mud was clear. (So that means I've not a clue.)

Sorry, English uses irony a lot. My fault. The phrase means "clear as something which isn't very clear", i.e. "Did I explain it badly?" or "Still confused?" So the answer was obviously yes... Hopefully the above helps. Try the ADSL QoS HOWTO if you still have questions.

Ed W
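P.S. If you just want the crude, patch-free version of "really throttle back", the usual recipe is the ingress policer from the LARTC HOWTO: drop anything arriving faster than a chosen fraction of the link so that TCP senders back off. The interface name and the ~90% figure below are examples, not tested values:

#!/bin/sh
# Crude inbound throttle: police incoming traffic at ~90% of a
# 512kbit/s downlink and drop the excess. Wasteful, because the
# dropped packets were already downloaded, as discussed above.
WAN=eth0          # interface facing the ISP
tc qdisc add dev $WAN handle ffff: ingress
tc filter add dev $WAN parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 \
    police rate 460kbit burst 10k drop flowid :1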