George Spiliotis
2003-Apr-25 10:57 UTC
BW Management: Shaping using IMQ or the *inner* interface?
Dear list members,

I want to develop a bandwidth manager for shaping traffic in my company. I am trying to find out whether it is better to use an IMQ interface or to shape the traffic towards the internal machines, so any help on this matter will be much appreciated.

The specifics. The design is simple (so that I, and possibly others, can understand the principle): we have one machine acting as a firewall/bandwidth manager on an SDSL line and two internal hosts connected to it. We want to split the available DSL bandwidth 80/20 between these two hosts. The schematic is as follows:

                                        +----------+
                       +--------------->|  Host 1  | (10.0.0.1)
                       |                +----------+
              +----------------+        |
 <-- SDSL --> | fw/bw manager  |<-------+
              +----------------+        |
                                        |   +----------+
 <---(1)             --->(3)            +-->|  Host 2  |
 --->(2)                                    +----------+

I have marked the possible points for shaping traffic with (1), (2) and (3).

Flow point (1) is the first to address (which happens to be the easy part of the construct), with an htb qdisc (expressed in tcng):

eth0 {
    egress {
        class (<$h1>) if ip_src == 10.0.0.1;
        class (<$h2>) if 1;
        htb () {
            class (rate 512Kbps, ceil 512Kbps) {
                $h1 = class (rate 410Kbps, ceil 512Kbps) { sfq; }   // 80%
                $h2 = class (rate 102Kbps, ceil 512Kbps) { sfq; }   // 20%
            }
        }
    }
}

For simplicity let's assume that no NAT or other mangling of the packets happens at the firewall in either direction, so packets enter and leave with their source and destination IPs unchanged.

Now suppose that host 1 and host 2 both start to generate traffic towards the internet at a rate of 512Kbps each, at the same time, for 5 seconds. It is easy to see that host 1's data will leave the fw/bw box at a rate of 410Kbps while host 2's data will queue up, leaving at a rate of 102Kbps, until all data from host 1 is sent, at which point the whole 512Kbps is left for host 2.

What happens if host 1 and host 2 each initiate a connection with a foreign host on the internet, both requesting 1 Mbyte of data to travel from the internet towards hosts 1 & 2?
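For reference, the tcng configuration above compiles down to plain tc commands roughly like the following sketch. This is not the exact tcng compiler output; interface names, handles and the sfq perturb value are assumptions, and the author's "512Kbps" is rendered as 512 kbit/s (note that plain tc reads "kbps" as kilo*bytes* per second, hence "kbit" below):

```shell
# Sketch: approximate tc equivalent of the tcng config at flow point (1).
# Assumes eth0 is the SDSL-facing interface; unmatched traffic -> class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class caps everything at the 512 kbit/s line rate
tc class add dev eth0 parent 1: classid 1:1 htb rate 512kbit ceil 512kbit

# Host 1: ~80% guaranteed, may borrow up to the full line
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 410kbit ceil 512kbit
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10

# Host 2: ~20% guaranteed, may borrow up to the full line
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 102kbit ceil 512kbit
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10

# Classify on source IP: 10.0.0.1 -> host 1's class; the rest falls to 1:20
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip src 10.0.0.1/32 flowid 1:10
```

The same borrowing behaviour described in the 5-second example follows from rate/ceil: each class is guaranteed its rate, and either may borrow idle bandwidth up to ceil.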
Both hosts send relatively small packets which quickly reach the destination host, and data starts to flow from that host on the internet to our ISP and through the DSL to our fw/bw box. Because the ACK packets generated by hosts 1 & 2 are relatively small, they are never queued at flow point (1), so the data flowing from the internet towards hosts 1 & 2 shares the available DSL bandwidth in equal parts (50/50), which obviously does not adhere to our 80/20 rule for hosts 1 & 2.

Now suppose that we implement the same bandwidth management rules as at flow point (1) at flow point (3), changing eth0 to eth1 (i.e., the internal interface) and ip_src to ip_dst (as suggested by LARTC). What should happen (**am I right on this one??**) is that packets traveling towards host 2 will start to be dropped, because host 2 gets the information at a rate of only 102Kbps; so, after an allowed latency time, the host on the internet which sends information for our host 2 will eventually slow down to a rate of 102Kbps. The remaining 410Kbps of our DSL line is used for the information flowing towards our host 1. This should accomplish our task partially.

What happens if the fw box also has a proxy service running and some of the information requested by host 2 is already on the fw box? Then it seems that creating such a low (512Kbps) ceil on a real 10Mbps internal interface is not the best approach.

Now suppose we create an IMQ interface at flow point (2) and attach the same disciplines (htb) as at flow point (1). This should eventually accomplish our 80/20 division of bandwidth, but does it really work on the traffic entering our fw/bw manager box (allowing some latency time for the flows to stabilize)? It seems this is the way to go for policing traffic entering our fw/bw management box, but I really need more information on the subject.

Thank you for your time,
George.
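For concreteness, the IMQ idea at flow point (2) is usually wired up along the following lines. This is a sketch only: it assumes a kernel and iptables patched with IMQ support (IMQ was never in mainline), that imq0 is the first free IMQ device, and it matches on ip_dst instead of ip_src since this is inbound traffic:

```shell
# Sketch: shaping ingress traffic via IMQ at flow point (2).
# Assumes IMQ-patched kernel + iptables; eth0 is the SDSL-facing interface.
ip link set imq0 up

# Divert packets arriving on eth0 through imq0 before routing
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

# Attach the same HTB tree as at flow point (1), keyed on destination IP
tc qdisc add dev imq0 root handle 1: htb default 20
tc class add dev imq0 parent 1: classid 1:1 htb rate 512kbit ceil 512kbit
tc class add dev imq0 parent 1:1 classid 1:10 htb rate 410kbit ceil 512kbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 102kbit ceil 512kbit
tc filter add dev imq0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.1/32 flowid 1:10
```

Because imq0 sees only traffic diverted from eth0, its 512kbit ceiling does not affect proxy hits or other locally generated traffic on the 10Mbps internal interface, which addresses the proxy concern above.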
_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/
Rob Cresswell
2003-Apr-28 09:25 UTC
Re: BW Management: Shaping using IMQ or the *inner* interface?
Since the absolute rate-limiting step (no pun intended) is your SDSL line, I _think_ you might achieve the desired effect through either method: applying egress traffic control to eth1, or ingress control through an IMQ device. Of course, you don't want to limit all traffic from eth1 towards your client-"hosts" to this tiny 512k ... but that can be taken care of with some more fine-tuned filtering in your egress case.

I am currently experimenting with a similar situation where the wire speed is unlimited (100Mbit compared to the ~10Mbit that I want to throttle at), and my initial setup was to do egress management on both sides of the FW box. It works well, but I have a problem: traffic from the internet to eth1 (to keep with the conventions of your example) gets limited at flow point (3) ... however, incoming traffic at eth0 measures significantly (25%) higher than the eth1 ceiling rate -- the FW box can seemingly receive (and throw away) packets willy-nilly, since there's no set limit on eth0's incoming side. I thought that throttling eth1 would be enough (assuming some sort of rate-discovery smarts), but I guess I was wrong. Once I do some testing with an IMQ setup on eth0, I'll be happy to let you know how it goes (assuming others haven't already piped up).

Can anyone give me a pointer to an explanation of how TCP rate discovery works? Was it a silly assumption to make that if one side of a router was choked, the other side would refrain from this wasteful sort of behavior? Or is this a context-dependent situation, where single TCP streams would do what I want, but thousands of small ones may not have a long enough connection time to behave properly when such a choke point is pegged at its limit?
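One IMQ-free way to stop eth0 from accepting that excess is the plain ingress qdisc with a policer, which drops packets above a configured rate before they reach the stack. A minimal sketch, assuming the ~10Mbit target from the example above (burst size is a guess to tune):

```shell
# Sketch: crude ingress policing on eth0 -- drop anything above ~10Mbit
# so the sender's TCP sees losses and backs off, instead of the FW box
# silently receiving and discarding at wire speed.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 police rate 10mbit burst 100k drop flowid :1
```

Unlike the HTB-on-IMQ approach, a policer cannot queue or reorder packets into classes; it can only drop, so it enforces an aggregate cap rather than an 80/20 split.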
thanks,
-rob

On Fri, Apr 25, 2003 at 03:57:39AM -0700, George Spiliotis wrote:
>
> Now suppose that we implement the same bandwidth management
> rules as at flow point (1) at flow point (3), changing eth0
> to eth1 (i.e., the internal interface) and ip_src to ip_dst
> (as suggested by LARTC). What should happen (**am I right on
> this one??**) is that packets traveling towards host 2 will
> start to be dropped, because host 2 gets the information at
> a rate of only 102Kbps; so, after an allowed latency time,
> the host on the internet which sends information for our
> host 2 will eventually slow down to a rate of 102Kbps. The
> remaining 410Kbps of our DSL line is used for the
> information flowing towards our host 1. This should
> accomplish our task partially.
>
> What happens if the fw box also has a proxy service running
> and some of the information requested by host 2 is already
> on the fw box? Then it seems that creating such a low
> (512Kbps) ceil on a real 10Mbps internal interface is not
> the best approach. Now suppose we create an IMQ interface
> at flow point (2) and attach the same disciplines (htb) as
> at flow point (1). This should eventually accomplish our
> 80/20 division of bandwidth, but does it really work on the
> traffic entering our fw/bw manager box (allowing some
> latency time for the flows to stabilize)? It seems this is
> the way to go for policing traffic entering our fw/bw
> management box, but I really need more information on the
> subject.
>
> Thank you for your time,
> George.