I'm trying to create a relatively simple traffic shaping environment; basically, we're a home network with three different classes of traffic:

1) High-priority, which means usually low bandwidth but very demanding latency requirements. In English: online gaming. :)
2) Medium-priority, which includes most of what people think of as "normal" Internet traffic. Web browsing, email, USENET, IRC, etc.
3) Low-priority, which includes bulk traffic like big file downloads.

Basically, the cake that I'm trying to have and eat too is where we can be running a bunch of stuff like BitTorrent clients to download new Quake maps, and still be playing Battlefield 1942 without getting hammered on by the P2P clients' data transfers and node-building traffic.

This is what I have so far; it has made a definite improvement for "prio 1 traffic" (the medium stuff, web browsing and such) but doesn't seem to be enough; online gaming is still quite laggy while the file transfers and such are active. At this point it seems that what I basically need to do is tweak the values of the $UPSTREAM_* variables, but I thought I might ask here first to see if there's an entire design-level improvement to be made.

The basic idea is that "medium" traffic should be able to stomp on "low" traffic (represented by the default case) when it needs bandwidth/latency, and that "high" traffic should be able to stomp on both "medium" and "low" when it needs bandwidth/latency...but the lower classes can borrow bandwidth when the classes that outrank them aren't using it.

From reading the parts of the HOWTO that I could get my mind around, I understand that only outbound traffic can be molded, so the script below makes no attempt to do anything with inbound traffic.

In a tangentially-related question, I'm having some trouble determining what number I should put for $UPSTREAM_TOTAL.
I sort of arrived at 15 by trial and error -- but if anybody has any suggestions on ways to empirically determine what your upload speed actually is, they would be most welcome. :)

Oh, one other thing...does "u32 match ip [sd]port N" match both TCP and UDP port N, or just TCP? I'm wondering if that may be part of the problem, since most online games use UDP for the client connections.

Thanks to anyone who takes a look; let me know if there's any more information from our configuration/setup that would be helpful.

----- cut here
#! /bin/sh

if [ "$1" = "status" ]; then
        tc -s qdisc ls dev eth0
        exit 0
fi

IP="/bin/ip"
TC="/sbin/tc"
IPT="/sbin/iptables"

IFACE_NET="eth0"

## These are numbers in kilobytes per second
UPSTREAM_TOTAL="15"
## These next three should add up to _TOTAL
UPSTREAM_HI="9"
UPSTREAM_MED="5"
UPSTREAM_LO="1"

## Interface Maximum Transmission Unit
MTU_NET="1500"

PORTS_HI="21 22 23 53 123 5190 5191 5192 5193 5222 5269 8767 14567 14568 14690"
PORTS_MED="20 25 80 110 113 119 143 443 6667"

###############################################################################

## Delete old rules
${TC} qdisc del dev ${IFACE_NET} root

## Set MTU
${IP} link set dev ${IFACE_NET} mtu ${MTU_NET}

## Set queue size
${IP} link set dev ${IFACE_NET} qlen 2

## Create root queue discipline
${TC} qdisc add dev ${IFACE_NET} root handle 1:0 htb default 12

## Create root class
${TC} class add dev ${IFACE_NET} parent 1:0 classid 1:1 htb rate ${UPSTREAM_TOTAL}kbps

## Create leaf classes where packets will actually be classified
${TC} class add dev ${IFACE_NET} parent 1:1 classid 1:10 htb prio 0 rate ${UPSTREAM_HI}kbps ceil ${UPSTREAM_TOTAL}kbps
${TC} class add dev ${IFACE_NET} parent 1:1 classid 1:11 htb prio 1 rate ${UPSTREAM_MED}kbps ceil ${UPSTREAM_TOTAL}kbps
${TC} class add dev ${IFACE_NET} parent 1:1 classid 1:12 htb prio 2 rate ${UPSTREAM_LO}kbps ceil ${UPSTREAM_TOTAL}kbps

## Add SFQ beneath these classes
${TC} qdisc add dev ${IFACE_NET} parent 1:10 handle 10: sfq perturb 10
${TC} qdisc add dev ${IFACE_NET} parent 1:11 handle 11: sfq perturb 10
${TC} qdisc add dev ${IFACE_NET} parent 1:12 handle 12: sfq perturb 10

## Add the filters which direct traffic to the right classes

## High-priority traffic
## ICMP
${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 match ip protocol 1 0xff flowid 1:10
for PORT in ${PORTS_HI}; do
        ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 match ip dport ${PORT} 0xffff flowid 1:10
        ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 match ip sport ${PORT} 0xffff flowid 1:10
done

## Normal traffic
for PORT in ${PORTS_MED}; do
        ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 match ip dport ${PORT} 0xffff flowid 1:11
        ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 match ip sport ${PORT} 0xffff flowid 1:11
done

## Bulk traffic is anything not already classified, so comment this line
## out as it's redundant and anyway it generates an error I don't feel
## like debugging yet :)
#${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 2 u32 match 0xffff flowid 1:12
----- cut here

-- 
John Buttery
http://www.io.com/~john/

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/
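[A note on the "u32 match ip [sd]port" question: those selectors compare two bytes at a fixed offset past the IP header, so they match TCP and UDP alike (though they can misfire on fragments or packets with IP options). If you want the game-port rules to hit UDP only, a minimal sketch, reusing the variable names from the script above:]

```shell
#! /bin/sh
## Sketch only: restrict the high-priority port filters to UDP by adding
## an explicit protocol match (17 = UDP). Multiple "match" clauses in one
## u32 filter are ANDed together. ${TC}, ${IFACE_NET} and ${PORTS_HI} are
## assumed to be set as in the script above; the port list here is an
## illustrative subset.
TC="/sbin/tc"
IFACE_NET="eth0"
PORTS_HI="14567 14568"

for PORT in ${PORTS_HI}; do
        ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 \
                match ip protocol 17 0xff match ip dport ${PORT} 0xffff flowid 1:10
        ${TC} filter add dev ${IFACE_NET} protocol ip parent 1:0 prio 0 u32 \
                match ip protocol 17 0xff match ip sport ${PORT} 0xffff flowid 1:10
done
```

[This only changes which packets the filter selects, not how they are queued; the plain `match ip dport` form in the script already catches the UDP game traffic as well.]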
John Buttery wrote:

> I'm trying to create a relatively simple traffic shaping environment;
> basically, we're a home network with three different classes of traffic:
>
> 1) High-priority, which means usually low bandwidth but very demanding
>    latency requirements. In English: online gaming. :)
> 2) Medium-priority, which includes most of what people think of as
>    "normal" Internet traffic. Web browsing, email, USENET, IRC, etc.
> 3) Low-priority, which includes bulk traffic like big file downloads.
>
> Basically, the cake that I'm trying to have and eat too is where we
> can be running a bunch of stuff like BitTorrent clients to download new
> Quake maps, and still be playing Battlefield 1942 without getting
> hammered on by the P2P clients' data transfers and node-building
> traffic.

BitTorrent is the hardest thing I've tried to shape yet. It uses 20+ full-duplex TCP connections, and your peers may well have a flooded buffer two seconds deep. This means that if you use it a lot, you would be best to put it in its own class and cap it at, say, 60% down - and down is the problem; you have total control over upstream, so that is sortable.

> This is what I have so far; it has made a definite improvement for
> "prio 1 traffic" (the medium stuff, web browsing and such) but doesn't
> seem to be enough; online gaming is still quite laggy while the file
> transfers and such are active. At this point it seems that what I
> basically need to do is tweak the values of the $UPSTREAM_* variables,
> but I thought I might ask here first to see if there's an entire
> design-level improvement to be made.
> The basic idea is that "medium" traffic should be able to stomp on
> "low" traffic (represented by the default case) when it needs
> bandwidth/latency, and that "high" traffic should be able to stomp on
> both "medium" and "low" when it needs bandwidth/latency...but the lower
> classes can borrow bandwidth when the classes that outrank them aren't
> using it.

This is sort of what I am doing (though what I do keeps changing). TCP slow start is a pain - it's hard not to get a latency blip when new connections start - but they don't last that long.

> From reading the parts of the HOWTO that I could get my mind around, I
> understand that only outbound traffic can be molded, so the script below
> makes no attempt to do anything with inbound traffic.

You can, and need to, shape downstream - it's not actually possible to do it perfectly with the tc tools, but you can make it a lot better than doing nothing. It involves sacrificing some 20-40% of your bandwidth, depending on how many active TCP connections you have and how much you care about latency. How you do it exactly depends on your setup - there is an example of using the basic ingress policer in the Wondershaper script. It is better to shape using HTB and queues on your LAN-facing interface if possible; you can also use IMQ. For ingress I find esfq is better than sfq, as you can limit the queue length. I am still experimenting with my home setup; I have modified the hash for ingress and made it head-drop, which seems to help a bit - but I haven't tested enough to see if it's now broken in any way.

> In a tangentially-related question, I'm having some trouble determining
> what number I should put for $UPSTREAM_TOTAL. I sort of arrived at 15
> by trial and error -- but if anybody has any suggestions on ways to
> empirically determine what your upload speed actually is, they would be
> most welcome. :)
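[The basic ingress-policing approach mentioned above can be sketched roughly as follows - this is modelled on the Wondershaper style, not John's script, and the rate value is purely illustrative; you would set it to something like 85% of your measured downlink:]

```shell
#! /bin/sh
## Sketch only: police inbound traffic below the real downstream rate so
## the queue builds up on this box rather than at the ISP. DOWNSTREAM is
## an assumed placeholder value in kbit/s.
TC="/sbin/tc"
IFACE_NET="eth0"
DOWNSTREAM="200"

## Attach the ingress qdisc to the external interface.
${TC} qdisc add dev ${IFACE_NET} handle ffff: ingress

## Drop everything arriving faster than ${DOWNSTREAM}kbit; TCP senders
## back off when their packets are dropped, keeping the upstream queue
## (and hence latency) down.
${TC} filter add dev ${IFACE_NET} parent ffff: protocol ip prio 50 u32 \
        match ip src 0.0.0.0/0 \
        police rate ${DOWNSTREAM}kbit burst 10k drop flowid :1
```

[Policing just drops; shaping with HTB on the LAN-facing interface, as suggested above, gives finer control because you get real queues per class.]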
Most people know what their bandwidth is :-) However, if you have DSL and it's sold as 128kbit/s up, then 15 KB/s is probably too high - there is a lot of overhead, and while HTB calculates an empty ACK as 40 bytes, it actually uses 106 bytes on a DSL wire. I throttle to 85% upstream - I could go higher, I guess, but remember that while a bulk upload may be OK at, say, 90%, 30 small game packets/sec will be miscalculated by a larger percentage than bigger packets would be.

Another tweak which helped me was changing a setting in HTB; before I did this I found that packets were being sent in pairs: set HTB_HYSTERESIS to 0 in net/sched/sch_htb.c.

> Oh, one other thing...does "u32 match ip [sd]port N" match both TCP and
> UDP port N, or just TCP? I'm wondering if that may be part of the
> problem, since most online games use UDP for the client connections.
>
> Thanks to anyone who takes a look; let me know if there's any more
> information from our configuration/setup that would be helpful.

I can't help you there - I mark with iptables and filter on the marks.

Andy.
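[The mark-and-filter approach mentioned just above can be sketched like this - the port numbers and mark values are illustrative, not from either poster's setup; the idea is that iptables knows TCP from UDP, so tc only needs the "fw" classifier to match the marks:]

```shell
#! /bin/sh
## Sketch only: classify with iptables marks, then direct marked packets
## into the existing HTB classes (1:10 high, 1:11 medium) with fw filters.
TC="/sbin/tc"
IPT="/sbin/iptables"
IFACE_NET="eth0"

## Mark outbound UDP game traffic as 1 and ordinary web traffic as 2.
## Port 14567 (Battlefield 1942) and 80 are example choices.
${IPT} -t mangle -A POSTROUTING -o ${IFACE_NET} -p udp --dport 14567 \
        -j MARK --set-mark 1
${IPT} -t mangle -A POSTROUTING -o ${IFACE_NET} -p tcp --dport 80 \
        -j MARK --set-mark 2

## "handle N fw" matches packets carrying mark N and sends them to the
## given class; unmarked traffic falls through to the HTB default class.
${TC} filter add dev ${IFACE_NET} parent 1:0 protocol ip prio 0 handle 1 fw flowid 1:10
${TC} filter add dev ${IFACE_NET} parent 1:0 protocol ip prio 1 handle 2 fw flowid 1:11
```

[This sidesteps the u32 TCP-vs-UDP question entirely, since the protocol match happens in iptables.]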