I have 2 ethernet subnets -- one wireless (eth0 192.168.0), one 10Mbit wired (eth1 192.168.1) -- connected to a 512 down / 256 up adsl connection (ppp0). I have a single ip address on the ppp0 connection, and use nat to the devices on the ethernet networks (there's only 1 on each!). I'd like to prioritize small packets (<500 bytes). I may also look at other packet markings as time goes on (ie audio stream > icmp/small > http > ftp). I'd also like local traffic to be unaffected. To help in testing, I've bounded the classes for now, but will likely remove this when I know it's working.

I've just realised something though... How do I allow the total download bandwidth hitting eth0+eth1 from the net connection (ppp0) to be limited to 512, but "pooled"? I don't want to divide it 256/256 per ethernet segment. All these restrictions are device specific?

Here's what I've done so far - I know it's rather long. I think it works in part, but the multiple ethernet issue I'm really confused by! Also, I don't know what to set the other cbq parms to. Help!? I think I need to throttle...

#!/bin/sh
tc qdisc del root dev eth0 2>/dev/null
tc qdisc del root dev eth1 2>/dev/null
tc qdisc del root dev ppp0 2>/dev/null
tc class del root dev eth0 2>/dev/null
tc class del root dev eth1 2>/dev/null
tc class del root dev ppp0 2>/dev/null

# small packets - mark with 3
iptables -t mangle -A OUTPUT -m length --length 0:500 -j MARK --set-mark 3
iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 3
# large packets - mark with 4
iptables -t mangle -A OUTPUT -m length --length 500:15000 -j MARK --set-mark 4
iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 3
# Mark local traffic between ethernet segments
iptables -t mangle -A OUTPUT -s 192.168.0.1/16 -d eth1 -j MARK --set-mark 9
iptables -t mangle -A OUTPUT -s 192.168.0.1/16 -d eth0 -j MARK --set-mark 9

# root queueing discipline - adsl is 256 upstream
tc qdisc add dev ppp0 root handle 10: cbq bandwidth 256kbit avpkt 1000
tc qdisc add dev eth0 root handle 11: cbq bandwidth 10Mbit avpkt 1000
tc qdisc add dev eth1 root handle 12: cbq bandwidth 10Mbit avpkt 1000

# see http://www.prout.be/qos/QoS-connection-tuning-HOWTO.txt
# here's the total 256 for upstream adsl
tc class add dev ppp0 parent 10:0 classid 10:1 cbq bandwidth 256kbit rate 256kbit prio 4 bounded isolated allot 1514
# 10:2 for interactive
tc class add dev ppp0 parent 10:1 classid 10:2 cbq bandwidth 256kbit rate 85kbit prio 1 bounded isolated allot 1514
# 10:3 for large traffic
tc class add dev ppp0 parent 10:1 classid 10:3 cbq bandwidth 256kbit rate 171kbit prio 8 bounded isolated allot 1514

# base 10Mbit ethernet
tc class add dev eth0 parent 11:0 classid 11:1 cbq bandwidth 10Mbit rate 512kbit prio 4 bounded isolated allot 1514
# local traffic on 11:9
tc class add dev eth0 parent 11:0 classid 11:9 cbq bandwidth 10Mbit rate 10Mbit prio 4 bounded allot 1514
# interactive traffic
tc class add dev eth0 parent 11:1 classid 11:2 cbq bandwidth 512kbit rate 171kbit prio 1 bounded isolated allot 1514
# batch
tc class add dev eth0 parent 11:1 classid 11:3 cbq bandwidth 512kbit rate 442kbit prio 8 bounded isolated allot 1514

tc class add dev eth1 parent 12:0 classid 12:1 cbq bandwidth 10Mbit rate 512kbit prio 4 bounded isolated allot 1514
tc class add dev eth1 parent 12:1 classid 12:2 cbq bandwidth 512kbit rate 171kbit prio 1 bounded isolated allot 1514
tc class add dev eth1 parent 12:1 classid 12:3 cbq bandwidth 512kbit rate 442kbit prio 8 bounded isolated allot 1514
tc class add dev eth1 parent 12:0 classid 12:9 cbq bandwidth 10Mbit rate 10Mbit prio 4 bounded allot 1514

# Add sfq for fairness
tc qdisc add dev ppp0 parent 10:2 sfq quantum 1514b perturb 15
tc qdisc add dev ppp0 parent 10:3 sfq quantum 1514b perturb 15
tc qdisc add dev eth0 parent 11:2 sfq quantum 1514b perturb 15
tc qdisc add dev eth0 parent 11:3 sfq quantum 1514b perturb 15
tc qdisc add dev eth0 parent 11:9 sfq quantum 1514b perturb 15
tc qdisc add dev eth1 parent 12:2 sfq quantum 1514b perturb 15
tc qdisc add dev eth1 parent 12:3 sfq quantum 1514b perturb 15
tc qdisc add dev eth1 parent 12:9 sfq quantum 1514b perturb 15

# Use marks to allocate to queues
tc filter add dev ppp0 parent 10:0 protocol ip handle 3 fw flowid 10:2
tc filter add dev ppp0 parent 10:0 protocol ip handle 4 fw flowid 10:3
tc filter add dev eth0 parent 11:0 protocol ip handle 3 fw flowid 11:2
tc filter add dev eth0 parent 11:0 protocol ip handle 4 fw flowid 11:3
tc filter add dev eth0 parent 11:0 protocol ip handle 9 fw flowid 11:9
tc filter add dev eth1 parent 12:0 protocol ip handle 3 fw flowid 12:2
tc filter add dev eth1 parent 12:0 protocol ip handle 4 fw flowid 12:3
tc filter add dev eth1 parent 12:0 protocol ip handle 9 fw flowid 12:9

for dev in eth0 eth1 ppp0

Nigel Jones
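(Aside for testing: a quick way to see whether the mangle marks and cbq classes above are actually being hit. This is only a verification sketch using standard iptables and tc statistics output; the device names are the ones from the script.)

# per-rule packet/byte counters - shows whether each MARK rule matches
iptables -t mangle -L OUTPUT -v -n
# per-class counters - shows which class the marked packets end up in
tc -s class show dev ppp0
tc -s class show dev eth0
tc -s class show dev eth1
# confirms the fw filters are attached
tc filter show dev ppp0 parent 10: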
> I've just realised something though... How do I allow the total download
> bandwidth hitting eth0+eth1 from the net connection (ppp0) to be limited to
> 512, but "pooled"? I don't want to divide it 256/256 per ethernet segment.
> All these restrictions are device specific?

Unfortunately yes. If you want to limit the sum of the output of two interfaces, you can use my IMQ patch. A vanilla kernel can't do it.

devik
luxik.cdi.cz/~devik/qos/
What's wrong with using ingress on the ppp0 (adsl) device? Instead of queuing packets leaving eth0 and eth1 to a combined 512, just attach something like the following to ppp0 (remembering to make the rate a bit under 512; experiment to find the best value):

tc qdisc del dev ppp0 ingress
tc qdisc add dev ppp0 handle ffff: ingress
tc filter add dev ppp0 parent ffff: protocol ip prio 10 u32 match ip src \
  0.0.0.0/0 police rate 500kbit buffer 5k drop flowid :1

-Ross Skaliotis

On Sat, 9 Feb 2002, Martin Devera wrote:

> > I've just realised something though... How do I allow the total download
> > bandwidth hitting eth0+eth1 from the net connection (ppp0) to be limited to
> > 512, but "pooled"? I don't want to divide it 256/256 per ethernet segment.
> > All these restrictions are device specific?
>
> Unfortunately yes. If you want to limit the sum of the output of two
> interfaces, you can use my IMQ patch. A vanilla kernel can't do it.
>
> devik
> luxik.cdi.cz/~devik/qos/
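(Aside: to check that the policer is actually matching and dropping, the standard statistics output can be used - a verification sketch:)

# the ingress qdisc's counters show packets seen and dropped by the policer
tc -s qdisc show dev ppp0
# lists the u32/police filter attached to the ingress qdisc
tc filter show dev ppp0 parent ffff: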
Nothing wrong ;) It is a matter of personal taste. Simply policing the flow doesn't differentiate between subflows (the way SFQ does), which leads to unfair sharing per subflow. But yes, it is an option, and it is in the vanilla kernel.

devik

On Sat, 9 Feb 2002, Ross Skaliotis wrote:

> What's wrong with using ingress on the ppp0 (adsl) device? Instead of
> queuing packets leaving eth0 and eth1 to a combined 512, just attach
> something like the following to ppp0 (remembering to make the rate a bit
> under 512; experiment to find the best value):
>
> tc qdisc del dev ppp0 ingress
> tc qdisc add dev ppp0 handle ffff: ingress
> tc filter add dev ppp0 parent ffff: protocol ip prio 10 u32 match ip src \
>   0.0.0.0/0 police rate 500kbit buffer 5k drop flowid :1
>
> -Ross Skaliotis
>
> On Sat, 9 Feb 2002, Martin Devera wrote:
>
> > > I've just realised something though... How do I allow the total download
> > > bandwidth hitting eth0+eth1 from the net connection (ppp0) to be limited to
> > > 512, but "pooled"? I don't want to divide it 256/256 per ethernet segment.
> > > All these restrictions are device specific?
> >
> > Unfortunately yes. If you want to limit the sum of the output of two
> > interfaces, you can use my IMQ patch. A vanilla kernel can't do it.
> >
> > devik
> > luxik.cdi.cz/~devik/qos/
"Martin Devera" <devik@cdi.cz> wrote in message news:Pine.LNX.4.10.10202092343330.15162-100000@luxik.cdi.cz...> > I''ve just realised something though.... How do I allow the totaldownload> > bandwidth hitting eth0+eth1 from the net connection (ppp0) to be limitedat> > 512, but "pooled" it I don''t want to divide 256/256 per ethernetsegment.> > All these restrictions are device specific? > > Unfortunately yes. If you want to limit sum of two-interface output > then you can use my IMQ patch. Vanilla kernel can''t do it.Martin, Many thanks. I''ve download the patch (with htb2), and can see that if I run ifconfig imq up then the packets appear to get enqueued/dequeued via this new virtual device imq So I could create filters on the imq device So what rate do I specify to htb? 10Mbs? Do I create a 10Mbs class & try to use this for eth0-eth1 comms? via iptables --set-mark? And a 512Mbs class as a wrapper for all ppp->eth0,1 traffic (adsl download) And a 256Mbs class for all eth0,1->ppp (adsl upload) Then within each class subdivide as per my requirements. Is this the approach to take? This is my attempt so far - does this make sense? (It doesn''t seem to work....) PATH=/usr/local/bin:$PATH ifconfig imq up tc qdisc del root dev imq 2>/dev/null tc class del root dev imq 2>/dev/null # INBOUND - 1x OUTBOUND - 2x LOCAL - 3x # small packets - mark with 3 # These rules don''t seem to work. I want the -s & -m ANDED iptables -t mangle -A OUTPUT -s 192.168.0.1/16 -m length --length 0:500 -j MARK --set-mark 11 iptables -t mangle -A OUTPUT -s 192.168.0.1/16 -m length --length 500:15000 -j MARK --set-mark 14 iptables -t mangle -A OUTPUT -d 192.168.0.1/16 -m length --length 0:500 -j MARK --set-mark 21 iptables -t mangle -A OUTPUT -d 192.168.0.1/16 -m length --length 500:15000 -j MARK --set-mark 24 iptables -t mangle -A OUTPUT -s 192.168.0.1/16 -d 192.168.0.1/16 -j MARK --set-mark 31 # # root queueing discipline tc qdisc add dev imq root handle 10: htb default 10 # Base classes for ethernet (10Mbs), adsl up (256), adsl down (512). 
# No borrowing
tc class add dev imq parent 10: classid 10:10 htb rate 10Mbps ceil 10Mbps burst 2k prio 3
tc class add dev imq parent 10: classid 10:20 htb rate 512kbps ceil 512kbps burst 2k prio 3
tc class add dev imq parent 10: classid 10:30 htb rate 256kbps ceil 256kbps burst 2k prio 3
#
tc class add dev imq parent 10:20 classid 10:21 htb rate 400kbps ceil 500kbps burst 2k prio 4
tc class add dev imq parent 10:20 classid 10:22 htb rate 112kbps ceil 512kbps burst 2k prio 1
tc class add dev imq parent 10:30 classid 10:31 htb rate 200kbps ceil 250kbps burst 2k prio 4
tc class add dev imq parent 10:30 classid 10:32 htb rate 50kbps ceil 250kbps burst 2k prio 1

tc qdisc add dev imq parent 10:10 sfq quantum 1514b perturb 15
tc qdisc add dev imq parent 10:21 sfq quantum 1514b perturb 15
tc qdisc add dev imq parent 10:22 sfq quantum 1514b perturb 15
tc qdisc add dev imq parent 10:31 sfq quantum 1514b perturb 15
tc qdisc add dev imq parent 10:32 sfq quantum 1514b perturb 15

tc filter add dev imq parent 10: protocol ip handle 31 fw flowid 10:10
tc filter add dev imq parent 10: protocol ip handle 11 fw flowid 10:32
tc filter add dev imq parent 10: protocol ip handle 14 fw flowid 10:31
tc filter add dev imq parent 10: protocol ip handle 21 fw flowid 10:22
tc filter add dev imq parent 10: protocol ip handle 24 fw flowid 10:21

Also a "tc -s class ls dev imq" shows rates that don't match these rules, ie:

class htb 10:22 parent 10:20 leaf 803a: prio 1 rate 896Kbit ceil 4Mbit burst 2Kb cburst 6841b
 Sent 152 bytes 2 pkts (dropped 0, overlimits 0)
 lended: 2 borrowed: 0 giants: 0 injects: 0
 tokens: 14115 ctokens: 10578

class htb 10:10 root leaf 8038: prio 3 rate 80Mbit ceil 80Mbit burst 2Kb cburst 106440b
 Sent 56598 bytes 583 pkts (dropped 0, overlimits 0) rate 719bps 5pps
 lended: 583 borrowed: 0 giants: 0 injects: 0
 tokens: 153 ctokens: 8310

class htb 10:32 parent 10:30 leaf 803c: prio 1 rate 400Kbit ceil 2000Kbit burst 2Kb cburst 4159b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0 injects: 0
 tokens: 32768 ctokens: 13311

class htb 10:20 root prio 3 rate 4Mbit ceil 4Mbit burst 2Kb cburst 6841b
 Sent 152 bytes 2 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0 injects: 0
 tokens: 3087 ctokens: 10578

class htb 10:31 parent 10:30 leaf 803b: prio 3 rate 1600Kbit ceil 2000Kbit burst 2Kb cburst 4159b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0 injects: 0
 tokens: 8192 ctokens: 13311

class htb 10:30 root prio 3 rate 2Mbit ceil 2Mbit burst 2Kb cburst 4220b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0 injects: 0
 tokens: 6399 ctokens: 13189

class htb 10:21 parent 10:20 leaf 8039: prio 3 rate 3200Kbit ceil 4000Kbit burst 2Kb cburst 6719b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0 injects: 0
 tokens: 4096 ctokens: 10752

--
-- jonesn@hursley.ibm.com
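(Aside on the rate mismatch: in tc, "kbps" means kilobytes per second while "kbit" means kilobits per second, so a class declared with rate 112kbps is reported as 896Kbit - exactly the factor of 8 visible above - and 10Mbps shows up as 80Mbit. If kilobits per second were intended, the classes would be written with kbit; a sketch of the first few lines under that assumption:)

tc class add dev imq parent 10: classid 10:10 htb rate 10Mbit ceil 10Mbit burst 2k prio 3
tc class add dev imq parent 10: classid 10:20 htb rate 512kbit ceil 512kbit burst 2k prio 3
tc class add dev imq parent 10: classid 10:30 htb rate 256kbit ceil 256kbit burst 2k prio 3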
> then the packets appear to get enqueued/dequeued via this new virtual
> device imq

Yes, that is how it is supposed to work ;)

> So I could create filters on the imq device
>
> So what rate do I specify to htb? 10Mbit?
> Do I create a 10Mbit class and try to use this for eth0-eth1 comms, via
> iptables --set-mark?
> And a 512kbit class as a wrapper for all ppp->eth0,1 traffic (adsl download)?
> And a 256kbit class for all eth0,1->ppp traffic (adsl upload)?

Exactly. The 10Mbit can be even higher (100Mbit, to be sure, for example). With htb2 you can assign all eth->eth traffic to 10:0, which is kind of magic here: it allows those packets to go at full speed. Then you don't need the first of the classes above.

> Then within each class subdivide as per my requirements.
>
> Is this the approach to take?

You caught it :) That is the idea. The script you attached seems OK, but I only did a fast sweep over it.

devik
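(A minimal sketch of that suggestion, reusing mark 31 for the eth<->eth traffic from the earlier script; letting traffic through at full speed via 10:0 is htb2-specific behaviour, as described above:)

# send locally-marked (eth<->eth) traffic straight to 10:0,
# which htb2 lets through at full speed
tc filter add dev imq parent 10: protocol ip handle 31 fw flowid 10:0
# the dedicated 10:10 class and its sfq are then no longer needed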