drew einhorn
2006-Nov-20 23:43 UTC
Fwd: Traffic Shaping on a Transparent Bridge not working!
I'm trying to shape traffic on a Devil-Linux box.

This note was originally sent to their mailing list, because the LARTC
list appears to have been down for the past few days. My mailbox was just
flooded with a half dozen or so confirmation requests in response to my
repeated attempts to subscribe to this list.

---------- Forwarded message ----------
From: drew einhorn <drew.einhorn@gmail.com>
Date: Nov 19, 2006 11:51 PM
Subject: Traffic Shaping on a Transparent Bridge not working!
To: devil-linux-discuss@lists.sourceforge.net

My first DL project was going well. Then I ran into problems attempting
to shape my bandwidth.

First I'll describe the parts that I believe are working correctly.

I have a DL 1.2.11 box running the default kernel, 2.4.33.3-grsec.

I have br0 bridging all four ports (eth0, eth1, eth2, eth3) on a
quad-port PCI card. The bridge has not been assigned an IP address, on
the theory that this makes it much more difficult to attack. The bridge
connects four devices on the 3-bit (/29) block of public static IPs from
my ISP.

I have a single-port Ethernet PCI card, eth4, with a static IP on my
internal private network. It is used for remote management of the DL box
from anywhere on my internal network.

eth0 is connected to my ISP's router via the Ethernet port on my ISDN
modem. I know ISDN is a nearly dead technology, but it's the best thing
my crappy telco offers. I tried a satellite ISP, but that's another long
and sad story.

eth1 is connected to a hardened, publicly accessible host.

eth2 and eth3 are connected to the WAN ports on a couple of Linksys
Cable/DSL routers. Eventually most of their functions will migrate to
the DL box, but that is more than I wanted to bite off in my first DL
project.

The first Linksys box NATs one of my public IPs to my internal private
network. The second Linksys box is newer and includes a wireless access
point used by a couple of neighbors. It NATs a second public IP to a
separate private network.

All of the above appears to be working as expected.

After pondering the mysteries of traffic shaping I decided to start with
wondershaper 1.1a from lartc.org, rather than starting from scratch. I
tried both the cbq and htb versions without any success.

RTFM time. The htb section of http://lartc.org/howto/index.html is
easier reading than the cbq section, and the howto claims htb is better
anyway, so let's focus on the htb version of wondershaper.

OK. First we edit wshaper.htb and configure the shell variables, then
run:

  sh -x wshaper.htb

to echo the commands as they are executed.
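For reference, the configuration block at the top of wshaper.htb ends up
looking roughly like this (a sketch reconstructed from the sh -x trace
below; the NOPRIO* lists are empty in this run):

  # Rates in kbit/s and the interface to shape, matching the trace
  # further down: 100kbit each way, a bit below the ISDN link rate,
  # so packets queue in this box rather than in the modem/router.
  DOWNLINK=100
  UPLINK=100
  DEV=eth0

  # Optional hosts/ports to de-prioritize; unset in this run.
  NOPRIOHOSTSRC=
  NOPRIOHOSTDST=
  NOPRIOPORTSRC=
  NOPRIOPORTDST=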
Then we start pinging the router at the other end of the ISDN line, and
start downloading a file to generate some traffic that really needs to
be shaped. Then we run:

  sh -x wshaper.htb status

to gather some statistics, kill the download, and finally run:

  sh -x wshaper.htb stop

to shut down the malfunctioning shaper.

Here's the output from the ping. The link is idle at first, with normal
ping times:

$ ping 67.0.192.10
PING 67.0.192.10 (67.0.192.10) 56(84) bytes of data.
64 bytes from 67.0.192.10: icmp_seq=0 ttl=254 time=48.5 ms
64 bytes from 67.0.192.10: icmp_seq=1 ttl=254 time=48.4 ms
64 bytes from 67.0.192.10: icmp_seq=2 ttl=254 time=48.4 ms
64 bytes from 67.0.192.10: icmp_seq=3 ttl=254 time=48.4 ms
64 bytes from 67.0.192.10: icmp_seq=4 ttl=254 time=48.5 ms
64 bytes from 67.0.192.10: icmp_seq=5 ttl=254 time=67.8 ms
64 bytes from 67.0.192.10: icmp_seq=6 ttl=254 time=48.3 ms
64 bytes from 67.0.192.10: icmp_seq=7 ttl=254 time=48.2 ms

The download starts. Shaping is not working! Queues in the router
and/or ISDN modem grow, and ping times rapidly become huge:

64 bytes from 67.0.192.10: icmp_seq=8 ttl=254 time=184 ms
64 bytes from 67.0.192.10: icmp_seq=9 ttl=254 time=1080 ms
64 bytes from 67.0.192.10: icmp_seq=10 ttl=254 time=2025 ms
64 bytes from 67.0.192.10: icmp_seq=11 ttl=254 time=1551 ms
64 bytes from 67.0.192.10: icmp_seq=12 ttl=254 time=1078 ms
64 bytes from 67.0.192.10: icmp_seq=13 ttl=254 time=896 ms
64 bytes from 67.0.192.10: icmp_seq=14 ttl=254 time=1088 ms
64 bytes from 67.0.192.10: icmp_seq=15 ttl=254 time=1171 ms
64 bytes from 67.0.192.10: icmp_seq=16 ttl=254 time=1272 ms
64 bytes from 67.0.192.10: icmp_seq=17 ttl=254 time=1280 ms
64 bytes from 67.0.192.10: icmp_seq=18 ttl=254 time=1101 ms
64 bytes from 67.0.192.10: icmp_seq=19 ttl=254 time=1258 ms
64 bytes from 67.0.192.10: icmp_seq=20 ttl=254 time=1211 ms
64 bytes from 67.0.192.10: icmp_seq=21 ttl=254 time=1259 ms
64 bytes from 67.0.192.10: icmp_seq=22 ttl=254 time=1373 ms
64 bytes from 67.0.192.10: icmp_seq=23 ttl=254 time=1424 ms
64 bytes from 67.0.192.10: icmp_seq=24 ttl=254 time=1461 ms
64 bytes from 67.0.192.10: icmp_seq=25 ttl=254 time=1277 ms
64 bytes from 67.0.192.10: icmp_seq=26 ttl=254 time=1521 ms
64 bytes from 67.0.192.10: icmp_seq=27 ttl=254 time=1467 ms
64 bytes from 67.0.192.10: icmp_seq=28 ttl=254 time=1335 ms
64 bytes from 67.0.192.10: icmp_seq=29 ttl=254 time=1329 ms
64 bytes from 67.0.192.10: icmp_seq=30 ttl=254 time=1386 ms
64 bytes from 67.0.192.10: icmp_seq=31 ttl=254 time=1360 ms
64 bytes from 67.0.192.10: icmp_seq=32 ttl=254 time=1416 ms
64 bytes from 67.0.192.10: icmp_seq=33 ttl=254 time=1480 ms
64 bytes from 67.0.192.10: icmp_seq=34 ttl=254 time=1345 ms
64 bytes from 67.0.192.10: icmp_seq=35 ttl=254 time=1356 ms
64 bytes from 67.0.192.10: icmp_seq=36 ttl=254 time=1370 ms
64 bytes from 67.0.192.10: icmp_seq=37 ttl=254 time=1278 ms
64 bytes from 67.0.192.10: icmp_seq=38 ttl=254 time=1612 ms
64 bytes from 67.0.192.10: icmp_seq=39 ttl=254 time=1520 ms
64 bytes from 67.0.192.10: icmp_seq=40 ttl=254 time=1322 ms
64 bytes from 67.0.192.10: icmp_seq=41 ttl=254 time=1545 ms

Kill the download.
The queues drain and ping times return to normal:

64 bytes from 67.0.192.10: icmp_seq=42 ttl=254 time=975 ms
64 bytes from 67.0.192.10: icmp_seq=43 ttl=254 time=67.4 ms
64 bytes from 67.0.192.10: icmp_seq=44 ttl=254 time=73.6 ms
64 bytes from 67.0.192.10: icmp_seq=45 ttl=254 time=45.2 ms
64 bytes from 67.0.192.10: icmp_seq=46 ttl=254 time=45.2 ms
64 bytes from 67.0.192.10: icmp_seq=47 ttl=254 time=44.8 ms

And here are the shell commands and their output:

root@Devil:~ # sh -x wshaper.htb
+ DOWNLINK=100
+ UPLINK=100
+ DEV=eth0
+ NOPRIOHOSTSRC
+ NOPRIOHOSTDST
+ NOPRIOPORTSRC
+ NOPRIOPORTDST
+ '[' '' = status ']'
+ tc qdisc del dev eth0 root
+ tc qdisc del dev eth0 ingress
+ '[' '' = stop ']'
+ tc qdisc add dev eth0 root handle 1: htb default 20
+ tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbit burst 6k
+ tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100kbit burst 6k prio 1
+ tc class add dev eth0 parent 1:1 classid 1:20 htb rate 90kbit burst 6k prio 2
+ tc class add dev eth0 parent 1:1 classid 1:30 htb rate 80kbit burst 6k prio 2
+ tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
+ tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
+ tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
+ tc filter add dev eth0 parent 1:0 protocol ip prio 10 u32 match ip tos 0x10 0xff flowid 1:10
+ tc filter add dev eth0 parent 1:0 protocol ip prio 10 u32 match ip protocol 1 0xff flowid 1:10
+ tc filter add dev eth0 parent 1: protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 match u8 0x10 0xff at 33 flowid 1:10
+ tc filter add dev eth0 parent 1: protocol ip prio 18 u32 match ip dst 0.0.0.0/0 flowid 1:20
+ tc qdisc add dev eth0 handle ffff: ingress
+ tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 match ip src 0.0.0.0/0 police rate 100kbit burst 10k drop flowid :1

root@Devil:~ # sh -x wshaper.htb status
+ DOWNLINK=100
+ UPLINK=100
+ DEV=eth0
+ NOPRIOHOSTSRC
+ NOPRIOHOSTDST
+ NOPRIOPORTSRC
+ NOPRIOPORTDST
+ '[' status = status ']'
+ tc -s qdisc ls dev eth0
qdisc htb 1: r2q 10 default 20 direct_packets_stat 0
 Sent 18649 bytes 191 pkts (dropped 0, overlimits 0)
qdisc sfq 10: parent 1:10 limit 128p quantum 1514b perturb 10sec
 Sent 10582 bytes 147 pkts (dropped 0, overlimits 0)
qdisc sfq 20: parent 1:20 limit 128p quantum 1514b perturb 10sec
 Sent 8067 bytes 44 pkts (dropped 0, overlimits 0)
qdisc sfq 30: parent 1:30 limit 128p quantum 1514b perturb 10sec
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc ingress ffff: ----------------
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
+ tc -s class ls dev eth0
class htb 1:1 root rate 100000bit ceil 100000bit burst 6Kb cburst 1724b
 Sent 18649 bytes 191 pkts (dropped 0, overlimits 0)
 rate 1320bit 1pps
 lended: 0 borrowed: 0 giants: 0
 tokens: 398459 ctokens: 108855

class htb 1:10 parent 1:1 leaf 10: prio 1 rate 100000bit ceil 100000bit burst 6Kb cburst 1724b
 Sent 10582 bytes 147 pkts (dropped 0, overlimits 0)
 rate 656bit 1pps
 lended: 147 borrowed: 0 giants: 0
 tokens: 398459 ctokens: 108855

class htb 1:20 parent 1:1 leaf 20: prio 2 rate 90000bit ceil 90000bit burst 6Kb cburst 1711b
 Sent 8067 bytes 44 pkts (dropped 0, overlimits 0)
 rate 712bit
 lended: 44 borrowed: 0 giants: 0
 tokens: 432284 ctokens: 109555

class htb 1:30 parent 1:1 leaf 30: prio 2 rate 80000bit ceil 80000bit burst 6Kb cburst 1699b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 503316 ctokens: 139264
+ exit

root@Devil:~ # sh -x wshaper.htb stop
+ DOWNLINK=100
+ UPLINK=100
+ DEV=eth0
+ NOPRIOHOSTSRC
+ NOPRIOHOSTDST
+ NOPRIOPORTSRC
+ NOPRIOPORTDST
+ '[' stop = status ']'
+ tc qdisc del dev eth0 root
+ tc qdisc del dev eth0 ingress
+ '[' stop = stop ']'
+ exit

root@Devil:~ #

I don't think we generated enough uplink traffic to exercise the htb
qdiscs. But it doesn't look like the ingress qdisc is working at all.

I'm out of ideas for now.

--
Drew Einhorn
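One way to double-check that diagnosis (a sketch, not from the original
thread; standard tc/iproute2 commands only) is to look at the per-filter
hit counters on the ingress qdisc while the download is running, and to
compare them against the raw interface counters:

  # Show stats for the policing filter attached to the ingress qdisc;
  # if it reports zero packets, the ingress hook is not seeing traffic.
  tc -s filter show dev eth0 parent ffff:

  # Confirm packets really are arriving on eth0 itself, rather than
  # only on one of the other bridge ports.
  ip -s link show dev eth0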
Jaques le Roux
2006-Nov-21 05:38 UTC
Re: Fwd: Traffic Shaping on a Transparent Bridge not working!
I have also tried the Wondershaper script in the past, when first
getting into QoS. This script only really helps with egress shaping,
for those who have DSL lines and a lot of uplink traffic and want to
bring down response times for gaming etc.

Try FairNat <http://freshmeat.net/projects/fairnat/>. It doesn't
support multiple subnets, but since you have separate external IPs it
might just be able to help. I use it (modified), and it is currently
working very well. I am thinking of trying HFSC instead of HTB in the
future, but getting docs that make sense is my current problem...

I hope this helps, since I am also quite new to all this myself. But it
sure is fun fiddling ;-).

Jaques
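For anyone chasing the HFSC idea mentioned above, a minimal skeleton
looks something like the following (a sketch only; it assumes an
HFSC-capable kernel and tc, and is not taken from FairNat or from this
thread):

  # Root HFSC qdisc with one default class.
  tc qdisc add dev eth0 root handle 1: hfsc default 10

  # Guarantee 80kbit to the class (sc), with an upper limit (ul)
  # of 100kbit so it can never exceed the link rate.
  tc class add dev eth0 parent 1: classid 1:10 hfsc \
     sc rate 80kbit ul rate 100kbit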
Andy Furniss
2006-Nov-22 21:57 UTC
Re: Fwd: Traffic Shaping on a Transparent Bridge not working!
drew einhorn wrote:

> RTFM time. The htb section of http://lartc.org/howto/index.html is
> easier reading than the cbq section. And the howto claims htb is
> better anyway. Let's focus on the htb version of wondershaper.

Yes, HTB/HFSC should be better for slow links. Unfortunately
wondershaper is flawed, as noted below. That may not be your problem
here, though.

> Then we start downloading a file to generate some traffic that really
> needs to be shaped.

Shaping from the wrong end of the bottleneck is not nice, and the
slower the link the harder it is. Still, it's better than not shaping
at all (policing, in this case).

> + tc qdisc add dev eth0 root handle 1: htb default 20

It's not a good idea to use "default" on an eth device unless you
explicitly handle arp. IIRC WS was tested on ppp, so I guess that's
why. Not specifying a default lets unclassified traffic through
unshaped, and you can (and do) add a catch-all IP filter for 1:20
later anyway.

> + tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbit burst 6k
> + tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100kbit burst 6k prio 1
> + tc class add dev eth0 parent 1:1 classid 1:20 htb rate 90kbit burst 6k prio 2
> + tc class add dev eth0 parent 1:1 classid 1:30 htb rate 80kbit burst 6k prio 2

Child rates can't add up to more than the parent rate/ceil. I guess the
test case used didn't expose this when WS was published. I would use
something like:

  ... 1:10 htb rate 80kbit ceil 100kbit ...
  ... 1:20 htb rate 15kbit ceil 100kbit ...
  ... 1:30 htb rate 5kbit ceil 100kbit ...
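Spelled out as full commands, the suggested class tree would look
roughly like this (a sketch; the burst, prio and sfq leaf settings are
carried over unchanged from the original script). The guaranteed rates
now sum to exactly the parent's 100kbit, and every class can still
borrow up to the full link via its ceil:

  tc class add dev eth0 parent 1:  classid 1:1  htb rate 100kbit burst 6k
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 80kbit ceil 100kbit burst 6k prio 1
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 15kbit ceil 100kbit burst 6k prio 2
  tc class add dev eth0 parent 1:1 classid 1:30 htb rate 5kbit ceil 100kbit burst 6k prio 2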
> + tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
> + tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
> + tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
> + tc filter add dev eth0 parent 1:0 protocol ip prio 10 u32
>   match ip tos 0x10 0xff flowid 1:10
> + tc filter add dev eth0 parent 1:0 protocol ip prio 10 u32
>   match ip protocol 1 0xff flowid 1:10
> + tc filter add dev eth0 parent 1: protocol ip prio 10 u32
>   match ip protocol 6 0xff match u8 0x05 0x0f at 0
>   match u16 0x0000 0xffc0 at 2 match u8 0x10 0xff at 33 flowid 1:10
> + tc filter add dev eth0 parent 1: protocol ip prio 18 u32
>   match ip dst 0.0.0.0/0 flowid 1:20

That last filter should catch all IP, so the default is not needed.

> + tc qdisc add dev eth0 handle ffff: ingress
> + tc filter add dev eth0 parent ffff: protocol ip prio 50 u32
>   match ip src 0.0.0.0/0 police rate 100kbit burst 10k drop flowid :1

I am surprised this did nothing - at low speeds you may need to back
off a bit more. If I were shaping a 128kbit link I would be tempted to
clamp the MSS and set MTUs lower, as 1500-byte packets have a long
serialization latency at that bitrate. It depends on your requirements,
and I am not sure you can clamp the MSS with this bridge setup.

> + tc -s qdisc ls dev eth0
> qdisc htb 1: r2q 10 default 20 direct_packets_stat 0
>  Sent 18649 bytes 191 pkts (dropped 0, overlimits 0)
> qdisc sfq 10: parent 1:10 limit 128p quantum 1514b perturb 10sec
>  Sent 10582 bytes 147 pkts (dropped 0, overlimits 0)
> qdisc sfq 20: parent 1:20 limit 128p quantum 1514b perturb 10sec
>  Sent 8067 bytes 44 pkts (dropped 0, overlimits 0)
> qdisc sfq 30: parent 1:30 limit 128p quantum 1514b perturb 10sec
>  Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

Looks OK, and we are testing ingress anyway. I would use "limit XX" on
the sfqs, though, as the default of 128 packets is a very long time at
low bitrates.

> qdisc ingress ffff: ----------------
>  Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

0 bytes - something is wrong here. The filter looks OK, but it's not
seeing traffic. I haven't got a 2.4 box; I do have a bridge on a 2.6
box and just tested on eth0 - it works OK with those rules. Counters on
eth0 egress look OK, so I assume all the traffic is IP (tcpdump would
confirm). I wonder if it's something to do with bridging (I don't
understand some behavior of mine) - maybe ingress on eth0 sees a
different ethertype at that point. Try this instead:

  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol arp prio 1 u32 \
     match u32 0 0 flowid :1
  tc filter add dev eth0 parent ffff: protocol all prio 2 u32 \
     match u32 0 0 police rate 100kbit burst 10k drop flowid :2

Aggh - just thought of something else. I'm tempted to delete the above,
but will leave it in case it works. The thing is, 2.4 and 2.6 (in the
default config) use different policers: on 2.4 it hooks after
PREROUTING, and on 2.6 before. Maybe the old policer plus a bridge
isn't going to work for that reason.

Andy.
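For reference, on a routed (non-bridged) box the MSS clamping mentioned
above is usually done with an iptables rule along these lines (a sketch;
as noted, on a transparent bridge the FORWARD chain may never see these
packets unless bridge-netfilter passes bridged traffic to iptables):

  # Rewrite the MSS on forwarded TCP SYNs down to the path MTU, so
  # bulk transfers use smaller packets and queue in smaller units.
  iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
           -j TCPMSS --clamp-mss-to-pmtu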