Hi all. I'm going on my 3rd week trying to get a simple traffic shaping setup to work the right way! My goal is to shape the traffic going from one machine (PC1) to another machine (PC2) through the "eth0" interface. My test configuration is as follows:

PC1:
IP: 192.168.105.237
Mask: 255.255.255.0
OS: Red Hat Linux, kernel 2.4.20-8

Rules:
========================================================================
interface="eth0"

# Delete any previously stored configuration
tc qdisc del dev $interface root

# Create the root qdisc (queueing discipline)
tc qdisc add dev $interface root handle 1: htb default 12

# Definition of the classes
tc class add dev $interface parent 1: classid 1:1 htb rate 100kbit ceil 100kbit
tc class add dev $interface parent 1:1 classid 1:10 htb rate 30kbit ceil 100kbit
tc class add dev $interface parent 1:1 classid 1:11 htb rate 10kbit ceil 100kbit
tc class add dev $interface parent 1:1 classid 1:12 htb rate 60kbit ceil 100kbit

tc qdisc add dev $interface parent 1:10 handle 20: sfq perturb 10
tc qdisc add dev $interface parent 1:11 handle 30: sfq perturb 10
tc qdisc add dev $interface parent 1:12 handle 40: sfq perturb 10

# Definition of the filters
tc filter add dev $interface protocol ip parent 1:0 prio 1 u32 match ip src 192.168.105.237 match ip dport 20000 0xffff flowid 1:10
tc filter add dev $interface protocol ip parent 1:0 prio 1 u32 match ip src 192.168.105.237 match ip dport 20001 0xffff flowid 1:11
========================================================================

PC2:
IP: 192.168.105.211
Mask: 255.255.255.0
OS: Red Hat Linux, kernel 2.4.20-8

I'm using TG (www.postel.org/tg) as a TCP traffic generator, to establish three 90 kbit/s TCP flows from PC1 (any port) to PC2 (ports 20000, 20001 and 20002), with different durations and pause times, as can be seen in the following files:

========================================================================
on 0:2 tcp 192.168.105.211.20000
at 3 setup
at 5 arrival constant 0.0888889 length constant 1000 time 3
wait 3
at 11 arrival constant 0.0888889 length constant 1000 time 15
========================================================================
========================================================================
on 0:2 tcp 192.168.105.211.20001
at 3 setup
at 5 arrival constant 0.0888889 length constant 1000 time 9
wait 3
at 17 arrival constant 0.0888889 length constant 1000 time 6
wait 2
========================================================================
========================================================================
on 0:2 tcp 192.168.105.211.20002
at 3 setup
at 5 arrival constant 0.0888889 length constant 1000 time 15
wait 10
========================================================================

I'm also using the tcptrace program, so that I can get some statistics from a capture file made with Ethereal (capture filter = "ip src 192.168.105.237 && port 20000 || port 20001 || port 20002"), installed on PC2.

Before adding the tc rules to the system, I ran my test setup and got the following output from the Ethereal capture file using the command "tcptrace -l":

================================================================================================================================================
886 packets seen, 886 TCP packets traced
elapsed wallclock time: 0:00:00.024537, 36108 pkts/sec analyzed
trace file elapsed time: 0:00:23.034820
TCP connection info:
3 TCP connections traced:
TCP connection 1:
        host a:        192.168.105.237:36069
        host b:        192.168.105.211:20000
        complete conn: no (SYNs: 1) (FINs: 1)
        first packet:  Sun Oct 16 16:21:21.332143 2005
        last packet:   Sun Oct 16 16:21:44.366963 2005
        elapsed time:  0:00:23.034820
        total packets: 205
        filename:      linksharing.ethereal
   a->b:                               b->a:
     total packets:       205            total packets:         0
     ack pkts sent:       204            ack pkts sent:         0
     pure acks sent:        2            pure acks sent:        0
     sack pkts sent:        0            sack pkts sent:        0
     dsack pkts sent:       0            dsack pkts sent:       0
     max sack blks/ack:     0            max sack blks/ack:     0
     unique bytes sent: 201000           unique bytes sent:     0
     actual data pkts:    201            actual data pkts:      0
     actual data bytes: 201000           actual data bytes:     0
     rexmt data pkts:       0            rexmt data pkts:       0
     rexmt data bytes:      0            rexmt data bytes:      0
     zwnd probe pkts:       0            zwnd probe pkts:       0
     zwnd probe bytes:      0            zwnd probe bytes:      0
     outoforder pkts:       0            outoforder pkts:       0
     pushed data pkts:    200            pushed data pkts:      0
     SYN/FIN pkts sent:   1/1            SYN/FIN pkts sent:   0/0
     req 1323 ws/ts:      Y/Y            req 1323 ws/ts:      N/N
     adv wind scale:        0            adv wind scale:        0
     req sack:              Y            req sack:              N
     sacks sent:            0            sacks sent:            0
     urgent data pkts:      0 pkts       urgent data pkts:      0 pkts
     urgent data bytes:     0 bytes      urgent data bytes:     0 bytes
     mss requested:      1460 bytes      mss requested:         0 bytes
     max segm size:      1448 bytes      max segm size:         0 bytes
     min segm size:       552 bytes      min segm size:         0 bytes
     avg segm size:       999 bytes      avg segm size:         0 bytes
     max win adv:        5840 bytes      max win adv:           0 bytes
     min win adv:        5840 bytes      min win adv:           0 bytes
     zero win adv:          0 times      zero win adv:          0 times
     avg win adv:        5840 bytes      avg win adv:           0 bytes
     initial window:   201000 bytes      initial window:        0 bytes
     initial window:      201 pkts       initial window:        0 pkts
     ttl stream length: 201000 bytes     ttl stream length:    NA
     missed data:           0 bytes      missed data:          NA
     truncated data:        0 bytes      truncated data:        0 bytes
     truncated packets:     0 pkts       truncated packets:     0 pkts
     data xmit time:   20.909 secs       data xmit time:    0.000 secs
     idletime max:     3109.6 ms         idletime max:         NA ms
     throughput:         8726 Bps        throughput:            0 Bps
===============================
TCP connection 2:
        host c:        192.168.105.237:36070
        host d:        192.168.105.211:20001
        complete conn: yes
        first packet:  Sun Oct 16 16:21:21.332520 2005
        last packet:   Sun Oct 16 16:21:41.332416 2005
        elapsed time:  0:00:19.999896
        total packets: 340
        filename:      linksharing.ethereal
   c->d:                               d->c:
     total packets:       172            total packets:       168
     ack pkts sent:       171            ack pkts sent:       168
     pure acks sent:        2            pure acks sent:      166
     sack pkts sent:        0            sack pkts sent:        0
     dsack pkts sent:       0            dsack pkts sent:       0
     max sack blks/ack:     0            max sack blks/ack:     0
     unique bytes sent: 168000           unique bytes sent:     0
     actual data pkts:    168            actual data pkts:      0
     actual data bytes: 168000           actual data bytes:     0
     rexmt data pkts:       0            rexmt data pkts:       0
     rexmt data bytes:      0            rexmt data bytes:      0
     zwnd probe pkts:       0            zwnd probe pkts:       0
     zwnd probe bytes:      0            zwnd probe bytes:      0
     outoforder pkts:       0            outoforder pkts:       0
     pushed data pkts:    166            pushed data pkts:      0
     SYN/FIN pkts sent:   1/1            SYN/FIN pkts sent:   1/1
     req 1323 ws/ts:      Y/Y            req 1323 ws/ts:      Y/Y
     adv wind scale:        0            adv wind scale:        0
     req sack:              Y            req sack:              Y
     sacks sent:            0            sacks sent:            0
     urgent data pkts:      0 pkts       urgent data pkts:      0 pkts
     urgent data bytes:     0 bytes      urgent data bytes:     0 bytes
     mss requested:      1460 bytes      mss requested:      1460 bytes
     max segm size:      1448 bytes      max segm size:         0 bytes
     min segm size:       552 bytes      min segm size:         0 bytes
     avg segm size:       999 bytes      avg segm size:         0 bytes
     max win adv:        5840 bytes      max win adv:       64000 bytes
     min win adv:        5840 bytes      min win adv:        8000 bytes
     zero win adv:          0 times      zero win adv:          0 times
     avg win adv:        5840 bytes      avg win adv:       60610 bytes
     initial window:     1000 bytes      initial window:        0 bytes
     initial window:        1 pkts       initial window:        0 pkts
     ttl stream length: 168000 bytes     ttl stream length:     0 bytes
     missed data:           0 bytes      missed data:           0 bytes
     truncated data:        0 bytes      truncated data:        0 bytes
     truncated packets:     0 pkts       truncated packets:     0 pkts
     data xmit time:   17.847 secs       data xmit time:    0.000 secs
     idletime max:     3104.7 ms         idletime max:     3019.2 ms
     throughput:         8400 Bps        throughput:            0 Bps
===============================
TCP connection 3:
        host e:        192.168.105.237:36071
        host f:        192.168.105.211:20002
        complete conn: yes
        first packet:  Sun Oct 16 16:21:21.332771 2005
        last packet:   Sun Oct 16 16:21:38.378520 2005
        elapsed time:  0:00:17.045748
        total packets: 341
        filename:      linksharing.ethereal
   e->f:                               f->e:
     total packets:       172            total packets:       169
     ack pkts sent:       171            ack pkts sent:       169
     pure acks sent:        2            pure acks sent:      167
     sack pkts sent:        0            sack pkts sent:        0
     dsack pkts sent:       0            dsack pkts sent:       0
     max sack blks/ack:     0            max sack blks/ack:     0
     unique bytes sent: 168000           unique bytes sent:     0
     actual data pkts:    168            actual data pkts:      0
     actual data bytes: 168000           actual data bytes:     0
     rexmt data pkts:       0            rexmt data pkts:       0
     rexmt data bytes:      0            rexmt data bytes:      0
     zwnd probe pkts:       0            zwnd probe pkts:       0
     zwnd probe bytes:      0            zwnd probe bytes:      0
     outoforder pkts:       0            outoforder pkts:       0
     pushed data pkts:    167            pushed data pkts:      0
     SYN/FIN pkts sent:   1/1            SYN/FIN pkts sent:   1/1
     req 1323 ws/ts:      Y/Y            req 1323 ws/ts:      Y/Y
     adv wind scale:        0            adv wind scale:        0
     req sack:              Y            req sack:              Y
     sacks sent:            0            sacks sent:            0
     urgent data pkts:      0 pkts       urgent data pkts:      0 pkts
     urgent data bytes:     0 bytes      urgent data bytes:     0 bytes
     mss requested:      1460 bytes      mss requested:      1460 bytes
     max segm size:      1448 bytes      max segm size:         0 bytes
     min segm size:       552 bytes      min segm size:         0 bytes
     avg segm size:       999 bytes      avg segm size:         0 bytes
     max win adv:        5840 bytes      max win adv:       64000 bytes
     min win adv:        5840 bytes      min win adv:        8000 bytes
     zero win adv:          0 times      zero win adv:          0 times
     avg win adv:        5840 bytes      avg win adv:       60630 bytes
     initial window:     1000 bytes      initial window:        0 bytes
     initial window:        1 pkts       initial window:        0 pkts
     ttl stream length: 168000 bytes     ttl stream length:     0 bytes
     missed data:           0 bytes      missed data:           0 bytes
     truncated data:        0 bytes      truncated data:        0 bytes
     truncated packets:     0 pkts       truncated packets:     0 pkts
     data xmit time:   14.919 secs       data xmit time:    0.000 secs
     idletime max:     2112.8 ms         idletime max:     2113.0 ms
     throughput:         9856 Bps        throughput:            0 Bps
================================================================================================================================================

After adding the above tc rules I ran the configuration again, and I can see that the number of captured packets has decreased a lot (almost by half)!
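As a side note, the 90 kbit/s per-flow figure quoted above follows directly from the parameters in the TG files (a quick sketch, reusing the `arrival constant` and `length constant` values):

```python
# Each TG flow sends a constant 1000-byte write every 0.0888889 s,
# which works out to the 90 kbit/s per flow quoted in the text.
interval_s = 0.0888889   # "arrival constant 0.0888889"
length_bytes = 1000      # "length constant 1000"
rate_kbit = length_bytes * 8 / interval_s / 1000
print(round(rate_kbit))  # 90
```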
So I typed the following tc command to find out what was effectively sent from PC1 to PC2, "tc -d -s class show dev eth0", and got the following output:

========================================================================
class htb 1:11 parent 1:1 leaf 30: prio 0 quantum 1000 rate 10Kbit ceil 100Kbit burst 1611b/8 mpu 0b cburst 1727b/8 mpu 0b level 0
 Sent 42108 bytes 34 pkts (dropped 0, overlimits 0)
 lended: 30 borrowed: 4 giants: 0
 tokens: -758272 ctokens: 106496

class htb 1:1 root rate 100Kbit ceil 100Kbit burst 1727b/8 mpu 0b cburst 1727b/8 mpu 0b level 7
 Sent 370941 bytes 295 pkts (dropped 0, overlimits 0)
 lended: 12 borrowed: 0 giants: 0
 tokens: 108032 ctokens: 108032

class htb 1:10 parent 1:1 leaf 20: prio 0 quantum 1000 rate 30Kbit ceil 100Kbit burst 1637b/8 mpu 0b cburst 1727b/8 mpu 0b level 0
 Sent 122770 bytes 89 pkts (dropped 0, overlimits 0)
 lended: 81 borrowed: 8 giants: 0
 tokens: -415481 ctokens: 100352

class htb 1:12 parent 1:1 leaf 40: prio 0 quantum 1000 rate 60Kbit ceil 100Kbit burst 1675b/8 mpu 0b cburst 1727b/8 mpu 0b level 0
 Sent 206063 bytes 172 pkts (dropped 0, overlimits 0)
 lended: 172 borrowed: 0 giants: 0
 tokens: 174507 ctokens: 108032
========================================================================

So we can see that before the tc rules I had about 537 kbytes transferred (201k + 168k + 168k), but after applying the tc rules I only got a total of 370,941 bytes! Where have they gone?

I've also plotted a graph with gnuplot, which shows me that (somehow) my rules were "correct": I got average values of 30 kbit/s, 10 kbit/s and 60 kbit/s.

Is it normal that some packets get dropped by the rules or not (taking into account my test configuration)?

This is the last research topic of my master thesis (www.fe.up.pt/si/teses_posgrad.tese?p_sigla=MEEC&P_ALU_NUMERO=030553029), and it is the only thing that has kept me from finishing it!

Thanks a lot for your answers! Best regards.
Paulo Augusto

_______________________________________________
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
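Incidentally, the byte counters in the `tc -d -s class show dev eth0` dump above are internally consistent: the root class's `Sent` counter equals the sum of its three leaves, and every class shows `dropped 0`, so HTB itself discarded nothing. A quick check using the numbers from the dump:

```python
# "Sent" byte counters from the tc class dump above:
# root class 1:1 reports exactly the sum of its three leaf classes,
# and all classes show "dropped 0" -- HTB discarded nothing.
sent = {"1:10": 122770, "1:11": 42108, "1:12": 206063}
root_sent = 370941
assert sum(sent.values()) == root_sent
print(root_sent)  # 370941
```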
Andy Furniss
2005-Oct-17 21:29 UTC
Re: Lost packets and strange "behaviour" of my TC rules
Paulo Augusto wrote:
> I'm using TG (www.postel.org/tg) as a TCP traffic generator, to
> establish three 90 kbit/s TCP flows from PC1 (any port) to PC2 (ports
> 20000, 20001 and 20002), with different durations and pause times, as
> can be seen in the next files:

Usually tcp won't be like this, netperf may be better to test with, as it's more normal for bulk tcp to try to go as fast as it can.

> I've also plotted a graph with gnuplot, which shows me that (somehow)
> my rules were "correct": I got average values of 30 kbit/s, 10 kbit/s
> and 60 kbit/s.
>
> Is it normal that some packets get dropped by the rules or not (taking
> into account my test configuration)?

The packets are not dropped in this case, as the default queue length for sfq is 128, which can hold a rwin's worth of data. The missing packets just didn't get sent: htb slowed the packets down to the rates specified, and the sender will only send more once the ones already sent are acked.

Andy.
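Andy's explanation can be sanity-checked with a little arithmetic on the numbers from the original post (a sketch; I use k = 1000 below, while tc itself treats the "k" in "100kbit" as 1024, so the real capacity is slightly higher):

```python
# Capacity check: with a ~100 kbit/s ceiling on eth0, the ~23 s trace
# window simply cannot carry the 537 kB the three TG flows offered.
ceil_bps = 100 * 1000                      # HTB root ceil, bits per second
trace_s = 23                               # unshaped trace duration (approx.)
capacity_bytes = ceil_bps / 8 * trace_s
offered_bytes = 201000 + 168000 + 168000   # "unique bytes sent" from tcptrace
print(capacity_bytes)                                 # 287500.0 bytes at most
print(round(offered_bytes * 8 / trace_s / 1000, 1))   # 186.8 kbit/s offered vs 100 allowed
```

So the shortfall is expected: TCP's self-clocking holds the senders back to what the shaper lets through, and the "missing" bytes were never transmitted at all.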
Paulo Augusto
2005-Oct-23 20:09 UTC
Re: Lost packets and strange "behaviour" of my TC rules
Hi Andy. Thanks a lot for your help! Finally I'm seeing something working the right way... ;)

I've tried another sender/receiver program (I find netperf a little difficult to operate), so I've tried iperf. Now I'm getting all the flows with the correct limitation for their corresponding class. But now I'm ONLY getting the RATE limitation, not the CEIL one! :(

I've made a smaller test configuration, with the same 2 PCs connected the same way, and added the following rules:

=====================================================
#interface="eth0"
#interface="lo"
interface="ppp0"

# Delete any previously stored configuration
tc qdisc del dev $interface root

# Create the root qdisc (queueing discipline)
tc qdisc add dev $interface root handle 1: htb default 20

# Definition of the classes
tc class add dev $interface parent 1: classid 1:1 htb rate 28kbit ceil 28kbit
tc class add dev $interface parent 1:2 classid 1:20 htb rate 10kbit ceil 28kbit prio 0
tc class add dev $interface parent 1:3 classid 1:30 htb rate 15kbit ceil 28kbit prio 1

tc qdisc add dev $interface parent 1:20 handle 30: sfq
tc qdisc add dev $interface parent 1:30 handle 40: sfq

# Definition of the filters
tc filter add dev $interface protocol ip parent 1:0 u32 match ip dport 20001 0xffff flowid 1:20
tc filter add dev $interface protocol ip parent 1:0 u32 match ip dport 20002 0xffff flowid 1:30
=========================================================

This is my current version of HTB:

=========================================================
[root@EdenRH9 htb]# tc qdisc add htb help
What is "help"?
Usage: ... qdisc add ... htb [default N] [r2q N]
 default  minor id of class to which unclassified packets are sent {0}
 r2q      DRR quantums are computed as rate in Bps/r2q {10}
 debug    string of 16 numbers each 0-3 {0}
... class add ... htb rate R1 burst B1 [prio P] [slot S] [pslot PS]
                      [ceil R2] [cburst B2] [mtu MTU] [quantum Q]
 rate     rate allocated to this class (class can still borrow)
 burst    max bytes burst which can be accumulated during idle period {computed}
 ceil     definite upper class rate (no borrows) {rate}
 cburst   burst but for ceil {computed}
 mtu      max packet size we create rate map for {1600}
 prio     priority of leaf; lower are served first {0}
 quantum  how much bytes to serve from leaf at once {use r2q}
TC HTB version 3.3
[root@EdenRH9 htb]#
=========================================================

When I did a test run with the iperf traffic generator, using the following command, I got these results:

=========================================================
[root@EdenRH9 htb]# iperf -c 192.168.7.100 -p 20001 -n 1 -l 50000
------------------------------------------------------------
Client connecting to 192.168.7.100, TCP port 20001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.7.200 port 35550 connected with 192.168.7.100 port 20001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-37.4 sec   48.8 KBytes  10.7 Kbits/sec
[root@EdenRH9 htb]#
=========================================================

You can already see that the rate didn't go up to the 28 kbit/s ceiling, even though there was no other traffic on that interface or in any other class.
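The bandwidth iperf reports above can be reproduced from its own Transfer/Interval columns (a sketch; the 1024-for-KBytes and 1000-for-Kbits unit conventions are my assumption about iperf's formatting):

```python
# Reconstruct iperf's "10.7 Kbits/sec" from its own output:
# 48.8 KBytes transferred over a 37.4 s interval.
transfer_bytes = 48.8 * 1024   # iperf "K" for bytes is 1024
interval_s = 37.4
rate_kbit = transfer_bytes * 8 / interval_s / 1000   # "K" for bits is 1000
print(round(rate_kbit, 1))  # 10.7 -- pinned at the 10 kbit class rate
```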
So I printed the output of the tc command for its QDISC, CLASS and FILTER, before the test run and after it, and got the following answers:

=========================================================
[root@EdenRH9 htb]# tc -s -d class show dev ppp0
class htb 1:1 root prio 0 quantum 1000 rate 28Kbit ceil 28Kbit burst 1634b/8 mpu 0b cburst 1634b/8 mpu 0b level 0
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 373714 ctokens: 373714

class htb 1:20 root leaf 30: prio 0 quantum 1000 rate 10Kbit ceil 28Kbit burst 1611b/8 mpu 0b cburst 1634b/8 mpu 0b level 0
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 1031680 ctokens: 373714

class htb 1:30 root leaf 40: prio 1 quantum 1000 rate 15Kbit ceil 28Kbit burst 1618b/8 mpu 0b cburst 1634b/8 mpu 0b level 0
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 690773 ctokens: 373714

[root@EdenRH9 htb]# tc -s -d filter show dev ppp0
filter parent 1: protocol ip pref 49151 u32
filter parent 1: protocol ip pref 49151 u32 fh 801: ht divisor 1
filter parent 1: protocol ip pref 49151 u32 fh 801::800 order 2048 key ht 801 bkt 0 flowid 1:30
  match 00004e22/0000ffff at 20
filter parent 1: protocol ip pref 49151 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 49151 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:20
  match 00004e21/0000ffff at 20
filter parent 1: protocol ip pref 49152 u32
filter parent 1: protocol ip pref 49152 u32 fh 801: ht divisor 1
filter parent 1: protocol ip pref 49152 u32 fh 801::800 order 2048 key ht 801 bkt 0 flowid 1:30
  match 00004e22/0000ffff at 20
filter parent 1: protocol ip pref 49152 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 49152 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:20
  match 00004e21/0000ffff at 20

[root@EdenRH9 htb]# tc -s -d qdisc show dev ppp0
qdisc sfq 40: quantum 1500b limit 128p flows 128/1024
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc sfq 30: quantum 1500b limit 128p flows 128/1024
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc htb 1: r2q 10 default 20 direct_packets_stat 0 ver 3.7
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
[root@EdenRH9 htb]#

[root@EdenRH9 htb]# tc -s -d qdisc show dev ppp0
qdisc sfq 40: quantum 1500b limit 128p flows 128/1024
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
qdisc sfq 30: quantum 1500b limit 128p flows 128/1024
 Sent 52060 bytes 39 pkts (dropped 0, overlimits 0)
qdisc htb 1: r2q 10 default 20 direct_packets_stat 0 ver 3.7
 Sent 52060 bytes 39 pkts (dropped 0, overlimits 65)

[root@EdenRH9 htb]# tc -s -d filter show dev ppp0
filter parent 1: protocol ip pref 49151 u32
filter parent 1: protocol ip pref 49151 u32 fh 801: ht divisor 1
filter parent 1: protocol ip pref 49151 u32 fh 801::800 order 2048 key ht 801 bkt 0 flowid 1:30
  match 00004e22/0000ffff at 20
filter parent 1: protocol ip pref 49151 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 49151 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:20
  match 00004e21/0000ffff at 20
filter parent 1: protocol ip pref 49152 u32
filter parent 1: protocol ip pref 49152 u32 fh 801: ht divisor 1
filter parent 1: protocol ip pref 49152 u32 fh 801::800 order 2048 key ht 801 bkt 0 flowid 1:30
  match 00004e22/0000ffff at 20
filter parent 1: protocol ip pref 49152 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 49152 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:20
  match 00004e21/0000ffff at 20

[root@EdenRH9 htb]# tc -s -d class show dev ppp0
class htb 1:1 root prio 0 quantum 1000 rate 28Kbit ceil 28Kbit burst 1634b/8 mpu 0b cburst 1634b/8 mpu 0b level 0
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 373714 ctokens: 373714

class htb 1:20 root leaf 30: prio 0 quantum 1000 rate 10Kbit ceil 28Kbit burst 1611b/8 mpu 0b cburst 1634b/8 mpu 0b level 0
 Sent 52060 bytes 39 pkts (dropped 0, overlimits 0) rate 131bps
 lended: 39 borrowed: 0 giants: 0
 tokens: -25088 ctokens: 362744

class htb 1:30 root leaf 40: prio 1 quantum 1000 rate 15Kbit ceil 28Kbit burst 1618b/8 mpu 0b cburst 1634b/8 mpu 0b level 0
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 690773 ctokens: 373714
[root@EdenRH9 htb]#
=========================================================

I also made a printout of the tcptrace output, captured on the other PC with Ethereal, with the following content:

===============================================
1 arg remaining, starting with '/opt/teste_regras20001.ethereal'
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov 4, 2004

76 packets seen, 76 TCP packets traced
elapsed wallclock time: 0:00:00.006178, 12301 pkts/sec analyzed
trace file elapsed time: 0:00:39.289997
TCP connection info:
1 TCP connection traced:
TCP connection 1:
        host a:        192.168.7.200:35550
        host b:        192.168.7.100:20001
        complete conn: yes
        first packet:  Sun Oct 23 20:26:51.304505 2005
        last packet:   Sun Oct 23 20:27:30.594503 2005
        elapsed time:  0:00:39.289997
        total packets: 76
        filename:      /opt/teste_regras20001.ethereal
   a->b:                               b->a:
     total packets:        39            total packets:        37
     ack pkts sent:        38            ack pkts sent:        37
     pure acks sent:        2            pure acks sent:       35
     sack pkts sent:        0            sack pkts sent:        0
     dsack pkts sent:       0            dsack pkts sent:       0
     max sack blks/ack:     0            max sack blks/ack:     0
     unique bytes sent: 50024            unique bytes sent:     0
     actual data pkts:     36            actual data pkts:      0
     actual data bytes: 50024            actual data bytes:     0
     rexmt data pkts:       0            rexmt data pkts:       0
     rexmt data bytes:      0            rexmt data bytes:      0
     zwnd probe pkts:       0            zwnd probe pkts:       0
     zwnd probe bytes:      0            zwnd probe bytes:      0
     outoforder pkts:       0            outoforder pkts:       0
     pushed data pkts:     10            pushed data pkts:      0
     SYN/FIN pkts sent:   1/1            SYN/FIN pkts sent:   1/1
     req 1323 ws/ts:      Y/Y            req 1323 ws/ts:      Y/Y
     adv wind scale:        0            adv wind scale:        0
     req sack:              Y            req sack:              Y
     sacks sent:            0            sacks sent:            0
     urgent data pkts:      0 pkts       urgent data pkts:      0 pkts
     urgent data bytes:     0 bytes      urgent data bytes:     0 bytes
     mss requested:      1460 bytes      mss requested:      1460 bytes
     max segm size:      1448 bytes      max segm size:         0 bytes
     min segm size:        24 bytes      min segm size:         0 bytes
     avg segm size:      1389 bytes      avg segm size:         0 bytes
     max win adv:        5840 bytes      max win adv:       63712 bytes
     min win adv:        5840 bytes      min win adv:        5792 bytes
     zero win adv:          0 times      zero win adv:          0 times
     avg win adv:        5840 bytes      avg win adv:       46818 bytes
     initial window:       24 bytes      initial window:        0 bytes
     initial window:        1 pkts       initial window:        0 pkts
     ttl stream length: 50024 bytes      ttl stream length:     0 bytes
     missed data:           0 bytes      missed data:           0 bytes
     truncated data:        0 bytes      truncated data:        0 bytes
     truncated packets:     0 pkts       truncated packets:     0 pkts
     data xmit time:   37.430 secs       data xmit time:    0.000 secs
     idletime max:     2370.0 ms         idletime max:     2374.2 ms
     throughput:         1273 Bps        throughput:            0 Bps
============================================================

Why is only the RATE being taken into account for the bandwidth limitation? Why does HTB not honour the CEIL of class 1:20 and borrow tokens from the other class (1:30), which is not being utilized?

Can I still have a problem with software versions (Linux, HTB or tc)? Can there be a problem with correct shaping in HTB at small bitrates?

Thanks in advance. Best regards.
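For what it's worth, the u32 entries in the `tc -s -d filter show` dump above do match the intended ports: the hex keys are the destination ports as a raw 16-bit field at byte offset 20 of the IP packet (the start of the TCP header is source port then destination port). A quick decode:

```python
# Decode the u32 keys shown by "tc -s -d filter show dev ppp0":
#   match 00004e21/0000ffff at 20
# Byte offset 20 of the IP packet is the start of the TCP header
# (src port, dst port), so the masked low 16 bits are the dst port.
for key, expected in ((0x00004e21, 20001), (0x00004e22, 20002)):
    port = key & 0x0000ffff
    assert port == expected
    print(port)  # 20001, then 20002
```

So the classification itself looks fine; the filters are steering the right ports.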
Paulo Augusto

From: Andy Furniss
Reply-To: andy.furniss@dsl.pipex.com
To: Paulo Augusto
CC: lartc@mailman.ds9a.nl
Subject: Re: [LARTC] Lost packets and strange "behaviour" of my TC rules
Date: Mon, 17 Oct 2005 22:29:12 +0100

>Paulo Augusto wrote:
>
>>I'm using TG (www.postel.org/tg) as a TCP traffic generator, to
>>establish three 90 kbit/s TCP flows from PC1 (any port) to PC2 (ports
>>20000, 20001 and 20002), with different durations and pause times, as
>>can be seen in the next files:
>
>Usually tcp won't be like this, netperf may be better to test with,
>as it's more normal for bulk tcp to try to go as fast as it can.
>
>>I've also plotted a graph with gnuplot, showing me that (somehow)
>>my rules were "correct": I got average values of 30 kbit/s,
>>10 kbit/s and 60 kbit/s.
>>
>>Is it normal that some packets get dropped by the rules or not
>>(taking into account my test configuration)?
>
>The packets are not dropped in this case as the default queue length
>for sfq is 128, which can hold a rwin worth of data. The missing
>packets just didn't get sent because htb slowed the packets down to
>the rates specified and the sender will only send more once the ones
>already sent are acked.
>
>Andy.