Hi. The configuration script is at the bottom. My configuration looks similar to this:

        imq0
         |
    1:1 (12mbit)
         |
    1:2 (10mbit)
         |
        3:0
         |
    3:1 (256kbit)
       /     \
     3:2     3:3
    icmp    (rest)

Both 3:2 and 3:3 have rate 1kbit ceil 256kbit. The icmp class has prio 1 (better), the "rest" class has prio 2 (worse).

I tested the script with a heavy download from machine 192.168.1.2 running in the background. While the download was going I pinged 192.168.1.1. Everything works fine, except the pings... They should be low, and they almost are :) Pings are fine (0.4 ms) for about 120 seconds, then I get pings similar to this:

64 bytes from 192.168.1.1: icmp_seq=90 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=91 ttl=64 time=0.4 ms
64 bytes from 192.168.1.1: icmp_seq=92 ttl=64 time=84.0 ms
64 bytes from 192.168.1.1: icmp_seq=93 ttl=64 time=23.0 ms
64 bytes from 192.168.1.1: icmp_seq=94 ttl=64 time=57.0 ms
64 bytes from 192.168.1.1: icmp_seq=95 ttl=64 time=89.0 ms
64 bytes from 192.168.1.1: icmp_seq=96 ttl=64 time=28.0 ms
64 bytes from 192.168.1.1: icmp_seq=97 ttl=64 time=61.0 ms
64 bytes from 192.168.1.1: icmp_seq=98 ttl=64 time=93.0 ms
64 bytes from 192.168.1.1: icmp_seq=99 ttl=64 time=32.0 ms
64 bytes from 192.168.1.1: icmp_seq=100 ttl=64 time=65.0 ms
64 bytes from 192.168.1.1: icmp_seq=101 ttl=64 time=4.0 ms
64 bytes from 192.168.1.1: icmp_seq=102 ttl=64 time=36.0 ms
64 bytes from 192.168.1.1: icmp_seq=103 ttl=64 time=69.0 ms
64 bytes from 192.168.1.1: icmp_seq=104 ttl=64 time=8.0 ms
64 bytes from 192.168.1.1: icmp_seq=105 ttl=64 time=40.0 ms
64 bytes from 192.168.1.1: icmp_seq=106 ttl=64 time=73.0 ms
64 bytes from 192.168.1.1: icmp_seq=107 ttl=64 time=12.0 ms
64 bytes from 192.168.1.1: icmp_seq=108 ttl=64 time=0.3 ms
64 bytes from 192.168.1.1: icmp_seq=109 ttl=64 time=0.3 ms

Then it is stable (0.4 ms) for some time. The same situation repeats at roughly 60-second intervals.

Maybe someone has solved this problem? Thanks for any advice.

Yours sincerely
Maciek

PS. Please don't ask me why this configuration is so strange, I have my reasons for using schemes like this one.
PPS. Is it possible to create a filter that will match all packets?

##### script
iptables -t mangle -F PREROUTING
iptables -t mangle -F POSTROUTING
iptables -t mangle -A PREROUTING -i ! lo -j IMQ --todev 0
iptables -t mangle -A POSTROUTING -o ! lo -j IMQ --todev 1

tc qdisc del root dev imq0
tc qdisc del root dev imq1

tc qdisc add dev imq0 root handle 1 htb default 2
tc qdisc add dev imq1 root handle 1 htb default 2

tc class add dev imq0 parent 1:0 classid 1:1 htb rate 12mbit burst 2k prio 1 quantum 2048
tc class add dev imq1 parent 1:0 classid 1:1 htb rate 12mbit burst 2k prio 1 quantum 2048
tc class add dev imq0 parent 1:1 classid 1:2 htb rate 10mbit burst 2k prio 1 quantum 2048
tc class add dev imq1 parent 1:1 classid 1:2 htb rate 10mbit burst 2k prio 1 quantum 2048

tc qdisc add dev imq0 parent 1:2 handle 3 htb default 3
tc qdisc add dev imq1 parent 1:2 handle 3 htb default 3

tc class add dev imq0 parent 3:0 classid 3:1 htb rate 256kbit burst 2k prio 1 quantum 2048
tc class add dev imq1 parent 3:0 classid 3:1 htb rate 64kbit burst 2k prio 1 quantum 2048
tc class add dev imq0 parent 3:1 classid 3:2 htb rate 1kbit ceil 256kbit burst 2k prio 1 quantum 2048
tc class add dev imq1 parent 3:1 classid 3:2 htb rate 1kbit ceil 64kbit burst 2k prio 1 quantum 2048
tc class add dev imq0 parent 3:1 classid 3:3 htb rate 1kbit ceil 256kbit burst 2k prio 2 quantum 2048
tc class add dev imq1 parent 3:1 classid 3:3 htb rate 1kbit ceil 64kbit burst 2k prio 2 quantum 2048

tc qdisc add dev imq0 parent 3:2 handle 12:0 pfifo limit 4
tc qdisc add dev imq1 parent 3:2 handle 12:0 pfifo limit 4
tc qdisc add dev imq0 parent 3:3 handle 13:0 pfifo limit 4
tc qdisc add dev imq1 parent 3:3 handle 13:0 pfifo limit 4

tc filter add dev imq0 protocol ip parent 3:0 prio 1 u32 match ip protocol 1 0xFF flowid 3:2
tc filter add dev imq1 protocol ip parent 3:0 prio 1 u32 match ip protocol 1 0xFF flowid 3:2
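For reference, the class and qdisc counters can be watched while testing, to confirm that ICMP really lands in 3:2 and the bulk download in 3:3 (a minimal sketch, assuming the script above has been loaded on imq0):

# per-class byte/packet counters and token state; 3:2 should only grow while
# pinging, 3:3 should carry the download
tc -s class show dev imq0
# per-qdisc stats, including drops in the two pfifo limit-4 leaves
tc -s qdisc show dev imq0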
tc@forest.one.pl wrote:

> 64 bytes from 192.168.1.1: icmp_seq=95 ttl=64 time=89.0 ms

Which is about 2 packets @ 256kbit. I tested and got the same behaviour with a simple setup, but max ping was about 45 ms, because HTB dequeues in pairs by default. If you change #define hysteresis from 1 to 0 in net/sched/sch_htb.c then it's more accurate. Setting quantum to your MTU may also help.

> Then it is stable (0.4 ms) for some time.
>
> The same situation repeats at roughly 60-second intervals.
>
> Maybe someone has solved this problem?

Give the interactive class more rate than it needs. Latency was OK for me with the ICMP class at rate 255kbit ceil 256kbit.

> PPS. Is it possible to create a filter that will match all packets?

I don't know about "all", but all per protocol, like:

.. protocol ip prio 10 u32 match u32 0 0 ..
.. protocol arp prio 11 u32 match u32 0 0 ..

Andy.
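Spelled out, the two tweaks could look like this (only a sketch, assuming a 2.6-era sch_htb.c where the define is written as HTB_HYSTERESIS, and a 1500-byte MTU; edit the define and rebuild/reload the scheduler, then bump the leaf quantum):

# in net/sched/sch_htb.c, change "#define HTB_HYSTERESIS 1" to 0 and rebuild sch_htb,
# then set quantum on the two leaf classes to the MTU instead of 2048
tc class change dev imq0 parent 3:1 classid 3:2 htb rate 1kbit ceil 256kbit burst 2k prio 1 quantum 1500
tc class change dev imq0 parent 3:1 classid 3:3 htb rate 1kbit ceil 256kbit burst 2k prio 2 quantum 1500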
On Friday 22 of April 2005 01:34, you wrote:

> Which is about 2 packets @ 256kbit. I tested and got the same behaviour
> with a simple setup, but max ping was about 45 ms, because HTB dequeues in
> pairs by default. If you change #define hysteresis from 1 to 0 in
> net/sched/sch_htb.c then it's more accurate. Setting quantum to your MTU
> may also help.
>
> .. protocol ip prio 10 u32 match u32 0 0 ..
> .. protocol arp prio 11 u32 match u32 0 0 ..
>
> Andy.

Thank you Andy. I'll try your tips in a few days.

Yours sincerely
Maciek
On Friday 22 of April 2005 01:34, you wrote:

> tc@forest.one.pl wrote:
>
> > 64 bytes from 192.168.1.1: icmp_seq=95 ttl=64 time=89.0 ms
>
> Which is about 2 packets @ 256kbit. I tested and got the same behaviour
> with a simple setup, but max ping was about 45 ms, because HTB dequeues in
> pairs by default. If you change #define hysteresis from 1 to 0 in
> net/sched/sch_htb.c then it's more accurate. Setting quantum to your MTU
> may also help.

OK. I changed hysteresis and I changed quantum.

> > Then it is stable (0.4 ms) for some time.
> >
> > The same situation repeats at roughly 60-second intervals.
> >
> > Maybe someone has solved this problem?
>
> Give the interactive class more rate than it needs. Latency was OK for me
> with the ICMP class at rate 255kbit ceil 256kbit.

I can't give 255 kbit to ICMP traffic :) But for me 4 kbit seems to be enough. In the test environment it works fine, but in the "real" configuration the pings are still too long. I wonder if a one-level configuration (with just one root qdisc) would give better latency.

> > PPS. Is it possible to create a filter that will match all packets?
>
> I don't know about "all", but all per protocol, like:
>
> .. protocol ip prio 10 u32 match u32 0 0 ..
> .. protocol arp prio 11 u32 match u32 0 0 ..

It works OK for me, but only with prio 0.

Thanks for the help.
Maciek.
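A one-level variant of the test setup could look like this (only a sketch; the 4kbit/252kbit rates for the two leaves are illustrative, not taken from the original script):

tc qdisc del root dev imq0
tc qdisc add dev imq0 root handle 1 htb default 3
# single 256kbit bottleneck class directly under the root
tc class add dev imq0 parent 1:0 classid 1:1 htb rate 256kbit burst 2k
# icmp leaf (prio 1) and bulk leaf (prio 2) hang straight off it
tc class add dev imq0 parent 1:1 classid 1:2 htb rate 4kbit ceil 256kbit burst 2k prio 1
tc class add dev imq0 parent 1:1 classid 1:3 htb rate 252kbit ceil 256kbit burst 2k prio 2
tc filter add dev imq0 protocol ip parent 1:0 prio 1 u32 match ip protocol 1 0xFF flowid 1:2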
tc@forest.one.pl wrote:

> I can't give 255 kbit to ICMP traffic :) But for me 4 kbit seems to be enough.
> In the test environment it works fine, but in the "real" configuration the
> pings are still too long.
> I wonder if a one-level configuration (with just one root qdisc) would give
> better latency.

I don't know - but I tested with just one root.

> > > PPS. Is it possible to create a filter that will match all packets?
> >
> > I don't know about "all", but all per protocol, like:
> >
> > .. protocol ip prio 10 u32 match u32 0 0 ..
> > .. protocol arp prio 11 u32 match u32 0 0 ..
>
> It works OK for me, but only with prio 0.

FWIW, prio 1 is the highest for filters; 0 is the highest for HTB classes. If you set 0 you end up with a really high pref - have a look with tc -s filter ls dev ....

Which prio to use depends on what you want. The 10 and 11 are meaningless really unless you have other filters and order matters, so the match-all becomes "match all the packets that didn't match a higher-prio filter".

Andy.
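For example (a sketch, assuming the filters from the earlier script, attached at parent 3:0 on imq0):

# list the filters on the inner htb with statistics; the "pref" column shows
# what priority was actually assigned to each filter
tc -s filter show dev imq0 parent 3:0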
Andy Furniss wrote:

> > PPS. Is it possible to create a filter that will match all packets?
>
> I don't know about "all", but all per protocol, like:
>
> .. protocol ip prio 10 u32 match u32 0 0 ..
> .. protocol arp prio 11 u32 match u32 0 0 ..

You can use

.. protocol all prio 1 u32 match u32 0 0 ..

For some reason it gave an error when I first tried it - I must have made a mistake, as I have just run a script which uses it and it's OK.

Andy
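Written out as a full command for the setup in this thread (a sketch only - the parent, prio and flowid here are illustrative):

# catch every packet, of any protocol, that no lower-pref filter matched
# and send it to the "rest" class
tc filter add dev imq0 protocol all parent 3:0 prio 2 u32 match u32 0 0 flowid 3:3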