Hi all,

I've got problems with tc qdisc ingress.
I'm using a vanilla 2.6.14.4 kernel patched with
http://www.ssi.bg/~ja/routes-2.6.14-12.diff, and iproute2-2.6.14-051107.

I am using ingress to limit incoming traffic
(DEV is eth1 / DOWNLINK is 7700):

# attach ingress policer:
tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:
tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1

This does limit traffic, but to ~32KB/s !!

# tc -s qdisc show dev eth1
[...]
qdisc ingress ffff: ----------------
 Sent 37001411 bytes 51120 pkt (dropped 3422, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

Is it normal to have dropped packets without overlimits?

Could it be related to CPU performance (overload)? I'm using a WRAP2
board (Geode SC1100 at 266 MHz).
Running top during a big download, it appears the CPU is 95% idle...

Thanks

Laurent Haond
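A minimal sketch (assuming the same eth1 device as above) of how the policer's own statistics can be inspected; the per-filter counters sometimes show overlimits and drops that the qdisc summary does not:

# show per-filter statistics for the ingress policer attached above
tc -s filter show dev eth1 parent ffff: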
Laurent Haond wrote:
> Hi all,
>
> I've got problems with tc qdisc ingress.
> I'm using a vanilla 2.6.14.4 kernel patched with
> http://www.ssi.bg/~ja/routes-2.6.14-12.diff, and iproute2-2.6.14-051107.
>
> [...]
>
> This does limit traffic, but to ~32KB/s !!
>
> Is it normal to have dropped packets without overlimits?
>
> Could it be related to CPU performance (overload)? I'm using a WRAP2
> board (Geode SC1100 at 266 MHz).
> Running top during a big download, it appears the CPU is 95% idle...
>
> Laurent Haond

Finally, I've found the solution: on this hardware it seems the packet
scheduler clock cannot rely on the CPU clock. I recompiled the kernel with:

CONFIG_NET_SCH_CLK_GETTIMEOFDAY=y

instead of

CONFIG_NET_SCH_CLK_CPU=y

and now everything seems to be OK.

Laurent
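For anyone hitting the same symptom, a minimal sketch of how to check which packet-scheduler clock source a running kernel was built with (this assumes the config is exposed via /proc/config.gz or a /boot/config-* file, which depends on how the kernel was built):

# either of these should show the NET_SCH_CLK_* options, if available
zcat /proc/config.gz | grep NET_SCH_CLK
grep NET_SCH_CLK /boot/config-$(uname -r)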
Hi all,

I have a problem with htb and wonder if anybody has encountered this.
On my LAN I have more than 1000 clients, and I am using htb to shape the
incoming traffic. The problem is that I am experiencing packet loss (about
4%) on the QoS server. The server is dropping packets even if my traffic is
relatively moderate.

I tried everything: the estimator, setting the quantum, etc., but it doesn't
seem to improve.

My script is relatively simple:

tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 10

# root class
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 50000kbit ceil 50000kbit

# default class
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit ceil 512kbit

# and each client IP has a class associated with it
tc class add dev eth0 parent 1:1 classid 1:$COUNTER htb rate 5kbit ceil 400kbit
tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst $IP flowid 1:$COUNTER
# and COUNTER increments by 1 for each rule added

What could I do? Are there some kernel parameters that I could modify in
order to obtain better performance?

Thanks
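For reference, a minimal sketch of how the per-client part of such a script is typically looped; the clients.txt file and the starting counter value are assumptions for illustration, not from the original post:

# hypothetical loop over a per-client IP list (clients.txt is an assumption)
COUNTER=100
while read IP; do
    tc class add dev eth0 parent 1:1 classid 1:$COUNTER htb rate 5kbit ceil 400kbit
    tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 \
        match ip dst $IP flowid 1:$COUNTER
    COUNTER=$((COUNTER + 1))
done < clients.txt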
Calin Ilis wrote:
> Hi all,
>
> I have a problem with htb and wonder if anybody has encountered this.
> On my LAN I have more than 1000 clients, and I am using htb to shape the
> incoming traffic. The problem is that I am experiencing packet loss (about
> 4%) on the QoS server. The server is dropping packets even if my traffic
> is relatively moderate.
>
> I tried everything: the estimator, setting the quantum, etc., but it
> doesn't seem to improve.

Do you see the dropped packets counted with tc -s class ls dev eth0 ?

Packet loss is normal for shaping TCP.

Andy.
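A minimal sketch (assuming the same eth0 device) of pulling out those counters, so the drops seen on the interface can be compared with what htb itself reports per class:

# per-class statistics, including the dropped counter for each htb class
tc -s class show dev eth0

# qdisc-level view on the same device
tc -s qdisc show dev eth0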