Hello *! My traffic shaping (tc-htb) drops packets very early, or at least I suspect this. It drops about 30% of the packets. The traffic-generating application is running locally on the shaping host. I think I can lower this rate by increasing the packet buffer, because the local application will slow down as the buffer fills (TCP/IP flow control). But I can't find any option for that; I have been through the manual several times. Is there a solution?

See output of "tc -s -d class show dev ppp0" (main class):

  class htb 1:10 parent 1:1 prio 1 quantum 1800 rate 144000bit
  ceil 480000bit burst 1779b/8 mpu 0b overhead 0b cburst 2199b/8
  mpu 0b overhead 0b level 0
  Sent 57567157 bytes 40609 pkts (dropped 16033, overlimits 0)
  rate 460328bit 40pps
  lended: 12845 borrowed: 27764 giants: 0
  tokens: -74623 ctokens: -28826

This class is the only one with packet drops. Does tc use the kernel's network packet queue?

Great thanks in advance, Alvo
John Smith wrote:

> My traffic shaping (tc-htb) drops packets very early, or at least I
> suspect this. It drops about 30% of the packets. The traffic-generating
> application is running locally on the shaping host. I think I can lower
> this rate by increasing the packet buffer, because the local application
> will slow down as the buffer fills (TCP/IP flow control). But I can't
> find any option for that; I have been through the manual several times.
> Is there a solution?
>
> This class is the only one with packet drops. Does tc use the kernel's
> network packet queue?

HTB uses the txqueuelen of the interface if you don't add a queue to the leaf class. For my ppp0 that's 3, which is a bit short; even so, I just did a test and only got 10% loss, so maybe your generator app/kernel version of TCP is a bit over-aggressive. I did 1 TCP stream with netperf on 2.6.12-rc1.

So either add a queue to the leaf and specify a length, or before you start HTB do "ifconfig ppp0 txqueuelen 30" or whatever.

Andy.
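Both suggestions from the reply might look something like this; the device name, handles and queue lengths below are illustrative, not taken from the posts:

```shell
# Option 1: attach an explicit FIFO qdisc to the HTB leaf class so it
# no longer inherits the (short) interface txqueuelen.
tc qdisc add dev ppp0 parent 1:10 handle 110: pfifo limit 30

# Option 2: enlarge the interface transmit queue before starting HTB,
# so leaf classes without their own qdisc get the bigger default queue.
ifconfig ppp0 txqueuelen 30
# or equivalently with the ip tool:
ip link set dev ppp0 txqueuelen 30
```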
John Smith
2005-May-31 06:47 UTC
Question 2: can an mpu be specified with htb by appending a tc-tbf queue?
Hello! tc-htb is great because it's easy, but you cannot specify an mpu. That's bad, because with the mpu you can describe the physical characteristics of the underlying connection. I have a broadband DSL connection, and my problem is that the proportion of small packets to big packets changes a lot. So without specifying the mpu, either the connection is not used to full capacity (big packets) or the buffer of the modem fills up (small packets).

Can I append a tc-tbf qdisc (qdisc 30:) to a tc-htb class and specify an mpu there? I already did so to increase the queue length for big packets (qdisc 20:), but when I tried the same for small packets I could not direct data to it with a tc filter; it always goes to qdisc 20:.

tc tree configuration:

  root qdisc htb
    class 1 htb
      class 1:10 htb
        qdisc 20: tbf
      class 1:11 htb
        qdisc 30: tbf

tc filters:

  tc filter add dev $dev protocol ip handle 10 \
      fw flowid 20:
  tc filter add dev $dev protocol ip handle 11 \
      fw flowid 30:

I don't know how these queues work together: if you specify the mpu in 20: or 30:, will the parent classes work with the calculated size of the packets? Anyway, traffic was not directed to 30: :( Any suggestions?

Great thanks in advance, Alvo
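For reference, a hypothetical reconstruction of the tree described above; the rates, buffer sizes and mark values are illustrative, not from the post:

```shell
DEV=ppp0

# HTB hierarchy as described: root qdisc, one inner class, two leaves.
tc qdisc add dev $DEV root handle 1: htb default 10
tc class add dev $DEV parent 1:  classid 1:1  htb rate 480kbit
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 240kbit ceil 480kbit
tc class add dev $DEV parent 1:1 classid 1:11 htb rate 240kbit ceil 480kbit

# TBF leaf qdiscs; tc-tbf does accept an mpu for its rate accounting.
tc qdisc add dev $DEV parent 1:10 handle 20: tbf \
    rate 240kbit buffer 10kb latency 100ms
tc qdisc add dev $DEV parent 1:11 handle 30: tbf \
    rate 240kbit buffer 10kb mpu 64 latency 100ms

# fw filters classify on the netfilter mark (set elsewhere, e.g. with
# iptables -j MARK). Note: a filter's flowid selects an HTB *class*
# (1:10, 1:11), not a leaf qdisc handle (20:, 30:); pointing flowid at
# the qdisc handles, as in the original post, may be why traffic never
# reached qdisc 30:.
tc filter add dev $DEV parent 1: protocol ip handle 10 fw flowid 1:10
tc filter add dev $DEV parent 1: protocol ip handle 11 fw flowid 1:11
```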
Andy Furniss
2005-May-31 20:54 UTC
Re: Question 2: can an mpu be specified with htb by appending a tc-tbf queue?
John Smith wrote:

> tc-htb is great because it's easy, but you cannot specify an mpu.

You can specify mpu and overhead with htb - well, you can with the recent 2.6 I use.

> That's bad, because with the mpu you can describe the physical
> characteristics of the underlying connection. I have a broadband DSL
> connection, and my problem is that the proportion of small packets to
> big packets changes a lot. So without specifying the mpu, either the
> connection is not used to full capacity (big packets) or the buffer of
> the modem fills up (small packets).

If you can find the overhead for your DSL type, then you can patch htb and tc to do it perfectly. Have a look at the thesis on http://www.adsl-optimizer.dk/ - there is a section that shows the different overheads.

Andy.
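Per the reply, recent 2.6-era htb takes mpu and overhead directly on the class, so no tbf leaf is needed for that. A sketch (exact support depends on the kernel and iproute2 version; the numbers are illustrative ATM/AAL5-style values, not from the posts):

```shell
# mpu: minimum packet size used for rate accounting (ATM cell padding),
# overhead: per-packet link-layer overhead added before accounting.
tc class add dev ppp0 parent 1:1 classid 1:10 htb \
    rate 144kbit ceil 480kbit \
    mpu 96 overhead 24
```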