Hello,

Could somebody explain the following issue? I set up an HTB class on the outgoing external interface to shape p2p upload traffic, and limited it to 4Mbit/s. I also set up iptables counter chains in the FORWARD chain to count the traffic generated by p2p and by everything else. While the tc stats show that the p2p shaping class keeps to the defined 4Mbit, the iptables counters show that p2p traffic exceeds that by 50% and is about 6Mbit/s. Of course I revised all configs twice, but found no mistake. Moreover, the tc stats show something strange [class for p2p]:

class htb 1:18 parent 1:10 leaf 18: prio 3 rate 1500Kbit ceil 4Mbit burst 3474b cburst 6599b
 Sent 17192942388 bytes 13525239 pkts (dropped 6936086, overlimits 0)
 rate 501604bps 392pps backlog 90p
 lended: 5070369 borrowed: 8454780 giants: 0
 tokens: -4830 ctokens: -14550

I mean that the number of dropped packets is about half of the total sent packets. So I realize that's why the iptables counters, which sit "before" the outgoing traffic shaping, show 6Mbit [100% + 50% (dropped) traffic] instead of the defined 4Mbit. OK, but why doesn't the traffic slow down due to shaping and still hammer the upload? [I tried sfq and red qdiscs attached to the class.]

Regards
tw
--
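[For reference, the setup being described might look roughly like the sketch below. The interface name (eth1), the fw mark value, and the accounting chain name are assumptions; the rates and class ids match the stats quoted in the post.]

```shell
# Hypothetical reconstruction of the setup described above.
# eth1, mark 18 and the chain name "p2p_acct" are made up; rates match the post.
tc qdisc add dev eth1 root handle 1: htb default 17
tc class add dev eth1 parent 1:  classid 1:1  rate 100mbit   ceil 100mbit
tc class add dev eth1 parent 1:1 classid 1:10 rate 14200kbit ceil 14200kbit
tc class add dev eth1 parent 1:10 classid 1:17 prio 3 rate 3mbit    ceil 9mbit   # default
tc class add dev eth1 parent 1:10 classid 1:18 prio 3 rate 1500kbit ceil 4mbit   # p2p

# classify p2p traffic (assumed to carry fwmark 18) into the 4Mbit class
tc filter add dev eth1 parent 1: protocol ip handle 18 fw flowid 1:18

# iptables accounting chain in FORWARD, i.e. "before" the egress shaping
iptables -N p2p_acct
iptables -A FORWARD -o eth1 -m mark --mark 18 -j p2p_acct
iptables -A p2p_acct -j RETURN   # read counters with: iptables -L p2p_acct -vx
```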
On Tuesday 03 May 2005 23:34, Tomasz Wrona wrote:

> OK, but why doesn't the traffic slow down due to shaping and still
> hammer the upload? [I tried sfq and red qdiscs attached to the class.]

You already found out that iptables counts packets which get dropped later by HTB. You kind of answered your own question with that observation. What I don't understand is what you mean now by 'traffic slow down'.

You have a 1500kbit class with a ceil of 4MBit. So if you're missing a slow down, does that mean that this class sends more than 4MBit, or does it borrow too much bandwidth from other classes? What other classes do you have?

Regards,
Andreas
>> OK, but why doesn't the traffic slow down due to shaping and still
>> hammer the upload? [I tried sfq and red qdiscs attached to the class.]
>
> You already found out that iptables counts packets which get dropped
> later by HTB. You kind of answered your own question with that
> observation. What I don't understand is what you mean now by 'traffic
> slow down'.

I suppose that the traffic should slow down to the defined bandwidth along the "whole route" - I mean before the TC queue. When some queue limits the speed, all hosts generating the [remotely shaped] traffic should slow down thanks to TCP's congestion control, but comparing the TC and iptables counters, it looks like that doesn't quite happen.

> You have a 1500kbit class with a ceil of 4MBit. So if you're missing a
> slow down, does that mean that this class sends more than 4MBit, or
> does it borrow too much bandwidth from other classes?

TC does a good job, but as mentioned above the traffic doesn't slow down before the queue, and a lot of packets [~30% of the total] are dropped by the queue.

> What other classes do you have?

The others don't have much to do, but if you'd like to see:

#-------------------------
class htb 1:11 parent 1:10 leaf 80bc: prio 0 rate 300Kbit ceil 400Kbit burst 1974b cburst 2099b
 Sent 146796 bytes 1336 pkts (dropped 0, overlimits 0)
 lended: 1336 borrowed: 0 giants: 0
 tokens: 48687 ctokens: 39076

class htb 1:1 root rate 100Mbit ceil 100Mbit burst 126587b cburst 126587b
 Sent 61202927558 bytes 153038476 pkts (dropped 0, overlimits 0)
 rate 618726bps 1581pps
 lended: 0 borrowed: 0 giants: 0
 tokens: 9760 ctokens: 9760

class htb 1:10 parent 1:1 rate 14200Kbit ceil 14200Kbit burst 19347b cburst 19347b
 Sent 61202927558 bytes 153038476 pkts (dropped 0, overlimits 0)
 rate 618726bps 1581pps
 lended: 22825987 borrowed: 0 giants: 0
 tokens: 6810 ctokens: 6810

class htb 1:13 parent 1:10 leaf 80be: prio 2 rate 300Kbit ceil 1700Kbit burst 1974b cburst 3724b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 lended: 0 borrowed: 0 giants: 0
 tokens: 53929 ctokens: 17949

class htb 1:12 parent 1:10 leaf 80bd: prio 1 rate 1200Kbit ceil 1500Kbit burst 3099b cburst 3474b
 Sent 7030787084 bytes 103264002 pkts (dropped 761, overlimits 0)
 rate 68754bps 1109pps
 lended: 103186192 borrowed: 77810 giants: 0
 tokens: 20834 ctokens: 18715

class htb 1:17 parent 1:10 leaf 80bf: prio 3 rate 3Mbit ceil 9Mbit burst 5349b cburst 12848b
 Sent 15792105008 bytes 19364386 pkts (dropped 2, overlimits 0)
 rate 50435bps 114pps
 lended: 15626484 borrowed: 3737902 giants: 0
 tokens: 13080 ctokens: 11188

class htb 1:18 parent 1:10 leaf 18: prio 3 rate 1500Kbit ceil 4Mbit burst 3474b cburst 6599b
 Sent 38380012693 bytes 30408825 pkts (dropped 13642848, overlimits 0)
 rate 501190bps 370pps backlog 73p
 lended: 11398477 borrowed: 19010275 giants: 0
 tokens: -24850 ctokens: -15351
#-------------------------

1:17 is the default traffic class, 1:18 is the p2p traffic class.

--
tw
On Wednesday 04 May 2005 11:00, tw@gd.home.pl wrote:

> TC does a good job, but as mentioned above the traffic doesn't slow
> down before the queue, and a lot of packets [~30% of the total] are
> dropped by the queue.

Yeah. 'Dropping packets to slow down connections' isn't the best approach to begin with, and it works only on a per-connection basis. Peer-to-peer traffic tends to have lots of connections, all of which are rather short-lived. Every new connection will try to push out as much data as possible, so HTB is busy beating the little bugger down right from the beginning. Some people use iptables to drop a few packets of any newly established connection across the board, to avoid this sort of problem.

HTH
Andreas
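[A sketch of what Andreas describes might look like the rule below, using the connbytes and statistic matches of a modern iptables. The interface, byte threshold and drop probability are all made-up illustration values, not anything from the thread.]

```shell
# Hypothetical: randomly drop ~5% of packets belonging to connections that
# are still young (first ~20 kB transferred), so new p2p flows back off
# early instead of HTB having to beat each one down. All numbers invented.
iptables -A FORWARD -o eth1 -p tcp \
    -m connbytes --connbytes 0:20000 --connbytes-dir both --connbytes-mode bytes \
    -m statistic --mode random --probability 0.05 \
    -j DROP
```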
AK> Every new connection will try to push out as much data as possible,
AK> so HTB is busy beating the little bugger down right from the
AK> beginning. Some people use iptables to drop a few packets of any
AK> newly established connection across the board, to avoid this sort
AK> of problem.

To avoid congestion I attached a RED queue to the p2p class, but without any visible effect. It doesn't respond to any kind of shaping :/

--
tw
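[Attaching RED under the 4Mbit p2p class, as tried here, could look like the sketch below. The interface name is an assumption and the parameters just follow the usual rule of thumb for a ~4Mbit link (burst ≈ (2*min+max)/(3*avpkt)); they are not the values actually used in the thread.]

```shell
# Hypothetical: RED as the leaf qdisc of the p2p class 1:18.
# min/max/limit are byte thresholds on the average queue length.
tc qdisc add dev eth1 parent 1:18 handle 18: red \
    limit 60000 min 15000 max 45000 avpkt 1000 \
    burst 25 bandwidth 4mbit probability 0.02
```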
Tomasz Wrona wrote:

> AK> Every new connection will try to push out as much data as
> AK> possible, so HTB is busy beating the little bugger down right from
> AK> the beginning. Some people use iptables to drop a few packets of
> AK> any newly established connection across the board, to avoid this
> AK> sort of problem.
>
> To avoid congestion I attached a RED queue to the p2p class, but
> without any visible effect. It doesn't respond to any kind of
> shaping :/

Maybe you could try further separating the P2P traffic into bulk and network udp/syns/acks etc. It depends on what type of P2P clients there are, to some extent what TCP their OS is running, and the number of connections. Maybe PRIO would be better, with bulk going to SFQ and small packets getting priority. You could also play around with queue lengths and see if that helps.

Does it actually matter anyway - as it's egress you are shaping, and I assume the 2Mbit extra doesn't really hurt LAN speeds.

Andy.
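[Andy's PRIO-plus-SFQ suggestion might be sketched as below: a two-band PRIO as the leaf of the p2p class, bulk in SFQ, with a u32 filter lifting small packets (bare ACKs etc.) into the higher band. Handles, the interface name, and the 128-byte cutoff are assumptions.]

```shell
# Hypothetical: 2-band PRIO under the p2p class; all traffic defaults to
# band 2 (the SFQ), small packets are filtered into band 1.
tc qdisc add dev eth1 parent 1:18 handle 180: prio bands 2 \
    priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
tc qdisc add dev eth1 parent 180:2 handle 182: sfq perturb 10

# packets with IP total length < 128 bytes (mostly bare ACKs) go to band 1
tc filter add dev eth1 parent 180: protocol ip prio 1 u32 \
    match u16 0x0000 0xff80 at 2 flowid 180:1
```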
AF> Maybe you could try further separating the P2P traffic into bulk and
AF> network udp/syns/acks etc.

In fact that's already done. UDP and ACKs go to an extra prio queue. The statistics also say that the dropped packets are the same size as the other packets.

AF> It depends on what type of P2P clients there are, to some extent
AF> what TCP their OS is running, and the number of connections. Maybe
AF> PRIO would be better, with bulk going to SFQ and small packets
AF> getting priority.

AF> You could also play around with queue lengths and see if that helps.

OK, I will try.

AF> Does it actually matter anyway - as it's egress you are shaping, and
AF> I assume the 2Mbit extra doesn't really hurt LAN speeds.

It doesn't play a role for LAN speed, but it shouldn't occur anyway. It means that something doesn't work as expected. If it happened on download, all your shaping would be useless: you would have to leave e.g. 2Mbit [30% !!!] of your leased line spare so as not to overload it.

BTW, I wondered whether it could be caused e.g. by a hacked TCP stack on some hosts, or by the TCP window scaling feature being turned off... ?

--
tw
Tomasz Wrona wrote:

> AF> Maybe you could try further separating the P2P traffic into bulk
> AF> and network udp/syns/acks etc.
>
> In fact that's already done. UDP and ACKs go to an extra prio queue.
> The statistics also say that the dropped packets are the same size as
> the other packets.

Ahh - OK

> AF> It depends on what type of P2P clients there are, to some extent
> AF> what TCP their OS is running, and the number of connections. Maybe
> AF> PRIO would be better, with bulk going to SFQ and small packets
> AF> getting priority.
>
> AF> You could also play around with queue lengths and see if that
> AF> helps.
>
> OK, I will try.

Do you know roughly how many active connections you have? I think SFQ should be better than HTB's default FIFO.

> AF> Does it actually matter anyway - as it's egress you are shaping,
> AF> and I assume the 2Mbit extra doesn't really hurt LAN speeds.
>
> It doesn't play a role for LAN speed, but it shouldn't occur anyway.
> It means that something doesn't work as expected. If it happened on
> download, all your shaping would be useless: you would have to leave
> e.g. 2Mbit [30% !!!] of your leased line spare so as not to overload
> it.

Yes, shaping from the wrong end is hard anyway, and P2P is always worse. Saying that, the version of BIC TCP that kernel.org was running last time I looked is really over-aggressive - I hope and suspect that Linux BIC has been fixed, so this will go away in time.

> BTW, I wondered whether it could be caused e.g. by a hacked TCP stack
> on some hosts, or by the TCP window scaling feature being turned
> off... ?

I think the fact it's on by default in Linux now hurts - it's off by default in Windows AFAIK. Are the senders on your LAN using Linux or Windows?

Andy.
AF> Do you know roughly how many active connections you have?

Up to 30K.

AF> I think SFQ should be better than HTB's default FIFO.

Neither works... [read ahead]

>> BTW, I wondered whether it could be caused e.g. by a hacked TCP stack
>> on some hosts, or by the TCP window scaling feature being turned
>> off... ?

AF> I think the fact it's on by default in Linux now hurts - it's off by
AF> default in Windows AFAIK. Are the senders on your LAN using Linux or
AF> Windows?

99% Windows users. However, I have just found the probable reason for the issue...

As p2p transfers in both directions, most of it sends large ACK packets. After a short investigation, the facts are:

1) 40% of p2p ACK packets have a payload larger than 1KB; the others are regular ACKs.

2) 15% of all other traffic's ACKs are longer than 1KB; 85% are regular ACKs.

I have a priority class only for regular short ACKs, so the others go to the custom p2p class. Because ACKs have to be sent after receiving some data, that's probably why I can't slow the traffic down to the defined value. Every time a customer receives p2p data, he sends a LARGE ACK. The outcome is that shaping p2p upload has an impact on p2p download and vice versa - or in other words, to shape the upload you must shape the download too. Giving extra priority to large ACKs would be suicide. I suppose the only way to shape p2p is to dominate it with hard limits [up/down].

Please correct me if this is wrong [and give advice ;)].

--
Regards
Tomasz Wrona
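[The "priority only for regular short ACKs" rule Tomasz describes could be expressed with iptables as in the sketch below; piggybacked (large) ACKs fail the length match and stay in the shaped p2p class. The interface, 128-byte cutoff and target class id are assumptions.]

```shell
# Hypothetical: put only bare ACKs (ACK flag alone, total length <= 128
# bytes) into the priority class 1:12; large piggybacked ACKs fall through.
iptables -t mangle -A POSTROUTING -o eth1 -p tcp --tcp-flags ALL ACK \
    -m length --length :128 -j CLASSIFY --set-class 1:12
```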
Tomasz Wrona wrote:

> AF> Do you know roughly how many active connections you have?
>
> Up to 30K.

I think this could be the problem - if 30k connections all tried to send through 4Mbit, then a quick prod at xcalc tells me that each would get to send one 1500-byte packet every 90 seconds.

> However, I have just found the probable reason for the issue...
>
> As p2p transfers in both directions, most of it sends large ACK
> packets. After a short investigation, the facts are:
>
> 1) 40% of p2p ACK packets have a payload larger than 1KB; the others
> are regular ACKs.
>
> 2) 15% of all other traffic's ACKs are longer than 1KB; 85% are
> regular ACKs.

Maybe - it depends how you are testing; remember that all TCP packets have ACK set after the initial handshake. I know p2p like BitTorrent does use a single connection in full duplex, but I never noticed it hurting upstream shaping - it does make downstream more bursty, as the empty ACKs can get sent ahead of the piggybacked ones stuck in the queue and ACK a large chunk of data.

> I have a priority class only for regular short ACKs, so the others go
> to the custom p2p class. Because ACKs have to be sent after receiving
> some data, that's probably why I can't slow the traffic down to the
> defined value. Every time a customer receives p2p data, he sends a
> LARGE ACK. The outcome is that shaping p2p upload has an impact on p2p
> download and vice versa - or in other words, to shape the upload you
> must shape the download too. Giving extra priority to large ACKs would
> be suicide. I suppose the only way to shape p2p is to dominate it with
> hard limits [up/down].
>
> Please correct me if this is wrong [and give advice ;)].

Maybe limiting the number of connections per user would be best - if you can.

Andy.
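[Andy's xcalc estimate checks out: 30000 flows each needing to send one 1500-byte packet through a 4 Mbit/s class works out to one packet per flow every 90 seconds.]

```shell
# 30000 flows x 1500 bytes x 8 bits per byte, drained at 4,000,000 bit/s
seconds=$(( 30000 * 1500 * 8 / 4000000 ))
echo "$seconds"   # -> 90 (seconds between 1500-byte packets per flow)
```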
>> AF> Do you know roughly how many active connections you have?
>>
>> Up to 30K.

AF> I think this could be the problem - if 30k connections all tried to
AF> send through 4Mbit, then a quick prod at xcalc tells me that each
AF> would get to send one 1500-byte packet every 90 seconds.

Not all of them are active. That's the total of upstream and downstream conntrack entries.

AF> Maybe limiting the number of connections per user would be best - if
AF> you can.

That is already done. Today I tested p2p traffic with a limited set of allowed p2p users, no more than a few dozen sessions [a few hundred conntrack entries in total]. With upload shaped to 3Mbit, iptables counted up to 3.5Mbit of traffic. You are right: the more active sessions [and the more bandwidth allocated], the bigger this difference grows. However, I am still not sure that it's the only reason for this upload behaviour.

Regards
Tomasz Wrona

PS. Andy, thanks a lot for your time :)
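[The per-user connection cap Andy suggested, and which Tomasz says is in place, might be written with iptables' connlimit match along these lines. The limit of 50 is a made-up illustration value; by default connlimit counts per source IP.]

```shell
# Hypothetical: refuse new TCP connections once a source IP already has
# 50 established ones; existing connections are unaffected.
iptables -A FORWARD -p tcp --syn \
    -m connlimit --connlimit-above 50 \
    -j REJECT --reject-with tcp-reset
```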