Hi,
I have a typical configuration for my firewall/gateway box: a single network card, with a PPPoE connection to the DSL modem.

I'm already successfully shaping the uplink (how come wondershaper.htb doesn't use the ceil parameter? It should implement bandwidth borrowing!), but I found the ingress policy a little bit rough. I'd like to keep the traffic categories I have on the uplink: ssh, web and batch. The goal is to discard packets of the lowest class first, then the middle one, and so on.

I've implemented a symmetrical downlink version of the uplink shaping on eth0, the other interface. However, I get this error (warning?) in the log:

pppoe[29606]: send (sendPacket): No buffer space available

It looks like the interface queue is complaining that it cannot deliver a packet (the uplink queue being full) and is therefore discarding some packets. This is not the behaviour I intended. Does anybody have ideas/suggestions/comments?

As an alternative, it would be a pretty good solution to tell the policing filter to discard only packets whose source port is !22. But I'd rather stay away from an ingress policer, because I've had problems with it.

Thanks in advance,
MatB

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
Matteo Brusa <miagi@tiscali.it> writes:
> I've implemented a symmetrical downlink version of the uplink shaping on eth0,
> the other interface. However, I get this error (warning?) in the log:
> pppoe[29606]: send (sendPacket): No buffer space available

Hm, I just did exactly the same thing here. I don't see this. What do your filters look like?

I do see

HTB: dequeue bug (8), report it please !

which I think is because I have too low burst/cburst settings. My settings currently look like:

tc class add dev $DEV parent 1: classid 1:1 htb rate ${BW}kbit burst 8k cburst 8k
tc class add dev $DEV parent 1:1 classid 1:10 htb rate ${BW80}kbit ceil ${BW}kbit burst 8k cburst 8k prio 1
tc class add dev $DEV parent 1:1 classid 1:20 htb rate ${BW10}kbit ceil ${BW}kbit burst 4k cburst 1k prio 2
tc class add dev $DEV parent 1:1 classid 1:30 htb rate ${BW10}kbit ceil ${BW}kbit burst 2k cburst 1k prio 3

(where ${BWxx} == xx% of the total bandwidth I want on that link)

-- greg
Greg Stark wrote:
> Hm, I just did exactly the same thing here. I don't see this.
>
> What do your filters look like?

Thanks for your quick answer. My config doesn't look so different:

tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 2000kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2000kbit prio 1
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 50kbit ceil 2000kbit prio 2
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20

The packets are marked in the POSTROUTING chain of the mangle table, as usual. I'm running kernel 2.4.20-30.9 custom and iptables v1.2.7a. HTB is reported as 3.10. Note that the problem arises when I hit the ceil throughput.

Thanks,
MatB
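[For readers following along: the fw-mark rules implied by "marked in the POSTROUTING chain of the mangle table" might look roughly like the sketch below. The ports and the decision to mark by source port are assumptions, not Matteo's actual rules; only the mark values 10 and 20 come from the tc filters above.]

```shell
# Hypothetical marking rules pairing with the fw filters above:
# mark 10 sends interactive (ssh) traffic to class 1:10,
# everything unmarked falls into the htb default class 1:20.
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 22 -j MARK --set-mark 10
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 80 -j MARK --set-mark 20
```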
Matteo Brusa <miagi@tiscali.it> writes:
> tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
> tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10

I'm using sfq as well. But I'm wondering if I wouldn't be better off with pfifo with a short queue. One of the entries in the HTB FAQ suggests using sfq can make it hard to limit bandwidth precisely, because it requires enough memory that tcp_wmem kicks in. Or is that only for locally generated traffic?

> tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10
> tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
>
> The packets are marked in the POSTROUTING chain of the mangle table, as usual.
> I'm running kernel 2.4.20-30.9 custom and iptables v1.2.7a. HTB is reported as 3.10.
> Note that the problem arises when I hit the ceil throughput.

I'm still unsure whether I want to be using iptables to mark packets or stick with the tc filters I inherited from wshaper. Marking packets in iptables has the advantage that it knows which packets were NATted and what the host on the far side of the NAT is. It also has some more flexible methods for matching.

One thing I'm wondering: is it possible in iptables to mark all packets after some amount of traffic? For example, I want port 80 traffic to be higher priority than ftp-data and bittorrent, but only for regular browsing. If I download something over, say, 200k I want it to get downgraded to the same group as ftp-data and bittorrent. Also, bittorrent has a habit of occasionally using random ports and doesn't set TOS. So if iptables knew that a flow had already transferred more than some threshold of data, it could downgrade it. Another possibility: any flow open for more than, say, 30s could be downgraded.

I don't think the qdiscs handle things at this granularity, but iptables sure does. I don't recall seeing any of these features, but it doesn't seem like it would be much of a stretch for it.
-- greg
Jason Boxman
2004-Jun-10 20:30 UTC
Re: Re: Shaping incoming traffic on the other interface
On Thursday 10 June 2004 16:07, Greg Stark wrote:
<snip>
> I'm using sfq as well. But I'm wondering if I wouldn't be better off with
> pfifo with a short queue. One of the entries in the HTB faq suggests using
> sfq can make it hard to limit bandwidth precisely because it requires
> enough memory that tcp_wmem kicks in. Or is that only for locally generated
> traffic?

I tried sticking a limit 10 pfifo on my PRIO n:1 to no avail. ICMP, which I add to that class, wasn't any better off. The only sure thing I found was to continually reduce my rate. I find that for best performance I'm stuck with 62.5% of my 256Kbps upstream. (We really need some kind of realtime PPPoATM scheduler.) I can run at 75%, but the lag is noticeable when browsing the Web, for example.

As for SFQ, you can use ESFQ, which lets you specify the actual queue limit, I believe, or you can recompile SFQ and redefine[1] the queue length in the source code.

[1] http://www.docum.org/stef.coene/qos/faq/cache/21.html

<snip>
> I'm still unsure whether I want to be using iptables to mark packets or
> stick with the tc filters I inherited from wshaper.

I'd suggest going with IPTables/Netfilter. You can't really go wrong. If you're running a firewall with IPTables already, then you're good to go. Plus, you cannot deal with most p2p traffic using straight `tc` filters. As always, I suggest IPP2P and L7-Filter for 2.4 and 2.6 respectively. L7 seems to catch 99 - 100% of my 'edonkey' traffic.

> Marking packets in iptables has the advantage that it knows which packets
> were NATted and what the host on the far side of the NAT is. It also has
> some more flexible methods for matching.
>
> One thing I'm wondering, is it possible in iptables to mark all packets
> after some amount of traffic? Like, for example I want port 80 traffic to
> be higher priority than ftp-data and bittorrent, but only for regular
> browsing.
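[For reference, swapping a class's sfq for a short pfifo, as tried above, is a one-liner; the device and handles below are borrowed from the eth0/htb setup earlier in the thread and are assumptions, not Jason's actual config:]

```shell
# Replace the sfq on the interactive class with a 10-packet FIFO.
# A short queue trades throughput for lower worst-case latency.
tc qdisc del dev eth0 parent 1:10 handle 10:
tc qdisc add dev eth0 parent 1:10 handle 10: pfifo limit 10
```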
> If I download something over, say, 200k I want it to get
> downgraded to the same group as ftp-data and bittorrent. Also, bittorrent
> has a habit of occasionally using random ports and doesn't set TOS. So if
> iptables knew that that flow had already transferred more than some
> threshold of data it could downgrade it.

Yes, you can do that. I have been meaning to do it for my HTTP traffic, but I keep forgetting. You might try something like this[2]. Or maybe this[3].

[2] http://www.docum.org/stef.coene/qos/faq/cache/49.html
[3] http://www.netfilter.org/patch-o-matic/pom-extra.html#pom-extra-connbytes

> Actually another possibility is any flow open for more than, say, 30s could
> be downgraded.

I just read a paper somewhere that mentioned research into that approach. Wish I could remember where I found it now. It was suggested that giving short flows a higher priority over longer-lived flows would result in better performance for both.

> I don't think the qdiscs handle things at this granularity, but iptables
> sure does. I don't recall seeing any of these features but it doesn't seem
> like it would be much of a stretch for it.

You might check out some of the other patch-o-matic extensions at the above URL.
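[A sketch of the connbytes idea from [3]. The option syntax shown is that of later mainline iptables; the original patch-o-matic version may differ, and the 200 KB threshold and mark value 20 are arbitrary assumptions:]

```shell
# Once an HTTP connection has carried more than ~200 KB in either
# direction, re-mark its remaining packets into the bulk class
# (mark 20 here is a placeholder for whatever the bulk class uses).
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 80 \
    -m connbytes --connbytes 200000: \
    --connbytes-dir both --connbytes-mode bytes \
    -j MARK --set-mark 20
```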
Andreas Klauer
2004-Jun-10 20:33 UTC
Re: Re: Shaping incoming traffic on the other interface
Am Thursday 10 June 2004 22:07 schrieb Greg Stark:
> One thing I'm wondering, is it possible in iptables to mark all packets
> after some amount of traffic?

Can probably be done with connbytes.

> bittorrent has a habit of occasionally using random ports

I think there was a patch on the BT mailing list a few weeks ago that solves this random port problem (on your side). Other clients, of course, can choose whatever ports they like. If that isn't possible, you probably need IPP2P or l7-filter and CONNMARK to identify BT traffic.

> So if iptables knew that that flow had already transferred more
> than some threshold of data it could downgrade it.

connbytes again, but that won't work well for BT, since it opens many connections all the time.

HTH
Andreas
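[A sketch of the IPP2P + CONNMARK combination mentioned above, assuming the IPP2P and CONNMARK patches are applied; --bit is IPP2P's BitTorrent match, and the mark value 30 is an arbitrary placeholder:]

```shell
# Tag the whole connection once IPP2P recognises it as BitTorrent,
# then copy that connection mark onto every subsequent packet so the
# tc fw filters can classify the flow regardless of its port.
iptables -t mangle -A POSTROUTING -m ipp2p --bit -j CONNMARK --set-mark 30
iptables -t mangle -A POSTROUTING -m connmark --mark 30 -j MARK --set-mark 30
```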
Andreas Klauer <Andreas.Klauer@metamorpher.de> writes:
> I think there was a patch on the BT mailing list a few weeks ago
> that solves this random port problem (on your side). Other clients
> of course can choose whatever ports they like.

Why does bittorrent need to use more than one port in the first place? Does SO_REUSEADDR not work properly on Windows?

-- greg