Hi,

The problem is: I have a PPPoE (PPP over Ethernet) server and 100 clients.
For each client a new pppX interface is created.

Now I want to limit the maximum speed on each interface to a different
value. For example:

ppp0 - downstream 256kbit, upstream 128kbit
ppp1 - downstream 512kbit, upstream 512kbit
...

Downstream is easy - I just add an htb rule on the user's pppX interface
and that's all. 100 rules for 100 clients.

Upstream seems to be a big problem, because AFAIK htb (and cbq, too) must
be attached to the outgoing interface, which in my case means that for
each client there are 99 potential outgoing interfaces.

This means I would need to set up 10k rules for only 100 clients
(100 rules for each pppX interface)!

I'm not sure, but 10k rules (u32 filter) seems like a rather big number
for a typical PC (or maybe I'm wrong and 10k rules is a small thing to
process for e.g. a single Duron 800MHz with 256MB RAM?).

Any ideas how to do such limiting in a better way?

-- 
Arkadiusz Miśkiewicz     IPv6 ready PLD Linux at http://www.pld.org.pl
misiek(at)pld.org.pl     AM2-6BONE, 1024/3DB19BBD, arekm(at)ircnet, PWr
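[For context, the per-interface downstream setup described above might look
roughly like the sketch below. The post does not show the actual commands,
so the interface name, rate, handles and class IDs are assumptions, not
taken from the thread.]

import os

# Sketch of a per-client downstream limit with HTB on the client's pppX
# device. ppp_iface and speed_down are example values.
ppp_iface = "ppp0"
speed_down = 256  # kbit

# Root HTB qdisc with a single class capped at the client's downstream rate.
os.system("tc qdisc add dev %s root handle 1: htb default 10" % ppp_iface)
os.system("tc class add dev %s parent 1: classid 1:10 htb rate %skbit ceil %skbit"
          % (ppp_iface, speed_down, speed_down))

One such pair of commands per pppX interface covers the downstream side,
which is why it scales to 100 clients without trouble.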
Arkadiusz Miskiewicz
2002-Aug-11 10:41 UTC
Re: big number of interfaces and upstream limit
Arkadiusz Miskiewicz <misiek@pld.ORG.PL> writes:

> Hi,
>
> The problem is: I have a PPPoE (ppp over eth) server and 100 clients.
> For each client a new pppX interface is created.
>
> Now I want to limit max speed on each interface to different
> values. For example
> ppp0 - downstream 256kbit, upstream 128kbit
> ppp1 - downstream 512kbit, upstream 512kbit
> ...
>
> Downstream is easy - I just add an htb rule on the user's pppX interface
> and that's all. 100 rules for 100 clients.
>
> Upstream seems to be a big problem because AFAIK htb (cbq, too) must
> be attached to the outgoing interface, which means that in my
> case for each client there are 99 potential outgoing interfaces.

Uhm, it was so easy ;) Here is part of my python script now:

import os

# ppp_iface, speed_down and speed_up are set elsewhere in the script,
# once per client.

# remove any existing shaping on the client interface
os.system("tc qdisc del root dev %s 2> /dev/null" % ppp_iface)
os.system("tc qdisc del dev %s ingress 2> /dev/null" % ppp_iface)

# downstream: token bucket filter on the pppX egress
os.system("tc qdisc add dev %s root tbf rate %skbit latency 50ms burst 1540"
          % (ppp_iface, speed_down))

# upstream: ingress qdisc with a u32 policer that drops excess traffic
os.system("tc qdisc add dev %s handle ffff: ingress" % ppp_iface)
os.system("tc filter add dev %s parent ffff: protocol ip prio 50 u32 "
          "match ip src 0.0.0.0/0 police rate %skbit burst 10k drop flowid :1"
          % (ppp_iface, speed_up))

-- 
Arkadiusz Miśkiewicz     IPv6 ready PLD Linux at http://www.pld.org.pl
misiek(at)pld.org.pl     AM2-6BONE, 1024/3DB19BBD, arekm(at)ircnet, PWr
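[A quick way to check what the snippet installed on a given interface; the
interface name here is just an example, not from the post.]

import os

# Hypothetical verification of the shaping set up by the script above.
ppp_iface = "ppp0"
os.system("tc -s qdisc show dev %s" % ppp_iface)                # tbf + ingress qdisc
os.system("tc -s filter show dev %s parent ffff:" % ppp_iface)  # policer and drop counters

Note that the ingress policer drops packets above speed_up rather than
queueing them, so upstream TCP flows are forced to back off; the burst
value controls how much momentary excess is tolerated before drops start.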
> Downstream is easy - I just add htb rule on user pppX interface
> and that's all. 100 rules for 100 clients.
>
> Upstream seems to be a big problem because AFAIK htb (cbq, too) must
> be attached to the outgoing interface, which means that in my
> case for each client there are 99 potential outgoing interfaces.
>
> This means that I need to setup 10k rules for only 100 clients
> (100 rules for each pppX interface)!
>
> I'm not sure but 10k rules (u32 filter) is rather big number
> for typical PC (or maybe I'm wrong and 10k rules is small
> thing to process for ie. single duron 800MHz, 256MB RAM?).
>
> Any ideas how to do such limiting in better way?

You can use the imq device to catch all incoming traffic. You can create
10 subclasses and put each client's data in one subclass.

Or you can use the ingress qdisc and use the policers in the filters to
throttle incoming traffic per interface.

Stef

-- 
stef.coene@docum.org
"Using Linux as bandwidth manager" http://www.docum.org/
#lartc @ irc.oftc.net
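[For completeness, a rough sketch of the imq variant Stef mentions, in the
same os.system style as the script above. imq requires a patched kernel
plus the iptables IMQ target, and the device name, rate and client address
below are assumptions rather than anything from the thread.]

import os

# Redirect traffic arriving on all ppp interfaces to imq0, then shape it
# there with one HTB class per client (only one client class shown).
os.system("iptables -t mangle -A PREROUTING -i ppp+ -j IMQ --todev 0")
os.system("ip link set imq0 up")
os.system("tc qdisc add dev imq0 root handle 1: htb")
os.system("tc class add dev imq0 parent 1: classid 1:10 htb rate 128kbit ceil 128kbit")
os.system("tc filter add dev imq0 parent 1: protocol ip prio 1 u32 "
          "match ip src 10.0.0.1/32 flowid 1:10")

The advantage over per-interface policing is that all clients share one set
of HTB classes on a single device, so excess upstream traffic can be queued
(and bandwidth borrowed between classes) instead of simply being dropped.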