Hi!

There is a script called "The Wonder Shaper" at http://lartc.org/wondershaper/ whose goal is to improve latency on cable/ADSL connections by slightly reducing the upload/download bandwidth. The problem is that Shorewall also has Traffic Shaping/Control and TOS stuff, so wondershaper and Shorewall cannot work together. So my question is: can we have the functionality of wondershaper in Shorewall, and how?

I am attaching 3 files from wondershaper: 1) the README, 2) and 3) the CBQ and HTB versions of the script.

From a very quick look, I think that if you stick the wondershaper script in
/etc/shorewall/tcstart, it should "just work".

-Tom
--
Tom Eastep     \ Shorewall - iptables made easy
AIM: tmeastep  \ http://www.shorewall.net
ICQ: #60745924 \ teastep@shorewall.net

----- Original Message -----
From: "Yaacov Akiba Slama" <ya@slamail.org>
To: <shorewall-users@shorewall.net>
Sent: Wednesday, March 06, 2002 2:05 AM
Subject: [Shorewall-users] Wondershaper

> Hi!
>
> There is a script called "The Wonder Shaper" at
> http://lartc.org/wondershaper/ whose goal is to improve latency on
> cable/ADSL connections by slightly reducing the upload/download bandwidth.
> The problem is that Shorewall also has Traffic Shaping/Control and TOS
> stuff, so wondershaper and Shorewall cannot work together.
> So my question is: can we have the functionality of wondershaper in
> Shorewall, and how?
>
> I am attaching 3 files from wondershaper: 1) the README, 2) and 3) the
> CBQ and HTB versions of the script.

----------------------------------------------------------------------------

The Wonder Shaper 1.0
bert hubert <ahu@ds9a.nl>
http://lartc.org/wondershaper
(c) Copyright 2002
Licensed under the GPL - see 'COPYING'

This document is a bit long, I'll split it up later.
The very short summary is: edit the first few lines of 'wshaper' and run it.

GOALS
-----

I attempted to create the holy grail:

* Maintain low latency for interactive traffic at all times

This means that downloading or uploading files should not disturb SSH or
even telnet. These are the most important things; even 200 ms latency is
sluggish to work over.

* Allow 'surfing' at reasonable speeds while uploading or downloading

Even though http is 'bulk' traffic, other traffic should not drown it out
too much.

* Make sure uploads don't harm downloads, and the other way around

This is a much-observed phenomenon where upstream traffic simply destroys
download speed. It turns out that all this is possible, at the cost of a
tiny bit of bandwidth. The reason that uploads, downloads and ssh hurt
each other is the presence of large queues in many domestic access devices
like cable or DSL modems.

The next section explains in depth what causes the delays and how we can
fix them. You can safely skip it and head straight for the script if you
don't care how the magic is performed.

Why it doesn't work well by default
-----------------------------------

ISPs know that they are benchmarked solely on how fast people can download.
Besides available bandwidth, download speed is influenced heavily by packet
loss, which seriously hampers TCP/IP performance. Large queues can help
prevent packet loss and speed up downloads, so ISPs configure large queues.

These large queues however damage interactivity. A keystroke must first
travel the upstream queue, which may be seconds (!) long, on its way to your
remote host. It is then displayed, which leads to a packet coming back,
which must then traverse the downstream queue, located at your ISP, before
it appears on your screen.

This HOWTO teaches you how to mangle and process the queue in many ways, but
sadly, not all queues are accessible to us. The queue over at the ISP is
completely off-limits, whereas the upstream queue probably lives inside your
cable modem or DSL device. You may or may not be able to configure it. Most
probably not.
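
To put a rough number on "seconds (!) long", here is some illustrative
arithmetic (mine, not part of the original README), using the 220 kbit/s
uplink and the ~2.3 s upload latency measured further down:

  # bytes the modem must be buffering to produce a 2.3 s delay at 220 kbit/s
  awk 'BEGIN { kbit = 220; sec = 2.3; b = kbit * 1000 / 8 * sec;
               printf "%.0f bytes buffered = about %.0f full 1500-byte packets\n", b, b / 1500 }'
  # prints: 63250 bytes buffered = about 42 full 1500-byte packets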
So, what next? As we can't control either of those queues, they must be
eliminated and moved to your Linux router. Luckily this is possible.

Limit upload speed somewhat
---------------------------

By limiting our upload speed to slightly less than the truly available rate,
no queues are built up in our modem. The queue is now moved to Linux.

Limit download speed
--------------------

This is slightly trickier as we can't really influence how fast the internet
ships us data. We can however drop packets that are coming in too fast,
which causes TCP/IP to slow down to just the rate we want. Because we don't
want to drop traffic unnecessarily, we configure a 'burst' size we allow at
higher speed.

Now, once we have done this, we have eliminated the downstream queue totally
(except for short bursts), and gain the ability to manage the upstream queue
with all the power Linux offers.

Let interactive traffic skip the queue
--------------------------------------

What remains to be done is to make sure interactive traffic jumps to the
front of the upstream queue. To make sure that uploads don't hurt downloads,
we also move ACK packets to the front of the queue. This is what normally
causes the huge slowdown observed when generating bulk traffic both ways:
the ACKnowledgements for downstream traffic must compete with upstream
traffic, and get delayed in the process.

Results
-------

If we do all this we get the following measurements using an excellent ADSL
connection from xs4all in the Netherlands:

Baseline latency:
round-trip min/avg/max = 14.4/17.1/21.7 ms

Without traffic conditioner, while downloading:
round-trip min/avg/max = 560.9/573.6/586.4 ms

Without traffic conditioner, while uploading:
round-trip min/avg/max = 2041.4/2332.1/2427.6 ms

With conditioner, during 220 kbit/s upload:
round-trip min/avg/max = 15.7/51.8/79.9 ms

With conditioner, during 850 kbit/s download:
round-trip min/avg/max = 20.4/46.9/74.0 ms

When uploading, downloads proceed at ~80% of the available speed, and uploads
at around 90%. Latency then jumps to 850 ms; I am still figuring out why.

What you can expect from this script depends a lot on your actual uplink
speed. When uploading at full speed, there will always be a single packet
ahead of your keystroke. That is the lower limit to the latency you can
achieve - divide your MTU by your upstream speed to calculate it. Typical
values will be somewhat higher than that. Lower your MTU for better effects!

A small table:

Uplink speed (kbit/s) | Expected latency due to upload
-----------------------------------------------------
  32                  | 234 ms
  64                  | 117 ms
 128                  |  58 ms
 256                  |  29 ms

So to calculate your effective latency, take a baseline measurement (ping on
an unloaded link), look up the number in the table, and add it. That is
about the best you can expect. This number comes from a calculation that
assumes that your upstream keystroke will have at most half a full-sized
packet ahead of it.

This boils down to:

    mtu * 0.5 * 10
    -------------- + baseline_latency
         kbit

The factor 10 is not quite correct but works well in practice.
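
A quick sanity check of that formula with illustrative values (mine, not part
of the original README): a 1500-byte MTU on a 128 kbit/s uplink gives

  awk 'BEGIN { mtu = 1500; kbit = 128; printf "%.1f ms\n", mtu * 0.5 * 10 / kbit }'
  # prints: 58.6 ms -- the 58 ms row of the table above, on top of your baseline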

Your kernel
-----------

If you run a recent distribution, everything should be ok. You need 2.4 with
QoS options turned on.

If you compile your own kernel, it must have some options enabled. Most
notably, in the Networking Options menu, under QoS and/or Fair Queueing, turn
on at least CBQ, PRIO, SFQ, Ingress, Traffic Policing, QoS support, Rate
Estimator, QoS classifier, U32 classifier and fwmark classifier.

In practice, I (and most distributions) just turn on everything.
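
For reference, a guess at the .config symbols behind those menu entries (these
names are from memory of 2.4 kernels and are not part of the original README;
double-check against your own tree, and note that HTB only appears in later
2.4 kernels or via the HTB patch):

  CONFIG_NET_SCHED=y          # QoS and/or fair queueing
  CONFIG_NET_SCH_CBQ=m        # CBQ packet scheduler
  CONFIG_NET_SCH_HTB=m        # HTB packet scheduler (needed for wshaper.htb only)
  CONFIG_NET_SCH_PRIO=m       # PRIO
  CONFIG_NET_SCH_SFQ=m        # SFQ
  CONFIG_NET_SCH_INGRESS=m    # ingress qdisc
  CONFIG_NET_QOS=y            # QoS support
  CONFIG_NET_ESTIMATOR=y      # rate estimator
  CONFIG_NET_CLS=y            # packet classifier API
  CONFIG_NET_CLS_U32=m        # U32 classifier
  CONFIG_NET_CLS_FW=m         # fwmark classifier
  CONFIG_NET_CLS_POLICE=y     # traffic policing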

The scripts
-----------

The script comes in two versions, one which works on standard kernels and is
implemented using CBQ. The other one uses the excellent HTB qdisc which is
not in the default kernel. The CBQ version is more tested than the HTB one!

See 'wshaper' and 'wshaper.htb'.

Tuning
------

These scripts need to know the 'real' rate of your ISP connection. This is
hard to determine up front, as different ISPs appear to use different kinds
of bits. People report success using the following technique:

Estimate both your upstream and downstream at half the rate your ISP
specifies. Now verify that the script is functioning - check interactivity
while uploading and while downloading. This should deliver the latency as
calculated above. If not, check whether the script executed without errors.

Now slowly increase the upstream & downstream numbers in the script until
the latency comes back. This way you can find optimum values for your
connection. If you are happy, please report to me so I can make a list of
numbers that work well. Please let me know which ISP you use and the name of
your subscription, and its reputed specifications, so I can list you here
and save others the trouble.

Installation
------------

If you dial in, you can copy the script to /etc/ppp/ip-up.d and it will be
run at each connect.

If you want to remove the shaper from an interface, run 'wshaper stop'. To
see status information, run 'wshaper status'.

Known Shortcomings
------------------

Most Windows SSH clients do not set TOS flags, so the wondershaper has a
hard time recognizing interactive ssh traffic from 'putty'. This appears to
be due to a shortcoming in Windows.

The solution is to 'putty' to your Linux gateway and ssh onwards from there.

There is another solution (prioritize small ssh packets) but that is for a
later version.

PROBLEMS
--------

If you get errors, add a -x to the first line, as follows:

#!/bin/bash -x

And retry. This will show you which line gives an error. Before contacting
me, make sure that you are running a recent version of iproute!

Recent versions can be found at your Linux distributor, or if you prefer
compiling, here: ftp://ftp.inr.ac.ru/ip-routing/iproute2-current.tar.gz

More information
----------------

Information on how this all works can be found at http://lartc.org,
the Linux Advanced Routing & Traffic Control HOWTO site.

----------------------------------------------------------------------------

#!/bin/bash

# Wonder Shaper
# please read the README before filling out these values
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits. Also set the device that is to be shaped.

DOWNLINK=800
UPLINK=220
DEV=eth0

# Now remove the following two lines :-)

echo Please read the documentation in 'README' first
exit

if [ "$1" = "status" ]
then
    tc -s qdisc ls dev $DEV
    tc -s class ls dev $DEV
    exit
fi

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root    2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
    exit
fi

###### uplink

# install root CBQ

tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 10mbit

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:
# main class

tc class add dev $DEV parent 1: classid 1:1 cbq rate ${UPLINK}kbit \
   allot 1500 prio 5 bounded isolated

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 cbq rate ${UPLINK}kbit \
   allot 1600 prio 1 avpkt 1000

# bulk and default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 cbq rate $[9*$UPLINK/10]kbit \
   allot 1600 prio 2 avpkt 1000

# both get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10

# start filters
# TOS Minimum Delay (ssh, NOT scp) in 1:10:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 11 u32 \
   match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 12 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

tc filter add dev $DEV parent 1: protocol ip prio 13 u32 \
   match ip dst 0.0.0.0/0 flowid 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1

----------------------------------------------------------------------------

#!/bin/bash

# Wonder Shaper
# please read the README before filling out these values
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits. Also set the device that is to be shaped.
DOWNLINK=800
UPLINK=220
DEV=ppp0

# Now remove the following two lines :-)

echo Please read the documentation in 'README' first
exit

if [ "$1" = "status" ]
then
    tc -s qdisc ls dev $DEV
    tc -s class ls dev $DEV
    exit
fi

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root    2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
    exit
fi

###### uplink

# install root HTB, point default traffic to 1:20:

tc qdisc add dev $DEV root handle 1: htb default 20

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:

tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 6k

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 htb rate ${UPLINK}kbit \
   burst 6k prio 1

# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
   burst 6k prio 2

# both get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10

# TOS Minimum Delay (ssh, NOT scp) in 1:10:

tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 10 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1

Or rather, you should just invoke the attached script in
/etc/shorewall/tcstart.

-Tom
--
Tom Eastep     \ Shorewall - iptables made easy
AIM: tmeastep  \ http://www.shorewall.net
ICQ: #60745924 \ teastep@shorewall.net
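
For anyone who wants to try this, a minimal /etc/shorewall/tcstart along these
lines should be enough. This is an untested sketch: it assumes the wshaper
script from this thread has been saved as /etc/shorewall/wshaper and made
executable, and that DEV at the top of wshaper names your external interface.
Remember to delete the "echo ... ; exit" lines near the top of wshaper first,
or it will do nothing.

  # /etc/shorewall/tcstart - run the wondershaper when Shorewall starts
  # (the /etc/shorewall/wshaper path is just an example)
  [ -x /etc/shorewall/wshaper ] && /etc/shorewall/wshaper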

First, I forgot to thank you for your work on Shorewall, so thanks a lot.

Second, in fact I have a problem with wondershaper: after executing it
manually (after Shorewall was started), I had no access to the Internet, so I
thought it was a conflict between Shorewall and wondershaper. But in fact,
even after a "/sbin/shorewall clear ; ./wshaper", I have no access to the net.

Again, thanks for Shorewall, and thanks for your quick answer.

yas

Tom Eastep wrote:

>Or rather, you should just invoke the attached script in
>/etc/shorewall/tcstart.
>
>-Tom
>--
>Tom Eastep     \ Shorewall - iptables made easy
>AIM: tmeastep  \ http://www.shorewall.net
>ICQ: #60745924 \ teastep@shorewall.net
>
>----- Original Message -----
>From: "Yaacov Akiba Slama" <ya@slamail.org>
>To: <shorewall-users@shorewall.net>
>Sent: Wednesday, March 06, 2002 2:05 AM
>Subject: [Shorewall-users] Wondershaper
>
>>Hi!
>>
>>There is a script called "The Wonder Shaper" at
>>http://lartc.org/wondershaper/ whose goal is to improve latency on
>>cable/ADSL connections by slightly reducing the upload/download bandwidth.
>>The problem is that Shorewall also has Traffic Shaping/Control and TOS
>>stuff, so wondershaper and Shorewall cannot work together.
>>So my question is: can we have the functionality of wondershaper in
>>Shorewall, and how?
>>
>>I am attaching 3 files from wondershaper: 1) the README, 2) and 3) the
>>CBQ and HTB versions of the script.
>
>Stripped the attached scripts.