Hello,

I was attempting to throttle egress traffic to a specific rate using a
tbf. As a starting point I used an example from the LARTC HOWTO, which
goes:

tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540

I then attempted a large fetch from another machine via wget (~40 megs),
and the rate was clamped down to about 12 Kbytes/s. As this seemed too
far below the target, I gradually increased the latency up to 200ms,
which then gave me the expected result (~34 Kbytes/s).

I then applied this queuing discipline on a machine acting as a
gateway/router for a few VLANed subnets. The tbf was applied on
interface eth1.615. From another workstation I attempted a wget, so the
traffic had to go through the gateway/router. The download rate went
from 16 Mbytes/s down to about 1.6 Mbytes/s, but that is still much
higher than what I'm trying to clamp it down to.

Two questions:

1/ My main question. AFAIK, queuing disciplines affect egress traffic
whether that traffic originates from the host or is being forwarded.
Assuming that applying the tbf mostly to forwarded traffic is not an
issue, *is there anything else that could cause the transfer rate not
to be correctly clamped down?* What parameters should I be playing with?

2/ I'm assuming the first example I quoted must have worked as
described when the HOWTO was initially written a few years ago. In any
case, am I right that with a 50ms max latency, outgoing packets could
not be held long enough in the tbf and had to be dropped?

Thank you,
sting
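P.S. A quick back-of-the-envelope check of my 50ms hypothesis (my own
arithmetic, not from the HOWTO):

# 220 kbit/s is 27500 bytes/s, so the queue implied by "latency" is:
#   27500 B/s * 0.050 s = 1375 bytes -> less than one 1514-byte frame
#   27500 B/s * 0.200 s = 5500 bytes -> room for a few full frames
# which would explain why 50ms drops nearly everything and 200ms works.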
My first guess would be the vlans being the problem. I know that, at
least for class-based queuing disciplines on vlans, you have to take
care to define filters that funnel traffic through a class by selecting
802.1q traffic on the real interface, not the vlan interface.

I know traffic shaping does work on vlans with the class-based queues,
because I use it every day. But all my tc statements are applied on a
real physical interface, not the vlan interface; I could never get tc
to work on vlan interfaces directly.

Just a guess, but I bet you'd get the rate limiting you expect on your
vlan by applying the tbf rate limit on interface eth1 instead of the
vlan interface. If so, and if your goal is to rate limit by vlan, then
you will likely need to go with a class-based queueing discipline like
htb, and then define traffic filters to limit each vlan to the rate you
wish.
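Untested sketch of the first thing I'd try, i.e. your same tbf, just
moved to the physical interface:

# remove the qdisc from the vlan interface first
tc qdisc del dev eth1.615 root
# then apply it to the underlying NIC
tc qdisc add dev eth1 root tbf rate 220kbit latency 200ms burst 1540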
> My first guess would be the vlans being the problem. I know that, at
> least for class-based queuing disciplines on vlans, you have to take
> care to define filters that funnel traffic through a class by
> selecting 802.1q traffic on the real interface, not the vlan
> interface.

Wow, why would that be though? If the VLAN is simply presented as an
interface, and the queuing disciplines work on an interface basis, what
is it that breaks it?

> I know traffic shaping does work on vlans with the class-based
> queues, because I use it every day. But all my tc statements are
> applied on a real physical interface, not the vlan interface; I could
> never get tc to work on vlan interfaces directly.

For what it's worth, I've been applying netem queuing disciplines to
many different VLAN interfaces and have been getting exactly the
expected results (the packet loss % is right on, etc.); an example of
what I run is below. Can you think of anything different about a tbf
that would make it fail?

> Just a guess, but I bet you'd get the rate limiting you expect on
> your vlan by applying the tbf rate limit on interface eth1 instead of
> the vlan interface. If so, and if your goal is to rate limit by vlan,
> then you will likely need to go with a class-based queueing
> discipline like htb, and then define traffic filters to limit each
> vlan to the rate you wish.

Yes, the goal is to limit by VLAN. I will try what you suggested, i.e.
limit the traffic on the physical interface instead, and I'll report
back. But I hope that won't be the solution! :)
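The kind of netem qdisc I've been applying to VLAN interfaces (the
delay/loss values here are illustrative, not my exact setup):

tc qdisc add dev eth1.615 root netem delay 20ms loss 1%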
So I did apply the tbf on the eth1 interface instead of the VLAN
interface, and I saw the same results. Some rate limiting was
definitely occurring, but not down to the rate (220kbit) I was
expecting. It was still much higher (~1 Mbytes/s), the unclamped rate
being about 16 Mbytes/s. What I ran is sketched below.

Has everyone else otherwise pretty much always seen transfer rates
clamped down to what they expected with the tbf?

thanks.
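Sketch of what I ran (same rate/latency parameters as my earlier tests):

tc qdisc del dev eth1.615 root
tc qdisc add dev eth1 root tbf rate 220kbit latency 200ms burst 1540
# watched the sent/dropped counters while the wget was running:
tc -s qdisc show dev eth1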
On Wed, 2007-08-22 at 14:01 -0400, sting wrote:

> > My first guess would be the vlans being the problem. I know that,
> > at least for class-based queuing disciplines on vlans, you have to
> > take care to define filters that funnel traffic through a class by
> > selecting 802.1q traffic on the real interface, not the vlan
> > interface.
>
> Wow, why would that be though? If the VLAN is simply presented as an
> interface, and the queuing disciplines work on an interface basis,
> what is it that breaks it?

It can depend on where tc hooks into the network stack and where the
vlan headers get added or stripped. I'm no kernel hacker, but I suspect
it can depend on whether your network card is handling some of the vlan
tagging work or whether it's being handled by the OS somewhere (one way
to probe that from userspace is shown below). I have noticed different
behavior with different network cards:

# on one server I use...
/sbin/tc filter add dev eth1 protocol ip prio 2 parent 1: [insert appropriate filter statement here] flowid 1:123

# on another server I use (same kernel, just a different NIC)...
/sbin/tc filter add dev eth1 protocol 802.1q prio 2 parent 1: [insert appropriate filter statement here] flowid 1:123

Adding vlan information can change where some data is kept in a packet.
I can't explain in exact detail why I ran into problems, just what I
discovered.

> > I know traffic shaping does work on vlans with the class-based
> > queues, because I use it every day. But all my tc statements are
> > applied on a real physical interface, not the vlan interface; I
> > could never get tc to work on vlan interfaces directly.
>
> For what it's worth, I've been applying netem queuing disciplines to
> many different VLAN interfaces and have been getting exactly the
> expected results (the packet loss % is right on, etc.). Can you think
> of anything different about a tbf that would make it fail?

Not sure on that one. tbf does have a lot of knobs to turn in its
configuration, though, and I've not used netem.

> > Just a guess, but I bet you'd get the rate limiting you expect on
> > your vlan by applying the tbf rate limit on interface eth1 instead
> > of the vlan interface. If so, and if your goal is to rate limit by
> > vlan, then you will likely need to go with a class-based queueing
> > discipline like htb, and then define traffic filters to limit each
> > vlan to the rate you wish.
>
> Yes, the goal is to limit by VLAN. I will try what you suggested,
> i.e. limit the traffic on the physical interface instead, and I'll
> report back. But I hope that won't be the solution! :)

Limiting on the physical interface will also let you group vlans under
a common rate limit. Can be useful.
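One rough way to check whether the NIC/driver is doing the vlan work
(assuming ethtool is installed and the driver reports its offload
flags; not all drivers do):

ethtool -k eth1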
--
Bryan Schenker
Director
ResTech Services
www.restechservices.net
608-663-3868
On Wed, 2007-08-22 at 14:55 -0400, sting wrote:

> So I did apply the tbf on the eth1 interface instead of the VLAN
> interface, and I saw the same results. Some rate limiting was
> definitely occurring, but not down to the rate (220kbit) I was
> expecting. It was still much higher (~1 Mbytes/s), the unclamped rate
> being about 16 Mbytes/s.
>
> Has everyone else otherwise pretty much always seen transfer rates
> clamped down to what they expected with the tbf?

try this:

# make sure you've deleted anything old; you might want to run
# "/sbin/tc -s qdisc show dev eth1" first to verify your current
# config. This deletes all qdisc state just in case:
/sbin/tc qdisc del dev eth1 root

# define the root qdisc
/sbin/tc qdisc add dev eth1 root handle 1: htb default 2

# (added) the per-vlan classes below hang off parent 1:1, so it must
# exist; the 100mbit figure is just "the link speed", adjust to taste
/sbin/tc class add dev eth1 parent 1: classid 1:1 htb rate 100mbit burst 15k

# define the default rate class -- where everything goes that doesn't
# match one of your filters
/sbin/tc class add dev eth1 parent 1:1 classid 1:2 htb prio 2 rate 1000kbit ceil 1000kbit burst 15k

# define the rate you wish to limit the vlan to
/sbin/tc class add dev eth1 parent 1:1 classid 1:20 htb prio 2 rate 220kbit burst 15k

# now create the filter that puts traffic from that vlan into class 20.
# 1.2.3.0/24 is a range of IPs, but the filter capabilities are
# extraordinarily capable if you need to classify traffic some other
# way. (Note the u32 classifier keyword, which the match syntax needs.)
# Try replacing "802.1q" with "ip" if it doesn't work:
/sbin/tc filter add dev eth1 protocol 802.1q prio 2 parent 1: u32 match ip dst 1.2.3.0/24 flowid 1:20

# now run the following command -- very useful to confirm traffic is
# matching your filters, since it tells you how many packets match each
# filter rule you make:
tc -s filter show dev eth1
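And once traffic is flowing, the per-class byte/packet counters will
show whether the vlan's traffic is actually landing in class 1:20:

/sbin/tc -s class show dev eth1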
--
Bryan Schenker
Director
ResTech Services
www.restechservices.net
608-663-3868
sting wrote:

> Hello,
>
> I was attempting to throttle egress traffic to a specific rate using
> a tbf. As a starting point I used an example from the LARTC HOWTO,
> which goes:
>
> tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540

It's not the best example, as latency is a way of setting the buffer
length (limit), and 50ms @ 220kbit is < 1500 bytes. If you set
something < 1514/1518 bytes explicitly with limit, you would not pass
bulk packets at all. I guess it rounds up a bit if you use latency.

> I then attempted a large fetch from another machine via wget (~40
> megs), and the rate was clamped down to about 12 Kbytes/s. As this
> seemed too far below the target, I gradually increased the latency up
> to 200ms, which then gave me the expected result (~34 Kbytes/s).

I would expect that; tcp doesn't like one-packet/short buffers, and
it's even worse on a lan than a wan, as (linux?) tcp behaves
differently when it detects low latency.

> I then applied this queuing discipline on a machine acting as a
> gateway/router for a few VLANed subnets. The tbf was applied on
> interface eth1.615. From another workstation I attempted a wget, so
> the traffic had to go through the gateway/router. The download rate
> went from 16 Mbytes/s down to about 1.6 Mbytes/s, but that is still
> much higher than what I'm trying to clamp it down to.

I just tested a tbf on a vlan and it seems OK. If you see 1.6 Mbytes/s
and the tbf is 220kbit, maybe you are shaping in the wrong direction
and just seeing the acks get shaped? (OK, I am just guessing here.)
What does

tc -s qdisc ls dev eth1.615

say?

> Two questions:
>
> 1/ My main question. AFAIK, queuing disciplines affect egress traffic
> whether that traffic originates from the host or is being forwarded.
> Assuming that applying the tbf mostly to forwarded traffic is not an
> issue, *is there anything else that could cause the transfer rate not
> to be correctly clamped down?* What parameters should I be playing
> with?

One possible difference, though it's probably not your problem: if you
have a nic that does tcp segmentation offload, then locally generated
traffic may go through as supersize "packets", which makes htb go over
rate. I am not sure what tbf would do; maybe just drop them if the
buffer is not long enough. (A way to turn that off is below, in case
you want to rule it out.)

> 2/ I'm assuming the first example I quoted must have worked as
> described when the HOWTO was initially written a few years ago. In
> any case, am I right that with a 50ms max latency, outgoing packets
> could not be held long enough in the tbf and had to be dropped?

Yep. Also, that example was on a ppp wan, IIRC.

If you put anything on the root of an eth/vlan device, remember that
you are going to be catching arp as well as ip traffic.

Andy.
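P.S. If TSO does turn out to be in play for locally generated traffic,
it can usually be switched off (assuming the driver supports the flag):

ethtool -K eth1 tso off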