Displaying 20 results from an estimated 4000 matches similar to: "tc filter matching anything"
2007 Aug 22 (6 replies): simple tbf rate clamping issues
Hello,
I was attempting to throttle egress traffic to a specific rate using a
tbf. As a starting point I used an example from the LARTC howto, which
goes:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
I then attempted a large fetch from another machine via wget (~40 megs)
and the rate was clamped down to about 12 Kbytes/s. As this seemed too
severe a cut, I gradually increased
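For scale, 220kbit is roughly 27 Kbytes/s, so 12 Kbytes/s suggests the 1540-byte bucket cannot refill fast enough between timer ticks. A minimal sketch of the usual remedy, with the burst size purely illustrative:

tc qdisc del dev eth1 root
tc qdisc add dev eth1 root tbf rate 220kbit burst 10kb latency 50ms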
2007 Jul 30 (17 replies): tc n00b
Hi everyone,
I'm new to tc but I need to use it to set up shaping on a new NAT box.
In short:
Each user must have their upload limited to 128kbit and downlink limited
to 256kbit.
Global bandwidth to be limited to 100Mbit
Interactive packets to have higher priority
200+ users, so need to match packets fast
So far I have managed to get the download limits working. However I need
to
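The preview ends here, but a minimal sketch of the per-user limit being described might look like this, assuming eth0 faces the users and using an illustrative client IP; with 200+ users, u32 hashing filters are the usual way to keep matching fast:

tc qdisc add dev eth0 root handle 1: htb default 90
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit                    # global cap
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 256kbit     # one user's downlink
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.0.2/32 flowid 1:10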
2004 Dec 28 (1 reply): Newb question: tc schedulers on 2 interfaces
Hi all! I'm new to this list, and hope for some clarity in this matter:
I have a home gateway with linux-2.6.9 and iproute2 (ver: 2.6.9). My
tc commands are as follows.
# eth0 internet scheduling:
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 512kbit burst 6k
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit burst
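The preview stops mid-command, but the second interface follows the same pattern. A sketch, assuming eth1 is the LAN-facing side and with an illustrative rate; each direction needs its own setup because an egress qdisc only shapes traffic leaving its interface:

tc qdisc add dev eth1 root handle 1: htb default 20
tc class add dev eth1 parent 1: classid 1:1 htb rate 512kbit burst 6k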
2006 Mar 28 (2 replies): prio, kernel 2.6: patch?
Hi to all,
I'm studying traffic shaping using kernel 2.6.8 (Debian Sarge).
Well, I have this problem: priority doesn't work.
I try with:
- qdisc prio:
tc qdisc add dev eth1 root handle 12: prio bands 3
tc qdisc add dev eth1 parent 12:1 handle 13: tbf rate 10Mbit buffer
1600 limit 3000
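One hedged guess at why priority appears not to work: prio only separates traffic that actually lands in different bands, and without filters the default priomap classifies purely on TOS bits, so most flows can end up in the same band. A sketch of steering one kind of traffic into the first band (the port is an illustrative assumption):

tc filter add dev eth1 parent 12: protocol ip u32 match ip dport 22 0xffff flowid 12:1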
2007 Feb 28 (1 reply): Xen and tc problems
Hi,
I am trying to shape traffic to two VMs hosted in Xen. There seems to be
very little information regarding this. I found this web page
http://www.ioncannon.net/system-administration/57/limiting-bandwidth-usage-on-xen-linux-setup/
and followed the instructions. But the real
bandwidth experienced from clients always seems to exceed the set rate.
Part of the problem may be because of the way
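A common approach, sketched on the assumption that dom0 sees each guest through a vifX.Y interface (the name and rate below are illustrative): shape egress on the guest's vif, since packets headed into the VM leave dom0 through it.

tc qdisc add dev vif1.0 root tbf rate 1mbit burst 10kb latency 50ms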
2004 Jun 25 (1 reply): TBF maximum bucket size
I'm trying to fill a token bucket with enough tokens to burst several gigs
of data. However, it doesn't seem to get any higher than ~3.9GB:
>tc qdisc add dev eth0 root tbf rate 1440kbit latency 50ms \
burst 16000000000
>tc qdisc show dev eth0
qdisc tbf 800b: rate 1440Kbit burst 3908420240b lat 2197.8s
A smaller attempt of ~1.6 gigs works just fine:
>tc qdisc
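A plausible reading, offered as an assumption rather than a confirmed diagnosis: the burst value passes through 32-bit fields on its way into the kernel, and 2^32 bytes is 4,294,967,296 (about 4.3 GB), so a 16 GB request wraps or clamps somewhere in the conversion, which would fit the ~3.9 GB ceiling seen here.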
2007 Jan 17 (1 reply): restricting bandwidth using TC
Hello,
I am trying to get the tc command to work on our Debian box to limit
traffic in and out to 12 Meg. The commands I am using are:
tc qdisc add dev eth0 root tbf rate 12000kbit latency 25ms burst 1600
tc qdisc add dev eth1 root tbf rate 12000kbit latency 25ms burst 1600
The problem I am having is that the bandwidth exceeds the 12 Meg by
almost 5 Meg.
Any help is appreciated.
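One way to narrow this down, as a sketch rather than a diagnosis: read the qdisc's own counters and compare them against the wall clock, which tells you whether tbf itself is passing more than 12 Meg or the excess is a measurement artifact (payload vs. on-the-wire bytes, Mbit vs. Mbyte):

tc -s qdisc show dev eth0     # sample "Sent N bytes" twice; the delta over time is the true shaped rate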
2005 Nov 06 (1 reply): tc qdisc replace failing
Hi,
Having issues getting a replace command working correctly. The error reported
is "RTNETLINK answers: Invalid argument", which isn't descriptive or helpful.
The command I'm running is:
tc qdisc replace dev ppp0 parent 8001:D handle D: tbf rate 5Kbit burst 5kb
latency 70ms
The idea being to replace an sfq with handle D and hopefully limit a certain
user in my
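A hedged workaround, assuming the EINVAL comes from swapping one qdisc type for another in place: delete the sfq first, then add the tbf fresh, with the handles kept as in the post:

tc qdisc del dev ppp0 parent 8001:D
tc qdisc add dev ppp0 parent 8001:D handle D: tbf rate 5kbit burst 5kb latency 70ms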
2007 Aug 11 (1 reply): tc and multiple ip on a device
Hi,
I'm sort of testing a configuration and things are not working as I
planned.
I have the following network diagram: PC1 to PC7 connected on the same
ethernet hub.
PC1 PC2 PC3 PC4 PC5 PC6 on network 192.168.5.0
PC6 and PC7 on network 192.168.1.0
So PC6 works as a router. In addition, PC6 is connected to both
networks on the same device, eth0.
Now on PC6, I put a tbf on dev eth0 root
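A root tbf on eth0 would throttle both subnets, since they share the device. A sketch of limiting only the 192.168.1.0 side instead, using a classful root plus a filter (rates are illustrative):

tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 128kbit     # limited subnet
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit     # everything else
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 192.168.1.0/24 flowid 1:10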
2003 Jul 13 (1 reply): slowing down traffic to a certain port
This is my first attempt at understanding LARTC:
I want to throttle outgoing bandwidth for a certain TCP port and leave
other traffic the way it was.
So I put a prio qdisc at the root of eth0 (dummy priomap, since I want to use
filters to switch bands):
$ tc qdisc add dev eth0 root handle 1: prio bands 2 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Then I attach a tbf qdisc at 1:2:
$ tc qdisc add dev
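The preview is cut off, but the stated plan continues naturally along these lines; a sketch, with the rate and port as illustrative assumptions:

$ tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 64kbit burst 5kb latency 70ms
$ tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 8080 0xffff flowid 1:2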
2005 Nov 14 (1 reply): Using TBF to throttle a PC to 5kbps
Hi Everyone,
This is a simple question, but I don't understand why the below tbf is not
working as expected by throttling traffic to 5kbps.
If I throttle a PC's traffic using the below, then when traffic exceeds 5kbps
packets start getting dropped (as they should), but all traffic gets dropped,
not just the bit over 5kbps.
TC="tc qdisc add dev ppp0"
$TC parent 8001:2 handle 2:
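One hedged reading of "all traffic gets dropped": tbf's limit sets how much data may queue while waiting for tokens, and a very small limit means nearly everything is dropped once the bucket empties. A sketch with a roomier queue, values illustrative (note that tc reads "kbps" as kilobytes per second, so 5kbit is the spelling for 5 kilobits):

tc qdisc add dev ppp0 parent 8001:2 handle 2: tbf rate 5kbit burst 1600 limit 10kb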
2004 Oct 13 (2 replies): Resetting traffic history
I'm a tc newbie, and I think I am close to being able to use it to
control one of the virtual web sites on our Gentoo Linux server. The
site has its own IP address. I have a bit of a problem in that, the way
I originally configured tc, the busy site grabbed all the bandwidth,
leaving none for the other (and more important) sites. Here is how I
had configured it:
tc
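The preview ends before the configuration, but the usual way to wipe a setup and its accumulated state, assuming everything hangs off the root of eth0, is to delete the root qdisc; that discards all classes, filters, and counters, after which the hierarchy can be rebuilt with fairer rates:

tc qdisc del dev eth0 root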
2005 May 30 (3 replies): Question: traffic shaping (tc-htb)
Hello *!
My traffic shaping (tc-htb) drops packets very early, at least I suspect
this. It drops about 30% of the packets. The traffic-generating application
is running locally on the shaping host. I think I can lower this drop rate by
increasing the packet buffer, since the local application will slow down
as the buffer fills (TCP/IP backpressure). But I can't find any options for
that; I cycled through the manual
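If this is htb, the buffer being looked for lives in the leaf qdisc rather than in htb itself; each leaf defaults to a short fifo. A sketch, assuming a leaf class 1:10 exists and with an illustrative size:

tc qdisc add dev eth0 parent 1:10 handle 10: bfifo limit 200kb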
2003 Apr 03 (6 replies): tc problem
Hello..
I have a linux box and I want to prioritize the traffic generated by my
LAN's computers..
I don't have guaranteed bandwidth, so I want to use sfq...
I want to give traffic to ports 80, 443, 25 & 110 priority 1,
traffic with src or dest 192.168.0.2 priority 2,
and the rest priority 3..
I did the following:
tc qdisc add dev eth0 root handle 1:
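The preview stops after the root qdisc, but one common shape for this request looks roughly as follows; the filter priorities and the catch-all are assumptions, sfq could be attached under each band, the remaining ports (443, 25, 110) would repeat the first filter, and a matching "ip dst" filter would cover the other direction for 192.168.0.2:

tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip src 192.168.0.2/32 flowid 1:2
tc filter add dev eth0 parent 1: protocol ip prio 3 u32 match u32 0 0 flowid 1:3     # catch-all for the rest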
2002 Dec 10 (2 replies): tbf: rate and effective speed (newbie)
Probably this is an old question, but I'm not able to find anything about it...
So, I've just started to play with tc to limit the transfer speed of my
HDSL connection. I'm using the tbf and the command
# tc qdisc add dev eth0 root tbf rate 10kbit latency 50ms burst 1000
Then I've tried to transfer a big (20 Mbyte) file onto my LAN, using ftp,
and the client
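For calibration, hedged by the version-dependence discussed in the last thread on this page: tc generally parses 10kbit as 10,000 bit/s, i.e. about 1.2 Kbytes/s on the wire, and an FTP client reports payload only, so the figure it shows should sit somewhat below that. A wildly different measurement usually means the qdisc is not seeing the traffic at all.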
2004 Dec 28 (2 replies): Simple case here!
Hi All,
I want to set up a machine to connect to the internet at a limited rate of 64
kbps.
That machine is connected to a switch, so my LAN and internet both come
through the same eth0.
How can I limit only the internet access from this machine to 64kbps and
still use 100mbps for the LAN?
I am trying to implement this; please guide me if I am wrong.
I mark all the packets going out to the LAN.
Then I can
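A sketch of where the mark-based approach usually goes from here; the mark value and LAN subnet are illustrative assumptions:

iptables -t mangle -A OUTPUT -d 192.168.0.0/24 -j MARK --set-mark 1
tc qdisc add dev eth0 root handle 1: htb default 10                 # unmarked = internet
tc class add dev eth0 parent 1: classid 1:10 htb rate 64kbit        # internet, limited
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit       # LAN, effectively unshaped
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1 fw flowid 1:20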
2005 Apr 06 (3 replies): tbf latency problems!
Hi, I have found a problem related to tbf and the
latency that tbf calculates. I have used the
following parameters for burst and limit:
burst 100Kbit  limit 500Kbit  lat 81.8ms
burst 6Kbit    limit 6Kbit    lat 0us
burst 200Kbit  limit 100Kbit  lat 4294.9s
As you can see in the third column, the latency for 100Kbit
burst and 500Kbit limit is 81.8ms, but for 200Kbit burst and
100Kbit limit it is 4294.9s!!! How
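A worked guess at the wild third value: tc derives the displayed latency from roughly (limit - burst) / rate. With burst 200Kbit and limit 100Kbit the difference is negative, and held as an unsigned 32-bit microsecond count it wraps to about 2^32 us = 4294.967296 s, which is exactly the 4294.9s printed. The practical rule would be to keep limit >= burst.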
2007 May 10 (6 replies): PRIO and TBF is much better than HTB??
Hello mailing list,
I stand before a mystery and cannot explain it :-). I want to do shaping and
prioritization, and I have done the following configurations and
simulations. I can't explain why the combination of PRIO and TBF is much
better than HTB (with the prio parameter) alone or in combination with
SFQ.
Here are my example configurations: two traffic classes, http (port 80 = 0x50) and
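For readers comparing along, a sketch of the PRIO+TBF combination being described, with rates, the port, and handles as illustrative assumptions:

tc qdisc add dev eth0 root handle 1: prio bands 2
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 2mbit burst 10kb latency 50ms
tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 1mbit burst 10kb latency 50ms
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:1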
2005 Apr 08 (2 replies): About sockets in "CLOSING" state
Hi,
I have met the following problem: when I use the shaping discipline
tc qdisc add dev ppp0 parent 1:2 tbf latency 50ms burst 1450 rate 50kbit
one of my applications (namely, "aMule") starts leaving sockets in
"CLOSING" state. These sockets accumulate and do not disappear.
Eventually I have so many of these dead sockets that the kernel warns
"Out of socket memory" in
2006 Feb 23 (1 reply): 1k: 1000 or 1024?
The docs[1][2] suggest it's 1024, but tc says something else:
# tc qdisc add dev eth0 root tbf rate 1kbps latency 50ms burst 1500
# tc -s qdisc ls dev eth0
qdisc tbf 8009: rate 8000bit burst 1499b lat 48.8ms
                     ^^^^^^^
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
If 1k were 1024, then I would have 8192bit above.
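The arithmetic in the output answers the question for this tc version: "kbps" here means kilobytes per second, and 1kbps printing as 8000bit is 1000 bytes/s x 8 bits, so k = 1000 for rates, whatever the docs claim. Whether sizes such as burst use 1024 is a separate, version-dependent question.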