Hi,

I did comprehensive testing with the 2.3 and 2.4.20 kernels, using an Ixia
as a traffic generator to pump data into the Linux box. The Linux box has
two interfaces: one for data input and one for data output.
 ------                 -------
|      | 1 ---> eth0   |       |
| Ixia |               | Linux |
|      | 2 <--- eth1   |       |
 ------                 -------
I set the policing configuration on eth1; the Linux box forwarded every
packet it received on interface eth0.
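For reference, the eth1 setup looked roughly like the following sketch; the
handle/classid numbers and the 1000kbit rate are illustrative placeholders,
not my exact command history:

```shell
# Root HTB qdisc on the output interface; unclassified traffic goes to 1:20.
tc qdisc add dev eth1 root handle 1: htb default 20

# The class under test, carrying the rate limit (1000kbit as an example).
tc class add dev eth1 parent 1: classid 1:1 htb rate 1000kbit burst 6k

# Match all IPv4 traffic into the rate-limited class.
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
    match ip dst 0.0.0.0/0 flowid 1:1
```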
In my tests I tried different rate limits and different packet sizes, and
found a linear relation between the burst and the traffic rate that gave
the most accurate results.
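That linear burst/rate relation is what you would expect from a timer-driven
token bucket: with HZ timer ticks per second, the bucket must hold at least
rate/HZ bytes per tick or the class can never sustain its configured rate.
A back-of-the-envelope sketch (the HZ=100 value is my assumption for a
typical 2.4 kernel clock):

```python
def min_burst_bytes(rate_kbit, hz):
    """Smallest burst (in bytes) a timer-driven shaper needs to
    sustain rate_kbit with hz timer ticks per second."""
    rate_bytes_per_sec = rate_kbit * 1000 / 8
    return rate_bytes_per_sec / hz

# Rates from the tests below, assuming a HZ=100 clock:
for rate in (1000, 2000, 6000, 8000):
    print(f"{rate} kbit -> burst >= {min_burst_bytes(rate, hz=100):.0f} bytes")
```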
I am now trying to verify the same results using the 2.5.68 kernel.

The reason it works for you is that you were using a rate limit of 1000kbit
and a packet size of 512 bytes. I reproduced your test, and it does look
like it works fine. But when I tried different packet sizes and different
rates, the HTB scheduling doesn't work well.
The following is a sample of test results:
Test 1:
  Pumping:    74 Mbit/s
  Rate limit: 1000 kbit

  Packet size (bytes)   Received traffic (eth1)
  64                    27 Mbit/s
  128                   42 Mbit/s
  512                   1.3 Mbit/s
  1500                  1.1 Mbit/s

Test 2:
  Pumping:    74 Mbit/s
  Rate limit: 2000 kbit

  Packet size (bytes)   Received traffic (eth1)
  64                    27 Mbit/s
  512                   4 Mbit/s
  1500                  2.4 Mbit/s

Test 3:
  Pumping:    74 Mbit/s
  Rate limit: 6000 kbit

  Packet size (bytes)   Received traffic (eth1)
  64                    27 Mbit/s
  512                   74 Mbit/s
  1500                  6 Mbit/s

Test 4:
  Pumping:    74 Mbit/s
  Rate limit: 8000 kbit

  Packet size (bytes)   Received traffic (eth1)
  64                    27 Mbit/s
  512                   74 Mbit/s
  1500                  12 Mbit/s
As you can see from the results, the rate-shaping functionality doesn't
work in most cases. As I mentioned before, these tests work very well
on 2.4.20.
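When comparing kernels, it is also worth dumping the per-class counters
during a run to see whether packets are being dropped by the shaper or
simply passed through unshaped (eth1 as in the setup above):

```shell
# Per-class byte/packet counters, drops and overlimits on the shaped
# interface; run twice during a test and compare the deltas.
tc -s class show dev eth1

# Qdisc-level statistics, including total drops:
tc -s qdisc show dev eth1
```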
Yaron
--- Jose Luis Domingo Lopez <lartc@24x7linux.com> wrote:
> On Saturday, 03 May 2003, at 19:20:21 -0700,
> Yaron Benita wrote:
>
> > I enabled the HTB scheduler option in the kernel.
> > Then I created an HTB qdisc on eth1 using the "tc"
> > tool. I added a class with a rate limit of 30000kbit,
> > and a filter attached to this class.
> >
> Please send your mails to the mailing list first,
> rather than only to individual people. You will get
> more and better answers, and everything will be
> archived and searchable through the usual search
> engines.
>
> > I used a traffic generator to send data to eth0,
> > which forwarded the data to eth1 and then back to
> > the traffic generator. I sent a bandwidth of
> > 70000kbit, and the same traffic was forwarded
> > through interface eth1.
> >
> > This test shows that the traffic was not shaped
> > to 30000kbit. The same test works great in 2.4.20.
> >
> I have set up wondershaper 1.1a on my box, configured
> it for an upload limit of 1000 kbit/s, and used
> "netcat" on the server and client sides. Measuring
> bandwidth with both "iptraf" and "gkrellm", the
> transfer rates are the ones configured.
>
> The configuration used in my test follows. Traffic
> was generated on the client side via
> "cat /large/archive | nc remote_ip 8000", and received
> on the server side with "nc -l -p 8000 > /dev/null".
> Hope it helps.
>
> #!/bin/bash
> DOWNLINK=1000
> UPLINK=1000
> DEV=eth0
>
> if [ "$1" = "status" ]
> then
>     tc -s qdisc ls dev $DEV
>     tc -s class ls dev $DEV
>     exit
> fi
>
> # clean existing down- and uplink qdiscs, hide errors
> tc qdisc del dev $DEV root 2> /dev/null > /dev/null
> tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
>
> if [ "$1" = "stop" ]
> then
>     exit
> fi
>
> ###### uplink
> # install root HTB, point default traffic to 1:20:
> tc qdisc add dev $DEV root handle 1: htb default 20
> # shape everything at $UPLINK speed - this prevents huge queues in your
> # DSL modem which destroy latency:
> tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 6k
> # high prio class 1:10:
> tc class add dev $DEV parent 1:1 classid 1:10 htb rate ${UPLINK}kbit \
>     burst 6k prio 1
> # bulk & default class 1:20 - gets slightly less traffic,
> # and a lower priority:
> tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
>     burst 6k prio 2
> tc class add dev $DEV parent 1:1 classid 1:30 htb rate $[8*$UPLINK/10]kbit \
>     burst 6k prio 2
> # all get Stochastic Fairness:
> tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
> tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
> tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
> # TOS Minimum Delay (ssh, NOT scp) in 1:10:
> tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
>     match ip tos 0x10 0xff flowid 1:10
> # ICMP (ip protocol 1) in the interactive class 1:10 so we
> # can do measurements & impress our friends:
> tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
>     match ip protocol 1 0xff flowid 1:10
> # To speed up downloads while an upload is going on, put ACK packets in
> # the interactive class:
> tc filter add dev $DEV parent 1: protocol ip prio 10 u32 \
>     match ip protocol 6 0xff \
>     match u8 0x05 0x0f at 0 \
>     match u16 0x0000 0xffc0 at 2 \
>     match u8 0x10 0xff at 33 \
>     flowid 1:10
> # rest is 'non-interactive' ie 'bulk' and ends up in 1:20
> tc filter add dev $DEV parent 1: protocol ip prio 18 u32 \
>     match ip dst 0.0.0.0/0 flowid 1:20
>
> ########## downlink #############
> # slow downloads down to somewhat less than the real speed to prevent
> # queuing at our ISP. Tune to see how high you can set it.
> # ISPs tend to have *huge* queues to make sure big downloads are fast
> #
> # attach ingress policer:
> tc qdisc add dev $DEV handle ffff: ingress
>
> # filter *everything* to it (0.0.0.0/0), drop everything that's
> # coming in too fast:
> tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
>     0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
>
> --
> Jose Luis Domingo Lopez
> Linux Registered User #189436
> Debian Linux Sid (Linux 2.5.68)
_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/