Hello Thilo,
What did you find superior about the CBQ wondershaper compared to the
HTB wondershaper? We have not been using the wondershaper specifically, but
our simple tests so far seem to show that HTB is much easier to configure
for a given target shape (i.e., more accurate) than CBQ.
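For instance (a minimal sketch with illustrative rates, not taken from either
setup): shaping an uplink to 128 kbit with HTB only needs the target rate,
while CBQ also wants bandwidth/avpkt/allot parameters that must fit the link
before the resulting rate comes out accurate:

# HTB: the target rate is essentially all that has to be given
tc qdisc add dev ppp0 root handle 1: htb default 1
tc class add dev ppp0 parent 1: classid 1:1 htb rate 128kbit ceil 128kbit

# CBQ: the rate is only accurate if bandwidth/avpkt/allot match the link
tc qdisc add dev ppp0 root handle 1: cbq bandwidth 128kbit avpkt 1000 cell 8
tc class add dev ppp0 parent 1: classid 1:1 cbq bandwidth 128kbit \
rate 128kbit allot 1500 prio 5 bounded isolated avpkt 1000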
Torsten
-----Original Message-----
From: Thilo Schulz [mailto:arny@ats.s.bawue.de]
Sent: Saturday, June 14, 2003 8:55 AM
To: lartc@mailman.ds9a.nl
Subject: [LARTC] Low latency on large uploads - almost done but not quite.
Hello,
The wondershaper did not quite work the way I wanted it to, with the
CBQ wondershaper even giving better results than the HTB wondershaper, so I
assembled my own little traffic shaper script.
It already does much of the dirty work and is successful in keeping the
latency of non-bulk traffic down.
I have a 128 kbit/s uplink and a 768 kbit/s downlink. I do not have to care
much about downloads, as the downlink is fast enough. The more serious
problem arises when the uplink is swamped, for example by a larger upload.
My demands in these situations are very high, as I may well be playing
EliteForce, a game using the quake3 engine, which needs low latency.
I am using the HTB qdisc.
Generally, I have set up four classes:
- the first is where interactive traffic goes and has some guaranteed
  bandwidth;
- the second is for ACK packets, which get their share of the uplink to
  ensure fast downloading while a big upload is running :);
- the third is for web requests, like sending HTTP requests to pages, with
  little bandwidth and priority;
- the last one is the default class.
Any traffic not matched by a filter goes into the default class, which has
almost no guaranteed rate but may ceil up to 15 kbyte/s, the maximum allowed
for the link; the same ceiling applies to all the classes.
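Schematically (rates as configured in the script further below):

1: root HTB qdisc, default 13
 `- 1:1  rate 15kbps ceil 15kbps
     |- 1:10  interactive/games  rate 7kbps  ceil 15kbps  prio 0
     |- 1:11  ACKs               rate 7kbps  ceil 15kbps  prio 1
     |- 1:12  HTTP requests      rate 2kbps  ceil 15kbps  prio 10
     `- 1:13  default (bulk)     rate 1bps   ceil 15kbps  prio 20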
This principle works quite well: I have managed to get the latency for the
interactive class well below 150 ms, where it would be > 2000 ms without the
traffic shaper. I already regard this as quite an accomplishment, though it
is not yet good enough for a gamer. I need stable pings, and right now they
swing between about 60 and 150 ms. To demonstrate what I mean, here is the
ping output:
64 bytes from 2int.de (217.160.128.207): icmp_seq=347 ttl=56 time=128 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=348 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=349 ttl=56 time=133 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=350 ttl=56 time=143 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=351 ttl=56 time=60.4 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=352 ttl=56 time=63.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=353 ttl=56 time=60.9 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=354 ttl=56 time=60.7 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=355 ttl=56 time=65.4 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=356 ttl=56 time=64.5 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=357 ttl=56 time=64.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=358 ttl=56 time=72.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=359 ttl=56 time=82.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=360 ttl=56 time=99.6 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=361 ttl=56 time=99.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=362 ttl=56 time=107 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=363 ttl=56 time=127 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=364 ttl=56 time=128 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=365 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=366 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=367 ttl=56 time=59.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=368 ttl=56 time=61.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=369 ttl=56 time=63.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=370 ttl=56 time=62.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=371 ttl=56 time=91.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=372 ttl=56 time=90.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=373 ttl=56 time=87.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=374 ttl=56 time=86.7 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=375 ttl=56 time=86.5 ms
You get the idea - at first it's nice, around 60 ms (this is the default
ping without any upload), but with an upload and the traffic shaper running
the ping climbs to about 140 ms, after a while drops back down to 60, rises
to 140 again, and so forth.
Does any of you have an idea how I can minimize this effect and keep pings
stable at 60 ms? A stable 80 ms delay would be okay for me too, no question.
If I let the worst-priority bulk class ceil at only 10 kbyte/s, I get the
same effect; only when that class's ceil is pushed below 6 kbyte/s does the
oscillating ping disappear.
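For reference, the adjustment described above can also be made on the fly
with "tc class change" (the 6 kbyte/s value is from the text; tc writes
kilobytes per second as "kbps"):

# same class parameters as in the script below, but with the ceil lowered
tc class change dev ppp0 parent 1:1 classid 1:13 htb rate 1bps ceil 6kbps \
prio 20 burst 1b cburst 1b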
Here's my script, in case you are interested in looking at it.
#!/bin/bash
DEV=ppp0
# delete any qdiscs or rule sets created so far.
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
# tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
# create the root qdisc
tc qdisc add dev $DEV root handle 1: htb default 13
# install a root class, so that the child classes can borrow from each other.
tc class add dev $DEV parent 1: classid 1:1 htb rate 15kbps ceil 15kbps
# now install 4 sub classes for different priorities
# highest priority for low latency games like quake3 and ssh / ftp control.
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 7kbps ceil 15kbps \
prio 0 burst 20000b cburst 22000b
# not as high, but still high priority for ACKs - useful for keeping
# large d/l's alive :)
tc class add dev $DEV parent 1:1 classid 1:11 htb rate 7kbps ceil 15kbps \
prio 1 burst 200b cburst 200b
# very little bandwidth allowed for HTTP requests, but still higher
# priority than bulk uploads.
tc class add dev $DEV parent 1:1 classid 1:12 htb rate 2kbps ceil 15kbps \
prio 10 burst 1b cburst 1b
# bulk uploads have no prio :D
tc class add dev $DEV parent 1:1 classid 1:13 htb rate 1bps ceil 15kbps \
prio 20 burst 1b cburst 1b
# now make all qdiscs simple pfifo
# small queues for minimum latency
tc qdisc add dev $DEV parent 1:10 handle 20: pfifo limit 0
tc qdisc add dev $DEV parent 1:11 handle 30: pfifo limit 0
# larger queues for the classes that can tolerate more latency.
tc qdisc add dev $DEV parent 1:12 handle 40: pfifo limit 5
tc qdisc add dev $DEV parent 1:13 handle 50: pfifo limit 20
# quake3-style udp games have been marked with fwmark 1 in iptables
tc filter add dev $DEV protocol ip parent 1: prio 0 handle 1 fw flowid 1:10
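# (the marking rule itself lives outside this script; a mangle rule along
# these lines would set fwmark 1 - port 27960 is quake3's default and only
# an assumption here:
#   iptables -t mangle -A OUTPUT -p udp --dport 27960 -j MARK --set-mark 1 )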
# icmp to get the response times.
tc filter add dev $DEV protocol ip parent 1: prio 1 u32 match ip protocol 1 \
0xff flowid 1:10
# ssh - not scp! interactive ssh sets TOS 0x10 (minimize-delay), which
# separates it from scp's bulk transfers
tc filter add dev $DEV protocol ip parent 1: prio 2 u32 match ip dport 22 \
0xffff match ip tos 0x10 0xff flowid 1:10
# ftp
tc filter add dev $DEV protocol ip parent 1: prio 3 u32 match ip dport 21 \
0xffff match ip tos 0x10 0xff flowid 1:10
# ACK packets: IP header exactly 20 bytes (IHL=5, no options), total length
# under 64 bytes, and the TCP flags byte at offset 33 set to ACK only
tc filter add dev $DEV protocol ip parent 1: prio 4 u32 match ip protocol 6 \
0xff match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 \
match u8 0x10 0xff at 33 flowid 1:11
# HTTP requests
tc filter add dev $DEV protocol ip parent 1: prio 10 u32 match ip dport 80 \
0xffff flowid 1:12
# that's it for now ...
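# while testing, the per-class byte/packet counters show which class
# traffic actually lands in:
#   tc -s class show dev ppp0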
- Thilo Schulz
_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/