I run a small Linux webserver and NAT router behind my cable modem at home. Whenever someone starts an http download, all other traffic from my LAN is starved. Bandwidth is not really an issue, but latency is particularly horrible -- pings that usually come back in 20ms can take up to 600ms while the web server is active!

I set up QoS (netfilter + iproute2) on the NAT machine in an attempt to give priority to non-web traffic. At first I tried the "prio" packet scheduler, which maintains three outgoing queues. I put everything but web traffic in queue zero, and the http packets in queue two (the scheduler only transmits packets from a queue when the lower-numbered queues are empty, thus giving precedence to the lower-numbered queues). I verified using printk's in the kernel modules that packets were indeed being prioritized and queued in the manner I describe.

HOWEVER, this QoS setup does not reduce my latency problems at all!! Despite the packet prioritization, pings still shoot up into the 500ms range, and UDP round-trip latency still becomes awful during a long http upload.

Was I wrong in assuming that priority-band scheduling would fix my problem? I looked at using full CBQ, but I have no idea what options would be correct for my setup. Does anyone else have experience with solving problems like this? I just wish I could see the same 20ms pings even during a modest amount of web traffic...

Many thanks,
Dan
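For concreteness, a minimal sketch of the three-band prio setup Dan describes. The interface name (eth0) and matching web traffic by source port 80 are assumptions for illustration, not details from the post; lower-numbered bands are always dequeued first.

#!/bin/sh
# Sketch only: device name and port are assumptions.
tc qdisc add dev eth0 root handle 1: prio

# Outgoing http (source port 80) into the lowest-priority band 1:3
# ("queue two" in the post's zero-based numbering).
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip sport 80 0xffff flowid 1:3

# Everything else into the highest-priority band 1:1 ("queue zero").
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip dst 0.0.0.0/0 flowid 1:1

As the rest of the thread explains, this alone cannot help when the queue that actually fills up sits inside the modem: prio can only reorder packets while a queue exists in Linux.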
Mario Giammarco
2002-Feb-15 18:23 UTC
Re: priority bands don't reduce interactive latency?
On Wed, 2002-02-13 at 16:11, Danny Lepage wrote:
> You didn't say if you were using DSL / Cable modem whatever but for

Ok, I will be more explicit:

main computer ------------ router (486dx2) ------------ serial modem 56k
192.168.0.1                192.168.0.10                  dynamic
                           htb max 50kbit | prio

> In this case, you need to do shaping so that the traffic is queued in
> linux, not in the modem.
>
> Have a look at
> http://ds9a.nl/lartc/HOWTO//cvs/2.4routing/output/2.4routing-15.html#ss15.8
> for further information.

I started reading the howto even before htb was programmed. I did an exam
project on networking where I showed how to implement diffserv using cbq.
I then discovered that cbq was not accurate. In the new howto I discovered
htb, but it seems that latency still goes very high. I used the script
from the howto; I will publish it if you ask, but it seems there is some
serious problem in htb or prio. I am not alone, as you can see:

> > I have just read this message (of 2 years ago...) and I have the same
> > problem: I have a similar setup (I have used prio and htb) and when
> > the link is full, the latency of classes with high priority rises from
> > 200 to 2000ms. The htb tutorial shows that the latency of classes with
> > prio 0 goes down. Using prio does not work. Is there something
> > missing?
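The advice in the quoted HOWTO section amounts to shaping below the physical link rate so the queue builds in Linux rather than in the modem. A minimal sketch under assumed numbers: a 56k modem rarely sustains more than roughly 40kbit upstream, so the 30kbit cap here is a guess to adjust, not a figure from the thread.

#!/bin/sh
# Sketch: cap egress below the modem's real throughput so queuing
# happens in Linux. All rates are assumptions for a 56k link.

tc qdisc add dev ppp0 root handle 1: htb default 20
tc class add dev ppp0 parent 1: classid 1:1 htb rate 30kbit ceil 30kbit

# Interactive class, dequeued first under contention (prio 0).
tc class add dev ppp0 parent 1:1 classid 1:10 htb rate 10kbit ceil 30kbit prio 0
# Bulk class takes whatever is left.
tc class add dev ppp0 parent 1:1 classid 1:20 htb rate 20kbit ceil 30kbit prio 1

# Send icmp to the interactive class; everything else defaults to 1:20.
tc filter add dev ppp0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 1 0xff flowid 1:10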
Martin Devera
Re: priority bands don't reduce interactive latency?

post your conf. hard to say without it ..

> I then discovered that cbq was not accurate.
> In the new howto I discovered htb, but it seems that latency goes very
> high. I used the script from the howto. I will publish it if you ask,
> but it seems there is some serious problem in htb or prio. I am not
> alone, as you can see.
>
> > > I have just read this message (of 2 years ago...) and I have the
> > > same problem: I have a similar setup (I have used prio and htb) and
> > > when the link is full, the latency of classes with high priority
> > > rises from 200 to 2000ms. The htb tutorial shows that the latency of
> > > classes with prio 0 goes down. Using prio does not work. Is there
> > > something missing?
Mario Giammarco
2002-Feb-16 12:47 UTC
Re: priority bands don't reduce interactive latency?
On Fri, 2002-02-15 at 19:27, Martin Devera wrote:
> post your conf. hard to say without it ..

Ok, if I can, this is my conf:

#EGRESS VIA PPP0

#root class
echo ppp0 root class
tc qdisc $1 dev ppp0 root handle 1: prio

#subclasses
echo subclasses
#tc qdisc $1 dev ppp0 parent 1:1 handle 10: sfq
#tc qdisc $1 dev ppp0 parent 1:2 handle 20: tbf rate 20kbit buffer 1600 limit 3000
#tc qdisc $1 dev ppp0 parent 1:2 handle 20: sfq
#tc qdisc $1 dev ppp0 parent 1:3 handle 30: sfq

tc qdisc $1 dev ppp0 parent 1:1 handle 10: red min 200 max 400 avpkt 50 \
    burst 10 limit 600
tc qdisc $1 dev ppp0 parent 1:2 handle 20: red min 300 max 400 avpkt 150 \
    burst 10 limit 700
tc qdisc $1 dev ppp0 parent 1:3 handle 30: red min 1500 max 8000 avpkt 250 \
    burst 10 limit 20000

# filters

echo ssh filter
# ssh
tc filter add dev ppp0 parent 1:0 protocol ip prio 11 u32 \
    match ip tos 0x10 0xff classid 1:2

echo icmp filter
# icmp
tc filter add dev ppp0 parent 1:0 protocol ip prio 12 u32 \
    match ip protocol 1 0xff classid 1:2

echo ack filter
# ack
tc filter add dev ppp0 parent 1: protocol ip prio 10 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u8 0x34 0xff at 3 \
    match u8 0x10 0xff at 33 \
    classid 1:1

echo rest filter
# everything else
tc filter add dev ppp0 parent 1: protocol ip prio 14 u32 \
    match ip dst 0.0.0.0/0 classid 1:3

echo udp filter
# udp
iptables -A OUTPUT -t mangle -p udp -j MARK --set-mark 2
tc filter add dev ppp0 parent 1: protocol ip prio 13 handle 2 fw \
    classid 1:2

#INBOUND VIA ETH0 (traffic toward the LAN)

echo eth0 root class
#root class
tc qdisc $1 dev eth0 root handle 1: htb default 13
tc class $1 dev eth0 parent 1: classid 1:1 htb rate 51kbit \
    ceil 52kbit burst 3k

echo subclasses
#subclasses
tc class $1 dev eth0 parent 1:1 classid 1:10 htb rate 4kbit burst 1k \
    prio 1 ceil 50kbit
tc class $1 dev eth0 parent 1:1 classid 1:11 htb rate 25kbit burst 3k \
    prio 2 ceil 50kbit
tc class $1 dev eth0 parent 1:1 classid 1:12 htb rate 7kbit burst 2k \
    prio 3 ceil 50kbit
tc class $1 dev eth0 parent 1:1 classid 1:13 htb rate 4kbit burst 1k \
    prio 4 ceil 50kbit

#tc qdisc $1 dev eth0 parent 1:10 handle 10: sfq
#tc qdisc $1 dev eth0 parent 1:11 handle 20: sfq
#tc qdisc $1 dev eth0 parent 1:12 handle 30: sfq
#tc qdisc $1 dev eth0 parent 1:13 handle 40: sfq

tc qdisc $1 dev eth0 parent 1:10 handle 10: red min 200 max 400 avpkt 50 \
    burst 10 limit 600
tc qdisc $1 dev eth0 parent 1:11 handle 20: red min 300 max 1500 avpkt 150 \
    burst 10 limit 700
tc qdisc $1 dev eth0 parent 1:12 handle 30: red min 1500 max 8000 avpkt 250 \
    burst 20 limit 20000
tc qdisc $1 dev eth0 parent 1:13 handle 40: red min 1500 max 8000 avpkt 250 \
    burst 10 limit 20000

# filters

echo ssh
# ssh
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
    match ip tos 0x10 0xff classid 1:11

echo icmp
# icmp
tc filter add dev eth0 parent 1: protocol ip prio 11 u32 \
    match ip protocol 1 0xff classid 1:11

echo ack
# ack
tc filter add dev eth0 parent 1: protocol ip prio 13 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u8 0x34 0xff at 3 \
    match u8 0x10 0xff at 33 \
    classid 1:10

echo rest
# everything else
tc filter add dev eth0 parent 1: protocol ip prio 15 u32 \
    match ip dst 0.0.0.0/0 classid 1:13

echo www
# www
iptables -A PREROUTING -t mangle -p tcp --dport 8080 \
    -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -p tcp --sport 8080 \
    -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -p tcp --dport 80 \
    -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -p tcp --sport 80 \
    -j MARK --set-mark 1
tc filter add dev eth0 parent 1: protocol ip prio 14 handle 1 fw \
    classid 1:12

echo udp
# udp
iptables -A PREROUTING -t mangle -p udp -j MARK --set-mark 2
tc filter add dev eth0 parent 1: protocol ip prio 12 handle 2 fw \
    classid 1:11
Ross Skaliotis
2002-Feb-16 16:13 UTC
Re: priority bands don't reduce interactive latency?
Hmm, very interesting setup indeed. I have a few suggestions.

Right now you are making your root qdisc the prio qdisc. Instead of this,
you might want to make the root a class-based qdisc that can control the
total upstream bandwidth of your ppp0 connection. I guess you're doing
that now with RED, but RED isn't quite the best solution here. Under the
class-based qdisc, which shapes your bandwidth down to a little under
your device's upstream bandwidth (so that queuing occurs in Linux and not
in your device), is where you can place your prio qdisc. On the prio
bands you can then place SFQs, or whatever you wish.

As a final step, you can limit the traffic coming into your ppp0 device
with an ingress filter to decrease latency on that end too. There's no
real need to do much else in terms of controlling the ingress end, as it
doesn't work the same way as controlling your upstream: you don't have
much control over the order in which people send you packets.

So, a configuration/script like this might work for you:

#Sets up root qdisc and limits all traffic to 100Kbit/s. Make sure to
#change this to a little bit under whatever your link can support.
tc qdisc del dev ppp0 root
tc qdisc add dev ppp0 root handle 1: cbq bandwidth 10Mbit avpkt 1000
tc class add dev ppp0 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate \
    100Kbit allot 1514 weight 10Kbit prio 5 maxburst 0 avpkt 1400 bounded \
    isolated

#Sets up a prio qdisc with 2 bands, one for normal uploading traffic and
#one for prioritized low-latency traffic.
tc qdisc add dev ppp0 parent 1:1 handle 2: prio bands 2 priomap 1 1 1 1 1 \
    1 1 1 1 1 1 1 1 1 1 1
tc qdisc add dev ppp0 parent 2:1 handle 10: pfifo limit 128
tc qdisc add dev ppp0 parent 2:2 handle 20: sfq perturb 5 quantum 1514b

#Filters all priority traffic (in your case icmp, ssh, and ack packets)
#to the lower band in the prio qdisc. (Also makes traffic go through the
#cbq and prio qdiscs as it should.)
tc filter add dev ppp0 parent 1:0 protocol ip prio 5 u32 match u8 04 0x00 \
    at 0 flowid 1:1
tc filter add dev ppp0 parent 1:1 protocol ip prio 5 u32 match u8 04 0x00 \
    at 0 flowid 2:0

#ACK packets:
tc filter add dev ppp0 parent 2:0 protocol ip prio 2 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u8 0x34 0xff at 3 \
    match u8 0x10 0xff at 33 \
    flowid 2:1

#ICMP packets:
tc filter add dev ppp0 parent 2:0 protocol ip prio 2 u32 match ip protocol \
    1 0xff flowid 2:1

#SSH packets:
tc filter add dev ppp0 parent 2:0 protocol ip prio 3 u32 match ip sport 22 \
    0xffff flowid 2:1
tc filter add dev ppp0 parent 2:0 protocol ip prio 3 u32 match ip dport 22 \
    0xffff flowid 2:1

#Now, as a final step, add a policing ingress filter. Make sure to set the
#bandwidth to just under what your connection will support for downloads.
tc qdisc del dev ppp0 ingress handle ffff:
tc qdisc add dev ppp0 ingress handle ffff:
tc filter add dev ppp0 parent ffff: protocol ip prio 10 u32 match ip src \
    0.0.0.0/0 police rate 500kbit buffer 3000 drop flowid :1
#End

I hope this works for you,
-Ross Skaliotis

On 16 Feb 2002, Mario Giammarco wrote:
> On Fri, 2002-02-15 at 19:27, Martin Devera wrote:
> > post your conf. hard to say without it ..
> [Mario's configuration quoted in full - snipped; see his post above.]
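Ross's design (shape below the link rate at the root, prio underneath, SFQ on the bulk band, ingress policing) can equally be expressed with htb, which is what the rest of this thread is testing. A hedged sketch follows; the 100kbit cap mirrors Ross's placeholder and should be set just under the real upstream.

#!/bin/sh
# Sketch: Ross's cbq+prio layout re-expressed with htb. The 100kbit
# cap is a placeholder; set it just below your measured upstream.

tc qdisc del dev ppp0 root 2>/dev/null
tc qdisc add dev ppp0 root handle 1: htb default 1
tc class add dev ppp0 parent 1: classid 1:1 htb rate 100kbit ceil 100kbit

# Two-band prio under the shaped class: 2:1 interactive, 2:2 bulk.
tc qdisc add dev ppp0 parent 1:1 handle 2: prio bands 2 priomap \
    1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
tc qdisc add dev ppp0 parent 2:1 handle 10: pfifo limit 128
tc qdisc add dev ppp0 parent 2:2 handle 20: sfq perturb 5

# Ross's ACK/ICMP/SSH filters then attach unchanged at parent 2:0.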
Martin Devera
Re: priority bands don't reduce interactive latency?

Mario,

I looked at the conf. First, don't use prio 4 - it is the same as prio 3
(htb does prios 0..3).

How do you measure the latency? Ping? How big a packet, and which IP is
pinged? You use prio and other schedulers on ppp ... does the ping packet
go through them? Did you verify that packets are enqueued into the right
class (tc -s class show dev eth0)? There is no reason why it should not
work - but you must be sure where the packet is delayed.

Regarding your first mail - it seems you are not alone having the
problem .. who else? I've read something about a 2-year-old post; htb did
not exist at that time, so that problem might be related to the
prio+tbf+red chain?

If it could be an htb problem, I'm ready to look at it (although I've
handled almost ten reports like this and only the first two were really
htb bugs - the others were mainly bad setup, bad expectations or bad
measuring). I don't want to sound evasive, I'm just short of time as I
work on the new htb algorithm ;)

devik
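A sketch of the classification check devik asks for, using class IDs and addresses from Mario's config and diagram (class 1:11 is where his eth0 filter sends icmp; adjust everything to your own setup):

#!/bin/sh
# Sketch: confirm packets land in the intended class, run on the router.

# Generate classified traffic toward the LAN host (192.168.0.1).
ping -c 10 -q 192.168.0.1 &

# Watch per-class counters; the "Sent" figures for class 1:11 should
# grow while the ping runs, proving the icmp filter matched.
watch -n 1 'tc -s class show dev eth0'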
Mario Giammarco
2002-Feb-18 08:34 UTC
Re: priority bands don't reduce interactive latency?
On Sat, 2002-02-16 at 20:26, Martin Devera wrote:
> Mario,

First of all, I thank you all for your precious support! On the diffserv
mailing list there is far less help!

> I looked at the conf. First, don't use prio 4 - it is the same as
> prio 3 (htb does prios 0..3).

Ok, I have corrected it.

> How do you measure the latency? Ping? How big a packet, and which IP
> is pinged? You use prio and other schedulers on ppp ... does the ping
> packet go through them?

Sure, testing is difficult. Please note that I have tried different
configurations, not only the one I posted: sfq vs red, htb vs prio,
and so on.

> Did you verify that packets are enqueued into the right class
> (tc -s class show dev eth0)?

Unfortunately, the tc command shows me that packets are in the right
class.

> If it could be an htb problem, I'm ready to look at it (although I've
> handled almost ten reports like this and only the first two were
> really htb bugs - the others were mainly bad setup, bad expectations
> or bad measuring). I don't want to sound evasive, I'm just short of
> time as I work on the new htb algorithm ;)

No, I do not want to accuse you of a bug. I asked only because I do not
want to spend a month on various tests and THEN discover that there is a
KNOWN bug in htb. I remember when I had to implement diffserv with cbq
for my teacher: I had to cheat a lot, because cbq parameters do not do
what they claim. If you say htb is stable, I am happy. Please continue
developing new algorithms ;-)
Martin Devera
Re: priority bands don't reduce interactive latency?

> > How do you measure the latency? Ping? How big a packet, and which IP
> > is pinged? You use prio and other schedulers on ppp ... does the ping
> > packet go through them?
>
> Sure, testing is difficult. Please note that I have tried different
> configurations, not only the one I posted: sfq vs red, htb vs prio,
> and so on.

I believe you. The questions above were real; they were not intended to
show you that you are doing something wrong. ;) I'm really interested in
which test showed you the bad delay. If you can answer them, please do.
I'll be able to think about it more deeply and possibly give you some
hint.

> If you say htb is stable, I am happy.

It should be. However, people often use class load balancing without
priorities. I tested priorities, but that was a rather quick test, so it
is possible there is a bug in the prioritization part :) You should test
it by pinging the router machine, to be sure that the ping goes only
through the htb qdisc.

regards, devik
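devik's isolation test, sketched from the LAN side. The addresses follow Mario's diagram, and the bulk-transfer command is an assumption (any download that saturates the link will do):

#!/bin/sh
# Sketch: run from the LAN host (192.168.0.1). Pinging the router's
# LAN address means replies traverse only the eth0 htb qdisc, so any
# added delay must come from htb, not from the modem's buffer.

# Baseline with the link idle.
ping -c 10 192.168.0.10

# Saturate the downlink, then measure again; with working
# prioritization the latency should stay near the baseline.
wget -q -O /dev/null http://example.com/largefile &
ping -c 10 192.168.0.10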
Mario Giammarco
2002-Feb-21 15:03 UTC
Re: priority bands don't reduce interactive latency?
On Mon, 2002-02-18 at 11:00, Martin Devera wrote:
> It should be. However, people often use class load balancing without
> priorities. I tested priorities, but that was a rather quick test, so
> it is possible there is a bug in the prioritization part :) You should
> test it by pinging the router machine, to be sure that the ping goes
> only through the htb qdisc.

Hello,
I made some tests and now have two questions for you:

1) I use a 486dx2 66MHz as the router - is the CPU powerful enough?

2) In your latency tests I have seen that you generate traffic to
saturate the link, but you do not OVERLOAD it. Can you please redo your
simulation with more traffic?

Thank you very much for your interest and patience.
Martin Devera
Re: priority bands don't reduce interactive latency?

> I made some tests and now have two questions for you:
>
> 1) I use a 486dx2 66MHz as the router - is the CPU powerful enough?

Yes, it should be. It depends on your rates, of course; I'd expect about
1Mbit on such a machine. In tc -s qdisc there is a value deq_util 1/X
(under load). The X should be larger than 20; if it is lower, the qdisc
is overloaded. Unfortunately the value is computed using the TSC, which
is available only on Pentium CPUs ..

> 2) In your latency tests I have seen that you generate traffic to
> saturate the link, but you do not OVERLOAD it. Can you please redo your
> simulation with more traffic?

The link WAS overloaded during the test. You can't see it from the
graphs, as they show only the already-shaped figure .. But I have just
finished the new algorithm implementation and will rerun all the tests
with it. I'll focus on this part.

devik
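A sketch of the deq_util check devik describes. Per his note the field appears only in htb builds of that era and only on CPUs with a TSC, so on the 486 in question it will simply be absent:

#!/bin/sh
# Sketch: inspect htb's deq_util figure under load, per devik's note.
# Interpretation: in "deq_util 1/X", X > 20 means the qdisc keeps up;
# lower values suggest the CPU is the bottleneck.

# Put the link under load first (any bulk transfer), then:
tc -s qdisc show dev eth0 | grep -i deq_util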