Thilo Schulz
2003-Jun-14 15:54 UTC
Low latency on large uploads - almost done but not quite.
Hello,

The wondershaper did not work quite the way I wanted it to; the CBQ wondershaper even gave better results than the HTB one. So I assembled my own little traffic shaper script. It already does much of the dirty work and successfully polices non-bulk traffic down.

I have a 128 kbit/s uplink and a 768 kbit/s downlink. I do not have to care much about downloads, as the downlink is fast enough. The more serious problem is when the uplink is saturated, for example by a larger upload. My demands in these situations are very high, as I may well be playing EliteForce, a game using the quake3 engine, which needs low latencies.

I am using the HTB qdisc. I have set up four classes: the first is where interactive traffic goes and has some guaranteed bandwidth; the second is for ACK packets, which get their share of uplink to keep downloads fast while a big upload is running :); the third is for web requests, like sending HTTP requests, with little bandwidth and priority; and the last is the default queue. Any traffic not matched by a filter goes into the default class, which has almost no guaranteed rate but may ceil up to 15 kbyte/s, the maximum for the link. The same ceiling applies to all classes.

This principle works quite well: I have managed to get the latency for the interactive class well below 150 ms, where it would be > 2000 ms without the traffic shaper. I already regard this as quite an accomplishment, though it is not yet good enough for a gamer. I need stable pings, and right now the pings swing between about 60 and 150 ms.
To demonstrate what I mean, here is the ping output:

64 bytes from 2int.de (217.160.128.207): icmp_seq=347 ttl=56 time=128 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=348 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=349 ttl=56 time=133 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=350 ttl=56 time=143 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=351 ttl=56 time=60.4 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=352 ttl=56 time=63.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=353 ttl=56 time=60.9 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=354 ttl=56 time=60.7 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=355 ttl=56 time=65.4 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=356 ttl=56 time=64.5 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=357 ttl=56 time=64.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=358 ttl=56 time=72.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=359 ttl=56 time=82.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=360 ttl=56 time=99.6 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=361 ttl=56 time=99.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=362 ttl=56 time=107 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=363 ttl=56 time=127 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=364 ttl=56 time=128 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=365 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=366 ttl=56 time=136 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=367 ttl=56 time=59.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=368 ttl=56 time=61.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=369 ttl=56 time=63.3 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=370 ttl=56 time=62.0 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=371 ttl=56 time=91.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=372 ttl=56 time=90.1 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=373 ttl=56 time=87.2 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=374 ttl=56 time=86.7 ms
64 bytes from 2int.de (217.160.128.207): icmp_seq=375 ttl=56 time=86.5 ms

You get the idea: at first it is nice, around 60 ms (the default ping without any upload), but with an upload and the traffic shaper running the ping climbs to about 140 ms, after a while drops back down to 60, climbs to 140 again, and so forth. Does anyone have an idea how I can minimize this effect and keep pings stable at 60 ms? A stable 80 ms delay would be okay for me too, no question. If I let the worst-priority bulk class ceil up to only 10 kbyte/s I get the same effect; only when that class's ceil is put below 6 kbyte/s does the oscillating ping disappear.

Here is my script, if you are interested in looking at it:

#!/bin/bash

DEV=ppp0

# delete any qdiscs or rule sets created so far.
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
# tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

# create the root qdisc
tc qdisc add dev $DEV root handle 1: htb default 13

# install a root class, so that the classes below can borrow from each other.
tc class add dev $DEV parent 1: classid 1:1 htb rate 15kbps ceil 15kbps

# now install 4 sub classes for different priorities
# highest priority for low latency games like quake3 and ssh / ftp control.
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 7kbps ceil 15kbps \
    prio 0 burst 20000b cburst 22000b
# not as high, but still high priority for ACKs - useful for keeping large
# downloads alive :)
tc class add dev $DEV parent 1:1 classid 1:11 htb rate 7kbps ceil 15kbps \
    prio 1 burst 200b cburst 200b
# very little data allowed for HTTP requests, but still higher priority
# than bulk uploads.
tc class add dev $DEV parent 1:1 classid 1:12 htb rate 2kbps ceil 15kbps \
    prio 10 burst 1b cburst 1b
# bulk uploads have no prio :D
tc class add dev $DEV parent 1:1 classid 1:13 htb rate 1bps ceil 15kbps \
    prio 20 burst 1b cburst 1b

# now make all qdiscs simple pfifo
# small queues for minimum latency
tc qdisc add dev $DEV parent 1:10 handle 20: pfifo limit 0
tc qdisc add dev $DEV parent 1:11 handle 30: pfifo limit 0
# larger queues where more latency is tolerable.
tc qdisc add dev $DEV parent 1:12 handle 40: pfifo limit 5
tc qdisc add dev $DEV parent 1:13 handle 50: pfifo limit 20

# quake3-style udp games have been marked in iptables
tc filter add dev $DEV protocol ip parent 1: prio 0 handle 1 fw flowid 1:10
# icmp, to get the response times.
tc filter add dev $DEV protocol ip parent 1: prio 1 u32 \
    match ip protocol 1 0xff flowid 1:10
# ssh - not scp! scp is separated from ssh by the TOS bits
tc filter add dev $DEV protocol ip parent 1: prio 2 u32 \
    match ip dport 22 0xffff match ip tos 0x10 0xff flowid 1:10
# ftp
tc filter add dev $DEV protocol ip parent 1: prio 3 u32 \
    match ip dport 21 0xffff match ip tos 0x10 0xff flowid 1:10
# ACK packets ..
tc filter add dev $DEV protocol ip parent 1: prio 4 u32 \
    match ip protocol 6 0xff match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 match u8 0x10 0xff at 33 flowid 1:11
# HTTP requests
tc filter add dev $DEV protocol ip parent 1: prio 10 u32 \
    match ip dport 80 0xffff flowid 1:12
# that's it for now ...

- Thilo Schulz

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/
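The ACK filter in the script above packs three raw u32 matches into one rule. An annotated sketch of what each offset inspects (an editorial reading, assuming plain IPv4 over TCP with no IP options; the offsets shift if options are present):

```shell
# match ip protocol 6 0xff      -> IP protocol field == 6: TCP
# match u8 0x05 0x0f at 0       -> low nibble of byte 0 (IHL) == 5: 20-byte IP header, no options
# match u16 0x0000 0xffc0 at 2  -> IP total length < 64 bytes: a small packet, likely a bare ACK
# match u8 0x10 0xff at 33      -> byte 33 = TCP flags (20-byte IP header + offset 13): only the ACK bit set
tc filter add dev ppp0 protocol ip parent 1: prio 4 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 \
    match u8 0x10 0xff at 33 \
    flowid 1:11
```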
Stef Coene
2003-Jun-15 09:09 UTC
Re: Low latency on large uploads - almost done but not quite.
> Here is my script, if you are interested in looking at it:

I'm interested and I have some remarks.

> # now install 4 sub classes for different priorities
> # highest priority for low latency games like quake3 and ssh / ftp control.
> tc class add dev $DEV parent 1:1 classid 1:10 htb rate 7kbps ceil 15kbps \
>     prio 0 burst 20000b cburst 22000b
> # not as high, but still high priority for ACKs
> tc class add dev $DEV parent 1:1 classid 1:11 htb rate 7kbps ceil 15kbps \
>     prio 1 burst 200b cburst 200b
> # very little data allowed for HTTP requests, but still higher priority
> # than bulk uploads.
> tc class add dev $DEV parent 1:1 classid 1:12 htb rate 2kbps ceil 15kbps \
>     prio 10 burst 1b cburst 1b
> # bulk uploads have no prio :D
> tc class add dev $DEV parent 1:1 classid 1:13 htb rate 1bps ceil 15kbps \
>     prio 20 burst 1b cburst 1b

Your burst is too low. I understand you want a minimum burst, but you have to follow some rules. The best you can do is to remove the burst/cburst options so htb can calculate the minimum burst/cburst for you.

And don't you get quantum errors in your kernel log? That's because your quantum is too low for the classes. There is a long explanation for this; see www.docum.org on the FAQ page.

You also use different prios. This can be OK in most cases, except if you have a low-prio class that is sending more data than its configured rate. If you do, the latency can go up for that class. I (still) didn't test it myself, but you can find proof of it on the HTB homepage. The solution is to make sure you never put too much traffic in a low-prio class.

> # now make all qdiscs simple pfifo
> # small queues for minimum latency
> tc qdisc add dev $DEV parent 1:10 handle 20: pfifo limit 0
> tc qdisc add dev $DEV parent 1:11 handle 30: pfifo limit 0

Are you sure limit 0 is possible?

Stef
--
stef.coene@docum.org
"Using Linux as bandwidth manager"
http://www.docum.org/
#lartc @ irc.oftc.net
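Stef's first suggestion, applied to the script's interactive class, would look something like this (an editorial sketch, not something posted in the thread): drop the burst/cburst arguments and let htb pick its own minimums.

```shell
# Class 1:10 with burst/cburst omitted; htb then derives minimum
# burst values from the rate and the device MTU by itself.
tc class add dev ppp0 parent 1:1 classid 1:10 htb rate 7kbps ceil 15kbps prio 0
```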
Thilo Schulz
2003-Jun-15 11:44 UTC
Re: Low latency on large uploads - almost done but not quite.
On Sunday 15 June 2003 11:09, you wrote:

> > Here is my script, if you are interested in looking at it:
>
> I'm interested and I have some remarks.
>
> Your burst is too low. I understand you want a minimum burst, but you have
> to follow some rules. The best you can do is to remove the burst/cburst
> options so htb can calculate the minimum burst/cburst for you.

Yes, that sounds reasonable now that I have given it a second thought.

> And don't you get quantum errors in your kernel log? That's because your
> quantum is too low for the classes. There is a long explanation for this;
> see www.docum.org on the FAQ page.

Hmm .. quantum? I have never set quantum with any parameter, or have I?

> You also use different prios. This can be OK in most cases, except if you
> have a low-prio class that is sending more data than its configured rate.
> If you do, the latency can go up for that class. The solution is to make
> sure you never put too much traffic in a low-prio class.

I have given plenty of bandwidth to the 1:10 class. Quake3 streams are at most 1500 bytes/s, and ssh does not use that much either.

> > # now make all qdiscs simple pfifo
> > # small queues for minimum latency
> > tc qdisc add dev $DEV parent 1:10 handle 20: pfifo limit 0
> > tc qdisc add dev $DEV parent 1:11 handle 30: pfifo limit 0
>
> Are you sure limit 0 is possible?

Yes, at least the status command showed me that the limit was set to 0.

- Thilo Schulz
Stef Coene
2003-Jun-15 12:00 UTC
Re: Low latency on large uploads - almost done but not quite.
On Sunday 15 June 2003 13:44, Thilo Schulz wrote:

> Hmm .. quantum? I have never set quantum with any parameter, or have I?

No. Quantum is used by leaf classes to determine the amount of data they can send in one round. It is calculated as rate / r2q, and r2q is 10 by default. You can override r2q when you add the htb qdisc, and you can override quantum when you add an htb class. Quantum must be > 1500 (the size of one packet) and < 60000.

> I have given plenty of bandwidth to the 1:10 class. Quake3 streams are at
> most 1500 bytes/s, and ssh does not use that much either.

OK, as long as you are aware of the problem. You can also use a policer on the filters to limit the amount of packets they let through, so there is never too much traffic in a class.

> > Are you sure limit 0 is possible?
>
> Yes, at least the status command showed me that the limit was set to 0.

Ok.

Stef
--
stef.coene@docum.org
"Using Linux as bandwidth manager"
http://www.docum.org/
#lartc @ irc.oftc.net
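Stef's quantum arithmetic is easy to check against the posted script. A small sketch (assumptions: r2q at its default of 10, and tc's "kbps" read as 1000 bytes/s for round numbers):

```shell
#!/bin/sh
# Sketch of HTB's quantum computation: quantum = rate_in_bytes / r2q.
# HTB logs a warning when the result falls outside roughly 1500..60000 bytes.
r2q=10
for rate in 7000 7000 2000 1; do   # rates of classes 1:10, 1:11, 1:12, 1:13 in bytes/s
    quantum=$((rate / r2q))
    if [ "$quantum" -lt 1500 ] || [ "$quantum" -gt 60000 ]; then
        echo "rate ${rate} B/s -> quantum ${quantum}: out of range, expect kernel warnings"
    else
        echo "rate ${rate} B/s -> quantum ${quantum}: ok"
    fi
done
```

Under these assumptions every class in the script lands below 1500 bytes, which is why a larger r2q on the qdisc or an explicit quantum per class would be needed to silence the warnings.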
Corey Rogers
2003-Jun-16 07:54 UTC
Re: Low latency on large uploads - almost done but not quite.
I may be a bit late, but in case this helps anyone: I was trying to place icmp in its own queue and give it a high priority. I couldn't mark with iptables, because it core dumps when I attempt to use the mangle chain (perhaps because these boxes also run freeswan), so I shaped based on the protocol number. My problem was that it wasn't working until I specified a prio for the icmp filter; with prio 0 it did not work, but with prio 1 it did.

tc filter add dev $DEV protocol ip parent 1: prio 1 u32 \
    match ip protocol 1 0xff flowid 1:21

--
Corey Rogers
Junior System Administrator
Wamco Technology Group Ltd (Barbados)
#3 Mahogany Court, Wildey, St. Michael
Phone: (246) 437-3154
Fax: (246) 228-4319

"[F]or those of you who are constantly belittled by your peers for believing that Big Brother is out to get you, be assured, it is. In fact, you are probably not paranoid enough." - editorial, "Today's Technology Can Easily Track Criminals and Ex-offenders", _The_ECHO_ newspaper, Jan. 1998
sufcrusher
2003-Jun-17 18:16 UTC
Re: Low latency on large uploads - almost done but not quite.
Playing CounterStrike myself regularly, and being on a LAN with a few professional P2Pers, I had the same problem and also experienced the non-stable pings. At first I started experimenting with the rates and ceilings, but in practice that didn't help much.

One of the reasons for the unstable ping is that a packet of ~1500 bytes on a 128 kbit connection (like yours and mine) takes roughly a tenth of a second (100 ms) to send. So when a large packet is being sent and a quake packet is next in the queue, the quake packet still has to wait up to 100 ms (worst case). This latency of course adds to the normal latency you already have to the quake server.

What does seem to help a little is lowering the maximum packet size (MTU) in your routing table:

#!/bin/sh
oldroute=`ip route | grep default | cut -d' ' -f-5`
ip route change $oldroute mtu 500

The cut takes care of removing the 'mtu xxx' part from a line like this (type "ip route" to see it):

default via a.b.c.d dev eth0 mtu 900

It only takes the first 5 words. I presume the number of words can be different in other situations, so you might have to adapt for that (or use something other than 'cut' to do it properly).

Note that the MTU only affects the outgoing packet size, so downloads are not affected at all. Uploads do get a little less efficient (the packet headers make up a larger portion of the traffic), but in practice this is still acceptable. Game packets are pretty small anyway, so they won't be affected at all.

I'm assuming the burst and quantum settings can be optimized for smaller packet sizes to take full advantage of this, but to be honest I haven't really done that yet.

I use a cable connection whose download is more than 10 times faster than its upload, so for me shaping the download isn't very effective. If you want to limit the maximum packet size for incoming packets as well (at least for TCP), you can simply do this:

iptables -I PREROUTING -t mangle -i eth0 -j TCPMSS --set-mss 1000 -p TCP --tcp-flags SYN,RST SYN
iptables -I INPUT -t mangle -i eth0 -j TCPMSS --set-mss 1000 -p TCP --tcp-flags SYN,RST SYN

That is for 1000-byte packets. You can also use the --clamp-mss-to-mtu option, which probably makes sense. Note that the MSS trick only works for new connections: after you change the MSS value, existing connections will still use the old size. MTU changes take effect immediately.

Also make sure you patched the kernel to use the high resolution timer (info at www.docum.org somewhere). That helped a lot in my case (you can put the ceiling rates closer to the actual 128 kbit and therefore reduce latency as well). I'm not sure it's still necessary on 2.4.20 and/or 2.4.21.

Jannes Faber

----- Original Message -----
From: "Thilo Schulz" <arny@ats.s.bawue.de>
To: <lartc@mailman.ds9a.nl>
Sent: Saturday, June 14, 2003 5:54 PM
Subject: [LARTC] Low latency on large uploads - almost done but not quite.

> Does anyone have an idea how I can minimize this effect and keep pings
> stable at 60 ms? A stable 80 ms delay would be okay for me too, no
> question. If I let the worst-priority bulk class ceil up to only
> 10 kbyte/s I get the same effect; only when that class's ceil is put
> below 6 kbyte/s does the oscillating ping disappear.
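The "roughly a tenth of a second" figure above is plain serialization arithmetic. A sketch comparing the worst-case queueing delay behind one full-size packet at a few MTUs on a 128 kbit/s uplink (integer division, so values round down):

```shell
#!/bin/sh
# Worst-case time for one packet of a given size to drain from a 128 kbit/s uplink:
# delay_ms = packet_bytes * 8 * 1000 / uplink_bits_per_second
uplink=128000
for mtu in 1500 1000 500; do
    ms=$((mtu * 8 * 1000 / uplink))
    echo "MTU ${mtu} -> up to ${ms} ms waiting behind one bulk packet"
done
```

At MTU 1500 a game packet can sit behind about 93 ms of bulk data; dropping the MTU to 500 cuts that to about 31 ms, which is the effect the routing-table trick above exploits.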
Thilo Schulz
2003-Jun-18 12:32 UTC
Re: Low latency on large uploads - almost done but not quite.
On Tuesday 17 June 2003 20:16, sufcrusher wrote:

> One of the reasons for the unstable ping is that a packet of ~1500 bytes
> on a 128 kbit connection (like yours and mine) takes roughly a tenth of a
> second (100 ms) to send. So when a large packet is being sent and a quake
> packet is next in the queue, it still has to wait up to 100 ms (worst
> case). This latency of course adds to the normal latency you already have
> to the quake server.

Yes, I have thought so too and considered playing around with the MTU, but I did not really want to change it yet. Thank you anyway for these helpful hints; I am going to try them as soon as possible :)

> I'm assuming the burst and quantum settings can be optimized for smaller
> packet sizes to take full advantage of this, but to be honest I haven't
> really done that yet.

The lower the burst settings, the less delay I have in theory, so high-prio queues _definitely_ get their turn in time.

> I use a cable connection whose download is more than 10 times faster than
> its upload, so for me shaping the download isn't very effective. If you
> want to limit the maximum packet size for incoming packets as well (at
> least for TCP), you can simply do this:

The same applies to me: I have a 768 kbit/s downlink, so I doubt this is an issue when gaming.

> That is for 1000-byte packets. You can also use the --clamp-mss-to-mtu
> option, which probably makes sense. Note that the MSS trick only works
> for new connections.

I have to do this anyway, as pppoe limits the MTU on the virtual ppp0 device to 1492, because ethernet frames can only have a certain size and pppoe still encapsulates ppp inside ethernet.

> Also make sure you patched the kernel to use the high resolution timer
> (info at www.docum.org somewhere). That helped a lot in my case (you can
> put the ceiling rates closer to the actual 128 kbit and therefore reduce
> latency as well). I'm not sure it's still necessary on 2.4.20 and/or
> 2.4.21.

I have a 133 MHz AMD 486 - whether turning the timer resolution up would be good for performance, I don't know.

- Thilo Schulz
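The relation between Thilo's PPPoE MTU of 1492 and the MSS clamping discussed earlier is simple header arithmetic. A sketch (assuming plain IPv4 and TCP with 20-byte headers each and no options):

```shell
#!/bin/sh
# MSS = MTU minus the IPv4 header (20 bytes) and the TCP header (20 bytes).
mtu=1492   # PPPoE: 1500-byte ethernet payload minus 8 bytes of PPPoE/PPP overhead
mss=$((mtu - 20 - 20))
echo "MTU ${mtu} -> clamp MSS to ${mss}"
```

Under those assumptions this is the value --clamp-mss-to-mtu would settle on for a ppp0 device with MTU 1492.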
sufcrusher
2003-Jun-18 19:10 UTC
Re: Low latency on large uploads - almost done but not quite.
You couldn't even if you wanted to: the high resolution timer requires at least one of the faster Pentiums (not all Pentiums can do it). If you can't get a stable latency in the end, it might be worthwhile to upgrade to a Pentium, but for now I'd keep trying some fine-tuning.

Jannes Faber

----- Original Message -----
From: "Thilo Schulz" <arny@ats.s.bawue.de>
To: <lartc@mailman.ds9a.nl>
Sent: Wednesday, June 18, 2003 2:32 PM
Subject: Re: [LARTC] Low latency on large uploads - almost done but not quite.

> > Also make sure you patched the kernel to use the high resolution timer
> > (info at www.docum.org somewhere).
>
> I have a 133 MHz AMD 486 - whether turning the timer resolution up would
> be good for performance, I don't know.