Hi all,

First, let me say I've been most impressed with how quickly and professionally people on this list ask and answer questions.

Next, let me say that what I need help with is properly configuring strict PQ, and gathering certain stats. Specifically:

- I need to create a priority queue with four queues (let's say they are of high, medium, normal, and low priority)

- I need to use tc filters such that:

  - EF (0xB8) goes to the high priority queue
  - AF21 (0x50) goes to the medium priority queue
  - AF11 (0x28) goes to the normal priority queue, and
  - BE traffic goes to the low priority queue

- For stat collection, I need to see:

  - how many bytes and packets are in each of the four queues

- My configuration thus far is:

tc qdisc add dev eml_test root handle 1: prio bands 4 priomap 0 1 2 3

tc filter add dev eml_test parent 1:0 prio 1 protocol ip u32 match ip tos 0xb8 0xff flowid 1:1

tc filter add dev eml_test parent 1:0 prio 2 protocol ip u32 match ip tos 0x80 0xff flowid 1:2

tc filter add dev eml_test parent 1:0 prio 3 protocol ip u32 match ip tos 0x50 0xff flowid 1:3

tc filter add dev eml_test parent 1:0 prio 4 protocol ip u32 match ip tos 0x00 0xff flowid 1:4
__________

My questions are:

- What if anything is missing/requiring change in my config given the stated requirements?

- What if any command should I use to view how many bytes and packets are in each of the four queues?

Any help would be most appreciated.
Hi,

> Next, let me say that what I need help with is properly configuring strict
> PQ, and gathering certain stats.

Here is an article you may find useful:
http://citeseer.ist.psu.edu/539891.html

Here is the description of the configuration parameters of the PRIO qdisc:
http://www.lartc.org/howto/lartc.qdisc.classful.html#AEN903
(just in case you did not know what the "priomap" option is used for)

> My questions are:
>
> - What if anything is missing/requiring change in my config given the stated
> requirements?

Your config does not prevent a higher priority class from starving a lower priority class. You can prevent it in (at least) two different ways:

1) You can assign a TBF qdisc (Token Bucket Filter) to the PRIO classes
   TBF: http://www.lartc.org/howto/lartc.qdisc.classless.html#AEN691

2) You can replace the PRIO qdisc with something like HTB/CBQ
   CBQ: http://www.lartc.org/howto/lartc.qdisc.classful.html#AEN939
   HTB: http://luxik.cdi.cz/~devik/qos/htb/

> - What if any command should I use to view how many bytes and packets are in
> each of the four queues?

The PRIO qdisc does not return statistics for its classes. However, a simple workaround consists of explicitly adding a qdisc to the four classes. By default the PRIO qdisc assigns a pFIFO (packet FIFO) qdisc to its classes. Here is how you can replace the 4 default pFIFO qdiscs with 4 explicit pFIFO qdiscs:

tc qdisc add dev eml_test parent 1:1 pfifo limit 1000
tc qdisc add dev eml_test parent 1:2 pfifo limit 1000
tc qdisc add dev eml_test parent 1:3 pfifo limit 1000
tc qdisc add dev eml_test parent 1:4 pfifo limit 1000

Now you can get the stats with:

tc -s -d qdisc list dev eml_test

Regards
/Christian
[ http://benve.info ]
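For reference, a minimal sketch of option 1 above (not from the thread; the rates, burst and latency values are placeholders you would tune to your link, and the device name is taken from the config above):

# Attach a TBF to each PRIO band so that no single band can monopolize
# the link; the rate/burst/latency figures here are illustrative only.
tc qdisc add dev eml_test parent 1:1 handle 10: tbf rate 40mbit burst 20kb latency 50ms
tc qdisc add dev eml_test parent 1:2 handle 20: tbf rate 30mbit burst 20kb latency 50ms
tc qdisc add dev eml_test parent 1:3 handle 30: tbf rate 20mbit burst 20kb latency 50ms
tc qdisc add dev eml_test parent 1:4 handle 40: tbf rate 10mbit burst 20kb latency 50ms

As a side effect, each explicitly added TBF also reports its own byte/packet counters in "tc -s" output, which covers the stats requirement as well.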
Hi Christian,

Thanks for the help. Please see my in-line comments:

> Your config does not prevent a higher priority class from starving
> a lower priority class.

Exactly. That is a requirement.

> You can prevent it in (at least) two different ways:

I don't want to prevent it right now.

> The PRIO qdisc does not return statistics for its classes.
> However, a simple workaround consists of explicitly adding
> a qdisc to the four classes.
> By default the PRIO qdisc assigns a pFIFO (packet FIFO) qdisc to
> its classes.
>
> Now you can get the stats with:
> tc -s -d qdisc list dev eml_test

Those stats are nice to have, but the ones I must have are for how many bytes/packets are enqueued at whatever time I check the queues.

I have tried to configure PQ to have two queues per filter with no success. Is it even possible to have (what I'll call) hierarchical PQ? I have yet to find it.
Hi,

> > Your config does not prevent a higher priority class from starving
> > a lower priority class.
>
> Exactly. That is a requirement.

OK

> Those stats are nice to have, but the ones I must have are for how many
> bytes/packets are enqueued at whatever time I check the queues.

That information is there. Here is an example (b=bytes, p=packets):

#tc -s -d qdisc list dev eth1

qdisc prio 1: root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 85357186 bytes 59299 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 35p requeues 0
                +-> This field is not initialized for this qdisc type
qdisc pfifo 10: parent 1:1 limit 1000p
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
                ^^^^^^^^^^^^^
qdisc pfifo 20: parent 1:2 limit 1000p
 Sent 85357120 bytes 59298 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 50470b 35p requeues 0
                ^^^^^^^^^^^^^^^^^^
qdisc pfifo 30: parent 1:3 limit 1000p
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
                ^^^^^^^^^^^^^

> I have tried to configure PQ to have two queues per filter with no success.

What do you mean?

> Is it even possible to have (what I'll call) hierarchical PQ? I have yet to
> find it.

Something like this?

tc qdisc add dev eth1 handle 1: root prio
tc qdisc add dev eth1 parent 1:1 handle 10 prio
tc qdisc add dev eth1 parent 1:2 handle 20 prio
tc qdisc add dev eth1 parent 1:3 handle 30 prio

Regards
/Christian
[ http://benve.info ]
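If you want to watch those backlog counters change in real time rather than taking one snapshot, a simple approach (using the same command as above) is:

# refresh the per-qdisc stats every second; the "backlog" field of each
# pfifo shows the bytes/packets currently sitting in that band
watch -n 1 'tc -s -d qdisc list dev eth1'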
Slightly offtopic... Has anyone really experienced starvation of low priority traffic with the PRIO qdisc? In my setup, I never achieved that, though I wanted exactly that situation. I gave both classes the same amount of traffic at the same time. High prio got more bandwidth, but no starvation, even after I sent more traffic than the link capacity.
Hi,

A class is starved only if those with higher priority are always (or pretty often) backlogged and do not give the lower priority classes a chance to transmit. Therefore, if you transmit at a rate smaller than your CPU/s and NIC/s can handle, you will not experience any starving.

For example, if you generate 50Mbit of traffic on a 100Mbit NIC it is likely that you won't see any starving (unless your system is not able to handle 50Mbit of traffic because of a complex TC or iptables configuration that consumes a lot of CPU).

Regards
/Christian
[ http://benve.info ]
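As an illustration of the point about saturation (not from the thread; the address and rates are placeholders, and this assumes an iperf2 build that supports the -S/--tos option), two marked UDP flows whose combined rate exceeds the link capacity should make the lower band starve:

# EF-marked flow and best-effort flow, each at 70 Mbit/s towards a 100 Mbit link;
# with the strict PQ config above, the 0x00 flow should see most of the loss
iperf -c 192.168.1.2 -u -b 70M -t 30 -S 0xb8 &
iperf -c 192.168.1.2 -u -b 70M -t 30 -S 0x00 &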
I tested on a wireless link. It could give a maximum of 45Mbps. And I sent 30Mbps each of low prio and high prio traffic, a total of 60Mbps. My test was done with UDP, using tcpdump. When I increased the bandwidth to 40Mbps each, the high priority class got less bandwidth. (Maybe the effect of the known issue that a large amount of low prio traffic can starve high prio traffic.)
Hi Christian,

<snip>

> #tc -s -d qdisc list dev eth1
>
> qdisc prio 1: root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
>  Sent 85357186 bytes 59299 pkt (dropped 0, overlimits 0 requeues 0)
>  rate 0bit 0pps backlog 0b 35p requeues 0
> ...

Yes, I can see that from your output. Here however is my config:

tc qdisc add dev eml_test root handle 1: prio bands 4 priomap 0 1 2 3

tc filter add dev eml_test parent 1:0 prio 1 protocol ip u32 match ip tos 0xb8 0xff flowid 1:1
tc filter add dev eml_test parent 1:0 prio 2 protocol ip u32 match ip tos 0x50 0xff flowid 1:2
tc filter add dev eml_test parent 1:0 prio 3 protocol ip u32 match ip tos 0x28 0xff flowid 1:3
tc filter add dev eml_test parent 1:0 prio 4 protocol ip u32 match ip tos 0x00 0xff flowid 1:4

tc qdisc add dev eml_test parent 1:1 handle 10: pfifo limit 2
tc qdisc add dev eml_test parent 1:2 handle 20: pfifo limit 2
tc qdisc add dev eml_test parent 1:3 handle 30: pfifo limit 2
tc qdisc add dev eml_test parent 1:4 handle 40: pfifo limit 2
___

Here is what I see when issuing the same command:

# tc -s -d qdisc list dev eml_test
qdisc prio 1: bands 4 priomap 0 1 2 3 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0 requeues 0)
qdisc pfifo 10: parent 1:1 limit 2p
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0 requeues 0)
qdisc pfifo 20: parent 1:2 limit 2p
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0 requeues 0)
qdisc pfifo 30: parent 1:3 limit 2p
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0 requeues 0)
qdisc pfifo 40: parent 1:4 limit 2p
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0 requeues 0)

> > I have tried to configure PQ to have two queues per filter with no
> > success.
>
> What do you mean?

Sorry, let me try to explain it this way (please refer to the above config). I presently have:

- a strict PQ scheme which uses four queues
- four filters, each of which determines what type of traffic gets into which queue (EF, AF21, AF11 and BE respectively in my case)
- a specific pFIFO qdisc for each PQ "class"
__________

> > Is it even possible to have (what I'll call) hierarchical PQ? I have yet
> > to find it.
>
> Something like this?
>
> tc qdisc add dev eth1 handle 1: root prio
> tc qdisc add dev eth1 parent 1:1 handle 10 prio
> tc qdisc add dev eth1 parent 1:2 handle 20 prio
> tc qdisc add dev eth1 parent 1:3 handle 30 prio

(See above.) I already have something just like this, just with pfifo for each child as opposed to the prio listed in the above config (thanks in great part to your previous help). What I need is one more layer of hierarchy. Specifically, the queues defined by:

tc qdisc add dev eth1 parent 1:1 handle 10 prio
tc qdisc add dev eth1 parent 1:2 handle 20 prio
tc qdisc add dev eth1 parent 1:3 handle 30 prio

themselves need to be parents (e.g.):

tc qdisc add dev eth1 parent 10:0 handle 11 prio
tc qdisc add dev eth1 parent 20:0 handle 21 prio
tc qdisc add dev eth1 parent 30:0 handle 31 prio
Please send me the exact config by which you got all those params in the output (especially backlog 0b 35p)... I just do not see that in mine.
Hi,

On Fri, 2007-06-15 at 17:13 +0800, Salim S I wrote:
> I tested on a wireless link. It could give a maximum of 45Mbps. And I sent
> 30Mbps each of low prio and high prio traffic, a total of 60Mbps.

Do you mean that your wireless link can transmit at 45Mbps? If so, what I meant to say is that if you generate almost (or more than) 45Mbps of high prio traffic then there is nothing (or almost nothing) left for the low prio traffic.

When you forward the above traffic (as opposed to generating it locally) there are other factors to take into account that can change the overall behavior. For example, for each CPU there is one ingress queue that is shared by all ingress traffic received by interfaces whose driver does not use NAPI. These CPU queues are traversed before the ingress queueing disciplines and they have nothing to do with Traffic Control. It is possible therefore that under heavy load the low prio traffic fills a significant portion of such CPU queues and reduces the amount of high prio traffic that reaches the egress queueing discipline (leaving more chances for the low priority traffic to be scheduled).

> My test was done with UDP, using tcpdump. When I increased the bandwidth
> to 40Mbps each, the high priority class got less bandwidth. (Maybe the
> effect of the known issue that a large amount of low prio traffic can
> starve high prio traffic.)

Possible. See my comment above.

Regards
/Christian
[ http://benve.info ]
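If you suspect the per-CPU ingress queue rather than the egress qdisc, one thing you can check is whether that queue is dropping packets (a sketch; the exact column layout varies between kernel versions, but on 2.6 kernels the second hex column of /proc/net/softnet_stat counts packets dropped because the CPU backlog queue was full):

# one row per CPU; a growing second column means the shared ingress queue overflowed
cat /proc/net/softnet_stat
# the length of that queue is tunable
sysctl net.core.netdev_max_backlog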
Hi,

On Fri, 2007-06-15 at 14:31 -0400, Tim Enos wrote:
> Please send me the exact config by which you got all those params in the
> output (especially backlog 0b 35p)... I just do not see that in mine.

The configuration is the same as yours, with the difference that I have eth0 instead of eml_test. I believe your config is OK. I managed to get backlog != 0 by generating a huge amount of traffic with mgen: 10K pkts/s of 1300 bytes each. If you do not saturate your link it is likely you will not see anything sitting in the queue.

Regards
/Christian
[ http://benve.info ]
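For reference, a sketch of an mgen script producing roughly that load (assuming MGEN 4.x event syntax; the destination address and port are placeholders):

# flood.mgn: 10000 packets/s of 1300 bytes for 30 seconds
0.0 ON 1 UDP DST 192.168.1.2/5001 PERIODIC [10000 1300]
30.0 OFF 1
# run it with: mgen input flood.mgn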
Cool, thanks Christian! I was wishing that all of those same params would show up in the output without having to run anything, but no problem. Should it matter that I'm using an emulated interface?

Also wondering what you think about my "hierarchical PQ" question. Have a good weekend.
Salim S I wrote:
> I tested on a wireless link. It could give a maximum of 45Mbps. And I sent
> 30Mbps each of low prio and high prio traffic, a total of 60Mbps.
> My test was done with UDP, using tcpdump. When I increased the bandwidth
> to 40Mbps each, the high priority class got less bandwidth.

Maybe wireless is a special case here - was the driver/device actually on the prio box?

> (Maybe the effect of the known issue that a large amount of low prio
> traffic can starve high prio traffic.)

On eth using tcp I can get prio to behave quite well. You need to remember to filter arp to a high (ideally empty) class - it goes to x:2 by default, which made for a bit of weirdness when I tried last.

If you use tcp on my 100meg eth there is still a 300-packet buffer to fill before prio gets backlogged, so window scaling needs to be on and both ends need decent size buffers/scale amounts. Maybe UDP would be different - I'll have to try sometime.

Andy.
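For reference, a sketch of the ARP trick Andy mentions (band 1:1 is just an example; pick whichever band you want ARP to land in):

# ARP is not IP, so it falls through the u32 "protocol ip" filters and ends up
# in the band chosen by the priomap; steer it explicitly instead
tc filter add dev eth0 parent 1:0 protocol arp prio 5 u32 match u32 0 0 at 0 flowid 1:1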
Tim Enos wrote:
> Thanks Christian! I was wishing that all of those same params would show up
> in the output without having to run anything, but no problem. Should it
> matter that I'm using an emulated interface?

Quite possibly - using prio even on real devices can appear not to work until you have filled up any buffer the driver uses.

On my 100meg eth it would take 5/6 unscaled tcp connections to fill enough for prio to do anything.

You can use prio as a child of hfsc/htb so that they set the rate. It may be nicer to use htb's own prio though, if you need a slow rate and care about latency.

Andy.
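A sketch of the prio-under-HTB idea Andy describes (the 90mbit figure is a placeholder; the point is that HTB caps the rate below the physical link speed so the prio child actually backlogs and its stats become meaningful):

tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 90mbit ceil 90mbit
tc qdisc add dev eth0 parent 1:10 handle 2: prio bands 4 priomap 0 1 2 3
# the u32 tos filters then attach to parent 2:0 and the per-band pfifos
# to 2:1 ... 2:4, exactly as in the earlier configs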
It's PQ that is required. Here is what I have for config so far:

tc qdisc add dev eth0 root handle 1: prio bands 4 priomap 0 1 2 3

tc filter add dev eth0 parent 1:0 prio 1 protocol ip u32 match ip tos 0xb8 0xff flowid 1:1
tc filter add dev eth0 parent 1:0 prio 2 protocol ip u32 match ip tos 0x50 0xff flowid 1:2
tc filter add dev eth0 parent 1:0 prio 3 protocol ip u32 match ip tos 0x28 0xff flowid 1:3
tc filter add dev eth0 parent 1:0 prio 4 protocol ip u32 match ip tos 0x00 0xff flowid 1:4

tc qdisc add dev eth0 parent 1:1 handle 10: pfifo limit 2
tc qdisc add dev eth0 parent 1:2 handle 11: pfifo limit 2
tc qdisc add dev eth0 parent 1:3 handle 12: pfifo limit 2
tc qdisc add dev eth0 parent 1:4 handle 13: pfifo limit 2
__________

The above config works fine. The last four qdisc lines (handles 10: - 13: inclusive) also work as prio if you leave out the 'limit' part, of course.

The remaining part is to set children for the last four qdiscs (one for each). Said child qdiscs would have all the same attributes as the parents (limit is something I'd change; the '2' is just an example). Is this possible?
Hi Tim, Andy,

On Wed, 2007-06-20 at 19:07 -0400, Tim Enos wrote:
> It's PQ that is required. Here is what I have for config so far:
>
> tc qdisc add dev eth0 root handle 1: prio bands 4 priomap 0 1 2 3

Is "priomap 0 1 2 3" what you want/need or just a random mapping? (This is the default mapping that is used when none of the filters matches.)

> The above config works fine. The last four qdisc lines (handles 10: - 13:
> inclusive) also work as prio if you leave out the 'limit' part, of course.

What do you mean?

> The remaining part is to set children for the last four qdiscs (one for
> each). Said child qdiscs would have all the same attributes as the parents
> (limit is something I'd change; the '2' is just an example). Is this
> possible?

Do you mean something like this?

tc qdisc add dev eth0 parent 10: handle 100: prio ...
tc qdisc add dev eth0 parent 11: handle 110: prio ...
tc qdisc add dev eth0 parent 12: handle 120: prio ...
tc qdisc add dev eth0 parent 13: handle 130: prio ...

Why would you need to put a pfifo qdisc between the two prio qdiscs? Wouldn't it be better to have

prio -> prio

or

prio -> prio -> pfifo

instead of

prio -> pfifo -> prio ?

What criteria are you going to use to assign the right priority to the packets in the nested (i.e., 2nd level) prio qdisc?

Regards
/Christian
[ http://benve.info ]
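For what it is worth, a sketch of the prio -> prio -> pfifo shape Christian suggests, shown for the high-priority band only (the nested filter criterion and the limits are placeholders; note that pfifo itself is classless, so a pfifo cannot be given child qdiscs, which is why the nesting has to happen at the prio level):

tc qdisc add dev eth0 root handle 1: prio bands 4 priomap 0 1 2 3
# two sub-bands inside band 1:1: a short queue and a long queue
tc qdisc add dev eth0 parent 1:1 handle 10: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
tc qdisc add dev eth0 parent 10:1 handle 101: pfifo limit 2
tc qdisc add dev eth0 parent 10:2 handle 102: pfifo limit 1000
# whatever criterion you choose goes here; this example steers EF into the
# short queue and lets everything else follow the priomap into 10:2
tc filter add dev eth0 parent 10:0 prio 1 protocol ip u32 match ip tos 0xb8 0xff flowid 10:1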
Hi Christian,

Good morning, and thank you for proving me correct about how professional and responsive people on this list are (sincerely). Brief comments in-line:

> > The above config works fine. The last four qdisc lines (handles 10: - 13:
> > inclusive) also work as prio if you leave out the 'limit' part, of course.
>
> What do you mean?

I mean that when saying something like:

qdisc add dev eth0 parent 1:1 handle 10: prio limit 2

you will get the following error (at least I do):

What is "limit"?
Usage: ... prio bands NUMBER priomap P1 P2...

Changing the line like so works (and no error messages are generated):

qdisc add dev eth0 parent 1:1 handle 10: prio

> Do you mean something like this?
>
> tc qdisc add dev eth0 parent 10: handle 100: prio ...
> tc qdisc add dev eth0 parent 11: handle 110: prio ...
> tc qdisc add dev eth0 parent 12: handle 120: prio ...
> tc qdisc add dev eth0 parent 13: handle 130: prio ...

Yes.

> Why would you need to put a pfifo qdisc between the two prio qdiscs?
> What criteria are you going to use to assign the right priority to
> the packets in the nested (i.e., 2nd level) prio qdisc?

The idea is that within each of the four priority classes/queues there would be two queues: one of some very small length (say 2) and another of some larger length (whatever the default is). So the thinking is that the traffic (having been marked by the application, say) hits the top-level queue. If the traffic is marked EF, it will go into the highest priority queue. Once in that queue, it will hit the first pfifo (which in this model is 2 packets long). It will then hit the second pfifo queue before heading out onto the wire.

The ultimate concern is to know how many packets are in each of the priority queues at any given time.
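A sketch of one way to poll exactly that, per band (this assumes the explicit pfifo handles from the configs above and a tc/iproute2 that prints the backlog field, as in Christian's eth1 output; the eml_test output shown earlier does not include it, so a newer iproute2 may be needed):

# print each qdisc's name/handle together with its backlog line, once a second
while true; do
    tc -s -d qdisc list dev eth0 | awk '/^qdisc/ { h = $2 " " $3 } /backlog/ { print h, $0 }'
    sleep 1
done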