Hi all.

I'm quite new to the concepts of the "traffic control" framework, and I've got a programming-related question. Hopefully someone has the answer...

Is it possible, either for the device driver itself or for a userspace program, to get information about how many packets are currently queued for a given network interface?

Let me describe it in a little more detail: I have a network interface eth0 in my Linux box. Now I apply traffic shaping to that interface, for example the outgoing traffic is shaped down to 1 MBit/s. There is an application that creates packets which are meant to be sent out via eth0, and the application creates its packets at a much higher rate than 1 MBit/s. This results in the shaper enqueuing packets for eth0 and, sooner or later, in dropping some of the packets when the queue is full.

So I want to slow down the rate at which the application creates its packets. The easiest way would be to take a look at the "traffic control" queues for eth0 and check their current state. When the queue is filled up to a specified level, the application should stop creating new packets until the queue has been emptied. (*)

So, is there any way for my application to check the state of the eth0 queues? Or is this possible for the driver of eth0 (as I'm in control of this driver, I could implement a way to pass the needed information down to the application if necessary)?

Next question: if I understood the concepts of the "traffic control" system correctly, one can add several queues to a single device. Is there any way to simply get the total amount of packets that are waiting in all attached queues? Or would I need to check each queue and sum up the values?

And last question: what kind of information can I get about the currently enqueued packets? Just the number of packets that are enqueued, or only the number of enqueued bytes, or both?

I'd appreciate any kind of help very much. Pointers to existing documentation are welcome - I didn't find the answers in the docs I found, but maybe I just didn't search well enough (or in the wrong places).

Thanks in advance.

Bye, Mike

(*) In other words: I want to have the effect of slowing down the traffic generation of my application without having to care about the actual implementation of the traffic shaping. In my special case this makes sense and would save me a lot of work.
Hi Michael Renzmann;

On Thu, 29 Jan 2004, Michael Renzmann wrote:
> Is it possible, either for the device driver itself or for a userspace
> program, to get information about how many packets are currently queued
> for a given network interface?

Yes, if a small extension to the scheduler in question is carried out. You may add a variable counting packets in enqueue() and dequeue(), and either write it to the /proc file system or poll the result with the help of tc. Look at sch_fifo.c: it counts the packets in the queue, but it does not report the count any further.

> Next question: if I understood the concepts of the "traffic control"
> system correctly, one can add several queues to a single device. Is
> there any way to simply get the total amount of packets that are waiting
> in all attached queues? Or would I need to check each queue and sum up
> the values?

Using class based queuing you may isolate the queues from each other and count the packets each queue holds. On the other hand, if you want the total amount of queued packets, then a global variable would help you.

> And last question: what kind of information can I get about the
> currently enqueued packets? Just the number of packets that are
> enqueued, or only the number of enqueued bytes, or both?

You may obtain both.

Lars
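For illustration, here is a minimal sketch of the kind of counter described above, modelled loosely on the 2.4-era sch_fifo.c. The names counting_sched_data, queued_packets and queued_bytes are invented for this sketch, and the exact struct fields, headers and return codes differ between kernel versions (2.4 keeps the private area in sch->data, newer kernels use qdisc_priv()), so treat it as a starting point rather than working code:

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <net/pkt_sched.h>

    /* private data of the sketched qdisc; the counters could be exported
     * through a /proc file or reported back via tc, as suggested above */
    struct counting_sched_data {
            unsigned int limit;           /* maximum packets to hold        */
            unsigned int queued_packets;  /* packets currently in the queue */
            unsigned int queued_bytes;    /* bytes currently in the queue   */
    };

    static int counting_enqueue(struct sk_buff *skb, struct Qdisc *sch)
    {
            struct counting_sched_data *q =
                    (struct counting_sched_data *)sch->data;

            if (sch->q.qlen < q->limit) {
                    __skb_queue_tail(&sch->q, skb);
                    q->queued_packets++;          /* one more packet waiting */
                    q->queued_bytes += skb->len;
                    return NET_XMIT_SUCCESS;
            }
            kfree_skb(skb);                       /* queue full: drop        */
            return NET_XMIT_DROP;
    }

    static struct sk_buff *counting_dequeue(struct Qdisc *sch)
    {
            struct sk_buff *skb = __skb_dequeue(&sch->q);

            if (skb) {
                    struct counting_sched_data *q =
                            (struct counting_sched_data *)sch->data;
                    q->queued_packets--;          /* packet leaves to the device */
                    q->queued_bytes -= skb->len;
            }
            return skb;
    }

Strictly speaking the built-in sk_buff_head already tracks the packet count in sch->q.qlen; the explicit counters are only there to show where the bookkeeping would hook in and where a byte count could be kept as well.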
Hi Lars.

First of all, thanks for your fast reply.

Lars Landmark wrote:
>> Is it possible, either for the device driver itself or for a userspace
>> program, to get information about how many packets are currently queued
>> for a given network interface?
> Yes, if a small extension to the scheduler in question is carried out.
> You may add a variable counting packets in enqueue() and dequeue(), and
> either write it to the /proc file system or poll the result with the
> help of tc. Look at sch_fifo.c: it counts the packets in the queue, but
> it does not report the count any further.

Hmm... just to get it right: if I modify the source of any scheduler, there is no need to recompile the kernel, since the schedulers are completely "encapsulated" as loadable kernel modules? This is an important criterion for my decision, because I want to (better: have to) avoid recompiling the kernel at all costs.

>> Next question: if I understood the concepts of the "traffic control"
>> system correctly, one can add several queues to a single device. Is
>> there any way to simply get the total amount of packets that are waiting
>> in all attached queues? Or would I need to check each queue and sum up
>> the values?
> Using class based queuing you may isolate the queues from each other and
> count the packets each queue holds. On the other hand, if you want the
> total amount of queued packets, then a global variable would help you.

As far as I understand the concepts of the tc framework, each interface has a queuing discipline attached. Optionally, this discipline may have several filters and classes. Filters decide which packet belongs to which class, and each class may optionally have other qdiscs attached to it. Is this understanding correct?

If so, then in the following situation...

                                   +----[client1]
                                   |
(Internet)---eth0--[router]--eth1--+----[client2]
                                   |
                                   +----[client3]

... I'd choose a discipline that implements classes. At least one class per client, with one qdisc attached to each class, would be necessary to allow bandwidth shaping for traffic that passes the router on its way from the clients to the Internet. Right?

In this case I would need to modify the "root discipline" in order to implement a per-class counter, so that I can see when the queue for a client fills up.

While thinking about the above, another idea came to my mind. Maybe there is a way to avoid modifying every scheduler that might become interesting for the described task. Wouldn't it be more reasonable to write my own dummy qdisc handler that just implements the very basic functions needed to attach another qdisc to it? In that case the dummy would only need enqueue() and dequeue() functions which keep the counter up to date. Of course one counter per class would be necessary in the scenario described above, but that shouldn't be much harder than having a global counter. It could be thought of as a "statistic qdisc" which could also be used for simple accounting purposes... hmm, do you think this could work? If so, I'd like to try it.

Bye, Mike
> Hmm... just to get it right: if I modify the source of any scheduler,
> there is no need to recompile the kernel, since the schedulers are
> completely "encapsulated" as loadable kernel modules? This is an
> important criterion for my decision, because I want to (better: have to)
> avoid recompiling the kernel at all costs.

A scheduler can be compiled either as a module or into the kernel.

> As far as I understand the concepts of the tc framework, each interface
> has a queuing discipline attached. Optionally, this discipline may have
> several filters and classes. Filters decide which packet belongs to
> which class, and each class may optionally have other qdiscs attached to
> it. Is this understanding correct?

You may change the default FIFO discipline, for instance to sfq. Read the LARTC documentation, where you can find an example for HTB, if I remember right :-).

> ... I'd choose a discipline that implements classes. At least one class
> per client, with one qdisc attached to each class, would be necessary to
> allow bandwidth shaping for traffic that passes the router on its way
> from the clients to the Internet. Right?
>
> In this case I would need to modify the "root discipline" in order to
> implement a per-class counter, so that I can see when the queue for a
> client fills up.

As you explain, a client is associated with a class, so you need an extension to the "class struct" and not to the global root "struct".

> While thinking about the above, another idea came to my mind. Maybe
> there is a way to avoid modifying every scheduler that might become
> interesting for the described task. Wouldn't it be more reasonable to
> write my own dummy qdisc handler that just implements the very basic
> functions needed to attach another qdisc to it? [...] It could be
> thought of as a "statistic qdisc" which could also be used for simple
> accounting purposes... hmm, do you think this could work?

I do not understand why you would do this, since a class has full control of the packets entering or leaving it, independent of the qdisc in use.

Note: packets may enter and/or leave an interface at a variable rate, which most likely leads to high oscillation of the packet counts. :(
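To make the per-class extension a bit more concrete, it could look roughly like the sketch below. The names myclass, class_account_enqueue and class_account_dequeue are invented for the sketch; in a real classful scheduler such as CBQ or HTB the class structure and the enqueue/dequeue paths already exist and would simply gain the two extra fields:

    #include <linux/skbuff.h>

    /* hypothetical per-class bookkeeping, added to whatever class structure
     * the chosen classful scheduler already defines */
    struct myclass {
            /* ... existing scheduler fields ... */
            unsigned int queued_packets;  /* packets this class currently holds */
            unsigned int queued_bytes;    /* bytes this class currently holds   */
    };

    /* called from the classful qdisc's enqueue(), after the filters have
     * picked the class for this skb and the inner qdisc has accepted it */
    static void class_account_enqueue(struct myclass *cl, struct sk_buff *skb)
    {
            cl->queued_packets++;
            cl->queued_bytes += skb->len;
    }

    /* the mirror image in dequeue(), once a packet of this class actually
     * leaves towards the device */
    static void class_account_dequeue(struct myclass *cl, struct sk_buff *skb)
    {
            cl->queued_packets--;
            cl->queued_bytes -= skb->len;
    }

Summing these counters over all classes would then give the total backlog for the interface, matching the earlier remark that a global number needs either an additional global variable or a walk over the classes.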
Hi.

Lars Landmark wrote:
> A scheduler can be compiled either as a module or into the kernel.

Ok, and as long as I compile the schedulers as modules, I just need to recompile those that have been modified; the kernel can stay untouched. Sounds good.

[ idea: implementing the statistics inside a "statistic qdisc" ]

> I do not understand why you would do this, since a class has full control
> of the packets entering or leaving it, independent of the qdisc in use.

That's simple: this way I don't have to touch each and every scheduler's source that might become interesting now or in the future. And it is more in the spirit of the "modularity" the tc framework was built on. Just throw in the sch_stat, put it in the right place in a qdisc hierarchy, and you are able to keep track of the packets that are enqueued into and dequeued from the sub-qdisc.

But this raises two questions:

1. Does the parent qdisc get information back about whether the called child qdisc enqueued / dequeued packets? And for dequeuing: does the parent know how many packets have been dequeued by the child?

2. Are enqueue() and dequeue() of a qdisc called separately for every single packet, or is it possible to enqueue / dequeue more than one packet per call?

> Note: packets may enter and/or leave an interface at a variable rate,
> which most likely leads to high oscillation of the packet counts. :(

Currently I don't see why this could be a problem for the idea of implementing sch_stat... what point do I miss here?

Bye, Mike
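A very rough sketch of how such a pass-through "statistic qdisc" could look is given below. It is modelled on the way TBF wraps an inner qdisc in 2.4-era kernels; all names (stat_sched_data, stat_enqueue, stat_dequeue, child) are invented for the sketch, and a real module would also need init/destroy/dump handlers and a registered Qdisc_ops:

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <net/pkt_sched.h>

    struct stat_sched_data {
            struct Qdisc *child;           /* the wrapped (inner) qdisc       */
            unsigned int  queued_packets;  /* packets currently held by child */
            unsigned int  queued_bytes;
    };

    static int stat_enqueue(struct sk_buff *skb, struct Qdisc *sch)
    {
            struct stat_sched_data *q = (struct stat_sched_data *)sch->data;
            unsigned int len = skb->len;   /* child may free skb on drop      */
            int ret = q->child->enqueue(skb, q->child);

            if (ret == NET_XMIT_SUCCESS) { /* child accepted the packet       */
                    q->queued_packets++;
                    q->queued_bytes += len;
                    sch->q.qlen++;
            }
            return ret;                    /* otherwise the child dropped it  */
    }

    static struct sk_buff *stat_dequeue(struct Qdisc *sch)
    {
            struct stat_sched_data *q = (struct stat_sched_data *)sch->data;
            struct sk_buff *skb = q->child->dequeue(q->child);

            if (skb) {
                    q->queued_packets--;
                    q->queued_bytes -= skb->len;
                    sch->q.qlen--;
            }
            return skb;
    }

The return code of the child's enqueue() and the skb pointer returned by its dequeue() also hint at the answers to the two questions above: the parent learns, per packet, whether the child accepted or released something, and each call deals with exactly one packet.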
> That's simple: this way I don't have to touch each and every scheduler's
> source that might become interesting now or in the future. [...]
>
> But this raises two questions:
>
> 1. Does the parent qdisc get information back about whether the called
> child qdisc enqueued / dequeued packets? And for dequeuing: does the
> parent know how many packets have been dequeued by the child?
>
> 2. Are enqueue() and dequeue() of a qdisc called separately for every
> single packet, or is it possible to enqueue / dequeue more than one
> packet per call?

Each packet gets queued separately, as the qdisc only sees one packet at a time.

To the former question: if you look at the kernel code you will see that the enqueue() function calls the filter function. The filter returns a pointer to the appropriate class, and then the packet is enqueued. Regardless of the optional qdisc chosen for the class, the packet first enters the parent qdisc. If you have configured another qdisc for the class, that qdisc is called in turn; otherwise the default FIFO queue takes care of your packet. When it is time to dequeue a stored packet, the classful qdisc's dequeue() is called, and if you have configured another qdisc to handle the packets, it will be called as well.

> Currently I don't see why this could be a problem for the idea of
> implementing sch_stat... what point do I miss here?

On the oscillation problem: I do not know what you are planning to do with the statistics. But if you are going to make use of them packet by packet, I imagine that you will probably need some extra CPU power. I may be wrong on this point, tell me otherwise.

Anyway, it would be nice to hear about your work if you start on the project.

Lars
http://www.unik.no
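Schematically, the path described above looks roughly like this inside a classful scheduler. This is simplified pseudo-C loosely following what e.g. sch_cbq.c does; classify() and pick_next_class() are placeholders for the filter lookup (tc_classify() in the real code) and for the scheduler's own class-selection logic, and all locking, statistics and error handling are left out:

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <net/pkt_sched.h>

    struct myclass {
            struct Qdisc *q;   /* qdisc attached to this class (FIFO by default) */
            /* ... */
    };

    /* placeholders for the scheduler-specific parts */
    static struct myclass *classify(struct sk_buff *skb, struct Qdisc *sch);
    static struct myclass *pick_next_class(struct Qdisc *sch);

    static int classful_enqueue(struct sk_buff *skb, struct Qdisc *sch)
    {
            /* 1. ask the attached filters which class this packet belongs to */
            struct myclass *cl = classify(skb, sch);

            if (cl == NULL) {
                    kfree_skb(skb);
                    return NET_XMIT_DROP;
            }

            /* 2. hand the packet to the qdisc attached to that class */
            return cl->q->enqueue(skb, cl->q);
    }

    static struct sk_buff *classful_dequeue(struct Qdisc *sch)
    {
            /* 3. when the device is ready to send, pick the next class
             *    according to the scheduling algorithm and take one packet
             *    from that class's inner qdisc */
            struct myclass *cl = pick_next_class(sch);

            return cl ? cl->q->dequeue(cl->q) : NULL;
    }

In this picture, the per-class counters from the earlier sketch would be incremented around step 2 and decremented around step 3.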