kevin-lartc@horizon.com
2005-Oct-11 11:07 UTC
How to do network emulation on incoming traffic?
I'm trying to simulate a satellite link to a Linux server to test application performance. I haven't used any of the tc stuff before, but I blandly assured people it would be "easy" to set up a simulated long thin pipe on a spare network interface. However, now that I'm exploring, it's proving quite difficult.

Let me start with the general question first. My setup is:

+--------+             +---------+
| Linux  |-------------| Windows |
| Server |     LAN     | Client  |
+--------+             +---------+

And I want the LAN to look like a satellite link, with delay, jitter, packet loss, and (asymmetric) rate limiting in both directions.

(If you care, I'm trying to emulate a DirecWay satellite link for a feasibility test. The parameters are ~350+/-35 ms delay each way, 75 kbit/s uplink, 550 kbit/s downlink. The latter draws from a multi-megabyte "fair usage" bucket that refills at 50 kbit/s. I don't have good packet loss numbers, so I'm going to start with 1% and see how sensitive performance is.)

Can anyone tell me how to do that? My problem is that trying to set up netem on incoming traffic is proving to be a pain:

# tc qdisc add dev spare handle ffff: ingress
# tc qdisc add dev spare parent ffff: handle 10: netem delay 300ms 50ms 25% loss 1%
RTNETLINK answers: Unknown error 4294967295

I'm not at all certain why this doesn't work. I'm told that the ingress queue is a bit of a kludge; is there an explanation somewhere of how it is implemented? That would help me understand its limitations.

The whole tc system is causing me some confusion. First of all, am I right that there's considerable overlap in functionality with netfilter? Both have packet selection (filtering) mechanisms, and both can throw away packets, but they differ in what other actions they can perform: netfilter can redirect, reply to, and modify packets, but it cannot delay or reorder them. Its throttling features (the limit and hashlimit match modules) are fairly simplistic.
It does, however, have sophisticated stateful packet classification features. Netfilter also lets you mess with packets in multiple different places in the routing path: there's PREROUTING and POSTROUTING, and every packet also passes through one of INPUT, OUTPUT, and FORWARD.

tc is all about throttling and reordering packets. It cannot redirect, reply to, or modify packets, and its classification is stateless and fairly simplistic. You can use netfilter to perform filtering (classification) for tc, but not vice versa. I *think* netfilter's flexibility comes at a bit of a speed penalty, and pure-tc classification will be faster than the equivalent logic using netfilter. (But for a typical broadband connection up to 10 Mbit/s, this is not a big issue.)

One tc question: if most queueing is done on egress, is there some sort of "local delivery" outgoing queue that I can use to throttle traffic to local services?

Now, I think that I understand netfilter. Each packet passes through a succession of rules, each of which has some match conditions and an action. This continues until a final disposition action is performed.

tc is a little more confusing. With classless qdiscs, it seems that there is a chain of queues, and packets pass through them in sequence.

QUESTION: It seems that these queues are "active" at both ends. A source pushes packets into them, and a device pulls them out at its transmission rate. When a device polls for packets from a priority queue, the queue will give the "best" packet available at the time. It's not clear how this works when two queues are connected together: if a rate-limited FIFO is receiving packets from a priority queue, does it "pull" until it's full, even though waiting might result in better packet ordering? I need to use netem plus a rate-control queue like tbf.

QUESTION: The whole major:minor number thing is a bit confusing.
I know that minor number 0 is reserved for qdiscs, but is the convention that class x:y is associated with qdisc x:0 enforced somewhere, or are they just random 32-bit numbers, 65536 of which are reserved for qdiscs?

But when you have classful qdiscs, things start getting confusing. It appears that you need three things:

- A "tc qdisc add" statement to create the "major" qdisc in the chain.
- Some "tc class add" statements to create queue classes.
- Some "tc filter add" statements to assign packets to the various classes.

The picture at http://pupa.da.ru/tc/ seems to help, but it doesn't explain the multiple-major case at all. (But that web page *does* tell me about the IMQ device, which may be the solution to my problems... I'll go away and play with that now.)

Anyway, thanks for any guidance on the subject. I think there's some big conceptual issue I'm just not getting, leading to a disconnect.
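[For what it's worth, here is a sketch of one way the whole emulation might be wired up. It assumes a kernel with netem and an intermediate device for ingress redirection (IMQ out-of-tree, or its in-tree successor, ifb). The interface names eth1 and ifb0 are placeholders, tbf's single class 1:1 is used to hang netem below the rate limiter, and the burst/latency figures are guesses that would need tuning -- this is untested, not a known-good recipe.]

```shell
#!/bin/sh
# Sketch only: emulate the satellite link described above.
# Assumptions: netem + ifb support in the kernel; "eth1" is the
# spare interface facing the client (substitute your own names);
# burst and latency values are rough guesses.

# Downlink direction (egress on eth1): rate-limit to 550 kbit/s,
# then add ~350 +/- 35 ms delay and 1% loss via tbf's child slot.
tc qdisc add dev eth1 root handle 1: tbf rate 550kbit burst 10kb latency 400ms
tc qdisc add dev eth1 parent 1:1 handle 10: netem delay 350ms 35ms 25% loss 1%

# Uplink direction: netem cannot be attached under the ingress
# qdisc, so redirect incoming packets to an ifb device and shape
# them on *its* egress side instead.
modprobe ifb
ip link set dev ifb0 up
tc qdisc add dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: tbf rate 75kbit burst 5kb latency 500ms
tc qdisc add dev ifb0 parent 1:1 handle 10: netem delay 350ms 35ms 25% loss 1%
```

(The "fair usage" bucket behavior isn't modeled here; a large tbf burst refilling at 50 kbit/s might approximate it, but that's speculation.)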
kevin-lartc@horizon.com
2005-Oct-11 14:14 UTC
Re: How to do network emulation on incoming traffic?
> Somebody will probably correct me quickly here but I don't think there
> is a way of creating jitter and latency and packet loss easily in linux.

Er... excuse me? The network emulator module ("netem") does it very nicely. The problem is, it's a traffic control queue discipline, and thus only works on egress traffic.

Actually, after having found some more docs, the whole business of nested qdiscs is starting to make more sense. A classful qdisc just chooses among a number of sub-queues when a "dequeue" request arrives from the device for more data to send. The result is a tree of qdiscs, with classless qdiscs at the leaves.

But this means that it makes no sense to have a child of a classless qdisc. And yet the netem examples are full of such things, e.g.:

http://linux-net.osdl.org/index.php/Netem#Rate_control

When does netem ever pull from the "child" queue?
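[For reference, the nested example on that page looks roughly like this; the comments give my best guess at how the pull ordering works, which I haven't verified against the kernel source:]

```shell
# The Rate_control example, roughly: netem at the root, with a
# tbf rate limiter attached as its child.  "eth0" is a placeholder.
tc qdisc add dev eth0 root handle 1:0 netem delay 100ms
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000

# My reading (unverified): netem is not purely classless -- it
# exposes a single class, 1:1, so exactly one child qdisc can be
# attached there.  When the device calls dequeue on netem, netem
# in turn dequeues from that child, releasing a packet only once
# its artificial delay has expired; so the child is pulled from,
# not pushed into, on each device dequeue.
```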