Displaying 20 results from an estimated 3000 matches similar to: "How to do network emulation on incoming traffic?"

2005 May 13
1
Qdisc requeue should be void?
There is a design problem with the qdisc interface that causes qlen-related bugs in netem, tbf, and other qdiscs that peek at the top of the queue. The problem is that requeue needs to be called from the dequeue function, but requeue can fail. If requeue fails, the calling qdisc cannot properly handle the error. If it returns NULL, then the parent's expectation about qlen
2004 Oct 08
2
Delay packets by 50ms
Hi all, I am trying to solve a tiny problem that is trivial to solve using dummynet (FreeBSD). I just want to add a delay of 50ms to each outgoing packet from an interface. This is to simulate a large pool of multiple modem users, so I also need to add b/w limits etc. (which seems to be easy to do). From the mailing list I could find 2 qdiscs that can simulate latency: "delay" &
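A minimal sketch of the netem approach, assuming eth0 is the outgoing interface and using placeholder values for the per-modem rate limit:

  tc qdisc add dev eth0 root handle 1: netem delay 50ms
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000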
2006 Apr 16
9
how to do probabilistic packet loss in kernel?
Hi, I am using iproute2 to set up forwarding, adding routes like "ip route add 192.168.1.3 via 192.168.1.2". I was wondering where in the kernel I can insert probabilistic packet loss only for forwarded packets, so that for instance I can drop 5% of all forwarded packets? I don't need help with the actual code, just need help finding where to insert this code :) Thanks! George
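If a userspace approach is acceptable instead of patching the kernel, netem can already drop a configurable percentage of packets; a sketch assuming eth1 is the interface the forwarded packets leave through:

  tc qdisc add dev eth1 root netem loss 5%

Note this affects everything leaving eth1, not only forwarded traffic; one way to narrow it down is to mark packets in iptables' FORWARD chain and steer only the marked traffic into a netem class with an fw filter.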
2006 Apr 26
5
how to change classful netem loss probability?
Hi, I am using netem to add loss and then adding another qdisc within netem according to the wiki. Then I want to change the netem drop probability without having to delete the qdisc and recreate it. I try it, but I get "invalid argument":
thorium-ini hedpe # tc qdisc add dev ath0 root handle 1:0 netem drop 1%
thorium-ini hedpe # tc qdisc add dev ath0 parent 1:1 handle 10: xcp capacity 54Mbit
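A sketch of changing the probability in place, reusing the root handle from the transcript above (netem's documented keyword for this is loss):

  tc qdisc change dev ath0 root handle 1:0 netem loss 2%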
2007 Mar 30
1
Please Help: applying multiple different delays with netem
I'm trying to use tc and netem to delay packets from several different machines as they exit via eth0. Assume two source IPs, 10.0.0.122 and 10.0.0.133. I'd like to delay packets from the first one by 200ms, and packets from the second one by 300ms. Any other traffic should be sent out normally. Here's what I tried:
# make three classes, 1:1, 1:2, and 1:3:
tc qdisc add
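A sketch along the lines of the netem examples page, keeping band 1:1 for untouched traffic via an all-zero priomap:

  tc qdisc add dev eth0 root handle 1: prio bands 3 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 200ms
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 300ms
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip src 10.0.0.122/32 flowid 1:2
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip src 10.0.0.133/32 flowid 1:3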
2004 Aug 31
1
netem usage example
I'm trying to set up a netem delay with no luck (using iproute2-2.6.8; compilation broke during the arpd compile, so I use the tc binary in the tc/ subdir, where there's also a q_netem.so). The kernel is 2.6.8.1, compiled with the CPU cycle counter as the time reference. I was using sch_delay of 2.6.7 happily with something like:
tc qdisc add dev eth0 root 1: delay latency 1ms rate 35M
now I use:
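Netem of that era did not take a rate parameter itself, so the delay and the rate limit are split across two qdiscs; a sketch with placeholder tbf buffer/limit values:

  tc qdisc add dev eth0 root handle 1: netem delay 1ms
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 35mbit buffer 10000 limit 30000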
2006 Jul 17
1
How to add multiple filters and netem rules on a single interface?
Hi! We want to run TCP streams to several port numbers through one interface, each with a different delay set by netem. E.g. TCP streams to port 80 could have 50ms delay, while TCP streams to port 81 could have 100ms delay and so on. We have tried to solve this by using a combination of tc filter and netem rules, but we can't get it quite right. We are considering one class per port,
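A sketch of one way to wire this up, using u32 matches on the TCP destination port and an all-zero priomap so unmatched traffic stays undelayed (the delays are the placeholder values from the question):

  tc qdisc add dev eth0 root handle 1: prio bands 3 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 50ms
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:2
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 81 0xffff flowid 1:3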
2005 Jan 27
2
netem bug?
Hi all, I'm running some tests with netem and I noticed some strange behaviour that looks like a bug. I'm pinging another machine and adding delay with netem. When I tell netem to give me a 10ms delay, it works fine. The problem is that when I ask for an 11ms delay, it gives me 20ms! It happens for any value between 11ms and 20ms, and it repeats for values over 20ms, now
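One plausible explanation, assuming a kernel built with HZ=100 and no high-resolution qdisc timers: netem can only release packets on a timer tick, so delays get rounded up to a multiple of the tick length:

  tick = 1s / HZ = 1s / 100 = 10ms
  requested 11ms -> rounded up to 20ms (next multiple of 10ms)
  requested 21ms -> rounded up to 30ms

Rebuilding with HZ=1000 (1ms ticks) reduces the granularity accordingly.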
2007 Apr 02
1
Please Help: Can't access bands > 10 on prio qdisc
Hi, I'm trying to set up 15 different delay intervals for packets leaving on an interface, using netems hanging off of a 16-band prio. I'm having trouble adding anything to bands higher than 10. Here's what I tried:
tc qdisc add dev eth0 root handle 1: prio bands 16 \
   priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I want all default traffic to go to
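One likely cause, assuming the failures start exactly above band 10: tc parses class minor numbers as hexadecimal, so the 11th band is class 1:b and "1:10" actually names band 16. A sketch for attaching netems to the higher bands (the delay values are placeholders):

  tc qdisc add dev eth0 parent 1:b netem delay 110ms
  tc qdisc add dev eth0 parent 1:c netem delay 120ms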
2007 Apr 23
1
Multiple bands with equal priority?
I'm trying to build a WAN latency test environment, where packets from different "remote" locations get delayed by different amounts of time, depending on which remote location we're pretending they are from. Currently, I'm doing this using the 'prio' qdisc to obtain multiple bands, and hanging a different netem qdisc off each of the branches
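If strict priority ordering between the bands is unwanted, one alternative sketch (hypothetical subnets and delays, and it needs a kernel recent enough to ship sch_drr) serves the classes round-robin with equal weight; note that drr drops packets no filter classifies, so a catch-all class is advisable:

  tc qdisc add dev eth0 root handle 1: drr
  tc class add dev eth0 parent 1: classid 1:1 drr
  tc class add dev eth0 parent 1: classid 1:2 drr
  tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 80ms
  tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 150ms
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip src 192.168.10.0/24 flowid 1:1
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip src 192.168.20.0/24 flowid 1:2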
2006 Jun 26
4
Can I attach another qdisc under classes or the root qdisc?
I'm currently learning tc and reading a lot of articles about using the tc command in Linux to set up a traffic shaper, but I'm unsure about one point of theory. In general we define classes under the root qdisc, but is it also possible to attach another qdisc under the root qdisc? Can I do that? I have just read the tc command syntax and found this point ... syntax: tc qdisc
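Yes, this is possible: a qdisc can be attached beneath a class of a classful qdisc (and some qdiscs accept a single child directly). A minimal sketch with hypothetical rates:

  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit
  tc qdisc add dev eth0 parent 1:10 handle 100: sfq perturb 10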
2004 Jul 01
20
[PATCH 2.6] update to network emulation QOS scheduler
This patch updates the network emulation packet scheduler.
* name changed from delay to netem since it does more than just delay
* merged Catalin's code to do packet reordering
* uses a socket queue directly rather than layering on qdisc(fifo), because this is used in performance tests
* adds a placeholder in the API for future enhancements (rate and duplicate)
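For context, exercising the features mentioned here looks roughly like this with later iproute2/netem releases (the values are illustrative):

  tc qdisc add dev eth0 root handle 1: netem delay 10ms reorder 25% 50%
  tc qdisc change dev eth0 root handle 1: netem delay 10ms duplicate 1%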
2002 Dec 31
3
[tcng] More complex example?
Hi, I'm completely stuck with the tcng language. I assume there must be some way to arrange queues hierarchically, like
eth1
 |
TBF
 |
PRIO
/    \
class  class
but my attempt (below) produces "inferno.tc:8: qdisc "tbf" has no classes near "prio"" when run through tcc.
dev eth1 {
  egress {
    tbf (rate 128kbps, burst 64kb,
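For comparison, in plain tc (not tcng) the same hierarchy can be built by grafting prio under tbf's single class, at least on kernels where tbf exposes that class (the handle and latency values are illustrative):

  tc qdisc add dev eth1 root handle 1: tbf rate 128kbit burst 64kb latency 50ms
  tc qdisc add dev eth1 parent 1:1 handle 2: prio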
2005 May 24
3
four tc filter and netem questions
The following (occurring on debian/testing with kernel-image-2.6.8-2-386 version 2.6.8-13 and iproute version 20041019-3) confuses me:
# tc qdisc add dev eth0 root handle 1: prio
# tc filter add dev eth0 parent 1: proto ip pref 1 handle 1 fw classid 1:2
# tc filter ls dev eth0
filter parent 1: protocol ip pref 1 fw
filter parent 1: protocol ip pref 1 fw handle 0x1 classid 1:2
# tc filter del dev
2005 Mar 30
5
netem with prio hangs on duplicate
Hi, I tried the example given on the examples page to duplicate selected traffic, like:
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 3: netem duplicate 40%
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 11.0.2.2 flowid 1:3
When I ping from 11.0.2.2 to this interface, my machine hangs. The same thing works for drop or delay. I would
2005 Jan 22
2
network emulation
Hi, I am really a newbie in Linux traffic control, but I have a task to implement a tool similar to the nistnet tool used for network emulation tests, except that it emulates a wireless environment. I was exploring the use of the traffic control subsystem for this task. In this regard I have a few questions I need to post in order to clarify my thoughts on how to do this. I am using tcng to classify
2017 Oct 06
1
[PATCH net-next v2] vhost_net: do not stall on zerocopy depletion
From: Willem de Bruijn <willemb at google.com> Vhost-net has a hard limit on the number of zerocopy skbs in flight. When reached, transmission stalls. Stalls cause latency, as well as head-of-line blocking of other flows that do not use zerocopy. Instead of stalling, revert to copy-based transmission. Tested by sending two udp flows from guest to host, one with payload of
2017 Sep 30
2
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Fri, Sep 29, 2017 at 3:38 PM, Michael S. Tsirkin <mst at redhat.com> wrote:
> On Wed, Sep 27, 2017 at 08:25:56PM -0400, Willem de Bruijn wrote:
>> From: Willem de Bruijn <willemb at google.com>
>>
>> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
>> When reached, transmission stalls. Stalls cause latency, as well as
>>