similar to: Multiple bands with equal priority?

Displaying 18 results from an estimated 3000 matches similar to: "Multiple bands with equal priority?"

2007 Apr 02
1
Please Help: Can't access bands > 10 on prio qdisc
Hi, I'm trying to set up 15 different delay intervals for packets leaving on an interface, using netems hanging off of a 16-band prio. I'm having trouble adding anything to bands higher than 10. Here's what I tried: tc qdisc add dev eth0 root handle 1: prio bands 16 \ priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 I want all default traffic to go to
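The usual stumbling block here is that tc class minor numbers are hexadecimal, so the 11th prio band is classid 1:b and the 16th is 1:10, not 1:11 and 1:16. A sketch along those lines, untested, with the handles and delay values purely illustrative:

  tc qdisc add dev eth0 root handle 1: prio bands 16 \
      priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  # band 11 = classid 1:b (hex), band 16 = classid 1:10
  tc qdisc add dev eth0 parent 1:b  handle 1b: netem delay 110ms
  tc qdisc add dev eth0 parent 1:10 handle 20: netem delay 160ms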
2007 Mar 30
1
Please Help: applying multiple different delays with netem
I'm trying to use tc and netem to delay packets from several different machines as they exit via eth0. Assume two source IPs, 10.0.0.122 and 10.0.0.133. I'd like to delay packets from the first one by 200ms, and packets from the second one by 300ms. Any other traffic should be sent out normally. Here's what I tried: # make three classes, 1:1, 1:2, and 1:3: tc qdisc add
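A sketch of one common way to arrange this, untested, using a 3-band prio as the classifier and one netem per delayed band (device assumed to be eth0; unmatched traffic stays in band 1:1 and is left untouched):

  tc qdisc add dev eth0 root handle 1: prio bands 3 \
      priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 200ms
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 300ms
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip src 10.0.0.122/32 flowid 1:2
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip src 10.0.0.133/32 flowid 1:3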
2006 Jul 17
1
How to add multiple filters and netem rules on a single interface?
Hi! We want to run TCP streams to several port numbers through one interface, each with a different delay set by Netem. E.g. TCP streams to port 80 could have 50ms delay, while TCP streams to port 81 could have 100ms delay and so on. We have tried to solve this by using a combination of tc filter and netem rules, but we can't get it quite right. We are considering one class per port,
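One prio band per port is indeed the usual shape for this; a sketch, untested, assuming eth0, the ports from the question, and u32 matching on the destination port:

  tc qdisc add dev eth0 root handle 1: prio bands 3 \
      priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 50ms
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dport 80 0xffff flowid 1:2
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dport 81 0xffff flowid 1:3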
2005 Jul 12
0
Teql and NetEm can't work together
Thanks in advance! Summary: when I load netem and teql together, teql doesn't work correctly. (If I load teql only, everything is fine.) I loaded both netem and teql. Netem is associated with eth0, and teql is associated with both eth0 and eth1. But traffic only goes out of eth1. Attached are the commands that I used to configure teql and netem (on machine 1), and commands to
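For reference, teql normally has to be the root qdisc on every slave device; if netem was installed as the root of eth0 it would have displaced teql there, which could explain traffic leaving only via eth1. The standard teql-only setup looks roughly like this (address purely illustrative):

  modprobe sch_teql
  tc qdisc add dev eth0 root teql0
  tc qdisc add dev eth1 root teql0
  ip link set dev teql0 up
  ip addr add 10.0.0.5/24 dev teql0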
2004 Oct 08
2
Delay packets by 50ms
Hi all, I am trying to solve a tiny problem that is trivial to solve using dummynet (FreeBSD). I just want to add a delay of 50ms to each outgoing packet from an interface. This is to simulate a large pool of multiple modem users, so I also need to add b/w limits etc. (which seems to be easy to do). From the mailing list I could find 2 qdiscs that can simulate latency: "delay" &
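The netem qdisc (the successor to the old delay qdisc) handles the fixed delay directly, and a bandwidth cap can be hung underneath it with tbf. A rough sketch, untested, with the rate and buffer values purely illustrative:

  tc qdisc add dev eth0 root handle 1: netem delay 50ms
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 56kbit buffer 1600 limit 3000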
2000 Nov 02
1
tc: weight, defmap and split usage?
Some (non-urgent) questions about "tc" parameters: What is "weight" for? We've seen that leaving it out of our script causes errors/warnings like: CBQ: class 00010012 had bad quantum==0, repaired. Which values should it have and why? When should one use the "split" and "defmap" parameters? I guess they can be used to specify the best effort
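In case it helps later readers: weight sets a CBQ class's share in the weighted round-robin (a common rule of thumb is roughly rate/10), while split and defmap let a class pick up traffic by Linux packet priority without explicit filters; defmap is a bitmap over the priority values and split names the class at which that decision is taken. A trimmed sketch in the LARTC style, parameters illustrative:

  tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10Mbit rate 1500kbit \
      allot 1514 cell 8 weight 150kbit prio 3 maxburst 20 avpkt 1000 \
      split 1:0 defmap c0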
2007 Aug 22
4
Limited number of bands in PRIO qdisc
Hello, is it possible that the number of bands for the PRIO qdisc is limited to 16? tc qdisc add dev $DEVICE root handle 1: prio bands 16 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 succeeds, but tc qdisc add dev $DEVICE root handle 1: prio bands 17 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 returns: 'RTNETLINK answers: Invalid argument' Is there any possibility to raise the
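Yes: the prio qdisc is hard-limited to 16 bands (TCQ_PRIO_BANDS in the kernel's pkt_sched.h), so bands 17 and up are rejected. A workaround sometimes suggested, sketched here untested, is to nest a second prio qdisc under one of the bands:

  tc qdisc add dev $DEVICE root handle 1: prio bands 16
  # band 16 is classid 1:10 (hex); hang another 16-band prio there
  tc qdisc add dev $DEVICE parent 1:10 handle 2: prio bands 16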
2006 Apr 26
5
how to change classful netem loss probability?
Hi, I am using netem to add loss and then adding another qdisc within netem according to the wiki. Then I want to change the netem drop probability without having to delete the qdisc and recreate it. I try it but I get invalid argument: thorium-ini hedpe # tc qdisc add dev ath0 root handle 1:0 netem drop 1% thorium-ini hedpe # tc qdisc add dev ath0 parent 1:1 handle 10: xcp capacity 54Mbit
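For what it's worth, the documented netem keyword for the probability is loss, and a qdisc change against the existing root handle is the normal way to adjust it without recreating anything; whether that also behaves with a child qdisc attached, as in the setup above, is exactly what this question is about. A sketch, untested:

  tc qdisc change dev ath0 root handle 1: netem loss 5%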
2004 Aug 31
1
netem usage example
I'm trying to set up a netem delay with no luck (using iproute2-2.6.8; compilation broke during the arpd compile, so I use the tc binary in the tc/ subdir, where there's also a q_netem.so). Kernel is 2.6.8.1, compiled with the CPU cycle counter as time reference. I was using sch_delay of 2.6.7 happily with something like: tc qdisc add dev eth0 root 1: delay latency 1ms rate 35M now I use:
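For comparison, netem drops the latency keyword, and netem of that era has no rate option of its own, so rate limiting is usually layered with tbf underneath it. A sketch of the rough equivalent, untested, with buffer/limit values purely illustrative:

  tc qdisc add dev eth0 root handle 1: netem delay 1ms
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 35mbit buffer 10000 limit 30000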
2005 Oct 11
1
How to do network emulation on incoming traffic?
I'm trying to simulate a satellite link to a Linux server to test application performance. I haven't used any of the tc stuff before, but I blandly assured people it would be "easy" to set up a simulated long thin pipe on a spare network interface. However, now that I'm exploring, it's proving quite difficult. Let me start with the general question
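Qdiscs shape egress only, so the usual trick for emulating incoming traffic is to redirect it to an intermediate device and run netem there; on newer kernels that device is ifb (older setups used the out-of-tree IMQ patch). A sketch of the ifb pattern, untested, assuming eth1 is the spare interface and the delay value is illustrative:

  modprobe ifb
  ip link set dev ifb0 up
  tc qdisc add dev eth1 ingress
  tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
      action mirred egress redirect dev ifb0
  tc qdisc add dev ifb0 root netem delay 250ms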
2005 Jan 27
2
netem bug?
Hi all, I'm running some tests with netem and I noticed some strange behaviour that looks like a bug: I'm pinging another machine and adding delay with netem. When I tell netem to give me a 10ms delay, it works fine. The problem is that when I ask for an 11ms delay, it gives me 20ms! It happens for any value between 11ms and 20ms, and it repeats for values over 20ms, now
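This matches the tick-based timers netem used at the time: delays are rounded up to the next multiple of the kernel tick (1/HZ), so with HZ=100 anything from 11ms to 20ms becomes 20ms, 21ms to 30ms becomes 30ms, and so on. One way to check the tick on a given box, assuming the distro ships the kernel config:

  grep 'CONFIG_HZ' /boot/config-$(uname -r)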
2005 May 13
1
Qdisc requeue should be void?
There is a design problem with the qdisc interface that causes qlen-related bugs in netem, tbf, and other qdiscs that peek at the top of the queue. The problem is that requeue needs to be called from the dequeue function, but requeue can fail. If requeue fails, then the calling qdisc cannot properly handle the error. If it returns NULL, then the parent's expectation about qlen
2007 Aug 28
2
prio bands and ignored priomap when any tc filter is present
Today I've noticed some slightly strange (?) behaviour when the prio qdisc is used. Example (having no filters/qdiscs/etc. at the start): Add a simple 9-band prio qdisc and set each mapping to the lowest-priority band: tc qdisc add dev $eth root handle 1: prio bands 9 priomap 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 If I do just that, all is fine - all traffic ends up in the 9th band, which can easily be verified by tc -s
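The message is cut off before the filter itself, but the kind of classifier meant is presumably something like the following (address purely illustrative); the open question is whether traffic that does not match it still honours the priomap:

  tc filter add dev $eth parent 1: protocol ip prio 1 u32 \
      match ip dst 192.168.0.1/32 flowid 1:1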
2005 May 24
3
four tc filter and netem questions
The following (occurring on debian/testing with kernel-image-2.6.8-2-386 version 2.6.8-13 and iproute version 20041019-3) confuses me: # tc qdisc add dev eth0 root handle 1: prio # tc filter add dev eth0 parent 1: proto ip pref 1 handle 1 fw classid 1:2 # tc filter ls dev eth0 filter parent 1: protocol ip pref 1 fw filter parent 1: protocol ip pref 1 fw handle 0x1 classid 1:2 # tc filter del dev
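Two things usually come up with output like this: the first 'filter parent 1: protocol ip pref 1 fw' line with no handle appears to be the per-priority list entry rather than a second filter, and a specific filter is removed by repeating its identifying fields on the delete, roughly:

  tc filter del dev eth0 parent 1: protocol ip pref 1 handle 1 fw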
2005 Mar 30
5
netem with prio hangs on duplicate
Hi, I tried the example given on the examples page to duplicate selected traffic, like: tc qdisc add dev eth0 root handle 1: prio tc qdisc add dev eth0 parent 1:3 handle 3: netem duplicate 40% tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 11.0.2.2 flowid 1:3 When I ping from 11.0.2.2 to this interface, my machine hangs. The same thing works for drop or delay. I would
2007 May 16
1
Re: drop silently locally generated packets
Hi. I want to silently drop locally generated packets on a specific interface. I tried 2 approaches: tc qdisc del dev eth0 root tc qdisc add dev eth0 root handle 1: htb tc filter add dev eth0 parent 1: proto ip u32 match ip dst 10.10.10.1 flowid 1:1 police conform-exceed drop/drop tc qdisc del dev eth0 root tc qdisc add dev eth0 root handle 1: prio bands 2 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
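Staying within tc, a third variant sometimes suggested is to steer the matching traffic into a band whose qdisc discards everything, for instance netem with 100% loss; a sketch, untested:

  tc qdisc add dev eth0 root handle 1: prio bands 2 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  tc qdisc add dev eth0 parent 1:2 handle 20: netem loss 100%
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dst 10.10.10.1/32 flowid 1:2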
2017 Oct 06
1
[PATCH net-next v2] vhost_net: do not stall on zerocopy depletion
From: Willem de Bruijn <willemb at google.com> Vhost-net has a hard limit on the number of zerocopy skbs in flight. When reached, transmission stalls. Stalls cause latency, as well as head-of-line blocking of other flows that do not use zerocopy. Instead of stalling, revert to copy-based transmission. Tested by sending two udp flows from guest to host, one with payload of
2017 Sep 30
2
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Fri, Sep 29, 2017 at 3:38 PM, Michael S. Tsirkin <mst at redhat.com> wrote: > On Wed, Sep 27, 2017 at 08:25:56PM -0400, Willem de Bruijn wrote: >> From: Willem de Bruijn <willemb at google.com> >> >> Vhost-net has a hard limit on the number of zerocopy skbs in flight. >> When reached, transmission stalls. Stalls cause latency, as well as >>