search for: net_xmit_drop

Displaying 13 results from an estimated 13 matches for "net_xmit_drop".

2004 Jun 18
1
Help: how to generate different packets? Source code explanation?
Hi, all. I set up a traffic control configuration with HTB this way:

1: root HTB qdisc
|
1:1 HTB class rate 1024kbit
|
/-----+-----+-----+------+-----\
1:10   1:20   1:30   1:40   1:50   1:60
EF     AF41   AF31   AF21   AF11   BE

and allocated different bandwidth to these PHBs (queues). So which tool would I use to generate these packets at the same time for
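As a rough illustration (not taken from the thread), one way to generate test traffic for each PHB without a dedicated tool is a small userspace sender that sets IP_TOS to the corresponding DSCP value per socket. The receiver address 192.0.2.1:9999 and the payload below are made-up placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	/* DSCP values for the classes named in the question:
	 * EF=46, AF41=34, AF31=26, AF21=18, AF11=10, BE=0. */
	const unsigned char dscp[] = { 46, 34, 26, 18, 10, 0 };
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port   = htons(9999),      /* hypothetical receiver port */
	};
	const char payload[] = "phb test";
	size_t i;

	inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* hypothetical receiver */

	for (i = 0; i < sizeof(dscp); i++) {
		int s = socket(AF_INET, SOCK_DGRAM, 0);
		int tos = dscp[i] << 2;         /* DSCP occupies the top 6 bits of TOS */

		if (s < 0)
			return 1;
		setsockopt(s, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
		sendto(s, payload, sizeof(payload), 0,
		       (struct sockaddr *)&dst, sizeof(dst));
		close(s);
	}
	return 0;
}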
2002 Jun 08
2
New qdisc patch, try it (what is the problem)
Hello, this is my new qdisc patch. When I recompile the kernel with this patch I don't succeed. Please look at it, and if there are any mistakes please send me a mail. Thanks in advance.
2006 Feb 22
0
Re: [PATCH] Fix IPSec for Xen checksum offload packets (Jon Mason)
...rame will be transmitted as it may be dropped due
>- * to congestion or traffic shaping.
>- *
>- * -----------------------------------------------------------------------------------
>- * I notice this method can also return errors from the queue disciplines,
>- * including NET_XMIT_DROP, which is a positive value. So, errors can also
>- * be positive.
>- *
>- * Regardless of the return value, the skb is consumed, so it is currently
>- * difficult to retry a send to this method. (You can bump the ref count
>- * before sending to hold a reference...
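As a quick illustration of the behaviour described in that comment, here is a minimal kernel-side sketch of checking dev_queue_xmit()'s return code; demo_send() is an illustrative name, not part of the quoted patch.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* dev_queue_xmit() can return a negative errno or a positive NET_XMIT_*
 * code from the queueing discipline (e.g. NET_XMIT_DROP).  Either way
 * the skb has been consumed, so it cannot simply be resent. */
static void demo_send(struct sk_buff *skb)
{
	int rc = dev_queue_xmit(skb);   /* consumes the skb in all cases */

	if (rc < 0)
		pr_debug("xmit error %d\n", rc);
	else if (rc == NET_XMIT_DROP)   /* positive: dropped by the qdisc */
		pr_debug("packet dropped by the queueing discipline\n");
	/* NET_XMIT_SUCCESS only means the packet was queued; it may still
	 * be dropped later due to congestion or traffic shaping. */
}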
2009 Nov 17
11
[Bridge] [PATCH 0/3] macvlan: add vepa and bridge mode
This is based on an earlier patch from Eric Biederman adding forwarding between macvlans. I extended his approach to allow the administrator to choose the mode for each macvlan, and to implement a functional VEPA between macvlans. Still missing from this is support for communication with the lower device that the macvlans are based on. This would be extremely useful, but as others have found out
2006 Apr 26
5
how to change classful netem loss probability?
Hi, I am using netem to add loss and then adding another qdisc within netem according to the wiki. Then I want to change the netem drop probability without having to delete the qdisc and recreate it. I try it but I get "invalid argument":

thorium-ini hedpe # tc qdisc add dev ath0 root handle 1:0 netem drop 1%
thorium-ini hedpe # tc qdisc add dev ath0 parent 1:1 handle 10: xcp capacity 54Mbit
2005 Mar 30
5
netem with prio hangs on duplicate
Hi, I tried the example given on the examples page to duplicate selected traffic, like:

tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 3: netem duplicate 40%
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst 11.0.2.2 flowid 1:3

When I ping from 11.0.2.2 to this interface my machine hangs. The same thing works for drop or delay. I would
2004 Jun 22
3
[ANNOUNCE] sch_ooo - Out-of-order packet queue discipline
...skb->len);
+
+	/* do we have room? */
+	if (sch->q.qlen < q->limit) {
+		__skb_queue_tail(&sch->q, skb); /* autoinc qlen */
+		sch->stats.bytes += skb->len;
+		sch->stats.packets++;
+
+		return NET_XMIT_SUCCESS;
+	}
+
+	sch->stats.drops++;
+	kfree_skb(skb);
+
+	return NET_XMIT_DROP;
+}
+
+static struct sk_buff *ooo_dequeue(struct Qdisc *sch)
+{
+	struct ooo_sched_data *q = (struct ooo_sched_data *)sch->data;
+	struct sk_buff *skb = NULL;
+	long howmuch;
+
+	/* time to delay a packet? */
+	if ((q->gap > 0) && (q->counter >= q->gap)) {
+		struct sk_...
2006 Aug 02
10
[PATCH 0/6] htb: cleanup
The HTB scheduler code is a mess; this patch set does some basic house cleaning. The first four patches should cause no code change, but the last two need more testing. -- Stephen Hemminger <shemminger@osdl.org> "And in the Packet there writ down that doome"
2004 Jul 01
20
[PATCH 2.6] update to network emulation QOS scheduler
...+	PSCHED_GET_TIME(cb->time_to_send);
+	PSCHED_TADD(cb->time_to_send, q->latency);
+
+	__skb_queue_tail(&q->qnormal, skb);
+	sch->q.qlen++;
+	sch->stats.bytes += skb->len;
+	sch->stats.packets++;
+	return 0;
+	}
+
+	sch->stats.drops++;
+	kfree_skb(skb);
+	return NET_XMIT_DROP;
+}
+
+/* Requeue packets but don't change time stamp */
+static int netem_requeue(struct sk_buff *skb, struct Qdisc *sch)
+{
+	struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+
+	__skb_queue_head(&q->qnormal, skb);
+	sch->q.qlen++;
+	return 0;
+}
+
+/*
+ *...
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
...2.2 Guest RX: After commit 5d097109257c03a71845729f8db6b5770c4bbedc (tun: only queue packets on device), pktgen starts to report an unbelievably huge kpps (>2099 kpps even for one queue). The problem is that tun reports NETDEV_TX_OK even when it drops a packet, which confuses pktgen. After changing it to NET_XMIT_DROP, the value makes more sense but is not very stable, even when doing some pinning manually. Even so, multiqueue gets a good speedup in the test. Will continue to investigate.

2 Netperf test:
2.0 Test Environment: Two Intel(R) Xeon(R) CPU E5620 @ 2.40GHz with two directly connected Intel 82599EB 10 Gigabi...
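For context, here is a hedged sketch of the kind of change being described (not the actual tun patch): a start_xmit drop path that frees the skb and returns NET_XMIT_DROP instead of NETDEV_TX_OK, so a caller such as pktgen can account the packet as dropped rather than sent. demo_start_xmit() and demo_queue_full() are illustrative stand-ins.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical stand-in for the driver's real "no room to queue" test. */
static bool demo_queue_full(struct net_device *dev)
{
	return false;
}

static netdev_tx_t demo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (demo_queue_full(dev)) {
		dev->stats.tx_dropped++;
		kfree_skb(skb);                 /* the skb is freed either way */
		return NET_XMIT_DROP;           /* previously NETDEV_TX_OK */
	}

	/* ... hand the skb to the device ... */
	return NETDEV_TX_OK;
}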