similar to: netem with prio hangs on duplicate

Displaying 11 results from an estimated 11 matches similar to: "netem with prio hangs on duplicate"

2004 Jul 01
20
[PATCH 2.6] update to network emulation QOS scheduler
This patch updates the network emulation packet scheduler. * name changed from delay to netem since it does more than just delay * Catalin's merged code to do packet reordering * uses the socket queue directly rather than layering on qdisc(fifo) because this is used in performance tests. * adds placeholder in API for future enhancements (rate and duplicate).
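For orientation, a minimal sketch of how these emulation features are driven from userspace with tc, using current iproute2 netem syntax (which postdates this 2004 patch; eth0 and all values are placeholders):

  # add 100ms +/- 10ms of delay and reorder 25% of packets
  tc qdisc add dev eth0 root netem delay 100ms 10ms reorder 25% 50%
  # replace the parameters in place, now also duplicating 1% of packets
  tc qdisc change dev eth0 root netem delay 100ms 10ms duplicate 1%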
2006 Aug 02
10
[PATCH 0/6] htb: cleanup
The HTB scheduler code is a mess, this patch set does some basic house cleaning. The first four should cause no code change, but the last two need more testing. -- Stephen Hemminger <shemminger@osdl.org> "And in the Packet there writ down that doome"
2005 Jan 04
11
ESFQ?
Hi again, I was just looking around for ESFQ sources, and I see that the main site is down, and only has kernel 2.6.4 patches. Is ESFQ maintained? If so, where can I find patches for 2.6.10? Thanks, -justin
2006 Apr 26
5
how to change classful netem loss probability?
Hi, I am using netem to add loss and then adding another qdisc within netem according to the wiki. Then I want to change the netem drop probability without having to delete the qdisc and recreate it. I try it but I get invalid argument: thorium-ini hedpe # tc qdisc add dev ath0 root handle 1:0 netem drop 1% thorium-ini hedpe # tc qdisc add dev ath0 parent 1:1 handle 10: xcp capacity 54Mbit
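A hedged sketch of the in-place change being asked about, assuming a netem that accepts the loss keyword and tc qdisc change (ath0 and the handle follow the post above; the percentages are arbitrary):

  # create the root netem qdisc with 1% loss
  tc qdisc add dev ath0 root handle 1:0 netem loss 1%
  # later, adjust the loss probability without deleting and recreating the qdisc
  tc qdisc change dev ath0 root handle 1:0 netem loss 5%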
2013 Mar 08
3
[Bridge] [Patch net] bridge: do not expire mdb entry when bridge still uses it
From: Cong Wang <amwang at redhat.com> This is a long-standing bug that has been reported several times: https://bugzilla.redhat.com/show_bug.cgi?id=880035 http://marc.info/?l=linux-netdev&m=136164389416341&w=2 This bug can be observed in a virt environment, when a KVM guest communicates with the host via multicast. After some time (should be 260 sec, I didn't measure), the multicast
2013 Apr 30
6
[Bridge] [PATCHv4 net-next 0/2] Add two new flags to bridge.
The following series adds 2 new flags to the bridge. One flag allows the user to control whether MAC learning is performed on the interface or not. By default MAC learning is on. The other flag allows the user to control whether unicast traffic is flooded (sent without an fdb) to a given unicast port. Default is on. Changes since v4: - Implemented Stephen's suggestions. Changes since v2: -
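A minimal sketch of how such per-port flags are typically toggled with the iproute2 bridge tool (the port name eth0 is hypothetical):

  # disable MAC learning on a bridge port
  bridge link set dev eth0 learning off
  # stop flooding unknown-unicast traffic to that port
  bridge link set dev eth0 flood off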
2013 Jan 09
16
[Bridge] [PATCH net-next V5 00/14] Add basic VLAN support to bridges
This series of patches provides an ability to add VLANs to the bridge ports. This is similar to what can be found in most switches. The bridge port may have any number of VLANs added to it, including vlan 0 priority-tagged traffic. When vlans are added to the port, only traffic tagged with a particular vlan will be forwarded over this port. Additionally, vlan ids are added to FDB entries and become
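As a rough illustration, per-port VLANs of this kind are managed from userspace roughly as follows with a VLAN-aware iproute2 bridge tool (eth0 and the VIDs are placeholders):

  # permit VLAN 100 tagged traffic on a bridge port
  bridge vlan add dev eth0 vid 100
  # make VLAN 1 the PVID, carried untagged on this port
  bridge vlan add dev eth0 vid 1 pvid untagged
  # show the per-port VLAN configuration
  bridge vlan show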
2013 Feb 13
14
[Bridge] [PATCH v10 net-next 00/12] VLAN filtering/VLAN aware bridge
Changes since v9: * series re-ordering to make functionality more distinct. Basic vlan filtering is patches 1-4. Support for PVID/untagged vlans is patches 5 and 6. VLAN support for FDB/MDB is patches 7-11. Patch 12 is still additional egress policy. * Slight simplification to the code that extracts the VID from the skb. Since we now depend on the vlan module, at the time of input skb_tci is
2024 Feb 13
16
[Bug 1736] New: nftables - dynamic update for verdict map from the packet path
https://bugzilla.netfilter.org/show_bug.cgi?id=1736 Bug ID: 1736 Summary: nftables - dynamic update for verdict map from the packet path Product: nftables Version: 1.0.x Hardware: x86_64 OS: All Status: NEW Severity: enhancement Priority: P5 Component: nft
2011 Jun 24
19
SKB paged fragment lifecycle on receive
When I was preparing Xen's netback driver for upstream one of the things I removed was the zero-copy guest transmit (i.e. netback receive) support. In this mode guest data pages ("foreign pages") were mapped into the backend domain (using Xen grant-table functionality) and placed into the skb's paged frag list (skb_shinfo(skb)->frags, I hope I am using the right
2013 Aug 26
0
[PATCH] bridge: separate querier and query timer into IGMP/IPv4 and MLD/IPv6 ones
Currently we would still potentially suffer multicast packet loss if there is only either an IGMP or an MLD querier: in the former case we would possibly drop IPv6 multicast packets, in the latter IPv4 ones. This is because we are currently assuming that if either an IGMP or MLD querier is present, the other one is present too. This patch makes the behaviour and fix added in "bridge: