search for: pacekt

Displaying 14 results from an estimated 14 matches for "pacekt".

2006 Sep 25
0
[Bug 517] New: failed to forward packets via some interface
...AssignedTo: laforge@netfilter.org ReportedBy: raymond1860@gmail.com We are now testing an ADSL box based on Broadcom's 6348 (kernel 2.6.8.1). Initially everything is OK, but after a long time working with heavy traffic (the time varies from several hours to several days), the box can't forward packets via the ADSL/ATM port. Information: iptables/netfilter/masquerade/dnat, PPPoE link, MIPS CPU, 240 MIPS. When the box is down, we can't get any traffic from the adsl/atm/pppoe interface (everything via eth0/br0 is still fine). We have to take down the pppd daemon (lower link is down and lcp start t...
2006 Sep 25
0
[Bug 518] New: failed to forward packets via some interface
...AssignedTo: laforge@netfilter.org ReportedBy: raymond1860@gmail.com We are now testing an ADSL box based on Broadcom's 6348 (kernel 2.6.8.1). Initially everything is OK, but after a long time working with heavy traffic (the time varies from several hours to several days), the box can't forward packets via the ADSL/ATM port. Information: iptables/netfilter/masquerade/dnat, PPPoE link, MIPS CPU, 240 MIPS. When the box is down, we can't get any traffic from the adsl/atm/pppoe interface (everything via eth0/br0 is still fine). We have to take down the pppd daemon (lower link is down and lcp start t...
2006 Sep 26
0
[Bug 519] New: failed to forward packets via some interface
...AssignedTo: laforge@netfilter.org ReportedBy: raymond1860@gmail.com We are now testing an ADSL box based on Broadcom's 6348 (kernel 2.6.8.1). Initially everything is OK, but after a long time working with heavy traffic (the time varies from several hours to several days), the box can't forward packets via the ADSL/ATM port. Information: iptables/netfilter/masquerade/dnat, PPPoE link, MIPS CPU, 240 MIPS. When the box is down, we can't get any traffic from the adsl/atm/pppoe interface (everything via eth0/br0 is still fine). We have to take down the pppd daemon (lower link is down and lcp start t...
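
The three reports above describe a router doing NAT (masquerade/DNAT) over a PPPoE/ATM uplink. The reporter's actual ruleset is not included in the excerpts; as a rough sketch of that kind of setup, assuming the PPPoE link comes up as ppp0 and the LAN side is eth0/br0 (interface names and the DNAT target address are illustrative, not taken from the bug), the rules usually look like:

  # enable forwarding and masquerade LAN traffic leaving the PPPoE interface
  echo 1 > /proc/sys/net/ipv4/ip_forward
  iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
  # example DNAT: forward incoming TCP port 80 to a hypothetical internal host
  iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80

Whether such rules play any role in the stall is not established by the reports; the sketch only makes the described configuration concrete.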
2005 Sep 28
1
Does HTB consider PRIO or not?
Hello LARTC!!!! There is a question that kills me every time I think about it. I just love HTB, and in the year since I started working with it I had no complaints until one day. One client needs to allocate the shared bandwidth based on priorities. HTB, as everybody knows, has the CEIL parameter and also PRIO, which are supposed to solve the problem. Now the problem: I configure everything, rate
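
The question concerns HTB's ceil and prio parameters. As a minimal sketch of the kind of configuration being discussed (device name, rates, and class ids are illustrative, not taken from the post), two leaf classes share a parent, each is guaranteed its rate, and prio decides which one is offered spare bandwidth up to ceil first:

  tc qdisc add dev eth1 root handle 1: htb
  tc class add dev eth1 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit
  tc class add dev eth1 parent 1:1 classid 1:5 htb rate 256kbit ceil 1mbit prio 0
  tc class add dev eth1 parent 1:1 classid 1:14 htb rate 256kbit ceil 1mbit prio 1

With this layout 1:5, having the lower prio value, is served first when excess bandwidth is available, which is the behaviour the follow-up below refers to.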
2005 Sep 28
4
Re: Does HTB consider PRIO or not? 2
...(the specified rate is guaranteed). Prio in HTB only affects > borrowing bandwidth from other classes... In the example below, the class > 1:5 should be allowed to borrow bandwidth before 1:14 does. That's exactly what I want HTB to do: to prioritize the borrowed bandwidth. > Why are there packets in direct_packets_stat? I really don't know what that parameter means; I have to google it... Well, the output is really big. The classes are 1:5 and 1:14... #########################################QDISC############################## root@srv1:/etc# tc -s -d qdisc show dev eth1 qdisc htb 1: r2q 10...
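
On the direct_packets_stat question quoted above: in HTB, packets that no filter assigns to a class are sent through the qdisc's direct queue, bypassing shaping, when no valid default class is configured, and that counter records them. A common remedy (class id illustrative) is to name a default class when creating the root qdisc, and to inspect the counters with the command already shown in the thread:

  tc qdisc add dev eth1 root handle 1: htb default 14
  tc -s -d qdisc show dev eth1

This is general HTB background, not a diagnosis of the poster's setup.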
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
...es from V2: - remove useless queue limitation check (and we don't drop any packets now) Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 50 ++++++++++++++++++++++++++++++++++++++++++++------ drivers/vhost/net.c | 23 ++++++++++++++++++++--- drivers/vhos...
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
...es from V2: - remove useless queue limitation check (and we don't drop any packets now) Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 50 ++++++++++++++++++++++++++++++++++++++++++++------ drivers/vhost/net.c | 23 ++++++++++++++++++++--- drivers/vhos...
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
...hed=32 1.03 +14.4% rx_batched=48 1.09 +21.1% rx_batched=64 1.02 +13.3% Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------- drivers/vhost/net.c | 23 +++++++++++++++--- drivers/vhost/vh...
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
...hed=32 1.03 +14.4% rx_batched=48 1.09 +21.1% rx_batched=64 1.02 +13.3% Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------- drivers/vhost/net.c | 23 +++++++++++++++--- drivers/vhost/vh...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
...es from V2: - remove useless queue limitation check (and we don't drop any packets now) Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++---- drivers/vhost/net.c | 23 ++++++++++++++-- drivers/vhost/vhos...
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
...es from V2: - remove useless queue limitation check (and we don't drop any packets now) Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++---- drivers/vhost/net.c | 23 ++++++++++++++-- drivers/vhost/vhos...
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
...ges from V2: - remove useless queue limitation check (and we don't drop any packets now) Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++---- drivers/vhost/net.c | 23 ++++++++++++++-- drivers/vhost/vhos...
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
...ges from V2: - remove useless queue limitation check (and we don't drop any packets now) Changes from V1: - drop NAPI handler since we don't use NAPI now - fix the issues that may exceed the max pending of zerocopy - more improvement on available buffer detection - move the limitation of batched packets from vhost to tuntap Please review. Thanks Jason Wang (3): vhost: better detection of available buffers vhost_net: tx batching tun: rx batching drivers/net/tun.c | 76 +++++++++++++++++++++++++++++++++++++++++++++++---- drivers/vhost/net.c | 23 ++++++++++++++-- drivers/vhost/vhos...
2000 Nov 18
9
priority bands don't reduce interactive latency?
I run a small Linux web server and NAT router from my cable modem at home. Whenever someone starts an HTTP download, all other traffic from my LAN is starved. Bandwidth is not really an issue, but latency is particularly horrible -- pings that usually come back in 20 ms can take up to 600 ms while the web server is active! I set up QoS (netfilter+iproute2) on the NAT machine in an attempt to give
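
The post describes prioritising interactive traffic over bulk HTTP on an asymmetric cable link. A minimal prio-qdisc sketch (the interface name is an assumption; the post does not show its actual commands) puts Minimize-Delay traffic into the highest band:

  tc qdisc add dev eth0 root handle 1: prio
  # send TOS Minimize-Delay (0x10) packets, e.g. interactive ssh/telnet, to band 1:1
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip tos 0x10 0xff flowid 1:1

A point often raised on LARTC for this symptom is that priority bands only help if the queue actually builds on the Linux box; when the bottleneck queue sits in the cable modem, egress usually has to be shaped slightly below the upstream rate (for example with tbf or htb) before the bands have any visible effect.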