Displaying 8 results from an estimated 8 matches for "qdic".
2005 Jan 11
3
Need help regarding TBF Token rate setting
Hi,
I would like to know how to specify the token rate when a TBF qdisc is
created using the tc tool. Will it be a default value when the tbf qdisc
is created? This could be a silly question... I'm quite new to all this
stuff, but I'm really interested.
Any help will be most appreciated.
thanks in advance,
sanjeev
--
______________________________________________
Chec...
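For reference, the token rate of a tbf qdisc is given explicitly via the rate parameter at creation time rather than defaulting; a minimal sketch (the interface name and numbers are illustrative, not from the original mail):

```shell
# Attach a TBF qdisc to eth0, shaping egress traffic to 256 kbit/s.
#   rate    = token rate (how fast tokens refill the bucket)
#   burst   = bucket size (how many bytes may be sent back-to-back)
#   latency = longest time a packet may sit in the queue before being dropped
tc qdisc add dev eth0 root tbf rate 256kbit burst 32kbit latency 400ms

# Inspect the resulting qdisc and its counters
tc -s qdisc show dev eth0
```

The rate, burst, and latency (or limit) parameters are the standard tc-tbf options; changing them later is done with `tc qdisc change` using the same syntax.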
2014 Feb 27
1
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...similar issue for the packets sent by tcp_sendpage() was
>> > blocked or delayed.
> What's the issue exactly? How would you trigger it?
I mean it looks similar to the issue where, if we use vmsplice() to splice
user pages to a TCP socket, the packets were blocked or delayed by
qdiscs or something else. Did we wait for all pending packets in that case
before terminating the process?
2007 Dec 04
2
Simple Example isn't working (ssh/bulk traffic)
...w to get ssh connections running well while
downloading, but even the 100kbps (100 kbyte/s?) doesn't work; I can still
download at 500+ kB/s. What's wrong?
INTERFACE=eth0
#clear all on $INTERFACE
tc qdisc del dev $INTERFACE root
tc qdisc add dev $INTERFACE root handle 1:0 htb default 15
#root class, allows borrowing for its children
tc class add dev $INTERFACE parent 1:0 classid 1:1 htb rate 100kbps ceil
100kbps
#ssh class
tc class add dev $INTERFACE parent 1:1 classid 1:5 htb rate 20kbps ceil
100kbps prio 2
#other traffic
tc class add dev $INTERFACE parent 1:1 classid 1:15 htb rate 80kbps...
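A detail worth noting about the script above: in tc, kbps means kilobytes per second while kbit means kilobits per second, so `rate 100kbps` is roughly 800 kbit/s, which would explain downloads still reaching 500+ kB/s being hard to distinguish from "not working". A sketch of the same HTB layout written with explicit kilobit units (the rates are illustrative assumptions, not the original poster's intent):

```shell
INTERFACE=eth0

# Clear any existing root qdisc on the interface (ignore errors if none)
tc qdisc del dev $INTERFACE root 2>/dev/null

# Root HTB qdisc; unclassified traffic falls into class 1:15
tc qdisc add dev $INTERFACE root handle 1:0 htb default 15

# Parent class: 800 kbit/s total (what tc calls "100kbps" = 100 kbytes/s)
tc class add dev $INTERFACE parent 1:0 classid 1:1 htb rate 800kbit ceil 800kbit

# ssh class: guaranteed 160 kbit/s, may borrow up to the full parent rate
tc class add dev $INTERFACE parent 1:1 classid 1:5 htb rate 160kbit ceil 800kbit prio 2

# bulk class: the remaining 640 kbit/s, lower priority
tc class add dev $INTERFACE parent 1:1 classid 1:15 htb rate 640kbit ceil 800kbit prio 3
```

Using kbit throughout removes the bytes-vs-bits ambiguity that the question itself raises.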
2007 Mar 28
1
traffic shaping with NAT: IFB as IMQ replacement?
..."shaping") is still much better than plain rate
limiting or no action at all. (see also parts of [2]). If there is a
better solution than "ingress shaping" available or being worked on,
please tell me.
First of all: Why is it difficult?
Because you can't use the advanced qdiscs (htb, cbq, ...) on ingress
directly (only the ingress "qdisc").
Using IMQ it is quite straightforward to work around this limitation.
It seems IFB is intended as an IMQ replacement [3].
I managed to use IFB as IMQ replacement in a setup without NAT.
But when NAT is involved I am in trouble...
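For context, the usual IFB recipe (in the non-NAT case the mail describes as working) redirects ingress traffic to an ifb device, where a classful qdisc such as HTB can then be attached. A sketch, with device names and rates as illustrative assumptions:

```shell
# Bring up an IFB device to act as the ingress shaping target
modprobe ifb
ip link set dev ifb0 up

# Attach the (classless) ingress qdisc to the real interface
tc qdisc add dev eth0 handle ffff: ingress

# Redirect all IP traffic arriving on eth0 to ifb0
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# On ifb0 the redirected traffic is "egress", so classful shaping works
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
```

The NAT complication the mail goes on to describe arises because the redirect happens before netfilter's address translation, so filters on ifb0 see pre-NAT addresses.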
2003 Jun 06
4
tc show error for ingress
...st
1926b/8 mpu 0b cburst 1926b/8 mpu 0b level 0
Sent 193313679 bytes 189055 pkts (dropped 1, overlimits 0)
rate 32656bps 32pps backlog 9p
lended: 189046 borrowed: 0 giants: 0
tokens: -77245 ctokens: -77245
What is wrong here?
The shaping+limiting script is provided below
---
#Delete existing qdiscs
tc qdisc del dev eth0 root
tc qdisc del dev eth0 ingress
#add HTB for egress
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 256kbit ceil 256kbit
#Add ingress queue
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff...
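One point relevant to the error above: the ingress qdisc is classless, so class-oriented show commands have nothing to report for it; its counters live on the qdisc and its filters. A sketch of the inspection commands (interface name assumed):

```shell
# Qdisc-level statistics for both the root HTB and the ingress qdisc
tc -s qdisc show dev eth0

# Per-filter statistics (e.g. policer matches/drops) attached at ingress
tc -s filter show dev eth0 parent ffff:
```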
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
> On 2014/2/26 13:53, Jason Wang wrote:
>> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
>>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
>>>> We used to stop the handling of tx when the number of pending DMAs
>>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation