2004 Sep 20
Shaper & prio qdisc
...parent 1: prio 5 u32 match ip dport 20 0xffff flowid 1:3
....
and same for eth1.
Now I need to add shapers for some clients connecting from eth1 via VPN and
getting real IP addresses (like 218.33.x.x).
I think it should look like this:
tc qdisc add dev ppp7 root tbf rate 150kbit buffer 1600 latency 10msec
That shapes outgoing traffic from the client, right?
But how do I shape incoming traffic? I think it should be a class on eth0 with
parent 1:1 (a tbf qdisc), but tbf is classless, so I need to replace it, for
example with htb.
I have 2 questions:
a) Which qdisc should I use to replace tbf and save...
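[The htb replacement the poster is reaching for might be sketched as below. This is a minimal illustration, not from the thread: the interface, handles, rates, and the client address 218.33.1.1 are all assumptions; adjust to the real topology.]

```shell
# Hypothetical sketch: replace the classless tbf on eth0 with a classful htb
# so traffic toward one VPN client (assumed address 218.33.1.1) can be shaped.
# All names, handles, and rates here are illustrative assumptions.

# Classful root qdisc in place of tbf; unmatched traffic goes to class 1:10
tc qdisc add dev eth0 root handle 1: htb default 10

# Default class for everything else (rate is a placeholder)
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit

# Per-client class: 150kbit toward the VPN client, matching the tbf rate
tc class add dev eth0 parent 1: classid 1:20 htb rate 150kbit ceil 150kbit

# Steer packets destined to the client into the 150kbit class
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dst 218.33.1.1/32 flowid 1:20
```

[These commands require root and a real eth0, so they are shown as a configuration fragment only.]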
2016 May 25
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
...Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~] # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root at r1k1 ~] #
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very much more, you've
got a drive that drags down the whole array's performance.
Ignore the very first output from the command - it's an
average of the disk subsystem since boot.
Post a representative output along with the...
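[The per-drive await check above can be scripted. A minimal sketch, assuming await is the 10th column of `iostat -xd` output; the column layout varies between sysstat versions, so the field number is an assumption to verify locally. Sample lines stand in for live iostat output.]

```shell
# Flag any device whose await exceeds 10 ms. The printf lines below stand in
# for real 'iostat -xd 1' output; on a live system you would pipe iostat
# itself into the awk filter. Column 10 as await is an assumption that must
# be checked against your sysstat version.
printf '%s\n' \
  'sda 0.00 0.10 5.0 3.0 0.5 0.1 150.0 0.02 7.2 1.1 0.9' \
  'sdb 0.00 0.10 5.0 3.0 0.5 0.1 150.0 0.02 86.4 1.1 0.9' |
awk '$10 > 10 { print $1, "await", $10 "ms" }'
# With the sample data above this prints: sdb await 86.4ms
```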
2016 May 25
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
The HBA is an HP H220.
We haven't really benchmarked individual drives; all 12 drives are utilized in one RAID-10 array, and I'm unsure how we would test individual drives without breaking the array.
Trying 'hdparm -tT /dev/sda' now; it's been running for 25 minutes so far...
Kelly
On 2016-05-25, 2:12 PM, "centos-bounces at centos.org on behalf of Dennis Jacobfeuerborn"
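[On the question of testing individual drives without breaking the array: sequential reads do not modify data, so per-member read benchmarks are safe on a live RAID-10. A hedged sketch; device names and sizes are assumptions, and it should run during a quiet period since the reads will compete with production I/O.]

```shell
# Read-only per-drive throughput check on assumed member disks sda..sdl.
# Reading a member device does not alter the array; iflag=direct bypasses
# the page cache so each drive's raw read speed is measured.
for d in /dev/sd{a..l}; do
    echo "== $d =="
    dd if="$d" of=/dev/null bs=1M count=256 iflag=direct 2>&1 | tail -1
done
```

[A drive whose throughput is far below its siblings here is the same suspect the iostat await check would flag. Shown as a fragment only, since it needs root and the real disks.]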