Displaying 4 results from an estimated 4 matches for "231k".
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool shows a scrub rate of
about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub.)
Both zpool iostat and iostat -Xn show plenty of idle disk time, no
above-average service times, and no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
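To put 400K/s in perspective, a quick back-of-the-envelope estimate shows why that rate is alarming. (The ~1 TB allocated size below is a hypothetical stand-in; the post doesn't say how much data the pool actually holds.)

```python
# Rough scrub-time estimate. The 400 KB/s figure is from the post;
# the 1 TB allocated size is a hypothetical example, since the thread
# doesn't state how much data the pool holds.
def scrub_days(allocated_bytes, rate_bytes_per_sec):
    """Days needed to scan `allocated_bytes` at a steady rate."""
    return allocated_bytes / rate_bytes_per_sec / 86400

slow = scrub_days(1e12, 400e3)   # ~28.9 days at 400 KB/s
fast = scrub_days(1e12, 100e6)   # ~0.12 days (~3 h) at 100 MB/s

print(f"400 KB/s: {slow:.1f} days; 100 MB/s: {fast:.2f} days")
```

At MB/s rates a scrub finishes in hours; at 400K/s the same pool would take roughly a month, so something is clearly throttling the scan.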
2007 Jun 21
0
Network issue in RHCS/GFS environment
...>
1 32 27 39 1 0| 176k 47M| 0 0 :1216k 25M: 51M 323k>
1 29 27 43 1 0| 192k 42M| 35B 35B:2042k 50M: 42M 249k>
0 29 38 32 1 0| 198k 41M| 936B 1293B:1748k 40M: 41M 233k>
1 26 34 38 0 0| 246k 38M| 0 35B:1804k 42M: 41M 231k>
1 27 33 38 1 0| 234k 41M| 35B 0 :1800k 40M: 40M 250k>
However, something very strange happens on node2: eth1 recv and send are
both very high, while eth0 and eth2 show low I/O.
# dstat -N eth0,eth3,eth4 2
----total-cpu-usage---- -dsk/total- --net/eth0----net/eth1----net...
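To compare the per-interface figures numerically, the dstat fields can be converted back to plain bytes. A minimal sketch, assuming dstat's B/k/M suffixes denote binary multipliers (k = 1024), which is its default display — check your dstat version:

```python
# Convert dstat-style byte fields ("35B", "231k", "47M") to plain bytes.
# Assumes binary multipliers (k = 1024), dstat's default display mode.
UNITS = {"B": 1, "k": 1024, "M": 1024**2, "G": 1024**3}

def dstat_bytes(field: str) -> float:
    num, suffix = field[:-1], field[-1]
    return float(num) * UNITS[suffix]

# Sample fields from the output above: a ~42M recv / ~249k send pair
recv, send = dstat_bytes("42M"), dstat_bytes("249k")
print(f"recv {recv:.0f} B, send {send:.0f} B")
```

This makes it easy to script a comparison across interfaces instead of eyeballing the columns.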
2005 Dec 08
3
trouble with shorewall on Mandriva 2006 (2nd)
...tate RELATED,ESTABLISHED
  369 70518 ACCEPT  all  --  *  *  0.0.0.0/0  0.0.0.0/0

Chain fw2net (1 references)
 pkts bytes target  prot opt in out source     destination
 3461  255K ACCEPT  all  --  *  *  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
 2588  231K ACCEPT  all  --  *  *  0.0.0.0/0  0.0.0.0/0

Chain loc2fw (2 references)
 pkts bytes target  prot opt in out source     destination
47546 3160K ACCEPT  all  --  *  *  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
  663 87699 ACCEPT  all  --  *  *...
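Note that without `-x`, iptables abbreviates the byte counters with K/M/G suffixes (decimal multipliers); `iptables -L -v -n -x` prints exact counts. A small sketch that turns one of the ACCEPT lines above back into numbers, under that decimal-multiplier assumption:

```python
# Parse the pkts/bytes/target columns of an `iptables -L -v -n` rule line.
# Without -x, iptables abbreviates counters with decimal K/M/G suffixes;
# pass -x to get exact byte counts instead.
MULT = {"K": 10**3, "M": 10**6, "G": 10**9}

def counter(field: str) -> int:
    if field[-1] in MULT:
        return int(field[:-1]) * MULT[field[-1]]
    return int(field)

def parse_rule(line: str):
    pkts, nbytes, target = line.split()[:3]
    return counter(pkts), counter(nbytes), target

# The fw2net line that matched this search: 2588 packets, ~231 KB
print(parse_rule("2588 231K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0"))
```

Comparing counters before and after a test (or zeroing them with `iptables -Z`) is the usual way to see which chain the traffic is actually hitting.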
2010 Apr 02
6
L2ARC & Workingset Size
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----- -----   ----- -----   ----- -----
hyb         2.52G  925G       3    61    197K  366K
  c10t0d0   2.52G  925G       3    61    197K  366K
cache           -     -       -     -       -     -
  c9t0d0     394K 7.45G       2     8   75.2K  231K
----------  ----- -----   ----- -----   ----- -----

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----- -----   ----- -----   ----- -----
hyb         2.52G  925G      91     5    985K  210K
  c10t0d0   2.52G  925G      91...
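From the first interval above, the cache device's share of read bandwidth can be worked out directly. This is only a rough figure — `zpool iostat` intervals are averages, and the first sample since boot reflects lifetime totals rather than the current working set:

```python
# Cache device's share of total read bandwidth in the first interval.
# 197K is the main pool vdev's read bandwidth and 75.2K the cache
# device's, both taken from the zpool iostat output above; the K
# multiplier cancels out in the ratio.
pool_read_k = 197.0
cache_read_k = 75.2

share = cache_read_k / (pool_read_k + cache_read_k)
print(f"L2ARC served {share:.1%} of read bandwidth")  # ~27.6%
```

A low share like this usually means the working set is either small enough to live in ARC or too large (or too streaming-heavy) for the L2ARC to help much.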