Hi.
S10U2+patches, SPARC.
NFS v3/tcp server with ZFS as local storage. ZFS does only striping; the actual
RAID-10 is done on the 3510. I can see MUCH more throughput going to the disks
than coming over the net to the NFS server. Nothing else runs on the server.
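For reference, a sketch of how such a plain striped (non-redundant) pool over the array LUNs would be created; the device names are taken from the zpool status output further down and are only illustrative:

bash-3.00# zpool create f3-1 \
    c5t600C0FF000000000098FD516E4403200d0 \
    c5t600C0FF000000000098FD55DBA4EA000d0 \
    c5t600C0FF000000000098FD57F9DA83C00d0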
bash-3.00# ./nicstat.pl 1
Time Int rKb/s wKb/s rPk/s wPk/s rAvs wAvs %Util Sat
20:23:25 ce1 0.16 7.20 2.64 10.91 62.63 675.71 0.01 0.00
20:23:25 ce0 16.17 14.43 37.16 27.10 445.61 545.38 0.03 0.00
20:23:25 ce3 0.12 0.12 1.62 1.62 79.03 77.50 0.00 0.00
20:23:25 ce2 0.12 0.12 1.62 1.62 78.85 78.04 0.00 0.00
Time Int rKb/s wKb/s rPk/s wPk/s rAvs wAvs %Util Sat
20:23:26 ce1 0.29 669.82 4.88 1100.51 60.00 623.25 0.55 0.00
20:23:26 ce0 2329.65 1146.76 4516.11 3304.78 528.23 355.33 2.85 0.00
20:23:26 ce3 0.06 0.00 0.98 0.00 60.00 0.00 0.00 0.00
20:23:26 ce2 0.06 0.00 0.98 0.00 60.00 0.00 0.00 0.00
Time Int rKb/s wKb/s rPk/s wPk/s rAvs wAvs %Util Sat
20:23:27 ce1 0.06 139.34 0.99 460.39 60.00 309.93 0.11 0.00
20:23:27 ce0 4347.98 4522.37 6328.65 6029.64 703.52 768.02 7.27 0.00
20:23:27 ce3 0.06 0.00 0.99 0.00 60.00 0.00 0.00 0.00
20:23:27 ce2 0.06 0.00 0.99 0.00 60.00 0.00 0.00 0.00
Time Int rKb/s wKb/s rPk/s wPk/s rAvs wAvs %Util Sat
20:23:28 ce1 0.17 333.06 2.97 442.64 60.00 770.49 0.27 0.00
20:23:28 ce0 899.31 1550.09 2180.55 2038.94 422.32 778.49 2.01 0.00
20:23:28 ce3 0.06 0.00 0.99 0.00 60.00 0.00 0.00 0.00
20:23:28 ce2 0.06 0.00 0.99 0.00 60.00 0.00 0.00 0.00
Time Int rKb/s wKb/s rPk/s wPk/s rAvs wAvs %Util Sat
20:23:29 ce1 0.12 127.15 1.98 729.65 60.00 178.44 0.10 0.00
20:23:29 ce0 3589.55 1247.44 4065.95 2673.02 904.02 477.88 3.96 0.00
20:23:29 ce3 0.13 0.00 1.98 0.00 69.00 0.00 0.00 0.00
20:23:29 ce2 0.13 0.00 1.98 0.00 69.00 0.00 0.00 0.00
Time Int rKb/s wKb/s rPk/s wPk/s rAvs wAvs %Util Sat
20:23:30 ce1 0.00 87.28 0.00 287.14 0.00 311.26 0.07 0.00
20:23:30 ce0 2619.44 1901.52 4292.34 3789.34 624.91 513.85 3.70 0.00
20:23:30 ce3 0.06 0.71 0.99 11.88 60.00 61.50 0.00 0.00
20:23:30 ce2 0.06 0.71 0.99 11.88 60.00 61.50 0.00 0.00
^C
bash-3.00#
bash-3.00# zpool iostat 1
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 995 213 61.4M 5.72M
f3-2 188K 1.20T 0 0 10 13
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 1.10K 210 68.6M 6.11M
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 733 442 44.3M 8.84M
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 1.01K 88 64.7M 499K
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 1.25K 122 78.3M 4.12M
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 1020 76 61.8M 1.79M
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 1.22K 104 76.8M 3.95M
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
f3-1 310G 914G 1.06K 70 65.9M 2.24M
f3-2 188K 1.20T 0 0 0 0
---------- ----- ----- ----- ----- ----- -----
^C
bash-3.00#
bash-3.00# iostat -xnzC 1
[...]
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
1175.0 174.7 73282.4 4613.8 0.0 16.2 0.0 12.0 0 298 c5
410.8 17.1 25942.1 789.4 0.0 5.6 0.0 13.1 0 99 c5t600C0FF000000000098FD57F9DA83C00d0
382.6 116.5 23841.1 2187.4 0.0 5.5 0.0 10.9 0 100 c5t600C0FF000000000098FD55DBA4EA000d0
381.6 41.2 23499.2 1637.0 0.0 5.1 0.0 12.2 0 99 c5t600C0FF000000000098FD516E4403200d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
1189.1 48.0 76372.1 800.1 0.0 15.9 0.0 12.8 0 298 c5
399.0 48.0 27603.4 800.1 0.0 5.6 0.0 12.4 0 99 c5t600C0FF000000000098FD57F9DA83C00d0
384.0 0.0 22877.9 0.0 0.0 4.6 0.0 12.0 0 100 c5t600C0FF000000000098FD55DBA4EA000d0
406.0 0.0 25890.7 0.0 0.0 5.7 0.0 14.1 0 100 c5t600C0FF000000000098FD516E4403200d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
784.2 48.0 48345.4 1416.4 0.0 18.2 0.0 21.8 0 292 c5
272.1 8.0 18210.9 492.1 0.0 4.9 0.0 17.5 0 95 c5t600C0FF000000000098FD57F9DA83C00d0
270.1 0.0 15471.1 0.0 0.0 6.6 0.0 24.4 0 99 c5t600C0FF000000000098FD55DBA4EA000d0
242.1 40.0 14663.4 924.2 0.0 6.7 0.0 23.7 0 98 c5t600C0FF000000000098FD516E4403200d0
^C
bash-3.00#
bash-3.00# zpool status
pool: f3-1
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
f3-1 ONLINE 0 0 0
c5t600C0FF000000000098FD516E4403200d0 ONLINE 0 0 0
c5t600C0FF000000000098FD55DBA4EA000d0 ONLINE 0 0 0
c5t600C0FF000000000098FD57F9DA83C00d0 ONLINE 0 0 0
errors: No known data errors
pool: f3-2
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
f3-2 ONLINE 0 0 0
c5t600C0FF000000000098FD52BABFF1D00d0 ONLINE 0 0 0
c5t600C0FF000000000098FD511BA5C8000d0 ONLINE 0 0 0
c5t600C0FF000000000098FD54DDFB18300d0 ONLINE 0 0 0
errors: No known data errors
bash-3.00#
So I have something like 10MB/s on the network and 60-80MB/s generated to the disks.
On similar NFS servers with UFS as the local storage I do not see UFS generate
much more throughput than I see on the network. Also, other NFS servers (the
same hardware) hold much more data and get heavier traffic from their clients (the
same application), and still they generate less activity to the disks.
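A quick back-of-the-envelope on those numbers (my arithmetic, not from the original post): 60-80MB/s of reads at the disks against roughly 10MB/s on the wire is a 6-8x read amplification, as if each small logical read were being inflated to something close to 64KB on its way to the disks:

    65MB/s at the disks / 10MB/s on the net ~= 6.5x
    64KB physical read  / 8KB logical read   =  8x   (8KB record size is only an example)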
Robert Milkowski wrote:
> Hi.
>
> S10U2+patches, SPARC. NFS v3/tcp server with ZFS as local storage.
> ZFS does only striping, actual RAID-10 is done on 3510. I can see
> MUCH more throughput generated to disks than over the net to nfs
> server. Nothing else runs on the server.

It looks like you are seeing many more reads off the disks than data sent out
over NFS, but not much more in the way of writes to the disks. You are probably
seeing the effects of the vdev cache. See:

6437054 vdev_cache: wise up or die

--matt
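A quick way to see this inflation at the physical I/O layer (a sketch using the generic DTrace io provider, not anything from the original thread): get a size distribution of the reads hitting the disks and look for a pile-up at 64K (65536 bytes):

bash-3.00# dtrace -n 'io:::start /args[0]->b_flags & B_READ/ { @["read size (bytes)"] = quantize(args[0]->b_bcount); }'

Let it run for a few seconds and hit Ctrl-C to print the distribution.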
IIRC there was a tunable variable to set how much data to read in, and the default was 64KB...?
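The tunables in question are the vdev cache globals (variable names as in the ZFS source of that era; verify against your build): zfs_vdev_cache_bshift is the log2 of the size reads get inflated to (default 16, i.e. 64KB), zfs_vdev_cache_max is the largest read that gets inflated (default 16KB), and zfs_vdev_cache_size is the per-vdev cache size. A sketch of checking the current values with mdb:

bash-3.00# echo "zfs_vdev_cache_bshift/D" | mdb -k
bash-3.00# echo "zfs_vdev_cache_max/D" | mdb -k
bash-3.00# echo "zfs_vdev_cache_size/D" | mdb -k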
Lowering it from the default 64K to 16K resulted in about 10x less read throughput to the disks, and latency for the NFS clients improved by a similar factor. For now I'll probably leave it as it is and do some comparisons with different settings later.

ps. very big thanks to Roch! I owe you!
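The post doesn't say which knob was changed or how; 64K to 16K matches zfs_vdev_cache_bshift going from 16 to 14 (2^14 = 16KB), so assuming that is the tunable, a sketch of the two usual ways to set it on Solaris 10:

On the running kernel (takes effect immediately, lost on reboot):

bash-3.00# echo "zfs_vdev_cache_bshift/W 0t14" | mdb -kw

Persistently, in /etc/system (needs a reboot):

set zfs:zfs_vdev_cache_bshift = 14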