Matthew Huang
2008-Jul-16 14:53 UTC
[zfs-discuss] [Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear all,
I have a customer who would like to use a Sun Fire X4500 as the NFS
server for their backend services, and who would like to see the
potential performance gain over their existing systems. However, the
output of an iozone I/O stress test shows mixed results:
 * Read performance degrades sharply (to roughly 1/20th, i.e. from
   about 2,000,000 KB/s down to 100,000 KB/s) at file sizes of
   256 KBytes and above.
 * Write performance remains good (roughly 1,000,000 KB/s) even for
   file sizes above 100 MBytes.
The NFS/ZFS server configuration and the test environment, in brief:
 * The ZFS pool for NFS is a stripe of six disks, one on each SATA
   controller (a hedged sketch of such a pool follows this list).
 * Solaris 10 Update 5 (Solaris factory installation).
 * The on-board GigE ports are trunked for better I/O and network
   throughput.
 * A single NFS client, on which the iozone I/O stress tool is
   deployed and run.
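For reference, a six-way stripe with one disk per SATA controller
would be built roughly as follows. The pool name and device names here
are hypothetical (the original post does not list them), so substitute
your own:

   # one disk per SATA controller, striped with no redundancy
   zpool create tank c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
   # export the pool's top-level filesystem over NFS
   zfs set sharenfs=on tank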
The iozone output is attached, and some excerpts from it are presented
later in this email. Any input (NFS/ZFS tuning suggestions,
troubleshooting ideas, etc.) is very welcome. Many thanks for your
time and support.
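The exact iozone command line is not quoted in this message; an
auto-mode sweep covering the same file and record sizes would look
something like the following (the mount point is a placeholder):

   iozone -Ra -n 64k -g 512m -i 0 -i 1 -f /mnt/x4500/iozone.tmp

Here -R emits the Excel-style "Writer report"/"Reader report" tables
shown below, -a runs the automatic file-size/record-size matrix,
-n and -g bound the file size at 64 KB and 512 MB, and -i 0 -i 1
restrict the run to the write and read tests.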
Regards,
Matthew
------------------------------------------------------------------------
"Writer report"
"4" "8" "16"
"32" "64" "128" "256"
"512" "1024" "2048" "4096"
"8192" "16384"
"64" 779620 1032956 1086132 1030728 1083646
"128" 888805 1048964 1049389 1131926 1123528 1075981
"256" 914273 841924 885664 914105 920652 933973 1062429
"512" 840779 853200 953428 927458 936045 942734 948038
1030096
"1024" 847660 872161 941096 942849 973457 962306 970583
1008880 1072174
"2048" 850868 918755 960554 992736 1000505 1019961 1019952
1042211 1037506 1061164
"4096" 889644 986289 1035360 1066665 1078489 1098988 1097544
1105839 1099925 1071938 938146
"8192" 915000 1020690 1085911 1102268 1112897 1130393 1142702
1147031 1139839 1107481 972561 814472
"16384" 916635 1031351 1095623 1109493 1123586 1134307 1142546
1142854 1148374 1121658 993808 862090 801800
"32768" 0 0 0 0 1125390 1135211 1141106
1139796 1136396 1117100 1001529 862452 830220
"65536" 0 0 0 0 1122712 1129913 1134172
1141482 1143636 1127615 1009551 864830 834768
*"131072" 0 0 0 0 1118828 1130799
1133341
1133643 1143385 1128063 1010421 861890 833680*
"262144" 0 0 0 0 951793 963910 939610
953132 957778 925947 848510 751680 727216
"524288" 0 0 0 0 902587 932762 900618
911183 883468 851441 752104 689376 656806
"Reader report"
"4" "8" "16"
"32" "64" "128" "256"
"512" "1024" "2048" "40 96"
"8192" "16384"
"64" 1278121 1882241&n bsp; 2063082 2136657 2291288
"128" 1542262 1936240 2000479 2170065 2288046 2202688
*"256" 105303 106272 106619 107021 107116 105002
107026*
"512" 110011 110654 110679 110896 111402 98158 98860
97191
"1024" 107314 108612 109332 109321 109554 96658 97413
96304 96449
"2048" 111117 111839 111863 112158 112114 100171 93958
93730 93653 93703
"4096" 112831 113058 113268 113346 113286 108239 98639
98651 100903 99805 95888
"8192" 113939 113965 114112 114205 114177 111413 106600
105741 107029 106588 101477 96805
"16384" 113757 113999 114529 114538 114577 112883 111071
110964 110951 105530 106261 102985 101311
"32768" 0 0 0 0 113580 113973 112219
112981 111017 111138 110726 107683 104093
"65536" 0 0 0 0 114672 113785 112932
113050 113672 113219 112366 110995 108959
"131072" 0 0 0 0 114295 114519 113907
113827 114091 114142 112111 112946 112200
"262144" 0 0 0 0 114687 114522 114532
114478 114436 114371 114177 113774 113154
"524288" 0 0 0 0 114669 114691 114578
114715 114487 114628 114434 114149 114141
Attachment: X4500_Remote_NFS_all.log (the full iozone log referenced above)
<http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20080716/a2d64b81/attachment.ksh>
Bob Friesenhahn
2008-Jul-16 15:59 UTC
[zfs-discuss] [Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
On Wed, 16 Jul 2008, Matthew Huang wrote:

> ... and who would like to see the potential performance gain over
> their existing systems. However, the output of an iozone I/O stress
> test shows mixed results:
>
>  * Read performance degrades sharply (to roughly 1/20th, i.e. from
>    about 2,000,000 KB/s down to 100,000 KB/s) at file sizes of
>    256 KBytes and above.

This issue is almost certainly client-side rather than server-side.
The 256 KByte threshold is likely the NFS buffer cache size (or
possibly the overall filesystem cache size) on the client. To know for
sure, run iozone directly on the server as well. If tests directly on
the server don't show a slowdown at the 256 KByte threshold, then the
abrupt slowdown is due to client caching combined with inadequate
network transfer performance or excessive network latency.

If sequential read performance is important to you, then you should
investigate the NFS client tuning parameters (mount parameters) that
control the amount of sequential read-ahead the client performs. If
clients request an unnecessary amount of read-ahead, network
performance can suffer from transferring data that is never used. When
using NFSv3 or later, TCP tuning parameters can be a factor as well.

You can expect ZFS read performance on the server to slow down once
the ZFS ARC size becomes significant compared to the amount of
installed memory on the server. For re-reads, if the file is larger
than the ARC can grow, ZFS must go to disk rather than use its cache.

Do an ftp transfer from the server to the client. A well-tuned NFS
should be at least as fast as this.

Bob
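A few hedged sketches to make the suggestions above concrete. For the
client-side mount and read-ahead tuning on a Solaris 10 client (the
server name, path, and values below are illustrative assumptions, not
taken from this thread):

   # NFSv3 over TCP with explicit transfer sizes (values illustrative)
   mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 x4500:/tank /mnt

   # /etc/system on the client: number of NFSv3 read-ahead requests
   # issued by the client (requires a reboot; 8 is an example value)
   set nfs:nfs3_nra = 8

To watch the server-side ARC, the arcstats kstat shows the current
and maximum cache size:

   kstat -n arcstats | egrep 'size|c_max'

For the ftp baseline, pulling a large file into /dev/null measures raw
network throughput without touching the client's disk (file name is a
placeholder):

   ftp x4500
   ftp> bin
   ftp> get bigfile /dev/null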