fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]

                    extended device statistics
device       r/s    w/s     kr/s   kw/s  wait  actv  svc_t  %w  %b
sd2        454.7    0.0  47168.0    0.0   0.0   5.7   12.6   0  74
sd4        440.7    0.0  45825.9    0.0   0.0   5.5   12.4   0  78
sd6        445.7    0.0  46239.2    0.0   0.0   6.6   14.7   0  79
sd7        452.7    0.0  46850.7    0.0   0.0   6.0   13.3   0  79
sd8        460.7    0.0  46947.7    0.0   0.0   5.5   11.8   0  73
sd3        426.7    0.0  43726.4    0.0   5.6   0.8   14.9  73  79
sd5        424.7    0.0  44456.4    0.0   6.6   0.9   17.7  83  90
sd9        430.7    0.0  44266.5    0.0   5.8   0.8   15.5  78  84
sd10       421.7    0.0  44451.4    0.0   6.3   0.9   17.1  80  87
sd11       421.7    0.0  44196.1    0.0   5.8   0.8   15.8  75  80

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
z           1.06T  3.81T  2.92K      0   360M      0
  raidz1     564G  2.86T  1.51K      0   187M      0
    c0t1d0      -      -    457      0  47.3M      0
    c1t1d0      -      -    457      0  47.4M      0
    c0t6d0      -      -    456      0  47.4M      0
    c0t4d0      -      -    458      0  47.4M      0
    c1t3d0      -      -    463      0  47.3M      0
  raidz1     518G   970G  1.40K      0   174M      0
    c1t4d0      -      -    434      0  44.7M      0
    c1t6d0      -      -    433      0  45.3M      0
    c0t3d0      -      -    445      0  45.3M      0
    c1t5d0      -      -    427      0  44.4M      0
    c0t5d0      -      -    424      0  44.3M      0
----------  -----  -----  -----  -----  -----  -----
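For anyone wanting to collect the same numbers, output like the above normally
comes from iostat -x and zpool iostat -v. The 10-second interval below is only
illustrative; the post doesn't say what interval was actually used:

    iostat -x 10            # per-device wait/actv/svc_t/%w/%b
    zpool iostat -v z 10    # per-vdev ops and bandwidth for pool "z"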
On 10 January, 2008 - Rob Logan sent me these 1,9K bytes:

> fun example that shows NCQ lowers wait and %w, but doesn't have
> much impact on final speed. [scrubbing, devs reordered for clarity]

The final speed is limited by the slowest of your two raidz groups, so a
better example would be two different pools doing the exact same thing,
or the same pool with and without NCQ.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
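For the "same pool with and without NCQ" comparison, one way to switch NCQ off
on a Solaris/OpenSolaris box whose disks sit behind the sata(7D) framework is
to cap the queue depth in /etc/system. The tunable below is the commonly cited
one; verify it applies to your controller driver before relying on it:

    * /etc/system -- limit SATA queue depth to 1, effectively disabling NCQ
    set sata:sata_max_queue_depth = 0x1

Reboot, repeat the scrub, and compare the iostat numbers; remove the line and
reboot again to restore the default queue depth.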
On Jan 9, 2008, at 9:09 PM, Rob Logan wrote:

> fun example that shows NCQ lowers wait and %w, but doesn't have
> much impact on final speed. [scrubbing, devs reordered for clarity]

Here are the results I found when comparing random reads vs. sequential
reads for NCQ:

http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis

eric
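A quick, rough way to see the random-vs-sequential difference on a single disk
(a minimal sketch, not the methodology from the blog post; the raw-device path
and offsets are placeholders):

    # sequential: one long streaming read from the raw device
    dd if=/dev/rdsk/c0t1d0s0 of=/dev/null bs=128k count=8192

    # "random": single 128k reads at scattered offsets (skip is in bs units)
    for off in 1000 90000 5000 70000 20000 60000 35000 45000; do
        dd if=/dev/rdsk/c0t1d0s0 of=/dev/null bs=128k count=1 skip=$off
    done

With NCQ enabled, the random case is expected to improve noticeably more than
the sequential one.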