First - many, many congrats to team ZFS. Developing/writing a new Unix fs
is a very non-trivial exercise with zero tolerance for developer bugs.

I just loaded build 27a on a w1100z with a single AMD 150 CPU (2GB RAM) and
a single (for now) SCSI disk drive: FUJITSU MAP3367NP (Revision: 0108)
hooked up to the built-in SCSI controller (the only device on the SCSI
bus).

My initial ZFS test was to create a pool on a 5GB slice set aside for ZFS
testing and then try some small-file I/O operations on the pool. It turns
out (which was not my original intent) that the test files I chose were
zero bytes in length. The test data was tarred and gzipped to /tmp and then
untarred onto the ZFS pool as follows:

$ zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool   192M  4.62G   192M  /testpool

$ pwd
/testpool/al
$ mkdir test5
$ cd test5
$ ptime gunzip -c /tmp/alltestdir.tar.gz | tar xf -

What I'm seeing is very high CPU utilization (system calls) and I/O rates
that average (only) about 350 KB/sec. Here's the data:

$ ptime gunzip -c /tmp/alltestdir.tar.gz | tar xf -

real     1:37.865
user        1.176
sys         0.194

output from vmstat 5 55:

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s1 --   in   sy   cs us sy id
 0 0 60 1008956 293008 46 103 0 0 0 0 0 10 0 0 0 388 78533 693 5 57 38
 2 0 60 1007580 291456 0 1 0 0 0 0 0 38 0 0 0 431 121582 1011 7 93 0
 0 0 60 1006408 290284 0 0 0 0 0 0 0 42 0 0 0 416 122132 1014 7 93 0
 0 0 60 1004684 288560 0 0 0 0 0 0 0 30 0 0 0 404 123167 978 7 93 0
 0 0 60 1002192 286072 0 0 0 0 0 0 0 28 0 0 0 404 123590 927 7 93 0
 0 0 60 998072 281952 0 0 0 0 0 0 0 11 0 0 0 385 123647 875 7 93 0
 0 0 60 990592 274468 0 0 0 0 0 0 0 27 0 0 0 401 122342 908 7 93 0
 0 0 60 982244 266120 0 0 0 0 0 0 0 30 0 0 0 406 123428 933 7 93 0
 0 0 60 973512 257388 0 0 0 0 0 0 0 21 0 0 0 395 124192 895 7 93 0
 0 0 60 965084 248960 0 0 0 0 0 0 0 18 0 0 0 394 123737 891 7 93 0
 0 0 60 956872 240748 0 0 0 0 0 0 0 22 0 0 0 397 123310 892 7 93 0
 0 0 60 949088 232964 0 0 0 0 0 0 0 27 0 0 0 402 123672 925 7 93 0
 1 0 60 943940 227816 0 0 0 0 0 0 0 29 0 0 0 400 121999 916 7 93 0
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s1 --   in   sy   cs us sy id
 0 0 60 936860 220736 0 0 0 0 0 0 0 22 0 0 0 397 123031 894 7 93 0
 0 0 60 928368 212244 0 0 0 0 0 0 0 27 0 0 0 400 123579 934 7 93 0
 0 0 60 918804 202680 0 0 0 0 0 0 0 49 0 0 0 425 122854 1068 7 93 0
 0 0 60 909868 193740 0 0 0 0 0 0 0 26 0 0 0 402 119735 889 7 93 0
 0 0 60 901148 185020 0 0 0 0 0 0 0 24 0 0 0 405 123276 894 7 93 0
 0 0 60 892660 176532 0 0 0 0 0 0 0 34 0 0 0 411 122107 985 7 93 0
 0 0 60 886412 170284 2 3 0 0 0 0 0 42 0 0 0 417 111849 1077 7 85 8
 0 0 60 883332 167040 0 0 0 0 0 0 0 15 0 0 0 391 475 354 0 1 99

output from iostat -xcn 5:

    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    9.6    0.0   70.9  0.0  0.0    0.0    1.5   0   1 c1t0d0
    0.0   38.4    0.0  559.8  0.0  0.6    0.0   15.3   0   4 c1t0d0
    0.4   41.8    3.2  646.5  0.0  0.5    0.0   11.9   0   4 c1t0d0
    0.0   30.4    0.0  551.9  0.0  0.4    0.0   12.5   0   3 c1t0d0
    0.2   28.2   12.8  355.7  0.0  0.2    0.0    6.4   0   3 c1t0d0
    0.2   11.2   12.8  346.7  0.0  0.0    0.0    3.2   0   2 c1t0d0
    0.0   26.4    0.0  356.9  0.0  0.2    0.0    7.4   0   3 c1t0d0
    0.2   29.4   12.8  351.7  0.0  0.4    0.0   15.1   0   4 c1t0d0
    0.4   20.4   25.6  351.0  0.0  0.1    0.0    3.5   0   2 c1t0d0
    0.0   17.8    0.0  350.1  0.0  0.1    0.0    3.6   0   2 c1t0d0
    0.2   21.6   12.8  344.5  0.0  0.1    0.0    3.9   0   3 c1t0d0
    0.0   26.8    0.0  405.3  0.0  0.3    0.0   11.7   0   3 c1t0d0
    0.0   29.0    0.0  355.7  0.0  0.4    0.0   13.7   0   2 c1t0d0
    0.0   21.6    0.0  341.8  0.0  0.1    0.0    4.0   0   2 c1t0d0
    0.0   27.4    0.0  357.9  0.0  0.3    0.0   12.3   0   2 c1t0d0
    0.2   48.8   12.8  396.1  0.0  0.9    0.0   18.0   0   4 c1t0d0
    0.2   25.4   12.8  349.2  0.0  0.2    0.0    6.1   0   2 c1t0d0
    0.2   24.2   12.8  339.8  0.0  0.1    0.0    4.7   0   3 c1t0d0
    0.2   34.2   12.8  521.8  0.0  0.5    0.0   15.3   0   4 c1t0d0
    0.0   42.4    0.0  679.6  0.0  0.9    0.0   20.3   0   5 c1t0d0
    0.2   14.6   12.8  273.9  0.0  0.0    0.0    3.1   0   1 c1t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t0d0

So - on average the I/O transfer rate was ~355 KB/sec while the
corresponding CPU utilization was 100%, with approximately 123,000 system
calls/sec. IOW the CPU was saturated way before the disk drive was. This
is not what I would have expected. Comments?

Follow-up question: should the ZFS code special-case zero-length (or very
small) files?

I'm not trying to be unfair to the ZFS code by deliberately picking a
pathological worst-case scenario. This just happened to be the first test
I ran. But I am a bit surprised at the results.

Obviously I'm not even going to compare these results to UFS, because ZFS
offers so many more benefits, in terms of usability, data integrity, etc.,
that such a comparison is largely meaningless IMHO. Also, comparing a
rev 1.0 body of code (ZFS) to a body of code with a bazillion man-years of
development behind it is most unfair IMO.

PS: the test data looks like (a partial dump):

-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/07/wctp58055.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/07/wctp58058.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/07/wctp58061.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/07/wctp58064.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/07/wctp58067.msg
drwxr----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/08/
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/08/wctp58070.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/08/wctp58073.msg
drwxr----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58078.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58081.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58084.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58087.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58090.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58093.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58096.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58099.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58102.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58105.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58108.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/09/wctp58111.msg
drwxr----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58114.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58117.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58120.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58123.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58126.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58129.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58132.msg
-rw-r----- 100/208 0 Nov 13 18:27 2004 ./testdir/messages/2004/09/21/10/wctp58135.msg

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
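[For readers who want to reproduce this workload without Al's data set, a
sketch along these lines generates a comparable tree of zero-length files.
The file count, directory layout, and pool/path names (testpool/repro,
/tmp/repro.tar.gz) are assumptions for illustration, not the original test
data:]

$ zfs create testpool/repro
$ cd /testpool/repro
$ # create 10 directories x 5,000 empty files = 50,000 zero-length files
$ perl -e 'for $d (0..9) { mkdir "dir$d"; for $f (0..4999) { open FH, ">dir$d/f$f"; close FH } }'
$ tar cf /tmp/repro.tar . && gzip /tmp/repro.tar
$ mkdir /testpool/repro2 && cd /testpool/repro2
$ ptime gunzip -c /tmp/repro.tar.gz | tar xf -

[Note that, as in the original test, ptime times the gunzip side of the
pipe; since gunzip blocks whenever tar falls behind, its real time should
still approximate the whole extract.]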
I don't know if there's something in particular about the small files,
but one of our three main areas of focus for performance work in the
coming weeks is tar performance. We've made some improvements to the
lookup() and stat() paths in build 28, but there's still a bunch of work
that needs to be done. Eric Kustarz might have some insight into whether
this is exhibiting some pathological result or whether it's just more of
the same overall work that is in progress.

- Eric

On Sun, Nov 20, 2005 at 03:42:04PM -0600, Al Hopper wrote:
> First - many, many congrats to team ZFS. Developing/writing a new Unix fs
> is a very non-trivial exercise with zero tolerance for developer bugs.
> [... full original message quoted ...]

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
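[As an aside, a couple of standard DTrace one-liners can show where that
system time goes while the extract runs. This is a sketch to be run as
root in another window; the 30-second windows and the top-10 truncation
are arbitrary choices:]

# which system calls dominate during the untar:
dtrace -n 'syscall:::entry { @[probefunc] = count(); } tick-30s { exit(0); }'

# which kernel code paths are burning the CPU:
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30s { trunc(@, 10); exit(0); }'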
Eric Schrock wrote:
> I don't know if there's something in particular about the small files,
> but one of our three main areas of focus for performance work in the
> coming weeks is tar performance. We've made some improvements to the
> lookup() and stat() paths in build 28, but there's still a bunch of work
> that needs to be done. Eric Kustarz might have some insight into whether
> this is exhibiting some pathological result or whether it's just more of
> the same overall work that is in progress.

Yeah, I'll have to run a similar test myself to see exactly what's going
on... but I'll definitely include it as part of making tar go fast.

eric
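[For anyone following along, the per-file system-call pattern that tar
generates can be summarized with truss's count mode; -c prints a summary
of syscall counts and times when the command exits instead of a
line-per-call trace. A sketch, reusing Al's archive name:]

$ gunzip -c /tmp/alltestdir.tar.gz | truss -c tar xf -

[With the system running ~123,000 calls/sec, the summary should make it
clear whether the open/close/metadata traffic per extracted file, or
something else, is where the time goes.]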