I have seen a few arguments around the net for ZFS on hardware RAID, so I am posting the results of bonnie++ on a T3B. I am using snv_55b; the T3B has the latest firmware (read and write cache on) and is configured with 8 disks in RAID-5 plus one hot spare. The bonnie++ command was:

  bonnie++ -d /t3 -r 8192 -u amo

1) ufs (default options)

                ------Sequential Output------ --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
           Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
            16G 23289  98 78140  84 21509  42 21272  99 77029  87 830.2  15

                ------Sequential Create------ --------Random Create--------
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16  2439  36 +++++ +++  2441  38  5316  71 +++++ +++  6529  88

2) ufs with directio

                ------Sequential Output------ --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
           Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
            16G  3711  20  9391  20  2287   7  8384  42 11955  17 660.9   7

                ------Sequential Create------ --------Random Create--------
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16  3710  66 +++++ +++  2453  46  5641  73 +++++ +++  6587  91

3) zfs

                ------Sequential Output------ --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
           Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
            16G 24383  98 86654  99 38173  72 22066  98 86612  79 794.5  11

                ------Sequential Create------ --------Random Create--------
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16  6377  99 +++++ +++ 11079  99  6692  99 +++++ +++ 11039  99

4) zfs with compression

                ------Sequential Output------ --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
           Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
            16G 26049  99 76441  88 39937  98 20058  99 101699  92  3254 191

                ------Sequential Create------ --------Random Create--------
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16  6672  98 +++++ +++ 12008 100  7207 100 +++++ +++ 12165  99
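For anyone wanting to reproduce the ZFS runs, here is a minimal sketch of the setup; the LUN device (c2t1d0), pool name (t3pool) and dataset layout are my assumptions, since the post only gives the mount point /t3 and the bonnie++ command.

  # Assumed device and pool names; only the /t3 mount point comes from the post.
  zpool create t3pool c2t1d0              # whole T3B RAID-5 LUN as a single vdev
  zfs create t3pool/t3
  zfs set mountpoint=/t3 t3pool/t3
  zfs set compression=on t3pool/t3        # only for run 4, "zfs with compression"

Run 3 would be measured before the compression property is turned on, run 4 after.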
I have attached a "readable" txt format of the test.

Version 1.03         ------Sequential Output------ --Sequential Input- --Random-
                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine         Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ufs_cache_off    16G 22659  96 28608  31   709   1 21056  99 68828  78 504.0   9
ufs_cache_on     16G 23289  98 78140  84 21509  42 21272  99 77029  87 830.2  15
ufs_directio     16G  3711  20  9391  20  2287   7  8384  42 11955  17 660.9   7
zfs              16G 24383  98 86654  99 38173  72 22066  98 86612  79 794.5  11
zfs_compression  16G 26049  99 76441  88 39937  98 20058  99 101699  92 3254.3 191

                     ------Sequential Create------ --------Random Create--------
                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min         /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
ufs_cache_off     16  1245  25 +++++ +++  1170  20  2163  21 +++++ +++   678   9
ufs_cache_on      16  2439  36 +++++ +++  2441  38  5316  71 +++++ +++  6529  88
ufs_directio      16  3710  66 +++++ +++  2453  46  5641  73 +++++ +++  6587  91
zfs               16  6377  99 +++++ +++ 11079  99  6692  99 +++++ +++ 11039  99
zfs_compression   16  6672  98 +++++ +++ 12008 100  7207 100 +++++ +++ 12165  99
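For reference, the ufs_directio row would normally just be a forcedirectio mount of the same UFS filesystem; the device path below is a placeholder, not something given in the post.

  # Placeholder device path; forcedirectio is the standard UFS mount option.
  mount -F ufs -o forcedirectio /dev/dsk/c2t1d0s0 /t3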
I ran tests of hardware RAID versus all-software RAID and didn't see any differences that made either a clear winner. For production platforms you're just as well off with JBODs. This was with bonnie++ on a V240 running Solaris 10u3, against a 3511 array fully populated with 12 380-gig drives, single controller with 1 GB of RAM and 415G firmware. I tried:

  HW RAID-5 (11 disks, one hot spare)
  2 HW RAID-5 LUNs, with ZFS mirroring on top of that

Then I made each disk an individual LUN and ran:

  RAIDZ2 (11 disks in the set, 1 hot spare)
  ZFS RAID-10 across all 12 disks

They weren't precisely comparable setups, but they were useful tests of the kind of configurations you might run if handed random older hardware (rough zpool sketches below). The ZFS RAID-10 came out ahead, particularly on reads; if I had to pick, I would go with that.

I understand the state of ZIL cache flushing is such that the HW RAID cache features are essentially wasted. I read this is being worked on, but it's not clear whether the fixes are in nv69 or not. Perhaps once that is all fixed it will make sense again to utilize controller hardware you already have. I have an nv69 box; if I can swipe a 3310 or 3511 array for it, I'll run bonnie++ on that later in the week.
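Here is a rough sketch of what the two all-ZFS layouts could look like as zpool commands; the cXtYdZ device names and the pool name "tank" are invented for illustration, since the post doesn't list the actual LUNs.

  # RAIDZ2 across 11 LUNs with one hot spare (device names are made up):
  zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
      c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 spare c3t11d0

  # ZFS "RAID-10": six two-way mirrors striped across all 12 LUNs:
  zpool create tank mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0 \
      mirror c3t4d0 c3t5d0 mirror c3t6d0 c3t7d0 \
      mirror c3t8d0 c3t9d0 mirror c3t10d0 c3t11d0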