I'm building a file-serving machine for my home network, and thought I'd take the opportunity to collect some quick benchmarks of Solaris/ZFS versus Linux/ext3 and Linux/XFS on the same hardware before going into 'production'.

The test setup is as follows:

I performed one Bonnie++ 1.03 run for each filesystem type locally on the fileserving box, and one run for each filesystem type from a client machine, using NFS.

Bonnie++ was compiled from source using gcc on all platforms (4.0.2 on Linux, 3.4.3 on Solaris). The NFS client machine is a ThinkPad T42p running Fedora Core 4, with 1GB of memory and a 1.8GHz Dothan with SpeedStep locked at 1.8GHz. The NFS client machine's NIC was the onboard Intel 82540EP. The switch used for all tests was a D-Link DGS-1008D.

The fileserver was built using an MSI Neo4-F motherboard, an AMD64 3200+, 2x512MB Corsair VS (CAS 2.5) RAM, and 4x300GB Seagate 7200.8 SATA drives connected to the onboard (Nvidia) SATA controllers. The NIC used was the onboard one (Marvell 88E1111 PHY). The Solaris install was Solaris Express Build 28.

To minimize debates and nitpicking, and because I had a limited amount of time to play with this before putting it into use, I used strictly default parameters for all volume and filesystem creation - no non-default options of any sort were used, and absolutely no tuning or optimization of the kernels, filesystems, or anything else was performed.

I reserved the first 10GB of each disk for OS installations, and used the remaining ~280GB on each disk to create one large (~840GB usable) test volume. I used Linux software RAID-5 for the Linux tests, and RAID-Z for the Solaris tests (a rough sketch of the creation commands appears after the results). I ensured there was no other activity on the test machines during test runs, and that the test arrays were not syncing, scrubbing, or rebuilding.

The Bonnie++ command line used (with only the pathname parameter changing between tests) was:

bonnie++ -r 1024 -s 8g -u nobody:nobody -d /mnt/huge/test

(The 8GB file size keeps the working set well beyond the 1GB of RAM on either machine, so caching shouldn't dominate the results.)

Results are as follows. Sorry for the formatting, I couldn't find an option to specify using a fixed-width font (which would be a very handy feature, by the way, given the amount of terminal-formatted info and code that gets pasted here).

I'll let you all make your own analysis of these results - I've made mine, and I'll be going ahead with using ZFS for this project.
[b]Solaris with ZFS, local run[/b]:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
polaris          8G 66401  73 89250  14 57942  14 65119  87 144274 29 288.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 32626  99 +++++ +++ +++++ +++

[b]Solaris with ZFS, NFS run[/b]:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monolith         8G 29634  94 34089   8 18441  14 25174  88 58458  24 100.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   101   0  3364  10    94   0   100   0  3878  10    96   0

[b]Linux with ext3, local run[/b]:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
polaris          8G 41097  76 54868  17 32639   9 48572  90 140578 22 225.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 24612  99 +++++ +++ +++++ +++ 24563  99 +++++ +++ +++++ +++

[b]Linux with ext3, NFS run[/b]:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monolith         8G 29213  93 27707   6 20286  14 26864  94 58242  23 211.9   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   221   1  3363  10   187   0   208   1  3410   9   189   0

[b]Linux with XFS, local run[/b]:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
polaris          8G 50761  85 111591 15 32725   8 49101  89 169308 27 309.0   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1388  16 +++++ +++  1281   9  1275  14 +++++ +++  1236   9

[b]Linux with XFS, NFS run[/b]:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monolith         8G 29839  94 35471   8 26799  22 27468  96 76076  33 248.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   259   1  3327   9   306   1   261   1  3870  10    91   0
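P.S. For anyone wanting to reproduce the setup: the device and slice names below are illustrative (I didn't save my shell history), but with everything left at defaults the volume creation amounted to roughly the following.

# Linux: software RAID-5 across the four ~280GB data partitions,
# then a default mkfs (ext3 shown; mkfs.xfs /dev/md0 for the XFS runs)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/huge

# Solaris: RAID-Z pool across the matching slices; ZFS creates and
# mounts the filesystem itself, so there's no separate newfs/mount step
zpool create huge raidz c0t0d0s7 c0t1d0s7 c0t2d0s7 c0t3d0s7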
Nice analysis... Not that the performance should differ for Bonnie++, but out of curiosity, what versions of NFS (3 or 4) were you running on Solaris / Linux? I believe Fedora Core 4 has NFSv4 support (don't think it's the default though), and Solaris 10's default is v4.

Oh, and glad to hear you went with ZFS!

eric

Ben Lazarus wrote:
> [original post and benchmark results snipped]
Ben Lazarus
2006-Jan-12 01:04 UTC
[zfs-discuss] Re: ZFS benchmark results vs. ext3 and XFS
I didn't really do any analysis, but I'm glad someone found the results interesting. I do think FC4 has v4 support, but the mount came up as v3. I'm not sure if there's a setting I need to hit somewhere on the FC4 box, since I know that as of Solaris 10, the server will negotiate to v4 if possible.
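For what it's worth, poking around suggests that on a 2.6-era Linux client NFSv4 is a separate filesystem type that has to be requested explicitly rather than negotiated - something like the following, though I haven't verified the exact options on this box.

# Request NFSv4 explicitly (note: with v4 the path is relative to the
# server's pseudo-root, so it may differ from the v3 export path)
mount -t nfs4 polaris:/mnt/huge /mnt/huge

# Or pin an ordinary NFS mount at v3
mount -t nfs -o nfsvers=3 polaris:/mnt/huge /mnt/huge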
Ben Lazarus
2006-Jan-12 05:50 UTC
[zfs-discuss] Re: ZFS benchmark results vs. ext3 and XFS
I did a run using NFSv4 for you - same machines, but not precisely the same conditions, since the array now has about 500GB of data on it. You're right though, it wasn't too different:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monolith         8G 26280  85 32892   8 17340  13 22530  82 50489  23 127.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   104   0  3626  11    99   0   103   0  3820  11   100   0