Hi,

I have done some performance tests on ZFS to see how it behaves against
LVM.

The hardware I am using is a 2-processor Sun 450 with 2 GB of memory. I
have 8 internal 18 GB drives to play with.

I wanted to see how ZFS performs on sequential database writes, so I am
using Sybase 12.5.3 and loading a database with the bcp command. The
database consists of 73 tables holding a total of 1082731 rows.

TEST CASES

1) Load Sybase db on raw LVM devices, RAID 0 with soft partitions (4 disks)
2) Load Sybase db on ZFS emulated volumes, pool with no RAID (4 disks)
3) Load Sybase db on raw LVM devices, RAID 1+0 with soft partitions (8 disks)
4) Load Sybase db on ZFS emulated volumes, mirrored pool (8 disks)
5) Load Sybase db on raw LVM devices, RAID 5 with soft partitions (4 disks)
6) Load Sybase db on ZFS emulated volumes, raidz pool (4 disks)

TEST RESULTS

1) real 12m32.281s   user 4m34.156s   sys 1m26.404s
2) real 12m40.864s   user 4m32.500s   sys 1m34.996s
3) real 13m34.049s   user 4m44.539s   sys 1m23.716s
4) real 15m11.560s   user 4m44.053s   sys 1m23.055s
5) real 24m56.215s   user 4m42.034s   sys 1m19.429s
6) real 17m37.198s   user 4m45.943s   sys 1m27.435s

CONCLUSIONS

Shouldn't ZFS be faster? I think it is, but what I noticed is that ZFS
is more CPU intensive when a pool is configured with some kind of RAID,
and raidz is more CPU intensive than mirroring. It was a surprise that
ZFS with a mirrored pool didn't perform as well as a mirrored LVM
layout. ZFS's raidz beat LVM RAID 5 by miles, which shouldn't be any
surprise.

I will re-run the tests using filesystems instead of raw devices.

DETAILED INFO ABOUT THE TEST CONFIGS

Test 1)
Sybase binaries are installed on UFS (/opt). The Sybase database is on
raw devices using soft partitions on a four-wide stripe.

LVM LAYOUT

    d10 1 4 c3t0d0s0 c3t1d0s0 c3t2d0s0 c2t2d0s0 -i 16b
    d20 -p d10 -o 32      -b 204800    (master device)
    d30 -p d10 -o 204848  -b 245760    (sybsystemprocs device)
    d40 -p d10 -o 450624  -b 4194304   (datadev1)
    d50 -p d10 -o 4644944 -b 2097152   (logdev1)

Note: LVM didn't allow setting a stripe size below 8k.
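For completeness, the layout can be brought up with SVM roughly as
follows. This is a sketch that assumes the metadevice definitions above
are placed in /etc/lvm/md.tab; offsets and sizes are in blocks, as
listed.

    # /etc/lvm/md.tab -- four-wide stripe with a 16-block (8k)
    # interlace, plus the four soft partitions carved out of it
    d10 1 4 c3t0d0s0 c3t1d0s0 c3t2d0s0 c2t2d0s0 -i 16b
    d20 -p d10 -o 32 -b 204800
    d30 -p d10 -o 204848 -b 245760
    d40 -p d10 -o 450624 -b 4194304
    d50 -p d10 -o 4644944 -b 2097152

Then activate everything defined in md.tab in one go:

    metainit -a

Sybase is then pointed at the raw metadevice nodes under /dev/md/rdsk/.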
Test 2)
Sybase binaries are installed on ZFS (/data/sybase). The Sybase
databases are on emulated volumes in a pool with no RAID.

ZFS CONFIG

      pool: data
     state: ONLINE
     scrub: scrub completed with 0 errors on Fri Dec  2 08:19:31 2005
    config:

            NAME        STATE     READ WRITE CKSUM
            data        ONLINE       0     0     0
              c2t0d0    ONLINE       0     0     0
              c2t1d0    ONLINE       0     0     0
              c2t2d0    ONLINE       0     0     0
              c2t3d0    ONLINE       0     0     0

    NAME                         USED  AVAIL  REFER  MOUNTPOINT
    data                        3.88G  62.6G  8.50K  /data
    data/sybase                  680M  62.6G   680M  /data/sybase
    data/sybase_datadev1        1.00G  63.6G  1.00G  -
    data/sybase_logdev1          216M  63.4G   216M  -
    data/sybase_master          48.5M  62.6G  48.5M  -
    data/sybase_sybsystemprocs   122M  62.6G   122M  -

Volblocksize is set to 2k, because Sybase in this case is reading and
writing in 2k blocks.

Test 3)
Sybase binaries are installed on UFS (/opt). The Sybase database is on
raw devices using soft partitions on a RAID 0+1 config.

LVM LAYOUT

    d10 -m d11 d12 1
    d11 1 4 c3t0d0s0 c3t1d0s0 c3t2d0s0 c3t3d0s0 -i 16b
    d12 1 4 c2t0d0s0 c2t1d0s0 c2t2d0s0 c2t3d0s0 -i 16b
    d20 -p d10 -o 32      -b 204800    (master device)
    d30 -p d10 -o 204848  -b 245760    (sybsystemprocs device)
    d40 -p d10 -o 450624  -b 4194304   (datadev1)
    d50 -p d10 -o 4644944 -b 2097152   (logdev1)

Note: LVM didn't allow setting a stripe size below 8k.

Test 4)
Sybase binaries are installed on ZFS (/data/sybase). The Sybase
databases are on emulated volumes in a mirrored pool.

ZFS CONFIG

      pool: data
     state: ONLINE
     scrub: scrub completed with 0 errors on Fri Dec  2 10:55:21 2005
    config:

            NAME        STATE     READ WRITE CKSUM
            data        ONLINE       0     0     0
              mirror    ONLINE       0     0     0
                c2t0d0  ONLINE       0     0     0
                c2t1d0  ONLINE       0     0     0
                c2t2d0  ONLINE       0     0     0
                c2t3d0  ONLINE       0     0     0
              mirror    ONLINE       0     0     0
                c3t0d0  ONLINE       0     0     0
                c3t1d0  ONLINE       0     0     0
                c3t2d0  ONLINE       0     0     0
                c3t3d0  ONLINE       0     0     0

    NAME                         USED  AVAIL  REFER  MOUNTPOINT
    data                        3.92G  29.3G  8.50K  /data
    data/sybase                  680M  29.3G   680M  /data/sybase
    data/sybase_datadev1        2.03G  29.3G  2.03G  -
    data/sybase_logdev1         1.01G  29.3G  1.01G  -
    data/sybase_master           101M  29.3G   101M  -
    data/sybase_sybsystemprocs   122M  29.3G   122M  -

Volblocksize is set to 2k, because Sybase in this case is reading and
writing in 2k blocks.

Test 5)
Sybase binaries are installed on UFS (/opt). The Sybase database is on
raw devices using soft partitions on a RAID 5 config.

LVM LAYOUT

    d10 -r c2t0d0s0 c2t1d0s0 c2t2d0s0 c2t3d0s0 -k -i 16b
    d20 -p d10 -o 32      -b 204800    (master device)
    d30 -p d10 -o 204848  -b 245760    (sybsystemprocs device)
    d40 -p d10 -o 450624  -b 4194304   (datadev1)
    d50 -p d10 -o 4644944 -b 2097152   (logdev1)

Note: LVM didn't allow setting a stripe size below 8k.

Test 6)
Sybase binaries are installed on ZFS (/data/sybase). The Sybase
databases are on emulated volumes in a raidz pool.

ZFS CONFIG

      pool: data
     state: ONLINE
     scrub: scrub completed with 0 errors on Fri Dec  2 14:45:42 2005
    config:

            NAME        STATE     READ WRITE CKSUM
            data        ONLINE       0     0     0
              raidz     ONLINE       0     0     0
                c2t0d0  ONLINE       0     0     0
                c2t1d0  ONLINE       0     0     0
                c2t2d0  ONLINE       0     0     0
                c2t3d0  ONLINE       0     0     0

    NAME                         USED  AVAIL  REFER  MOUNTPOINT
    data                        5.75G  60.7G  17.0K  /data
    data/sybase                  910M  60.7G   910M  /data/sybase
    data/sybase_datadev1        3.02G  60.7G  3.02G  -
    data/sybase_logdev1         1.51G  60.7G  1.51G  -
    data/sybase_master           151M  60.7G   151M  -
    data/sybase_sybsystemprocs   183M  60.7G   183M  -

Volblocksize is set to 2k, because Sybase in this case is reading and
writing in 2k blocks.
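For reference, the pools and emulated volumes were created along these
lines. This is a sketch: the -b flag sets the volblocksize at creation
time (it cannot be changed afterwards), and the volume sizes are
rounded from the listings above.

    # Test 2: plain striped pool across the four disks
    zpool create data c2t0d0 c2t1d0 c2t2d0 c2t3d0

    # Test 6 used a raidz pool over the same disks instead:
    # zpool create data raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

    # emulated volumes with a 2k volblocksize to match Sybase's 2k page size
    zfs create -b 2k -V 1g   data/sybase_datadev1
    zfs create -b 2k -V 216m data/sybase_logdev1
    zfs create -b 2k -V 48m  data/sybase_master
    zfs create -b 2k -V 122m data/sybase_sybsystemprocs

Sybase is then pointed at the raw device nodes under /dev/zvol/rdsk/data/.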
On Fri, Dec 02, 2005 at 06:49:09AM -0800, Patrik Gustavsson wrote:
> Hi,
>
> I have done some performance tests on ZFS to see how it behaves
> against LVM.
>
> The hardware I am using is a 2-processor Sun 450 with 2 GB of memory.
> I have 8 internal 18 GB drives to play with.
>
> I wanted to see how ZFS performs on sequential database writes, so I
> am using Sybase 12.5.3 and loading a database with the bcp command.
> The database consists of 73 tables holding a total of 1082731 rows.

Interesting results; what version of OpenSolaris/Solaris are you
running? I.e., what is the output of:

    /usr/ccs/bin/what /usr/sbin/zfs

If that contains "Internal Development:", you are running DEBUG bits,
and performance comparisons will not be accurate.

Cheers,
- jonathan

-- 
Jonathan Adams, Solaris Kernel Development
On Fri, Dec 02, 2005 at 06:49:09AM -0800, Patrik Gustavsson wrote:
> I have done some performance tests on ZFS to see how it behaves
> against LVM.

Excellent. Thanks for taking the time to share your findings.

> TEST RESULTS
>
> 3) real 13m34.049s   user 4m44.539s   sys 1m23.716s
> 4) real 15m11.560s   user 4m44.053s   sys 1m23.055s
>
> CONCLUSIONS
>
> Shouldn't ZFS be faster? I think it is, but what I noticed is that
> ZFS is more CPU intensive when a pool is configured with some kind of
> RAID, and raidz is more CPU intensive than mirroring. It was a
> surprise that ZFS with a mirrored pool didn't perform as well as a
> mirrored LVM layout. ZFS's raidz beat LVM RAID 5 by miles, which
> shouldn't be any surprise.
>
> DETAILED INFO ABOUT THE TEST CONFIGS
>
> Test 4)
> Sybase binaries are installed on ZFS (/data/sybase). The Sybase
> databases are on emulated volumes in a mirrored pool.
>
> ZFS CONFIG
>
>       pool: data
>      state: ONLINE
>      scrub: scrub completed with 0 errors on Fri Dec  2 10:55:21 2005
>     config:
>
>             NAME        STATE     READ WRITE CKSUM
>             data        ONLINE       0     0     0
>               mirror    ONLINE       0     0     0
>                 c2t0d0  ONLINE       0     0     0
>                 c2t1d0  ONLINE       0     0     0
>                 c2t2d0  ONLINE       0     0     0
>                 c2t3d0  ONLINE       0     0     0
>               mirror    ONLINE       0     0     0
>                 c3t0d0  ONLINE       0     0     0
>                 c3t1d0  ONLINE       0     0     0
>                 c3t2d0  ONLINE       0     0     0
>                 c3t3d0  ONLINE       0     0     0

I should point out here that the reason you most likely saw ZFS being
slower in your tests is that you didn't create the pool config you
thought. What you created was two 4-way mirrored vdevs. I think what
you meant to do was this:

    zpool create data mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0 mirror c2t3d0 c3t3d0

This command will create 4 mirrored pairs, which is what I think you
were after. Try that out and see if it makes a difference.
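For what it's worth, you can sanity-check the result afterwards: zpool
status should show four two-way mirror vdevs, and zpool iostat can
confirm the writes spreading across all four pairs during the load:

    zpool status data
    zpool iostat -v data 5

--Bill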
The zfs version is:

    /usr/sbin/zfs:
            SunOS 5.11 snv_27 October 2007