I have been testing the performance of ZFS vs. UFS using filebench. The setup is a V240, 4GB RAM, two CPUs at 1503MHz, one 320GB SAN-attached LUN, and a ZFS mirrored root disk. Our SAN is a top-notch NVRAM-based array.

There are lots of discussions about using ZFS with SAN-based storage, and it seems ZFS is designed to perform best with dumb disks (JBODs). The tests I ran support this observation: no matter which ZFS kernel tunables (zfs_params) I adjust, I just can't seem to get the performance from ZFS that I can get out of UFS under the Solaris Volume Manager (SVM). I am using the single-LUN test because it performed better than any striping configuration I came up with. We don't use software RAID of any kind, because the SAN does it all for us.

One interesting test revealed better performance using the SMI label on our LUNs than the EFI label. This is true for the fileserver, large_db_oltp_8k_uncached, and large_db_oltp_8k_cached workloads from filebench. The fileserver differences were not that great, but the db workloads performed 4x better with the SMI-labeled LUNs than with the EFI-labeled LUNs.

Does anyone know why ZFS would perform better with SMI-labeled LUNs than EFI-labeled LUNs? Is this the way it's supposed to be? Thanks.
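In case it helps anyone reproduce this, here is roughly how the two label cases come about (the device name below is made up; as I understand it, giving zpool the whole disk makes ZFS write an EFI label, while giving it a slice keeps the existing SMI/VTOC label):

  # Whole disk: ZFS relabels it EFI
  zpool create tank c4t600A0B800011652Ed0

  # Slice: the existing SMI (VTOC) label stays in place
  zpool create tank c4t600A0B800011652Ed0s0

  # format in expert mode offers the SMI/EFI choice when relabeling by hand
  format -e c4t600A0B800011652Ed0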
Eric C. Taylor
2009-Apr-23 15:18 UTC
[zfs-code] ZFS SMI vs EFI performance using filebench
(zfs-discuss would be a better forum for this than zfs-code)

You don't mention what release you're running, but before 6532509 (which was integrated in s10u6), partitions on EFI-labeled disks started on block 34, which could account for it.

- Eric
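(For what it's worth, the arithmetic behind the block-34 point, assuming standard 512-byte blocks; the device name below is again made up:)

  # EFI/GPT reserves the first 34 blocks of the disk:
  #   LBA 0      protective MBR
  #   LBA 1      GPT header
  #   LBA 2-33   partition entry array
  # so the first partition starts at byte offset
  #   34 * 512 = 17408
  # which is not a multiple of 8192:
  #   17408 % 8192 = 1024
  # Every 8K OLTP I/O can therefore straddle an array-side boundary
  # and become a read-modify-write inside the SAN. Slices on an SMI
  # label start on cylinder boundaries, which on this sort of LUN
  # usually works out to an aligned offset.

  # Check where a partition actually starts:
  prtvtoc /dev/rdsk/c4t600A0B800011652Ed0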