Andre Doeking
2009-Oct-07 12:09 UTC
[zfs-discuss] Poor write / read performance on a mirrored HSV 200/300 volume
Hello everybody,

we are installing a virtual storage array using the newest OpenSolaris developer build with COMSTAR and ZFS. We created a mirrored ZFS volume consisting of two 50 GB FC volumes, one from an HP EVA 4400 and one from an HP EVA 4100. The connection is 4 Gb/s FC; the OpenSolaris server has two dual-port FC HBAs, with two ports in initiator mode and two in target mode. This mirrored ZFS volume was presented to a VMware ESX server.

The read and write performance is very poor; strangely, the write performance is even better than the read performance. We tested the performance using Storage vMotion with a 30 GB file.

My /etc/system contains the following parameters:

set zfs:zfs_prefetch_disable = 1
set zfs:zfs_vdev_cache_bshift = 13
set zfs:zfs_vdev_max_pending = 10
set zfs:zfs_nocacheflush = 1

But it seems that the ZFS volume still uses a cache; see the attached performance graph. The write throughput goes up and down, as if ZFS were still issuing cache flushes. I hope somebody has an idea; the storage arrays are capable of much better performance than we are seeing.

Yours sincerely
André Döking
Ärztekammer Westfalen-Lippe
Germany

[Attachment: performance.jpg — <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20091007/59f9661a/attachment.jpg>]
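[Editor's note: for readers reproducing the setup described above, a configuration like this is typically built along the following lines. This is a minimal sketch only; the pool name, zvol name and size, and the device names of the two EVA LUNs are placeholders, not taken from the original post.

# mirror the two EVA FC LUNs into one pool (device names are placeholders)
zpool create tank mirror c2t<EVA4400-LUN>d0 c3t<EVA4100-LUN>d0

# create a zvol and register it as a COMSTAR logical unit
zfs create -V 40g tank/esxvol
sbdadm create-lu /dev/zvol/rdsk/tank/esxvol

# make the LU visible to initiators (the GUID comes from the sbdadm output)
stmfadm add-view <GUID>

# confirm the FC target ports are online
stmfadm list-target -v

The ESX host then sees the zvol as an ordinary FC LUN once zoning and views are in place.]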
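[Editor's note: settings in /etc/system only take effect after a reboot, which is one common reason a graph still shows cache-flush behavior. The tunables listed above can be read, and changed on the fly, from the running kernel with mdb; a minimal sketch, assuming the kernel variable names of OpenSolaris builds of that era:

# read the current in-kernel values (decimal)
echo "zfs_nocacheflush/D" | mdb -k
echo "zfs_prefetch_disable/D" | mdb -k
echo "zfs_vdev_max_pending/D" | mdb -k

# change a value immediately, without a reboot (0t = decimal)
echo "zfs_nocacheflush/W0t1" | mdb -kw

If mdb reports 0 for zfs_nocacheflush, the /etc/system entry has not been applied yet.]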