Effrem Norwood
2010-Jun-29 13:33 UTC
[zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance
Hi All,

I created a zvol with the following options on an X4500 with 16 GB of RAM:

zfs create -s -b 64K -V 250T tank/bkp

I then enabled dedup and compression and exported it to Windows Server 2008 as iSCSI via COMSTAR. There it was formatted with a 64K cluster size, which is the NTFS default for volumes of this size. I/O performance in Windows is so slow that my backup jobs are timing out. Previously, when I created a much smaller volume using a "-b" of 4K and an NTFS cluster size of 4K, performance was excellent. What I would like to know is what to look at to figure out where the performance is going, e.g. whether it's the blocksize, COMSTAR, etc. CPU consumption according to top is very low, so compression seems unlikely. The L2ARC is 11 GB and there is 4 GB+ of free memory and no swapping, so dedup seems unlikely as well. I have looked at the latencytop utility for clues but am not familiar enough with the code to conclude anything useful.

Thanks!
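For reference, the relevant settings can be double-checked with something like the following (a sketch; tank/bkp and the pool name tank are taken from the create command above, output omitted):

  # zvol block size, reservation, and the two features that were enabled
  zfs get volblocksize,refreservation,compression,dedup tank/bkp

  # pool capacity and overall dedup ratio
  zpool list tank

  # ARC / L2ARC sizing via the kernel debugger
  echo "::arc" | mdb -k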
Josh Simon
2010-Jun-29 20:45 UTC
[zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance
Have you tried creating tank/bkp without the -s option? I believe I read somewhere that the -s option can lead to poor performance on larger volumes (which doesn't make sense to me). Also, are you using a ZIL/log device?

Josh Simon

On 06/29/2010 09:33 AM, Effrem Norwood wrote:
> I created a zvol with the following options on an X4500 with 16GB of ram:
>
> zfs create -s -b 64K -V 250T tank/bkp
> [...]
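P.S. Roughly what I had in mind, as a sketch (tank/bkp2 and the 10T size are placeholders; a non-sparse reservation has to fit in the pool, so size it to what the pool can actually back):

  # check whether the pool already has a dedicated log (ZIL) device
  zpool status tank

  # recreate the backing volume without -s so the reservation is allocated up front
  zfs create -b 64K -V 10T tank/bkp2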
Mike La Spina
2010-Jul-01 02:40 UTC
[zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance
Hi Eff,

There are a significant number of variables to work through with dedup and compression enabled, so my first suggestion is to disable those features for now so you're not working with too many elements at once.

With those features set aside, an NTFS cluster operation does not equal a 64K raw I/O block, and likewise a 64K ZFS blocksize does not equal one I/O operation. We may also need to consider overall network performance, iSCSI protocol characteristics, and the Windows network stack; iperf is a good tool to rule those out.

What I primarily suspect is that write I/O operations are not aligned and are waiting for I/O completion across multiple vdevs. Alignment is important for write I/O optimization, and how the I/O maps onto the software RAID layout makes a significant difference to the DMU and SPA operations on a given vdev layout.

You may also have an issue with write cache operations. By default, large I/O calls such as 64K will not use a ZIL cache vdev, if you have one defined, but will be written directly to your array vdevs, which also includes a transaction group write operation. To ensure ZIL log usage for 64K I/Os you can edit the /etc/system file with:

set zfs:zfs_immediate_write_sz = 131071

A reboot is required to activate the system file change.

You have also not indicated what your zpool configuration looks like; that would be helpful in this discussion.

It appears that you're applying the X4500 as a backup target, which means you should (if not already) enable write caching on the COMSTAR LU properties for this type of application, e.g.:

stmfadm modify-lu -p wcd=false 600144F02F22800000004C1D62010001

To help triage the perf issue further, you could post two 'kstat zfs' and two 'kstat stmf' outputs taken on a 5 minute interval, plus a 'zpool iostat -v 30 5', which would help visualize the I/O behavior.

Regards,

Mike
http://blog.laspina.ca/
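P.S. A rough collection sequence, as a sketch (the LU GUID below is just the example from above, substitute your own from 'stmfadm list-lu'; the output file names are arbitrary):

  # take dedup and compression out of the picture while testing
  zfs set dedup=off tank/bkp
  zfs set compression=off tank/bkp

  # enable write caching on the LU (wcd = "write cache disabled")
  stmfadm modify-lu -p wcd=false 600144F02F22800000004C1D62010001

  # two kstat snapshots 5 minutes apart, plus pool-level I/O stats
  kstat zfs > kstat-zfs.1 ; kstat stmf > kstat-stmf.1
  sleep 300
  kstat zfs > kstat-zfs.2 ; kstat stmf > kstat-stmf.2
  zpool iostat -v tank 30 5 > zpool-iostat.txt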