I have set up a pool called vmstorage and mounted it as NFS storage in ESX4i.
The pool in FreeNAS contains 4 SATA2 disks in raidz. I have 6 VMs (5 Linux and 1 Windows) and performance is terrible.

Any suggestions on improving the performance of the current setup?

I have added vfs.zfs.prefetch_disable=1, which improved performance slightly.
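(For reference, I set that tunable in /boot/loader.conf on the FreeNAS box — assuming that is the right place for it — roughly like this, and rebooted afterwards:

    vfs.zfs.prefetch_disable="1"
)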
> I have set up a pool called vmstorage and mounted it as NFS storage in ESX4i.
> The pool in FreeNAS contains 4 SATA2 disks in raidz. I have 6 VMs (5 Linux and 1 Windows) and performance is terrible.
>
> Any suggestions on improving the performance of the current setup?
>
> I have added vfs.zfs.prefetch_disable=1, which improved performance slightly.

You're leaving out far too much to even remotely give a useful answer.
What controller are the discs on? What type of disc (model #)?
What are you using as a server? What does the network throughput look like when perf tanks?
Is that part of the bottleneck (lots you can do to mitigate that then)?

Post more info...
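A quick sanity check on the network side (assuming you can get iperf onto the FreeNAS box and another machine on that gigabit segment) would be something like:

    # on the FreeNAS box
    iperf -s

    # on another host
    iperf -c <freenas-ip>

If that can't get close to wire speed, fix the network before touching ZFS.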
On Aug 14, 2009, at 2:31 PM, "Joseph L. Casale" <JCasale at activenetwerx.com> wrote:

>> I have set up a pool called vmstorage and mounted it as NFS storage in ESX4i.
>> The pool in FreeNAS contains 4 SATA2 disks in raidz. I have 6 VMs (5 Linux and 1 Windows) and performance is terrible.
>>
>> Any suggestions on improving the performance of the current setup?
>>
>> I have added vfs.zfs.prefetch_disable=1, which improved performance slightly.
>
> You're leaving out far too much to even remotely give a useful answer.
> What controller are the discs on? What type of disc (model #)?
> What are you using as a server? What does the network throughput look like when perf tanks?
> Is that part of the bottleneck (lots you can do to mitigate that then)?
>
> Post more info...

I might add too that unless you have something to cache synchronous writes, like an SSD slog or an NVRAM-backed controller, performance over NFS from ESX is going to be terrible: ESX issues each write O_DIRECT, which means bypass the write cache and go straight to disk.

-Ross
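If you do get hold of an SSD, the log device can be attached to the existing pool without rebuilding it — something along these lines, assuming your ZFS version is new enough to support separate log devices (the device name here is just a placeholder):

    zpool add vmstorage log da4

After that, the sync writes coming in over NFS from ESX land on the SSD instead of waiting on the raidz disks.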
The custom server has the following specs:

- 3GHz dual-core Intel processor
- 4GB RAM
- Intel onboard SATA controller
- 4x WD Green 250GB hard drives
- 2x Intel 1000GT network cards

Over NFS on the gigabit network interface, the average write speed is around 32MB/sec and the read speed is around 40MB/sec. I should have gotten at least 80MB/sec write.
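To rule out the network, I could also test the pool locally on the FreeNAS box with something like this (path and size are just examples; with only 4GB of RAM a file of a few GB should get past the cache):

    dd if=/dev/zero of=/mnt/vmstorage/ddtest bs=1M count=4096

Would local numbers much higher than the 32MB/sec I see over NFS point at the network or the NFS layer rather than the pool itself?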
Ashley Avileli wrote:
> I have set up a pool called vmstorage and mounted it as NFS storage in ESX4i.
> The pool in FreeNAS contains 4 SATA2 disks in raidz. I have 6 VMs (5 Linux and 1 Windows) and performance is terrible.
>
> Any suggestions on improving the performance of the current setup?
>
> I have added vfs.zfs.prefetch_disable=1, which improved performance slightly.

When I was still using ESX 3.5U2, it could not use my onboard nVidia MCP SATA controller, so I needed to use an external storage server. At that time I used Solaris nv build 101~103 (cannot remember exactly). I also had the option to go NFS or iSCSI, and when I tested both, iSCSI was faster than NFS. So I suggest you also try iSCSI instead of NFS, since FreeNAS can act as an iSCSI target as well.

Just my 2 cents,
Dedhi
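On the Solaris side I just carved out a zvol and shared it, roughly like this (size and names are from memory, so treat it as a sketch):

    zfs create -V 100g vmstorage/esxvol
    zfs set shareiscsi=on vmstorage/esxvol

On FreeNAS the equivalent is done through the web GUI (the iSCSI target service), pointing an extent at a zvol or a file, and then adding the target as a software iSCSI datastore in ESX4i.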
Hi Ashley,

A raidz group is OK for throughput, but by design the whole raidz group behaves like a single disk, so your max IOPS is around 100. I'd personally use RAID10 (striped mirrors) instead; a quick sketch of that layout follows the quoted message below.

Also, you seem to have no write cache, which can affect performance. Try using a log device.

Best regards,
Mertol

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com

From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Ashley Avileli
Sent: Friday, August 14, 2009 2:21 PM
To: zfs-discuss at opensolaris.org
Subject: [zfs-discuss] ZFS nfs performance on ESX4i

I have set up a pool called vmstorage and mounted it as NFS storage in ESX4i.
The pool in FreeNAS contains 4 SATA2 disks in raidz. I have 6 VMs (5 Linux and 1 Windows) and performance is terrible.

Any suggestions on improving the performance of the current setup?

I have added vfs.zfs.prefetch_disable=1, which improved performance slightly.
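A RAID10-style layout for your four disks would look roughly like this (device names are placeholders, and the pool has to be destroyed and recreated, so move the VMs off first):

    zpool destroy vmstorage
    zpool create vmstorage mirror ad4 ad6 mirror ad8 ad10

You lose half the raw capacity compared to raidz, but random IOPS roughly doubles because each mirror vdev can service requests independently.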