Hi,

I now have two pools:

rpool - 2-way mirror (PATA)
data  - 4-way raidz2 (SATA)

If I access datapool over the network (SMB, NFS, FTP, SFTP, etc.) I get only about 200 KB/s at most, compared to rpool, which gives XX MB/s to and from the network. Any ideas what the reason might be and how to track it down? Locally, datapool works reasonably fast for me:

# date && mkfile 1G testfile && date
Tuesday, March 23, 2010 07:52:19 AM EET
Tuesday, March 23, 2010 07:52:36 AM EET

Some information about the system:

# cat /etc/release
  OpenSolaris Development snv_134 X86
  Copyright 2010 Sun Microsystems, Inc. All Rights Reserved.
  Use is subject to license terms.
  Assembled 01 March 2010

# isainfo -v
64-bit amd64 applications
        ahf sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc
        cx8 tsc fpu
32-bit i386 applications
        ahf sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc
        cx8 tsc fpu

(A sketch of what I could try next to narrow this down is below.)

thanks
--
This message posted from opensolaris.org
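A sketch of how the disk side could be separated from the network side (assuming the testfile from the mkfile test above sits at /data/testfile; adjust the path to wherever it was actually created):

# dd if=/data/testfile of=/dev/null bs=1048576     (local sequential read, no network involved)
# zpool iostat data 5                              (pool-level read bandwidth; watch it during an ftp/nfs download)

Note that a file written moments earlier may still be in the ARC, so the dd number can look better than the disks really are; a larger file or a read after a reboot gives a more honest figure.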
Daniel Carosone
2010-Mar-23 07:20 UTC
[zfs-discuss] pool use from network poor performance
On Mon, Mar 22, 2010 at 10:58:05PM -0700, homerun wrote:
> If I access datapool over the network (SMB, NFS, FTP, SFTP, etc.)
> I get only about 200 KB/s at most, compared to rpool, which gives
> XX MB/s to and from the network.
>
> Any ideas what the reason might be and how to track it down?

Maybe a shared interrupt between the SATA controller and the network card, with devices or drivers that don't play well with others. A quick way to check for that is sketched below.

--
Dan.
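A rough sketch of how to look for shared interrupts (assuming an x86 box where intrstat is available and the ::interrupts dcmd is supported by the loaded APIC module):

# echo "::interrupts -d" | mdb -k      (lists interrupt vectors and which drivers share them)
# intrstat 5                           (per-device interrupt activity and CPU time; run it during a slow transfer)

If the NIC and SATA drivers show up on the same vector and one CPU is eaten by interrupt time in intrstat while a transfer crawls, that supports the shared-interrupt theory.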
Hi,

Well, what has changed in the system:

- replaced the 4 SATA disks with new, bigger disks
- at the same time recreated the pool, going from raidz to raidz2
- updated the OS from b132 to b134

It used to work with the old setup. Have there been any driver changes?
--
This message posted from opensolaris.org
What does prstat show? We had a lot of trouble here using iSCSI and zvols because the CPU was capping out at speeds under 20 MB/s. After simply switching to QLogic fibre HBAs and a file-backed LU we went to 160 MB/s on the same test platform. (A sketch of what to run during a slow transfer is below.)
--
This message posted from opensolaris.org
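A sketch of what to run during one of the slow downloads (standard OpenSolaris tools, nothing extra assumed):

# prstat -mL 5      (per-thread microstate accounting; look for a thread pinned near 100% in USR or SYS)
# mpstat 5          (per-CPU view; a CPU saturated by interrupt handling shows up in the intr/ithr columns)

If one CPU is maxed out while the others sit idle, that fits both the CPU-capping and the shared-interrupt explanations.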
Hi,

Here are some more specs.

MB: K8N4-E SE - AMD Socket 754 CPU - NVIDIA nForce 4 4X - PCI Express architecture - Gigabit LAN - 4 SATA RAID ports - 10 USB 2.0 ports
http://www.asus.com/product.aspx?P_ID=TBx7PakpparxrK89&templete=2

The situation now is this, with ftp:

- upload to datapool runs at ~45 MB/s
- download from datapool runs at only ~750 KB/s

So it is read performance that is the problem now. Could it really be that the NVIDIA network and SATA drivers now share the same IRQ and that is why performance is so slow?

Mar 23 19:35:01 <hostname> unix: [ID 954099 kern.info] NOTICE: IRQ20 is being shared by drivers with different interrupt levels.

This is just odd, because the issue appeared when all I did was change the pool's physical disks, go from raidz to raidz2, and update to build 134. (A way to confirm the sharing is sketched below.)
--
This message posted from opensolaris.org
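Since uploads are fast and downloads are slow, one rough way to confirm the IRQ theory (a sketch, assuming intrstat is available) is to compare interrupt load in both directions:

# intrstat 5      (run once during a ~45 MB/s upload, then again during a ~750 KB/s download)

If nge and nv_sata interrupt time piles onto the same CPU only in the download case, where the NIC has to transmit while the SATA controller is reading, that points straight at the shared IRQ 20.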
Hi,

Installed a PCI add-on network card and disabled the NVIDIA onboard network. Everything started to work as it should. So the nge and nv_sata drivers do not work together in b134 when a shared IRQ is used. (A quick way to verify the fix is sketched below.)
--
This message posted from opensolaris.org
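For anyone hitting the same thing, a quick sanity check after such a swap (a sketch; the exact driver name of the add-on card will differ):

# grep "is being shared" /var/adm/messages     (the IRQ20 NOTICE should no longer appear after the reboot)
# echo "::interrupts -d" | mdb -k              (the new NIC and nv_sata should now sit on separate vectors)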