Hello everyone,

This is my first mail on the mailing list and I very much appreciate this
option of getting some help.

I have a question regarding ZFS on FreeBSD (I'm building a home server).
This afternoon I did a "zpool create data mirror ad4 ad6". Now I'm copying
things from my UFS2 disk into the 2TB zpool, and it is very slow. I'm on
FreeBSD 8.1 amd64 on an Atom N330 with 2 SATA disks; gstat tells me I'm
going at around 2 MB/s at near 100 %busy, while the UFS2 drives are near 0.
Also, UFS2 to UFS2 was much faster (I estimate about 10 times faster). How
do I tune? The wiki is not helpful for amd64 users, stating that the
defaults should be optimal. I'm using the 8.1-STABLE version, which was
installed this afternoon from a minimal install ISO.

I'm copying from a single UFS2 PATA drive into a SATA zpool mirror. I have
2GB of RAM.

top tells me: 761 MB Active, 790 MB Inactive, and there is hardly any CPU
usage (96-98% idle). vfs.numvnodes is around 12500 now (after several hours
of copying) and still slowly rising.

Hope you can help me.

Freek (from the Netherlands)
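(To put that ~2 MB/s figure in perspective, a quick back-of-the-envelope
sketch; the 500 GB figure is just an assumed amount of data to copy, not
anything stated in the thread.)

```shell
# Rough copy-time estimate at the observed throughput.
rate_mb_s=2       # observed rate from gstat, in MB/s
data_gb=500       # hypothetical amount of data to migrate, in GB

seconds=$(( data_gb * 1024 / rate_mb_s ))
hours=$(( seconds / 3600 ))

echo "Copying ${data_gb} GB at ${rate_mb_s} MB/s takes about ${hours} hours"
```

At 2 MB/s, even half a terabyte takes roughly three days, which is why the
tuning replies below focus on getting the write path unstuck rather than on
incremental gains.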
On Wed, Dec 29, 2010 at 12:56:57AM +0100, Freek van Hemert wrote:
> This afternoon I did a "zpool create data mirror ad4 ad6". Now I'm copying
> things from my ufs2 disk into the 2TB zpool, it is very slow. I'm on freebsd
> 8.1 amd64 on an atom n330 with 2 sata disks, gstat tells me I'm going at
> around 2 mbps at near 100 %busy while the ufs2 drives are near 0. [...]
>
> Top tells me: 761 MB Active, 790 Inactive and there is hardly any cpu usage
> (96-98% idle). vfs.numvnodes: around 12500 now (after several hours of
> copying) and still slowly rising.

For your system:

/boot/loader.conf:

# Increase vm.kmem_size to allow the ZFS ARC to utilise more memory.
vm.kmem_size="2048M"
vfs.zfs.arc_max="1024M"

# Disable ZFS prefetching.
# http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
# Prefetching increases overall speed of ZFS, but when disk flushing/writes
# occur, the system is less responsive (due to extreme disk I/O).
# NOTE: Systems with 8GB of RAM or more have prefetch enabled by default.
vfs.zfs.prefetch_disable="1"

# Decrease the ZFS txg timeout value from 30 seconds (the default) to 5.
# This should increase throughput and decrease the "bursty" stalls that
# happen during immense I/O with ZFS.
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
vfs.zfs.txg.timeout="5"

/etc/sysctl.conf:

# Increase the number of vnodes; we've seen vfs.numvnodes reach 115,000
# at times. The default max is a little over 200,000. Playing it safe...
kern.maxvnodes=250000

# Set the TXG write limit to a lower threshold. This helps "level out"
# the throughput rate (see "zpool iostat"). A value of 256MB works well
# for systems with 4GB of RAM, while 1GB works well for us with 8GB.
# We're using 1GB here because of the ada3 disk, which can really handle
# a massive amount of write speed.
vfs.zfs.txg.write_limit_override=1073741824

Note that the last entry in sysctl.conf may need to be adjusted, or
commented out entirely. It's up to you. You can tweak it in real time
using sysctl; Google for vfs.zfs.txg.write_limit_override to get an idea
of how to adjust it, if you want to.

-- 
| Jeremy Chadwick                                jdc@parodius.com |
| Parodius Networking                     http://www.parodius.com/ |
| UNIX Systems Administrator               Mountain View, CA, USA |
| Making life hard for others since 1977.           PGP 4BD6C0CB |
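(A small sketch of how the two write-limit byte values quoted above are
derived, and how one might apply them at runtime; the sysctl name is the
one from the message, and the 256MB/1GB figures are the ones it suggests
for 4GB and 8GB of RAM respectively.)

```shell
# Convert the suggested TXG write limits from MB/GB into the raw byte
# values that vfs.zfs.txg.write_limit_override expects.
limit_4g_ram=$((256 * 1024 * 1024))    # 256 MB, for ~4 GB RAM systems
limit_8g_ram=$((1024 * 1024 * 1024))   # 1 GB, for ~8 GB RAM systems

echo "4GB-RAM box:  vfs.zfs.txg.write_limit_override=${limit_4g_ram}"
echo "8GB-RAM box:  vfs.zfs.txg.write_limit_override=${limit_8g_ram}"

# On a live FreeBSD system you would read and set it (as root) with:
#   sysctl vfs.zfs.txg.write_limit_override
#   sysctl vfs.zfs.txg.write_limit_override=${limit_4g_ram}
```

Setting it via sysctl first lets you experiment under real load before
committing a value to /etc/sysctl.conf.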
On Tue, Dec 28, 2010 at 5:56 PM, Freek van Hemert <fvhemert@gmail.com> wrote:
> Top tells me: 761 MB Active, 790 Inactive and there is hardly any cpu usage
> (96-98% idle). vfs.numvnodes: around 12500 now (after several hours of
> copying) and still slowly rising.
>
> Hope you can help me.

I believe there were some ZFS bugs/performance issues in 8.1 that were made
more visible by having other file system types mounted simultaneously.
Stuff like this:

http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/146410

This has been MFC'd, so I'm not sure why it's still open.

-- 
Adam Vande More
On 28/12/2010 23:56, Freek van Hemert wrote:
> I have a question regarding zfs on freebsd.
> (I'm making a home server)
> This afternoon I did a "zpool create data mirror ad4 ad6" Now I'm copying
> things from my ufs2 disk into the 2TB zpool, it is very slow. I'm on freebsd
> 8.1 amd64 on an atom n330 with 2 sata disks, gstat tells me I'm going at
> around 2 mbps at near 100 %busy while the ufs2 drives are near 0. [...]

Upgrade to one of the 8.2 release candidates or to a recent RELENG_8 /
stable/8 -- there has been serious work done on ZFS since 8.1-RELEASE,
including the import of ZFS v15, and it performs a lot better. Or wait a
few weeks and then upgrade to 8.2-RELEASE.

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
JID: matthew@infracaninophile.co.uk               Kent, CT11 9PW
I can tell you what the problem is right now, actually. ZFS performs very
poorly on low-performance CPUs (i.e. your Atom N330). Try the same system
with a different CPU and you'll get a different result.

It's not a lack of bandwidth on your bus, memory, or disks, and it's not
exactly the checksumming either (although that certainly contributes to the
bottleneck); the real bottleneck is the grouping of transaction groups to
be flushed to the disks.

I had fairly poor performance with a Sempron 2800+ 45W TDP CPU, and
replacing it with an Athlon X2 did wonders.
On 2010-Dec-30 02:31:30 -0500, Adam Stylinski <kungfujesus06@gmail.com> wrote:
>I can tell you what the problem is right now, actually. ZFS performs
>very poorly on low performance CPUs (i.e. your Atom N330).

I would disagree. In this case, the OP's most serious problem is a bug in
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:arc_memory_throttle(),
which is leading to ARC starvation. The direct effect of this is very poor
ZFS I/O performance. It can be identified by very high "inactive" and
possibly "cache" memory (as reported by 'systat -v' or top), as well as a
very high kstat.zfs.misc.arcstats.memory_throttle_count.

This bug was fixed in r210427 on -current, r211599 on 8.x and r211623 on 7.x.

> Try the
>same system with a different CPU and you'll get a different result.

Not until the above bug is fixed. That said, ZFS is far more CPU intensive
than UFS, and a more powerful CPU may help -- especially if you want gzip
compression and/or sha256 checksumming.

-- 
Peter Jeremy
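(A small sketch of how one might check for the ARC throttling Peter
describes: sample the counter twice and see whether it is climbing. The
counter name is from his message; the two sample values here are
hypothetical, standing in for what the commented-out sysctl call would
return on a real system.)

```shell
# On a live FreeBSD box you would capture each sample with:
#   sysctl -n kstat.zfs.misc.arcstats.memory_throttle_count
before=1200    # hypothetical first sample
after=4800     # hypothetical second sample, taken a minute later

if [ "$after" -gt "$before" ]; then
    echo "ARC is being throttled: count rose by $((after - before))"
else
    echo "No ARC throttling observed in this interval"
fi
```

A counter that keeps climbing during the slow copy, together with high
"inactive" memory in top, would point at the arc_memory_throttle() bug
rather than at the CPU.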