I realize that this topic has been fairly well beaten to death on this forum, but I've also read numerous comments from ZFS developers that they'd like to hear about significantly different performance numbers of ZFS vs UFS for NFS-exported filesystems, so here's one more.

The server is an x4500 with 44 drives configured in a RAID10 zpool, and two drives mirrored and formatted with UFS for the boot device. It's running Solaris 10u4, patched with the Recommended Patch Set from late Dec/07. The client (if it matters) is an older V20z with Solaris 10 3/05. No tuning has been done on either box.

The test involved copying lots of small files (2-10k) from an NFS client to a mounted NFS volume. A simple 'cp' was done, both with 1 thread and 4 parallel threads (to different directories), and then I monitored how fast the files were accumulating on the server.

ZFS:
1 thread - 25 files/second; 4 threads - 25 files/second (~6 per thread)

UFS: (same server, just exported /var from the boot volume)
1 thread - 200 files/second; 4 threads - 520 files/second (~130/thread)

For comparison, the same test was done to a NetApp FAS270 that the x4500 was bought to replace:
1 thread - 70 files/second; 4 threads - ~250 files/second

I have been able to work around this performance hole by exporting multiple ZFS filesystems, because the workload is spread across a hashed directory structure. I then get 25 files per second per filesystem. Still, I thought I'd raise it here anyway. If there's something I'm doing wrong, I'd love to hear about it.

I'm also assuming that this ties into BugID 6535160 "Lock contention on zl_lock from zil_commit", so if that's the case, please add another vote for making this fix available as a patch for S10u4 users.

Thanks,
Steve Hillman
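For reference, here is a minimal sketch of the kind of test described above, as a portable sh script. The paths, directory names, and stream count are hypothetical stand-ins, not the poster's exact commands:

    #!/bin/sh
    # Copy a tree of small (2-10k) files over NFS with N parallel cp
    # streams, each writing into its own target directory.
    SRC=/var/tmp/smallfiles     # source tree, local to the NFS client
    DST=/mnt/nfs                # NFS mount of the exported filesystem
    N=4                         # number of parallel copy streams

    i=1
    while [ $i -le $N ]; do
        mkdir -p $DST/dir$i
        cp -r $SRC/* $DST/dir$i &
        i=`expr $i + 1`
    done
    wait

    # Meanwhile, on the server, sample the arrival rate roughly once a
    # second (path is again a placeholder):
    #   while :; do find /pool/fs -type f | wc -l; sleep 1; done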
Steve Hillman wrote:
> [...]
> ZFS:
> 1 thread - 25 files/second; 4 threads - 25 files/second (~6 per thread)
>
> UFS: (same server, just exported /var from the boot volume)
> 1 thread - 200 files/second; 4 threads - 520 files/second (~130/thread)

With this big a difference, I suspect the write cache is enabled on the disks. UFS requires this cache to be disabled or battery-backed; otherwise corruption can occur.

> For comparison, the same test was done to a NetApp FAS270 that the x4500 was bought to replace:
> 1 thread - 70 files/second; 4 threads - ~250 files/second

I don't know enough about that system, but perhaps it has NVRAM or an SSD to service the synchronous demands of NFS. An equivalent setup could be configured with a separate intent log on a similar fast device.

> I'm also assuming that this ties into BugID 6535160 "Lock contention on zl_lock from zil_commit", so if that's the case, please add another vote for making this fix available as a patch for S10u4 users.

I believe this is a different problem than 6535160.
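For anyone who wants to follow up on those two suggestions, a sketch of the relevant commands; the pool and device names are placeholders, and separate intent log devices are not available in stock Solaris 10u4 (they arrived in later ZFS pool versions):

    # Inspect (and, for UFS, disable) the on-disk write cache.
    # format -e is interactive: pick the disk, then cache -> write_cache.
    format -e

    # On bits that support it, attach a fast device as a separate
    # intent log to absorb NFS's synchronous writes:
    zpool add tank log c5t0d0
    zpool status tank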
On 24 January, 2008 - Steve Hillman sent me these 1,9K bytes:

> [...]
> ZFS:
> 1 thread - 25 files/second; 4 threads - 25 files/second (~6 per thread)
>
> UFS: (same server, just exported /var from the boot volume)
> 1 thread - 200 files/second; 4 threads - 520 files/second (~130/thread)

To get similar (lower) consistency guarantees, try disabling the ZIL (google://zil_disable). This should up the speed, but might cause disk corruption if the server crashes while a client is writing data (just like with UFS).

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
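For the record, on Solaris 10 of that vintage the tunable can also be set persistently in /etc/system; this is a sketch, the runtime mdb poke appears later in this thread, and the value is consulted when a filesystem is mounted, so a remount or reboot is needed for it to take effect:

    * In /etc/system (a '*' starts a comment there), then reboot:
    set zfs:zil_disable = 1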
Tomas Ögren wrote:
> To get similar (lower) consistency guarantees, try disabling the ZIL (google://zil_disable). This should up the speed, but might cause disk corruption if the server crashes while a client is writing data (just like with UFS).

Disabling the ZIL does NOT cause disk corruption. It doesn't even cause ZFS to be inconsistent on disk. What it does mean is that you no longer have guaranteed synchronous write semantics - i.e. on a crash, an application might have done a sync write that never made it to stable storage.

BTW, there isn't really any such thing as "disk corruption"; there is "data corruption" :-)

--
Darren J Moffat
Hello Darren,

DJM> BTW, there isn't really any such thing as "disk corruption"; there is
DJM> "data corruption" :-)

Well, if you scratch it hard enough :)

--
Best regards,
Robert Milkowski                  mailto:milek at task.gda.pl
                                  http://milek.blogspot.com
Robert Milkowski wrote:
> DJM> BTW, there isn't really any such thing as "disk corruption"; there is
> DJM> "data corruption" :-)
>
> Well, if you scratch it hard enough :)

http://www.philohome.com/hammerhead/broken-disk.jpg :-)
Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> http://www.philohome.com/hammerhead/broken-disk.jpg :-)

Be careful, things like this can result in "device corruption"!

Jörg
--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
I also tested NFS performance ('zfs set sharenfs=on') against a Linux client. After running

    echo zil_disable/W0t1 | mdb -kw

the small-file NFS workload sped up about 10x.

For more about zil_disable, see Eric Kustarz's blog:
http://blogs.sun.com/erickustarz/entry/zil_disable
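A couple of companion mdb invocations, assuming the same kernel variable as above (/D prints the current 32-bit value in decimal; W0t0 writes zero back):

    # Read back the current value of the tunable:
    echo zil_disable/D | mdb -k

    # Restore the default (ZIL enabled) once testing is done:
    echo zil_disable/W0t0 | mdb -kw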
Tomas Ögren wrote:
| To get similar (lower) consistency guarantees, try disabling the ZIL (google://zil_disable). This should up the speed, but might cause disk corruption if the server crashes while a client is writing data (just like with UFS).

No disk corruption. Only data loss (the last writes can be lost), if I recall correctly. ZFS will be consistent even with the ZIL disabled.

If I'm wrong, please educate :)

--
Jesus Cea Avion                  jcea at argo.es
http://www.argo.es/~jcea/        jabber / xmpp: jcea at jabber.org