hello all, i have two xraids connected via fibre to a poweredge 2950. the two xraids are configured with two raid5 volumes each, giving me a total of four raid5 volumes, which are striped across in zfs. the read and write speeds local to the machine are as expected, but i have noticed some performance hits in read and write speed over nfs and samba.

here is the observation:

each filesystem is shared via nfs as well as samba.
i am able to mount via nfs and samba on a Mac OS 10.5.2 client.
i am able to mount only via nfs on a Mac OS 10.4.11 client. (there seems to be an authentication/encryption issue between the 10.4.11 client and the solaris box in this scenario. i know this is a bug on the client side.)

when writing a file via nfs from the 10.5.2 client, the speeds are 60 ~ 70 MB/sec.
when writing a file via samba from the 10.5.2 client, the speeds are 30 ~ 50 MB/sec.
when writing a file via nfs from the 10.4.11 client, the speeds are 20 ~ 30 MB/sec.
when writing a file via samba from a Windows XP client, the speeds are 30 ~ 40 MB/sec.

i know that there is an implementation difference in nfs and samba between the Mac OS 10.4.11 and 10.5.2 clients, but that still does not explain the Windows scenario.

i was wondering if anyone else was experiencing similar issues and if there is some tuning i can do, or am i just missing something. thanks in advance.

cheers,
abs
Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems
2008-Mar-27 23:58 UTC
[zfs-discuss] nfs and smb performance
Have you turned on the "Ignore cache flush commands" option on the xraids? You should ensure this is on when using ZFS on them.

/dale

On Mar 27, 2008, at 6:16 PM, abs wrote:
> hello all, i have two xraids connected via fibre to a poweredge 2950. [...]
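For reference, the array-side "Ignore cache flush commands" switch has a host-side analogue on Solaris. A sketch, assuming a build that supports the `zfs_nocacheflush` tunable (verify against your release before using; this is a system-wide setting, not from the thread itself):

```shell
# /etc/system on the Solaris host: stop ZFS from issuing SCSI cache-flush
# commands to the array. Only safe when ALL pool storage sits behind
# battery-backed (nonvolatile) write cache, as on the Xserve RAID.
set zfs:zfs_nocacheflush = 1
# a reboot is required for the setting to take effect
```

With the array-side option already enabled, this host-side tunable is redundant; it is mainly useful for arrays that offer no equivalent switch.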
That is the first thing i checked. Prior to that i was getting somewhere around 1 ~ 5 MB/sec. Thank you though.

Dale Ghent <daleg at elemental.org> wrote:
> Have you turned on the "Ignore cache flush commands" option on the xraids? You should ensure this is on when using ZFS on them.
> [...]
Sorry for being vague, but I actually tried it with the cifs in zfs option; I think I will try the samba option now that you mention it. Also, is there a way to actually improve the nfs performance specifically?

cheers,
abs

"Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems" <Peter.Brouwer at Sun.COM> wrote:

> Hello abs
>
> Would you be able to repeat the same tests with the cifs in zfs option instead of using samba? It would be interesting to see how the kernel cifs and samba performance compare.
>
> Peter
>
> [...]
>
> --
> Regards Peter Brouwer,
> Sun Microsystems Linlithgow
> Principal Storage Architect, ABCP DRII Consultant
> Office: +44 (0) 1506 672767
> Mobile: +44 (0) 7720 598226
> Skype : flyingdutchman_,flyingdutchman_l
abs wrote:
> Sorry for being vague, but I actually tried it with the cifs in zfs option; I think I will try the samba option now that you mention it. Also, is there a way to actually improve the nfs performance specifically?

We have some recommendations for improving NFS with ZFS on the ZFS Best Practices site at solarisinternals.com. For write workloads, a separate ZIL log (slog) is a good idea. But judging from the numbers you posted, the client seems to be making more of a difference than the server.
 -- richard

> [...]
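A separate intent log can be attached to an existing pool with `zpool add`. A sketch, assuming a pool named `tank` and a spare low-latency device `c2t0d0` (both names hypothetical), on a build recent enough to support slog devices:

```shell
# attach a dedicated log (slog) device to absorb synchronous writes,
# which is where NFS workloads typically bottleneck
zpool add tank log c2t0d0

# verify: the device should now appear under a "logs" section
zpool status tank
```

The slog only helps sync-write-heavy workloads (NFS COMMITs, databases); it does nothing for the Samba numbers, since CIFS writes are mostly asynchronous.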
On Fri, 28 Mar 2008, abs wrote:
> Sorry for being vague, but I actually tried it with the cifs in zfs option; I think I will try the samba option now that you mention it. Also, is there a way to actually improve the nfs performance specifically?

CIFS uses TCP. NFS uses either TCP or UDP, and usually UDP by default.

In order to improve NFS client performance, it may be useful to increase the 'rsize' and 'wsize' client mount options to 32K. Solaris 10 defaults the buffer size to 32K, but many other clients use 8K. Some clients support a '-a' option to specify the maximum read-ahead, and tuning this value can help considerably for sequential access. Using gigabit ethernet with jumbo frames will improve performance even further. Notice that most of these tunings are for the client side and not for the server.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
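On the client side, the transfer sizes Bob mentions are set as mount options. A sketch, with the server and paths (`server:/export/data`, `/mnt/data`) made up for illustration; option spellings vary by client, so check the local mount_nfs man page:

```shell
# Solaris-style client: request 32K read/write transfer sizes over TCP
mount -o rsize=32768,wsize=32768,proto=tcp server:/export/data /mnt/data

# BSD/Mac OS X-style client: same sizes via mount_nfs
mount_nfs -o rsize=32768,wsize=32768 server:/export/data /Volumes/data
```

On clients that default to 8K over UDP, this alone can roughly double sequential throughput on gigabit links.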
Bob Friesenhahn wrote:
> CIFS uses TCP. NFS uses either TCP or UDP, and usually UDP by default.

For Sun systems, NFSv3 using 32 kByte [rw]size over TCP has been the default configuration for 10+ years. Do you still see clients running NFSv2 over UDP?

Note that an attribute-intensive NFS workload may be sync write bound, which means your attempts to improve network bandwidth efficiency will not be rewarded. OTOH, using an slog on a low-latency, nonvolatile write storage device will be rewarded.
 -- richard

> [...]
>> CIFS uses TCP. NFS uses either TCP or UDP, and usually UDP by default.
>
> For Sun systems, NFSv3 using 32 kByte [rw]size over TCP has been the default configuration for 10+ years. Do you still see clients running NFSv2 over UDP?

Yes, I see that TCP is the default in Solaris 9. Is it also the default in Solaris 8? I do know that tuning mount options made a considerable difference for FreeBSD 5.X and Apple's OS X Tiger. Apple's OS X Leopard does not seem to need tuning like previous versions did. OS X Tiger and earlier actually sent application writes directly to NFS, so performance was very dependent on application write size regardless of client NFS tunings. Unfortunately, not everyone is using Solaris. The Solaris 10 NFS client implementation really screams.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> CIFS uses TCP. NFS uses either TCP or UDP, and usually UDP by default.

NFSv4 does not use UDP - it can't and still be compliant with the protocol specification, because UDP does not provide the functionality that NFSv4 requires of the transport layer.

--
Darren J Moffat
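Which version and transport a given mount actually negotiated can be checked from the client. A sketch assuming a Solaris client and a made-up mount point (`/mnt/data`):

```shell
# show per-mount NFS parameters: protocol version, transport, rsize/wsize
nfsstat -m /mnt/data
# inspect the Flags line for entries such as vers=3,proto=tcp,rsize=32768
```

This is worth doing before tuning, since a client silently falling back to v2 over UDP would explain numbers like the 10.4.11 results.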