Hello all,

I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That gave me the best performance, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I may see some performance loss, so I'd like to know whether there is a "recommendation" for block size on NFS/ZFS, or what you think about it. Should I just test, or is there no need for such tuning with ZFS?

Thanks very much for your time!
Leal.
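For reference, the 8 KB client-side setting described above corresponds to NFSv3 mount options along these lines (the server name and export path are placeholders):

    # Linux client, NFSv3 with 8 KB read/write transfer sizes
    mount -t nfs -o vers=3,rsize=8192,wsize=8192 server:/export/data /mnt/data

    # or persistently in /etc/fstab:
    # server:/export/data  /mnt/data  nfs  vers=3,rsize=8192,wsize=8192  0 0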
msl wrote:
> I'm migrating an NFS server from Linux to Solaris, and all clients (Linux)
> are using read/write block sizes of 8192. That gave me the best performance,
> and it's working pretty well (NFSv3). I want to use all of ZFS's advantages,
> and I know I may see some performance loss, so I'd like to know whether there
> is a "recommendation" for block size on NFS/ZFS, or what you think about it.

That is the network block transfer size. The default is normally 32 kBytes. I don't see any reason to change ZFS's block size to match.

You should follow the best practices as described at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

If you notice a performance issue with metadata updates, be sure to check out
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

 -- richard
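If you nonetheless want to experiment with matching the dataset record size to an 8 KB workload, the knob is the per-dataset recordsize property; the pool/filesystem name below is a placeholder, and the change only affects files created afterwards:

    # set an 8 KB record size on the filesystem backing the NFS export
    zfs set recordsize=8k tank/export/data

    # check the current value
    zfs get recordsize tank/export/data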
If you're running over NFS, the ZFS block size most likely won't have a measurable impact on your performance. Unless you've got multiple gigabit ethernet interfaces, the network will generally be the bottleneck rather than your disks, and NFS does enough caching at both the client and server ends to aggregate updates into large writes.
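A rough back-of-envelope illustrates the bottleneck argument (the disk figure is an assumed ballpark for a single 7200 rpm SATA drive):

    1 Gbit/s Ethernet      ~= 125 MB/s theoretical, ~100-115 MB/s in practice
    one 7200 rpm SATA disk ~= 60-80 MB/s sequential (assumed ballpark)
    => even a small ZFS stripe of a few disks can outrun a single GbE link,
       so the wire, not the pool, is usually what limits NFS throughput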
Hello msl,

Thursday, November 15, 2007, 11:13:41 PM, you wrote:

m> I'm migrating an NFS server from Linux to Solaris, and all clients
m> (Linux) are using read/write block sizes of 8192. That gave me the
m> best performance, and it's working pretty well (NFSv3). I want to
m> use all of ZFS's advantages, and I know I may see some performance
m> loss, so I'd like to know whether there is a "recommendation" for
m> block size on NFS/ZFS, or what you think about it.
m> Should I just test, or is there no need for such tuning with ZFS?
m> Thanks very much for your time!
m> Leal.

IIRC the Linux NFS server will commit all NFS requests once they are in memory, even for metadata ops, while Solaris NFS/ZFS will commit only after they are written to disk - so by default you could see a big performance difference between Linux and Solaris in NFS serving. If you want the same behavior on Solaris, then disable the ZIL.

-- 
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                       http://milek.blogspot.com
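At the time of this thread, the usual way to do that was the zil_disable tunable documented in the Evil Tuning Guide linked above; a sketch, with the caveat that it is system-wide and trades away synchronous-write guarantees (acknowledged writes can be lost on a crash):

    # /etc/system - disable the ZIL globally (takes effect after a reboot)
    set zfs:zil_disable = 1

    # or on a live system with mdb; applies to filesystems
    # mounted after the change
    echo zil_disable/W0t1 | mdb -kw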