Over the weekend I got ZFS up and running under FreeBSD and have had much
the same experience with it that I have had with Solaris - it works great
out of the box and, once configured, it is easy to forget about. So far the
only real difference is that anything you might tune via /etc/system (or
mdb) is done via sysctl.*

The environment it is running in has less memory than I've used ZFS with on
Solaris before, so I went to look at how to tune the ARC, only to discover
that it had already been capped to roughly half the size of the kernel kmem
map (which itself was about half the size of physical RAM), and my immediate
thought was "why can't it be like this on Solaris?"

Given that the topic of tuning the ARC size seems to be of concern for a few
people, what are people's thoughts on choosing a better value as the default
size for the ARC on Solaris?

And a big thanks to Pawel for making this available - it's letting me use
ZFS where I couldn't before!

Darren

* One interesting observation with the current ZFS snapshot: the FreeBSD
kernel is compiled with "lock order reversal" detection (WITNESS) enabled,
and I've observed a few of these, although I don't know if the lock ordering
has been 100% preserved.
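(For reference, a minimal sketch of the two tuning paths being compared.
The values are purely illustrative and the tunable names are assumptions
based on what the FreeBSD port and Solaris typically expose, so check what
your own kernel actually provides:

    # FreeBSD: boot-time tunables in /boot/loader.conf
    vm.kmem_size="1G"            # size of the kernel kmem map
    vm.kmem_size_max="1G"
    vfs.zfs.arc_max="512M"       # cap the ARC at roughly half of kmem

    # Solaris: /etc/system entry (value in bytes), takes effect after reboot
    set zfs:zfs_arc_max = 0x20000000

Running "sysctl vfs.zfs" on the FreeBSD side lists the rest of the knobs the
port exposes.)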
> Over the weekend I got ZFS up and running under FreeBSD and have
> had much the same experience with it that I have had with Solaris - it works
> great out of the box and, once configured, it is easy to forget about.
> So far the only real difference is that anything you might tune via
> /etc/system (or mdb) is done via sysctl.*

I have a server for internal use where we keep files for some months and then
delete them. We usually have some TB, and I really wanted to use FreeBSD since
we also run it as a mail server and trouble-ticket server. But UFS2 does not
scale well, especially when background fsck is not possible after an unplanned
restart. When ZFS came along I was anxious to try it out, so I installed
Solaris on a test server, but managing apps on the Solaris server is a bit of
a pain with my (FreeBSD) background. So last week I replaced the Solaris test
box with FreeBSD and ZFS, and I am impressed with Pawel's great work :-)

> And a big thanks to Pawel for making this available - it's letting me
> use ZFS where I couldn't before!

I, too, thank Pawel.

Are there any tunables that I can look at to improve writes? The server is a
2-way Dell PE 2850 at 2.8 GHz with 4 GB RAM, HTT disabled, and two QLogic 2300
HBAs. I boot off internal SCSI disks and have attached a Nexsan ATAbeast with
10 TB of storage, where /usr/local and /home are located. Writes are usually
approx. 20-30 MB/s (zpool iostat 1) and peak at 40-45 MB/s. I have recompiled
the kernel without WITNESS.

The Nexsan has two RAID controllers, and each controller has five LUNs. All
10 LUNs are in a raidz2 pool. Each LUN itself is four 400 GB disks in hardware
RAID-5. The Nexsan can't do JBOD.

Being able to install apps using ports is a great relief. I can help test on
my server if needed.

regards
Claus
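(For context, a sketch of how a pool like the one described above would be
created and measured. The da0..da9 device names and the pool name "tank" are
assumptions; the real LUNs will appear however the QLogic/CAM stack enumerates
them:

    # single 10-wide raidz2 vdev across the ten Nexsan LUNs
    # (each LUN is already a hardware RAID-5 set)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

    # per-vdev throughput once a second, which is where the
    # 20-45 MB/s figures above come from
    zpool iostat -v tank 1
)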
Hello Darren,

Monday, April 23, 2007, 9:14:35 PM, you wrote:

DRSC> The environment that it is running in has less memory than I've used
DRSC> it on with Solaris before, so I went to look at how to tune the ARC,
DRSC> only to discover that it had already been capped to roughly half the
DRSC> size of the kernel kmem map (which itself was about half the size
DRSC> of physical RAM) and my immediate thoughts were "why can't it
DRSC> be like this on Solaris?"

DRSC> Given that the topic of tuning the ARC size seems to be of concern
DRSC> for a few people, what are people's thoughts on choosing a better
DRSC> value as the default size for the ARC on Solaris?

I don't know. For file servers I actually want almost all of the server's
memory to be consumed by caches.

When you use UFS or any other file system, do you cap its buffer (page)
cache? The experience should be similar with ZFS, and in most environments
I was dealing with, it is.

--
Best regards,
 Robert                          mailto:rmilkowski at task.gda.pl
                                 http://milek.blogspot.com
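(One quick way to see how much memory the ARC is actually consuming before
deciding whether to cap it. The statistic names below are assumptions and may
vary between builds, so adjust to what your kstat output actually shows:

    # Solaris: current ARC size, target, and ceiling, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max

On the FreeBSD port the same counters, where present, surface as sysctls
under kstat.zfs.misc.arcstats.)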
Robert Milkowski wrote:

> Hello Darren,
>
> Monday, April 23, 2007, 9:14:35 PM, you wrote:
>
> DRSC> The environment that it is running in has less memory than I've used
> DRSC> it on with Solaris before, so I went to look at how to tune the ARC,
> DRSC> only to discover that it had already been capped to roughly half the
> DRSC> size of the kernel kmem map (which itself was about half the size
> DRSC> of physical RAM) and my immediate thoughts were "why can't it
> DRSC> be like this on Solaris?"
>
> DRSC> Given that the topic of tuning the ARC size seems to be of concern
> DRSC> for a few people, what are people's thoughts on choosing a better
> DRSC> value as the default size for the ARC on Solaris?
>
> I don't know. For file servers I actually want almost all of the server's
> memory to be consumed by caches.
>
> When you use UFS or any other file system, do you cap its buffer (page)
> cache? The experience should be similar with ZFS, and in most environments
> I was dealing with, it is.

I've never had to, because I've never had the same problem(s) as with ZFS and
its wanting to use "all the memory". Whatever agreement UFS has with the
kernel for buffer caches seems to work quite well for me - especially in
mixed-role environments.

Darren