Hi,

We have been using zfs for a couple of months now and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we have to reboot the machine so there is enough free memory to start the application.

What I would like is:

1) A way to limit the size of the cache (a gig or two would be fine for us)

2) A way to clear the caches -- hopefully, something faster than rebooting the machine.

Is there any way I can do either of these things?

Thanks,
Tom Burns
Hello Thomas,

Tuesday, September 12, 2006, 7:40:25 PM, you wrote:

TB> Hi,
TB> We have been using zfs for a couple of months now, and, overall, really
TB> like it. However, we have run into a major problem -- zfs's memory
TB> requirements crowd out our primary application. Ultimately, we have to
TB> reboot the machine so there is enough free memory to start the application.

Exactly what bad behavior did you notice? In general, if an application needs memory, ZFS should free it -- however, it doesn't always work that well right now.

TB> What I would like is:
TB> 1) A way to limit the size of the cache (a gig or two would be fine
TB> for us)

You can't.

TB> 2) A way to clear the caches -- hopefully, something faster than
TB> rebooting the machine.

Export/import the pool. Alternatively, export the pool and unload the zfs module.

-- 
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
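[A minimal sketch of the export/import sequence Robert suggests, assuming a pool named "tank" (substitute your own pool name) and that no applications hold files open on it:

  # zpool export tank
  # zpool import tank

Exporting unmounts the pool's datasets and takes the pool offline, which releases the memory cached on its behalf; importing brings it back with a cold cache.]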
Thomas Burns wrote:
> Hi,
>
> We have been using zfs for a couple of months now and, overall, really
> like it. However, we have run into a major problem -- zfs's memory
> requirements crowd out our primary application. Ultimately, we have to
> reboot the machine so there is enough free memory to start the application.
>
> What I would like is:
>
> 1) A way to limit the size of the cache (a gig or two would be fine for us)
>
> 2) A way to clear the caches -- hopefully, something faster than rebooting
> the machine.
>
> Is there any way I can do either of these things?
>
> Thanks,
> Tom Burns

Tom,

What version of Solaris are you running? In theory, ZFS should not be hogging your system memory to the point that it crowds out your primary applications... but this is still an area where we are working out the kinks. If you could provide a core dump of the machine when it gets to the point that you can't start your app, it would help us.

As to your questions, I will give you some ways to do these things, but these are not considered best practice:

1) You should be able to limit your cache max size by setting arc.c_max. It's currently initialized to phys-mem-size - 1GB.

2) First try unmounting/remounting your file system to clear the cache. If that doesn't work, try exporting/importing your pool.

-Mark
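[A sketch of the unmount/remount approach Mark mentions in (2), assuming a dataset named "tank/data" -- a hypothetical name, use your own -- that is not currently busy:

  # zfs unmount tank/data
  # zfs mount tank/data

If files are open on the dataset the unmount will fail, in which case exporting/importing the pool is the fallback.]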
On Tue, 12 Sep 2006, Mark Maybee wrote:
> As to your questions, I will give you some ways to do these things,
> but these are not considered best practice:
>
> 1) You should be able to limit your cache max size by setting
> arc.c_max. It's currently initialized to phys-mem-size - 1GB.
>
> 2) First try unmounting/remounting your file system to clear the
> cache. If that doesn't work, try exporting/importing your pool.

Another nasty and risky workaround is to start making copies of a large file in /tmp while watching your available swap space carefully. When you hit the low memory water mark, ZFS will free up a snitload (technical term (TM)) of memory. Then immediately rm all the files you created in /tmp. You don't want to completely exhaust memory or you'll probably lose the system. Remember my first line: "nasty and risky".

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
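[A rough sketch of Al's workaround, with all of his caveats attached. The file names and sizes below are illustrative only, and /tmp is assumed to be the default swap-backed tmpfs. In one terminal, watch memory and swap:

  # vmstat 5

In another, create large files in /tmp until free memory gets low, then remove them immediately:

  # mkfile 1g /tmp/pressure.1
  # mkfile 1g /tmp/pressure.2
  # rm /tmp/pressure.*

Stop well before swap is exhausted; as Al says, running the machine completely out of memory can take the system down.]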
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote:
> What version of Solaris are you running? In theory, ZFS should not
> be hogging your system memory to the point that it crowds out your
> primary applications... but this is still an area where we are working
> out the kinks. If you could provide a core dump of the machine
> when it gets to the point that you can't start your app, it would
> help us.

We are running the jun 06 version of solaris (10/6?). I don't have a core dump now -- but can probably get one in the next week or so. Where should I send it?

Also, where do I set arc.c_max? In /etc/system? Out of curiosity, why isn't limiting arc.c_max considered best practice (I just want to make sure I am not missing something about the effect limiting it will have)? My guess is that in our case (lots of small groups -- 50 people or less -- sharing files over the web) file system caches are not that useful. The small groups mean that no one file gets used that often and, since access is over the web, response time will be largely limited by the users' internet connections.

Thanks a lot for the response!

Tom Burns
> 1) You should be able to limit your cache max size by
> setting arc.c_max. It's currently initialized to
> phys-mem-size - 1GB.

Mark's assertion that this is not a best practice is something of an understatement. ZFS was designed so that users/administrators wouldn't have to configure tunables to achieve optimal system performance. ZFS performance is still a work in progress.

The problem with adjusting arc.c_max is that its definition may change from one release to another. It's an internal kernel variable; its existence isn't guaranteed. There are also no guarantees about the semantics of what a future arc.c_max might mean. It's possible that future implementations may change the definition such that reducing c_max has other unintended consequences.

Unfortunately, at the present time this is probably the only way to limit the cache size. Mark and I are working on strategies to make sure that ZFS is a better citizen when it comes to memory usage and performance. Mark has recently made a number of changes which should help ZFS reduce its memory footprint. However, until these changes and others make it into a production build, we're going to have to live with this inadvisable approach for adjusting the cache size.

-j
Thomas Burns wrote:
> We are running the jun 06 version of solaris (10/6?). I don't have a core
> dump now -- but can probably get one in the next week or so. Where should
> I send it?

You can drop cores via ftp to:

        sunsolve.sun.com
        login as "anonymous" or "ftp"
        deposit into /cores

> Also, where do I set arc.c_max? In /etc/system? Out of curiosity, why isn't
> limiting arc.c_max considered best practice (I just want to make sure I am
> not missing something about the effect limiting it will have)? My guess is
> that in our case (lots of small groups -- 50 people or less -- sharing files
> over the web) file system caches are not that useful. The small groups
> mean that no one file gets used that often and, since access is over the web,
> response time will be largely limited by the users' internet connections.

We don't want users to need to tune a bunch of knobs to get performance out of ZFS. We want it to work well "out of the box". So we are trying to discourage using these tunables, and instead figure out what the root problem is and fix it. There is really no reason why zfs shouldn't be able to adapt itself appropriately to the available memory.

-Mark
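[A sketch of the upload Mark describes, using a standard command-line ftp client; the core file name (vmcore.0) is just an example:

  $ ftp sunsolve.sun.com
  Name: anonymous
  ftp> cd /cores
  ftp> binary
  ftp> put vmcore.0
  ftp> quit

Transfer the core in binary mode, and include a note or file name that identifies your case so the engineers can find it.]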
>> Also, where do I set arc.c_max? In /etc/system? Out of curiosity,
>> why isn't limiting arc.c_max considered best practice (I just want
>> to make sure I am not missing something about the effect limiting
>> it will have)?
>
> We don't want users to need to tune a bunch of knobs to get performance
> out of ZFS. We want it to work well "out of the box". So we are trying
> to discourage using these tunables, and instead figure out what the root
> problem is and fix it. There is really no reason why zfs shouldn't be
> able to adapt itself appropriately to the available memory.

Ah, the ZFS philosophy that I love (not having to tune a bunch of knobs)! Seems like you need a way for the kernel to say "I would like some memory back now". I don't have the slightest idea how practical that is, though...

BTW -- did I guess right wrt where I need to set arc.c_max (/etc/system)?

Thanks,
Tom
On 9/13/06, Thomas Burns <tombu at schoolloop.com> wrote:
> BTW -- did I guess right wrt where I need to set arc.c_max (/etc/system)?

I think you need to use mdb. As Mark and Johansen mentioned, only do this as your last resort.

# mdb -kw
> arc::print -a c_max
d3b0f874 c_max = 0x1d0fe800
> d3b0f874 /W 0x10000000
arc+0x34:       0x1d0fe800      =       0x10000000
> arc::print -a c_max
d3b0f874 c_max = 0x10000000
> $q

-- 
Just me,
Wire ...
I am running Solaris U4 x86_64.

It seems that something has changed regarding mdb:

# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp crypto ptm ]
> arc::print -a c_max
mdb: failed to dereference symbol: unknown symbol name
> ::arc -a
{
    hits = 0x6baba0
    misses = 0x25ceb
    demand_data_hits = 0x2f0bb9
    demand_data_misses = 0x92bc
    demand_metadata_hits = 0x2b50db
    demand_metadata_misses = 0x14c20
    prefetch_data_hits = 0x5bfe
    prefetch_data_misses = 0x1d42
    prefetch_metadata_hits = 0x10f30e
    prefetch_metadata_misses = 0x60cd
    mru_hits = 0x62901
    mru_ghost_hits = 0x9dd5
    mfu_hits = 0x545ea4
    mfu_ghost_hits = 0xb9aa
    deleted = 0xcb5a3
    recycle_miss = 0x131fb
    mutex_miss = 0x1520
    evict_skip = 0x0
    hash_elements = 0x1ea54
    hash_elements_max = 0x40fac
    hash_collisions = 0x138464
    hash_chains = 0x92c7
[..skipped..]

How can I set/view arc.c_max now?
On Fri, 14 Sep 2007, Sergey wrote:
> I am running Solaris U4 x86_64.
>
> It seems that something has changed regarding mdb:
>
> # mdb -k
>> arc::print -a c_max
> mdb: failed to dereference symbol: unknown symbol name
>
> How can I set/view arc.c_max now?

See the comments at the bottom of:

http://bugs.opensolaris.org/view_bug.do?bug_id=6510807

Best regards,
FrankH.
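[A hedged aside: on builds where "arc::print -a c_max" no longer resolves (as in Sergey's output), the ARC also exposes its limits through the arcstats kstat, so one way to view the current ceiling is kstat; the value shown here is just an example:

  # kstat -p zfs:0:arcstats:c_max
  zfs:0:arcstats:c_max    2147483648]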
Please see the following link:

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache

Hth,
Victor

Sergey wrote:
> I am running Solaris U4 x86_64.
>
> It seems that something has changed regarding mdb:
>
> # mdb -k
>> arc::print -a c_max
> mdb: failed to dereference symbol: unknown symbol name
>
> How can I set/view arc.c_max now?
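[For releases that support the zfs_arc_max tunable, the approach described in that section of the Evil Tuning Guide is to cap the ARC via /etc/system rather than patching arc.c_max with mdb. A sketch, assuming you want roughly a 2 GB cap (pick a value that suits your workload); it takes effect at the next reboot:

  * Limit the ZFS ARC to 2 GB (value in bytes)
  set zfs:zfs_arc_max = 0x80000000]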