Hi Anthony -

I don't get this. How does the presence (or absence) of the ARC change the methodology for doing memory capacity planning?

Memory capacity planning is all about identifying and measuring consumers. Memory consumers:
- The kernel.
- User processes.
- The ZFS ARC, which is technically part of the kernel, but should be tracked and measured separately.

prstat, pmap, kstat -n arcstats and "echo ::memstat | mdb -k" will tell you pretty much everything you need to know about how much memory is being consumed by those consumers.
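As a concrete illustration of those commands (a sketch, not Jim's exact procedure, assuming a stock Solaris 10 / OpenSolaris box; ::memstat reads the live kernel, so it needs root):

    # ZFS ARC size in bytes; kstat -p prints parseable "name<TAB>value" pairs
    kstat -p zfs:0:arcstats:size

    # System-wide page breakdown by consumer (kernel, anon, exec/libs,
    # page cache, free; newer builds break out ZFS file data separately)
    echo ::memstat | mdb -k

    # Top user processes sorted by resident set size, one sample
    prstat -s rss -n 10 1 1

    # Detailed per-mapping breakdown (RSS, anon, shared) for one suspect process
    pmap -x <pid>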
Now granted, it is harder than I'm making it sound here. User processes can be tricky if they share memory, but typically that's not a big deal unless it's a database load or an application that explicitly uses shared memory as a form of IPC.

The ARC is also tricky, because it's really hard to determine the active working set for file system data: you want the ARC big enough to deliver acceptable performance, but not so big as to potentially cause short-term memory shortfalls. It requires some observation and periodic collection of statistics, the most important statistic being the level of performance of the customer's workload.

As an aside, there's nothing about this that requires it be posted to zfs-discuss-confidential. I posted to zfs-discuss at opensolaris.org.

Thanks,
/jim


Anthony Benenati wrote:
> Jim,
>
> The issue with using scan rate alone is that if you are looking for why you
> have significant performance degradation and the scan rate is high, it's a
> good indicator that you may have a memory issue. However, it doesn't help
> if you want to preemptively prevent future system degradation, since it's
> not predictive. There are no thresholds that can be correlated to memory
> size for capacity planning.
>
> I should be clearer about my question. How does a client determine when and
> how much memory they will need in the future if they can't track memory
> utilization without including the ARC? Most customers monitor their memory
> utilization and take action when they see it reach a policy-determined
> threshold, to prevent potential future performance degradation or for
> capacity planning.
>
> From what I've read searching the aliases, there doesn't seem to be a good
> way to determine how much memory is being used by the system without
> including the ARC. If that's the case, it seems to me we either need to
> give them that capability or offer an alternative for capacity planning.
>
> Tony
>
> On Dec 23, 2009, at 12:45 PM, Jim Laurent wrote:
>
>> I believe that the SR (scan rate) column in vmstat is still the best
>> indicator of when application space is limited. The scan rate indicates
>> that the page scanner is running and looking for application pages to
>> move out of physical RAM onto swap.
>>
>> More information about ZFS memory usage is available at:
>>
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Dynamic_Reconfiguration_Recommendations
>>
>> On Dec 22, 2009, at 6:02 PM, Anthony.Benenati at Sun.COM wrote:
>>
>>> My customer asks the following:
>>>
>>> "Since we started using ZFS to manage our systems' root file systems,
>>> we've noticed that the memory utilization on those systems is near 100%.
>>> We understand that this is ZFS's ARC cache taking advantage of the
>>> unused system memory; however, it doesn't look very good on our system
>>> monitors & graphs. Is there any way to report the memory utilization of
>>> these systems without taking into account ZFS's ARC cache memory
>>> utilization?"
>>>
>>> While I have a couple of imperfect ideas for answering his question as
>>> stated, the reason behind these requests is to determine when the system
>>> is reaching its maximum memory utilization, at which point they may need
>>> to add more memory, or to help resolve a performance issue which may or
>>> may not be caused by a memory deficiency. Since with ZFS memory
>>> utilization is no longer a good indicator of memory deficiency, what
>>> should you be looking at to determine if you have a memory deficiency?
>>> Is it similar to UFS, such as a high scan rate, excessive paging, etc.?
>>> If so, how do you determine thresholds?
>>>
>>> Any documents, comments or opinions would be welcome.
>>>
>>> Thanks,
>>> Tony
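A minimal sketch of the kind of periodic collection Jim describes, suitable for cron; the counters are the standard arcstats kstats, but the log path and record layout here are made up for illustration:

    #!/bin/sh
    # Append a timestamped sample of ARC size and hit/miss counters
    # (cumulative since boot) so growth and hit rate can be trended.
    LOG=${LOG:-/var/tmp/arc_trend.log}
    size=`kstat -p zfs:0:arcstats:size   | awk '{print $2}'`
    hits=`kstat -p zfs:0:arcstats:hits   | awk '{print $2}'`
    miss=`kstat -p zfs:0:arcstats:misses | awk '{print $2}'`
    echo "`date '+%Y-%m-%dT%H:%M:%S'` size=$size hits=$hits misses=$miss" >> $LOG

Pairs of samples give interval hit rates and ARC growth over time; as Jim notes, the workload's own performance numbers remain the statistic that matters most.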
Think he's looking for a single, intuitively obvious, easy-to-access indicator of memory usage along the lines of the vmstat free column (before ZFS) that shows the current amount of free RAM.

On Dec 23, 2009, at 4:09 PM, Jim Mauro wrote:

> Hi Anthony -
>
> I don't get this. How does the presence (or absence) of the ARC change
> the methodology for doing memory capacity planning?
> [...]
Jim Laurent
Architect

Phone x24859/+1 703 204 4859
Mobile 703-624-7000
Fax 703-208-5858
Email Jim.Laurent at Sun.COM

Sun Microsystems, Inc.
7900 Westpark Dr, A110
McLean, VA 22102 US
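If a single vmstat-free-like number is the goal, one hedged approximation is to count the ARC, which shrinks under memory pressure, as available memory. This is a sketch under that assumption, not an official metric, built from the standard system_pages and arcstats kstats:

    #!/bin/sh
    # Approximate "available" memory = free pages + reclaimable ARC.
    pgsz=`pagesize`
    free_pg=`kstat -p unix:0:system_pages:freemem | awk '{print $2}'`
    arc_b=`kstat -p zfs:0:arcstats:size | awk '{print $2}'`
    echo "$free_pg $pgsz $arc_b" | awk '{
        free_b = $1 * $2
        printf("free %.0f MB + arc %.0f MB = approx available %.0f MB\n",
               free_b / 1048576, $3 / 1048576, (free_b + $3) / 1048576)
    }'

One caveat: not all of the ARC is instantly reclaimable (it will not shrink below its configured floor, reported as c_min in arcstats), so treat the result as an upper bound.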
Hi Jim,

I think Tony was asking a very valid question. It reminds me of http://developers.sun.com/solaris/articles/sol8memory.html#where.

Regards,
Tiger

Jim Mauro wrote:

> Memory capacity planning is all about identifying and measuring
> consumers.
> [...]
--
Yanjun (Tiger) Hu - Sun Professional Services Canada
Cell: 416-892-0999
On Wed, 23 Dec 2009, Yanjun (Tiger) Hu wrote:

> Hi Jim,
>
> I think Tony was asking a very valid question. It reminds me of
> http://developers.sun.com/solaris/articles/sol8memory.html#where.

The question is valid, but the answer will be misleading. Regardless of whether a memory page represents part of a memory-mapped file, traditional filesystem cache, an application heap/stack, or the ZFS ARC, it is still caching data that the application has used and may use again. The ZFS ARC is smarter, so it is better at discarding the data least likely to be used when there is memory pressure. Even so, an expensive disk access is still required to restore that data if the application accesses it again.

I find the 'arc_summary.pl' script available from http://cuddletech.com/arc_summary/ to be quite a useful tool for seeing a memory breakdown, including ARC sizes.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
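For readers without the script handy, something similar in spirit to part of what arc_summary.pl reports can be computed straight from the arcstats counters; note these are cumulative since boot, so on long-running systems the ratio smooths over recent behavior:

    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses | awk '
        /:hits/   { h = $2 }
        /:misses/ { m = $2 }
        END { if (h + m > 0)
                  printf("ARC hit rate: %.1f%% (%.0f hits, %.0f misses)\n",
                         100 * h / (h + m), h, m) }'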