Greeting all,

I know this topic has been beaten to death; however, something is really
confusing me.

I set max_arc size = 512 MB, then ran my benchmark workload, which loads a
file of a specific size into the ARC.

If the file is smaller than 200 MB, it gets loaded entirely into the ARC
and workload throughput is booming. If the file is 250 MB (still less than
the ARC size), it does not get loaded entirely into the free ARC, and
throughput degrades.

To make a long story short: is there a way to know the REAL ARC size that
is available to the application? My understanding is that the ARC size is
partly consumed by ARC data structures and other caching lists. But what
actually remains for the application to use?

Any feedback?

--
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
On Mar 1, 2010, at 12:21 AM, Abdullah Al-Dahlawi wrote:
> Greeting all,
>
> I know this topic has been beaten to death; however, something is
> really confusing me.
> [...]
> Any feedback?

maxphys is used by UFS, but not by ZFS. maxphys is also used by the sd
driver, but not on x86 architectures, where the equivalent limit is set
to 256 KB. With ZFS on Solaris, you can blissfully forget about maxphys.
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)
On Mar 1, 2010, at 12:21 AM, Abdullah Al-Dahlawi wrote:
> I set max_arc size = 512 MB

ok

> if the file is 250 MB (still less than the ARC size), it does not get
> loaded entirely into the free ARC, and throughput degrades.

arc_max is an upper limit on the target size of the ARC. The actual size
of the ARC can be limited by other, dynamic factors. The current target
size is "c" and the actual size is "size", as shown by the kstats
(kstat -n arcstats).

How much physical RAM is in the machine?

> is there a way to know the REAL ARC size that is available to the
> application? My understanding is that the ARC size is partly consumed
> by ARC data structures and other caching lists. But what actually
> remains for the application to use?

There are a lot of moving parts here. If you want to fully understand
the details, take a look at the source:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c

[richard wonders why arc_meta_used is not a kstat...]
 -- richard
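As a quick sketch of comparing those two kstats, something like the awk
one-liner below works on "name value" pairs. The here-doc numbers are
made-up sample values, not real measurements; on a live Solaris box you
would feed it real output instead, e.g. `kstat -p -n arcstats | sed 's/.*://'`.

```shell
# Report the ARC target size ("c") and actual size ("size") in MB.
# The here-doc is a made-up sample; on Solaris, replace it with live
# data:  kstat -p -n arcstats | sed 's/.*://'
awk '{ v[$1] = $2 }
END {
    printf "target (c):  %d MB\n", v["c"]    / 1048576
    printf "actual size: %d MB\n", v["size"] / 1048576
}' <<'EOF'
c 524288000
size 209715200
EOF
```

With the sample numbers above it reports a 500 MB target but only 200 MB
actually cached, which is exactly the gap worth watching in your test.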
Hi Richard,

I have 4 GB of RAM. However, I deliberately set arc_max = 500 MB and then
started reading the file into the ARC. My assumption was that the ARC
would cache a file of a size close to 500 MB, but as I said, I have
noticed that a 200 MB file is cached completely, while larger files start
generating physical I/O.

When I looked at "kstat -m zfs", I noticed that c = 500 MB. Who is using
my remaining 300 MB?

Can I predict (or, hopefully, calculate) how much ARC is available in a
given system before running my application?

On Mon, Mar 1, 2010 at 1:56 PM, Richard Elling <richard.elling at gmail.com> wrote:
> arc_max is an upper limit on the target size of the ARC. The actual
> size of the ARC can be limited by other, dynamic factors. The current
> target size is "c" and the actual size is "size", as shown by the
> kstats (kstat -n arcstats).
>
> How much physical RAM is in the machine?
> [...]
On Mar 1, 2010, at 1:17 PM, Abdullah Al-Dahlawi wrote:
> When I looked at "kstat -m zfs", I noticed that c = 500 MB.

The ARC is divided into an MRU and an MFU cache. The target size of the
MRU is "p", so if you only touch the data once, it will only be in the
MRU. "p" changes dynamically, based on the load, but is usually somewhere
around c/2. An entry is placed on the MFU side if the hit occurs within
62 milliseconds.

> Who is using my remaining 300 MB?
>
> Can I predict (or, hopefully, calculate) how much ARC is available in
> a given system before running my application?

Applications which manage their own caches don't really care (firefox,
databases, etc.)
 -- richard
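To put rough numbers on the p ~= c/2 point: for single-touch data this is
only back-of-envelope arithmetic, not an exact formula, since p moves
dynamically and ARC metadata also competes for the space.

```shell
# Rough MRU budget for single-touch data when arc_max = 512 MB and p
# sits near its typical value of c/2. Approximation only: p changes
# dynamically and ARC metadata also consumes part of the cache.
c=$((512 * 1024 * 1024))   # arc_max in bytes
p=$((c / 2))               # typical MRU target
echo "approx MRU budget: $((p / 1024 / 1024)) MB"
```

That lands near 256 MB and, after metadata overhead, is consistent with
what you saw: a 200 MB file fits entirely, while a 250 MB file spills to
physical I/O.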