I'm sorry to be asking such a basic question that would seem to be easily found on Google, but after 30 minutes of "googling" and looking through this list's archives, I haven't found a definitive answer.

Is the L2ARC caching scheme based on files or blocks?

The reason I ask: we have several databases that are stored in single large files of 500GB or more. So, is L2ARC doing us any good if the entire file can't be cached at once? We're looking at buying some additional SSDs for L2ARC (as well as additional RAM to support the increased L2ARC size), and I'm wondering if we NEED to plan for them to be large enough to hold the entire file, or if ZFS can cache the most heavily used parts of a single file.

After watching arcstat (Mike Harsch's updated version) and arc_summary, I'm still not sure what to make of it. It's rare that the L2ARC (14GB) hits double digits in %hit, whereas the ARC (3GB) is frequently >80% hit.

TIA
matt
mattbanks at gmail.com said:
> We're looking at buying some additional SSDs for L2ARC (as well as
> additional RAM to support the increased L2ARC size) and I'm wondering if we
> NEED to plan for them to be large enough to hold the entire file or if ZFS
> can cache the most heavily used parts of a single file.
>
> After watching arcstat (Mike Harsch's updated version) and arc_summary, I'm
> still not sure what to make of it. It's rare that the L2ARC (14GB) hits
> double digits in %hit whereas the ARC (3GB) is frequently >80% hit.

I'm not sure of the answer to your initial question (file-based vs. block-based), but I may have an explanation for the stats you're seeing.

We have a system here with 96GB of RAM and also the Sun F20 flash accelerator card (96GB), most of which is used for L2ARC. Note that data is not written into the L2ARC until it is evicted from the ARC (e.g. when something newer or more frequently used needs ARC space).

So, my interpretation of the high hit rates on the in-RAM ARC, and low hit rates on the L2ARC, is that the working set of data fits mostly in RAM, and the system seldom needs to go to the L2ARC for more.

Regards,
Marion
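[Editor's illustration, not part of the original thread.] The interpretation above can be made concrete with a little arithmetic on arcstats-style counters. The field names below follow the ones kstat's zfs:0:arcstats exposes (hits, misses, l2_hits, l2_misses), but the counter values are invented to resemble the ratios described in the thread:

```python
# Sketch: interpreting ARC vs. L2ARC hit ratios from arcstats-style
# counters. Field names follow kstat zfs:0:arcstats; the numbers are
# invented for illustration, not real measurements.

def hit_ratio(hits, misses):
    """Fraction of lookups satisfied by this cache tier."""
    total = hits + misses
    return hits / total if total else 0.0

# Hypothetical counters for a system whose working set fits in RAM:
arcstats = {
    "hits":      9_000_000,   # reads satisfied from the in-RAM ARC
    "misses":    1_000_000,   # reads that fell through toward L2ARC/disk
    "l2_hits":      50_000,   # ARC misses satisfied from the L2ARC
    "l2_misses":   950_000,   # ARC misses that went all the way to disk
}

arc_ratio = hit_ratio(arcstats["hits"], arcstats["misses"])
l2_ratio = hit_ratio(arcstats["l2_hits"], arcstats["l2_misses"])

print(f"ARC hit ratio:   {arc_ratio:.0%}")   # high: working set in RAM
print(f"L2ARC hit ratio: {l2_ratio:.0%}")    # low: L2ARC rarely consulted
```

A high ARC ratio alongside a low L2ARC ratio is consistent with Marion's reading: most reads never leave RAM, so the L2ARC sees only the leftovers.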
On Fri, Jan 13, 2012 at 4:49 PM, Matt Banks <mattbanks at gmail.com> wrote:
> I'm sorry to be asking such a basic question that would seem to be easily
> found on Google, but after 30 minutes of "googling" and looking through
> this list's archives, I haven't found a definitive answer.
>
> Is the L2ARC caching scheme based on files or blocks?

Blocks.

> The reason I ask: We have several databases that are stored in single
> large files of 500GB or more.
>
> So, is L2ARC doing us any good if the entire file can't be cached at once?

It will, if your working set is not larger than the L2ARC.

--matt
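[Editor's illustration, not part of the original thread.] Because the L2ARC caches blocks rather than whole files, a cache much smaller than a 500GB database file can still absorb most reads when access is skewed toward a hot region. The following toy LRU simulation shows that effect; the 128K block size, the 14GB cache, and the 90%-of-reads-hit-1%-of-blocks access pattern are all invented assumptions, not measurements of ZFS behavior:

```python
# Toy simulation of block-granular caching: an L2ARC-like cache far
# smaller than the file can still serve a skewed workload well.
# All sizes and the access pattern are invented for illustration.
from collections import OrderedDict
import random

BLOCK = 128 * 1024                    # assume 128K records
FILE_BLOCKS = (500 * 2**30) // BLOCK  # blocks in a 500GB file
CACHE_BLOCKS = (14 * 2**30) // BLOCK  # blocks a 14GB cache can hold

cache = OrderedDict()                 # block id -> True, in LRU order
hits = misses = 0

random.seed(0)
for _ in range(200_000):
    # Skewed workload: 90% of reads touch 1% of the file's blocks.
    if random.random() < 0.9:
        blk = random.randrange(FILE_BLOCKS // 100)  # hot region
    else:
        blk = random.randrange(FILE_BLOCKS)         # anywhere
    if blk in cache:
        hits += 1
        cache.move_to_end(blk)        # refresh LRU position
    else:
        misses += 1
        cache[blk] = True
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False) # evict least recently used block

print(f"cache covers {CACHE_BLOCKS / FILE_BLOCKS:.1%} of the file, "
      f"hit ratio {hits / (hits + misses):.0%}")
```

Even though the cache covers only a few percent of the file, the hot blocks fit, so the hit ratio is high; this is the "working set smaller than L2ARC" condition Matt describes. (Real ZFS adds wrinkles the toy ignores, e.g. the L2ARC is fed only by ARC evictions.)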