Michael McGlothlin wrote:
> I've been asked to cache some high-traffic files on one of our servers.
> Is there an easy way to get ext3/ext4 filesystems to cache several GB
> of files in memory at once? I'd like writes to happen normally but
> reads to happen from RAM. (We have plenty of RAM, so that isn't an issue.)
> If that isn't possible, I can cache the files myself. Does the
> filesystem keep a cache in memory of file attributes such as
> modification time? That is, if I check for a change, will the disk
> actually have to physically move to check the mod time?
I would first investigate whether your web server has a specific way to
do this. Failing that, I strongly recommend just letting the kernel's page
cache do its job: if the files really are frequently accessed, they will
stay in cache anyway, as long as sufficient RAM is available. I would only
suggest going further if you have specific latency requirements.
If you do, I'd recommend simply using a separate program to map the files
and then lock the pages in memory with mlock(). The 'memlockd' program can
do this. I'm not sure how well it handles file changes, but it shouldn't
be difficult to modify it to restart when a file changes.
The other possibility is to put the files on a ramdisk (e.g. a tmpfs
mount) and serve them from there. You can use a scheduled script to
update them from the on-disk copies if needed.
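Such an update script could look roughly like this sketch, assuming a
flat directory of files. The directory names are hypothetical; on most
Linux systems /dev/shm is an already-mounted tmpfs you could point
'ram_dir' at.

```python
import os
import shutil


def sync_to_ramdisk(src_dir, ram_dir):
    """Copy files from src_dir into ram_dir when missing or stale.

    'ram_dir' would be a tmpfs path such as /dev/shm/webcache; both
    directory names here are placeholders, not anything standard.
    """
    os.makedirs(ram_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        dst = os.path.join(ram_dir, name)
        if not os.path.isfile(src):
            continue  # sketch only handles a flat directory, no recursion
        # copy2() preserves mtime, so an unchanged file is skipped on the
        # next run; only newer on-disk copies are copied into the ramdisk.
        if (not os.path.exists(dst)
                or os.stat(src).st_mtime_ns > os.stat(dst).st_mtime_ns):
            shutil.copy2(src, dst)
```

Run from cron (or a systemd timer) this keeps the ramdisk copies close
to the on-disk originals while writes continue to land on disk.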
Linux has good stat caching: inode data, including modification times, is
kept in memory, so checking the modification time will only require a
physical disk access if that information has been pushed out of the cache.
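If you do end up caching the files yourself, the stat caching above makes
an mtime check a cheap way to validate entries. A minimal sketch, with
'cached_read' and '_cache' as invented names:

```python
import os

_cache = {}  # path -> (st_mtime_ns, file contents)


def cached_read(path):
    """Return file contents, re-reading only when the mtime changes.

    The os.stat() call is normally answered from the kernel's inode
    cache, so validating an entry rarely touches the disk.
    """
    st = os.stat(path)
    entry = _cache.get(path)
    if entry is not None and entry[0] == st.st_mtime_ns:
        return entry[1]  # cache hit: mtime unchanged
    with open(path, "rb") as f:
        content = f.read()
    _cache[path] = (st.st_mtime_ns, content)
    return content
```

Note the sketch trusts mtime alone; a writer that changes a file without
bumping its mtime (or within the filesystem's timestamp granularity)
would go unnoticed.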
DS