Hello!
I am looking for a little clarification on the IO-Cache performance
translator. More specifically, how it behaves with large files.
Here is what I intended to architect: large SATA arrays (24TB) paired with somewhat large (60-360GB) SSDs for caching. I wanted to use the SSDs both as an XFS journal and as a GlusterFS cache. The problem is that the files I'm serving are qcow2 (virtual disk) files anywhere from 2GB to 400GB, and I'm wondering how to get GlusterFS to utilize the SSDs to speed up performance.
From looking at the io-cache.c source, it's not clear whether the entire file would be cached, or only byte ranges of heavily accessed portions of those files, or what. Also, you can't specify a storage device such as /dev/sdb1 as a cache target, so I have to assume it reserves some memory and keeps the cached data there.
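For reference, this is the sort of io-cache stanza I've been experimenting with in the volfile (the subvolume name "posix-brick" is just a placeholder, and the option names are my reading of the source, so treat this as a sketch rather than verified syntax):

```
# Sketch of an io-cache stanza; cache-size appears to cap the memory
# reserved for cached pages, cache-timeout the revalidation interval.
volume io-cache
  type performance/io-cache
  option cache-size 64MB     # total memory for the cache
  option cache-timeout 1     # seconds before cached data is revalidated
  subvolumes posix-brick     # placeholder for the underlying subvolume
end-volume
```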
This brings me to three questions:
1) In order to use the SSD device, would I create a really large swap partition on the SSD and then allocate the size of the SSD to io-cache? Because I'm not that familiar with memory caching, I'm wondering whether the calls it makes to place data into memory would stop at the point the system begins to swap, no matter what number I provide. In this case, because of the SSD's speed, I don't want it to stop when the system begins to swap; swapping becomes a good thing.
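Concretely, what I had in mind for (1) is something like the following (/dev/sdb1 is just a placeholder for the SSD partition, and mkswap is destructive, so this is a sketch only):

```
# Dedicate an SSD partition to swap so memory pressure spills onto
# the fast device rather than stalling. /dev/sdb1 is a placeholder.
mkswap /dev/sdb1
swapon -p 32767 /dev/sdb1   # high priority so the SSD is preferred over other swap
swapon -s                   # verify the swap area is active
```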
2) Would io-cache attempt to cache the entire 2GB-400GB file, skipping files larger than my io-cache allocation size (let's say 60GB), or would it cache just the byte ranges of the most heavily accessed parts of those files? For instance, if someone running a database server within one of the qcow2 files is hitting the "foobar" table with selects really hard, would io-cache cache only the data being referenced by the select statements (blocks X, Y, Z in file a.qcow2), or try to cache the entire qcow2 file?
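To illustrate what I mean by byte-range caching: a page-granular cache keyed by (file, page number), so a hot range in a huge file costs only a few pages. This is a toy sketch in Python, not GlusterFS code, and the 128KB page size is my assumption, not a confirmed io-cache default:

```python
from collections import OrderedDict

PAGE_SIZE = 128 * 1024  # assumed page granularity, not a verified default

class PageCache:
    """Toy LRU cache of fixed-size pages keyed by (file_id, page_no)."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()

    def read(self, file_id, offset, backend_read):
        """Return the page covering `offset`, fetching from backend on a miss."""
        key = (file_id, offset // PAGE_SIZE)
        if key in self.pages:
            self.pages.move_to_end(key)      # mark page as recently used
            return self.pages[key]
        data = backend_read(key[1] * PAGE_SIZE, PAGE_SIZE)
        self.pages[key] = data
        if len(self.pages) > self.capacity:  # evict least-recently-used page
            self.pages.popitem(last=False)
        return data

# A hot byte range in a huge qcow2 only occupies a few pages of cache:
cache = PageCache(capacity_pages=4)
backend = lambda off, size: b"x" * size      # stand-in for the real disk read
cache.read("a.qcow2", 0, backend)
cache.read("a.qcow2", 5, backend)            # same page as offset 0: a cache hit
print(len(cache.pages))                      # → 1
```

The point being that the cache's working set is the hot pages, not the whole 400GB file.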
3) io-cache only helps with speeding up reads, right? In my mind, the world is very bursty. It would be incredibly cool if we had the ability to write to the SSD drive during bursts, return success to the client quickly, and then stream those writes at a steady speed to the SATA array. I realize this is more complicated in practice than it sounds, because incoming reads expect coherent data even while the data is still being slowly written to SATA, but I'm just curious whether this is out there.
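On (3), I gather the write-behind performance translator does at least part of this (buffering writes and acknowledging the client before they reach disk), though in memory rather than on an SSD. Something like the following stanza, with option names from memory, so again a sketch:

```
# Sketch of a write-behind stanza; buffers writes and acks the client
# before they hit the backing store.
volume write-behind
  type performance/write-behind
  option cache-size 4MB     # per-file write buffer, as I understand it
  option flush-behind on    # ack close() before flushing buffered writes
  subvolumes io-cache       # placeholder subvolume name
end-volume
```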
Thanks!