Hello,
I am doing some tests using ZFS for the data files of a database
system, and ran into the memory problems that were discussed in a
thread here a few weeks ago.
When creating a new database, the data files are first initialized to
their configured size (written in full), then the servers are started.
The servers then need to allocate shared memory for the database
cache. I am running two database nodes per host, each trying to use
512 MB of memory.
They use so-called "Intimate Shared Memory" (ISM), which requires the
requested amount to be available in physical memory. Since ZFS has
just gobbled up memory for its cache, that memory is not available and
the database won't start.
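To make this concrete, here is a minimal C sketch of the kind of ISM
request the database makes internally (the 512 MB size and the use of
IPC_PRIVATE are just illustration on my part, not what the database
actually does). The point is that the shmat() with SHM_SHARE_MMU is
what fails when the pages cannot be backed by physical memory:
---
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

int
main(void)
{
        size_t size = 512UL * 1024 * 1024;      /* 512 MB, as in my setup */

        /* Create a segment; IPC_PRIVATE is used here only for the sketch. */
        int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
        if (id == -1) {
                perror("shmget");
                return (1);
        }

        /*
         * SHM_SHARE_MMU requests Intimate Shared Memory: the pages are
         * locked in physical memory and the address translations are
         * shared between processes.  This is the step that fails on the
         * 2 GB hosts once ZFS holds most of the memory.
         */
        void *addr = shmat(id, NULL, SHM_SHARE_MMU);
        if (addr == (void *)-1) {
                perror("shmat(SHM_SHARE_MMU)");
                (void) shmctl(id, IPC_RMID, NULL);
                return (1);
        }

        (void) printf("attached %lu bytes of ISM at %p\n",
            (unsigned long)size, addr);
        (void) shmdt(addr);
        (void) shmctl(id, IPC_RMID, NULL);
        return (0);
}
---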
This was on a host with 2 GB of memory.
I gave up and switched to other hosts with 8 GB of memory each. They
are running Solaris 10 U3 (SPARC). Based on what was said in the
previous thread (and on the source code!), I assumed that ZFS would
use up to 7 GB for caching, which would be exhausted once the database
files being written were large enough.
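My reading of arc_init() (and this is just my paraphrase, not the
actual code, so take the details with a grain of salt) is that c_max
defaults to the larger of 3/4 of physical memory and physical memory
minus 1 GB. A small sketch of that rule:
---
#include <stdio.h>
#include <stdint.h>

/*
 * My paraphrase of the default c_max sizing in arc_init():
 * c_max = MAX(3/4 of physmem, physmem - 1 GB).
 * The real code works in pages and has more corner cases.
 */
static uint64_t
c_max_guess(uint64_t physmem)
{
        uint64_t three_quarters = physmem / 4 * 3;
        uint64_t all_but_1g = (physmem > (1ULL << 30)) ?
            physmem - (1ULL << 30) : three_quarters;

        return (three_quarters > all_but_1g ? three_quarters : all_but_1g);
}

int
main(void)
{
        uint64_t sizes[] = { 2ULL << 30, 4ULL << 30, 8ULL << 30 };

        for (int i = 0; i < 3; i++) {
                (void) printf("physmem %llu GB -> c_max about %.1f GB\n",
                    (unsigned long long)(sizes[i] >> 30),
                    (double)c_max_guess(sizes[i]) / (1ULL << 30));
        }
        return (0);
}
---
That gives roughly 1.5 GB on the 2 GB hosts, 3 GB on a 4 GB host and
7 GB on the 8 GB hosts, which is where my 7 GB assumption came from.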
But that is not what happens. I am now running with database files of
5 GB and a database memory cache of 1 GB, plus some smaller files and
shared memory segments, per node (and two nodes per host), and it
works fine. Even if I increase the file size to 10 GB each, and also
increase the memory cache, the system seems to stabilize at around
3/4 GB of free memory (according to vmstat).
Now I can see with mdb that the value of arc.c (which is the amount
ZFS will use for cache) is actually only about half of arc.c_max:
---
> arc::print -a "struct arc" c_max
70400370 c_max = 0x1bca74000
> arc::print -a "struct arc" c
70400360 c = 0xe86b6be1
---
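In round numbers (treating both values as byte counts, with 1 GB =
2^30 bytes): 0x1bca74000 is about 6.9 GB and 0xe86b6be1 is about
3.6 GB, so c really has settled at roughly half of c_max.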
How did this happen, and what's the rule here? What would happen if I
had 4 GB of memory? I would like to come up with some requirements or
limitations for running with ZFS.
--
Bjorn Munch
Sun Microsystems, Trondheim, Norway