Mag Gam wrote:
> Currently at my university we are running many servers with Centos 4.6
> and everything is okay, but when a user does a large operation such as
> 'cp' or 'tar' or 'rsync' all of the physical memory gets exhausted.
cp or tar won't exhaust your memory. rsync can consume a lot if you're
copying a lot of files, but it's still fractional compared to the RAM
you have installed (from a test I'm running now it took about 10MB of
memory for 100,000 files and 38MB for 500,000 files).
It is common to confuse "free" memory with memory being used for
buffers. If you're doing any heavy disk I/O, Linux will automatically
consume all the memory it can for disk buffers. If that memory is
needed for something else, the kernel automatically re-allocates it
away from buffers to the application that is requesting it.
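You can watch this happen during a big cp or tar with nothing more than
watch and grep against /proc/meminfo (values are in kB), e.g.:

  watch -n 2 'grep -E "^(MemTotal|MemFree|Buffers|Cached):" /proc/meminfo'

MemFree will drop while Buffers and Cached climb; that memory isn't
"gone", the kernel hands it back as soon as an application asks for it.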
It sounds like you might be running a 32-bit version of the OS with
large memory support. If that is the case, performance can really
suffer once you go above 3GB of memory usage in a memory-intensive
operation, due to the massive overhead of PAE (a hardware function,
nothing to do with Linux itself).
So, a few things to check (example commands below):
- Confirm you are running a 64-bit kernel (uname -a)
- Confirm that you are not confusing free memory with memory that is
  being used by buffers
- Confirm that you're not already using a very large amount of memory
  before the I/O-intensive operation occurs
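Quick ways to check the first two:

  uname -m   # prints x86_64 for a 64-bit kernel, i686/i386 for 32-bit
  free -m    # look at the "-/+ buffers/cache" line, not the top "free" column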
You can calculate actual physical memory usage by doing:
(total memory) - (memory buffers) - (memory cache) - (memory free)
Then of course subtract physical memory usage from total
memory to get available memory.
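If you want to script that, a rough one-liner against /proc/meminfo
(values in kB, same formula as above) would be something like:

  awk '/^(MemTotal|MemFree|Buffers|Cached):/ { m[$1] = $2 }
       END {
         used = m["MemTotal:"] - m["MemFree:"] - m["Buffers:"] - m["Cached:"]
         print "used by applications (kB):", used
         print "actually available (kB): ", m["MemTotal:"] - used
       }' /proc/meminfo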
Don't trust the "free memory" readings from tools like top or free,
as on their own they are useless and misleading.
nate