I'm looking for a way to deliberately copy a large directory tree
of files somewhat slowly, rather than as fast as the hardware
will allow. The intent is to avoid killing the hardware,
especially as I copy multi-gigabyte disk image files.
If I copy over the network, say via ssh, I can use --bwlimit.
But I'm wondering whether I can specify --no-whole-file and perhaps
some other options to get a local disk-to-disk copy
to honor a rate limit at something near the block level
(rather than, say, as an average over whole files).
Is this feasible?
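To make it concrete, over the network I'd do something like this
(the paths and the 5000 KB/s figure are just placeholders):

    rsync -av --bwlimit=5000 -e ssh /Volumes/Source/ otherhost:/Volumes/Dest/

and what I'm imagining for the local case is along the lines of:

    rsync -av --no-whole-file --bwlimit=5000 /Volumes/Source/ /Volumes/Dest/

though I don't know whether --bwlimit throttles anything at all when
no remote shell is involved, which is really what I'm asking.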
I've got rsync 3.0.7 built on the two systems potentially involved,
which are running Mac OS X 10.4.11 and 10.6.4. (I can do the copy
on either box or over a short ethernet between them.)
It's important to preserve Mac OS resource forks, despite their
being out of vogue.
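For the forks, my (possibly wrong) understanding is that a 3.0.7
build with xattr support can carry them across as the
com.apple.ResourceFork extended attribute via -X/--xattrs, so the
full command I have in mind looks roughly like:

    rsync -avX --no-whole-file --bwlimit=5000 /Volumes/Source/ /Volumes/Dest/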
--
Albert Lunde albert-lunde at northwestern.edu
atlunde at panix.com (address for personal mail)