Robert DuToit <rdutoit at comcast.net> wrote:
> Mike Bombich has a good piece on benchmarks for various source/destination
> scenarios with rsync.
>
> https://bombich.com/kb/ccc3/how-long-should-clone-or-backup-take
I hadn't seen that link, thanks.
There's an interesting anomaly in the first chart. Unsurprisingly, for a given
source connection method, increasing the speed of the destination connection
increases throughput. And vice versa: for a given destination connection type,
faster source connections give more throughput.
Almost!
The fact that internal SATA to internal SATA is slower than SATA <-> FW800 (in
either direction, with SATA -> FW800 the most pronounced) suggests that there is
an internal bottleneck when using two internal SATA devices simultaneously on
the machine used for the tests.
Also, internal SATA <-> FW800 is the only combination where there's
significant asymmetry in rates, and it does look like there is a write speed
issue on the internal SATA. It would be interesting to see what speed something
like "dd if=/dev/zero of=/dev/${device} bs=1m" gets.
What I can say is his figures for SATA -> FW800 are in the same ballpark as I
get (with a different backup package) on big files.
> Network backups are always slower but you can see that sparseimage on
> network volume (AFP) is better according to Mike’s chart.
That's not all that surprising really. If you think about it, it means the
source computer can cache filesystem metadata and dirty data locally and only
deal with the host for a relatively small number of relatively large files. So
it really just has to throw a list of blocks to write at the destination, and
the writes can be queued and (presumably, if async options are set) overlapped
to keep the pipes full.
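For anyone wanting to reproduce that kind of setup, it's essentially just a
couple of hdiutil calls (the path, size and volume name below are made up for
illustration):

    # create a growable sparse image on the network share; -size is a cap, not preallocated
    hdiutil create -type SPARSE -size 500g -fs HFS+J -volname Backup /Volumes/NetShare/backup.sparseimage
    # mount it locally; the backup then writes to a local-looking volume,
    # and only block updates to the image file cross the network
    hdiutil attach /Volumes/NetShare/backup.sparseimage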
The result is much reduced network overhead compared with a large number of
remote file operations - not so much in the bandwidth required as in the latency
of all the round-trip network conversations that need to take place.
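To put rough, purely illustrative numbers on it: a backup of 200,000 small files
needing even three round trips each (stat, open/create, close) over a link with
1 ms round-trip time spends around 200,000 x 3 x 1 ms = 600 s just waiting on
the network, before any data moves. The same data inside a sparseimage collapses
into a comparatively small number of large sequential writes that can be
streamed.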