For those who saw my earlier posts regarding reliability and robustness,
there were several problems:
1. rsync does not retry when there are transmission errors such as
timeouts. I suggest it ought to (as wget does): doing so would have
enabled a clean recovery from the network problems we had (a retry
wrapper is sketched after this list).
2. The DSL link was actually bad, which meant the connexion dropped
from time to time. After much stupidity from the spider folk, the link
finally got fixed and is working as it should.
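For point 1, a retry can be approximated from outside rsync with a
shell wrapper. This is only a sketch: the option list is abbreviated to
--archive for brevity, and the retry count and sleep interval are
arbitrary.

#!/bin/sh
# Rerun rsync until it exits 0, giving up after ten attempts.
tries=0
until rsync --archive --timeout=7200 <source> <dest>
do
    tries=$((tries + 1))
    [ "$tries" -ge 10 ] && break    # give up eventually
    sleep 60                        # pause before retrying
done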
I installed and configured openvpn (for other reasons). Openvpn
provides the robustness missing from the DSL connexion and rsync.
Indeed, I use openvpn from my home dialup to work, and TCP connexions
tunnelled through it often survive getting hung up on and having to
redial; I have had connexions survive for hours :-).
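For anyone wanting the same behaviour, the directives doing the work
(as far as I can tell) are persist-tun, persist-key and keepalive.
Here is a fragment of a client config showing only the
robustness-related directives; the server name and port are
placeholders, and a real config also needs keys and certificates:

dev tun
proto udp
remote <vpn-server> 1194
resolv-retry infinite   # keep retrying name resolution after a drop
persist-key             # don't re-read keys across restarts
persist-tun             # keep the tun device up across restarts
keepalive 10 120        # ping every 10s; declare the peer dead at 120s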
I am using rsync version 2.6.2 protocol version 28 (sending) on Debian
(Woody) and rsync version 2.6.2 protocol version 28 on RHL 7.3
(receiving).
I am transferring files from Woody using these options:
rsync --recursive --links --hard-links --perms --owner --group \
    --devices --times --sparse --one-file-system \
    --rsh=/usr/bin/ssh --delete --delete-excluded --relative \
    --stats --numeric-ids --timeout=7200 \
    --exclude-from=/etc/local/backup/system-backup.excludes \
    <source> <dest>
The source is on the local machine, the destination on another.
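Aside: unless I have misread the manpage, -a covers --recursive --links
--perms --owner --group --devices --times, so the whole thing collapses
to the short form below; the behaviour should be identical:

rsync -aHSx -e /usr/bin/ssh --delete --delete-excluded --relative \
    --stats --numeric-ids --timeout=7200 \
    --exclude-from=/etc/local/backup/system-backup.excludes \
    <source> <dest>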
Occasionally I mistakenly transfer a larger file than I intended; at
the moment I've accidentally included an ISO image.
The ISO image has five hard links, all in subdirectories of <source>.
It seems beyond possibility of misinterpretation that rsync is busily
transferring the file for the fifth time. It's been at it for days (no
joke, it might even be a week).
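One way to confirm that the five names really are hard links to a
single inode (in which case --hard-links ought to have the data sent
just once) is to compare inode numbers. A sketch; the image's path and
inode number are placeholders:

ls -li <source>/path/to/image.iso      # link count and inode number
find <source> -xdev -inum <inode>      # list every name sharing it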
Is there something in the above options I should add or remove?
Rather than transferring a directory and all its contents amounting to
some tens of thousands of files, would I be better off making a
filesystem in a file:
dd if=/dev/zero seek=$((20*1024*1024)) count=0 bs=1024 of=20-gig
mke2fs -F -q 20-gig
and then transferring that?
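Roughly, the workflow I have in mind would be the following; the mount
point is a placeholder, and the loopback mount needs loop-device
support:

mount -o loop 20-gig /mnt/image        # loopback-mount the image file
rsync -aHSx <source> /mnt/image/       # populate it locally
umount /mnt/image
rsync --sparse --timeout=7200 20-gig <dest>   # ship one big file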
I imagine that directory sizes, devices, permissions, {hard,soft}
links and everything else would then become non-issues.
I appreciate this needs 40 GB at the destination: the 20 GB image plus
rsync's temporary copy while it builds the updated file.
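If the rsyncs were upgraded, the --inplace option (added somewhere in
the 2.6.x series, if I remember the NEWS file correctly) updates the
destination file directly rather than building a temporary copy, which
would bring the requirement back down to about 20 GB. I believe it
cannot be combined with --sparse, though:

rsync --inplace --timeout=7200 20-gig <dest>   # no temporary copy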