I have started investigating the memory usage of the rsync algorithm since
installing OpenWRT on a Linksys WRT160NL router with only 32MB of RAM. I
have read several discussions about rsync's use of memory.

Rsync consumes all of the memory on my router even when I ensure
incremental scanning is used. This is not what I expect. The directory I
am recursively synchronizing contains 56,545 directories in its tree.
However, the maximum number of entries in any one directory is only 2,783.

I have been looking into working around this by breaking up the
synchronization task. Has anyone else come up with a good technique to do
this? I have seen references to this idea, but no concrete code. What I am
doing right now is:

find . -type d -print -exec sh -c \
    'rsync --dirs -vv -l -p -e ssh "$1"/* root@host.example.com:/DEST-DIR/"$1"' sh {} \;

This is a bit slow because I need to establish an SSH connection and
initialize rsync 56,545 times. I'd like to hear other people's solutions.

Mike
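[A possible way to break the job up, as a sketch only, not from the thread:
the destination host, the /DEST-DIR path, and the depth-2 split point are
assumptions. Run one recursive rsync per subtree at a fixed depth and use
--relative so each run recreates its source path under the destination.
Each run's file list is then bounded by one subtree, and the number of SSH
connections drops from one per directory to one per subtree.]

# Sketch: sync the tree in per-subtree chunks to bound rsync's file list.
# The host, destination path, and depth cutoff are placeholders.
# One rsync per directory at depth 2; --relative makes rsync rebuild the
# "sub/dir" part of the source path under /DEST-DIR on the remote side.
find . -mindepth 2 -maxdepth 2 -type d -exec \
    rsync -a --relative -e ssh {} root@host.example.com:/DEST-DIR/ \;

[Files that sit above the split depth, directly in "." or in a depth-1
directory, are not covered by these runs and still need one shallow pass,
for example with rsync --dirs as in the original command.]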
On Tue, 2010-02-09 at 21:46 -0500, W. Michael Petullo wrote:
> I have started investigating the memory usage of the rsync algorithm since
> installing OpenWRT on a Linksys WRT160NL router with only 32MB of RAM.

Wow, that's pretty tight.

> Rsync consumes all of the memory on my router even when I ensure
> incremental scanning is used. This is not what I expect. The directory I
> am recursively synchronizing contains 56,545 directories in its tree.
> However, the maximum number of entries in any one directory is only 2,783.

See: http://lists.samba.org/archive/rsync/2007-August/018193.html

The number of files that rsync tries to maintain in the active file lists
at one time is controlled by {MIN,MAX}_FILECNT_LOOKAHEAD in rsync.h. I did
some tests with "ulimit -v", and reducing those values helped significantly.
That still may not be enough to fit rsync into 32 MB.

--
Matt
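[A hedged sketch of the kind of "ulimit -v" test mentioned above; the 24 MB
cap and the source/destination paths are arbitrary placeholders, not values
from the thread. Running rsync in a subshell with its address space capped
makes it fail with an allocation error instead of exhausting the router's
memory, which is a quick way to check whether a given build fits.]

# Cap rsync's virtual memory for this run only (ulimit -v takes KB;
# 24576 KB = 24 MB here, chosen only for illustration).
( ulimit -v 24576; rsync -a -e ssh /SRC-DIR/ root@host.example.com:/DEST-DIR/ )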