Hi, all.

I just compiled the latest rsync (2.6.9), but I'm getting an error when I use the -C option. For example:

    $ rsync -aCv host1:/home/john/data/ /home/john/data
    receiving file list ... ERROR: out of memory in add_rule [sender]
    rsync error: error allocating core memory buffers (code 22) at util.c(115) [sender=2.6.9]
    rsync: connection unexpectedly closed (8 bytes received so far) [receiver]
    rsync error: error in rsync protocol data stream (code 12) at io.c(453) [receiver=2.6.9]

If I remove the -C option, everything is fine. The directory structure is very simple, with no sym/hard links, and the total number of files in there is about 30. Running the command on the local file system (replicating a local dir with -C) has no problem.

I trussed the remote sshd and saw the client connect successfully; the forked child sshd forked again and exec'd rsync:

    137: execve("/usr/bin/rsync", 0x0004509C, 0x000450F8)  argc = 6
    137:  argv: rsync --server --sender -vlogDtprC . /home/john/data
    137:  envp: _=/usr/bin/rsync

Both ends are running rsync 2.6.9, both systems are running Solaris 9, and the binary was compiled with Sun's cc (not gcc).

What makes me think this is a bug in rsync is that rsync to another Solaris 9 machine running rsync 2.5.0 works with no problem (with the -C option). I also googled around, and most people getting this error were dealing with a large number of files/directories.

Any info on this would be helpful. Thanks.
On 7/19/07, Paul Cui <ycui1@bloomberg.com> wrote:
> hi, All.
>
> I just compiled the latest rsync (2.6.9). but I'm getting an error when
> I use the -C option.
> eg:
> $ rsync -aCv host1:/home/john/data/ /home/john/data
> receiving file list ... ERROR: out of memory in add_rule [sender]
> rsync error: error allocating core memory buffers (code 22) at util.c(115) [sender=2.6.9]
> rsync: connection unexpectedly closed (8 bytes received so far) [receiver]
> rsync error: error in rsync protocol data stream (code 12) at io.c(453) [receiver=2.6.9]

The remote rsync is running out of memory as it collects the CVS ignore rules (add_rule). Something could be awry with the CVS ignore rules, or the machine could just be low on memory.

Please run rsync again with -vvv (three verbose options), which will make the remote rsync show (among other useful information) the individual calls to add_rule, and send the resulting output to the list.

Matt
On 7/20/07, Paul Cui <ycui1@bloomberg.com> wrote:
> [sender] add_rule(:C .cvsignore)
> ERROR: out of memory in add_rule [sender]

So the problem is happening on the very first call to add_rule. That's very bizarre. Since I can't reproduce the problem on my computer, I can't do anything more on my side to investigate it; sorry.

If you want to get to the root of the problem, I suggest that you add some debug output ( rprintf(FINFO, "blah...\n"); ) to the remote rsync. A good first step would be to determine which of the several out_of_memory("add_rule") calls in add_rule is being triggered.

Matt