On Thu, Dec 17, 2009 at 6:49 AM, Joel Peabody <jpeabody at bnserve.com> wrote:
> However, in looking at the output of one particular job it looks
> like it's just putting out directories when it gets to a new one, and not
> the filename of everything it's transferring.
No, that's pretty much impossible. If you're losing output, it's more
likely to be something like an fd somehow getting set to non-blocking
I/O, which can cause some flushed data to be lost (since stdio
libraries expect their file handles to be blocking). There was a bug
a while back where ssh would cause such a problem when stdout and
stderr were joined together, but I haven't seen this before for just a
redirected stdout. The "fix" was a kluge where rsync sets the output
back to blocking after (hopefully) waiting long enough for ssh to
have finished its fiddling. You might try adding a
set_blocking(STDOUT_FILENO) right after the
set_blocking(STDERR_FILENO) line in main.c just to see if that helps
anything.
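For illustration, the change amounts to something along these lines (this is
just a sketch of the idea, not rsync's actual source; the set_blocking()
here mimics the util.c helper by clearing O_NONBLOCK via fcntl, and the
main() wrapper only exists so the snippet compiles on its own):

    #include <fcntl.h>
    #include <unistd.h>

    /* Roughly what rsync's set_blocking() helper does: clear O_NONBLOCK
     * so that flushed stdio data blocks instead of being dropped. */
    static void set_blocking(int fd)
    {
        int val = fcntl(fd, F_GETFL);
        if (val != -1 && (val & O_NONBLOCK))
            fcntl(fd, F_SETFL, val & ~O_NONBLOCK);
    }

    int main(void)
    {
        /* The experiment suggested above: restore blocking mode on
         * stdout right after the existing stderr call in main.c. */
        set_blocking(STDERR_FILENO);
        set_blocking(STDOUT_FILENO);
        return 0;
    }

If adding the stdout call makes the missing filenames show up, that would
point pretty strongly at the non-blocking-fd theory.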
Also, if all you want is the per-file output, using -i on its own (or
--log-format on its own) without --progress will give you that, which
will help cut down on all the other per-file output that --progress
generates.
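For example (the source and destination here are just placeholders):

    rsync -ai /some/src/ remotehost:/some/dest/

That prints one itemized line per transferred file, without the per-file
progress updates that --progress adds.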
..wayne..