>Is there more than one rsync running? In 2.5.4 a failure in another rsync
>process could kill your rsync. I haven't studied the code recently, but I
>don't think there are any calls to fork() after it has started transferring
>files.
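(One way to check that directly - just a suggestion, assuming strace and a
procps-style ps are available - is to watch the process tree and trace
fork-type syscalls while a transfer runs:
bash$ ps --forest -o pid,ppid,stat,cmd -C rsync
bash$ strace -f -e trace=fork,vfork,clone -p <pid of the running rsync>
strace -f follows children, so any fork() after the transfer starts would
show up as a fork/clone line in the trace.)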
I recall that
bash$ ps -aux
showed two rsync processes, maybe three while rsync was executing on the
other console. I have plenty of headroom:
bash$ ulimit -a
core file size (blocks)      0
data seg size (kbytes)       unlimited
file size (blocks)           unlimited
max locked memory (kbytes)   unlimited
max memory size (kbytes)     unlimited
open files                   1024
pipe size (512 bytes)        8
stack size (kbytes)          unlimited
cpu time (seconds)           unlimited
max user processes           768
virtual memory (kbytes)      unlimited
bash$
> bash$ ps U `whoami` | wc -l
40
>Then see the total number of running processes with ps
> bash$ ps ax | wc -l
55
>My system is a long way from running out of processes.
And so is mine.
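For the record, a direct way to compare the count against the limit (plain
bash, nothing else assumed):
bash$ echo "$(ps U $(whoami) | wc -l) of $(ulimit -u) user processes in use"
(The count runs one high because ps prints a header line.)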
>I ran into the bug because we had a script that invoked a script that ...
>invoked rsync, which invoked ssh, for a total of something like 8 processes
>per client, and we ran it for 20 clients. We misconfigured ssh so it hung
>on all the clients, leaving all 160 processes running; then cron came
>along and started a whole new set of 160 processes, which didn't work,
>because the limit was 255. Fortunately, it wasn't running as root.
Only one client (root) (maybe wwwrun etc.), so there shouldn't be a problem
there. Maybe the way the kill() error trap is reached differs between your
case and mine, but it may still have been the same trap. I'll have to look
at the code.
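Incidentally, for the cron pile-up you describe: a lock around the job
stops a second set from starting while the first is hung. A sketch, with a
made-up lock path and rsync arguments, assuming util-linux flock(1):
bash$ flock -n /var/lock/rsync-job.lock rsync -az -e ssh src/ host:dest/
With -n, flock exits at once instead of queueing if the previous run still
holds the lock; and a ConnectTimeout on ssh would have stopped the hang
itself:
bash$ rsync -az -e 'ssh -o ConnectTimeout=10' src/ host:dest/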
I am not sure the ulimit display (above) isn't itself a warning sign - I'm
not sure virtual memory (kbytes) should be unlimited, given that I have a
finite swap partition. Maybe a kernel problem there...
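To see what is actually backing that limit (plain bash and procfs):
bash$ cat /proc/swaps
bash$ free
bash$ ulimit -v
As far as I know, ulimit -v is only a per-process address-space cap
(RLIMIT_AS); the kernel leaves it unlimited by default no matter how much
swap exists and relies on overcommit instead, so "unlimited" there is
normal rather than a warning.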
Thanks for your help. I will keep looking...
..Trevor..