Waters, Michael R [IT]
2001-Oct-31 05:31 UTC
Multiple rsyncs with multiple sshs... CPU overload
Hello Folks,

I am using rsync 2.4.6 over ssh on Solaris 2.6 machines. It's been working great for months keeping three DMZ ftp servers in sync. Now, though, I am trying to implement a new solution with DMZ and "inside" ftp servers. Basically, I want to sync files being ftp'ed to the DMZ server over to an "inside" machine. Since some processing (decryption) then occurs on the inside machine, I also need to send over the last line of the file transfer log so it knows there is something to do (another process checks the log for new entries). I need to use ssh because rsh is not permitted in our environment (so, as I understand it, the rsync server is not an option). This all works fine with the nifty trick of running a command on the remote machine via the rsync call.

The problem is that it works for one transfer, and even a few, but when I run a stress test averaging 10 transfers a minute, it drives the CPU so hard that some jobs never complete. I know this is because a new ssh session is started for each file transfer. And because of the second requirement, sending the last entry of the file transfer log, the situation doesn't really lend itself to batching an rsync every five minutes or so.

So, I am wondering if there is a way to open a *single* ssh session and have *all* rsyncs between the DMZ and inside servers use that "persistent" pipeline, instead of starting a new ssh each time. From what I have read I am pessimistic, but I figured it can't hurt to ask. If not, I'll have to work out something with the file transfer log, but it sure would be great to get this working; it has greatly improved our redundant capabilities.

Thanks for any suggestions.

Mike
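[For context, a hedged sketch of the per-transfer pattern described above: each incoming file costs two fresh ssh sessions, one forked by rsync and one to ship the log line. Every name and path here is hypothetical.

    # Hypothetical script run once per ftp'ed file ($1 = filename).
    # At roughly 10 transfers a minute, the repeated ssh startup and
    # key exchange is what saturates the CPU.

    # Copy the new file to the inside machine (forks one ssh):
    rsync -avz -e ssh "/ftp/incoming/$1" inside-host:/ftp/incoming/

    # Append the last transfer-log line on the inside machine so the
    # decryption watcher sees new work (forks a second ssh):
    tail -1 /var/log/xferlog | ssh inside-host 'cat >> /ftp/logs/xfer.log'
]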
One thing you could try would be to set up a port-forwarding ssh for port 873 and run rsync in daemon mode. Daemon mode does not use rsh as you thought it did: it uses its own (unencrypted, without external help) socket connections. ..wayne..
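[A minimal sketch of Wayne's suggestion, assuming the rsync daemon runs on the inside host and the DMZ box keeps a single ssh tunnel open to it; the hostnames, paths, and module name are placeholders.

    # On the DMZ host: one long-lived ssh forwarding local port 873
    # to the rsync daemon on the inside host (root is needed to bind 873):
    ssh -f -N -L 873:127.0.0.1:873 inside-host

    # Each transfer then goes through the existing tunnel, with no new
    # ssh process per file.  The double colon selects the daemon protocol:
    rsync -avz /ftp/incoming/somefile localhost::incoming/
]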
tim.conway@philips.com
2001-Oct-31 05:54 UTC
Multiple rsyncs with multiple sshs... CPU overload
You're in luck, Mr. Waters. If you're already using ssh, put up an rsyncd on the DMZ machine, limit it to 127.0.0.1, and use ssh port redirection so that a port on the inside machine becomes an access point to the rsyncd port on the DMZ side. One ssh persists, used by many, possibly concurrent, rsync sessions. The same ssh can remain interactive and serve as a pipe for your remote commands.

Tim Conway
tim.conway@philips.com
303.682.4917
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, n9hmg on AIM
perl -e 'print pack(nnnnnnnnnnnn, 19061,29556,8289,28271,29800,25970,8304,25970,27680,26721,25451,25970), ".\n" '
"There are some who call me.... Tim?"
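[To make Tim's recipe concrete, here is a minimal sketch under stated assumptions: the module name, paths, and hostnames below are hypothetical. The daemon sits on the DMZ host locked down to loopback, while the inside host holds the tunnel open and pulls through it.

    # /etc/rsyncd.conf on the DMZ host (hypothetical module and path).
    # "hosts allow = 127.0.0.1" admits only connections arriving
    # through the ssh tunnel, never directly from the network.
    use chroot = yes
    read only = yes

    [incoming]
        path = /ftp/incoming
        comment = files awaiting decryption
        hosts allow = 127.0.0.1

    # Start the daemon on the DMZ host:
    rsync --daemon

    # On the inside host: one long-lived ssh forwarding local port 873
    # to the daemon on the DMZ box (binding port 873 locally needs root):
    ssh -f -N -L 873:127.0.0.1:873 dmz-host

    # Every pull then rides the existing tunnel; the double colon tells
    # rsync to speak its daemon protocol rather than spawn a remote shell:
    rsync -avz localhost::incoming/ /ftp/incoming/
]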