hello matt,
thank you for your reply.
as i see it, the method you describe is only "theoretical", because it
won't work in practice due to the buffering issue.
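the buffering can be seen with a quick local sketch (not your exact setup, just gzip on its own): while the writer keeps the pipe open, gzip emits nothing, because it flushes its output buffer only at EOF — and that is exactly the deadlock.

```shell
# hold gzip's stdin open for a moment and count what comes out the far
# end. gzip buffers internally and flushes only at EOF, so the reader
# sees nothing while the writer's side of the pipe is still open.
out=$( (printf 'some data'; sleep 2) | timeout 1 gzip | wc -c )
echo "bytes emitted while stdin was still open: $out"
```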
furthermore, it still needs ssh or some other remote shell.
i'd like to leave out ssh (or any remote shell) entirely, because
encryption is slow and costs cpu.
i read that there is a patch for openssh that lets you use -c none, i.e.
Cipher none, to run ssh without encryption, but that is not an option
because i cannot install a patched ssh on all machines.
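(i know rsync's daemon mode can already transfer without any remote shell at all — a rough sketch below, with a made-up module name "data" — but that only gives me rsync's built-in zlib via -z, not lzo:)

```
# /etc/rsyncd.conf on the receiver (sketch; "data" is a made-up module)
[data]
    path = /bdest/dir
    read only = false

# then, from the sender, no ssh involved:
# rsync -az /source/dir/ remote::data/
```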
> The next best thing is to use rsync to generate a batch file on the
> sender, bring the batch file to the receiver by hand (compressing and
> decompressing), and apply the batch file on the receiver using rsync.
> This effectively lets you compress the mainstream file-data part of the
> transmission, but much of the metadata must still go over the wire
> during the creation of the batch file. See the man page for details.
$ rsync --write-batch=pfx -a /source/dir/ /adest/dir/
$ rcp pfx.rsync_* remote:
$ ssh remote rsync --read-batch=pfx -a /bdest/dir/
# or alternatively
$ ssh remote ./pfx.rsync_argvs /bdest/dir/
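the "bring it over by hand" hop in the middle could look roughly like this (a sketch with gzip as the compressor and a scratch file standing in for the real batch file; an lzo tool would be used the same way):

```shell
# sketch of the compress/decompress hop for the batch file. a scratch
# file plays the role of the batch file; in real use you would rcp the
# .gz file to the remote host between the gzip and gunzip steps.
set -e
work=$(mktemp -d)
printf 'pretend batch data\n' > "$work/pfx.rsync_data"   # fake batch file
gzip -9 "$work/pfx.rsync_data"            # compress before the copy
# rcp "$work/pfx.rsync_data.gz" remote:   # network hop (not run here)
gunzip "$work/pfx.rsync_data.gz"          # decompress on the receiver
cat "$work/pfx.rsync_data"                # content round-trips unchanged
rm -r "$work"
```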
ah - i understand. quite interesting and cool stuff, but unfortunately it
would need tons of temporary space for that first rsync, or whenever there
is a significant diff between the two hosts.
conclusion:
neither method is an option for me at the moment.
any chance of seeing "pluggable compression" in rsync one day? maybe
it's a planned feature?
regards
roland
Matt McCutchen <hashproduct@verizon.net> wrote on 09.02.06 02:02:47:
> On Fri, 2005-12-09 at 00:46 +0100, roland wrote:
> > I`m trying to find a way to use lzo compression for the data being
> > transferred by rsync.
>
> It's easy to set this up. Consider this (for gzip, but it's easy to
> do the same for any compression program):
>
> /usr/local/bin/gzip-wrap:
> #!/bin/bash
> gzip | "$@" | gunzip
>
> /usr/local/bin/gunzip-wrap:
> #!/bin/bash
> gunzip | "$@" | gzip
>
> Then run:
> rsync --rsh='gzip-wrap ssh' --rsync-path='gunzip-wrap rsync'
> <options source dest>
>
> As elegant as this technique is, it fails because compression programs
> perform internal buffering. One rsync will send a block of data and
> wait for an acknowledgement that the other side has received it, but
> since the end of the data is buffered in the compression program, the
> other side never responds and deadlock results. There might be a way
> around this, but I can't think of one.
>
> The next best thing is to use rsync to generate a batch file on the
> sender, bring the batch file to the receiver by hand (compressing and
> decompressing), and apply the batch file on the receiver using rsync.
> This effectively lets you compress the mainstream file-data part of the
> transmission, but much of the metadata must still go over the wire
> during the creation of the batch file. See the man page for details.
> --
> Matt McCutchen
> hashproduct@verizon.net
> http://mysite.verizon.net/hashproduct/
>