I don't know if this is the proper place to ask for new functionality, but after searching the website I didn't find anywhere else to do it. I'm very sorry if I'm asking this in the wrong place.

It would be great to have encryption functionality added to rsync, apart from using ssh to encrypt transmissions. I know this is not the original purpose of rsync, but it's a fact that many people need (at least I do; I hope I'm not alone) additional security features for offsite backups, such as having the data encrypted at the remote end but not at the source. The problem, then, is encrypting the data and then synchronizing it.

I've programmed a wrapper in Java that does this, but it is very time- and resource-consuming. The idea was to copy each file into a temp dir, encrypt it, rsync it, and delete it, walking the source tree (and subdirs) top-down. The reason for sending the data file by file is that we often have very large files and/or very large dirs, so requiring the user to have as much free space as the data he or she wants to send is not a good idea, I guess. It isn't a bad solution, but it isn't a good one either. Ideal would be support for this in rsync itself.

It is just an idea to improve rsync. Thank you very much in advance, and again, sorry if this wasn't the right list for a feature request.
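For concreteness, the per-file approach described above boils down to something like the following shell sketch. The paths, key file, and choice of openssl cipher are illustrative assumptions, not details from the original wrapper (which was written in Java):

    #!/bin/sh
    # Illustrative sketch of the copy/encrypt/rsync/delete loop: only one
    # file's worth of staging space is ever needed, at the cost of one
    # rsync connection per file.
    SRC=/data; TMP=/tmp/stage; DEST=user@backuphost:/backups
    find "$SRC" -type f | while read -r f; do
        mkdir -p "$TMP$(dirname "$f")"
        # Encrypt one file into the staging area.
        openssl enc -aes-256-cbc -salt -pass file:/root/backup.key \
            -in "$f" -out "$TMP$f"
        # -R (--relative) keeps the path relative to the "/." anchor,
        # so the directory structure is recreated on the destination.
        rsync -R "$TMP/.$f" "$DEST"
        rm "$TMP$f"
    done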
On Tue, 4 Mar 2008, david reinares wrote:

> It would be great to have encryption functionality added to rsync,
> apart from using ssh to encrypt transmissions.
>
> I know this is not the original purpose of rsync, but it's a fact that
> many people need (at least I do; I hope I'm not alone) additional
> security features for offsite backups, such as having the data
> encrypted at the remote end but not at the source. The problem, then,
> is encrypting the data and then synchronizing it.

There are a couple of things that try to do this, e.g. duplicity, but none that fully hit the nail on the head. The killing factor for duplicity is that to expire increments you need to do a new full backup. rsyncrypto is probably what comes closest to the goal; it would be ideal, though, if rdiff-backup were combined with rsyncrypto to do it all in one. I personally use a three-stage process: tar, then rsyncrypto, then rdiff-backup (and then rsync). When rsyncrypto gets working stdin/stdout support, the tar/rsyncrypto stage could be combined into one. None of this is directly related to rsync itself, though.
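For readers unfamiliar with that setup, the three-stage process is roughly the following. This is a hedged sketch: the directory layout and key names are invented here, and the rsyncrypto call assumes its documented srcfile/dstfile/keyfile/pubkey argument order:

    # 1. Aggregate small files so rsyncrypto has fewer, larger inputs.
    tar cf /stage/plain/home.tar /home
    # 2. Encrypt in an rsync-friendly way: a small plaintext change stays
    #    a small ciphertext change, so delta transfer keeps working.
    rsyncrypto /stage/plain/home.tar /stage/enc/home.tar /keys/home.key backup.crt
    # 3. Keep reverse increments of the encrypted tree locally.
    rdiff-backup /stage/enc /mirror
    # 4. Ship the versioned, encrypted repository offsite.
    rsync -a /mirror/ user@offsite:/backups/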
rsyncrypto looks fine, but it's still not what we're looking for.

Having a complete file re-sent when a small change happens doesn't bother me. We're performing daily rsyncs, so not many files can change in a day. The real problem is space and performance: if you want good performance you have to sacrifice space, and vice versa. We decided to save space on the client, so we copy file by file, encrypt it, rsync it, and then delete it. A hell for performance, since it starts a new rsync connection for each file.

Worst of all, we lose the -b functionality, which was really good (keeping not just the previous day's copy but an extra day): keeping a previous version of the destination data on a file-by-file basis is not a good idea. And to get the --delete functionality we have to play a trick at the end with one more rsync run over all the directories with --ignore-existing, --ignore-non-existing and --delete, just to end up with the same set of files on source and destination (thank you very much, Matt McCutchen).

Duplicity is not a good idea, at least for us; we don't want tar'd directories and such things.

Any idea how to get the -b functionality back and reach a compromise between space and performance?
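The deletion trick mentioned above fits in a single extra rsync run. A sketch (directory names invented): --ignore-existing skips files already on the receiver and --ignore-non-existing (the documented alias for --existing) skips files missing there, so no file data is transferred at all, but --delete still prunes receiver files that no longer exist on the sender:

    # Final pass: transfer nothing, only propagate deletions.
    rsync -r --ignore-existing --ignore-non-existing --delete \
        /crypted-src/ user@backuphost:/backups/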
Very good, this patch... thank you. I've been testing after patching rsync, and it works fine for backup. But when I'm restoring the encrypted data, after a while rsync shows:

    rsync: Failed to close: Bad file descriptor (9)
    rsync: Failed dup/close: Bad file descriptor (9)
    rsync error: error in IPC code (code 14) at pipe.c(208) [receiver=3.0.0]
    rsync error: error in IPC code (code 14) at pipe.c(195) [receiver=3.0.0]
    rsync: connection unexpectedly closed (55 bytes received so far) [generator]
    rsync error: error in rsync protocol data stream (code 12) at io.c(600) [generator=3.0.0]

It's a bit strange: it restores some files before failing, and they are perfectly decrypted. I'm using openssl as the filter command.

-------------------------------------------------------------------------

On Sat, 2008-03-08 at 18:33 +0100, david reinares wrote:
> Worst of all, we lose the -b functionality, which was really good
> (keeping not just the previous day's copy but an extra day): keeping a
> previous version of the destination data on a file-by-file basis is
> not a good idea.

I don't understand what problem you are having with -b; could you please clarify? Suffixed backups should work exactly the same way when rsyncing one file at a time as they did when you rsynced all the files at once. The same is true of backups with --backup-dir, provided that you use --relative so you can specify the same destination argument for every run.

> Any idea how to get the -b functionality back and reach a compromise
> between space and performance?

To fix the performance while keeping the space usage low, look into the "source-filter_dest-filter" branch of rsync:

http://rsync.samba.org/ftp/rsync/dev/patches/source-filter_dest-filter.diff

You could run rsync once for all the files and specify your encryption program as the --source-filter, and rsync would call your encryption program once per file as needed.

Matt
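Assuming the patch works as its description suggests (each file is piped through the filter command's stdin/stdout, sender-side for --source-filter and receiver-side for --dest-filter), the intended usage looks something like this hedged sketch; the cipher and key file are illustrative:

    # Backup: encrypt on the client as each file is sent, so only
    # ciphertext is ever stored on the server.
    rsync -a --source-filter="openssl enc -aes-256-cbc -salt -pass file:/root/backup.key" \
        /data/ user@backuphost:/backups/

    # Restore: decrypt on the client as each file is received. This is
    # the --dest-filter direction that fails in the report above.
    rsync -a --dest-filter="openssl enc -d -aes-256-cbc -pass file:/root/backup.key" \
        user@backuphost:/backups/ /restore/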
After testing a bit more, I discovered that it fails when I pass the command to restore and decrypt with --dest-filter (on the client side). It's always the same file, no matter how many times I run rsync, but after testing different folders I can't see any connection between the failed files: html, java, etc., and in each case other files exactly like them in the same folder are rsync'd and decrypted perfectly.

If I do the same with --source-filter (on the server side), it seems to work; I can restore all files. But that is a problem, because we don't want the files decrypted on the server even for a second, apart from the fact that a big bunch of clients restoring at the same time, with all the hard work of decrypting done on the server side, would overload the server.
I did as you said: the program calls openssl after rsync to decrypt the restored data. But the patch has more serious problems. Besides the --dest-filter error, when testing a larger amount of data (well, not that large, 70 MB), I received this message:

    html/lib/perl5db.html
          9 [main] rsync 2760 _cygtls::handle_exceptions: Exception: STATUS_ACCESS_VIOLATION
        608 [main] rsync 2760 open_stackdumpfile: Dumping stack trace to rsync.exe.stackdump
    1870590 [main] rsync 2760 _cygtls::handle_exceptions: Exception: STATUS_ACCESS_VIOLATION
    1909558 [main] rsync 2760 _cygtls::handle_exceptions: Error while dumping state (probably corrupted stack)

This patch needs really hard work and lots of testing. Reading the list, I think I saw something similar:

    rsync error: STATUS_ACCESS_VIOLATION
    Rob Bosch, Thu, 25 Oct 2007 17:56:19 -0700

I don't know if it's the same failure or the same cause. I guess I'll have to figure out another way to rsync and encrypt efficiently. Any other idea? By the way, thank you very much, and sorry for the inconvenience.

-------------------------------------------------------------------------

On Mon, 2008-03-10 at 22:55 +0100, david reinares wrote:
> If I do the same with --source-filter (on the server side), it seems
> to work; I can restore all files. But that is a problem, because we
> don't want the files decrypted on the server even for a second, apart
> from the fact that a big bunch of clients restoring at the same time,
> with all the hard work of decrypting done on the server side, would
> overload the server.

I will look into the problem with the patch if I get a chance. One workaround for restoring files would be to rsync the desired files to the target machine without filtering and then decrypt them there (with rsync and --source-filter or just a shell script).

Matt
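Matt's workaround in the quoted message amounts to pulling the files still encrypted, with no filter involved anywhere, and then decrypting locally afterwards. A minimal shell sketch, where the .enc suffix, key file, and paths are assumptions for illustration:

    # 1. Plain rsync, no filter: the ciphertext comes across as-is,
    #    so the fragile --dest-filter path is never exercised.
    rsync -a user@backuphost:/backups/ /restore/
    # 2. Decrypt in place on the client with an ordinary shell loop.
    find /restore -type f -name '*.enc' | while read -r f; do
        openssl enc -d -aes-256-cbc -pass file:/root/backup.key \
            -in "$f" -out "${f%.enc}" && rm "$f"
    done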