search for: rsyncable

Displaying 20 results from an estimated 32 matches for "rsyncable".

2003 Aug 28
1
Fw: Re: GZIP, ZIP, ISO, RPM files and rsync, tar, cpio
...ave been updated impacting > > the rsyncability of the package file. In any case, changing > > even one internal file of a compressed package can disrupt > > rsyncing the entire package file. The only possible > > amelioration of this would be the use of the gzip > > --rsyncable option (which requires a patched gzip) by the > > package builders--assuming they use gzip for package > > compression. Given the effect of improving rsyncability and > > thereby reducing bandwidth requirements, such a change to > > their package build scripts could well be...
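A minimal sketch of the comparison being discussed, assuming a gzip build that supports --rsyncable and using hypothetical file and host names:

  # Compress the same payload twice, with and without --rsyncable
  gzip -9 -c package-payload.tar             > package-normal.tar.gz
  gzip -9 --rsyncable -c package-payload.tar > package-rsyncable.tar.gz

  # After a small change to the payload, --stats shows how much literal
  # data each variant forces rsync to send
  rsync -av --stats package-normal.tar.gz    backuphost:/srv/packages/
  rsync -av --stats package-rsyncable.tar.gz backuphost:/srv/packages/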
2004 May 03
1
gzip-rsyncable.diff
Hi, I currently create large (500MB-2.5GB) tar.gz files that I am trying to get to rsync better. After doing some research by reading a lecture presented by the original author of rsync and googling the list archives, I have concluded that the gzip-rsyncable.diff is the best way to get gzipped files to rsync nicely. The only version I could find of this patch is available here: http://rsync.samba.org/ftp/unpacked/rsync/patches/gzip-rsyncable.diff Since the patch was created a while ago and is not part of the default gzip yet, I was wondering if r...
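A minimal sketch of that workflow, assuming a gzip patched with gzip-rsyncable.diff and hypothetical paths:

  # Create the archive through the patched gzip rather than tar's built-in -z
  tar -cf - /data/bigtree | gzip --rsyncable > bigtree.tar.gz

  # Subsequent nightly runs should then transfer mostly matched blocks
  rsync -av --stats bigtree.tar.gz remotehost:/backups/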
2005 Feb 18
0
Patch for rsyncable zlib with new rolling checksum
...slightly more sophisticated checksum calculation. The attached patch has the required changes (in hindsight, I should have compressed this using zlib with the new algorithm :-) ). Some things to know about the patch: First, it is against the zlib library - NOT the gzip application. By default, rsyncable computations are turned on, and the default behavior is to use the new rolling checksum algorithm. The window and reset block sizes are set to 30 bytes and 4096 bytes respectively. I've found that this gets much better rsync performance when used with the Z_RSYNCABLE_RSSUM checksum algorithm....
2009 Jul 06
3
How to make big MySQL database more diffable/rsyncable? (aka rsyncing big files)
Hello group, I'm having a very hard time rsyncing efficiently a MySQL database which contains very large binary blobs. (Actually, it's the database of Mantis bug tracker [http://www.mantisbt.org/], with file attachments stored directly in the table rows. I know it's a bad idea for many other reasons, but let's say it was given to me as such.) First, I was dumping the
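One way to make such a dump more rsync-friendly, sketched here with hypothetical database, directory, and host names (not the thread's own solution), is to dump per table so that unchanged tables produce unchanged files:

  # Dump each table to its own uncompressed file
  mkdir -p dump
  for t in $(mysql -N -e 'SHOW TABLES' bugtracker); do
      mysqldump bugtracker "$t" > "dump/$t.sql"
  done

  # Tables that did not change then rsync almost for free
  rsync -av --stats dump/ backuphost:/backups/bugtracker/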
2009 Jan 19
5
file compression on target side
Hello All, I have been using rsync to back up several filesystems by using Mike Rubel's hard link method (http://www.mikerubel.org/computers/rsync_snapshots/). The problem is, I am backing up a lot of ASCII .log, .csv, and .txt files. These files are large and can range anywhere from 1GB to 30GB. I was wondering if, on the target side (the backup side), I can use some sort of compression. I
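For reference, a minimal sketch of the hard-link snapshot rotation the poster refers to, using rsync's --link-dest with hypothetical paths and only three snapshots kept for brevity:

  # Rotate snapshots, oldest first
  rm -rf /backups/daily.2
  [ -d /backups/daily.1 ] && mv /backups/daily.1 /backups/daily.2
  [ -d /backups/daily.0 ] && mv /backups/daily.0 /backups/daily.1

  # New snapshot hard-links unchanged files against the previous one
  rsync -av --delete \
        --link-dest=/backups/daily.1 \
        /data/logs/ /backups/daily.0/

Note that compressing files in place on the target afterwards replaces them with new inodes, so the hard-link sharing between snapshots is lost for those files.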
2005 Feb 04
2
rsync huge tar files
Hi folks, Are there any tricks known to let rsync operate on huge tar files? I've got a local tar file (e.g. 2GByte uncompressed) that is rebuilt each night (with just some tiny changes, of course), and I would like to update the remote copies of this file without extracting the tar files into temporary directories. Any ideas? Regards Harri
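A minimal sketch of transferring such a file directly, with hypothetical paths; since the tar is uncompressed, rsync's delta algorithm can usually match most of the unchanged content without any extraction:

  # Rebuild the archive, then let rsync send only the changed blocks
  tar -cf /backups/nightly.tar /data/project
  rsync -av --partial --stats /backups/nightly.tar mirrorhost:/backups/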
2019 Jun 19
2
libvirtd does not update VM .xml configuration on filesystem after virsh blockcommit
...server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1.qcow2-BACKUPING_NOW" /usr/bin/ssh server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2.qcow2-BACKUPING_NOW" /usr/bin/pigz --best --rsyncable somedomain.com.ncloud.qcow2 /usr/bin/pigz --best --rsyncable somedomain.com.ncloud-swap.qcow2 /usr/bin/pigz --best --rsyncable somedomain.com.ncloud-data1.qcow2 /usr/bin/pigz --best --rsyncable somedomain.com.ncloud-data2.qcow2 There is no error on script (commands) execution: [root@server1 ~]#...
2019 Jun 19
0
libvirtd does not update VM .xml configurations on filesystem after virsh snapshot/blockcommit
...server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1.qcow2-BACKUPING_NOW" /usr/bin/ssh server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2.qcow2-BACKUPING_NOW" /usr/bin/pigz --best --rsyncable somedomain.com.ncloud.qcow2 /usr/bin/pigz --best --rsyncable somedomain.com.ncloud-swap.qcow2 /usr/bin/pigz --best --rsyncable somedomain.com.ncloud-data1.qcow2 /usr/bin/pigz --best --rsyncable somedomain.com.ncloud-data2.qcow2 There is no error on script (commands) execution: [root@server1 ~]#...
2016 May 15
1
--inplace option seems sending whole file
Hi, I'm having issues sending a lot of tar.gz backup files to a ZFS remote filesystem server. These files are compressed with the --rsyncable option. Without the --inplace option rsync works well and sends only the differences, but because it creates a temporary file and rewrites the destination file, the ZFS snapshots contain the full size of the backup, not only the changed blocks. I've tried with the --inplace option but it seems not to be working...
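One combination worth testing, sketched with hypothetical file, host, and pool names; rsync defaults to whole-file transfers only for local copies, so --no-whole-file mainly matters if the ZFS pool is mounted locally on the sending side:

  # Update the existing destination file in place so ZFS snapshots
  # only record the rewritten blocks
  rsync -av --inplace --no-whole-file --stats \
        backup-2016-05-15.tar.gz zfsserver:/tank/backups/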
2008 Mar 03
1
PST Rsync Issues
...and purposes the entire file is transferred. There are other files on this system that rsync more typically, so I'm fairly certain this isn't a switch or command error on the calling end. I have also verified that the PST is closed at the time of backup. I'm curious if PSTs are just un-rsyncable due to their makeup or if there is something I can do to pare down this transfer and subsequent storage. Any tips or ideas are welcome. Thanks, Jon
2003 Mar 05
2
compressed archives
Suppose I have a particular version of a largish compressed archive, most likely a .tgz or .tbz2, and that a remote machine has a newer, and only slightly different, version of the same archive, where most of the content hasn't actually changed much. I might attempt to obtain a copy of the newer archive by first copying my local older copy to the newer name as a file to update from. My
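A minimal sketch of that seeding step with hypothetical file names; rsync then uses the old data already present locally as the basis for the delta transfer:

  # Seed the destination name with the previous version, then update it from the remote
  cp -p archive-1.0.tbz2 archive-1.1.tbz2
  rsync -av --stats remotehost:/releases/archive-1.1.tbz2 .

How much this actually saves depends on how the archive was compressed; without something like gzip's --rsyncable, a small content change tends to alter most of the compressed stream.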
2003 Aug 28
1
GZIP, ZIP, ISO, RPM files and rsync, tar, cpio
I noticed with rsync and compressed files or package files the transfer efficiency drops considerably. E.g., rsyncing an ISO image of a distribution will give you between 30% and 60% of the original transfer, although from Beta1 to Beta2 the change could not have been that great. The same thing happens with ZIP files, for obvious reasons. My question, or feature request if you want to call it that, is: Is it
2003 Jul 28
2
compression and encryption
...'t know much about encryption, but I suppose that there are some ciphers that are reasonably strong and don't have the same problem as gzip (that a single changed byte in the middle of the file affects the contents of the rest of the file)? Even if not, the same thing as with gzip (--rsyncable) could probably be done. The goal is to do the encryption on one side so that the storage provider doesn't ever see the unencrypted content or the key. As for compression, I was thinking of making rsync temporarily uncompress the file, then update it, and then recompress it. This way, the...
2002 Dec 23
1
deflate on token returned 0 (16384 bytes left)
Hello All, I have searched for this error and found similar errors, but everything seems to indicate that this should be fixed in the 2.5.5 version. rsync version 2.5.5 protocol version 26 I am running rsync with -axz, trying to sync up a large gz file: Source machine: -rw-r--r-- 1 root other 175214792 Dec 22 00:17
2016 Dec 07
3
rsyncing from a compressed tarball.
...>> like a compressed tarball. >> >> Is this possible with the --files-from argument (or some other such >> argument)? >> > > I'm curious if rsyncing the entire compressed tarball might work for you, if > you compress the tarball with the gzip option of --rsyncable and then > rsync with --inplace ? > > Mike
2005 Feb 13
2
Rsync friendly zlib/gzip compression - revisited
...m).

Results
=====

I won't spend a lot of time looking at the effect of rsync block size on the speed up - that has been studied pretty well already, but just so we have the data points, here are the results assuming a fixed compression window size of 4000 (which is pretty close to Rusty's rsyncable patch implementation):

  [RS Block Size]  [Speed Up]
  500              194.49103
  1000             179.82353
  1500             164.90042
  2000             169.59175
  2500             154.23415

For the rest of this analysis, I'm going to set the rsync block size to 1000 and leave it there. If we look at RSync performance as a function of window size, we find...
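For anyone wanting to reproduce this kind of measurement, a rough sketch using rsync's --block-size and --stats, with a hypothetical test file and host; the comparison is only meaningful once a previous copy already exists on the remote side:

  # Try a few fixed block sizes and compare how much literal vs. matched
  # data each one sends
  for bs in 1024 2048 4096; do
      rsync -av --block-size=$bs --stats testfile.gz remotehost:/tmp/ \
        | grep -E 'Literal data|Matched data'
  done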
2001 Dec 20
3
rsync *Still* Copying All Files?
...er. Here's the command line I'm issuing: C:\temp>rsync --verbose --progress --stats --compress --recursive --times 10.8.1.57::nmap /cygdrive/c/temp/nmap I'm using a source distribution of nmap just to test with. When I started noticing this error, I placed a 37MB file inside the rsyncable nmap area to emphasize the amount of time spent syncing. The progress switch allows me to watch a realtime progress counter as the files are processed. It takes several minutes for that 37MB file to come over, and the stats at the end hold up the theory that ~40MB of data has been transferred. W...
2023 Mar 04
3
Trying to diagnose incomplete file transfer
...he file. I am frequently encountering times where the file appears to have been transferred but is incomplete. (Example: foo.tgz.ab now exists on the local system, has been removed from the remote, but is incomplete.) Additional notes: I do not know whether the 'gzip' '--rsyncable' option is being used (but I do not think so--I suspect the file is created using a command similar to 'tar czf foo.tgz ...'). The rsync commands may be launched from command-line or cron, but use the same format and options in either case. As a result, there may be multiple rsync...
2009 Nov 21
8
DO NOT REPLY [Bug 6916] New: Avoid bundling a modified zlib
https://bugzilla.samba.org/show_bug.cgi?id=6916

  Summary:    Avoid bundling a modified zlib
  Product:    rsync
  Version:    3.1.0
  Platform:   All
  OS/Version: All
  Status:     NEW
  Severity:   enhancement
  Priority:   P3
  Component:  core
  AssignedTo: wayned at samba.org
  ReportedBy: matt at mattmccutchen.net
2003 Jun 22
1
rsync backup performance question
Dear all, I am implementing a backup system, where thousands of PostgreSQL databases (max 1 GB in size) on as many clients need to be backed up nightly across ISDN lines. Because of the limited bandwidth, rsync is the prime candidate of course. Potential problems I see are server load (I/O and CPU) and filesystem limits. Does anyone have experience with such setups? Ron Arts -- NeoNova BV
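A rough sketch of the kind of nightly job this implies, with hypothetical database, path, and host names; dumping to plain, uncompressed SQL keeps consecutive dumps delta-friendly, and -z compresses the data on the wire instead:

  # Dump to plain SQL so consecutive dumps differ only where the data changed
  pg_dump --format=plain clientdb > /backups/clientdb.sql

  # Compress on the wire only; the file itself stays uncompressed and rsync-friendly
  rsync -avz --stats /backups/clientdb.sql backupserver:/backups/clients/clientdb.sql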