similar to: Can I let rsync only transfer part of a file within specific byte ranges?

Displaying 20 results from an estimated 5000 matches similar to: "Can I let rsync only transfer part of a file within specific byte ranges?"

2015 Jul 17
3
[Bug 3099] Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099 --- Comment #8 from Chip Schweiss <chip at innovates.com> --- I would argue that optionally all directory scanning should be made parallel. Modern file systems perform best when request queues are kept full. The current mode of rsync scanning directories does nothing to take advantage of this. I currently use scripts to split a couple
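The kind of splitting the commenter describes is usually scripted along these lines; a rough sketch only, with made-up paths, running one rsync per top-level subdirectory, several at a time:

    ls /source | xargs -n1 -P4 -I{} rsync -a /source/{}/ /backup/source/{}/

This keeps several scan/request queues busy at once, at the cost of losing a single consistent file list for the whole transfer.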
2015 Jul 01
5
cut-off time for rsync ?
> If your goal is to reduce storage, and scanning inodes doesn't matter, > use --link-dest for targets. However, that'll keep a backup for every > time that you run it, by link-desting yesterday's copy. The goal was not to reduce storage, it was to reduce work. A full rsync takes more than the whole night, and the destination server is almost unusable for anything else when it
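For readers unfamiliar with the option being quoted, the --link-dest pattern looks roughly like this (dates and paths are made up); unchanged files are hard-linked against the previous snapshot rather than recopied:

    rsync -a --link-dest=/backups/2015-07-01 /home/ /backups/2015-07-02/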
2017 Mar 03
2
How do you exclude a directory that is a symlink?
Considering you can't INCLUDE a directory that is a symlink... which would be really handy right now for me to resolve a mapping of 103 -> meaningful_name for backups. Instead I'm resorting to temporary bind mounts of 103 onto meaningful_name, and when the bind mount isn't there, the --del is emptying meaningful_name accidentally at times. I think both situations could benefit from a
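One detail that often bites here, offered as a hedged aside: a filter pattern with a trailing slash only matches real directories, so excluding the symlink itself needs the bare name. Something like:

    rsync -a --exclude='/103' /src/ /dest/    # matches the symlink; '/103/' would not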
2015 Jul 02
1
cut-off time for rsync ?
> What is taking time, scanning inodes on the destination, or recopying the entire > backup because of either source read speed, target write speed or a slow interconnect > between them? It takes hours to traverse all these directories with loads of small files on the backup server. That is the limiting factor. Not even copying: just checking the timestamp and size of the old copies.
2015 Jul 16
1
Fwd: rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 17:38:35 -0400, Selva Nair wrote: > As with any dedup solution, performance does take a hit and it's often > not worth it unless you have a lot of duplication in the data. This is so only in some volumes in our case, but it appears that zfs permits this to be enabled/disabled on a per-volume basis. That would work for us. Is there a way to save cycles by offering zfs
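ZFS does indeed expose dedup as a per-dataset property; as a small illustration (pool and dataset names are invented):

    zfs set dedup=on tank/backups    # enable only where the data actually has duplication
    zfs get dedup tank/backups       # verify the current setting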
2015 Jun 15
1
rsync very slow with large include/exclude file list
I investigated the rsync code and found the reason why. For every file in the source, it searches the entire filter-list looking to see if that filename is on the exclude/include list. Most aren't, so it compares (350K - 72K) * 72K names (the non-listed files) plus (72K * 72K/2) names (the ones that are listed), for a total of about 22,608,000,000 strcmp's. That's 22 BILLION
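Spelling out the arithmetic behind that figure:

    (350,000 - 72,000) * 72,000  = 20,016,000,000 comparisons for names not on the list
    72,000 * 72,000 / 2          =  2,592,000,000 comparisons for names that are
    total                        ~ 22,608,000,000 strcmp calls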
2015 Jul 02
8
[Bug 11378] New: Please add a '--line-buffered' option to rsync to make logging/output more friendly with pipes/syslog/CI systems/etc.
https://bugzilla.samba.org/show_bug.cgi?id=11378 Bug ID: 11378 Summary: Please add a '--line-buffered' option to rsync to make logging/output more friendly with pipes/syslog/CI systems/etc. Product: rsync Version: 3.1.1 Hardware: All OS: All Status: NEW
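Until such an option exists, a workaround sometimes suggested is to force line buffering from the outside with coreutils stdbuf; a sketch only, and it may not help where rsync does its own output buffering:

    stdbuf -oL rsync -av /src/ /dst/ | logger -t rsync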
2015 Apr 16
3
rsync --delete
Hi. I want to have rsync delete a folder with a large number of files and folders. Tried this:
rsync -a --no-D --delete /dev/null /home/rc-41/data/000000000000061/2015-04-01-07-04/
skipping non-regular file "null"
rsync -a --no-D --delete /dev/zero /home/rc-41/data/000000000000061/2015-04-01-07-04/
skipping non-regular file "zero"
That's how it turns out. rsync -a
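The usual shape of this trick is to sync an empty directory over the target; /dev/null and /dev/zero fail because they are not directories. A sketch:

    mkdir -p /tmp/empty
    rsync -a --delete /tmp/empty/ /home/rc-41/data/000000000000061/2015-04-01-07-04/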
2015 Apr 16
2
Recycling directories and backup performance. Was: Re: rsync --link-dest won't link even if existing file is out of date (fwd)
rsync folks, Henri Shustak <henri.shustak at gmail.com> wrote: > LBackup always starts a new backup snapshot with an empty directory. I > have been looking at extending --link-dest options to scan beyond just > the previous successful backup to (failed backups / older backups). > However, there are all kinds of edge cases which are worth considering > with such a change. At
2015 Apr 15
1
rsync --link-dest won't link even if existing file is out of date
On 04/14/2015 11:35 PM, Henri Shustak wrote: >> I'll take a look but I imagine I can't back up the 80 million files >> I need to in under the 5 hours I have for nightly >> maintenance/backups. Currently it's possible by recycling >> directories... I would expect that recycling directories actually makes this worse. With an
2015 Jul 13
6
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 15:40:51 +0100, Simon Hobson wrote: > The thing here is that you are into "backup" tools rather than the > general purpose tool that rsync is intended to be. Yes, that is true. Rsync serves so well as a core component of backup that I can be blind to "something other than rsync". I'll look at the tools you suggest. However, you've made me
2015 Jul 13
3
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 02:19:23 +0000, Andrew Gideon wrote: > Look at tools like inotifywait, auditd, or kfsmd to see what's easily > available to you and what best fits your needs. > > [Though I'd also be surprised if nobody has fed audit information into > rsync before; your need doesn't seem all that unusual given ever-growing > disk storage.] I wanted to take this
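As a rough sketch of the idea being floated (paths, events, and the cleanup step are all assumptions, not from the thread), a notify watcher can accumulate a change list that later drives --files-from:

    inotifywait -m -r -e close_write,create,delete --format '%w%f' /data >> /tmp/changed.raw &
    # at backup time, turn the raw log into a relative file list and feed it to rsync
    sed 's|^/data/||' /tmp/changed.raw | sort -u > /tmp/changed.list
    rsync -a --files-from=/tmp/changed.list /data/ backupserver:/data/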
2015 Jun 30
4
cut-off time for rsync ?
Hi, I used to rsync a /home with thousands of home directories every night, although only a hundred or so would be used on a typical day, and many of them have not been used for ages. This became too large a burden on the poor old destination server, so I switched to a script that uses "find -ctime -7" on the source to select recently used homes first, and then rsyncs only those. (A
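One way to realize what the poster describes, as a sketch with invented paths: select recently changed home directories with find, then hand only those to rsync:

    find /home -mindepth 1 -maxdepth 1 -type d -ctime -7 -printf '%f\n' > /tmp/recent-homes
    rsync -a --files-from=/tmp/recent-homes /home/ backupserver:/backup/home/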
2015 Apr 10
3
Finding specific files/directories from a remote rsync server.
Hi all, Suppose I have a remote rsync server, named rsync.example.net, and I want to find some specific files/directories on it. To do this, I must let my local rsync client traverse all of its modules and the corresponding sub-directories. Say I want to find all of the `foo/file' on this rsync server, i.e., a file named file which is located under the foo
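A hedged sketch of one way to do such a search: list the modules first, then do a listing-only recursion with filters that keep just the wanted path (the module name below is an assumption):

    rsync rsync.example.net::                  # list the available modules
    rsync -r --list-only --include='*/' --include='foo/file' --exclude='*' \
          rsync.example.net::MODULE/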
2015 Apr 18
2
Is it possible to suppress the site-specific messages?
Hi all, When connecting to a remote rsync server, it will often give some site-specific messages, such as the following one: [banner of the University of Science and Technology of China Open Source Mirror (mirrors.ustc.edu.cn)]
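For what it's worth, rsync has a --no-motd option that suppresses the daemon's message-of-the-day banner on the client side; a sketch:

    rsync --no-motd mirrors.ustc.edu.cn::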
2015 Apr 15
2
How to capture the stderr of rsync and redirect it into a file?
Hi all, See the following commands: werner at debian:~$ rsync -c ftp.cn.debian.org::debian/ 2 >aaa rsync: The server is configured to refuse --checksum (-c) rsync error: requested action not supported (code 4) at clientserver.c(849) [sender=3.0.9] rsync: read error: Connection reset by peer (104) rsync error: error in socket IO (code 10) at io.c(785) [Receiver=3.1.2dev] Why rsync can't
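The space is the culprit: the shell parses "2 >aaa" as an extra argument "2" plus a redirection of stdout, so stderr still goes to the terminal. Written correctly (a sketch):

    rsync -c ftp.cn.debian.org::debian/ 2>aaa         # stderr only
    rsync -c ftp.cn.debian.org::debian/ >aaa 2>&1     # stdout and stderr together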
2015 Apr 18
2
On the case `an identical item replaces the dots with spaces' for `--itemize-changes'.
Hi all, The `--itemize-changes' option has the following notes in the manual: The other letters in the string above are the actual letters that will be output if the associated attribute for the item is being updated or a "." for no change. Three exceptions to this are: (1) a newly created item replaces each letter with a "+",
2015 Apr 14
3
The complicated filter rule I used worked for one Debian mirror but not for the other.
Hi all, I wrote a complex filter rule as follows:
rsync -amvHPRSB131072 -n --delete --delete-excluded \
  -f +_dists/jessie/**binary-all/Packages.gz \
  -f +_dists/jessie/Release* \
  -f +_dists/jessie/**binary-amd64/Packages.gz \
  -f +_dists/jessie/**installer-amd64/*** \
  -f +_dists/jessie/**binary-i386/Packages.gz \
  -f +_dists/jessie/**installer-i386/*** \
  -f +_dists/***/ \
  -f -_*
2015 Apr 06
2
Downloading specific files with rsync and make them keeping the original directories structures.
Hi all, See the following command: $ rsync -av ftp.cn.debian.org::debian/dists/Debian7.8/Release . which will download the file Release into the directory from which the rsync command is issued. If I want to keep the original directory structure, say, for this case, put the Release in the following location: ./dists/Debian7.8/Release If the directory tree doesn't exist, let
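A hedged sketch of one way to get that layout: with --relative (-R), the path given after the module name is recreated under the destination:

    rsync -avR ftp.cn.debian.org::debian/dists/Debian7.8/Release .
    # expected result: ./dists/Debian7.8/Release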
2015 Apr 15
2
Does the --delete conflict with --files-from?
Hi all, I've tried to use the --files-from and --delete options together, but I finally found that in this case the `--delete' won't delete the extraneous files from the dest dirs. So, does --delete conflict with --files-from? Any hints on this issue? Regards -- .: Hongyi Zhao [ hongyi.zhao AT gmail.com ] Free as in Freedom :.
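They don't conflict outright, but --delete only prunes inside directories rsync actually recurses into, and --files-from does not imply recursion even with -a. A sketch of a combination that does delete, assuming the list names directories (all names invented):

    printf 'projects/\nphotos/\n' > /tmp/dirs.list
    rsync -a -r --delete --files-from=/tmp/dirs.list /src/ /dest/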