similar to: How do you exclude a directory that is a symlink?

Displaying 20 results from an estimated 1200 matches similar to: "How do you exclude a directory that is a symlink?"

2015 Jul 02
1
cut-off time for rsync ?
On Wed, Jul 01, 2015 at 02:05:50PM +0100, Simon Hobson said: >As I read this, the default is to look at the file size/timestamp and if they match then do nothing as they are assumed to be identical. So unless you have specified this, then files which have already been copied should be ignored - the check should be quite low in CPU, at least compared to the "cost" of
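The default quick-check behaviour described above (compare size and modification time, skip on a match) can be illustrated with a few standard options; a minimal sketch, with src/ and dest/ as hypothetical paths:

    # default: skip files whose size and mtime already match on the receiver
    rsync -av src/ dest/

    # force full checksums instead of the size/mtime quick check (far more CPU and I/O)
    rsync -avc src/ dest/

    # skip anything that already exists on the receiver, regardless of timestamps
    rsync -av --ignore-existing src/ dest/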
2015 Jul 14
1
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
And what's performance like? I've heard that many COW systems' performance drops through the floor when there are many snapshots. /kc On Tue, Jul 14, 2015 at 08:59:25AM +0200, Paul Slootman said: >On Mon 13 Jul 2015, Andrew Gideon wrote: >> >> On the other hand, I do confess that I am sometimes miffed at the waste >> involved in a small change to a very
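A rough sketch of the combination named in the subject line, with all paths hypothetical and find standing in for the audit tool; note that the list has to name everything that should appear in the new snapshot, because --link-dest only hard-links files that are part of the transfer and unchanged relative to the reference directory:

    # file list produced cheaply by an audit/index tool (simulated with find here)
    ( cd /data && find . ) > /tmp/files.list

    # unchanged entries become hard links into yesterday's snapshot,
    # changed ones are copied fresh into the new, initially empty directory
    rsync -a --files-from=/tmp/files.list --link-dest=/backup/2015-07-13 \
          /data/ /backup/2015-07-14/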
2015 Apr 06
3
rsync --link-dest won't link even if existing file is out of date
This has been a consideration. But it pains me that a tiny change/addition to the rsync option set would save much time and space for other legit use cases. We know rsync very well; we don't know ZFS very well (licensing kept the tech out of our linux-centric operations). We've been using it but we're not experts yet. Thanks for the suggestion. /kc On Mon, Apr 06, 2015 at 12:07:05PM
2017 Mar 03
2
How do you exclude a directory that is a symlink?
The directory I'm trying to copy from is: /home/blah/dir The symlink is /home/blah/dir/unwanted_symlinked_dir On Fri, Mar 3, 2017 at 8:10 AM, Paul Slootman <paul+rsync at wurtel.net> wrote: > On Fri 03 Mar 2017, Steve Dondley wrote: > > > I'm trying to rsync a directory from a server to my local machine that > has > > a symbolic link to a directory I don't
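With those paths, one hedged guess at a working combination is to anchor the exclude at the transfer root, so the name is rejected while the file list is being built, before --copy-links dereferences it (the directory names are taken from the message; "server" and the local destination are hypothetical, and behaviour may vary with the pattern used and the rsync version):

    rsync -av --copy-links --exclude='/unwanted_symlinked_dir' \
          server:/home/blah/dir/ /local/dir/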
2015 Apr 06
6
rsync --link-dest won't link even if existing file is out of date
Feature request: allow --link-dest dir to be linked to even if file exists in target. This statement from the man page is adhered to too strongly IMHO: "This option works best when copying into an empty destination hierarchy, as rsync treats existing files as definitive (so it never looks in the link-dest dirs when a destination file already exists)". I was surprised by this behaviour
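For contrast, the usage the quoted man-page sentence has in mind looks roughly like this (dates and paths hypothetical): each run writes into a brand-new, empty directory and points --link-dest at the previous run, so "existing files in the destination" never come into play:

    # day N: empty destination, hard-link unchanged files from day N-1
    rsync -a --link-dest=/backup/2015-04-05 /data/ /backup/2015-04-06/

    # day N+1: another fresh target, with the previous day as the link reference
    rsync -a --link-dest=/backup/2015-04-06 /data/ /backup/2015-04-07/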
2018 Oct 08
2
rsync --server command line options
Hello, I ran the following commands: rsync /tmp/foo remote: rsync remote:/tmp/foo . On the remote computer, the following commands were executed: rsync --server -e.LsfxC . . rsync --server --sender -e.LsfxC . /tmp/foo Does anyone know what the three dots/periods in the above two commands mean? The first command ends with two dots (". .") and the second command has one
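If it helps to see where those server invocations come from, the command rsync runs on the remote side can usually be observed from the client; a small sketch, with "remote" as the hypothetical host:

    # ssh -v typically prints the command sent to the remote end, e.g.
    # "Sending command: rsync --server -e.LsfxC . ."
    rsync -e 'ssh -v' /tmp/foo remote: 2>&1 | grep 'Sending command'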
2015 Apr 07
0
Patch for rsync --link-dest won't link even if existing file is out of date (fwd)
Folks, We faced a similar situation to that which Ken described - we recycle backup directories, for good reason. There is a patch to solve the problem. Our systems administrator provided the following description of the patches we use: ============================================================================ 1. rsync_link_dest improvement by Bryant Hansen Normally, existing files in
2017 Mar 03
3
How do you exclude a directory that is a symlink?
A thousand greetings, I'm trying to rsync a directory from a server to my local machine that has a symbolic link to a directory I don't want to download. I have an "exclude" option to exclude the symlink which works fine. However, if I add a --copy-links option to the command, it appears to override my "exclude" directive and the contents of the symlinked directory
2018 Jan 24
1
glob exclude vs include behaviour
not a bug, but colour me confused:
/tmp/foo$ mkdir a
/tmp/foo$ touch a/foo
/tmp/foo$ touch a/.baz
/tmp/foo$ cd ..
/tmp$ rsync -avP --exclude=a/* foo bar
sending incremental file list
created directory bar
foo/
foo/a/
sent 71 bytes  received 20 bytes  182.00 bytes/sec
total size is 0  speedup is 0.00
/tmp$ ls -la bar
total 20
drwxr-xr-x  3 user user 4096 Jan 24 14:17 .
drwxrwxrwt 70 root
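A hedged illustration of the difference being observed, using the same hypothetical /tmp layout: quoting the pattern keeps the shell out of it, 'a/*' excludes the directory's contents but still creates the (empty) a/, while 'a/' excludes the directory and everything beneath it:

    # contents excluded, but the empty directory a/ is still created
    rsync -avP --exclude='a/*' foo bar

    # the directory itself, and everything under it, is excluded
    rsync -avP --exclude='a/' foo bar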
2015 Jun 15
1
rsync very slow with large include/exclude file list
I investigated the rsync code and found the reason why. For every file in the source, it searches the entire filter-list looking to see if that filename is on the exclude/include list. Most aren't, so it compares (350K - 72K) * 72K names (the non-listed files) plus (72K * 72K/2) names (the ones that are listed), for a total of about 22,608,000,000 strcmp's. That's 22 BILLION
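For reference, the arithmetic above, plus one commonly suggested workaround: when the 72K entries are literal paths rather than patterns, pass them as a transfer list instead of as filter rules, so that each name is visited directly rather than every source file being tested against every rule. A hedged sketch with hypothetical paths:

    # the comparison count quoted above
    echo $(( (350000 - 72000) * 72000 + 72000 * 72000 / 2 ))   # 22608000000

    # names in the list are relative to the source directory, one per line
    rsync -a --files-from=/tmp/wanted.list /data/ /backup/data/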
2015 Jul 17
3
[Bug 3099] Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099 --- Comment #8 from Chip Schweiss <chip at innovates.com> --- I would argue that optionally all directory scanning should be made parallel. Modern file systems perform best when request queues are kept full. The current mode of rsync scanning directories does nothing to take advantage of this. I currently use scripts to split a couple
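A rough sketch of that kind of script splitting, with hypothetical paths: run one rsync per top-level directory so that several scans proceed in parallel (whether this helps depends heavily on the storage underneath):

    # one rsync per top-level entry under /data, up to four running at once;
    # -R (--relative) reproduces the /data/<entry> path on the receiver
    printf '%s\0' /data/* | xargs -0 -P4 -I{} rsync -aR {} backuphost:/backup/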
2015 Jul 01
5
cut-off time for rsync ?
> If your goal is to reduce storage, and scanning inodes doesn't matter, > use --link-dest for targets. However, that'll keep a backup for every > time that you run it, by link-desting yesterday's copy. The goal was not to reduce storage, it was to reduce work. A full rsync takes more than the whole night, and the destination server is almost unusable for anything else when it
2015 Jul 17
0
[Bug 3099] Please parallelize filesystem scan
Sounds to me like maintaining the metadata cache is important - and tuning the filesystem to do so would be more beneficial than caching writes, especially with a backup target where a write already written will likely never be read again (and isn't a big deal if it is since so few files are changed compared to the total # of inodes to scan). Your report of the minutes for the re-sync shows the
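One knob for "tuning the filesystem" in that direction on Linux (offered purely as an assumption, since the message doesn't name a specific mechanism) is the VM cache-pressure setting, which biases reclaim toward keeping dentry/inode metadata cached:

    # show the current setting (the default is 100)
    sysctl vm.vfs_cache_pressure

    # values below 100 make the kernel more reluctant to drop metadata caches
    sysctl -w vm.vfs_cache_pressure=50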
2015 Apr 06
0
rsync --link-dest won't link even if existing file is out of date
Not to mention the fact that ZFS requires considerable hardware resources (CPU & memory) to perform well. It also requires you to learn a whole new terminology to wrap your head around it. It's certainly not a trivial swap to say the least... Thanks, -Clint On Mon, Apr 6, 2015 at 9:12 AM, Ken Chase <rsync-list-m829 at sizone.org> wrote: > This has been a consideration. But it
2015 Apr 15
1
Can I let rsync only transer a part of file within specific byte ranges?
Hi all, Suppose I have a file on the remote rsync server: rsync://path/to/myfile And I want to retrieve only a part of that file to my local host, based on a range of bytes, say 0-499, meaning only the first 500 bytes of the file are transferred. Is this possible with the rsync client? Regards -- .: Hongyi Zhao [ hongyi.zhao AT gmail.com ] Free as in Freedom :.
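As far as I know rsync itself has no option for transferring an arbitrary byte range, so this sketch swaps in a different tool entirely; it assumes the same file is also reachable over ssh (hostname and path hypothetical):

    # pull only the first 500 bytes of the remote file
    ssh remotehost 'dd if=/path/to/myfile bs=500 count=1 2>/dev/null' > myfile.part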
2015 Apr 16
0
rsync --delete
Wow, it took me a few seconds to figure out what you were trying to do. What's wrong with rm? Also I think trying to leverage the side effect of disqualifying all source files just to get the delete effect (very clever but somewhat obtuse!) risks creating a temporary file of some kind in the target at the start of the operation, and if you can't even mkdir then that exceeds disk quota immediately
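For reference, the usual shape of the trick being critiqued (one common variant, not necessarily the exact command from the thread; paths hypothetical) is to sync an empty directory over the target with --delete, so that everything on the receiving side gets removed:

    mkdir /tmp/empty
    rsync -a --delete /tmp/empty/ /path/to/target/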
2015 Jul 01
0
cut-off time for rsync ?
What is taking time, scanning inodes on the destination, or recopying the entire backup because of either source read speed, target write speed or a slow interconnect between them? Do you keep a full new backup every day, or are you just overwriting the target directory? /kc On Wed, Jul 01, 2015 at 10:06:57AM +0200, Dirk van Deun said: >> If your goal is to reduce storage, and scanning
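One way to separate the scanning cost from the copying cost is a dry run, which walks both trees and does the quick-check comparisons without transferring any data; a small sketch with hypothetical paths:

    # if this alone takes hours, the bottleneck is the scan, not the copy;
    # --stats summarises the file-list and transfer work
    time rsync -an --stats /data/ backuphost:/backup/data/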
2015 Jul 02
1
cut-off time for rsync ?
> What is taking time, scanning inodes on the destination, or recopying the entire > backup because of either source read speed, target write speed or a slow interconnect > between them? It takes hours to traverse all these directories with loads of small files on the backup server. That is the limiting factor. Not even copying: just checking the timestamp and size of the old copies.
2015 Jul 16
1
Fwd: rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 17:38:35 -0400, Selva Nair wrote: > As with any dedup solution, performance does take a hit and it's often > not worth it unless you have a lot of duplication in the data. This is so only for some volumes in our case, but it appears that zfs permits this to be enabled/disabled on a per-volume basis. That would work for us. Is there a way to save cycles by offering zfs
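The per-volume switch mentioned here is, as far as I know, an ordinary per-dataset ZFS property (pool and dataset names hypothetical):

    # check and set deduplication on just one dataset
    zfs get dedup tank/backups
    zfs set dedup=on tank/backups
    # other datasets in the same pool can stay at dedup=off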
2013 Dec 31
1
Question about rsyncing to a slightly different folder structure on target
Hi all, Ok, if this isn't possible with some kind of wildcard, I can adjust the target manually, but if I can just modify the command to allow for the different folder structure on the target, I'd rather do that. I'm incrementally rsync'ing my mailstore from the old server to the new server, doing testing along the way. The command I'm currently using is: rsync
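Without the rest of the original command it's hard to be specific, but one general tool for mapping a source tree onto a different layout on the target is --relative with an inserted "/./", which marks where the reproduced path should start; a hedged sketch with made-up paths:

    # everything left of /./ is dropped, everything right of it is recreated
    # under the destination, landing at /new/mailstore/example.com/user/...
    rsync -aR oldserver:/var/old/mail/./example.com/user /new/mailstore/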