similar to: Bug#666024: rsync --link-dest can incorrectly hardlink together destination files

Displaying 20 results from an estimated 5000 matches similar to: "Bug#666024: rsync --link-dest can incorrectly hardlink together destination files"

2006 Apr 17
11
DO NOT REPLY [Bug 3693] New: rsync can use same --link-dest file several times, leading to incorrect hard links
https://bugzilla.samba.org/show_bug.cgi?id=3693 Summary: rsync can use same --link-dest file several times, leading to incorrect hard links Product: rsync Version: 2.6.8 Platform: x86 OS/Version: Linux Status: NEW Severity: normal Priority: P3 Component: core AssignedTo:
2007 Jan 23
1
--link-dest copying modified files
Hi! It's me again with another --link-dest issue: I am using dirvish (www.dirvish.org) to create daily on-disk backup images. dirvish invokes rsync with --link-dest pointing to the last good image, which creates images with hardlinks to unmodified files. So far so good. Now I want to create a "current" filetree with hardlinks pointing to the last image. rsync -vaH --delete
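
A minimal sketch of the setup described above, assuming hypothetical paths and image names (not taken from the original post):

    # nightly image, hardlinked against the previous good image via --link-dest
    rsync -aH --delete --link-dest=/backups/2007-01-22 /data/ /backups/2007-01-23/
    # separately maintain a "current" tree whose unchanged files hardlink to the latest image
    rsync -vaH --delete --link-dest=/backups/2007-01-23 /backups/2007-01-23/ /backups/current/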
2007 Apr 26
1
rsync mirroring and hardlink issues
I'm running a mirror of several repositories that are fetched using separate rsync runs. Since some of those repositories host related files, I'm using the hardlink utility[1] to save disk space. However, I've noticed an issue that can lead to file metadata inconsistencies when using hardlink. Consider the following scenario: - two repositories (rep_a and
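
A rough sketch of the workflow described above; the remote modules and local paths are made up for illustration, and hardlink here is the deduplication tool referenced as [1]:

    # separate rsync runs per repository (hypothetical remotes and paths)
    rsync -a rsync://mirror.example.org/rep_a/ /srv/mirror/rep_a/
    rsync -a rsync://mirror.example.org/rep_b/ /srv/mirror/rep_b/
    # replace identical files across both trees with hardlinks to save space
    hardlink /srv/mirror/rep_a /srv/mirror/rep_b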
2013 Dec 19
5
[Bug 10334] New: rsync doesn't log hardlink-copies using --link-dest
https://bugzilla.samba.org/show_bug.cgi?id=10334 Summary: rsync doesn't log hardlink-copies using --link-dest Product: rsync Version: 3.0.9 Platform: All OS/Version: Linux Status: NEW Severity: normal Priority: P5 Component: core AssignedTo: wayned at samba.org ReportedBy: Kontakt at
2003 Dec 17
2
TODO hardlink performance optimizations
On Mon, 15 Dec 2003, jw schultz <jw@pegasys.ws> wrote: > OK, first pass on TODO complete. .... > PERFORMANCE ---------------------------------------------------------- .... > Traverse just one directory at a time > > Traverse just one directory at a time. Tridge says it's possible. > > At the moment rsync reads the whole file list into memory at the >
2007 Jul 16
5
DO NOT REPLY [Bug 4793] New: link-dest hardlink does not always work well with -o -g -p
https://bugzilla.samba.org/show_bug.cgi?id=4793 Summary: link-dest hardlink does not always work well with -o -g -p Product: rsync Version: 2.6.9 Platform: PPC OS/Version: Linux Status: NEW Severity: enhancement Priority: P3 Component: core AssignedTo: wayned@samba.org
2004 Oct 18
1
check "--link-dest" directory first
I use rsync to back up several servers to hard disk on a backup server. To save space I use the --link-dest option, which creates a hardlink to the file from the last backup if the file hasn't changed. If a file already exists in the destination directory, then a possibly existing file in the --link-dest directory is ignored. The existing file will be overwritten with the new version regardless of whether
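
A small reproduction sketch of the situation described, under the assumption that the destination already holds a stale copy of the file (all paths hypothetical):

    mkdir -p src last dest
    echo same > src/file
    cp -p src/file last/file        # unchanged copy from the previous backup
    echo old > dest/file            # stale copy already present in the destination
    rsync -a --link-dest=../last src/ dest/
    # according to the report, dest/file is rewritten from src instead of being
    # hardlinked to the identical last/file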
2012 Aug 10
1
Serious issue: rsync and hardlinks are dangerous...
Rsync 3.0.9 here. I am using an rsync script like: """ rsync -z --numeric-ids -a -H --inplace --delete --delete-excluded --stats --progress -v --itemize-changes SOURCE DESTINATION """ I detected the following issue when rsyncing a bunch of Mercurial repositories. It is very dangerous, because it will corrupt files.
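
Independent of what rsync itself does in this report, the underlying hazard with --inplace and hardlinks is easy to demonstrate: writing a file in place changes the data seen through every other link to the same inode. A minimal illustration (hypothetical file names):

    echo v1 > a
    ln a b           # a and b now share one inode
    echo v2 > a      # rewrites the shared inode in place
    cat b            # prints "v2" -- b changed as well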
2004 Dec 01
1
rsync transfers whole content when a new hardlink is created
Hi, I noticed a silly behaviour in rsync when new hardlinks to already synced files are created. Scenario: there are a local directory and an identical remote directory created by a former rsync run. Create a hardlink to an already existing file (both inside the local directory). If this hardlink has a filename which comes before the original filename when both are sorted in alphabetical order,
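
A hypothetical reproduction of the described scenario (host and directory names are made up):

    rsync -aH local/ remote:dir/      # initial sync; local/zebra exists on both sides
    ln local/zebra local/aardvark     # new link whose name sorts before the original
    rsync -aH local/ remote:dir/      # per the report, the file content is transferred
                                      # again instead of being recreated as a hardlink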
2013 Dec 02
0
hardlinking and -R (multiple source directories)
Hi, now it's time to come back to this topic. As suspected, the missing hardlinks were not an issue with rsync. I am not sure if pairing aufs (http://aufs.sourceforge.net/) with rsync -RH will catch each and every hardlink compared to a single filesystem, but it seems to work very reliably. I tried mhddfs and aufs. Aufs is faster and very stable (I am on wheezy kernel 3.2). So at last I have my
2010 Jun 11
1
hardlink unlink-before-save?
Hello list. On my server almost half of the files are duplicated, and there are almost 100GB of data, which is growing fast. I'm using a weekly script to find duplicated files and replace them with hardlinks. Everything is fine as long as users aren't trying to edit and save documents which have hardlinks. With Samba 3.3 it seems that hardlinks aren't unlinked before the file is saved.
2007 Jan 14
4
feature request, hardlink progress......
I'm copying a partition that has a bunch of hardlink-based snapshots (-aPH). I think there are about 250,000 files in each backup and between 100 and 200 snapshots. Earlier today, I saw the file transfers had completed and it was making all the hardlinks. I thought it would be "not long", but it's been making hardlinks for 12 hours (at least). There's only 36 GB in a snapshot, the
2020 Jul 17
0
[OT] What is the max hardlink number for a single file on XFS
Hi list, I have a little script that uses rsync and hardlink to perform backups. Some days ago a friend told me that rsync could crash if the hardlink limit is reached. I know (and have tested) that for ext4 the maximum number of hardlinks for a single file is 65000, but I can't find the limit for XFS. Since a Google search doesn't turn up good resources, I tried to reach its limit
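
One brute-force way to probe the per-file limit on a given filesystem is simply to create links until the kernel refuses with "Too many links" (EMLINK); a sketch, assuming a hypothetical XFS mount at /mnt/xfs:

    cd /mnt/xfs && mkdir linktest && cd linktest
    touch target
    i=0
    while ln target "link_$i" 2>/dev/null; do
        i=$((i + 1))
    done
    echo "link creation stopped after $i extra links"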
2006 Apr 17
6
DO NOT REPLY [Bug 3692] New: regression: symlinks are created as hardlinks with --link-dest
https://bugzilla.samba.org/show_bug.cgi?id=3692 Summary: regression: symlinks are created as hardlinks with --link-dest Product: rsync Version: 2.6.7 Platform: x86 URL: http://rsync.samba.org OS/Version: FreeBSD Status: NEW Severity: major Priority: P3 Component: core
2018 Oct 16
0
[Bug 13656] New: --link-dest target with symbolic links from different user produces unnecessary error
https://bugzilla.samba.org/show_bug.cgi?id=13656 Bug ID: 13656 Summary: --link-dest target with symbolic links from different user produces unnecessary error Product: rsync Version: 3.1.3 Hardware: All OS: Linux Status: NEW Severity: minor Priority: P5 Component:
2008 May 31
1
rsync 3.0.2 with --fileflags on FreeBSD: cannot rsync hardlinked immutable files
Hi *, it seems rsync with --fileflags isn't able to work on already hardlinked and immutable ("schg") files on FreeBSD. The following script creates a simple example of this behaviour: -------------------------------------------------------------- #! /bin/sh # # set -x DIR="/var/tmp/rsync_$(date +%s)/" mkdir "${DIR}/" # Preparing dir_A mkdir
2009 Sep 20
1
Hardlink patches for sftp
Dear all, I am looking for the status of the hardlink patches that were published on this list last February by Miklos Szeredi. I'd really like to have hardlink support in sftp. This would in turn enable hardlinks in sshfs and make incremental rsync backups to remote filesystems possible. Can someone tell me if these patches will be incorporated into OpenSSH? The patches can be found under
2005 Jan 19
1
PROPOSAL: --link-hash-dest, additional linking of files to their HASH values
I'm using a few utilities to accomplish the same thing in a second pass after rsync runs. The utils all use a two-layer hash (256 directories of 256 subdirectories), which with our current backups puts a little over 100 files per directory. Anywhere from hundreds of thousands to tens of millions of files shouldn't waste too many inodes or put a gross number of files into each directory.
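
A sketch of the two-layer layout described above, filing each file under the first two byte pairs of a content hash and hardlinking it into the store; the hash choice (sha1sum), store location, and file path are assumptions for illustration:

    store=/backup/.hashes
    f=/backup/latest/some/file                      # hypothetical file from the latest image
    h=$(sha1sum "$f" | cut -d' ' -f1)
    d="$store/$(echo "$h" | cut -c1-2)/$(echo "$h" | cut -c3-4)"
    mkdir -p "$d"
    [ -e "$d/$h" ] || ln "$f" "$d/$h"               # first occurrence: link the file into the store
    # later identical files can be relinked to "$d/$h" so duplicates share one inode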
2014 May 27
1
[Bug 10637] New: rsync --link-dest should break hard links when encountering "Too many links"
https://bugzilla.samba.org/show_bug.cgi?id=10637 Summary: rsync --link-dest should break hard links when encountering "Too many links" Product: rsync Version: 3.0.9 Platform: All OS/Version: All Status: NEW Severity: enhancement Priority: P5 Component: core AssignedTo:
2009 Mar 31
0
synchronizing hard linked trees
Hi, I have to synchronize a directory tree on one machine with a tree on a remote machine. The tree on the remote machine is used read-only. In addition, I clone the current target tree before each synchronization, using hardlinks, so I can switch back to any previous version if the synchronization failed or contained any bad data. This has worked since 2002 using tar and a Perl script. After I
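
A sketch of the clone-then-sync pattern described above, assuming GNU cp and hypothetical paths; because rsync's default (non --inplace) update writes a new file and renames it into place, changed files are unlinked from the clone and the old versions survive there:

    # on the remote (read-only) side: keep a link-only clone of the previous state
    cp -al /export/tree "/export/tree-$(date +%F)"
    # then push the new state from the source machine
    rsync -a --delete /srv/tree/ remotehost:/export/tree/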