Displaying 20 results from an estimated 20000 matches similar to: "[Bug 10351] New: link-by-hash no-copy initialization"
2013 Dec 30
1
[Bug 10354] New: link-by-hash-autodir - use an automatically determined directory to collect the hash-hardlinks
https://bugzilla.samba.org/show_bug.cgi?id=10354
Summary: link-by-hash-autodir - use an automatically determined
directory to collect the hash-hardlinks
Product: rsync
Version: 3.1.1
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
2013 Dec 30
1
[Bug 10353] New: link-by-hash collision detection
https://bugzilla.samba.org/show_bug.cgi?id=10353
Summary: link-by-hash collision detection
Product: rsync
Version: 3.1.1
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: jimklimov at gmail.com
2013 Dec 30
1
[Bug 10352] New: link-by-hash hardlink-collection maintenance mode
https://bugzilla.samba.org/show_bug.cgi?id=10352
Summary: link-by-hash hardlink-collection maintenance mode
Product: rsync
Version: 3.1.1
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: jimklimov at
2005 Jan 19
1
PROPOSAL: --link-hash-dest, additional linking of files to their HASH values
I'm using a few utilities to accomplish the same thing in a second pass
after rsync runs. The utils all use a two-layer hash (256 directories of
256 subdirectories), which with our current backups puts a little over 100
files per directory. Anywhere from hundreds of thousands to tens of
millions of files shouldn't waste too many inodes or put a gross number of
files into each directory.
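As a minimal sketch of such a second pass (the farm path, the use of md5sum, and the variable names are assumptions, not part of the proposal), the two-layer bucket for a file can be derived from the first two hex pairs of its hash:
  # hypothetical post-rsync pass: hardlink FILE into a 256x256 hash farm
  FARM=/backups/.hashfarm
  hash=$(md5sum "$FILE" | cut -d' ' -f1)
  dir="$FARM/${hash:0:2}/${hash:2:2}"   # first two hex pairs pick the two directory layers
  mkdir -p "$dir"
  [ -e "$dir/$hash" ] || ln "$FILE" "$dir/$hash"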
2014 May 30
1
attachment sis + EMLINK (too many links) = segfault bug (2.2.12)
Hi,
We use attachment dedup with lots of emails (we are still migrating to it
from maildir).
We use NetApp storage with the WAFL filesystem over NFS.
The problem is that NetApp has a hard limit of 100k hardlinks to one file,
and we hit it.
When that happens, Dovecot starts to segfault (lmtp, dsync, pop3, etc.) as
soon as it tries to deliver new emails with that attachment.
Here is an strace of dsync:
6740
2011 Sep 29
1
[Bug 8502] New: problem making a snapshot with --link-dest without copying the files
https://bugzilla.samba.org/show_bug.cgi?id=8502
Summary: problem making a snapshot with --link-dest without
copying the files
Product: rsync
Version: 3.0.9
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
2013 Mar 28
1
[Bug 9749] New: hardlinked files are copied instead of making a link
https://bugzilla.samba.org/show_bug.cgi?id=9749
Summary: hardlinked files are copied instead of making a link
Product: rsync
Version: 3.0.9
Platform: x86
OS/Version: Linux
Status: NEW
Severity: major
Priority: P3
Component: core
AssignedTo: wayned at samba.org
ReportedBy: dieter.ferdinand at
2007 Jan 23
1
--link-dest copying modified files
Hi!
It's me again with another --link-dest issue:
I am using dirvish (www.dirvish.org) to create daily backup images on disk.
dirvish is using rsync with --link-dest pointing to the last good image.
This creates images with hardlinks to unmodified files. So far so good.
Now I want to create a "current" filetree with hardlinks pointing to the
last image.
rsync -vaH --delete
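The command is truncated above; a hedged sketch of the kind of local invocation meant, with --link-dest pointed at the last image so that unchanged files become hardlinks (paths and dates are made up):
  rsync -vaH --delete --link-dest=/backups/2007-01-22/ \
        /backups/2007-01-22/ /backups/current/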
2009 May 12
1
--copy-links and --hard-links
Hi,
I want to use rsync in a perhaps unusual way:
I have a source tree containing lots of symbolic links and I use
the option "--copy-links" to get the physical files (the referents of the symlinks)
on the target host.
As the host uses the synchronized files in a read-only fashion, I also want to get
hardlinks for all identical files, to save space. Thus I also use
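The excerpt breaks off here. Note that -H/--hard-links only preserves hardlinks that already exist on the sender; files that are merely identical in content still need a separate dedup pass on the receiver. A sketch under those assumptions (the host, paths, and the availability of hardlink(1) are all assumptions):
  # dereference symlinks during the transfer...
  rsync -a --copy-links /src/tree/ host:/dst/tree/
  # ...then hardlink identical files on the target in a second pass
  ssh host 'hardlink /dst/tree'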
2010 Mar 02
3
BackupPC, per-dir hard link limit, Debian packaging
I realise that the hard link limit is in the queue to be fixed, and I read
the recent thread as well as the older (October, I think) thread.
I just wanted to note that BackupPC *does* in fact run into the hard
link limit, and it's due to the dpkg configuration scripts.
BackupPC hard links files with the same content together by scanning new
files and linking them together, whether or not they started
2004 Feb 16
1
[patch] Add `--link-by-hash' option (rev 2).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 2)
* This revision is actually against CVS HEAD (I didn't realize I was working
from a stale rsync'd CVS).
* Apply permissions after
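For context, a usage sketch of the option as described in the patch (the hash directory and the source/destination paths are made up):
  rsync -a --link-by-hash=/backups/.hashes /src/ /backups/2004-02-16/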
2017 Apr 05
0
[Bug 12732] New: hard links can cause rsync to block or to silently skip files
just subscribed for rsync-qa from bugzilla via rsync wrote:
> Hard link handling seems to be broken when using "rsync -aH --compare-dest". I
> found two possible scenarios:
>
> 1) rsync completes without error message and exit code 0, although some files
> are missing from the backup
> 2) rsync blocks and must be interrupted/killed
> ....
>
> Further
2012 Apr 15
0
Bug#666024: rsync --link-dest can incorrectly hardlink together destination files
[please Cc: 666024-forwarded at bugs.debian.org on any replies, thanks]
Please see the attached Debian bug report, which includes a helpful
bug-reproducing shell script; I've confirmed this still happens with
3.0.9.
Paul Slootman
On Tue 27 Mar 2012, Ian Jackson wrote:
> Package: rsync
> Version: 3.0.7-2
>
> With rsync --link-dest, if two different source files (not hardlinked
2004 Feb 09
1
[patch] Add `--link-by-hash' option.
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
Anyone have an example of an MD4 collision so I can test that case? :)
Patch Summary:
-1 +1 Makefile.in
-0 +304 hashlink.c (new)
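On the collision question: a hedged sketch of the guard one would want before reusing an existing farm entry, comparing bytes rather than trusting the hash alone (the variable names are hypothetical):
  if [ -e "$farm_entry" ] && cmp -s "$farm_entry" "$new_file"; then
      ln -f "$farm_entry" "$dest"   # identical content: reuse via hardlink
  else
      cp "$new_file" "$dest"        # hash collision or no entry yet: keep a real copy
  fi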
2007 Jan 22
1
bug with --link-dest ?
Hi!
I am running rsync 2.6.4 (on Debian sarge) and I am experiencing a
strange behaviour.
I am trying to create an identical filetree on the same filesystem, with
the individual files being hardlinks to the source, like this:
rsync -vaH --progress --delete --stats --numeric-ids -x --link-dest=/path/to/filetree/ /path/to/filetree/ /path/to/current/
/path/to is one filesystem. This creates hardlinks
2004 Feb 23
0
[patch] Add `--link-by-hash' option (rev 4).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 4)
* Updated for committed robust_rename() patch, other changes in CVS.
(rev 3)
* Don't link empty files.
* Roll over to new file when
2004 Feb 17
0
[patch] Add `--link-by-hash' option (rev 3).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 3)
* Don't link empty files.
* Roll over to new file when filesystem maximum link count is reached.
* If link fails for another reason, leave
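The rollover item above amounts to trying hash, hash.1, hash.2, ... until a link succeeds; a rough shell rendering of the idea (the real patch does this in C inside rsync, and every name here is illustrative):
  i=0; entry="$dir/$hash"
  while [ -e "$entry" ] && ! ln "$entry" "$dest" 2>/dev/null; do
      i=$((i + 1))              # existing entry is at the link limit: try the next suffix
      entry="$dir/$hash.$i"
  done
  if [ ! -e "$entry" ]; then
      cp "$srcfile" "$dest"     # no linkable entry: keep a real copy...
      ln "$dest" "$entry"       # ...and register it as a fresh farm entry
  fi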
2004 Feb 23
0
[patch] Add `--link-by-hash' option (rev 5).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 5)
* Fixed silly logic error.
(rev 4)
* Updated for committed robust_rename() patch, other changes in CVS.
(rev 3)
* Don't link empty
2010 Apr 18
2
rsync with --link-dest=DIR
Hi,
I use rsync version 2.6.9 (protocol version 29) on Mac OS X 10.6.3 and experienced the following problem.
When using --link-dest=DIR with DIR on the startup volume, everything works fine.
If DIR is a volume on an external drive, many files are copied instead of being hard linked.
Hardlinks are created for .jpg and .rtf files, for example, but not for .doc files and many other files.
The
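One hedged way to see why particular files are copied rather than linked is a dry run with itemized output; the change flags show which attribute (size, time, permissions, ownership) differs from the --link-dest copy and prevents linking (the paths below are made up):
  rsync -n -i -aH --link-dest=/Volumes/Backup/2010-04-17/ \
        ~/Documents/ /Volumes/Backup/2010-04-18/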
2014 May 27
1
[Bug 10637] New: rsync --link-dest should break hard links when encountering "Too many links"
https://bugzilla.samba.org/show_bug.cgi?id=10637
Summary: rsync --link-dest should break hard links when
encountering "Too many links"
Product: rsync
Version: 3.0.9
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Priority: P5
Component: core
AssignedTo: