Displaying 20 results from an estimated 3000 matches similar to: "Implementing rsync hard-link improvements"
2004 Jan 25
2
scan for first existing hard-link file
Here's a patch that makes rsync try to find an existing file in a group
of hard-linked files so that it doesn't create the first one in the
group from scratch if a later file could be used instead.
Details: I decided to avoid having the code do an extra scan down the
list when we encounter the lead file in the list. This is because it
would be bad to have to do the same scan in the
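The general idea can be sketched like this (a simplification, not the actual patch: the real code works on rsync's internal file list rather than on an array of path strings):

    /* Minimal sketch: given the names in a hard-link group, prefer any
     * name that already exists on the receiving side over creating the
     * lead file from scratch. */
    #include <unistd.h>

    /* Returns the index of the first name that exists, or -1 if none do. */
    static int find_existing_in_group(char *const names[], int count)
    {
        for (int i = 0; i < count; i++) {
            if (access(names[i], F_OK) == 0)
                return i;
        }
        return -1;
    }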
2004 Apr 03
0
--hard-link option now uses the first existing file - Excellent!
With regard to this NEWS item from 2.6.1pre-1:
* The --hard-link option now uses the first existing file in the
group of linked files as the basis for the transfer. This
prevents the sub-optimal transfer of a file's data when a new
hardlink is added on the sending side and it sorts alphabetically
earlier in the list than the files that are already present on the
receiving side.
I
2004 Jan 24
2
[PATCH] --links-depth for rsync
Hello,
about a year ago I ran into a situation where there was a "metadirectory"
containing directories and symlinks to files. There was a need to mirror
the contents of the files and directories gathered via symlinks into this
metadirectory; regular mirroring of the tree wouldn't do any good.
The attached patch gives the user the ability to define how many symbolic
links rsync should follow
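A depth-limited follow could look roughly like this in C (a sketch under POSIX assumptions, not the patch itself, which hooks into rsync's file-list code):

    #include <limits.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Follow at most max_depth symlinks starting from path; the resolved
     * name ends up in buf.  Returns 0 on success, -1 on error or if the
     * chain is longer than max_depth. */
    static int follow_links(const char *path, char *buf, size_t bufsiz, int max_depth)
    {
        char cur[PATH_MAX], target[PATH_MAX];
        struct stat st;

        if (strlen(path) >= sizeof cur)
            return -1;
        strcpy(cur, path);

        for (int depth = 0; depth <= max_depth; depth++) {
            if (lstat(cur, &st) != 0)
                return -1;
            if (!S_ISLNK(st.st_mode)) {
                if (strlen(cur) >= bufsiz)
                    return -1;
                strcpy(buf, cur);
                return 0;
            }
            ssize_t len = readlink(cur, target, sizeof target - 1);
            if (len < 0)
                return -1;
            target[len] = '\0';
            /* Relative targets would need resolving against cur's
             * directory; omitted to keep the sketch short. */
            strcpy(cur, target);
        }
        return -1;      /* chain is deeper than max_depth */
    }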
2003 Dec 17
2
TODO hardlink performance optimizations
On Mon, 15 Dec 2003, jw schultz <jw@pegasys.ws> wrote:
> OK, first pass on TODO complete.
....
> PERFORMANCE ----------------------------------------------------------
....
> Traverse just one directory at a time
>
> Traverse just one directory at a time. Tridge says it's possible.
>
> At the moment rsync reads the whole file list into memory at the
>
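As a rough illustration of the "one directory at a time" idea (a sketch, not rsync's code, which keeps its file list in its own structures), only a single directory's worth of entries is held at any moment:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    static void walk_one_dir(const char *dir)
    {
        DIR *d = opendir(dir);
        struct dirent *de;
        char path[4096];

        if (!d)
            return;
        while ((de = readdir(d)) != NULL) {
            struct stat st;
            if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
                continue;
            snprintf(path, sizeof path, "%s/%s", dir, de->d_name);
            if (lstat(path, &st) != 0)
                continue;              /* entry vanished or is unreadable */
            /* ... hand this one entry to the sender/generator here ... */
            if (S_ISDIR(st.st_mode))
                walk_one_dir(path);    /* descend after handling the parent;
                                        * a real version would bound the
                                        * number of open directory handles */
        }
        closedir(d);
    }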
2004 Jan 19
1
File that "vanish"es between readdir and stat is not IO error
Using rsync 2.6.0 with --verbose and doing a pull.
> receiving file list ... readlink "{FILENAME}" failed:
> No such file or directory
> done
> IO error encountered - skipping file deletion
The file was a temporary file that was being deleted just as
the rsync was run. So while the file list was being built,
it was there when the directory was read but had vanished
by the
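The behaviour being asked for can be sketched like this (a hypothetical helper, not rsync's actual code): an ENOENT from lstat() on a name that readdir() just returned is reported and skipped without setting the I/O-error flag that suppresses deletions.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>

    static int stat_entry(const char *path, struct stat *st)
    {
        if (lstat(path, st) == 0)
            return 0;
        if (errno == ENOENT) {
            fprintf(stderr, "file has vanished: %s\n", path);
            return 1;   /* skip the entry, but don't flag an I/O error */
        }
        perror(path);
        return -1;      /* a real error: do set the I/O-error flag */
    }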
2004 Apr 05
1
Error trying to compile 2.6.0 on Solaris 9 Sparc with gcc 3.2
Hi.
I'm trying to compile rsync 2.6.0 on Solaris 9 Sparc, using gcc 3.2.
The configure script seems to run with no problem.
But when I go ahead to make, I get the following error output:
-----8<------------------------------------------------------------
gcc -I. -I. -g -O2 -DHAVE_CONFIG_H -Wall -W -c rsync.c -o rsync.o
In file included from rsync.c:23:
rsync.h:371: warning: no semicolon
2008 Mar 19
0
[PATCH] Unsnarl missing_below/dry_run logic.
The generator can skip a directory's contents altogether due to
--ignore-non-existing, a daemon exclude, or a mkdir failure. On a --dry-run,
the generator can also note the missingness of a directory while still scanning
its contents. These two scenarios were conflated using a single set of
missing_below/missing_dir variables in combination with transient increments in
dry_run; this caused
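A hypothetical illustration of keeping the two scenarios apart (these are not rsync's actual variables), rather than overloading one missing_below value with transient dry_run increments:

    enum missing_kind {
        MISSING_NONE,
        MISSING_SKIP_CONTENTS,   /* --ignore-non-existing, daemon exclude,
                                  * or mkdir failure: don't descend at all */
        MISSING_DRY_RUN          /* dir noted as missing under --dry-run,
                                  * but its contents are still scanned */
    };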
2004 Feb 06
4
memory reduction
As those of you who watch CVS will be aware, Wayne has been
making progress in reducing the memory requirements of rsync.
Much of what he has done has been the product of discussions
between him and me that started a month ago with John Van
Essen.
Most recently Wayne has changed how the file_struct and its
associated data are allocated, eliminating the string areas.
Most of these changes have been
2008 Sep 27
1
Bug with crtimes and hard links?
I've been getting spurious unnecessary copying of files on OSX when
using the crtimes patch and the --crtimes -H options (version 3.0.4).
I can reliably demonstrate it (on OSX 10.5) by doing this several
times (as root):
rsync -v -N -axHAX --delete-during --fileflags --force-change /usr/bin/ /tmp/foo/
I think I've tracked it down to the hard-link processing code in
2018 Jul 12
1
[Bug 13526] New: Hard link creation time
https://bugzilla.samba.org/show_bug.cgi?id=13526
Bug ID: 13526
Summary: Hard link creation time
Product: rsync
Version: 3.1.3
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P5
Component: core
Assignee: wayned at samba.org
Reporter:
2004 Mar 10
4
HFS+ resource forks: WIP patch included
As you all know, rsync doesn't have any special handling
for Mac OS X HFS+ resource forks. Kevin Boyd made RsyncX
and rsync_hfs to address this gap, but they only work when
the destination filesystem is also HFS+. I haven't been
able to find any references to an rsync that is capable of
syncing from HFS+ to UFS (etc). The only solutions I've
seen involve lots of preprocessing
2005 May 31
0
[Bug 2758] New: "File exists" error using options -b and --backup-dir with device files.
https://bugzilla.samba.org/show_bug.cgi?id=2758
Summary: "File exists" error using options -b and --backup-dir
with device files.
Product: rsync
Version: 2.6.4
Platform: All
OS/Version: AIX
Status: NEW
Severity: normal
Priority: P3
Component: core
AssignedTo:
2002 Jan 13
1
rsync-2.5.1 / hlink.c patches
Platform: Compaq OpenVMS Alpha 7.3
Compiler: Compaq C T6.5
The following patch resolves compile problems with the HLINK.C module.
The cast on the function argument for the qsort() routine was wrong and
was not allowing the compile to complete.
When the function definition of hlink_compare() is corrected to have
the const qualifiers, the cast inside the qsort() function call is no
longer
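For reference, the shape qsort() expects: the comparator takes two const void * arguments, after which no cast is needed at the call site. A generic stand-in for hlink_compare() (which groups hard-linked entries, i.e. same device and inode, next to each other) might look like:

    #include <stdlib.h>

    struct item { int dev; int inode; };

    static int item_compare(const void *p1, const void *p2)
    {
        const struct item *a = p1, *b = p2;
        if (a->dev != b->dev)
            return a->dev < b->dev ? -1 : 1;
        if (a->inode != b->inode)
            return a->inode < b->inode ? -1 : 1;
        return 0;
    }

    /* ... qsort(array, count, sizeof(struct item), item_compare); ... */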
2003 Jul 24
0
(no subject)
Here is a diff which should allow applying batch updates remotely (as
opposed to copying the batch files to the remote server and running
rsync there).
E.g.:
rsync --write-batch=test src dst1::dst
rsync --read-batch=test dst2::dst
Oli Dewdney
diff -E -B -c -r rsync-2.5.6/flist.c rsync-2.5.6-remotebatch/flist.c
*** rsync-2.5.6/flist.c Sat Jan 18 18:00:23 2003
---
2020 Feb 10
2
Re: [libnbd PATCH 1/1] generator: Add support for NBD_INFO_INIT_STATE extension
The idea and patch are fine, but I wonder if it would be more useful to
callers if it were exposed as two separate APIs. Callers would then
not need to deal with masking out unknown flags, and it works more
like the other is_* / can_* ("flag calls") we already have.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and
2004 Feb 16
1
[patch] Add `--link-by-hash' option (rev 2).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 2)
* This revision is actually against CVS HEAD (I didn't realize I was working
from a stale rsync'd CVS).
* Apply permissions after
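A rough sketch of the link-farm idea in C (not the patch itself): hash_file_hex() is an assumed helper standing in for the MD4 sum rsync already computes, the two-level ab/cdef directory layout is just one plausible arrangement, and the temp-name/rename dance a robust replacement needs is omitted.

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    extern int hash_file_hex(const char *path, char *hex, size_t hexlen); /* assumed */

    static int link_by_hash(const char *received, const char *farm_dir)
    {
        char hex[33], target[4096];

        if (hash_file_hex(received, hex, sizeof hex) != 0)
            return -1;
        snprintf(target, sizeof target, "%s/%.2s/%s", farm_dir, hex, hex + 2);

        if (link(received, target) == 0)
            return 0;        /* first copy of this content: seed the farm */
        if (errno != EEXIST)
            return -1;       /* e.g. missing hash subdir, or EMLINK when the
                              * filesystem's maximum link count is reached,
                              * which later revisions roll over from */

        /* The content is already in the farm: make the received name share
         * that inode instead of keeping a second copy. */
        if (unlink(received) != 0)
            return -1;
        return link(target, received);
    }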
2008 Feb 15
4
Revised flags patch
Hi,
first of all, sorry for taking so long. Unfortunately, some other tasks
kept coming up. Anyway, attached is the version of the flags patch that
is based on the one I'm using with 2.6.9. It is against the rsync-3.0.0pre9
release.
I've included the option name change from the repository, so the
option is now called --fileflags. Improved from the previously
distributed version is the
2004 Feb 23
0
[patch] Add `--link-by-hash' option (rev 4).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 4)
* Updated for committed robust_rename() patch, other changes in CVS.
(rev 3)
* Don't link empty files.
* Roll over to new file when
2004 Feb 17
0
[patch] Add `--link-by-hash' option (rev 3).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 3)
* Don't link empty files.
* Roll over to new file when filesystem maximum link count is reached.
* If link fails for another reason, leave
2004 Feb 23
0
[patch] Add `--link-by-hash' option (rev 5).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 5)
* Fixed silly logic error.
(rev 4)
* Updated for committed robust_rename() patch, other changes in CVS.
(rev 3)
* Don't link empty