Displaying 20 results from an estimated 20000 matches similar to: "No subject"
2003 Dec 30
3
The dangers of static buffers in rsync code
I have been trying for quite a while now to understand why the
flist.c:f_name() function is implemented using static buffers. Anyone care to
comment?
The immediate problem is that any call to f_name overwrites the previous
result (well, obvious). This, combined with the fact that several
function calls are made with the result of f_name(file), results in
problems handling hardlinks - and
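A minimal C sketch of the hazard described above (f_name_demo here is a
stand-in for flist.c:f_name(), not the actual rsync code): two calls share
one static buffer, so the second call clobbers the first result before both
can be used.

#include <stdio.h>

static const char *f_name_demo(const char *file)
{
    static char buf[1024];              /* shared across ALL calls */
    snprintf(buf, sizeof buf, "dir/%s", file);
    return buf;                         /* caller gets the shared buffer */
}

int main(void)
{
    const char *a = f_name_demo("link_target");
    const char *b = f_name_demo("hardlink");    /* overwrites a's content */

    /* Both now print "dir/hardlink" - exactly the surprise that bites
     * when two f_name() results are passed to one function call while
     * handling hardlinks. */
    printf("a=%s b=%s\n", a, b);
    return 0;
}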
2002 Oct 24
2
Feature Request: break hardlinks before metadata changes
[This email is either empty or too large to be displayed at this time]
2005 Oct 01
1
rsync failed: Too many links
Dear Sir or Madam,
Has anyone seen an error message like the following?
rsync: recv_generator: mkdir "/home/kmiller/briefcase/1205275" failed: Too
many links (31)
rsync: stat "/home/kmiller/briefcase/1205275" failed: No such file or
directory (2)
As far as I can tell, I am not using any symlinks or hardlinks.
Please find below a reasonably complete bug report. Please let
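"Too many links (31)" is errno EMLINK. Even without explicit hardlinks,
every subdirectory holds a ".." link to its parent, so a filesystem's
per-inode link limit caps how many subdirectories one directory can hold
(around 32000 on ext2/ext3, for example). A minimal sketch that probes the
limit by creating subdirectories until mkdir() fails with EMLINK
(illustrative only; it really does create that many directories):

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    char path[64];
    unsigned long i;

    if (mkdir("emlink_probe", 0755) != 0 && errno != EEXIST) {
        perror("mkdir emlink_probe");
        return 1;
    }
    for (i = 0; ; i++) {
        snprintf(path, sizeof path, "emlink_probe/d%lu", i);
        if (mkdir(path, 0755) != 0) {
            if (errno == EMLINK)    /* on filesystems that enforce the limit */
                printf("hit EMLINK after %lu subdirectories\n", i);
            else
                perror("mkdir");
            break;
        }
    }
    return 0;
}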
2007 Apr 26
1
rsync mirroring and hardlink issues
I'm running a mirror of several repositories that are fetched using
separate rsync runs. Since some of those repositories host
related files, I'm using the hardlink utility[1] in order to save disk
space.
However, I've noticed an issue that can lead to file metadata
inconsistencies when using hardlink.
Consider the following scenario:
- two repositories (rep_a and
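A minimal sketch of the coupling behind this (file names are illustrative
only): after hardlink(1) merges two paths onto one inode, a metadata change
through either path is visible through both, so a later rsync run can find
metadata it never set.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat sa, sb;

    close(open("rep_a_file", O_CREAT | O_WRONLY, 0644));
    unlink("rep_b_file");
    link("rep_a_file", "rep_b_file");   /* what hardlink(1) effectively does */

    chmod("rep_a_file", 0600);          /* "fix" metadata in rep_a only... */

    stat("rep_a_file", &sa);
    stat("rep_b_file", &sb);
    /* ...but rep_b's mode changed too: same inode, same metadata. */
    printf("same inode: %d, rep_b mode now: %o\n",
           sa.st_ino == sb.st_ino, (unsigned)(sb.st_mode & 07777));
    return 0;
}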
2009 Jul 07
0
rsync-3.0.6 regression test problems
Hi,
I've built rsync-3.0.6 on a number of legacy Unix systems, and on a few
systems the regression tests turned up some errors:
HP-UX 11.11 and 11.23 (ia64):
FAIL chown
FAIL dir-sgid
FAIL fuzzy
FAIL itemize
IRIX 6.5.13m:
FAIL chown
FAIL fuzzy
FAIL itemize
MacOS-X 10.4:
FAIL chgrp
The errors on "fuzzy" and "itemize" are
2004 Dec 01
1
rsync transfers whole content when a new hardlink is created
Hi,
I detected a silly behaviour of rsync when new hardlinks to already synced
files are created:
Scenario:
There is a local directory and an identical remote directory created by a
former run of rsync.
Create a hardlink to an already existing file (both inside the local
directory).
If this hardlink has a filename which comes before the original filename when
both are sorted in alphabetical order,
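A sketch of the ordering effect being reported (this models the report, not
rsync's exact internals): if the first name of a hardlink group in the
sorted file list is the brand-new link, the receiver has no file under that
name to match against, so the whole content gets resent.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int by_name(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int main(void)
{
    /* "old.dat" was synced earlier; "a_new_link" was just link()ed to it. */
    const char *names[] = { "old.dat", "a_new_link" };

    qsort(names, 2, sizeof *names, by_name);
    /* The new link sorts first, so it leads the group: full resend. */
    printf("transfer candidate: %s\n", names[0]);
    return 0;
}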
2003 Dec 17
2
TODO hardlink performance optimizations
On Mon, 15 Dec 2003, jw schultz <jw@pegasys.ws> wrote:
> OK, first pass on TODO complete.
....
> PERFORMANCE ----------------------------------------------------------
....
> Traverse just one directory at a time
>
> Traverse just one directory at a time. Tridge says it's possible.
>
> At the moment rsync reads the whole file list into memory at the
>
2008 Feb 12
3
Rsync to a Read Only file system
I think your product is awesome, but I am experiencing an unexpected
behaviour.
$ rsync -avviPH /Users/alan/Desktop/rsync_test\ Folder/
root@slug::Downloads
opening tcp connection to slug port 873
sending daemon args: --server -vvlHogDtpre30.16i "--log-format=%i" --partial . Downloads
sending incremental file list
.d..t..g... ./
rsync: failed to write xattr user.rsync.%stat for
2004 Sep 15
0
(no subject)
Hi,
I'm trying to get rsync over OpenSSH/Cygwin working. I started with a command like this (which fails):
$ rsync -v -v -v --recursive --rsh="ssh -i /home/ul081b/mpdm-keys/rsa-mpdm01 mpdm@mpdm-w2k3" MPDM-W2K3::MPDM .
opening connection using ssh -i /home/ul081b/mpdm-keys/rsa-mpdm01 mpdm@mpdm-w2k3 MPDM-W2K3 rsync --server --daemon .
bash: line 1: MPDM-W2K3: command not found
2012 Jun 05
2
[Bug 8979] New: rsync daemon: High load while skipping hardlinks
https://bugzilla.samba.org/show_bug.cgi?id=8979
Summary: rsync daemon: High load while skipping hardlinks
Product: rsync
Version: 3.0.5
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: simon.klinkert at
2015 Nov 30
0
Questions about hardlinks, alternate storage and compression]
Hi all,
I have some updates on the hard-link discussion.
First, let me explain that I installed a test machine with CentOS 7.1
and dovecot/pigeonhole version 2.2.10-4, and the results were identical
to what I had on CentOS 6.7 and dovecot 2.0.9-19.
The bottom line is that hardlinking works only when no RCPT, or at most
one, has sieve filtering. For example:
- if no RCPT has sieve
2007 Jan 23
1
--link-dest copying modified files
Hi!
It's me again with another --link-dest issue:
I am using dirvish (www.dirvish.org) to create daily on-disk backup
images.
dirvish is using rsync with --link-dest pointing to the last good image.
This creates images with hardlinks to unmodified files. So far so good.
Now I want to create a "current" filetree with hardlinks pointing to the
last image.
rsync -vaH --delete
2003 Dec 17
1
TODO hardlink reporting problem - fixed?
On Mon, 15 Dec 2003, jw schultz <jw@pegasys.ws> wrote:
> OK, first pass on TODO complete.
....
This hardlink bug report is nearly 21 months old... So I took a look
at it using 2.5.7. See below.
> BUGS ---------------------------------------------------------------
>
> Fix hardlink reporting 2002/03/25
> (was: There seems
2004 Feb 16
1
[patch] Add `--link-by-hash' option (rev 2).
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
(rev 2)
* This revision is actually against CVS HEAD (I didn't realize I was working
from a stale rsync'd CVS).
* Apply permissions after
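A conceptual sketch of the link-farm idea (the patch hashes with MD4; the
FNV-1a hash below is only a stand-in to keep the example self-contained,
and link_by_hash is a hypothetical helper, not the patch's code):

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* FNV-1a over the file body - stand-in for the patch's MD4. */
static uint64_t hash_file(const char *path)
{
    uint64_t h = 14695981039346656037ULL;
    int c;
    FILE *f = fopen(path, "rb");

    if (!f)
        return 0;
    while ((c = getc(f)) != EOF)
        h = (h ^ (uint64_t)c) * 1099511628211ULL;
    fclose(f);
    return h;
}

/* Store a received file under the farm dir by hash; reuse an existing
 * farm entry if the same content was seen before. */
static void link_by_hash(const char *farm_dir, const char *received)
{
    char farm_path[4096];

    snprintf(farm_path, sizeof farm_path, "%s/%016llx",
             farm_dir, (unsigned long long)hash_file(received));
    if (access(farm_path, F_OK) == 0) {
        /* Content already in the farm: replace our copy with a link. */
        unlink(received);
        link(farm_path, received);
    } else {
        /* First occurrence: seed the farm with a link to this file. */
        link(received, farm_path);
    }
}

int main(int argc, char **argv)
{
    if (argc == 3)
        link_by_hash(argv[1], argv[2]);
    return 0;
}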
2020 Jul 17
0
[OT] What is the max hardlink number for a single file on XFS
Hi list,
I have a little script that uses rsync and hardlink to perform backups.
Some days ago a friend told me that rsync could crash if the hardlink
limit is reached. I know (and tested) that for ext4 the max number of
hardlinks for a single file is 65000, but I can't find a limit for XFS.
Since I can't find good resources via Google search, I
tried to reach its limit
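One way to answer the question empirically is to keep calling link() on a
single file until it fails with EMLINK, as in the sketch below. On ext4
this stops at 65000 extra links; XFS keeps a 32-bit link count on disk, so
its limit is far higher and the probe will run a very long time there.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char name[64];
    unsigned long i;

    close(open("probe_target", O_CREAT | O_WRONLY, 0644));
    for (i = 0; ; i++) {
        snprintf(name, sizeof name, "probe_link_%lu", i);
        if (link("probe_target", name) != 0) {
            if (errno == EMLINK)
                printf("EMLINK after %lu extra links\n", i);
            else
                perror("link");
            break;
        }
    }
    return 0;
}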
2015 Jun 29
0
Questions about hardlinks, alternate storage and compression
Hi all,
any ideas?
Especially point n.1 (no hardlink when sending the same email to
multiple addresses) confuses me a bit. Searching in old messages I even
stumbled on some users stating that, using the Dovecot LMTP server, they
achieved what I want (one message, multiple hardlinks), but I am
_already_ using LMTP, to no avail...
Regards.
On 27/06/15 18:18, Gionatan Danti wrote:
> Hi all,
2004 Sep 16
3
Rsync param parsing using --rsh broken? (was: no subject)
Reposting this with a subject line :-)
Hi,
I'm trying to get rsync over OpenSSH/Cygwin working. I started with a command
like this (which fails):
$ rsync -v -v -v --recursive --rsh="ssh -i /home/ul081b/mpdm-keys/rsa-mpdm01 mpdm@mpdm-w2k3" MPDM-W2K3::MPDM .
opening connection using ssh -i /home/ul081b/mpdm-keys/rsa-mpdm01 mpdm@mpdm-w2k3 MPDM-W2K3 rsync --server --daemon .
bash:
2002 Oct 08
1
Some tests fail if rsync is not on path (with patch)
While installing rsync on a new Sun Netra running Solaris 2.8, two tests
(chgrp and hardlinks) failed.
I found that these tests execute rsync while other successful tests
execute $RSYNC. It is fortunate that my shell path was quite restricted
and that no earlier version of rsync was installed on my path. The system
would have run the chgrp and hardlinks tests with an earlier rsync if it had
2019 Dec 19
2
Dovecot, pigeonhole and hardlinks
Hi list,
many moons ago I asked about preserving hardlinks between identical
messages when pigeonhole (for sieve filtering) was used.
The reply was that, while hardlinking worked well for non-filtered
messages, using pigeonhole broke the hardlink (i.e. some message-specific
data was appended to the actual mail file). Here you can find the
original thread:
2005 Jun 07
1
Hard links revisited
I can't seem to get the hardlinks to work on Windows via --link-dest.
It just refuses to find the files in the directory and make a hard
link. Here's the setup.
-I run 2 rsyncs from batch files called sync.bat and sync2.bat.
-I'm connected to the server via a mapped network drive, we'll call it 'z'.
-I then run the client, on the destination, once to pull the data