Displaying 20 results from an estimated 800 matches similar to: "[Bug 8512] New: rsync -a slower than cp -a"
2020 Apr 25
1
commit b936741 breaks compilation on macos
Hi,
On systems with HAVE_SETATTRLIST, commit b936741 breaks compilation.
This is because do_setattrlist_times hasn't been converted to STRUCT_STAT
*stp.
Here's a small patch to do it.
Cheers,
Filipe
PS: I've been playing with IO_BUFFER_SIZE, MAX_MAP_SIZE and WRITE_SIZE. Any
plans to make it configurable at runtime? It seems to make a big difference
for large files on a fast link
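For anyone else experimenting: those three are compile-time constants, so for now "configurable" means editing the defines and rebuilding. A minimal sketch, assuming the constants live in rsync.h as in recent sources (the values here are illustrative, not recommendations):

```shell
# Bump WRITE_SIZE and MAX_MAP_SIZE before building; adjust to taste.
sed -i -e 's/^#define WRITE_SIZE.*/#define WRITE_SIZE (64*1024)/' \
       -e 's/^#define MAX_MAP_SIZE.*/#define MAX_MAP_SIZE (512*1024)/' rsync.h
./configure && make
```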
2004 Aug 02
4
reducing memmoves
Attached is a patch that makes window strides constant when files are
walked with a constant block size. In these cases, it completely
avoids all memmoves.
In my simple local test of rsyncing 57MB of 10 local files, memmoved
bytes went from 18MB to zero.
I haven't tested this for a big variety of file cases. I think that this
will always reduce the memmoves involved with walking a large
2011 Sep 12
2
Duration
Is it normal for rsync to take 3 hours on this transfer?
Number of files: 27419348
Number of files transferred: 19501
Total file size: 185.39G bytes
Total transferred file size: 195.92M bytes
Literal data: 195.68M bytes
Matched data: 241.09K bytes
File list size: 402.01M
File list generation time: 0.561 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 600.61M
Total bytes received:
2003 Apr 27
4
Bogus rsync "Success" message when out of disk space
Patches welcome, eh, Paul?
Upon further (belated) investigation, there are 2 affected places
in receiver.c with this error message. Both call write_file().
And write_file is called only in those two places. So that is the
appropriate location to patch. Especially since the obvious fix is
to use the rewrite code already there for the sparse file writes.
2003 Jul 17
1
2 GB Limit when writing to smbfs filesystems
I'm running RedHat 8.0 with samba-2.2.7-5.8.0 (installed from RedHat
distribution)
When I use cpio to write a backup (> 2GB) to a smbfs filesystem, I get the
error: File size limit exceeded
I get the same error when I linux copy (cp) a file (> 2GB) from a Linux ext3
filesystem to the smbfs filesystem.
The smbfs filesystem is mounted from a Windows 2000 Professional
workstation.
After
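For what it's worth, the usual fix in that era was the smbfs `lfs` mount option, which enables large-file (>2 GB) support in the kernel client; without it writes fail at the 2 GB mark. A hedged example, with share and mount point names that are purely illustrative:

```shell
# Remount the Windows share with large-file support enabled.
umount /mnt/backup
mount -t smbfs -o lfs,username=backup //w2kbox/backup /mnt/backup
```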
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now.
When we discovered the problem with full filesystems not allowing
deletes over NFS, we became very anxious to fix this; our users fill
their quotas on a fairly regular basis, so it's important that they
have a simple recourse to fix this (e.g., rm). I played around with
this on my OpenSolaris box at home, read around
2015 Apr 30
2
nfs (or tcp or scheduler) changes between centos 5 and 6?
> Message: 4
> Date: Wed, 29 Apr 2015 08:35:29 -0500
> From: Matt Garman <matthew.garman at gmail.com>
> To: CentOS mailing list <centos at centos.org>
> Subject: [CentOS] nfs (or tcp or scheduler) changes between centos 5
> and 6?
> Message-ID:
> <CAJvUf-CyTg8ZiGq3OXRLKw7s1K2dGx1gqo_2XwOAXXQty=RHZQ at mail.gmail.com>
> Content-Type: text/plain;
2004 Jul 30
1
Problem related to time-stamp
Hi,
I'm facing a problem in "rsync" related to the
time-stamps of the files.
I'm using rsync to transfer files from my m/c
(OS: jaluna-linux) to a remote m/c (OS: jaluna-linux),
and even if there was no change in the files on my
m/c, when I rsync them to the remote m/c the time-stamp of
the file on the remote m/c (which I transferred from my
m/c) will change.
My file name is bigfile and it is
2013 Sep 03
2
rsync -append "chunk" size
I'm transferring 1.1 Mb files over very poor GSM EDGE connection. My
rsync command is:
rsync --partial --remove-source-files --timeout=120 --append --progress
--rsh=ssh -z LOCAL_FILE root at SERVER:REMOTE_PATH
File on remote server "grows" in size in steps of 262144 bytes. That is
a lot, because system needs to transfer at least 262144 (before
compression) every time connection
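A side note, hedged: 262144 bytes is exactly 256 KiB, which matches the MAX_MAP_SIZE constant discussed elsewhere in this archive, so the step size is most likely the receiver's write/map granularity rather than anything tunable from the command line. A quick sanity check on the numbers:

```shell
# 262144 bytes per visible growth step, expressed in KiB.
echo $((262144 / 1024))       # prints 256
# Worst case per dropped connection with --append: one full step resent,
# i.e. about 2 Mbit before compression.
echo $((262144 * 8 / 1000))   # prints 2097 (kilobits)
```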
2007 Nov 27
1
Syncing to multiple servers
Hello everyone,
Let's say we have 3 servers, 2 of them have the latest (stable) version
of rsyncd running (2.6.9)
<Server1> ==> I N T E R N E T ==> <Server2 (rsyncd running)> ==> LAN
==> <Server3 (rsyncd running)>
Suppose I want to send a big file (bigfile.big) from Server1 to both
Server2 and Server3. It would be a good idea to send first from Server1
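The excerpt cuts off, but the chain it describes can be sketched as two hops: one push over the internet, then a LAN copy between the daemons. Module and path names here are illustrative assumptions, not taken from the post:

```shell
# Hop 1: Server1 pushes over the internet to Server2's rsync daemon.
rsync -av bigfile.big server2::incoming/
# Hop 2: Server2 fans the file out to Server3 over the fast LAN.
rsync -av /srv/incoming/bigfile.big server3::incoming/
```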
2009 Feb 19
4
[Bug 1558] New: Sftp client does not correctly process server response messages after write error
https://bugzilla.mindrot.org/show_bug.cgi?id=1558
Summary: Sftp client does not correctly process server response
messages after write error
Product: Portable OpenSSH
Version: 4.3p2
Platform: amd64
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: sftp
2007 Mar 02
1
--delete --force Won't Remove Directories With Dotnames
rsync 2.6.9
Me, personally, I reckon this to be an irritant ... but perhaps (and having
thought about this a bit I decided it's a good chance) this is an
intentional and useful behaviour. But it's a nuisance if you call
your --partial-dir .partial, as I happen to do, since now if you remove a
directory which was aborted in
2012 Oct 20
2
can't find the error in if function... maybe i'm blind?
Hi everybody,
the following always gives me the error
"Fehler in if (File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1])
(File$FishNr[a] <- File$FishNr[a - : Fehlender Wert, wo TRUE/FALSE nötig
ist" (i.e., "Error in if (...): missing value where TRUE/FALSE needed").
Maybe it's stupid, but I'm not getting why... Maybe someone can help
me. Thanks a lot!
for (i in unique(BigFile$TrackAll))
{ File <-
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
I want to use --min-size to copy just large files (and their necessary
parent directories), but everything I've tried copies *all* the source
directories, and creates them empty on the destination even if they
don't have any big files in them. I only want the minimal directory
hierarchies that contain the big files. This doesn't work:
$ rm -rf /tmp/foo
$ rsync -ai --min-size
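The excerpt is truncated, but the usual answer to this confusion, if memory serves, is that --prune-empty-dirs only prunes directories hidden by filter rules, not directories emptied by --min-size, since size-limited files still appear in the file list. A common workaround, sketched with illustrative paths and threshold, is to build the file list up front and let --files-from recreate only the needed parent directories:

```shell
# Collect only the big files, then transfer exactly that list.
cd /src && find . -type f -size +10M > /tmp/biglist
rsync -ai --files-from=/tmp/biglist /src/ /tmp/foo/
```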
2016 Oct 26
0
NFS help
On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote:
> Again, no machine on the internal network that my 2 CentOS hosts are
> on are connected to the internet. I have no way to download anything.
> There is an onerous and protracted process to get files into the
> internal network and I will see if I can get netperf in.
Right, but do you have
2013 Dec 19
1
[Bug 10336] New: Time in file listing with --list-only option is inconsistent whether dst is given or not
https://bugzilla.samba.org/show_bug.cgi?id=10336
Summary: Time in file listing with --list-only option is
inconsistent whether dst is given or not
Product: rsync
Version: 3.0.9
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo:
2002 Sep 27
0
Server writes...?
I have a quick question about rsync's writing of files.
I have a team of people that all use the host BigServer, which is
running rsync as a deamon, as a central place to keep all shared files
backed up. The "master copy" for any given file is considered to be the
local file that somebody has worked on -- i.e., BigServer is NOT
considered the master copy. BigServer is the backup
2012 Aug 19
1
local -> local file copy question
In looking at source, I started at fileio and found
write routines but no read routines.
I found a 'WRITE_SIZE' (32K), but no 'READ_SIZE' --
is that what the MAX_MAP_SIZE (256K) is for?
I would like to make it so that rsync can use larger I/O sizes
(maybe via a command line option?)....
The map routine led me to receiver -- where it looks like it
is responsible for reading the file.
2011 Sep 04
1
Rsync with direct I/O
As far as I can tell rsync doesn't support this. How hard would it be
to implement this? Is it trivial enough to just change the calls in the
code with sed? I think this can significantly reduce CPU usage and
increase I/O speed when dealing with fast storage solutions. It can make
a huge difference when say transferring 30TB of data.
Here are some tests I did. So far the only thing I know
2016 Oct 27
0
NFS help
On Thu, Oct 27, 2016 at 1:03 AM, Larry Martell <larry.martell at gmail.com> wrote:
> On Wed, Oct 26, 2016 at 9:35 AM, Matt Garman <matthew.garman at gmail.com> wrote:
>> On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote:
>>> Again, no machine on the internal network that my 2 CentOS hosts are
>>> on are connected to the