similar to: Red Hat rsync - 'sign' patch

Displaying 20 results from an estimated 3000 matches similar to: "Red Hat rsync - 'sign' patch"

2002 Feb 01
0
rsync Warning: unexpected read size of 0 in map_ptr
On Wed, Jan 30, 2002 at 06:03:10PM -0500, Bill Nottingham wrote:
> Dave Dykstra (dwd@bell-labs.com) said:
> > I stumbled across the bug report
> > http://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=58878
> >
> > which shows that you made a bug fix to rsync on Sunday. What exactly did
> > you do?
>
> Attached. It's the same thing as yours, I just
2003 Mar 23
1
[RFC] dynamic checksum size
Currently rsync has a bit of a problem with very large files. Dynamic block sizes were introduced to try to handle that automatically if the user didn't specify a block size. Unfortunately that isn't enough and the block size would need to grow faster than the file. Besides, overly large block sizes mean large amounts of data need to be copied even for small changes. The maths indicate
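The arithmetic behind that claim can be sketched roughly as follows (an illustration with invented names and thresholds, not the actual proposal): the per-block strong checksum needs enough bits that (blocks in the file) x (byte offsets tried) / 2^bits stays small, so the checksum length has to grow with the logarithm of the file size.

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* Bytes of strong checksum needed per block so that the expected number
 * of accidental matches over a whole-file transfer stays below ~2^-10.
 * Roughly: collisions ~= (file_len/block_len) * file_len / 2^bits.     */
static int sum2_bytes_needed(int64_t file_len, int32_t block_len)
{
    double blocks = (double)file_len / block_len;
    double bits = log2(blocks) + log2((double)file_len) + 10.0;
    int bytes = (int)ceil(bits / 8.0);
    if (bytes < 2)  bytes = 2;
    if (bytes > 16) bytes = 16;        /* can't exceed the MD4 digest */
    return bytes;
}

int main(void)
{
    int64_t sizes[] = { 1 << 20, 100 << 20, 10LL << 30, 1LL << 40 };
    for (int i = 0; i < 4; i++)
        printf("%14lld bytes -> %2d checksum bytes per block\n",
               (long long)sizes[i], sum2_bytes_needed(sizes[i], 700));
    return 0;
}

With a fixed 700-byte block, the required checksum length climbs steadily as files grow, which is the pressure the message describes.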
2002 Aug 05
5
[patch] read-devices
Greetings, I'd like to propose a new option to rsync, which causes it to read device files as if they were regular files. This includes pipes, character devices and block devices (I'm not sure about sockets). The main motivation is cases where you need to synchronize a large amount of data that is not available as regular files, as in the following scenarios:
* Keep a copy of a block
2003 Sep 14
2
rsync error: error in rsync protocol data stream (code 12) at io.c(463)
Hi, I'm having a problem rsyncing one file (since I signed it). It seems that the content of a file is able to cause problems in the protocol.

building file list ...
28820 files to consider
apt/packages/avifile/
apt/packages/avifile/avifile-0.7.34-1.dag.rh90.i386.rpm
rsync: error writing 4 unbuffered bytes - exiting: Broken pipe
rsync error: error in rsync protocol data stream (code
2003 Mar 30
1
[RFC][patch] dynamic rolling block and sum sizes II
Mark II of the patch set. The first patch (dynsumlen2.patch) increments the protocol version to support per-file dynamic block checksum sizes. It is a prerequisite for varsumlen2.patch. varsumlen2.patch implements per-file dynamic block and checksum sizes. The current block size calculation only applies to files between 7MB and 160MB, setting the block size to 1/10,000 of the file length for a
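For reference, the old behaviour described above can be sketched like this (constants are taken from the message; the identifiers are illustrative, not the real ones in the rsync source):

#include <stdint.h>

#define DEFAULT_BLOCK   700        /* fixed block size for small files   */
#define MAX_BLOCK   (16*1024)      /* cap that is hit at roughly 160MB   */

/* Old behaviour: 1/10,000 of the file length, clamped, so the block size
 * only actually varies for files between about 7MB and 160MB.           */
static int32_t old_block_len(int64_t file_len)
{
    int64_t b = file_len / 10000;
    if (b < DEFAULT_BLOCK) b = DEFAULT_BLOCK;
    if (b > MAX_BLOCK)     b = MAX_BLOCK;
    return (int32_t)b;
}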
2002 Mar 23
1
why variable last_i is needed in match.c rsync source ?
Hi all, I have been reading the rsync source: rsync builds a hash table (tag_table) and searches it to find the index into the array of struct sum_buf, which is a member of struct sum_struct. According to the source code, the variable last_i is used to encourage adjacent matches, allowing the RLL coding of the output to work more efficiently. Why does last_i make this more efficient? I can't understand what
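The intuition can be sketched with hypothetical names (this is not the real match.c code): when several old-file blocks share a checksum, preferring the block whose index is last_i + 1 keeps the matches consecutive, and a run of consecutive block tokens is exactly what the run-length ("RLL") encoding of the output stream can emit most cheaply.

/* Pick among candidate block indices that all matched the same checksum.
 * Preferring last_i + 1 keeps match tokens consecutive, so the token
 * stream compresses into runs; any other candidate costs a full token. */
static int last_i = -1;

static int choose_match(const int *candidates, int n)
{
    for (int k = 0; k < n; k++)
        if (candidates[k] == last_i + 1)
            return candidates[k];      /* adjacent to the previous match */
    return candidates[0];              /* fall back to any valid match   */
}

/* After emitting a match for block i, the caller records it: last_i = i; */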
2002 Jan 13
0
rsync-2.5.1 / io.c patches
Platform: Compaq OpenVMS Alpha 7.3 Compiler: Compaq C T6.5 The following patch resolves compile problems with the IO.C module. The (char) type was being used where (void) was more appropriate based on the actual use of the code. The (char) type was also being used where the usage was actually an (unsigned char). const qualifiers were added to improve compile efficiency. EAGLE> type
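The (unsigned char) point is easy to demonstrate with a small stand-alone example (not taken from io.c) of what goes wrong when a raw protocol byte lands in a plain char on a platform where char is signed:

#include <stdio.h>

int main(void)
{
    char          c  = (char)0xFF;   /* raw byte stored in a plain char */
    unsigned char uc = 0xFF;         /* same byte stored unsigned       */

    /* On a signed-char platform the first line typically prints -1,
     * which corrupts any value assembled from such bytes; the second
     * always prints 255. */
    printf("plain char:    %d\n", (int)c);
    printf("unsigned char: %d\n", (int)uc);
    return 0;
}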
2004 Aug 02
4
reducing memmoves
Attached is a patch that makes window strides constant when files are walked with a constant block size. In these cases, it completely avoids all memmoves. In my simple local test of rsyncing 57MB of 10 local files, memmoved bytes went from 18MB to zero. I haven't tested this for a big variety of file cases. I think that this will always reduce the memmoves involved with walking a large
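For context, the window management being discussed works along these lines (a simplified sketch with invented names, not rsync's actual map_ptr): a fixed buffer slides over the file, and only when a new request partially overlaps the current window does the reusable tail get memmoved to the front. If every request advances by a constant, window-sized stride, that overlap is empty and the memmove never happens, which is what the patch aims for.

#include <string.h>
#include <stdint.h>
#include <unistd.h>

struct window {
    int      fd;
    char    *buf;       /* buffer of win_size bytes              */
    size_t   win_size;
    int64_t  p_offset;  /* file offset the buffer currently maps */
    size_t   p_len;     /* number of valid bytes in the buffer   */
};

/* Return a pointer to len bytes at file offset off (len <= win_size). */
static char *window_ptr(struct window *w, int64_t off, size_t len)
{
    if (off >= w->p_offset &&
        off + (int64_t)len <= w->p_offset + (int64_t)w->p_len)
        return w->buf + (off - w->p_offset);        /* already cached */

    size_t keep = 0;
    if (off >= w->p_offset && off < w->p_offset + (int64_t)w->p_len) {
        /* Partial overlap: shift the still-useful tail to the front.
         * With a constant, window-sized stride this branch never runs. */
        keep = (size_t)(w->p_offset + (int64_t)w->p_len - off);
        memmove(w->buf, w->buf + (off - w->p_offset), keep);
    }
    ssize_t got = pread(w->fd, w->buf + keep, w->win_size - keep,
                        off + (int64_t)keep);
    if (got < 0)
        got = 0;                        /* error handling omitted */
    w->p_offset = off;
    w->p_len = keep + (size_t)got;
    return w->buf;
}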
2002 Sep 30
0
rsync --daemon only binding against IPv6
Hi! We (NetBSD pkgsrc) got the following bug report: http://www.NetBSD.org/cgi-bin/query-pr-single.pl?number=18134 In short, it says that rsync-2.5.5 does not bind to an IPv4 port (it binds only to an IPv6 port) when used with rsync --daemon, making it impossible to use rsync as an rsync server under NetBSD 1.5.2. Any hints on this one? Also, one patch we have in pkgsrc seems not to have been
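A sketch of the usual fix for this class of bug (function name and structure are invented; this is not the rsync code): instead of relying on a single socket, which on NetBSD ends up IPv6-only, walk every address family that getaddrinfo() returns and open one listening socket per family.

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int open_listeners(const char *port, int *fds, int maxfds)
{
    struct addrinfo hints, *res, *ai;
    int n = 0;

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;      /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_PASSIVE;

    if (getaddrinfo(NULL, port, &hints, &res) != 0)
        return 0;

    for (ai = res; ai && n < maxfds; ai = ai->ai_next) {
        int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        if (bind(fd, ai->ai_addr, ai->ai_addrlen) == 0 && listen(fd, 5) == 0)
            fds[n++] = fd;              /* keep this family's listener */
        else
            close(fd);
    }
    freeaddrinfo(res);
    return n;                           /* number of listening sockets */
}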
2003 Feb 24
1
exit status 12 when transferring a large file
--- Received from ZBM.ZARBR 089/32000-545 24-02-03 12.07

Hi, I mirror a server installation using rsync 2.5.6 (on both sides) with these options:
-a -x --numeric-ids -H --delete --stats -e ssh -z --exclude-from ...
This happens every night. In about 80% of the cases rsync returns exit status 12 when trying to transfer a certain file. In the other 20% the file is
2002 Dec 09
2
Rsync performance increase through buffering
I've been studying the read and write buffering in rsync and it turns out most I/O is done just a couple of bytes at a time. This means there are lots of system calls, and also most network traffic comprises lots of small packets. The behavior is most extreme when sending/receiving file deltas of identical files. The main case where I/O is buffered is writes from the server (when io
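A minimal sketch of the kind of buffering being proposed (names and buffer size are invented; this is not the actual patch): collect the many tiny writes into one fixed buffer and push it out with a single write() when it fills or is explicitly flushed.

#include <string.h>
#include <unistd.h>

#define OUTBUF_SIZE 4096

static char   outbuf[OUTBUF_SIZE];
static size_t outlen;

static void flush_out(int fd)
{
    size_t done = 0;
    while (done < outlen) {
        ssize_t n = write(fd, outbuf + done, outlen - done);
        if (n <= 0)
            break;                      /* real code must handle errors */
        done += (size_t)n;
    }
    outlen = 0;
}

static void buffered_write(int fd, const void *p, size_t len)
{
    while (len > 0) {
        size_t room  = OUTBUF_SIZE - outlen;
        size_t chunk = len < room ? len : room;
        memcpy(outbuf + outlen, p, chunk);
        outlen += chunk;
        p = (const char *)p + chunk;
        len -= chunk;
        if (outlen == OUTBUF_SIZE)
            flush_out(fd);              /* one syscall per 4KB, not per write */
    }
}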
2003 Oct 01
0
Re: problem with batch mode:
OK. I got the rsync CVS code and compiled it under Linux. That did the job, but only with --no-whole-file because of the local transfer. I then tried to read-batch... under Windows / Cygwin with the current Cygwin rsync. That didn't work - as expected. After compiling it again under Cygwin it worked! I can now create a diff from a new CD to the version before and send the diff files by email. On
2003 Oct 05
2
Possible security hole
Maybe security-related mails should be sent elsewhere? I didn't notice any, so here it goes:

sender.c:receive_sums()

    s->count = read_int(f);
    ..
    s->sums = (struct sum_buf *)malloc(sizeof(s->sums[0])*s->count);
    if (!s->sums) out_of_memory("receive_sums");
    for (i=0; i < (int) s->count; i++) {
        s->sums[i].sum1 = read_int(f);
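The risk in the snippet above: s->count comes straight off the wire, so sizeof(s->sums[0]) * s->count can overflow, making malloc() return a buffer far smaller than the loop assumes. A hedged sketch of the usual defence (not the actual fix applied to rsync): reject implausible counts and check the multiplication before allocating.

#include <stdlib.h>
#include <stdint.h>

void *checked_array_alloc(size_t nmemb, size_t size, size_t max_nmemb)
{
    if (size == 0 || nmemb == 0 || nmemb > max_nmemb)
        return NULL;                      /* implausible element count    */
    if (nmemb > SIZE_MAX / size)
        return NULL;                      /* would overflow the multiply  */
    return malloc(nmemb * size);
    /* calloc(nmemb, size) performs the same overflow check internally */
}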
2013 Jan 03
1
rsync 3.0.9 hangs on large file
Hi Folks, Similar to an earlier thread, but slightly more ordinary. My old rsync backup script, which worked fine under Ubuntu 12.04, hangs on Ubuntu 12.10 (rsync 3.0.9) and a 250 MB file. Command line as follows:

rsync --itemize-changes --human-readable --progress --delete \
    --delete-excluded --compress --bwlimit=18 --recursive --archive \
    --partial --partial-dir=~/partial
2003 Jan 28
2
rsync-2.5.6 build on Red Hat 8.0 fails
The packaging/lsb/rsync.spec file is broken as shipped: It has a "Sept" month (rpmbuild here takes only 3-letter month names), and RH gzips the manpages, so the %files list can't find them. I also added doc/README-SGML and doc/rsync.sgml to the %doc files. Patch follows. Thanks for all the good work!

--- rsync-2.5.6/packaging/lsb/rsync.spec.orig	2003-01-28 06:28:35.000000000 +0100
2012 Jan 15
0
[CENTOS6] mtrr_cleanup: can not find optimal value - during server startup
After a fresh installation of CentOS 6.2 on my server, I get the following errors in my dmesg output:
-------
MTRR default type: uncachable
MTRR fixed ranges enabled:
  00000-9FFFF write-back
  A0000-BFFFF uncachable
  C0000-D7FFF write-protect
  D8000-E7FFF uncachable
  E8000-FFFFF write-protect
MTRR variable ranges enabled:
  0 base 000000000 mask C00000000 write-back
  1 base 400000000 mask
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
Good Day All, Today I looked at the dmesg log and noticed the following messages regarding mtrr_gran_size/mtrr_chunk_size. I am currently running CentOS 6.3; I also installed CentOS 6.2 and 6.1 and saw the same errors. When I installed CentOS 5.8 on the same laptop I did not see these errors.

$ lsb_release -a
LSB Version:
2003 Jul 23
1
SIGCHLD SIG_IGN, then wait - warning messages
Rsync maintainers, please review rsync bug https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=98740 The code in question is in socket.c in start_accept_loop. The user is getting these warning messages:
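The underlying POSIX behaviour: if SIGCHLD is set to SIG_IGN, children are reaped automatically and never become zombies, so a later wait() or waitpid() fails with ECHILD, which produces exactly this kind of warning. A small stand-alone demonstration (not the rsync code):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);           /* "don't tell me about children" */

    pid_t pid = fork();
    if (pid == 0)
        _exit(0);                       /* child exits immediately */

    if (waitpid(pid, NULL, 0) < 0)
        perror("waitpid");              /* typically: No child processes */
    return 0;
}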
2012 Apr 12
1
6.2 x86_64 "mtrr_cleanup: can not find optimal value"
Hi, I have a server that has been running 5.x - 5.8 for a few years without issue and decided to move it to a fresh install of 6.2. The first thing I noticed is that a good part of the log is these mtrr messages, finally ending with "mtrr_cleanup: can not find optimal value" and "please specify mtrr_gran_size/mtrr_chunk_size". I have been searching around and reading the kernel docs
2005 Mar 16
0
Problem with rsync --inplace very slow/hung on large files
I'm trying to rsync a very large (62gig) file from one machine to another as part of a nightly backup. If the file does not exist at the destination, it takes about 2.5 hours to copy in my environment. But, if the file does exist and --inplace is specified, and the file contents differ, rsync either is so significantly slowed as to take more than 30 hours (the longest I've let an