similar to: Revisiting two old issues

Displaying 20 results from an estimated 10000 matches similar to: "Revisiting two old issues"

2003 Mar 30
1
[RFC][patch] dynamic rolling block and sum sizes II
Mark II of the patch set. The first patch (dynsumlen2.patch) increments the protocol version to support per-file dynamic block checksum sizes. It is a prerequisite for varsumlen2.patch. varsumlen2.patch implements per-file dynamic block and checksum sizes. The current block size calculation only applies to files between 7MB and 160MB, setting the block size to 1/10,000 of the file length for a
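For readers skimming the thread, here is a minimal sketch of the kind of per-file heuristic under discussion; the constants and clamps are illustrative assumptions, not the values from the patch:

    /* Illustrative sketch only, not the patch itself: derive a per-file
     * block size from the file length instead of using one fixed value.
     * BLOCK_SIZE and MAX_BLOCK_SIZE are assumed values for the example. */
    #include <stdint.h>

    #define BLOCK_SIZE     700        /* small-file default */
    #define MAX_BLOCK_SIZE (1 << 17)  /* illustrative upper clamp */

    static int32_t choose_block_size(int64_t file_len)
    {
        int64_t blength = file_len / 10000;   /* roughly 1/10,000 of the length */

        if (blength < BLOCK_SIZE)
            blength = BLOCK_SIZE;
        else if (blength > MAX_BLOCK_SIZE)
            blength = MAX_BLOCK_SIZE;
        return (int32_t)blength;
    }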
2003 Mar 22
2
[RFC] protocol version
I'm in the midst of coding a patch set for consideration that will bump the protocol version and have a couple of observations. The current minimum backwards-compatible protocol is 15, but we have code that checks for protocol versions as old as 12. If someone else doesn't beat me to it, I'm considering cleaning out the pre-15 compatibility code. A backwards compatibility patch could
2002 Aug 05
5
[patch] read-devices
Greetings, I'd like to propose a new option to rsync, which causes it to read device files as if they were regular files. This includes pipes, character devices and block devices (I'm not sure about sockets). The main motivation is cases where you need to synchronize a large amount of data that is not available as regular files, as in the following scenarios: * Keep a copy of a block
2003 Oct 05
2
Possible security hole
Maybe security-related mails should be sent elsewhere? I didn't notice any so here it goes: in sender.c:receive_sums() we have

    s->count = read_int(f);
    ..
    s->sums = (struct sum_buf *)malloc(sizeof(s->sums[0]) * s->count);
    if (!s->sums) out_of_memory("receive_sums");
    for (i = 0; i < (int)s->count; i++) {
        s->sums[i].sum1 = read_int(f);
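The worry is the unchecked count read off the wire: sizeof(s->sums[0]) * s->count can wrap around, so malloc() may return a buffer too small for the loop that follows. A hedged sketch of the kind of guard one could add (names are illustrative, not rsync's actual fix):

    #include <stdint.h>
    #include <stdlib.h>

    /* Reject counts that are negative or would overflow the allocation. */
    static void *alloc_array_checked(size_t elem_size, int32_t count)
    {
        if (count < 0 || (size_t)count > SIZE_MAX / elem_size)
            return NULL;
        return calloc((size_t)count, elem_size);
    }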
2003 Mar 23
1
[RFC] dynamic checksum size
Currently rsync has a bit of a problem with very large files. Dynamic block sizes were introduced to try to handle that automatically if the user didn't specify a block size. Unfortunately that isn't enough, and the block size would need to grow faster than the file. Besides, overly large block sizes mean large amounts of data need to be copied even for small changes. The maths indicate
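One way "the maths" can be framed, as a sketch under assumed numbers rather than the formula the thread converged on: the chance of a false block match grows with both the block count and the number of byte offsets tried, so the strong-checksum length per block has to grow as roughly 2*log2(file_len) - log2(block_len) bits plus a safety margin.

    #include <math.h>
    #include <stdint.h>

    /* Sketch: pick a per-block strong-checksum length from the file and
     * block sizes.  The 10-bit margin and the 2..16 byte clamps are
     * assumptions for illustration. */
    static int choose_sum2_bytes(int64_t file_len, int32_t block_len)
    {
        double bits = 10 + 2 * log2((double)file_len) - log2((double)block_len);
        int bytes = (int)ceil(bits / 8);

        if (bytes < 2)  bytes = 2;
        if (bytes > 16) bytes = 16;   /* can't exceed the full MD4 digest */
        return bytes;
    }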
2004 Jan 15
1
Resolving problems in the generator->receiver pipes
When I was working on the hard-link change, I noticed that many of the hard-link verbose messages were getting lost. These messages get output very near the end of the transfer, and it turns out that the reason for the loss was that there are two pipes flowing between the generator and the receiver, and it was possible for the "we're all done" message to get received down the redo
2005 Apr 25
2
How about a --min-size option, next to --max-size
There's a rather old bug report in Debian's bug tracking system (see http://bugs.debian.org/27126) about wanting to be able to specify the maximum file size, as well as the minimum file size. Here's the text: Sometimes, it's useful to specify a file size range one is interested in. For example, I'd like to keep an up-to-date mirror of Debian, but I currently
2004 Jan 17
1
--delete-sent-files (AKA --move-files)
Yes, it's time once again to return to the subject of moving files. With the recent changes to the communications code between the receiver and the generator, there is now a non-clogging channel that we can use to signal the sender when a file has been successfully transferred, which allows us to delete the original for all transferred files. I have in the past waffled on whether this feature
2005 Jan 05
2
Preliminary Suggestion For Atomic Transactions
In the past there's been a need to provide consistency between symbolic links and repository metadata during a sync. Currently, rsync renames files piecemeal. The attached patch (extremely ugly) attempts to resolve this by deferring the rename step until the end. It adds a new option (if we didn't, ls might catch up). There are several issues to get over. The first big one in
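A hedged sketch of the "hold the renames until the end" idea, as an illustration of the approach rather than the attached patch: write to temporary names during the transfer, remember the pairs, and rename them all in one pass at the end so the visible tree flips over in a short burst.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct pending_rename { char *tmp_name; char *final_name; };

    static struct pending_rename *pending;
    static size_t npending;

    static void defer_rename(const char *tmp, const char *final)
    {
        struct pending_rename *p =
            realloc(pending, (npending + 1) * sizeof(*pending));
        if (!p) { perror("realloc"); exit(1); }
        pending = p;
        pending[npending].tmp_name   = strdup(tmp);
        pending[npending].final_name = strdup(final);
        npending++;
    }

    static void flush_renames(void)   /* called once, at the end of the run */
    {
        for (size_t i = 0; i < npending; i++) {
            if (rename(pending[i].tmp_name, pending[i].final_name) != 0)
                perror(pending[i].final_name);
            free(pending[i].tmp_name);
            free(pending[i].final_name);
        }
        free(pending);
        pending = NULL;
        npending = 0;
    }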
2002 Oct 11
4
Problem with checksum failing on large files
I'm having a problem with large files being rsync'd twice because of the checksum failing. The rsync appears to complete on the first pass, but then is done a second time (with the second try successful). When some debug code was added to receiver.c, I saw that the checksums for the remote file & the temp file do not match on the first try, so (as expected) it repeats the rsync & the
2003 Dec 20
3
preview release: 2.6.0pre1
OK, I packaged up the current CVS as our first preview release for 2.6.0. You can grab it here: http://samba.org/ftp/rsync/preview/rsync-2.6.0pre1.tar.gz The MD5 checksum is: 70e9dea967f083c231b7821ef35aef1b rsync-2.6.0pre1.tar.gz There is not currently a .sig file for the package, but I'm looking into that next. Please test this and let me know if we have any remaining issues
2005 Jun 27
5
adding a new log-format escape
I'm adding a new escape to log-format, %s, to print out the checksum of a file, and I've got a couple of problems. They've got to be simple bugs, but I haven't been able to figure them out. The following patch gives me a broken pipe and a bus error when I test it. Note that I've applied the md5 patch beforehand. diff -Naur rsync-2.6.5-md5/log.c rsync-2.6.5/log.c ---
2004 Aug 02
4
reducing memmoves
Attached is a patch that makes window strides constant when files are walked with a constant block size. In these cases, it completely avoids all memmoves. In my simple local test of rsyncing 57MB of 10 local files, memmoved bytes went from 18MB to zero. I haven't tested this for a big variety of file cases. I think that this will always reduce the memmoves involved with walking a large
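The observation in play, as a hedged sketch (rsync's real map_ptr() window logic is more general than this): when the walk advances by a constant stride equal to the window size, nothing from the old window needs to be carried forward, so the buffer can simply be refilled with a plain read instead of a memmove of the leftover tail.

    #include <stdio.h>

    #define WINDOW_LEN 4096   /* illustrative; imagine it equals the block size */

    /* Refill the window in place; returns bytes available, 0 at EOF.
     * No memmove: the new window starts exactly where the old one ended. */
    static size_t next_window(FILE *fp, char *window)
    {
        return fread(window, 1, WINDOW_LEN, fp);
    }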
2005 Sep 09
2
File Corruption
We are using rsync to transfer Oracle redo logs from one system to another over a WAN/VPN. The problem we are having is that 1 out of about 500 or so files sent is corrupted. The receiving Oracle server produces a message like this:

    ---
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00283: recovery session canceled due to errors
    ORA-00368: checksum error in redo log block
2004 Dec 27
7
[Bug 2187] rsync large file getting verification failed using -z
https://bugzilla.samba.org/show_bug.cgi?id=2187 ------- Additional Comments From qiucheng@csc.com.cn 2004-12-27 01:15 ------- Created an attachment (id=869) --> (https://bugzilla.samba.org/attachment.cgi?id=869&action=view) error log
2005 Nov 29
3
Is it possible to backup database using rsync?
Hi everyone, I want to back up my database using log files.
1. Is it possible to back up a database using rsync?
2. Can it copy redo log files which are open?
3. Does it have any special feature for handling database redo log files while copying?
Urgent help needed..... regards Harish Naik
2005 Feb 11
3
OCFS file system used as archived redo destination is corrupted
We started using an OCFS file system about 4 months ago as the shared archived redo destination for the 4-node RAC instances (HP DL380, MSA1000, RH AS 2.1). Last night we started seeing some weird behavior, and my guess is the inode directory in the file system is getting corrupted. I've always had a bad feeling about OCFS not being very robust at handling constant file creation and deletion
2004 Feb 06
4
memory reduction
As those of you who watch CVS will be aware, Wayne has been making progress in reducing the memory requirements of rsync. Much of what he has done has been the product of discussions between him and me that started a month ago with John Van Essen. Most recently Wayne has changed how the file_struct and its associated data are allocated, eliminating the string areas. Most of these changes have been
2004 Jan 27
1
Differentiating debug messages from both sides
Some of the debug messages that rsync outputs (when verbose >= 2) can occur on both sides of the connection. This makes it hard to know which program is saying what. Some debug messages deal with this by outputting a "[PID]" string at the start of the message. Unfortunately, the startup message that tells us which pid is which is only output when verbose >= 3, so there's a
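A hedged sketch of the kind of tagging being discussed (names are illustrative): prefix each debug line with the emitting role and pid so the two sides can be told apart without raising the verbosity further.

    #include <stdio.h>
    #include <unistd.h>

    static const char *who_am_i = "generator";   /* set once per process */

    static void dbg(const char *msg)
    {
        fprintf(stderr, "[%s %d] %s\n", who_am_i, (int)getpid(), msg);
    }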
2010 Feb 17
1
Problems transferring from older version of rsync to new
I have a system running rsync in daemon mode; the version information:

    > rsync --version
    rsync version 2.6.8 protocol version 29

On another system I have:

    > authoritative# rsync --version
    rsync version 3.0.7 protocol version 30

I have the following problem when trying to fetch files from the system running the old daemon:

    # /usr/local/bin/rsync --protocol=29 -vvvvvvvv