similar to: rsync-2.6.1pre-1 hang

Displaying 20 results from an estimated 300 matches similar to: "rsync-2.6.1pre-1 hang"

2004 Feb 06
4
memory reduction
As those of you who watch CVS will be aware, Wayne has been making progress in reducing the memory requirements of rsync. Much of what he has done has been the product of discussions between him and me that started a month ago with John Van Essen. Most recently, Wayne has changed how the file_struct and its associated data are allocated, eliminating the string areas. Most of these changes have been
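(Aside, not from the original thread: below is a minimal, hypothetical C sketch of the general technique described above; a record and its name string are carved out of a single allocation so that no separate string area is needed. It is not rsync's actual file_struct code; struct entry and entry_new are made-up names.)

    /* Illustrative sketch only; not rsync's actual file_struct layout.
     * The record and its name share one malloc'd block, so freeing the
     * record also frees the string. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry {
        unsigned int mode;
        long long size;
        char name[1];              /* name is stored inline at the end of the block */
    };

    static struct entry *entry_new(const char *name, unsigned int mode, long long size)
    {
        size_t len = strlen(name);
        struct entry *e = malloc(sizeof *e + len);   /* name[1] already covers the NUL */
        if (!e)
            return NULL;
        e->mode = mode;
        e->size = size;
        memcpy(e->name, name, len + 1);
        return e;
    }

    int main(void)
    {
        struct entry *e = entry_new("some/file.txt", 0644, 1234);
        if (e) {
            printf("%s (%lld bytes)\n", e->name, e->size);
            free(e);               /* one free releases the struct and the string */
        }
        return 0;
    }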
2004 May 02
1
SEGV on FreeBSD 4.8-STABLE with 2.6.2
I'm getting a SEGV on a FreeBSD 4.8-STABLE box. The client is Solaris 9/SPARC. Both boxes run 2.6.2. The command I'm running is: $ rsync -arHRv --numeric-ids --delete --exclude=/opt/dist/cdrom \ [paths] [server]:[path] If I whittle down what appears in [paths], then it works. $ gdb rsync rsync.core gdb> bt #0 0x280faf0d in strncmp () from /usr/lib/libc.so.4 #1 0x7 in ?? () #2
2004 Jun 09
0
[Bug 1448] New: core dump in send_file_name
https://bugzilla.samba.org/show_bug.cgi?id=1448 Summary: core dump in send_file_name Product: rsync Version: 2.6.2 Platform: x86 OS/Version: NetBSD Status: NEW Severity: normal Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy: eravin@panix.com QAContact:
2004 Mar 10
4
HFS+ resource forks: WIP patch included
As you all know, rsync doesn't have any special handling for Mac OS X HFS+ resource forks. Kevin Boyd made RsyncX and rsync_hfs, to address this gap, but they only work when the destination filesystem is also HFS+. I haven't been able to find any references to an rsync that is capable of syncing from HFS+ to UFS (etc). The only solutions I've seen involve lots of preprocessing
2003 Jul 24
0
(no subject)
Here is a diff which should allow applying batch updates remotely (as opposed to copying the batch files to the remote server and running rsync there). E.g. rsync --write-batch=test src dst1::dst rsync --read-batch=test dst2::dst Oli Dewdney diff -E -B -c -r rsync-2.5.6/flist.c rsync-2.5.6-remotebatch/flist.c *** rsync-2.5.6/flist.c Sat Jan 18 18:00:23 2003 ---
2004 Feb 02
1
[PATCH] --one-file-system and automounter
We use rsync in a Linux installation script. First, the root filesystem of another machine on the network is cloned with "rsync -axzH", and then a few files are updated to give the clone its own identity. This works fine, but last week, the Postfix mailer daemon on a new machine refused to start because some lock files had a link count of 2. It turned out that rsync had created two
2008 Sep 03
0
rsync-3.0.3 crashes with protection exception
Hi, I'm new to rsync and currently installing rsync-3.0.3 in an OS/390 Unix System Services environment. The build process runs fine and does not produce errors. But when I test the program, it crashes every time with a protection exception. rsync-2.6.9 was running fine! I can't figure out why exactly it crashes. I hope that someone on this list can give me a hint on that. This is the debug
2003 Feb 16
1
rsync-exclude.patch.
> I like the idea of your rsync-exclude.patch and have thought > about hacking it in myself. However as you already have done the work > may I make a small suggestion...... can the name of the exclude file > (your .rsync) be specified in the flags.... e.g. > > rsync --rsync-exclude=.snapshot -axvH /here /there > > In this way different invocations (e.g. system and
2008 Jan 31
1
DO NOT REPLY [Bug 5235] New: buffer overflow in receive_file_entry
https://bugzilla.samba.org/show_bug.cgi?id=5235 Summary: buffer overflow in receive_file_entry Product: rsync Version: 2.6.9 Platform: Other OS/Version: Linux Status: NEW Severity: normal Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy: rsync@ofdan.co.uk
2005 Jun 09
0
[Bug 2784] New: rsync gives following error: buffer overflow in receive_file_entry
https://bugzilla.samba.org/show_bug.cgi?id=2784 Summary: rsync gives following error: buffer overflow in receive_file_entry Product: rsync Version: 2.6.3 Platform: All OS/Version: All Status: NEW Severity: major Priority: P3 Component: core AssignedTo: wayned@samba.org
2005 Jun 09
0
[Bug 2785] New: rsync gives following error: buffer overflow in receive_file_entry
https://bugzilla.samba.org/show_bug.cgi?id=2785 Summary: rsync gives following error: buffer overflow in receive_file_entry Product: rsync Version: 2.6.3 Platform: All OS/Version: All Status: NEW Severity: major Priority: P3 Component: core AssignedTo: wayned@samba.org
2014 Mar 26
11
[Bug 10518] New: rsync hangs (100% cpu)
https://bugzilla.samba.org/show_bug.cgi?id=10518 Summary: rsync hangs (100% cpu) Product: rsync Version: 3.1.1 Platform: All OS/Version: Linux Status: NEW Severity: critical Priority: P5 Component: core AssignedTo: wayned at samba.org ReportedBy: syzop at vulnscan.org QAContact:
2004 Jan 26
1
How match.c hash_search works with multiple checksums that have identical tags
I am trying to understand how match.c works. I am reading the code and something doesn't look quite right. This is usually a sign that I am missing something obvious. Here is what I see: build_hash_table uses qsort to order targets in ascending order of (tag, index) into the array of checksums. It then accesses the targets in ascending order and writes the index at the tag's location in
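(Aside, not from the original thread: below is a simplified, hypothetical C sketch of the tag-table scheme the post is asking about. Identifiers such as struct target, build_tag_table, TABLESIZE and NULL_TAG are illustrative and are not claimed to match the actual match.c source.)

    #include <stdlib.h>

    #define TABLESIZE (1 << 16)    /* one slot per possible 16-bit tag */
    #define NULL_TAG  (-1)

    struct target {
        unsigned short tag;        /* small hash derived from the rolling checksum */
        int index;                 /* position in the original checksum array */
    };

    static int compare_targets(const void *a, const void *b)
    {
        const struct target *t1 = a, *t2 = b;
        if (t1->tag != t2->tag)
            return (int)t1->tag - (int)t2->tag;
        return t1->index - t2->index;
    }

    /* Sort targets by (tag, index), then point tag_table[tag] at the first
     * sorted entry carrying that tag.  A lookup hashes the rolling checksum
     * to a tag, jumps to tag_table[tag], and walks forward while the tag
     * still matches, comparing full checksums along the way. */
    void build_tag_table(struct target *targets, int count, int *tag_table)
    {
        int i;

        for (i = 0; i < TABLESIZE; i++)
            tag_table[i] = NULL_TAG;

        qsort(targets, count, sizeof targets[0], compare_targets);

        /* Walking backwards makes each tag end up pointing at its first
         * (lowest-position) entry in the sorted array. */
        for (i = count; i-- > 0; )
            tag_table[targets[i].tag] = i;
    }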
2012 Feb 01
0
timeout during hash_search
Hi, I'm experiencing timeouts during synchronization of large files. The scenario is as follows: locally I have a large file (e.g. 20GB), which is already present at the server. When I try to resync it with: rsync --inplace --ignore-times -vvvv --compress bigfile rsync://xxx@... The '--ignore-times' is only to force a comparison. The real-world scenario would be the sync of a
2011 Apr 21
2
Crash copying to a zfs-fuse partition
Hi - I'm using rsync 3.0.8 on Fedora 14 x86-64 (package, not built). I get a repeated crash trying to rsync to a particular zfs partition: [root@xback1 diskbackup]# rsync -raHx --inplace --numeric-ids --stats --no-whole-file --delete /xback1_back1/home_jss/20110225-000501/ /xback1_test1/home_jss/current/ Segmentation fault [root@xback1 diskbackup]# rsync: writefd_unbuffered failed to
2011 Jul 22
2
[Bug 8315] New: hang in rsync (match.c)
https://bugzilla.samba.org/show_bug.cgi?id=8315 Summary: hang in rsync (match.c) Product: rsync Version: 3.0.9 Platform: All OS/Version: All Status: NEW Severity: normal Priority: P5 Component: core AssignedTo: wayned at samba.org ReportedBy: jeremy at jeremysanders.net QAContact:
2006 Mar 31
3
DO NOT REPLY [Bug 3649] New: buffer overflow in receive_file_entry
https://bugzilla.samba.org/show_bug.cgi?id=3649 Summary: buffer overflow in receive_file_entry Product: rsync Version: 2.6.0 Platform: Other OS/Version: Linux Status: NEW Severity: normal Priority: P3 Component: core AssignedTo: wayned@samba.org ReportedBy: sambesselink@planet.nl
2001 Nov 20
2
patch to enable faster mirroring of large filesystems
I have attached a patch that adds 4 options to rsync that have helped me to speed up my mirroring. I hope this is useful to someone else, but I fear that my relative inexperience with rsync has caused me to miss a way to do what I want without having to patch the code. So please let me know if I'm all wet. Here's my story: I have a large filesystem (around 20 gigabytes of data) that
2002 Oct 09
1
ERROR: buffer overflow in receive_file_entry
has anyone seen this error: ns1: /acct/peter> rsync ns1.pad.com::acct overflow: flags=0xe8 l1=3 l2=20709376 lastname=. ERROR: buffer overflow in receive_file_entry rsync error: error allocating core memory buffers (code 22) at util.c(238) ns1: /acct/peter> -- Peter Dominguez 72 Belvedere Dr Yonkers, NY 10705-2814 USA Tel: 914-423-4000 Fax: 914-423-8640 Email: peter@pad.com
2004 Jul 22
0
ERROR: out of memory in receive_file_entry
Hello, I'm looking for some possible solutions to the out of memory problem when dealing with very large directory trees. Client: linux-2.4.20 Server: HP-UX 11.11 rsync version: 2.6.2 Directory size: 400Gbytes number of files: 3273133 rsync cmd: rsync -avRx --progress --stats --numeric-ids --blocking-io --delete -e ssh hp-ux.server:/path /local/linux/ It seems to fail after consuming