similar to: Feature request, or HowTo? State-full resume rsync transfer

Displaying 20 results from an estimated 10000 matches similar to: "Feature request, or HowTo? State-full resume rsync transfer"

2010 Feb 03
1
limiting the number of connections per client
Hello All, We have a very high utilization rsync server. We can handle a large number of connections at a time, but would like to limit clients to one connection each; we don't want multiple connections from the same client. Is that possible? Thanks Saqib http://enterprise20.squarespace.com
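rsyncd's `max connections` setting is a global cap, not a per-client one, so a limit like this usually has to be enforced at the firewall. A sketch (Linux iptables with the connlimit match, run as root; port 873 is the rsync daemon port):

```
# Reject any TCP connection to rsyncd beyond the first from the same
# source address (connlimit's default mask is /32, i.e. per host).
iptables -A INPUT -p tcp --syn --dport 873 \
         -m connlimit --connlimit-above 1 -j REJECT
```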
2010 Jun 06
3
rsync sleep
Is it possible to sleep 1 second after each file is rsynced? Of course, I can put this in a for loop and do a sleep after each file is done; I was wondering if there was anything native in rsync for this type of operation. TIA
2009 Dec 30
3
rsync email notification on success and failure + Log
Hi, I have a bash script for rsync that should transfer all my files from one drive to the other. I would like to know how I can have the script send me an email after it runs, telling me whether it succeeded, and if possible include the log file. This is my script: #!/bin/bash rsync -av --delete --log-file=/home/duffed/RSyncLog/$(date
2009 Jul 04
4
Rsync with spaces in source or destination path
Hi, I am trying to transfer a file that has spaces in its name. rsync gives me the error below. Am I doing anything wrong? #ls -l /tmp/test\ file -rw-rw-r-- 1 xxx xxx 0 Jul 5 02:23 /tmp//test file # /usr/local/bin/rsync --archive /u/masanip/ACH/test\\\ file /tmp/mydir/ rsync: link_stat "/tmp/test\ file" failed: No such file or directory (2) Number of files: 0 Number of
2004 Dec 31
4
Mirroring directories at once
Dear All I would like to use the command rsync to have a perfect copy of each one of five directories and their contents. How can I do that at once? I know that I can do that directory by directory... but it would not be the fastest way. Thanks in advance, Paul
2004 Dec 11
3
rsync to retry if copy failed - possible?
Hello, I want to be 100% sure that rsync copies something from one location to another. However, I did not see an option which would make rsync retry an operation if it failed for whatever reason (the network was down when rsync started, the network went down while rsync was copying, the connection timed out, etc.). For example, wget has an option like --tries=number Set number of retries
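rsync has no `--tries` equivalent, so the usual answer is a retry loop around the call. In this sketch `run_sync` stands in for the real rsync command line (e.g. `rsync -av src/ host:dest/`); here it fakes two failures so the loop is demonstrable:

```shell
#!/bin/sh
# Retry wrapper: keep calling run_sync until it succeeds or the
# attempt budget is exhausted.
attempts=0
run_sync() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]        # pretend the first two attempts fail
}

max_tries=5
ok=0
while [ "$attempts" -lt "$max_tries" ]; do
    if run_sync; then ok=1; break; fi
    sleep 1                      # back off before retrying
done
echo "ok=$ok attempts=$attempts"
```

In a real wrapper you would also check rsync's exit code to distinguish retryable failures (e.g. timeouts) from permanent ones (e.g. bad arguments).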
2010 Feb 16
5
Rsync / ssh high cpu load
Hi everybody, I am using Cygwin rsync/ssh to synchronize my files from my home computer to my server. I raised a question on the Cygwin list as well, but thought this list would be appropriate too. The issue is that the CPU on the sending side is fully allocated by rsync/ssh (both taking 50%) during a file transfer. This is the command I use: rsync -e ssh * <user>@<server>:/data
2005 Jan 13
3
rsyncd.conf: "timeout=<minimal>" crazyness
Hi, from time to time, in times like today where the whole world is grabbing SUSE-9.2 and/or debian-30r4, I really like to condemn those other anon rsync server admins (you know, the successors of the traditional unix ftp server admins). They usually have within their /etc/rsyncd.conf a line like timeout = <very low> because they are thinking "less" there is "better for
2004 Sep 07
4
"parallelizing" the two initial phases?
Hi (especially Wayne), ftp.gwdg.de is rsyncing most of the data from about 500 other rsync servers. Especially during the general "high traffic" phases like the release of a new Knoppix ISO or a new SUSE distribution or a new KDE release, I see timeouts with other servers which have maximum traffic at that time. There is a general scheme: 1. rsync is building the data base of the
2004 Jun 08
2
[Kde-mirrors] rsync errors from rsync.kde.org
Hi, On Wed, 9 Jun 2004, Dirk Mueller wrote: > it seems a lot of people get recently rsync protocol errors from > rsync.kde.org. Part of the problem is probably that we upgraded to rsync > 2.6.2 due to security problems with older versions. > > the newer rsync release contains a newer rsync protocol that appears to be not > fully backward compatible. The newer, improved protocol
2005 Apr 01
2
Rsync over an NFS mount (Scanned @ Decoma)
If I perform a local rsync with the 'delete' option over an NFS mount, and the mount point has been unmounted, rsync will delete what was once on the local box. For example: Machine 1 (192.168.0.1) is a live production server with a database within the /DB directory. Machine 2 (192.168.0.2) is an idling server, capturing all changes to Machine 1's database. The rsync is done over
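A common defense is to verify the target is really a mounted filesystem before running a `--delete` sync. A sketch (the path is a placeholder; `mountpoint(1)` is from util-linux):

```shell
#!/bin/sh
# Refuse to run a destructive sync when the NFS target is not mounted,
# since rsync --delete against an empty mount point wipes the replica.
safe_sync() {
    target=$1
    if mountpoint -q "$target"; then
        rsync -av --delete /DB/ "$target"/
    else
        echo "refusing to sync: $target is not mounted" >&2
        return 1
    fi
}

safe_sync /mnt/db-replica || echo "sync skipped"
```

An alternative with the same effect is to keep a sentinel file on the mounted filesystem and check for it before syncing.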
2004 Dec 03
1
SUGGESTION: rsyncing gzipped source with non-gzipped destination
Would it be possible to make rsync capable of syncing a gzipped source (at the server) with a non-gzipped destination file? PROBLEM: The rsyncd server provides a few *frequently* accessed, slow-changing, *big* text files (DNS RBL zones). It seems that the source files are too big to stay in memory caches, and rsync sessions cause too many hard disk I/O operations. Making rsyncd capable of "ungzip"
2004 Jul 27
1
[Fwd: multiple rsync simlataneously]
---------------------------- Original Message ---------------------------- Subject: multiple rsync simlataneously From: kbala@midascomm.com Date: Wed, July 21, 2004 7:51 pm To: rsync@lists.samba.org -------------------------------------------------------------------------- Hi, We are using rsync to backup the data from 100 remote machine and the rsync process are started at the
2005 Jan 02
1
suggestion: "user delete"
dear rsync developers: I use rsync as a convenient synchronization tool. I know it was not made for that purpose, but it is very convenient. The only drawback I see is that the delete option is "harsh"---if I edit on machine A, and then forget that A is the updated machine, an rsync pull from machine B will delete my new files on A (if I use --delete). Most of the time, this is what
2005 Jul 14
4
@ERROR: access denied
I'm trying to set up an rsync daemon on my OS X machine to sync my remote subversion repositories. My config file /etc/rsyncd.conf looks like this: motd file = /etc/motd max connections = 25 syslog facility = local3 [repositories] comment = Subversion Repositories path = /usr/local/repositories read only = no list = yes hosts allow = 127.0.0.1 auth users = username secrets file
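With `auth users` configured, "@ERROR: access denied" frequently traces back to the secrets file: its format must be one `user:password` pair per line, and with the default `strict modes = true` the daemon rejects logins if the file is readable by others. A hedged fragment (filename and credentials are placeholders):

```
# /etc/rsyncd.secrets -- one "user:password" pair per line
username:s3cret

# Must not be world-readable, or rsyncd refuses the login under the
# default "strict modes = true":
#   chmod 600 /etc/rsyncd.secrets
```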
2005 Mar 28
4
batch mode error
Anyone, When attempting to use batch mode to update a second host by executing the .sh file I get: [root@sspfedweb batch]# ./obsession_0000.sh Batch file ./batch/obsession_0000.rsync_flist open error: No such file or directory rsync error: syntax or usage error (code 1) at batch.c(241) Can anyone tell me what I'm doing wrong? I tar and gzip the batch directory and ftp it to the target
2005 Oct 27
2
Rsync over NFS mount sending whole files
Hey all, I'm not sure if anyone has experienced this, and I have searched for it online, with no conclusive, err.. conclusions. Basically, when rsyncing /test1 (local) and /mnt/test2/ (NFS mount), it seems that when using rsync with --no-whole-file, entire files (instead of just updated blocks) are sent through. I am using the following command: rsync -avtz --no-whole-file /test1/
2007 Jun 06
5
Feature request: External deletion command
Hi everyone, it would be nice if rsync could call an external command to delete files. Then one could call a secure deletion tool like "wipe", which overwrites files a few times before deleting them. Right now I'm wiping personal data which I don't need anymore, but it doesn't help much since it can easily be recovered from my backup made with rsync. Bye, Mario
2008 Sep 25
2
Rsync 3
Hi Everyone, We want to use rsync 3 for incremental file transfer of our file system from one box to another. There are about a million files to be copied, so I wanted to know, from anyone's previous experience with rsync 3, whether this many files will get copied the first time and then incrementally in subsequent runs of the crontab. Replies will be much appreciated. Thanks in
2008 Jan 25
2
rsyncd performance when handling multiple clients in parallel
Hi: I use rsync to transfer multiple files from several clients to a server in parallel. I am wondering how many concurrent connections the server should handle to maximize the throughput (number of bytes written to the server). In an extreme case, if only one connection is allowed, the disk I/O speed of the server will not be fully utilized. On the other hand, if the server allows too many connections, the
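On the daemon side, the knob for this experiment is `max connections` in rsyncd.conf; the right value is workload-specific and is best found by benchmarking. A fragment (the number here is purely illustrative):

```
# /etc/rsyncd.conf -- cap concurrent sessions so the disk is kept busy
# without thrashing; tune by measuring aggregate write throughput.
max connections = 8
lock file = /var/run/rsyncd.lock
```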