Hello,

I have been using rsync to sync data from one FTP server to a backup FTP server. Both servers are Solaris 10 x86, and the file systems being synced are a few hundred ZFS user data file systems under the parent/root FTP area.

I noticed an unexpected problem in the sync log files: one of the user data directories had bumped up against its quota and failed to write the data. That is expected; what was unexpected was that the entire rsync process died with this error, and the remaining user file systems in the rsync job, about 100 or so, did not get written. Is it normal for rsync to terminate an entire session if it runs into a quota issue on one of the file systems it is syncing? Here are the error lines from the rsync session:

    rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
    rsync: write failed on "xxxx/xxxxx/xxxxx/xxxx": Disc quota exceeded (49)
    rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.6]
    rsync: connection unexpectedly closed (271703 bytes received so far) [sender]
    rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

Note this was the only file system in the rsync job that was out of space. I expected that if this ever happened an error would be output and the rest of the job would continue. Maybe it has something to do with ZFS, or with the fact that I am syncing hundreds of file systems with one rsync command run against their hierarchical root, e.g.:

    /usr/local/bin/rsync -av --exclude='*lost+found*' --delete --rsync-path="/usr/local/bin/rsync" /opt/ftp/ user@$FtP_backup:/opt/ftp/

Any comments/advice is appreciated.

--
View this message in context: http://old.nabble.com/rysnc-failure-tp28287782p28287782.html
Sent from the Samba - rsync mailing list archive at Nabble.com.
mm_half3 wrote:
> Is it normal for rsync to terminate an entire session if it runs into a
> quota issue on one of the file systems it is syncing?
> [...]

I would split the sync job into many, one per user/hierarchy. This way you can isolate where the error occurred, and you would be able to back up the other users' data.
If rsync could be configured to ignore errors, it wouldn't save you work, because:

- you would have to rerun the whole job
- you would have to examine the error log and do the same split into user/hierarchy to determine the location of the error.
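One way to implement the suggested split is a shell loop that runs rsync once per top-level user directory and keeps going past any single failure, so a quota error in one tree no longer aborts the other hundred. This is only a minimal sketch: the paths and rsync options are taken from the original command, while the `RSYNC`/`SRC`/`DEST` variables and the `sync_all` function name are illustrative assumptions, not something tested on Solaris 10.

```shell
#!/bin/sh
# Sketch: one rsync run per user file system, continuing past failures.
# RSYNC, SRC and DEST are assumptions modeled on the original command;
# adjust them for your environment.
RSYNC=${RSYNC:-/usr/local/bin/rsync}
SRC=${SRC:-/opt/ftp}
DEST=${DEST:-user@backuphost:/opt/ftp}

sync_all() {
    failed=""
    for dir in "$SRC"/*/; do
        name=$(basename "$dir")
        # A quota error in one user's tree only marks that user as failed;
        # the loop moves on to the next file system regardless.
        "$RSYNC" -av --exclude='*lost+found*' --delete \
            --rsync-path=/usr/local/bin/rsync \
            "$dir" "$DEST/$name/" || failed="$failed $name"
    done
    if [ -n "$failed" ]; then
        echo "rsync failed for:$failed" >&2
        return 1
    fi
    return 0
}
```

Calling `sync_all` from cron would then report the failed users on stderr, giving you a short list for a targeted re-run after the quota is fixed.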
On Thu, Apr 22, 2010 at 6:45 PM, mm_half3 <mm_half3 at yahoo.com> wrote:

> I noticed an unexpected problem in the sync log files, one of the user data
> directories had bumped up against its quota, and failed to write the data.
> That is expected; what was unexpected was that the entire rsync process
> died with this error, and the remaining user file systems in the rsync
> job, about 100 or so, did not get written.

Yeah, rsync explicitly dies if it gets an I/O error that it interprets as a lack of space. Typically this means that the entire destination disk has filled up, and further copying is futile. In some cases that assumption is not warranted, but it is the most common one.

There are potential ways to improve this, but most of them require making rsync smarter about querying quotas and free space at different spots in the destination hierarchy, which is not a simple change to make. The already-suggested idea of splitting the job into separate per-quota copies is probably your best bet for now.

..wayne..
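A related point for anyone scripting around this behavior: the codes in the log above are documented rsync exit values (11 = error in file I/O, 12 = error in the protocol data stream), and rsync also has "partial transfer" codes (23 and 24) that usually mean some files failed but the run otherwise completed. A hedged sketch of a wrapper that treats partial transfers as warnings but everything else as fatal; the `run_rsync` name is made up for illustration:

```shell
#!/bin/sh
# Sketch: classify rsync exit codes. 23/24 (partial transfer, vanished
# source files) are downgraded to warnings; other nonzero codes, such as
# 11 (file I/O) or 12 (protocol stream) from the log above, stay fatal.
run_rsync() {
    "$@"
    rc=$?
    case $rc in
        0)     ;;                                            # clean run
        23|24) echo "warning: partial transfer (rc=$rc)" >&2 ;;
        *)     echo "fatal: rsync exited with $rc" >&2; return $rc ;;
    esac
    return 0
}
```

Usage would be, e.g., `run_rsync /usr/local/bin/rsync -av src/ dest/`; note this only reinterprets the exit code after the fact, so it cannot stop rsync itself from aborting mid-run on a quota error.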