Has anyone got a method for limiting the total number of bytes transferred with rsync?  I was thinking of running with -n and then using the output to check how much would be transferred (rough sketch below).  I ask because a client had a broken filesystem that occasionally showed 2T+ files on it (broken filesystem, so they weren't actually that big), but we happily ran up a huge b/w bill with rsync.

-Mike
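The rough sketch of that -- completely untested, and it assumes the dry-run --stats output includes a "Total transferred file size" line that can be scraped:

    #!/bin/sh
    # Untested sketch: do a dry run first and only sync for real if the
    # would-be transfer comes in under a byte cap.  The paths and the cap
    # here are made up -- substitute your own.
    SRC=host:/src/
    DEST=/dest/
    LIMIT=107374182400    # 100 GB, in bytes

    # -n (--dry-run) plus --stats reports "Total transferred file size: N bytes"
    # without copying anything; strip everything but the digits.
    BYTES=`rsync -an --stats "$SRC" "$DEST" |
        awk -F: '/Total transferred file size/ { gsub(/[^0-9]/, "", $2); print $2 }'`

    if [ "${BYTES:-0}" -gt "$LIMIT" ]; then
        echo "dry run wants $BYTES bytes, over the $LIMIT byte cap -- not syncing" >&2
        exit 1
    fi

    exec rsync -a "$SRC" "$DEST"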
On Mon, Feb 28, 2005 at 05:24:36PM -0700, Michael Best wrote:
> I ask because a client had a broken filesystem that occasionally has
> 2T+ files on it (broken filesystem, so they weren't actually that big)
> but we happily ran up a huge b/w bill with rsync.

Rsync 2.6.4 has a new option, --max-size=N, that can be used to filter out any really big files from being synchronized.  For instance:

    rsync -av --max-size=10g host:/src/ /dest/

Beyond that, no -- there's no total-transfer-size limit.

I'm working on the 2.6.4pre2 release, which should be ready fairly soon now.

..wayne..
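If you want to see in advance which files a given --max-size would let through, a dry run should show you without copying anything:

    # -n means nothing is actually transferred; the verbose file list is
    # what would be sent, with anything over 10 GB filtered out.
    rsync -avn --max-size=10g host:/src/ /dest/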
Michael Best wrote:
> I ask because a client had a broken filesystem that occasionally has 2T+
> files on it (broken filesystem, so they weren't actually that big) but
> we happily ran up a huge b/w bill with rsync.

For this specific example you could probably wildcard-match the files with an --exclude= argument.  Borked filesystems usually generate matchable file names.

For a more generic test:

    if [ `du -sb <root>` -gt <size limit> ] ; then <barf> ; else <rsync> ; fi

which is crude, time-consuming, and won't transfer anything if you step even one byte over the limit.

How evil do you want to be? :)  You could get your script to build a --files-from list beforehand and only back up the entries that come in under the limit.  Perhaps build the full file list yourself, incrementing a counter for each file until you reach the size limit, then fire off rsync with that new list passed to --files-from (rough sketch below).  Again, not very elegant, but it'd do the job.

This also assumes you're running something unixy (doing this stuff in batch is a whole new level of hell).

> -Mike
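A rough cut at that --files-from idea -- untested, and it leans on GNU find's -printf; the SRC/DEST/LIMIT values are just placeholders to adapt:

    #!/bin/sh
    # Untested sketch: build a file list that stays under a byte limit,
    # then hand it to rsync via --files-from.
    SRC=/data                # made-up source tree
    DEST=backuphost:/dest    # made-up destination
    LIMIT=53687091200        # 50 GB, in bytes
    LIST=/tmp/rsync-list.$$

    cd "$SRC" || exit 1

    # GNU find prints "<size> <path>"; keep adding sizes and stop emitting
    # paths once the running total would go over the limit.
    find . -type f -printf '%s %p\n' |
    awk -v limit="$LIMIT" '
        { total += $1 }
        total > limit { exit }
        { sub(/^[0-9]+ /, ""); print }
    ' > "$LIST"

    rsync -av --files-from="$LIST" "$SRC/" "$DEST/"
    rm -f "$LIST"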