I'm trying to make a backup using this command:

    rsync -auvH /home/ /bak --delete --bwlimit=1000 --status

The server load increased so much that the server crashed; it also ran out
of memory.

My server is a dual Xeon 2.0 GHz with 2GB of memory + 1GB swap.

Could it be that there are too many files, about 5,000,000, to be backed up?
The way the files are structured makes it very difficult to create several
backups with fewer files each.

Is anyone else facing issues like this? Is there any workaround for this?

Thank you,
Eugen Luca
On Wed, Jan 28, 2004 at 02:12:02PM -0500, Eugen Luca wrote:
> I'm trying to make a backup using this command:
>
>     rsync -auvH /home/ /bak --delete --bwlimit=1000 --status
>
> The server load increased so much that the server crashed; it also ran
> out of memory.
> My server is a dual Xeon 2.0 GHz with 2GB of memory + 1GB swap.
> Could it be that there are too many files, about 5,000,000, to be
> backed up?

The file list size is part of your problem, particularly for a local
transfer.  At approximately 100 bytes per file you are near the limits of
per-process allocatable space.  Add 72 bytes per file (non-CVS HEAD) for
the -H on the generator and it gets worse.  Now factor in the fact that
for a local transfer the file list (but not the hlink list) is built twice
(not shared) and you are pushing/exceeding 1GB RSS just for rsync.  If
this causes the OS to crash you have an unrelated problem that rsync is
just revealing.

> The way the files are structured makes it very difficult to create
> several backups with fewer files each.
> Is anyone else facing issues like this? Is there any workaround for
> this?

Many people have faced this issue.  I'd suggest perusing the archives.
There are enhancements in CVS that reduce the increased memory
requirements of -H.

How it may be worked around will depend on your fileset.  You have
already indicated that your structure makes breaking up the job
difficult.

-- 
________________________________________________________________
	J.W. Schultz            Pegasystems Technologies
	email address:		jw@pegasys.ws

		Remember Cernan and Schmitt
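[Editor's note: since splitting the job is the usual way around the
file-list memory limit, here is a minimal sketch, not from the thread, of
running one rsync per top-level directory under /home.  The options are
carried over from the original command (--status is left out), and all
paths are only examples:]

    #!/bin/sh
    # One rsync run per top-level directory under /home, so each run
    # builds a much smaller file list.
    # Caveats: -H can only preserve hard links within a single run, so
    # links spanning these directories are lost; files sitting directly
    # in /home itself are not covered by this loop; and directories
    # removed entirely from /home would need pruning from /bak by hand.
    for dir in /home/*/; do
        name=$(basename "$dir")
        rsync -auvH --delete --bwlimit=1000 "$dir" "/bak/$name/"
    done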
On Wed, Jan 28, 2004 at 02:12:02PM -0500, Eugen Luca wrote:
> The server load increased so much that the server crashed; it also ran
> out of memory.

The rsync version in CVS has a number of memory-saving optimizations in
it.  Just the file-list reductions (if we ignore -H for now) would save
~114MB of memory in a list of 5,000,000 files, and that's just for one of
the processes--if the sending and receiving are being done on the same
machine, multiply that savings by 2.

The version in CVS also prevents a copy-on-write memory duplication
between the two processes on the receiving side, which should cut the
file-list memory use in half (i.e. a savings of ~350MB for 5,000,000
files).

The memory use of the -H option used to cause the entire file list to be
duplicated, so that's another ~350MB savings in memory footprint on the
receiving side, though we do need to add back in a variable amount of
memory use depending on how many files are actually linked together in
the transfer (figure something like 20 bytes or so per linked file).

So, you might want to give the CVS version a try.  There's info here on
how to get it:

    http://rsync.samba.org/download.html

You can use CVS, rsync the latest files (from the unpacked/rsync dir), or
grab a nightly snapshot in tar form.

..wayne..
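[Editor's note: to put those figures in per-file terms, here is my own
back-of-the-envelope arithmetic, not from the post; the shell can do the
division:]

    # ~114 MB saved across 5,000,000 files is roughly 24 bytes per entry:
    echo $((114 * 1048576 / 5000000))    # prints 23
    # ~350 MB for the whole list works out to roughly 73 bytes per entry:
    echo $((350 * 1048576 / 5000000))    # prints 73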
On Wed, 28 Jan 2004, Eugen Luca wrote:
> I'm trying to make a backup using this command:
>
>     rsync -auvH /home/ /bak --delete --bwlimit=1000 --status
>
> The server load increased so much that the server crashed; it also ran
> out of memory.
> My server is a dual Xeon 2.0 GHz with 2GB of memory + 1GB swap.

In addition to using the new, improved CVS version of rsync as others
have suggested, you should also make it so your system won't die just
because there is a large process.

If you add some more swap space, the system will slow down when you run
big programs.  This is painful, but far less painful than whatever
happens when it runs out of memory.

As long as you've got disk space you can add more swap easily on almost
any modern OS.  On Linux, the commands are:

    dd if=/dev/zero of=swapfile bs=1M count=2K
    mkswap swapfile
    swapon swapfile

You can add the swap area to /etc/fstab so it will be used on every
reboot.  It is much easier to debug a slow system than to debug a crashed
system.  If the swapfile is on a slow disk, you can make it low priority,
so it is used last.  Many Linux versions limit you to 2GB per swapfile,
but you can use several at the same time.  You should use rsync's
--exclude option so you don't back up the swapfile.

I haven't run a system out of memory in a while.  When I've done it in
the past, bad things happened, but the system didn't crash; I just wished
it had.

> Could it be that there are too many files, about 5,000,000, to be
> backed up?
> The way the files are structured makes it very difficult to create
> several backups with fewer files each.
> Is anyone else facing issues like this? Is there any workaround for
> this?
>
> Thank you,
> Eugen Luca

-- 
Paul Haas
paulh@Hamjudo.com
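[Editor's note: a sketch of the fstab, priority, and --exclude suggestions
above; the swapfile path and the priority value are only examples:]

    # /etc/fstab line so the swapfile is activated on every reboot, at a
    # low priority so faster swap areas are used first:
    #   /home/swapfile  none  swap  sw,pri=1  0 0
    # Or set the priority when enabling it by hand:
    swapon -p 1 /home/swapfile
    # Keep the swapfile out of the backup:
    rsync -auvH --delete --bwlimit=1000 --exclude=swapfile /home/ /bak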