I wrote a Python script that wraps find and tar to do full/differential/incremental
backups of the system. For example, my configuration looks like this:
# cat /etc/new/fs_backup.config
fields = "identity, type, time, archive";
default = "full";
type = "${default}"; # comment
datadir = "/mnt/host/fs_backup/";
table = "$datadir/.table";
tsfmt = "%Y%m%d%H%M.%S";
time_fmt = "%Y%m%d-%H%M%S";
info_files = ".files";
info_dirs = ".dirs";
info_identity = ".identity";
name = "%identity.%type.%time";
archive = "$name.tgz";
And I run:
# fs_backup -a t:/var/www/html pages
# fs_backup -a x:/var/www/html/syssite/home/cache/ pages
These create a directory "/mnt/host/fs_backup/pages" and two files
"/mnt/host/fs_backup/{.t_files,.x_files}", where t/x have the same meaning as
tar's 'T'/'X' options. Now I can back up the directory with:
# fs_backup [ -t full ] pages
# fs_backup -t diff pages
# fs_backup -t incr pages
These do a full, differential, or incremental backup respectively.
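Roughly speaking, for each type the script ends up building a tar pipeline like
the sketch below (simplified; the real script records reference times in the
table file, and the paths and option handling here are only illustrative). The
.t_files/.x_files lists registered with -a become tar's -T and -X arguments, and
diff/incr add a --newer-mtime limit:

import os, subprocess, time

def run_backup(cfg, identity, btype, since=None):
    # sketch: archive name from the config templates, then tar + gzip
    stamp = time.strftime(cfg['time_fmt'])
    workdir = os.path.join(cfg['datadir'], identity)
    archive = os.path.join(workdir, '%s.%s.%s.tgz' % (identity, btype, stamp))
    cmd = ['tar', '-c', '-z', '-f', archive,
           '-T', os.path.join(cfg['datadir'], '.t_files'),
           '-X', os.path.join(cfg['datadir'], '.x_files')]
    if btype in ('diff', 'incr') and since:
        # diff: since the last full backup; incr: since the last backup of any type
        cmd.append('--newer-mtime=' + since)
    return subprocess.call(cmd)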
Now the directory is big, nearly 8 GB. When I run "fs_backup -t full pages" on
the local file system it works fine, but when the target is on a mounted smbfs
share:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 32G 20G 11G 64% /
none 1013M 0 1013M 0% /dev/shm
/dev/sdb1 135G 46G 82G 36% /data
//store/homes 307G 27G 280G 9% /mnt/host
the process exits with an error like this:
2007-03-02 10:44:16 fs_backup ERROR
gzip: stdout: File too large
tar: /mnt/host/fs_backup/at_pages/at_pages.full.20070302-094405.tgz: Wrote
only 8192 of 10240 bytes
tar: Error is not recoverable: exiting now
I also tried wrapping bzip2 instead of gzip, but the error remains:
bzip2: I/O or other error, bailing out. Possible reason follows.
bzip2: File too large
Input file = (stdin), output file = (stdout)
tar: /mnt/host/fs_backup/at_pages/at_pages.full.20070301-145957.tgz : Wrote
only 8192 of 10240 bytes
tar: Error is not recoverable: exiting now
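"File too large" usually means the write hit a file-size limit (EFBIG) rather
than running out of space. I have not confirmed the cause, but a quick way to
test whether the mount itself rejects files over 2 GiB, independent of tar and
gzip, would be something like this (the test path is only an example):

import os

path = '/mnt/host/fs_backup/large.test'   # example path on the mount
try:
    with open(path, 'wb') as f:
        f.seek(2 * 1024 * 1024 * 1024)    # just past the 2 GiB boundary
        f.write(b'\0')
    print('writes past 2 GiB are accepted')
except OSError as err:
    print('writes past 2 GiB fail:', err)
finally:
    if os.path.exists(path):
        os.remove(path)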
Can anyone help me?
Thank you very much.
Best regards,