
Displaying 20 results from an estimated 100 matches similar to: "hardlinking and -R (multiple source directories)"

2013 Apr 06
0
rsync 3.0.9 partial file left after CTRL-C WITHOUT using --partial
Hi Justin, no, I did a test setup after running into it in real life. In this test setup: rsync (daemonless) is run, ctrl-c is pressed, the results are checked, the dest directory is deleted, and so on. In 50% of runs the file is left over. I can reproduce it on a big, fat, slow (relative to a small ext4 partition on the same LVM) XFS partition. If I do this on root, even if I manage to ctrl-c at the right time, I
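A sketch of that test loop, under the assumption that SIGINT from kill is equivalent to pressing ctrl-c (all paths are hypothetical):

    #!/bin/sh
    # Untested sketch of the reproduction described above: start a copy,
    # interrupt it mid-transfer, then check whether a partial file was
    # left behind even though --partial was never given.
    for i in 1 2 3 4 5; do
        rsync -a /mnt/xfs-src/ /mnt/xfs-dst/ &
        pid=$!
        sleep 2
        kill -INT "$pid"     # same signal as ctrl-c
        wait "$pid"
        ls -l /mnt/xfs-dst/  # any leftover file is the reported bug
        rm -rf /mnt/xfs-dst/
    done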
2013 Apr 05
3
Fwd: rsync 3.0.9 partial file left after CTRL-C WITHOUT using --partial
Hi folks, the man page says "By default, rsync will delete any partially transferred file if the transfer is interrupted". I have, reproducibly, a partial file left if I do CTRL-C. source-dir: mounted LVM XFS; dest-dir: see source-dir. Ubuntu 12.04.1 (LTS), kernel 3.2.0-39-generic. command: rsync -a
2013 Aug 02
2
hardlinking and -R (multiple source directories)
Hi, hardlinking (-H) works perfectly with a syntax like -avhxSDH <SRC> <DEST>. Now I have to mirror multiple SRC directories which contain hardlinks, e.g. src1/a is a hardlink to src2/b. -RavhxSDH SRC1 SRC2 DEST does not preserve the hardlink between a and b in DEST. Is there any chance to do that? Thanks, lopiuh
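A hedged sketch of one thing worth trying (untested; the /backups parent and /dest are hypothetical): with --relative, a /./ marker in each source path sets where the preserved path starts, and passing both trees through a single rsync run gives -H a chance to see both ends of each hardlink.

    # Untested sketch: anchor both sources under one parent with "/./" so
    # --relative recreates src1/ and src2/ under DEST, and -H can pair
    # hardlinked inodes it encounters across the two trees.
    rsync -avhxSDH --relative /backups/./src1 /backups/./src2 /dest/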
2015 Apr 28
3
Options for a "I'm done" flag file
As part of my backup system, I use Rsync to keep a copy of each server on one central backup server. This backup server then uses StoreBackup to keep multiple iterations of each clone directory. So that the StoreBackup archives don't keep adding "redundant" and misleading backups, I update a flag file with the current date/time before doing the Rsync update, and test to see if this
2015 Apr 28
0
Options for a "I'm done" flag file
rsync -av /src/ /dst/ && touch /dst/done That should do it, as the touch only happens if rsync exits with a code of 0. If you need to accept other non-zero exit codes, it is still doable, just a bit more shell code. There are surely other options as well, but this is probably the simplest. On Apr 28, 2015 3:47 AM, "Simon Hobson" <linux at thehobsons.co.uk> wrote:
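A sketch of the "a bit more shell code" variant mentioned above: it also treats rsync exit code 24 ("some source files vanished during transfer") as success, which is often harmless when backing up a live system.

    #!/bin/sh
    # Update the flag file only when rsync finished cleanly (0) or merely
    # lost files that vanished mid-transfer (24).
    rsync -av /src/ /dst/
    rc=$?
    if [ "$rc" -eq 0 ] || [ "$rc" -eq 24 ]; then
        touch /dst/done
    fi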
2007 Dec 04
2
backup / compressed copy
Hi, I am thinking about buying a NAS server from Sun. To back up this server I want to use our central to-tape backup. For whatever reason, people are asking me to make one compressed copy to disk and only back up this copy. So, to reduce load, I'd like to have a script that: identifies changed files only (using md5?), copies them, and compresses them. storeBackup.pl does something similar, but keeps
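One possible shape for such a script, sketched under the assumption that rsync's checksum mode is acceptable as the md5-style change detector (all paths are hypothetical):

    #!/bin/sh
    # Untested sketch. Step 1: let rsync find changed files by checksum
    # and copy them into a staging tree.
    rsync -a --checksum /mnt/nas/ /backup/stage/
    # Step 2: compress whatever arrived since the last run, keeping the
    # originals (-k, GNU gzip >= 1.6) so the next rsync pass still sees
    # them; a real script would likely compress into a separate tree.
    find /backup/stage -type f ! -name '*.gz' -newer /backup/.lastrun -exec gzip -k {} +
    touch /backup/.lastrun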
2016 Jun 19
1
rsync script for snapshot backups
On 19.06.2016 at 19:27, Simon Hobson wrote: > Dennis Steinkamp <dennis at lightandshadow.tv> wrote: > >> I tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. >> After creating the initial full backup, the following backups should only contain "new data" and the rest will be referenced
2016 Jun 19
0
rsync script for snapshot backups
Dennis Steinkamp <dennis at lightandshadow.tv> wrote: > I tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. > After creating the initial full backup, the following backups should only contain "new data" and the rest will be referenced via hardlinks (--link-dest) > ... > Well, it works but
2013 Dec 02
2
symlink in -R src_dirlist and real dirs on target
Hi folks, I have a bunch of directories to mirror via rsync, with lots of hardlinked files spread across these directories. Therefore I use -R (--relative) and -H. So far OK, but: I create symlinks to the source directories via script, because the src dirs have changing names (date and time of backup) and I want to have constant directory names on the target. How can I achieve that? I thought -k
2016 Jun 20
1
rsync script for snapshot backups
The scripts I use analyze the rsync log after it completes and then sftp a summary to the root of the just-completed rsync. If no summary is found, or the summary says it failed, the folder rotation for that set is skipped and that folder is re-used on the subsequent rsync. The key here is that the folder rotation script runs separately from the rsync script(s). For each entity I want
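A local-disk sketch of that scheme (the sftp step is omitted and all names are hypothetical): the backup script records the outcome at the root of the completed copy, and the separate rotation job refuses to rotate unless it finds an "ok" marker there.

    #!/bin/sh
    # Untested sketch: run the copy, then leave a machine-readable verdict
    # where the rotation script will look for it.
    LOG=/var/log/backup/run.log
    if rsync -a --log-file="$LOG" /src/ /backups/current/; then
        echo "ok $(date -u +%FT%TZ)"     > /backups/current/.summary
    else
        echo "failed $(date -u +%FT%TZ)" > /backups/current/.summary
    fi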
2015 Jul 13
6
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 15:40:51 +0100, Simon Hobson wrote: > The thing here is that you are into "backup" tools rather than the > general purpose tool that rsync is intended to be. Yes, that is true. Rsync serves so well as a core component of backup that I can be blind about "something other than rsync". I'll look at the tools you suggest. However, you've made me
2015 May 07
0
Backup PC or other solution
Hello Alessandro, Wednesday, May 6, 2015, 9:21:10 PM, you wrote: > I'm new with backup ops and I'm searching for a good system to accomplish this > work. Everybody has their favorite backup program, but why rely on only one system? I have to back up 8 servers and use three backup systems in parallel. -- BackupPC: easy to use, nice user interface with graphical recovery of individual
2015 Jul 13
0
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
Andrew Gideon <c182driver1 at gideon.org> wrote: > These both bring me to the idea of using some file system auditing > mechanism to drive - perhaps with an --include-from or --files-from - > what rsync moves. > > Where I get stuck is that I cannot envision how I can provide rsync with > a limited list of files to move that doesn't deny the benefit of --link-dest
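A hedged sketch of one way around that tension (untested; all paths and the change list are hypothetical, and it assumes some audit tool has already written one changed path per line relative to the source root): pre-populate the new snapshot with hardlinks, then let --files-from touch only the changed entries.

    # Untested sketch: first link the whole previous snapshot into the new
    # one, then overwrite only the audited changes. Unchanged files stay
    # as hardlinks without rsync having to scan them; deletions would
    # still need separate handling.
    cp -al /backups/2015-07-12 /backups/2015-07-13
    rsync -a --files-from=/tmp/changed.list /data/ /backups/2015-07-13/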
2016 Jun 19
5
rsync script for snapshot backups
Hey guys, I tried to create a simple rsync script that should create daily backups from a ZFS storage and put them into a timestamp folder. After creating the initial full backup, the following backups should only contain "new data" and the rest will be referenced via hardlinks (--link-dest). This was at least a simple enough scenario to achieve with my pathetic scripting skills.
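One minimal shape for such a script, sketched with hypothetical paths; the "latest" symlink is an assumption used to locate the previous snapshot for --link-dest (on the first run rsync warns that it does not exist and simply makes a full copy).

    #!/bin/sh
    # Untested sketch: each run lands in a timestamp folder; unchanged
    # files are hardlinked from the previous snapshot via --link-dest.
    SRC=/tank/data/
    DST=/backups
    STAMP=$(date +%Y-%m-%d_%H%M%S)
    rsync -a --link-dest="$DST/latest" "$SRC" "$DST/$STAMP" \
        && ln -snf "$DST/$STAMP" "$DST/latest"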
2012 Aug 27
7
Deduplication data for CentOS?
Hi list, is there any working solution for deduplication of data for CentOS? We are trying to find a solution for our backup server, which runs a bash script invoking xdelta(3), but having this functionality in the fs would be much more friendly... We have looked into lessfs, SDFS and ddar. Are these filesystems ready to use (on CentOS)? ddar is something different, I know. Thanks, Rainer
2015 Jul 13
3
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 02:19:23 +0000, Andrew Gideon wrote: > Look at tools like inotifywait, auditd, or kfsmd to see what's easily > available to you and what best fits your needs. > > [Though I'd also be surprised if nobody has fed audit information into > rsync before; your need doesn't seem all that unusual given ever-growing > disk storage.] I wanted to take this
2014 May 16
9
Centos backup tools
Hi all! I'm building a RAID box to use for backups; connectivity will be either USB3 or eSATA. I'm looking for suggestions on backup software I can use. I know there's rsync, which may be a good solution. I also found BackupPC in EPEL, backintime also in EPEL, and kbackup. DejaDup looks interesting, but none of the repos I'm set up to use shows it as available. Some small details: I
2015 May 06
10
Backup PC or other solution
Hi list, I'm new with backup ops and I'm searching for a good system to accomplish this work. I know that on CentOS there are Bacula and Amanda, but they are too tape-oriented. Another issue is that they are very powerful but more complex. I need a solution for a small office with disk storage, and I found BackupPC. Many people say that it is great for small stuff and for large amounts of data. What do
2015 Jul 02
1
cut-off time for rsync ?
On Wed, Jul 01, 2015 at 02:05:50PM +0100, Simon Hobson said: > As I read this, the default is to look at the file size/timestamp, and if they match then do nothing, as the files are assumed to be identical. So unless you have specified otherwise, files which have already been copied should be ignored - the check should be quite low in CPU, at least compared to the "cost" of
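For reference, a minimal illustration of the two modes being contrasted (paths hypothetical):

    # Default quick check: skip files whose size and mtime already match.
    rsync -av /src/ /dst/
    # Opt-in full comparison: -c/--checksum reads and checksums every file
    # on both sides, which costs far more I/O and CPU.
    rsync -avc /src/ /dst/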
2003 Nov 19
1
daily back (incremental backup )
Hello, I have a Novell NetWare file server, which is mounted as /mnt/novell on one of my Linux machines. I want to take a backup of the Novell server to my Linux machine every day. I don't want a full backup every day; I need an incremental backup, and I do not want to delete any old directories or files. I have taken a script from the rsync examples, and
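A minimal sketch of the shape such a job can take (the /backup path is hypothetical): omitting any --delete option means rsync only adds and updates files, which gives the "never delete old files" behaviour asked for, and the default quick check skips unchanged files, making each run effectively incremental.

    #!/bin/sh
    # Untested sketch: daily additive copy of the mounted Novell volume.
    # No --delete flag, so nothing is ever removed on the Linux side.
    rsync -av /mnt/novell/ /backup/novell/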