Displaying 16 results from an estimated 16 matches for "c182driver1".
2015 Jul 16
1
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Tue, 14 Jul 2015 08:59:25 +0200, Paul Slootman wrote:
> btrfs has support for this: you make a backup, then create a btrfs
> snapshot of the filesystem (or directory), then the next time you make a
> new backup with rsync, use --inplace so that just changed parts of the
> file are written to the same blocks and btrfs will take care of the
> copy-on-write part.
That's
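A minimal sketch of that workflow, assuming /backup/current is itself a btrfs subvolume (paths and snapshot names are illustrative):
    # update the working copy in place; --no-whole-file keeps the delta
    # algorithm active for a local destination so unchanged blocks stay untouched
    rsync -a --inplace --no-whole-file /data/ /backup/current/
    # freeze the result as a read-only, copy-on-write snapshot
    btrfs subvolume snapshot -r /backup/current /backup/snap-$(date +%F)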
2015 Jul 13
0
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
Andrew Gideon <c182driver1 at gideon.org> wrote:
> These both bring me to the idea of using some file system auditing
> mechanism to drive - perhaps with an --include-from or --files-from -
> what rsync moves.
>
> Where I get stuck is that I cannot envision how I can provide rsync with
> a limited l...
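For what it's worth, the rsync end of that idea is straightforward; the sticking point is producing the list. A hedged sketch, with hypothetical file names, of handing rsync a limited set of paths relative to the source root:
    # changed.list holds one path per line, relative to /data
    rsync -a --files-from=/var/tmp/changed.list /data/ backuphost:/backup/current/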
2011 Aug 10
1
Purpose of --checksum-seed ?
I'm trying to understand the point of the --checksum-seed option. As I
understand it from a little reading, checksums are not cached over
executions of rsync. So...what is the point of fixing the seed?
Is this in support of patches which *do* support caching of checksums?
I've read about caching these in files and in xattrs. Is there a "best
solution" for caching
2015 Jul 13
6
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 15:40:51 +0100, Simon Hobson wrote:
> The thing here is that you are into "backup" tools rather than the
> general-purpose tool that rsync is intended to be.
Yes, that is true. Rsync serves so well as a core component of backup that I
can be blind to "something other than rsync".
I'll look at the tools you suggest. However, you've made me
2015 Jul 13
3
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 02:19:23 +0000, Andrew Gideon wrote:
> Look at tools like inotifywait, auditd, or kfsmd to see what's easily
> available to you and what best fits your needs.
>
> [Though I'd also be surprised if nobody has fed audit information into
> rsync before; your need doesn't seem all that unusual given ever-growing
> disk storage.]
I wanted to take this
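A sketch of the inotifywait end of such a pipeline (paths are illustrative; a real setup would also need to capture deletes and renames, and de-duplicate the list before handing it to rsync --files-from):
    # append the path of every file written or moved into /data, relative to /data
    inotifywait -m -r -e close_write -e moved_to --format '%w%f' /data \
        | sed 's|^/data/||' >> /var/tmp/changed.list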
2015 Jul 13
0
cut-off time for rsync ?
On Thu, 02 Jul 2015 20:57:06 +1200, Mark wrote:
> You could use find to build a filter to use with rsync, then update the
> filter every few days if it takes too long to create.
If you're going to do something of that sort, you might want instead to
consider truly tracking changes. This catches operations that find will
miss, such as deletes, renames, copies preserving timestamp
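The find-based variant Mark describes might look like the sketch below (timestamp file and paths are hypothetical); as noted, it catches modifications but not deletes or renames:
    # list files changed since the previous run, relative to /data,
    # then mark the new cut-off; feed the list to rsync --files-from
    find /data -newer /var/backups/last-run -type f -printf '%P\n' > /var/tmp/changed.list
    touch /var/backups/last-run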
2015 Jul 16
1
Fwd: rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 17:38:35 -0400, Selva Nair wrote:
> As with any dedup solution, performance does take a hit and it's often
> not worth it unless you have a lot of duplication in the data.
This is so for only some of our volumes, but it appears that zfs
permits this to be enabled/disabled on a per-volume basis. That would
work for us.
Is there a way to save cycles by offering zfs
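For reference, dedup is indeed a per-dataset property in zfs, so it can be enabled only where the data warrants it (pool and dataset names below are made up):
    zfs set dedup=on  tank/backups/mail    # duplicate-heavy volume
    zfs set dedup=off tank/backups/media   # leave the rest alone
    zfs get dedup tank/backups/mail        # confirm the setting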
2006 Aug 03
1
Patch to handle ACL differences
A while ago (2.6.2), I built and posted a patch which caused rsync to "do
the right thing" where --link-dest was being used and where files had been
changed only in their ACLs. I've recreated this for 2.6.8 (there were
some small differences).
I've tested this using --link-dest copying from Linux-Linux and
Linux-Solaris. I plan to test Solaris-Solaris too, of course.
But
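For context, the invocation the patch targets is roughly the one below; the intent, presumably, is that a file whose ACL (but not data) changed gets a fresh copy rather than a hard link back to the previous backup. Paths are illustrative, and -A requires rsync built with ACL support:
    rsync -aA --link-dest=/backups/2006-08-02 /data/ /backups/2006-08-03/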
2009 Mar 23
0
bwlimit only on server side
I want to use rsync under the control of (i.e. initiated from) the client
side, but with the bandwidth controlled by the server side. I can force
the bwlimit option on the command line executed on the server, but will
this make a difference given that the files are being sent from client to
server?
If not, is there some other way to limit this bandwidth inbound to the
server? I know that I can
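One possibility, sketched here with no promise that it fully throttles client-to-server traffic: interpose a wrapper on the server (via a forced command or by placing it ahead of the real binary in the remote PATH) that appends --bwlimit to whatever the client invokes:
    #!/bin/sh
    # hypothetical /usr/local/bin/rsync on the server
    exec /usr/bin/rsync --bwlimit=5000 "$@"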
2010 May 20
1
Speeding rsync via externalities like file system choice
Using rsync --link-dest, I end up with a file system that has a
relatively large number of directory entries but a relatively small number
of inodes. Copying this volume takes hours...far more than other volumes
of similar size. I blame the much larger amount of directory traversal
(and comparison between source and destination) that is occurring.
I'm using ext3, but I'm not wed to
2009 Nov 29
2
Any way to predict the amount of data to be copied when re-copying a file?
I do backups using rsync, and - every so often - a file takes far longer
than it normally does. These are large data files which typically change
only a little over time.
I'm guessing that these large transfers are caused by occasional changes
that "break" (i.e. yield poor performance in) the "copy only changed
pages" algorithm. But I've no proof of this. And I
2008 Nov 02
2
Problem with extended ACLs in 3.0.4?
I've been using a 2.6.2 that I modified myself to get ACLs as I like.
I'm trying now to get back into the public version of rsync, but am
finding difficulties.
This one seems pretty basic. It's on a CentOS 4.5 machine with rsync rpm
rsync-3.0.4-1.el4.rf and kernel 2.6.9-55.0.2.plus.c4. After the
operation, f1 and f2 should have identical ACLs. They don't.
[root@house0
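A quick way to confirm the mismatch on the two files mentioned (getfacl -c omits the per-file comment header so only the ACL entries themselves are compared; the process substitution needs bash):
    diff <(getfacl -c f1) <(getfacl -c f2)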
2010 Jun 30
1
...failed: too many links (31)
We do backups using rsync --link-dest. On one of our volumes, we just
hit a limit in ext3 which generated the error:
rsync: link "..." => ... failed: Too many links (31)
This appears to be related to a limit in the number of directory entries
to which an inode may be connected. In other words, it's a limit on the
number of hard links that can exist to a given file. This
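For reference, ext3 caps the number of hard links to a single inode at 32000. Files in the backup tree that are approaching that ceiling can be located with find (the threshold below is arbitrary):
    # %n in -printf is the hard-link count
    find /backups -type f -links +31000 -printf '%n %p\n'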
2013 Jul 11
1
remote rsync exit code 0: is this a bug?
Hello:
[I apologize if this is a repeat. I had to rebuild my posting profile,
and I think I didn't do it correctly before I sent a previous version
of this.]
We use rsync as a part of a home-grown backup solution. In the specific
case at hand, we're using rsync to copy volumes off-site. The "sending"
server invokes rsync to transfer each volume to the off-site archive.
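In a setup like this the usual defence is for the invoking script to capture and act on rsync's exit status itself rather than relying on the remote side; a minimal, hypothetical sketch:
    rsync -a /vol/ archive:/offsite/vol/
    rc=$?
    # 24 means source files vanished during the transfer, which a live
    # volume can legitimately produce; anything else non-zero is a failure
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 24 ]; then
        echo "rsync exited with code $rc" >&2
        exit "$rc"
    fi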
2012 Mar 22
1
Help debugging an issue with --fuzzy --fuzzy and --link-dest
I've identified a situation where the combination of --fuzzy --fuzzy
(yes: two of them) and --link-dest is not behaving as I'd expect. I'm
first wondering if my expectation is wrong. Assuming that it is not,
then I'm wondering how best to figure out the problem.
The double use of --fuzzy is based upon
https://bugzilla.samba.org/show_bug.cgi?id=4056 which should be present on
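The combination in question looks like the command below (paths are illustrative); per that bug report, giving --fuzzy twice is meant to let the fuzzy basis-file search also look inside the --link-dest directory:
    rsync -a --fuzzy --fuzzy --link-dest=/backups/2012-03-21 /data/ /backups/2012-03-22/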
2009 Sep 27
5
LVM snapshots vs. --link-dest
I currently do incremental backups using --link-dest. Unchanged files
are hard links to the previous snapshot; changed files are new copies.
Where this "fails" is for large files that have received small changes.
The directory containing my main IMAP account, for example, typically
generates between 1 and 2 G of daily backup data as I file messages in my
inbox. Yesterday, though,
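For context, the pattern described is the usual daily --link-dest invocation (dates are illustrative): unchanged files become hard links into the previous day's tree, while changed files, however slightly changed, land as full new copies, which is exactly the cost being described.
    rsync -a --link-dest=/backups/2009-09-26 /home/ /backups/2009-09-27/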