Displaying 19 results from an estimated 19 matches for "m829".
2017 Mar 03
2
How do you exclude a directory that is a symlink?
Considering you can't INCLUDE a directory that is a symlink... which would
be really handy right now for resolving a mapping of 103 -> meaningful_name
for backups. Instead I'm resorting to temporary bind mounts of 103 onto
meaningful_name, and when the bind mount isn't there, the --del is emptying
meaningful_name accidentally at times.
I think both situations could benefit from a
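The bind-mount workaround described above might be sketched roughly as follows; all paths are hypothetical, the mount step needs root, and the populated-check is an assumed safeguard rather than anything from the original post:

```shell
#!/bin/sh
# Sketch: map a numeric container dir onto a meaningful name via a bind
# mount, and refuse to run the destructive sync unless the mapping is
# actually populated (guarding against the accidental-emptying failure
# mode described above). Paths and host names are hypothetical.
backup_via_bind_mount() {
    src=$1    # e.g. /var/lib/vz/private/103
    map=$2    # e.g. /backup-maps/meaningful_name
    dest=$3   # e.g. backuphost:/backups/meaningful_name/

    mkdir -p "$map"
    mount --bind "$src" "$map" || return 1   # requires root

    # Only sync when the mapping is non-empty, so --del can never
    # wipe the destination because of a missing mount.
    if [ -n "$(ls -A "$map")" ]; then
        rsync -a --del "$map"/ "$dest"
    fi
    umount "$map"
}
```

A plain symlink would be simpler, but as the post notes, rsync will not treat a symlinked directory given this way as the directory itself without extra measures, hence the bind mount.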
2015 Apr 06
3
rsync --link-dest won't link even if existing file is out of date
This has been a consideration. But it pains me that a tiny change/addition
to the rsync option set would save much time and space for other legit use
cases.
We know rsync very well; we don't know ZFS very well (licensing kept the
tech out of our linux-centric operations). We've been using it but we're
not experts yet.
Thanks for the suggestion.
/kc
On Mon, Apr 06, 2015 at 12:07:05PM
2015 Jul 02
1
cut-off time for rsync ?
On Wed, Jul 01, 2015 at 02:05:50PM +0100, Simon Hobson said:
>As I read this, the default is to look at the file size/timestamp and if
they match then do nothing as they are assumed to be identical. So unless
you have specified this, then files which have already been copied should be
ignored - the check should be quite low in CPU, at least compared to the
"cost" of
2018 Oct 08
2
rsync --server command line options
Hello,
I ran the following commands:
rsync /tmp/foo remote:
rsync remote:/tmp/foo .
On the remote computer, the following commands were executed:
rsync --server -e.LsfxC . .
rsync --server --sender -e.LsfxC . /tmp/foo
Does anyone know what the three dots/periods in the
above two commands mean? The first command ends with two dots (". .") and
the second command has one
2015 Jul 14
1
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
And what's performance like? I've heard that many COW systems' performance
drops through the floor when there are many snapshots.
/kc
On Tue, Jul 14, 2015 at 08:59:25AM +0200, Paul Slootman said:
>On Mon 13 Jul 2015, Andrew Gideon wrote:
>>
>> On the other hand, I do confess that I am sometimes miffed at the waste
>> involved in a small change to a very
2015 Apr 06
0
rsync --link-dest won't link even if existing file is out of date
...t ZFS requires considerable hardware resources
(CPU & memory) to perform well. It also requires you to learn a whole new
terminology to wrap your head around it.
It's certainly not a trivial swap to say the least...
Thanks,
-Clint
On Mon, Apr 6, 2015 at 9:12 AM, Ken Chase <rsync-list-m829 at sizone.org>
wrote:
> This has been a consideration. But it pains me that a tiny change/addition
> to the rsync option set would save much time and space for other legit use
> cases.
>
> We know rsync very well; we don't know ZFS very well (licensing kept the
> tech out of ou...
2015 Apr 07
0
Patch for rsync --link-dest won't link even if existing file is out of date (fwd)
---------- Forwarded message ----------
Date: Mon, 6 Apr 2015 01:51:21 -0400
From: Ken Chase <rsync-list-m829 at sizone.org>
To: rsync at lists.samba.org
Subject: rsync --link-dest won't link even if existing file is out of date
Feature request: allow --link-dest dir to be linked to even if file exists
in target.
This statement from the man page is adhered to too strongly IMHO:
"This option...
2015 Apr 06
6
rsync --link-dest won't link even if existing file is out of date
Feature request: allow --link-dest dir to be linked to even if file exists
in target.
This statement from the man page is adhered to too strongly IMHO:
"This option works best when copying into an empty destination hierarchy, as
rsync treats existing files as definitive (so it never looks in the link-dest
dirs when a destination file already exists)".
I was surprised by this behaviour
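For context on the feature request, a typical --link-dest snapshot run looks like the sketch below (paths hypothetical); unchanged files become hard links into yesterday's tree and cost no new space:

```shell
#!/bin/sh
# Typical --link-dest snapshot (hypothetical paths): files that pass the
# quick check against yesterday's tree are hard-linked there instead of
# being copied again.
snapshot() {
    src=$1; yesterday=$2; today=$3
    rsync -a --link-dest="$yesterday" "$src"/ "$today"/
}
# e.g. snapshot /data /backups/yesterday /backups/today
```

The complaint above is that once a file already exists in today's tree, even a stale copy, rsync updates that copy in place and never consults the --link-dest dir for it.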
2015 Apr 16
0
rsync --delete
Wow, it took me a few seconds to figure out what you were trying to do.
What's wrong with rm?
Also, I think trying to leverage the side effect of disqualifying all source files
just to get the delete effect (very clever but somewhat obtuse!) risks
creating a temporary file of some kind in the target at the start of the
operation, and if you can't even mkdir then that exceeds the disk quota
immediately
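The usual form of the rsync-as-delete trick under discussion is to sync an empty directory over the target with --delete; a minimal sketch, with hypothetical paths:

```shell
#!/bin/sh
# Empty a huge directory by syncing a guaranteed-empty dir over it with
# --delete; sometimes faster than rm -rf for very large trees.
rsync_empty() {
    empty=$(mktemp -d)               # guaranteed-empty source
    rsync -a --delete "$empty"/ "$1"/
    rmdir "$empty"
}
# e.g. rsync_empty /bulk/old-data
```

Note that even this needs mktemp to succeed somewhere writable; as the post observes, if mkdir itself fails under quota the trick cannot start.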
2015 May 15
0
feature request: rsync dereference symlinks on cmdline
This post
http://unix.stackexchange.com/questions/153262/get-rsync-to-dereference-symlinked-dirs-presented-on-cmdline-like-find-h
explains most of what I want; basically, I'm looking for a find -H option for rsync.
The reason is so that I can hit a source (or target!) dir in rsync by making a nice
dir of symlink maps.
For example, OpenVZ names its containers with ID#s, which isn't very conducive to
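Short of a true find -H analogue, two things in the current option set (per the rsync man page) get close for command-line symlinks to directories; the ID numbers and names below are hypothetical:

```shell
#!/bin/sh
# Build a "dir of symlink maps" from numeric container IDs to
# meaningful names (hypothetical IDs/names).
make_map() {
    mapdir=$1; id_dir=$2; name=$3
    mkdir -p "$mapdir"
    ln -sfn "$id_dir" "$mapdir/$name"
}
# Then, per the rsync man page:
#   rsync -a "$mapdir"/meaningful_name/ dest/      # trailing slash resolves
#                                                  # through the symlink
#   rsync -a --copy-dirlinks "$mapdir"/ dest/      # -k treats symlinked
#                                                  # dirs as real dirs
```

Neither is exactly -H semantics (--copy-dirlinks applies to every symlinked dir in the transfer, not just command-line arguments), which is presumably why the feature is being requested.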
2015 Jul 01
0
cut-off time for rsync ?
What is taking time, scanning inodes on the destination, or recopying the entire
backup because of either source read speed, target write speed or a slow interconnect
between them?
Do you keep a full new backup every day, or are you just overwriting the target
directory?
/kc
On Wed, Jul 01, 2015 at 10:06:57AM +0200, Dirk van Deun said:
>> If your goal is to reduce storage, and scanning
2015 Sep 09
0
large rsync fails with assertion error
Ok I found a bug about this:
https://bugzilla.samba.org/show_bug.cgi?id=6542
and it says fixed by upgrade. I found a way to upgrade. Using:
rsync version 3.1.1 protocol version 31
on receiving side that issues the rsync command, and
rsync version 3.1.1 protocol version 31
on the remote sending side.
I'm still getting the same thing:
rsync: hlink.c:126: match_gnums: Assertion `gnum >=
2018 Oct 09
0
rsync --server command line options
. is the 'current directory' notation in unix.
.. is the parent directory.
/kc
On Mon, Oct 08, 2018 at 01:57:09PM -0700, Parke via rsync said:
>Hello,
>
>I ran the following commands:
>
>rsync /tmp/foo remote:
>rsync remote:/tmp/foo .
>
>On the remote computer, the following commands were executed:
>
>rsync --server -e.LsfxC . .
2015 Apr 17
1
Recycling directories and backup performance. Was: Re: rsync --link-dest won't link even if existing file is out of date (fwd)
How do you handle snapshotting? Or do you leave that to the block/fs virtualization
layer?
/kc
On Fri, Apr 17, 2015 at 01:35:27PM +1200, Henri Shustak said:
>> Our backup procedures have provision for looking back at previous directories, but there is not much to be gained with recycled directories. Without recycling, and after a failure, the latest available backup may not have much
2018 Jan 24
1
glob exclude vs include behaviour
Not a bug, but colour me confused:
/tmp/foo$ mkdir a
/tmp/foo$ touch a/foo
/tmp/foo$ touch a/.baz
/tmp/foo$ cd ..
/tmp$ rsync -avP --exclude=a/* foo bar
sending incremental file list
created directory bar
foo/
foo/a/
sent 71 bytes received 20 bytes 182.00 bytes/sec
total size is 0 speedup is 0.00
/tmp$ ls -la bar
total 20
drwxr-xr-x 3 user user 4096 Jan 24 14:17 .
drwxrwxrwt 70 root
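What the transcript shows is consistent with the man page's filter rules: a pattern containing a slash but no leading slash ('a/*') is matched against the end of each path, so the contents of foo/a (including the dotfile) are excluded, but the directory itself still transfers. A sketch of the reproduction, with the pattern quoted so the shell cannot glob-expand it first:

```shell
#!/bin/sh
# Reproduce the behaviour above: --exclude='a/*' skips a's contents but
# still creates the (now empty) directory; --exclude='a/' would drop the
# directory as well.
repro() {
    d=$(mktemp -d); cd "$d"
    mkdir -p foo/a
    touch foo/a/foo foo/a/.baz
    rsync -a --exclude='a/*' foo bar   # bar/foo/a/ is created, but empty
}
```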
2015 Apr 16
0
rsync --delete
The problem is he's trying to rsync into the target dir and have the
side effect of delete. So an empty dir would necessarily need to be
in the target, of course, and thus created there, triggering the quota block.
He tried to avoid this by using device files and then 'blocking all device files',
but I think rsync figures out first that there's nothing to do, so it just stops
and doesn't do the
2015 Jun 30
0
cut-off time for rsync ?
If your goal is to reduce storage, and scanning inodes doesn't matter,
use --link-dest for targets. However, that'll keep a backup for every
time that you run it, by link-desting yesterday's copy.
You end up with a backup tree dir per day, with files hardlinked against
all other backup dirs. My solution (and that of many others here) is to
mv $ancientbackup $today; rsync --del
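The recycling scheme described above might be sketched like this; the directory layout and arguments are hypothetical, and picking which snapshot counts as "ancient" and "yesterday" is left to the caller:

```shell
#!/bin/sh
# Recycle the oldest snapshot dir as today's target, so rsync only has
# to fix up what changed since that old snapshot, while --link-dest
# hardlinks anything unchanged since yesterday.
rotate_and_backup() {
    src=$1; ancient=$2; today=$3; yesterday=$4
    mv "$ancient" "$today"   # reuse the oldest tree as today's target
    rsync -a --del --link-dest="$yesterday" "$src"/ "$today"/
}
# e.g. rotate_and_backup /data /backups/2015-01-01 \
#                        /backups/2015-07-01 /backups/2015-06-30
```

One caveat worth noting: with only a single existing snapshot, "ancient" and "yesterday" would be the same directory, so the mv must not be allowed to run in that case.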
2015 Sep 09
2
large rsync fails with assertion error
rsyncing a tree of perhaps 30M files, getting this:
rsync: hlink.c:126: match_gnums: Assertion `gnum >= hlink_flist->ndx_start' failed.
then a bit more output and
2015 Jul 17
0
[Bug 3099] Please parallelize filesystem scan
Sounds to me like maintaining the metadata cache is important - and tuning the
filesystem to do so would be more beneficial than caching writes, especially
with a backup target where a write already written will likely never be read
again (and isn't a big deal if it is, since so few files are changed compared to
the total # of inodes to scan).
Your report of the minutes for the re-sync shows the