Displaying 20 results from an estimated 24 matches for "kenchase23".
2015 Jul 17
3
[Bug 3099] Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099
--- Comment #8 from Chip Schweiss <chip at innovates.com> ---
I would argue that all directory scanning should optionally be done in parallel.
Modern file systems perform best when request queues are kept full. The
way rsync currently scans directories does nothing to take advantage of
this.
I currently use scripts to split a couple
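A rough sketch of that kind of split (my illustration, not from the bug
report), assuming GNU xargs and rsync and using made-up paths - one rsync
per top-level directory, four running at a time:

  # One rsync per top-level directory, four in parallel.
  # Host and paths are hypothetical; directory names must not contain spaces.
  cd /srv/data &&
  ls -d */ | xargs -P4 -I{} rsync -a {} backuphost:/backups/data/{}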
2017 Mar 03
2
How do you exclude a directory that is a symlink?
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Jul 17
0
[Bug 3099] Please parallelize filesystem scan
>--
>Ken Chase - ken at heavycomputing.ca skype:kenchase23 Toronto Canada
>Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Jul 01
5
cut-off time for rsync ?
> If your goal is to reduce storage, and scanning inodes doesn't matter,
> use --link-dest for targets. However, that'll keep a backup for every
> time that you run it, by link-desting yesterday's copy.
The goal was not to reduce storage; it was to reduce work. A full
rsync takes more than the whole night, and the destination server is
almost unusable for anything else when it
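For reference, a minimal sketch of the --link-dest rotation being suggested
above, with hypothetical /backup paths and GNU date:

  # Each run produces a complete-looking tree; files unchanged since
  # yesterday are hard-linked rather than stored again.
  today=$(date +%F)
  yesterday=$(date -d yesterday +%F)
  rsync -a --link-dest=/backup/$yesterday /source/ /backup/$today/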
2015 Apr 15
1
Can I let rsync only transer a part of file within specific byte ranges?
Hi all,
Suppose I have a file on the remote rsync server:
rsync://path/to/myfile
And I want to retrieve only a part of the file, based on a range of bytes,
to my local host - say 0-499, meaning only the first 500 bytes of that file
are transferred.
Is this possible with the rsync client?
Regards
--
.: Hongyi Zhao [ hongyi.zhao AT gmail.com ] Free as in Freedom :.
2015 Apr 16
0
rsync --delete
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Jul 01
0
cut-off time for rsync ?
...that are older than one week on
>the source side is a waste of time and effort, as the rsync is done
>every day, so they can safely be assumed to be in sync already.
>
>Dirk van Deun
>--
>Ceterum censeo Redmond delendum
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Jul 02
1
cut-off time for rsync ?
> What is taking time, scanning inodes on the destination, or recopying the entire
> backup because of either source read speed, target write speed or a slow interconnect
> between them?
It takes hours to traverse all these directories with loads of small
files on the backup server. That is the limiting factor. Not
even copying: just checking the timestamp and size of the old copies.
2015 Jul 02
1
cut-off time for rsync ?
...linux (or freebsd or any unix) can be told to cache
metadata more aggressively than data - not much point for the latter on a backup
server. The former would be great. I don't know how big the metadata is in
RAM, per inode, for typical OSes either.
/kc
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
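One knob that seems relevant to the metadata-caching idea above (my
assumption, not something stated in the thread): on Linux, setting
vm.vfs_cache_pressure below its default of 100 biases the kernel toward
keeping dentry/inode caches in RAM instead of reclaiming them.

  # Check the current value, then favour keeping metadata cached.
  sysctl vm.vfs_cache_pressure
  sudo sysctl -w vm.vfs_cache_pressure=50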
2015 Jul 16
1
Fwd: rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 17:38:35 -0400, Selva Nair wrote:
> As with any dedup solution, performance does take a hit and it's often
> not worth it unless you have a lot of duplication in the data.
This is so only in some volumes in our case, but it appears that zfs
permits this to be enabled/disabled on a per-volume basis. That would
work for us.
Is there a way to save cycles by offering zfs
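For what it's worth, a sketch of the per-dataset toggle mentioned above
(pool and dataset names are hypothetical):

  # Dedup is a per-dataset ZFS property and only affects blocks written
  # after it is enabled.
  zfs set dedup=on tank/backups
  zfs get dedup tank/backups
  zfs set dedup=off tank/scratch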
2015 Apr 17
1
Recycling directories and backup performance. Was: Re: rsync --link-dest won't link even if existing file is out of date (fwd)
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2018 Jan 24
1
glob exclude vs include behaviour
...ser user 4096 Jan 24 14:17 ..
drwxr-xr-x 2 user user 4096 Jan 24 14:16 a
/tmp$ ls -la bar/foo/a
total 8
drwxr-xr-x 2 user user 4096 Jan 24 14:16 .
drwxr-xr-x 3 user user 4096 Jan 24 14:16 ..
where's .baz in the copy?
how does * match .baz?
/kc
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Jun 15
1
rsync very slow with large include/exclude file list
I investigated the rsync code and found the reason why.
For every file in the source, it searches the entire filter-list looking to
see if that filename is on the exclude/include list. Most aren't, so it
compares (350K - 72K) * 72K names (the non-listed files) plus (72K * 72K/2)
names (the ones that are listed), for a total of about 22,608,000,000
strcmp's. That's 22 BILLION
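The arithmetic behind that figure, using the 350K source files and 72K
filter entries quoted above:

  # Non-listed files scan the whole 72K list; listed ones scan half of it
  # on average (needs a 64-bit shell for the result to fit).
  echo $(( (350000 - 72000) * 72000 + 72000 * 72000 / 2 ))
  # prints 22608000000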
2015 Jun 30
0
cut-off time for rsync ?
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Jul 14
1
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Apr 16
3
rsync --delete
Hi, Rsync.
I want to use rsync to delete a folder with a large number of files and folders. I tried this:
rsync -a --no-D --delete /dev/null /home/rc-41/data/000000000000061/2015-04-01-07-04/
skipping non-regular file "null"
rsync -a --no-D --delete /dev/zero /home/rc-41/data/000000000000061/2015-04-01-07-04/
skipping non-regular file "zero"
That's how it turns out
rsync -a
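The variant that usually gets suggested for this (not shown in the excerpt
above) is to sync an empty directory over the target so that --delete does
the removal:

  # rsync deletes everything in the target that is absent from the
  # (empty) source; the target path is the one from the commands above.
  mkdir -p /tmp/empty
  rsync -a --delete /tmp/empty/ /home/rc-41/data/000000000000061/2015-04-01-07-04/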
2015 Apr 07
0
Patch for rsync --link-dest won't link even if existing file is out of date (fwd)
...I have detailed an example of this scenario at
http://unix.stackexchange.com/questions/193308/rsyncs-link-dest-option-does-not-link-identical-files-if-an-old-file-exists
which also indicates --delete-before and --whole-file do not help at all.
/kc
--
Ken Chase - ken at heavycomputing.ca skype:kenchase23 +1 416 897 6284 Toronto Canada
Heavy Computing - Clued bandwidth, colocation and managed linux VPS @151 Front St. W.
2015 Apr 16
2
Recycling directories and backup performance. Was: Re: rsync --link-dest won't link even if existing file is out of date (fwd)
rsync folks,
Henri Shustak <henri.shustak at gmail.com> wrote:
> LBackup always starts a new backup snapshot with an empty directory. I
> have been looking at extending --link-dest options to scan beyond just
> the previous successful backup to (failed backups / older backups).
> However, there are all kinds of edge cases which are worth considering
> with such a change. At
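As far as I know, rsync already accepts several --link-dest directories and
checks them in the order given, which covers at least the "older backups"
part of that idea; a sketch with hypothetical snapshot paths:

  # Unchanged files are hard-linked from the first listed snapshot that
  # still holds a matching copy.
  rsync -a \
      --link-dest=/backup/2015-04-15 \
      --link-dest=/backup/2015-04-14 \
      /source/ /backup/2015-04-16/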
2015 Apr 15
1
rsync --link-dest won't link even if existing file is out of date
On 04/14/2015 11:35 PM, Henri Shustak wrote:
>> I'll take a look, but I imagine I can't back up the 80 million files
>> I need to in under the 5 hours I have for nightly
>> maintenance/backups. Currently it's possible by recycling
>> directories...
I would expect that recycling directories actually makes this worse.
With an
2015 Jul 13
6
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 15:40:51 +0100, Simon Hobson wrote:
> The thing here is that you are into "backup" tools rather than the
> general-purpose tool that rsync is intended to be.
Yes, that is true. Rsync serves so well as a core component of a backup
system that I can be blind to "something other than rsync".
I'll look at the tools you suggest. However, you've made me
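A rough sketch of the "change list" idea in the subject line, with GNU find
standing in for the audit tool and all paths hypothetical - unchanged files
are carried over as hard links first, then only the listed files are
re-copied:

  # Hard-link yesterday's snapshot, then overwrite just the recently
  # changed files (deletions on the source are not handled here).
  cp -al /backup/2015-07-12 /backup/2015-07-13
  find /source -type f -mtime -1 -printf '%P\n' > /tmp/changed.list
  rsync -a --files-from=/tmp/changed.list /source/ /backup/2015-07-13/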