Displaying 20 results from an estimated 8000 matches similar to: "[Bug 3099] Please parallelize filesystem scan"
2015 Jul 17
0
[Bug 3099] Please parallelize filesystem scan
Sounds to me like maintaining the metadata cache is important - and tuning the
filesystem to do so would be more beneficial than caching writes, especially
with a backup target where a write already written will likely never be read
again (and isn't a big deal if it is, since so few files are changed compared
to the total number of inodes to scan).
Your report of the minutes for the re-sync shows the
2015 Jul 01
5
cut-off time for rsync ?
> If your goal is to reduce storage, and scanning inodes doesn't matter,
> use --link-dest for targets. However, that'll keep a backup for every
> time that you run it, by link-desting yesterday's copy.
The goal was not to reduce storage; it was to reduce work. A full
rsync takes more than the whole night, and the destination server is
almost unusable for anything else when it
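For reference, the --link-dest pattern under discussion looks roughly like
this - a sketch with hypothetical paths, where unchanged files become hard
links into the previous day's snapshot instead of fresh copies:

  # Link unchanged files against yesterday's snapshot
  rsync -a --delete --link-dest=/backup/2015-06-30 /home/ /backup/2015-07-01/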
2015 Jul 02
1
cut-off time for rsync ?
> What is taking time, scanning inodes on the destination, or recopying the entire
> backup because of either source read speed, target write speed or a slow interconnect
> between them?
It takes hours to traverse all these directories with their loads of small
files on the backup server. That is the limiting factor - not even the
copying, just checking the timestamp and size of the old copies.
2017 Mar 03
2
How do you exclude a directory that is a symlink?
Considering you can't INCLUDE a directory that is a symlink... which would
be really handy right now for me to resolve a mapping of 103 -> meaningful_name
for backups. Instead I'm resorting to temporary bind mounts of 103 onto
meaningful_name, and when the bind mount isn't there, the --del is emptying
meaningful_name accidentally at times.
I think both situations could benefit from a
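The bind-mount workaround mentioned above might look like the following
sketch (paths are illustrative, built from the 103 -> meaningful_name
example; the mountpoint check guards against the empty-source --del
accident described):

  # Expose the opaque directory under a readable name
  mount --bind /srv/103 /srv/meaningful_name
  # Only sync when the bind mount is really in place, so --del never
  # sees an (apparently) empty source
  mountpoint -q /srv/meaningful_name && \
    rsync -a --del /srv/meaningful_name/ dest:/backups/meaningful_name/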
2015 Apr 15
1
Can I let rsync only transfer a part of a file within specific byte ranges?
Hi all,
Suppose I have a file on the remote rsync server:
rsync://path/to/myfile
And I want to retrieve only a part of the file, based on a byte range, to
my local host - say 0-499, meaning only the first 500 bytes of that file
are transferred.
Is this possible with the rsync client?
Regards
--
.: Hongyi Zhao [ hongyi.zhao AT gmail.com ] Free as in Freedom :.
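As far as I know, the rsync client has no byte-range option; when shell
access to the remote host exists, something like this (hypothetical host
and path) fetches the first 500 bytes instead:

  # Not rsync: read bytes 0-499 of the remote file over ssh
  ssh remotehost dd if=/path/to/myfile bs=500 count=1 status=none > myfile.part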
2015 Jul 16
1
Fwd: rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 17:38:35 -0400, Selva Nair wrote:
> As with any dedup solution, performance does take a hit and it's often
> not worth it unless you have a lot of duplication in the data.
This is so only for some volumes in our case, but it appears that zfs
permits this to be enabled/disabled on a per-volume basis. That would
work for us.
Is there a way to save cycles by offering zfs
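If I remember the ZFS side correctly, the per-volume switch is just a
dataset property, set along these lines (pool/dataset names are
placeholders):

  # Enable dedup only on the dataset that actually holds duplicated data
  zfs set dedup=on tank/backups
  zfs get dedup tank/backups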
2015 Jun 15
1
rsync very slow with large include/exclude file list
I investigated the rsync code and found the reason why.
For every file in the source, it searches the entire filter list looking to
see if that filename is on the exclude/include list. Most aren't, so it
compares (350K - 72K) * 72K names (the non-listed files) plus (72K * 72K/2)
names (the ones that are listed), for a total of about 22,608,000,000
strcmp's. That's 22 BILLION
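The arithmetic checks out; a quick shell expansion confirms the total:

  # (non-listed files) * list length + listed files * half the list length
  echo $(( (350000 - 72000) * 72000 + 72000 * 72000 / 2 ))
  # prints 22608000000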
2015 Jul 02
8
[Bug 11378] New: Please add a '--line-buffered' option to rsync to make logging/output more friendly with pipes/syslog/CI systems/etc.
https://bugzilla.samba.org/show_bug.cgi?id=11378
Bug ID: 11378
Summary: Please add a '--line-buffered' option to rsync to make
logging/output more friendly with pipes/syslog/CI
systems/etc.
Product: rsync
Version: 3.1.1
Hardware: All
OS: All
Status: NEW
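Until such an option exists, the usual workaround attempt is to force line
buffering from outside, e.g. with stdbuf - though that only helps if the
program buffers through stdio, which is exactly why a built-in option was
requested:

  # May or may not help, depending on rsync's internal buffering
  stdbuf -oL rsync -av src/ dst/ | logger -t rsync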
2015 Apr 16
3
rsync --delete
Hi, rsync folks.
I want rsync's help deleting a folder with a large number of files and folders. I tried this:
rsync -a --no-D --delete /dev/null /home/rc-41/data/000000000000061/2015-04-01-07-04/
skipping non-regular file "null"
rsync -a --no-D --delete /dev/zero /home/rc-41/data/000000000000061/2015-04-01-07-04/
skipping non-regular file "zero"
That's what I get with
rsync -a
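For what it's worth, the usual form of this trick uses an empty directory,
not a device node, as the source:

  # Empty the target by syncing an empty directory over it
  mkdir -p /tmp/empty
  rsync -a --delete /tmp/empty/ /home/rc-41/data/000000000000061/2015-04-01-07-04/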
2015 Jun 30
4
cut-off time for rsync ?
Hi,
I used to rsync a /home with thousands of home directories every
night, although only a hundred or so would be used on a typical day,
and many of them have not been used for ages. This became too large a
burden on the poor old destination server, so I switched to a script
that uses "find -ctime -7" on the source to select recently used homes
first, and then rsyncs only those. (A
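A sketch of that kind of script (untested; the cut-off and destination are
illustrative, and note that -a does not imply recursion when --files-from
is used, hence the explicit -r):

  # Pick home directories changed in the last 7 days, sync only those
  find /home -mindepth 1 -maxdepth 1 -type d -ctime -7 -printf '%P\n' \
    | rsync -a -r --files-from=- /home/ backupserver:/backup/home/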
2015 Apr 15
1
rsync --link-dest won't link even if existing file is out of date
On 04/14/2015 11:35 PM, Henri Shustak wrote:
>> I'll take a look, but I imagine I can't back up the 80 million files
>> I need to in under the 5 hours I have for nightly
>> maintenance/backups. Currently it's possible by recycling
>> directories...
I would expect that recycling directories actually makes this worse.
With an
2015 Apr 16
2
Recycling directories and backup performance. Was: Re: rsync --link-dest won't link even if existing file is out of date (fwd)
rsync folks,
Henri Shustak <henri.shustak at gmail.com> wrote:
> LBackup always starts a new backup snapshot with an empty directory. I
> have been looking at extending --link-dest options to scan beyond just
> the previous successful backup to (failed backups / older backups).
> However, there are all kinds of edge cases which are worth considering
> with such a change. At
2015 Apr 06
3
rsync --link-dest won't link even if existing file is out of date
This has been a consideration. But it pains me that a tiny change/addition
to the rsync option set would save much time and space for other legitimate
use cases.
We know rsync very well; we don't know ZFS very well (licensing kept the
tech out of our Linux-centric operations). We've been using it, but we're
not experts yet.
Thanks for the suggestion.
/kc
On Mon, Apr 06, 2015 at 12:07:05PM
2015 Jul 13
6
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 15:40:51 +0100, Simon Hobson wrote:
> The thing here is that you are into "backup" tools rather than the
> general-purpose tool that rsync is intended to be.
Yes, that is true. Rsync serves so well as a core component of backup that
I can be blind to "something other than rsync".
I'll look at the tools you suggest. However, you've made me
2015 Jul 13
3
rsync --link-dest and --files-from lead by a "change list" from some file system audit tool (Was: Re: cut-off time for rsync ?)
On Mon, 13 Jul 2015 02:19:23 +0000, Andrew Gideon wrote:
> Look at tools like inotifywait, auditd, or kfsmd to see what's easily
> available to you and what best fits your needs.
>
> [Though I'd also be surprised if nobody has fed audit information into
> rsync before; your need doesn't seem all that unusual given ever-growing
> disk storage.]
I wanted to take this
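Nobody in the thread posted a concrete pipeline, but a minimal
inotify-to-rsync sketch (paths hypothetical; deletions and batching left
out) might be:

  # Record paths that change under /data; deletions need separate handling
  inotifywait -m -r -e modify,create --format '%w%f' /data \
    >> /var/tmp/changed.list &

  # Later: dedupe the list and sync just those files
  sort -u /var/tmp/changed.list | sed 's|^/data/||' \
    | rsync -a --files-from=- /data/ backup:/backup/data/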
2005 Sep 15
4
[Bug 3099] Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099
wayned@samba.org changed:
What       | Removed | Added
-----------+---------+-------------
Severity   | normal  | enhancement
Status     | NEW     | RESOLVED
Resolution |         | WONTFIX
2015 Apr 06
6
rsync --link-dest won't link even if existing file is out of date
Feature request: allow --link-dest dir to be linked to even if file exists
in target.
This statement from the man page is adhered to too strongly IMHO:
"This option works best when copying into an empty destination hierarchy, as
rsync treats existing files as definitive (so it never looks in the link-dest
dirs when a destination file already exists)".
I was surprised by this behaviour
2005 Sep 14
0
[Bug 3099] New: Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099
Summary: Please parallelize filesystem scan
Product: rsync
Version: 2.6.4
Platform: All
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P3
Component: core
AssignedTo: wayned@samba.org
ReportedBy: hpa@zytor.com
QAContact:
2013 Feb 10
0
[Bug 3099] Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099
--- Comment #6 from Arie Skliarouk <skliarie at gmail.com> 2013-02-10 06:45:30 UTC ---
Is there any hope of this bug being resolved? It is really inconvenient to
have a production database down for twice as long as is really necessary.
2015 Jul 17
0
[Bug 3099] Please parallelize filesystem scan
https://bugzilla.samba.org/show_bug.cgi?id=3099
--- Comment #7 from Rainer <rainer at voigt-home.net> ---
Hi,
I'm experiencing the very same problem: I'm trying to sync a set of VMware
disk files (about 2.5TB) with not too many changes, and direct copying is
still faster than the checksumming by quite a large margin, because of the
sequential checksumming on source and target just
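For what it's worth, --whole-file (-W) tells rsync to skip the delta
algorithm entirely, which matches the "direct copying" found faster here;
a hedged sketch with hypothetical paths:

  # Stream changed files whole instead of checksumming both sides
  rsync -a --whole-file /vmfs/volumes/datastore1/ backup:/vm-backups/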