similar to: skip directory if checksum matches

Displaying 20 results from an estimated 10000 matches similar to: "skip directory if checksum matches"

2010 Jul 16
4
--compare-dest weirdness
Hi All, I am writing a backup program for my computer. A brief outline is as follows: running Ubuntu 10.04, 2 main partitions, / and /home, both ext3; 1 external USB HDD, ext3, mounted to /backups/main. Once every couple of days, rsync backs up everything worth backing up in the / and /home partitions to a folder /backups/main/Full, using the following command: "rsync -vrhRupElog
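The quoted command is cut off above, so the following is only a rough sketch of how --compare-dest is typically wired into this kind of layout (the dated Incr directory name is illustrative, and it assumes Full/ mirrors /home at its top level rather than a -R full-path layout). Files identical to the copy under Full/ are skipped, so only new or changed files land in the incremental directory:

    # hypothetical incremental pass against the existing Full/ backup
    rsync -av --compare-dest=/backups/main/Full/ \
        /home/ /backups/main/Incr-$(date +%F)/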
2018 Aug 21
2
[Bug 13587] New: Add a --dry-run way to show destination for each item
https://bugzilla.samba.org/show_bug.cgi?id=13587 Bug ID: 13587 Summary: Add a --dry-run way to show destination for each item Product: rsync Version: 3.1.2 Hardware: All OS: All Status: NEW Severity: normal Priority: P5 Component: core Assignee: wayned at samba.org
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested, you will find below the "stat" and "getfattr" output of one of the files and its parent directory from all three nodes of my cluster. NODE 1: File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey' Size: 0 Blocks: 38 IO Block: 131072 regular
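The snippet does not quote the commands themselves, but the per-node output shown is what stat and getfattr produce; roughly what gets run on each brick (the path below is a placeholder for the real brick path):

    # run on the bricks of node 1, node 2 and the arbiter
    stat /data/myvolume-private/brick/.../OC_DEFAULT_MODULE/filename.shareKey
    getfattr -d -m . -e hex \
        /data/myvolume-private/brick/.../OC_DEFAULT_MODULE/filename.shareKey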
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote: > Dear all, > > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. > > It looks like this bug is not resolved as I just got right now 3 unsynched files on my arbiter node
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all, I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. It looks like this bug is not resolved, as I just now got 3 unsynched files on my arbiter node, just as before the upgrade. This problem started
2007 Dec 09
3
OT: Rsync question
Hello All, I have an off-topic question about rsync and was wondering if some kind person could help with it. I have two servers, and each server has the same three directories on it: /dir1/ /dir2/ /dir3/. How would I achieve this using rsync? I have tried rsync -avrt --delete server_ip:/dir1/ /dir2/ /dir3/ /dir1/ /dir2/ /dir3/ but this does not do anything except give errors. Someone on IRC
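The attempted command passes a mix of remote and local paths as sources, with the last /dir3/ taken as the destination, which is most likely why it only produces errors. A minimal sketch of one straightforward alternative, assuming the goal is to mirror the three directories from the remote server onto the same local paths (server_ip and the directory names are the ones from the question):

    # one run per directory keeps the source/destination pairing unambiguous
    rsync -avt --delete server_ip:/dir1/ /dir1/
    rsync -avt --delete server_ip:/dir2/ /dir2/
    rsync -avt --delete server_ip:/dir3/ /dir3/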
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi, Please find below the answers to your questions. 1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume: Option Value ------ ----- cluster.quorum-type none 2) The .shareKey
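For reference, the Option/Value table quoted here looks like the output of gluster's per-option query; the setting can be checked directly with (volume name is a placeholder):

    gluster volume get myvolume-private cluster.quorum-type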
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi, Some questions: - Did you by any chance change the cluster.quorum-type option from the default value? - Is filename.shareKey supposed to be an empty file? It looks like the file was fallocated with the keep-size option but never written to. (On the 2 data bricks, stat output shows Size = 0, but non-zero Blocks and yet a 'regular empty file'). - Do you have some sort of a
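The fallocate-with-keep-size pattern described here (Size 0 but non-zero Blocks) is easy to reproduce locally; a small sketch with a throwaway file:

    # --keep-size allocates blocks without updating the reported file size
    fallocate --keep-size --length 16K /tmp/sharekey-test
    stat --format 'size=%s blocks=%b' /tmp/sharekey-test   # size=0, blocks > 0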
2007 Nov 21
2
Access control question.
Hello, I have a general administrative question concerning Samba shares. I have a large amount of data that about 25 users have limited access to. I only want these users to have access to a subset of this data, and I also want the users to see only what they have access to. So, for example, suppose the share looks like this: /smbshare /smbshare/dir1 /smbshare/dir2
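One common way to get this "only see what you may access" behaviour is Samba's hide unreadable option on the share, combined with filesystem permissions on the per-group subdirectories; a minimal smb.conf sketch (the share name comes from the example, the group name is illustrative):

    [smbshare]
        path = /smbshare
        valid users = @datausers
        # directories the connecting user cannot read are not listed;
        # actual access is still enforced by the permissions on dir1, dir2, ...
        hide unreadable = yes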
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer. Stupid question, but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? ------- Original Message ------- On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote: > > On 04/09/2018 04:36 PM, mabi wrote: > > >
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
Hello, I just wanted to ask if you have had time to look into this bug I am encountering, and if there is anything else I can do? For now, in order to get rid of these 3 unsynched files, shall I use the same method that was suggested to me earlier in this thread? Thanks, Mabi ------- Original Message ------- On May 17, 2018 11:07 PM, mabi <mabi at protonmail.ch> wrote: > > Hi Ravi,
2001 Nov 16
1
include/exclude directory question
I want to send a subset of the directories specified in "include" arguments to a client, but I am creating all the peer and parent directories as well, although they are empty. Here is basically what I'm doing, assuming I have /staging/upgrade/dir1, /staging/upgrade/dir2 and /staging/upgrade/dir3 on the source tree: --include "*/" --include upgrade/dir1/** --include
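With rsync releases newer than this 2001 post (2.6.7 and later), the usual cure for the empty peer and parent directories is --prune-empty-dirs (-m); a sketch along the lines of the rules quoted above (the destination is hypothetical):

    rsync -avm --include '*/' --include 'upgrade/dir1/**' \
        --exclude '*' /staging/ client:/staging/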
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
On 05/23/2018 12:47 PM, mabi wrote: > Hello, > > I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do? > > For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread? Sorry Mabi, I haven't had a chance to dig deeper into this. The workaround of
2007 Apr 30
3
Incremental backup and empty dirs
Hi, I use rsync with these options: OPTIONS="-a -u -z -v -S --delete-during --ignore-errors \ -b --backup-dir=${PATH_BACKUP}/${DATE_YESTERDAY} \ --exclude-from=$IGNORE" rsync $OPTIONS ${PATH_SRC} \ ${PATH_BACKUP}/current Everything works as it should: deleted files are transferred every day to a new catalog, determined by the variable ${DATE_YESTERDAY}. But
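For readability, the variables above expand to an invocation along these lines (same flags as quoted; paths are left as the poster's shell variables):

    rsync -a -u -z -v -S --delete-during --ignore-errors \
        -b --backup-dir="${PATH_BACKUP}/${DATE_YESTERDAY}" \
        --exclude-from="$IGNORE" \
        "${PATH_SRC}" "${PATH_BACKUP}/current"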
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38
2003 Feb 25
2
Difference in behaviour with --backup
When I do rsync -av --backup --backup-dir=/dir1/dir2/Backup \ /dir1/dir2/dir3 machine:/dir1/dir2/dir3 I get /dir1/dir2/Backup/dir3/... /Backup/dir3/dir4/... i.e. the tree under 'dir3' (my source tree) gets created under .../Backup. This is fine. But when I do the same thing with a single file like rsync -av --backup --backup-dir=/dir1/dir2/Backup \ /dir1/dir2/file
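The two cases are easy to compare against scratch paths before drawing conclusions; a local sketch (all paths are throwaway, not the poster's). Note that backups only appear on the second run, when an existing destination file is replaced:

    mkdir -p /tmp/bd/src/dir3 /tmp/bd/dest
    echo one > /tmp/bd/src/dir3/f
    rsync -a /tmp/bd/src/dir3 /tmp/bd/dest/        # seed the destination
    echo two > /tmp/bd/src/dir3/f
    rsync -a --backup --backup-dir=/tmp/bd/Backup /tmp/bd/src/dir3 /tmp/bd/dest/
    find /tmp/bd/Backup -type f                    # shows where the old copy went

Repeating the same two runs with a single file as the source shows the other case.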
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
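The backticked command is cut off above; its general shape, as a sketch (the file path is a placeholder for the problematic file inside the third node's brick, and the exact trusted.afr.* names to remove are the ones getfattr reports):

    # list the afr xattrs first, then remove the relevant ones on the 3rd brick
    getfattr -d -m . -e hex /data/myvol-private/brick/path/to/problematicfile
    setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/problematicfile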
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again thanks, that worked and I now have no more unsynched files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2013 Dec 04
1
Query on make
Greetings, I have created several scripts which need to be packaged. I have done my groundwork on rpmbuild. Let us say I have 4 directories with scripts in them: dir1, dir2, dir3 and dir4. I want to create different packages which will contain the compiled code (err.. shc): pack1: dir1,dir2 pack2: dir1,dir3 pack3: dir1,dir3 I want the resulting rpm packages to reside elsewhere (perhaps
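A rough sketch of how overlapping directory sets are often handled in a spec file: put the shared dir1 into a -common subpackage that the others require, instead of listing the same files in several %files sections. Everything below (package names, install prefix, tarball layout) is illustrative, and the pattern extends to pack3 in the same way:

    Name:           myscripts
    Version:        1.0
    Release:        1
    Summary:        Packaged shell scripts
    License:        GPL-2.0-or-later
    Source0:        %{name}-%{version}.tar.gz

    %description
    Scripts packaged from dir1..dir4.

    %package common
    Summary: shared scripts (dir1)
    %description common
    Scripts from dir1, required by the other subpackages.

    %package pack1
    Summary: scripts from dir2
    Requires: %{name}-common = %{version}-%{release}
    %description pack1
    Scripts from dir2 (dir1 comes in via -common).

    %package pack2
    Summary: scripts from dir3
    Requires: %{name}-common = %{version}-%{release}
    %description pack2
    Scripts from dir3 (dir1 comes in via -common).

    %prep
    %setup -q

    %install
    mkdir -p %{buildroot}/opt/myscripts
    cp -a dir1 dir2 dir3 %{buildroot}/opt/myscripts/

    %files common
    /opt/myscripts/dir1

    %files pack1
    /opt/myscripts/dir2

    %files pack2
    /opt/myscripts/dir3

Having the built rpms land somewhere other than the default tree is usually a matter of overriding the _rpmdir macro, e.g. rpmbuild --define '_rpmdir /some/where' -bb myscripts.spec.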
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
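The check referred to here is typically run as follows (volume name is a placeholder):

    gluster volume heal myvolume-private info
    # and, if the entries keep showing up, a manually triggered heal:
    gluster volume heal myvolume-private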