search for: dir5

Displaying 20 results from an estimated 23 matches for "dir5".

Did you mean: dir
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. NODE 1: File: ‘/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey’ Size: 0 Blocks: 38 IO Block: 131072 regular empty file Device: 23h/35d Inode: 744413 Links: 2 Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN) Access: 2018-05-15 08:54:20.296048887 +0200 M...
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
...abi wrote: > Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. > > NODE 1: > > File: ‘/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey’ > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > Device: 23h/35d Inode: 744413 Links: 2 > Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN) > Access: 2018-05-15 08:54...
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
...nk you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. > > > > NODE 1: > > > > File: ‘/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey’ > > > > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > > > > Device: 23h/35d Inode: 744413 Links: 2 > > > > Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN) > >...
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards Dietmar On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on master volume? no, trashcan is also enabled on slave. settings are the same as on master but trashcan on slave is complete
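The thread itself does not show the exact commands used to enable the trashcan, so as a hedged sketch only: the trash translator is a per-volume option in GlusterFS, and checking/enabling it on the master (and likewise on the geo-replication slave) might look like this, with "mastervol" as a placeholder volume name:

  # Sketch, not from the thread: inspect and enable the trash feature
  gluster volume get mastervol features.trash
  gluster volume set mastervol features.trash on
  # repeat the same two commands against the slave volume so the
  # settings match on both ends of the geo-replication session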
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
...fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. > > > > > > NODE 1: > > > > > > File: ‘/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey’ > > > > > > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > > > > > > Device: 23h/35d Inode: 744413 Links: 2 > > > > > > Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) G...
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote: > Dear all, > > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. > > It looks like this bug is not resolved as I just got right now 3 unsynched files on my arbiter node
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
...answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. >>>> >>>> NODE 1: >>>> >>>> File: ‘/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey’ >>>> >>>> Size: 0 Blocks: 38 IO Block: 131072 regular empty file >>>> >>>> Device: 23h/35d Inode: 744413 Links: 2 >>>> >>>> Access: (0644/-rw-r--r--) Uid: (20936/ U...
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all, I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. It looks like this bug is not resolved as I just got right now 3 unsynched files on my arbiter node like I used to do before upgrading. This problem started
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...ast by this mailing list I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > > > NODE1: > > > > STAT: > > > > File: ‘/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile’ > > > > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > > > > Device: 23h/35d Inode: 6822549 Links: 2 > > > > Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOW...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...r xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry I should have been clearer. Yes the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` After...
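As a hedged aside for readers following this exchange, one way to verify those xattrs before and after removing them is to dump them on the arbiter brick; the path and volume name below are the ones quoted in the thread, and the rest is a sketch rather than a command from the thread:

  # Sketch: dump all trusted.* xattrs (hex-encoded) on the brick copy of the file
  getfattr -d -m . -e hex /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile
  # after the setfattr -x commands above, re-run getfattr to confirm the
  # trusted.afr.myvol-private-client-0/1 keys are gone, then trigger a heal
  # (volume name assumed from the brick path)
  gluster volume heal myvol-private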
2009 Dec 26
2
Question regarding if statement in while loop
...et(aux, select=c(3,5:7,9:10,13))) a <<- EBLUP.area(Y,cbind(w,1),sigma2ei,n) #The EBLUP.area function is a function already in R. } # It gives a bunch of output, some of what I need. #THIS IS THE LOOP I'M HAVING A PROBLEM WITH: results <- data.frame(length=nrow(dir5)) i <- 3 while (i <=some number) { eblest(i, dir5, sterr5, weight5, aux5) out <<- cbind(i, a$EBLUP, a$mse) results <- cbind(results, out) i <- i+1 } *********************************************************************** I have tried running the eblest function for a specific set...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...t; > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? > > Sorry I should have been clearer. Yes the brick on the 3rd node. > > `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` > > `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfil...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past by this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: ‘/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile’ Size: 0 Blocks: 38 IO Block: 131072 regular empty file Device: 23h/35d Inode: 6822549 Links: 2 Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN) Access: 2018...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...e: > As suggested in the past by this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File: ‘/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile’ > Size: 0 Blocks: 38 IO Block: 131072 regular empty file > Device: 23h/35d Inode: 6822549 Links: 2 > Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNO...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...>> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? >> Sorry I should have been clearer. Yes the brick on the 3rd node. >> >> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile` >> >> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problem...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
.../myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09 06:58:54.178133] E [MSGID: 113015] [posix.c:1208:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfile.zip/OC_DEFAULT_MODULE [No such file or directory] Hope that helps to find out the issue. ------- Original Message ------- On April 9, 2018 9:37 AM, mabi <mabi at protonmail.ch> wrote: > > Hello, > > Last Friday I...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...ry is not empty. For your information I have upgraded my GlusterFS in offline mode and the upgrade went smoothly. What can I do to fix that issue? Best regards, Mabi [2018-04-09 06:58:46.906089] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 0-myvol-private-dht: renaming /dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/azipfile.zip (hash=myvol-private-replicate-0/cache=myvol-private-replicate-0) => /dir1/di2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfile.zip (hash=myvol-private-replicate-0/cache=<nul>) [2018-04-09 06:58:53.692440] W [MSGID: 114031] [c...
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
...'t work: $ rm -rf /tmp/foo $ rsync -ai --min-size 10M --prune-empty-dirs /home/idallen/test /tmp/foo cd+++++++++ test/ cd+++++++++ test/dir1/ cd+++++++++ test/dir2/ cd+++++++++ test/dir3/ cd+++++++++ test/dir4/ >f+++++++++ test/dir4/BIGFILE cd+++++++++ test/dir5/ >f+++++++++ test/dir5/BIGFILE cd+++++++++ test/dir6/ >f+++++++++ test/dir6/BIGFILE Wrong. I don't want all those dir1, dir2, dir3 empty directories. I don't want *any* empty directories, at any level. What am I missing? -- | Ian! D. Allen - idallen@idallen.ca - Ot...
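One generic workaround, not taken from this thread, is to sidestep the --min-size / --prune-empty-dirs interaction entirely by pre-building the file list with find, so rsync only ever sees the large files and never creates the directories that would end up empty; the paths follow the example above and the whole thing is a hedged sketch:

  # Sketch: let find do the size filtering, then hand the list to rsync
  cd /home/idallen
  find test -type f -size +10M > /tmp/bigfiles.txt
  rsync -ai --files-from=/tmp/bigfiles.txt . /tmp/foo
  # --files-from only creates the implied parent directories of the listed
  # files, so dir1..dir3 (which contain no big files) are never copied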
2008 Jan 24
1
zfs showing more filesystem using ls than df actually has
...21 dev drwxr-xr-x 2 root sys 512 Dec 18 16:20 devices dr-xr-xr-x 2 root root 512 Jan 17 2007 devl dr-xr-xr-x 2 root root 512 Jan 17 2007 dhpg drwxrwxrwx 3 root root 512 Jun 21 2007 dir3 drwxr-xr-x 2 root root 512 Jun 21 2007 dir5 drwxrwxrwx 4 root root 512 Oct 3 13:14 dnadir dr-xr-xr-x 2 root root 512 Jan 17 2007 doe dr-xr-xr-x 2 root root 512 Jan 17 2007 dteast -rw-r--r-- 1 root root 838656 Mar 26 2007 dumpevolution -rw-r--r-- 1 root root 553 Apr 4 20...
2013 Feb 10
3
Re: Diff using send-receive code
...file system snapshots using the send receive code. The output of our utility looks like this- (I've tested it on a small subvol with minimal changes just to give an idea) root@nafisa-M-6319:/mnt/btrfs# btrfs sub diff -p /mnt/btrfs/snap1 /mnt/btrfs/snap2 Directory Deleted path = dir5 File Deleted path = 1.c File Written path = 3.c data written : "3.c was changed" File Moved path from = 4.c path to = dir1/4.c new files created = 0 new dir created = 0 files deleted = 1 changed links = 1 dirs deleted = 1 files written = 1 We want the diff output to look mor...