Displaying 20 results from an estimated 49 matches for "dir4".
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster.
NODE 1:
File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
Size: 0 Blocks: 38 IO Block: 131072 regular empty file
Device: 23h/35d Inode: 744413 Links: 2
Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN)
Access: 2018-05-15 08:54:20.296048887 +0...
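The per-node check requested in the thread can be sketched locally. The path below is a temporary stand-in for the real brick path (on a real cluster you would run the same two commands, as root, against the file's path on each node's brick, not on the fuse mount):

```shell
# Hypothetical stand-in for the file's path on a node's brick; on a real
# cluster substitute the actual brick path of the affected file.
DEMO_FILE=$(mktemp)
# Basic metadata, as posted for each node in the thread
stat "$DEMO_FILE"
# Dump all extended attributes in hex; on a Gluster brick this is where
# the trusted.afr.* and trusted.gfid keys used to judge split-brain live.
getfattr -d -m . -e hex "$DEMO_FILE" 2>/dev/null || true
```

The getfattr flags (`-d` dump values, `-m .` match every key, `-e hex` hex encoding) are the usual form for Gluster diagnostics; without `-m .` the trusted.* namespace is not shown.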
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
...PM, mabi wrote:
> Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster.
>
> NODE 1:
>
> File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
> Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> Device: 23h/35d Inode: 744413 Links: 2
> Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN)
> Access: 2018-05-15...
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
...; Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster.
> >
> > NODE 1:
> >
> > File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
> >
> > Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> >
> > Device: 23h/35d Inode: 744413 Links: 2
> >
> > Access: (0644/-rw-r--r--) Uid: (20936/ UNKNOWN) Gid: (20936/ UNKNOWN)
>...
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
...your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster.
> > >
> > > NODE 1:
> > >
> > > File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
> > >
> > > Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> > >
> > > Device: 23h/35d Inode: 744413 Links: 2
> > >
> > > Access: (0644/-rw-r--r--) Uid: (20936/ UNKNO...
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote:
> Dear all,
>
> I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread.
>
> It looks like this bug is not resolved, as I just now got 3 unsynced files on my arbiter node
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
...fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster.
>>>>
>>>> NODE 1:
>>>>
>>>> File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey'
>>>>
>>>> Size: 0 Blocks: 38 IO Block: 131072 regular empty file
>>>>
>>>> Device: 23h/35d Inode: 744413 Links: 2
>>>>
>>>> Access: (0644/-rw-r--r--) Uid: (209...
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all,
I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug, but unfortunately I notice that I still have exactly the same problem as initially posted in this thread.
It looks like this bug is not resolved, as I just now got 3 unsynced files on my arbiter node, just as before the upgrade. This problem started
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...the past by this mailing list I now ran a stat and getfattr on one of the problematic files on all nodes and, at the end, a stat on the fuse mount directly. The output is below:
> >
> > NODE1:
> >
> > STAT:
> >
> > File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
> >
> > Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> >
> > Device: 23h/35d Inode: 6822549 Links: 2
> >
> > Access: (0644/-rw-r--r--) Uid: (20909/ U...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...ed.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry I should have been clearer. Yes the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`
`setfattr -x trusted.afr.myvol-private-client-1
/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`...
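The removal step quoted above can be demonstrated on a plain local file. `user.demo` below is a hypothetical stand-in for the trusted.afr.<vol>-client-N keys (those live on the brick itself and need root to modify); on a real volume the usual follow-up is to trigger self-heal, e.g. with `gluster volume heal myvol-private`:

```shell
# Local illustration of the setfattr -x reset from the thread.
f=$(mktemp -p .)                        # stand-in for the brick file
setfattr -n user.demo -v pending "$f"   # fake "pending changelog" marker
setfattr -x user.demo "$f"              # same -x (remove) form as above
# The key is gone, so a targeted read now fails:
getfattr -n user.demo "$f" 2>/dev/null || echo "user.demo removed"
rm -f "$f"
```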
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...t; >
> > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
>
> Sorry I should have been clearer. Yes the brick on the 3rd node.
>
> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`
>
> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problemat...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and, at the end, a stat on the fuse mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38 IO Block: 131072 regular empty file
Device: 23h/35d Inode: 6822549 Links: 2
Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/ UNKNOWN)
Access:...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...wrote:
> As suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and, at the end, a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
> Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> Device: 23h/35d Inode: 6822549 Links: 2
> Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOWN) Gid: (20909/...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...>>> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
>> Sorry I should have been clearer. Yes the brick on the 3rd node.
>>
>> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`
>>
>> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/pr...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
.../data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09 06:58:54.178133] E [MSGID: 113015] [posix.c:1208:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfile.zip/OC_DEFAULT_MODULE [No such file or directory]
Hope that helps to find out the issue.
------- Original Message -------
On April 9, 2018 9:37 AM, mabi <mabi at protonmail.ch> wrote:
>
>
> Hello,
>
> Last Fri...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...rectory is not empty.
For your information I have upgraded my GlusterFS in offline mode and the upgrade went smoothly.
What can I do to fix that issue?
Best regards,
Mabi
[2018-04-09 06:58:46.906089] I [MSGID: 109066] [dht-rename.c:1741:dht_rename] 0-myvol-private-dht: renaming /dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/azipfile.zip (hash=myvol-private-replicate-0/cache=myvol-private-replicate-0) => /dir1/di2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfile.zip (hash=myvol-private-replicate-0/cache=<nul>)
[2018-04-09 06:58:53.692440] W [MSGID: 11403...
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
...directory
hierarchies that contain the big files. This doesn't work:
$ rm -rf /tmp/foo
$ rsync -ai --min-size 10M --prune-empty-dirs /home/idallen/test /tmp/foo
cd+++++++++ test/
cd+++++++++ test/dir1/
cd+++++++++ test/dir2/
cd+++++++++ test/dir3/
cd+++++++++ test/dir4/
>f+++++++++ test/dir4/BIGFILE
cd+++++++++ test/dir5/
>f+++++++++ test/dir5/BIGFILE
cd+++++++++ test/dir6/
>f+++++++++ test/dir6/BIGFILE
Wrong. I don't want all those dir1, dir2, dir3 empty directories.
I don't want *any* empty directories, at any level.
What...
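One workaround sketch (my assumption, not a fix quoted from the thread): generate the list of big files first with find, then hand it to rsync via --files-from, so only the directories that actually contain a qualifying file are ever created:

```shell
# Demo tree standing in for /home/idallen/test
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/dir1" "$src/dir4"
dd if=/dev/zero of="$src/dir4/BIGFILE" bs=1M count=11 status=none
# %P = path relative to $src, the form --files-from expects
find "$src" -type f -size +10M -printf '%P\n' > /tmp/biglist
rsync -ai --files-from=/tmp/biglist "$src/" "$dst/"
ls "$dst"   # only dir4 appears; dir1 is never created
```

With --files-from, rsync creates just the implied directories for the listed paths, so the empty dir1/dir2/dir3 problem above never arises.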
2016 Dec 14
4
[PATCH 0/4] sysprep: Remove various backup files.
https://bugzilla.redhat.com/show_bug.cgi?id=1401320
This series contains two new operations.
The second -- and least controversial -- is "passwd-backups" which
removes files such as /etc/passwd-, /etc/shadow- and so on.
The first one ("backup-files") searches the whole guest filesystem for
any regular file which looks like an editor backup file, such as "*~"
and
2016 Dec 14
5
[PATCH v3 0/5] sysprep: Remove various backup files.
v3:
- Split out test for "unix-like" guest OSes into separate commit.
- Add guestfish --format=qcow2 to the test (x2).
Rich.
2007 Nov 24
3
Share root directory appears in subdirectories. (Well, can't actually see it but can cd into it, even if it's not there.) (Serious bug?)
...h), according to smbd -V.
As mount helper I use mount.cifs, compiled from samba-3.0.26a.
The kernels on the server and client are the Debian default kernels
(2.6.18-5-486 and 2.6.18-5-686).
The directory structure looks like:
/dir1/dir2/dir3
where dir2 is the mountpoint.
If I 'cd' into dir4 from dir3, I see the contents of dir2. It may have to
do with the fact that the name of dir4 is the
same as dir2 ...
Example:
/coffee/cup$ ls
Dir contents of cup
/coffee/cup$ cd foo
/coffee/cup/foo$ ls
cup, water
/coffee/cup/foo$ cd cup
/coffee/cup/foo/cup$ ls
The contents of /coffee/cu...
2016 Dec 14
6
[PATCH v2 0/4] sysprep: Remove various backup files.
In v2:
- The backup-files operation now operates on a conservative whitelist
of filesystems, so it won't touch anything in /usr. Consequently
it also runs much more quickly, about 4 seconds on the barebones
virt-builder fedora-25 image.
- Call Gc.compact () in visit_tests.
- Added documentation to fnmatch.mli.