Displaying 20 results from an estimated 62 matches for "myvol".
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there was no network interruption, so I am wondering whether this is really a split-brain issue or something else.
I found some interesting log entries on the client log file (/var/log/glusterfs/myvol-private.log) which I have included below in this mail. It looks like some renaming has gone wrong because a directory is not empty.
For your information I have upgraded my GlusterFS in offline mode and the upgrade went smoothly.
What can I do to fix that issue?
Best regards,
Mabi
[2018-04-09 0...
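For reference (not part of the original message), a quick way to see what the replica itself reports for those pending files, assuming the volume name myvol-private taken from the client log path above:
# gluster volume heal myvol-private info
# gluster volume heal myvol-private info split-brain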
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...
> > As was suggested on this mailing list in the past, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below:
> >
> > NODE1:
> >
> > STAT:
> >
> > File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
> >
> > Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> >
> > Device: 23h/35d Inode: 6822549 Links: 2
> >
> > Access...
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As was suggested on this mailing list in the past, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38 IO Block: 131072 regular empty file
Device: 23h/35d Inode: 6822549 Links: 2
Access: (0644/-rw-r--r--) Uid: (20909/ UNKNOW...
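Not from the original post, but for reference the two checks described above amount to something like the following on each brick, using the path from the stat output:
stat "$F"
getfattr -d -e hex -m . "$F"
where $F is set to the problematic file's brick path shown above, followed by a final stat of the same file on the FUSE mount.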
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09 06:58:54.178133] E [MSGID: 113015] [posix.c:1208:posix_opendir] 0-myvol-private-posix: opendir...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...r answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry I should have been clearer. Yes the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`
`setfattr -x trusted.afr.myvol-private-client-1
/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/d...
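As a follow-up sketch not from the original reply: after removing those xattrs one would typically confirm they are gone and then let self-heal run again, reusing the same brick path as $F and the volume name from above:
getfattr -d -e hex -m trusted.afr "$F"   # should no longer list trusted.afr.myvol-private-client-*
gluster volume heal myvol-private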
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
...ion but how do I delete the trusted.afr xattrs on this brick?
> >
> > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
>
> Sorry I should have been clearer. Yes the brick on the 3rd node.
>
> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`
>
> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As was suggested on this mailing list in the past, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
> Size: 0 Blocks: 38 IO Block: 131072 regular empty file
> Device: 23h/35d Inode: 6822549 Links: 2
> Access: (0644/-rw-r--r--) Uid...
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
...w do I delete the trusted.afr xattrs on this brick?
>>>
>>> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
>> Sorry I should have been clearer. Yes the brick on the 3rd node.
>>
>> `setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile`
>>
>> `setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir...
2012 Mar 20
1
issues with geo-replication
...I'm looking to see if anyone can tell me whether this is already
working for them or if they wouldn't mind performing a quick test.
I'm trying to set up a geo-replication instance on 3.2.5 from a local
volume to a remote directory. This is the command I am using:
gluster volume geo-replication myvol ssh://root at remoteip:/data/path start
I am able to perform a geo-replication from a local volume to a remote
volume with no problem using the following command:
gluster volume geo-replication myvol ssh://root at remoteip::remotevol start
The steps I am using to implement this:
1: Create key f...
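For context (not from the original post), the state of the session after the start attempt can be checked with the status subcommand of the same 3.2.x syntax:
gluster volume geo-replication myvol ssh://root at remoteip:/data/path status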
2013 Apr 22
1
failure creating a snapshot volume within a lvm-based pool
Hi
I have defined a logical pool and a volume within it
# virsh vol-create-as images_lvm myvol 2G
Vol myvol created
# virsh vol-list images_lvm
Name Path
-----------------------------------------
myvol /dev/libvirt_images_vg/myvol
If I try to create another volume using the previous one as backing-vol,
the creation fails with what looks like an incorrect...
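For reference (not from the original post), the generic virsh form for creating a volume backed by an existing one is shown below; whether the LVM backend actually supports a backing volume is exactly what this thread is asking, and "mysnap" and the raw format are purely illustrative:
# virsh vol-create-as images_lvm mysnap 2G --backing-vol myvol --backing-vol-format raw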
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3: gv4:/data/gv01-arbiter (arbiter)
Brick4: gv2:/data/glusterfs
Br...
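Not part of the original post, but for reference the usual way to swap a single brick such as the arbiter onto another server is a replace-brick; a sketch with a hypothetical target host:
# gluster volume replace-brick myvol gv4:/data/gv01-arbiter newhost:/data/gv01-arbiter commit force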
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
...Also, if you know of any "log file sanitizer tool" which can replace sensitive file names with random file names in log files, I would like to use it, as right now I have to do that manually.
NODE 1 brick log:
[2018-05-15 06:54:20.176679] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or directory]
NODE 2 brick log:
[2018-05-15 06:54:20.176415] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir...
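As far as I know no such sanitizer ships with Gluster; a rough, purely illustrative sed sketch that masks everything below the brick root in a brick log:
sed -E 's|(/data/myvol-private/brick)/[^ ]*|\1/<masked>|g' brick.log > brick-sanitized.log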
2015 Nov 30
1
Re: libvirtd doesn't attach Sheepdog storage VDI disk correctly
...ation unit='bytes'>750780416</allocation>
<target>
<path>lubuntu1404.iso</path>
<format type='unknown'/>
</target>
</volume>
======================================================
2.) creating a new volume using an XML file (myvol.xml)
====================================================
<volume>
<name>myvol</name>
<key>sheep/myvol</key>
<source>
</source>
<capacity unit='bytes'>53687091200</capacity>
<allo...
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
...any "log file sanitizer tool" which can replace sensitive file names with random file names in log files, I would like to use it, as right now I have to do that manually.
>
> NODE 1 brick log:
>
> [2018-05-15 06:54:20.176679] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or directory]
>
> NODE 2 brick log:
>
> [2018-05-15 06:54:20.176415] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-pr...
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
...file sanitizer tool" which can replace sensitive file names with random file names in log files, I would like to use it, as right now I have to do that manually.
>>
>> NODE 1 brick log:
>>
>> [2018-05-15 06:54:20.176679] E [MSGID: 113015] [posix.c:1211:posix_opendir] 0-myvol-private-posix: opendir failed on /data/myvol-private/brick/cloud/data/admin/files_encryption/keys/files/dir/dir/anotherdir/dir/OC_DEFAULT_MODULE [No such file or directory]
>>
>> NODE 2 brick log:
>>
>> [2018-05-15 06:54:20.176415] E [MSGID: 113015] [posix.c:1211:posix_opend...
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...>
<gfid:420c76a8-1598-4136-9c77-88c8d59d24e7>
<gfid:ea6dbca2-f7e3-4015-ae34-04e8bf31fd4f>
...
And so forth. Out of 80k+ lines, fewer than 200 are not related to gfids (and yes, the number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
# grep -v gfid heal-info.myvol
Brick gv0:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv1:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv4:/data/gv01-arbiter
Status: Connected
Number of entries: 0
Brick gv2:/data/glusterfs
/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
/testset/05c - Possib...
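Not from the original post, but for reference a gfid from that list can usually be mapped back to a real file on a brick through its hard link under .glusterfs (regular files only; directory gfids are symlinks), e.g. with the first gfid quoted above:
GFID=420c76a8-1598-4136-9c77-88c8d59d24e7
BRICK=/data/glusterfs
find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" ! -path "*/.glusterfs/*"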
2015 Nov 24
2
libvirtd doesn't attach Sheepdog storage VDI disk correctly
Hi,
I am trying to use libvirt with sheepdog.
I am using Debian 8 stable with libvirt V1.21.0
I am encountering a problem which has already been reported.
=================================================================
See here: http://www.spinics.net/lists/virt-tools/msg08363.html
=================================================================
qemu/libvirtd is not setting the path
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...Regards,
Karthik
On Thu, Feb 8, 2018 at 12:48 PM, Seva Gluschenko <gvs at webkontrol.ru> wrote:
> Hi folks,
>
> I'm troubled moving an arbiter brick to another server because of I/O load
> issues. My setup is as follows:
>
> # gluster volume info
>
> Volume Name: myvol
> Type: Distributed-Replicate
> Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gv0:/data/glusterfs
> Brick2: gv1:/data/glusterfs
> Brick3: gv4:/data/gv0...
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.myvol-client-6=0x000000010000000100000000
trusted.bit-rot.version=0x02000000000000005a0d2f650005bf97
trusted.gfid=0xe46e9a655128456bba0d98568d432717
root at gv3 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absol...
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have been faced with another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on Ubuntu 16.04.4),
e.g. when removing an entire directory with subfolders:
tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
Afterwards, listing the files in the trashcan:
tron at gl-node1:/myvol-1/test1$ ls -la /myvol-1/.trashcan/test1/b1/
leads to an outage of the geo-replication.
Error on master-01 and master-02:
[2018-03-12 13:37:14.827204] I [master(/brick1/mvol1):1385:crawl]
_GMaster: slave...
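For reference (not part of the original post), until the referenced bug is addressed the trash feature can be switched off per volume; a sketch, assuming the master volume is named mvol1 as suggested by the brick path in the log line above (adjust to the real volume name):
# gluster volume set mvol1 features.trash off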