similar to: New 3.12.7 possible split-brain on replica 3

Displaying 20 results from an estimated 700 matches similar to: "New 3.12.7 possible split-brain on replica 3"

2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past by this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38
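A minimal sketch of the kind of commands behind that output, assuming a hypothetical shortened brick path and FUSE mount point (the real paths in the thread are much longer and partly truncated):

    # On each of the three nodes, against the brick copy of the file:
    stat /data/myvol-private/brick/path/to/problematicfile
    getfattr -d -m . -e hex /data/myvol-private/brick/path/to/problematicfile
    # Finally, once through the FUSE mount so the client does a fresh lookup:
    stat /mnt/myvol-private/path/to/problematicfile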
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer. Stupid question but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? ------- Original Message ------- On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote: > > On 04/09/2018 04:36 PM, mabi wrote: > > >
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
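The gfid hardlink named in that log line can be inspected directly on the brick; a sketch, reusing the path from the message above (the xattrs actually present will vary):

    # List all extended attributes on the gfid hardlink from the log entry:
    getfattr -d -m . -e hex /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a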
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
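The setfattr command above is cut off in the preview; a sketch of the full form, assuming a hypothetical file path on the arbiter brick and assuming both client-0 and client-1 afr xattrs are present (check with getfattr first):

    # On node 3 (the arbiter), against the brick copy of the affected file:
    getfattr -d -m trusted.afr -e hex /data/myvol-private/brick/path/to/problematicfile
    setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/problematicfile
    setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/path/to/problematicfile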
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As suggested in the past by this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File:
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again thanks that worked and I have now no more unsynched files. You mentioned that this bug has been fixed in 3.13, would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote: > Again thanks that worked and I have now no more unsynched files. > > You mentioned that this bug has been fixed in 3.13, would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. I don't think there will be another 3.12 release. Adding Karthik to see
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi, Please find below the answers to your questions 1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume: Option Value ------ ----- cluster.quorum-type none 2) The .shareKey
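A quick way to confirm that setting on a 3.12 cluster would be something along these lines (the volume name here is a placeholder):

    gluster volume get myvol-private cluster.quorum-type
    # or dump all options and filter:
    gluster volume get myvol-private all | grep quorum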
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Thank you Ravi for your fast answer. As requested you will find below the "stat" and "getfattr" of one of the files and its parent directory from all three nodes of my cluster. NODE 1: File: '/data/myvolume-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/OC_DEFAULT_MODULE/filename.shareKey' Size: 0 Blocks: 38 IO Block: 131072 regular
2018 May 17
0
New 3.12.7 possible split-brain on replica 3
Hi mabi, Some questions: -Did you by any chance change the cluster.quorum-type option from the default values? -Is filename.shareKey supposed to be an empty file? Looks like the file was fallocated with the keep-size option but never written to. (On the 2 data bricks, stat output shows Size = 0, but non-zero Blocks and yet a 'regular empty file'). -Do you have some sort of a
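The "Size 0 but non-zero Blocks, yet regular empty file" state described here can be reproduced locally with fallocate's keep-size mode; a small illustration on an ordinary local filesystem (not the actual cause, just the same on-disk shape):

    fallocate --keep-size --length 128K testfile
    stat testfile   # Size stays 0 and the type reads "regular empty file", but Blocks is non-zero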
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
Hello, I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do? For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread? Thanks, Mabi ------- Original Message ------- On May 17, 2018 11:07 PM, mabi <mabi at protonmail.ch> wrote: > > Hi Ravi,
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
On 05/23/2018 12:47 PM, mabi wrote: > Hello, > > I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do? > > For now in order to get rid of these 3 unsynched files shall I do the same method that was suggested to me in this thread? Sorry Mabi, I haven't had a chance to dig deeper into this. The workaround of
2018 May 15
0
New 3.12.7 possible split-brain on replica 3
On 05/15/2018 12:38 PM, mabi wrote: > Dear all, > > I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. > > It looks like this bug is not resolved as I just got right now 3 unsynched files on my arbiter node
2018 May 15
2
New 3.12.7 possible split-brain on replica 3
Dear all, I have upgraded my replica 3 GlusterFS cluster (and clients) last Friday from 3.12.7 to 3.12.9 in order to fix this bug but unfortunately I notice that I still have exactly the same problem as initially posted in this thread. It looks like this bug is not resolved as I just got right now 3 unsynched files on my arbiter node like I used to before upgrading. This problem started
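To list the entries the arbiter is holding, the usual checks on 3.12 would be roughly the following (volume name is a placeholder):

    gluster volume heal myvol-private info
    gluster volume heal myvol-private info split-brain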
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards Dietmar On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on master volume? no, trashcan is also enabled on slave. settings are the same as on master but trashcan on slave is complete
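For context, the trash feature discussed here is toggled per volume; a sketch of how it is typically enabled and inspected (volume name and size limit are placeholders):

    gluster volume set myvol features.trash on
    gluster volume set myvol features.trash-max-filesize 1GB
    gluster volume get myvol features.trash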
2012 Mar 20
1
issues with geo-replication
Hi all. I'm looking to see if anyone can tell me this is already working for them or if they wouldn't mind performing a quick test. I'm trying to set up a geo-replication instance on 3.2.5 from a local volume to a remote directory. This is the command I am using: gluster volume geo-replication myvol ssh://root at remoteip:/data/path start I am able to perform a geo-replication
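For completeness, the matching status check with that 3.2.x syntax would look roughly like this (host and path are the poster's placeholders, written here without the list archive's address munging):

    gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
    gluster volume geo-replication myvol ssh://root@remoteip:/data/path status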
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows: # gluster volume info Volume Name: myvol Type: Distributed-Replicate Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gv0:/data/glusterfs Brick2: gv1:/data/glusterfs Brick3:
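Moving an arbiter brick on a distributed-replicate volume is normally a replace-brick followed by letting self-heal repopulate the new brick; a rough sketch with hypothetical host names, not the poster's exact bricks:

    # Swap the old arbiter brick for a new one, then watch the heal queue:
    gluster volume replace-brick myvol oldhost:/data/glusterfs newhost:/data/glusterfs commit force
    gluster volume heal myvol info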
2015 Nov 24
2
libvirtd doesn't attach Sheepdog storage VDI disk correctly
Hi, I am trying to use libvirt with sheepdog. I am using Debian 8 stable with libvirt V1.21.0 I am encountering a problem which has already been reported. ================================================================= See here: http://www.spinics.net/lists/virt-tools/msg08363.html ================================================================= qemu/libvirtd is not setting the path
2013 Apr 22
1
failure creating a snapshot volume within a lvm-based pool
Hi I have defined a logical pool and a volume within it # virsh vol-create-as images_lvm myvol 2G Vol myvol created # virsh vol-list images_lvm Name Path ----------------------------------------- myvol /dev/libvirt_images_vg/myvol if I try to create another volume using the previous one as backing-vol, the creation fails with what looks like an incorrect
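The failing second step is presumably something along these lines; a sketch of the backing-volume form of vol-create-as with a hypothetical snapshot volume name:

    virsh vol-create-as images_lvm mysnap 2G --backing-vol myvol --backing-vol-format raw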
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as the /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (the version update in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
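On 3.10 there is no "heal info summary", but per-brick pending counts can be pulled without printing the whole entry list; a sketch, with the volume name assumed from the earlier post:

    gluster volume heal myvol statistics heal-count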