similar to: gluster client timeouts / found conflict

Displaying 20 results from an estimated 100 matches similar to: "gluster client timeouts / found conflict"

2008 Dec 09
1
File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
Hello list. I'm testing GlusterFS AFR mode as a solution for implementing highly available WebDAV file storage for our production environment. While doing performance tests I've noticed a strange behavior: files uploaded via the WebDAV server end up without extended attributes, which removes the ability to self-heal. The setup is a simple testing environment with 2
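A quick way to confirm whether the replicated copies really lack the AFR metadata is to inspect the extended attributes directly on the brick backend. A minimal sketch, assuming a hypothetical brick path and file name (not taken from the thread):

    # Run on each server, against the file's path inside the brick, not the client mount.
    getfattr -d -m . -e hex /data/export/brick1/path/to/uploaded-file
    # A healthy AFR copy normally shows a trusted.gfid entry plus trusted.afr.<volname>-client-N changelog keys.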
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 client and an x86 client. Weirdly the client logs were almost identical. Here's the ppc64 gluster client log of attempting to create a folder... ------------- [2017-09-20 13:34:23.344321] D [rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
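For reference, the client-side debug logging shown above can be toggled per volume; a hedged example, with the volume name left as a placeholder:

    # Raise the FUSE client log level to DEBUG, then drop it back to INFO when done.
    gluster volume set <VOLNAME> diagnostics.client-log-level DEBUG
    gluster volume set <VOLNAME> diagnostics.client-log-level INFO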
2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails of the logs. The logs are attached to this mail and here is the other information: # gluster volume info home Volume Name: home Type: Replicate Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a Status: Started Snapshot Count: 1 Number of Bricks: 1 x 3 = 3 Transport-type: tcp
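For a replica volume like the one above, the usual first checks for a file that refuses to heal are (the volume name 'home' is taken from the output above, nothing else is):

    gluster volume heal home info              # entries each brick still wants to heal
    gluster volume heal home info split-brain  # entries the self-heal daemon considers split-brain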
2010 Apr 19
1
Permission Problems
Hello List, first of all my configuration: I have 2 Gluster Platform 3.0.3 servers virtualized on VMware ESXi 4, with one volume exported as "raid 1". I mounted the share with the Gluster client 3.0.2 using the following /etc/fstab line: /etc/glusterfs/client.vol /mnt/images glusterfs defaults 0 0 The client.vol looks like this: # auto generated by
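For context, an auto-generated replicate client volfile from the 3.0 era is essentially two protocol/client subvolumes feeding a cluster/replicate translator. A minimal sketch, with hostnames and brick names as placeholders; the real auto-generated file also stacks performance translators on top:

    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host server1
      option remote-subvolume brick1
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp
      option remote-host server2
      option remote-subvolume brick1
    end-volume

    volume mirror-0
      type cluster/replicate
      subvolumes remote1 remote2
    end-volume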
2009 May 11
1
Problem of afr in glusterfs 2.0.0rc1
Hello: I have hit this problem twice when copying files into the GFS space. I have five clients and two servers. When I copy files into /data, which is the GFS space, on client A, the problem appears: in the same path, server A can see all the files, but B, C and D cannot see all of them; some files seem to be missing. When I mount again, the files appear
2017 Nov 07
0
error logged in fuse-mount log file
Hi, I am using glusterfs 3.10.1 and I am seeing the message below in the fuse-mount log file. What does this error mean? Should I worry about it, and how do I resolve it? [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed: 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid
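When dht_selfheal_directory reports unrecoverable subvolumes, a common check is to compare the directory's gfid and layout xattrs across all bricks. A sketch, with the brick root as a placeholder and the directory path taken from the log line above:

    # Run on every brick of the volume and compare the outputs.
    getfattr -n trusted.gfid -e hex /<brick-root>/fol1/fol2/fol3/fol4/fol5
    getfattr -n trusted.glusterfs.dht -e hex /<brick-root>/fol1/fol2/fol3/fol4/fol5
    # A mismatching trusted.gfid, or the directory missing on some brick, would explain this warning.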
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote: > Again, thanks, that worked and I now have no more unsynced files. > > You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to it. I don't think there will be another 3.12 release. Adding Karthik to see
2017 Nov 19
0
gluster share as home
Hi Gluster Group, I've been using gluster as a storage back end for oVirt for some years now without the slightest hitch. Encouraged by this, I wanted to switch our home share from NFS over to a replica 3 gluster volume as well. Since small-file performance was not particularly good, I applied all the performance-enhancing settings I could find in the gluster blog and on other sites. Those
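The small-file tuning referred to above usually comes down to a handful of volume options; an illustrative, unverified selection (the exact set the poster applied is not shown here, and the values are examples only):

    gluster volume set home performance.stat-prefetch on
    gluster volume set home performance.cache-size 1GB
    gluster volume set home network.inode-lru-limit 200000
    gluster volume set home cluster.lookup-optimize on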
2017 Jul 27
0
GFID is null after adding large amounts of data
Hi Gluster Community, we are seeing some problems when adding multiple terabytes of data to a 2-node replicated GlusterFS installation. The version is 3.8.11 on CentOS 7. The machines are connected via 10 Gbit LAN and are running 24/7. The OS is virtualized on VMware. After a restart of node-1 we see that the log files are growing by multiple gigabytes a day. There also seem to be problems
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone: We have a glusterfs cluster, version 3.2.7. The volume info is as below: Volume Name: gfs1 Type: Distributed-Replicate Status: Started Number of Bricks: 94 x 3 = 282 Transport-type: tcp We natively mount the volume on all nodes. When we access the file "/XMTEXT/gfs1_000/000/000/095" on one node, we get a split-brain error, while we can access the same file on
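On a 3.2.x replica, diagnosing such a split-brain usually means comparing the AFR changelog xattrs of that file on each of its replica bricks. A sketch, with the brick root as a placeholder and the file path taken from the message above:

    # Run on each of the three bricks that hold this file.
    getfattr -d -m . -e hex /<brick-root>/XMTEXT/gfs1_000/000/000/095
    # Non-zero trusted.afr.gfs1-client-N counters accusing each other indicate split-brain.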
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
On 07/21/2017 02:55 PM, yayo (j) wrote: > 2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > > But it does say something. All these gfids of completed heals in > the log below are for the ones that you have given the > getfattr output of. So what is likely happening is there is an >
2017 Nov 10
0
Error logged in fuse-mount log file
Hi, Comments inline. Regards, Nithya On 9 November 2017 at 15:05, Amudhan Pandian <amudh_an at hotmail.com> wrote: > resending mail from another ID; not sure whether the earlier mail reached the mailing list. > > > ---------- Forwarded message ---------- > From: *Amudhan P* <amudhan83 at gmail.com> > Date: Tue, Nov 7, 2017 at 6:43 PM > Subject: error logged in fuse-mount log
2018 Jan 17
1
Gluster endless heal
Hi, I have an issue with Gluster 3.8.14. The cluster is 4 nodes with replica count 2. One of the nodes went offline for around 15 minutes; when it came back online, self-heal triggered and it just did not stop afterward. It has been running for 3 days now, maxing out the bricks' utilization without actually healing anything. The bricks are all SSDs, and the logs of the source node are being spammed with
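To see whether such a heal is actually making progress, the pending-heal counts can be polled over time; a hedged example with the volume name as a placeholder:

    gluster volume heal <VOLNAME> info                   # which entries each brick still wants healed
    gluster volume heal <VOLNAME> statistics heal-count  # per-brick counts; these should shrink if healing progresses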
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
2017 Jul 25
0
recovering from a replace-brick gone wrong
Hi All, I have a 4-node cluster with a 4-brick distributed-replicate volume (replica 2) on it, running version 3.9.0-2 on CentOS 7. I use the cluster to provide shared volumes in a virtual environment, as our storage only serves block storage. For some reason I decided to create the bricks for this volume directly on the block device rather than abstracting them with LVM for easy space management. The bricks have
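For reference, on the 3.x releases replace-brick is only supported in its "commit force" form, after which self-heal repopulates the new brick from its replica partner; a generic sketch with placeholder names, not the commands used in this thread:

    gluster volume replace-brick <VOLNAME> server1:/bricks/old server1:/bricks/new commit force
    gluster volume heal <VOLNAME> info   # watch the new brick being repopulated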
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below: > > NODE1: > > STAT: > File:
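The stat/getfattr output that follows in the original mail comes from commands along these lines (file and brick paths are placeholders):

    stat /mnt/fuse-mount/path/to/problem-file              # on the client mount
    stat /<brick-root>/path/to/problem-file                # on each brick
    getfattr -d -m . -e hex /<brick-root>/path/to/problem-file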
2017 Jul 21
1
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > But it does say something. All these gfids of completed heals in the log > below are for the ones that you have given the getfattr output of. So > what is likely happening is there is an intermittent connection problem > between your mount and the brick process, leading to pending heals again >
2017 Nov 13
0
Error logged in fuse-mount log file
Adding Ashish. Hi Amudhan, Can you check the gfids for every dir in that hierarchy? Maybe one of the parent dirs has a gfid mismatch. Regards, Nithya On 13 November 2017 at 17:39, Amudhan P <amudhan83 at gmail.com> wrote: > Hi Nithya, > > I have checked the gfid in all the bricks in the disperse set for the folder; it is > the same everywhere, there is no difference. > > regards >
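Comparing the gfid of every directory in the hierarchy across bricks can be scripted in a couple of lines; a sketch, with the brick root and directory names as placeholders:

    # Run on each brick of the disperse set and diff the outputs (brick root is a placeholder).
    BRICK=/path/to/brick-root
    for d in fol1 fol1/fol2 fol1/fol2/fol3 fol1/fol2/fol3/fol4; do
        getfattr -n trusted.gfid -e hex "$BRICK/$d"
    done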
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
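For completeness, the xattr removal being discussed is done directly against the file's path inside the brick on the third (arbiter) node, along these lines (the file path below is a hypothetical example, not the one from this thread):

    # On node 3, on the brick backend, not the FUSE mount.
    setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/file
    # Afterwards, 'gluster volume heal myvol-private info' should stop listing the entry.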