similar to: gluster share as home

Displaying 20 results from an estimated 300 matches similar to: "gluster share as home"

2017 Oct 26
2
not healing one file
Hi Karthik, thanks for taking a look at this. I haven't been working with gluster long enough to make heads or tails of the logs. The logs are attached to this mail and here is the other information: # gluster volume info home Volume Name: home Type: Replicate Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a Status: Started Snapshot Count: 1 Number of Bricks: 1 x 3 = 3 Transport-type: tcp
2017 Oct 26
0
not healing one file
Hi Richard, Thanks for the information. As you said, there is a gfid mismatch for the file. On brick-1 & brick-2 the gfids are the same & on brick-3 the gfid is different. This is not considered split-brain because we have two good copies here. Gluster 3.10 does not have a method to resolve this situation other than manual intervention [1]. Basically what you need to do is remove the
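A minimal sketch of that manual intervention, assuming brick-3 holds the copy with the mismatching gfid; the brick and file paths are placeholders and the gfid value shown is made up, the real one comes from the getfattr output:

    # on the node hosting brick-3 (the copy whose gfid differs); paths are placeholders
    getfattr -n trusted.gfid -e hex /<brick-path>/<path-to-file>
    # e.g. trusted.gfid=0x1234abcd... -> the hard link lives under .glusterfs/12/34/1234abcd-...

    # remove the bad copy and its gfid hard link from the brick
    rm /<brick-path>/<path-to-file>
    rm /<brick-path>/.glusterfs/12/34/1234abcd-<rest-of-the-gfid-as-a-uuid>

    # trigger a heal so the two good copies are replicated back
    gluster volume heal home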
2017 Oct 26
0
not healing one file
Thanks for this report. This week many of the developers are at Gluster Summit in Prague; we will check this and respond next week. Hope that's fine. Thanks, Amar On 25-Oct-2017 3:07 PM, "Richard Neuboeck" <hawk at tbi.univie.ac.at> wrote: > Hi Gluster Gurus, > > I'm using a gluster volume as home for our users. The volume is > replica 3, running on
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
2017 Oct 25
2
not healing one file
Hi Gluster Gurus, I'm using a gluster volume as home for our users. The volume is replica 3, running on CentOS 7, gluster version 3.10 (3.10.6-1.el7.x86_64). Clients are running Fedora 26 and also gluster 3.10 (3.10.6-3.fc26.x86_64). During the data backup I got an I/O error on one file. Manually checking for this file on a client confirms this: ls -l
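Not part of the original message, but a hedged sketch of the usual first checks in this situation; the volume name home is from the thread and the file path is a placeholder:

    # on any gluster node: list entries pending heal and anything flagged as split-brain
    gluster volume heal home info
    gluster volume heal home info split-brain

    # on the client: reproduce the I/O error against the file itself
    ls -l /home/<path-to-the-affected-file>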
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote: > Thanks again, that worked and I now have no more unsynced files. > > You mentioned that this bug has been fixed in 3.13, would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. I don't think there will be another 3.12 release. Adding Karthik to see
2018 Jan 17
1
Gluster endless heal
Hi, I have an issue with Gluster 3.8.14. The cluster is 4 nodes with replica count 2. One of the nodes went offline for around 15 minutes; when it came back online, self heal triggered and it just did not stop afterward. It's been running for 3 days now, maxing out brick utilization without actually healing anything. The bricks are all SSDs, and the log of the source node is being spammed with
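A hedged sketch of how one might check whether such a heal is actually making progress; the volume name is a placeholder and these are standard heal subcommands, not something taken from the original message:

    # number of entries still pending heal on each brick; run repeatedly to see whether it shrinks
    gluster volume heal <volname> statistics heal-count

    # per-crawl details: start time, type of crawl, entries healed, failed entries
    gluster volume heal <volname> statistics

    # list the entries the self-heal daemon currently considers pending
    gluster volume heal <volname> info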
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As suggested in the past by this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File:
2017 Jul 27
0
GFID is null after adding large amounts of data
Hi Gluster Community, we are seeing some problems when adding multiple terabytes of data to a 2-node replicated GlusterFS installation. The version is 3.8.11 on CentOS 7. The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware. After a restart of node-1 we see that the log files are growing to multiple gigabytes a day. Also there seem to be problems
2017 Jul 21
1
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > But it does say something. All these gfids of completed heals in the log > below are for the ones that you have given the getfattr output of. So > what is likely happening is there is an intermittent connection problem > between your mount and the brick process, leading to pending heals again >
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
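The command above is cut off in the excerpt; a hedged reconstruction of what clearing those xattrs could look like, with the brick path taken from the thread and the file path abbreviated (whether a client-1 xattr is present at all is an assumption):

    # on node 3 (the arbiter), against the file on the brick itself, not the fuse mount
    setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/<path-to-problematicfile>
    setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/<path-to-problematicfile>

    # confirm the afr xattrs are gone, then let the self-heal daemon pick the file up again
    getfattr -d -m . -e hex /data/myvol-private/brick/<path-to-problematicfile>
    gluster volume heal myvol-private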
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks again, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13, would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/21/2017 02:55 PM, yayo (j) wrote: > 2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com > <mailto:ravishankar at redhat.com>>: > > > But it does say something. All these gfids of completed heals in > the log below are for the ones that you have given the > getfattr output of. So what is likely happening is there is an >
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague; we will check this and respond next
2017 Nov 16
0
Missing files on one of the bricks
Hello, we are using glusterfs 3.10.3. We currently have a full 'gluster volume heal' running; the crawl is still in progress. Starting time of crawl: Tue Nov 14 15:58:35 2017 Crawl is in progress Type of crawl: FULL No. of entries healed: 0 No. of entries in split-brain: 0 No. of heal failed entries: 0 getfattr from both files: # getfattr -d -m . -e hex
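Not from the original message, but a hedged sketch of where that crawl status comes from and how the xattr dump would be gathered; the volume name and paths are placeholders:

    # the crawl status quoted above is printed by the heal statistics command
    gluster volume heal <volname> statistics

    # dump the replication xattrs of the affected file on each brick for comparison
    getfattr -d -m . -e hex /<brick-path>/<path-to-file>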
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past by this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and at the end a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38
2018 May 02
0
Healing : No space left on device
Oh, and *there is* space on the device where the brick's data is located. /dev/mapper/fedora-home 942G 868G 74G 93% /export On 02/05/2018 at 11:49, Hoggins! wrote: > Hello list, > > I have an issue on my Gluster cluster. It is composed of two data nodes > and an arbiter for all my volumes. > > After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
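A hedged sketch of how the requested information could be gathered, using the volume name from this thread and placeholder brick paths; the log file names are the usual defaults rather than something stated in the excerpt:

    # 1. volume configuration
    gluster volume info home

    # 2. xattrs of the affected file, run on every brick
    getfattr -d -e hex -m . /<brick-path>/<path-to-file>

    # 3. self-heal daemon and heal-info logs, typically under /var/log/glusterfs/:
    #    glustershd.log and glfsheal-home.log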
2017 Jul 25
0
recovering from a replace-brick gone wrong
Hi All, I have a 4-node cluster with a 4-brick distributed replica 2 volume on it, running version 3.9.0-2 on CentOS 7. I use the cluster to provide shared volumes in a virtual environment, as our storage only serves block storage. For some reason I decided to make the bricks for this volume directly on the block device rather than abstracting with LVM for easy space management. The bricks have