Displaying 20 results from an estimated 200 matches similar to: "Healing : No space left on device"
2018 May 02
0
Healing : No space left on device
Oh, and *there is* space on the device where the brick's data is located.
/dev/mapper/fedora-home   942G  868G   74G  93% /export
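Not part of the original message, but one thing worth checking in this situation is inode usage, since "No space left on device" can also mean the filesystem has run out of inodes even when block space is free. A quick check, assuming the same mount point:

    df -i /export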
On 02/05/2018 at 11:49, Hoggins! wrote:
> Hello list,
>
> I have an issue on my Gluster cluster. It is composed of two data nodes
> and an arbiter for all my volumes.
>
> After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this
2018 Jan 24
4
Replacing a third data node with an arbiter one
Hello,
The subject says it all. I have a replica 3 cluster :
gluster> volume info thedude
Volume Name: thedude
Type: Replicate
Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ngluster-1.network.hoggins.fr:/export/brick/thedude
Brick2:
2018 Jan 26
0
Replacing a third data node with an arbiter one
On 01/24/2018 07:20 PM, Hoggins! wrote:
> Hello,
>
> The subject says it all. I have a replica 3 cluster :
>
> gluster> volume info thedude
>
> Volume Name: thedude
> Type: Replicate
> Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
>
2018 Jan 29
2
Replacing a third data node with an arbiter one
Thank you for that; however, I have a problem.
On 26/01/2018 at 02:35, Ravishankar N wrote:
> Yes, you would need to reduce it to replica 2 and then convert it to
> arbiter.
> 1. Ensure there are no pending heals, i.e. heal info shows zero entries.
> 2. gluster volume remove-brick thedude replica 2
> ngluster-3.network.hoggins.fr:/export/brick/thedude force
> 3. gluster volume
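The quoted step 3 is cut off above. For reference, the full conversion presumably follows the standard remove-brick / add-brick sequence sketched below; the add-brick step and the arbiter brick path are assumptions, since they are not shown in the quote:

    # 1. make sure nothing is pending
    gluster volume heal thedude info

    # 2. drop the third data brick, leaving a plain replica 2 volume
    gluster volume remove-brick thedude replica 2 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude force

    # 3. (assumed) re-add the third node as an arbiter, on a clean brick path
    gluster volume add-brick thedude replica 3 arbiter 1 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude-arbiter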
2018 Jan 26
2
Replacing a third data node with an arbiter one
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 01/24/2018 07:20 PM, Hoggins! wrote:
>
> Hello,
>
> The subject says it all. I have a replica 3 cluster :
>
> gluster> volume info thedude
>
> Volume Name: thedude
> Type: Replicate
> Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
>
2018 Jan 29
0
Replacing a third data node with an arbiter one
On 01/29/2018 08:56 PM, Hoggins! wrote:
> Thank you, for that, however I have a problem.
>
> On 26/01/2018 at 02:35, Ravishankar N wrote:
>> Yes, you would need to reduce it to replica 2 and then convert it to
>> arbiter.
>> 1. Ensure there are no pending heals, i.e. heal info shows zero entries.
>> 2. gluster volume remove-brick thedude replica 2
>>
2017 Dec 19
3
How to make sure self-heal backlog is empty?
Hello list,
I'm not sure what to look for here, or whether what I'm seeing is the
actual "backlog" (which we need to make sure is empty while performing a
rolling upgrade, before moving on to the next node). How can I tell, from
reading this, whether it's okay to reboot / upgrade the next node in the pool?
Here is what I do to check:
for i in `gluster volume list`; do
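The loop above is cut off by the archive. A complete version of such a check might look like the sketch below; the loop body is an assumption about what the original script does:

    for i in `gluster volume list`; do
        echo "== $i =="
        gluster volume heal "$i" info | grep "Number of entries"
    done

When every brick of every volume reports "Number of entries: 0", the heal backlog is empty.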
2017 Aug 28
2
GFID attr is missing after adding large amounts of data
Hi Cluster Community,
we are seeing some problems when adding multiple terabytes of data to a 2 node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMWare.
After a restart of node-1 we see that the log files are growing to multiple Gigabytes a day.
Also there seem to be problems
2017 Dec 19
0
How to make sure self-heal backlog is empty?
Mine also has a list of files that seemingly never heal. They are usually isolated on my arbiter bricks, but not always. I would also like to find an answer for this behavior.
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Hoggins!
Sent: Tuesday, December 19, 2017 12:26 PM
To: gluster-users <gluster-users at
2017 Aug 29
0
GFID attr is missing after adding large amounts of data
This is strange; a couple of questions:
1. What volume type is this? What tuning have you done? gluster v info output would be helpful here.
2. How big are your bricks?
3. Can you write me a quick reproducer so I can try this in the lab? Is it just a single multi TB file you are untarring or many? If you give me the steps to repro, and I hit it, we can get a bug open.
4. Other than
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
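Distinguishing a plain heal backlog from an actual split-brain can be done with the following commands (the volume name is a placeholder):

    gluster volume heal <volname> info
    gluster volume heal <volname> info split-brain

Entries that show up in the first command but not in the second are pending heals rather than split-brain.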
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
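The output quoted above was presumably gathered with commands along these lines, run against the file on each brick (the exact path is shortened here for readability):

    stat /data/myvol-private/brick/<path-to>/problematicfile
    getfattr -d -m . -e hex /data/myvol-private/brick/<path-to>/problematicfile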
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
------- Original Message -------
On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 04/09/2018 04:36 PM, mabi wrote:
>
> >
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
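The quoted command is truncated. On the 3rd node's brick, the full reset presumably looks like the sketch below; the second xattr name and the file path are assumptions based on the rest of the thread:

    setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/<path-to>/problematicfile
    setfattr -x trusted.afr.myvol-private-client-1 /data/myvol-private/brick/<path-to>/problematicfile

    # then trigger a heal so the file gets picked up again
    gluster volume heal myvol-private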
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release, and as such I would not like to have to upgrade to 3.13.
------- Original Message -------
On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
2018 Jan 26
0
Replacing a third data node with an arbiter one
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote:
> On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote:
> >
> > On 01/24/2018 07:20 PM, Hoggins! wrote:
> >
> > Hello,
> >
> > The subject says it all. I have a replica 3 cluster :
> >
> > gluster> volume info thedude
> >
2017 Jul 27
0
GFID is null after adding large amounts of data
Hi Cluster Community,
we are seeing some problems when adding multiple terabytes of data to a 2 node replicated GlusterFS installation.
The version is 3.8.11 on CentOS 7.
The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMWare.
After a restart of node-1 we see that the log files are growing to multiple Gigabytes a day.
Also there seem to be problems
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
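For item 3, the logs in question are typically found on the servers at the following default locations (paths assume a standard install):

    /var/log/glusterfs/glustershd.log
    /var/log/glusterfs/glfsheal-<volname>.log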
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health