Displaying 20 results from an estimated 5000 matches similar to: "peer detach fails"
2018 Jan 26
2
Replacing a third data node with an arbiter one
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote:
>
>
> On 01/24/2018 07:20 PM, Hoggins! wrote:
>
> Hello,
>
> The subject says it all. I have a replica 3 cluster:
>
> gluster> volume info thedude
>
> Volume Name: thedude
> Type: Replicate
> Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
>
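One commonly used sequence for this kind of conversion (a sketch only, not
necessarily the advice given in this thread; the brick paths and arbiter host
below are placeholders) is to drop the third data brick and add a small arbiter
brick in its place:

    # reduce to two data copies, dropping the third data brick
    gluster volume remove-brick thedude replica 2 server3:/bricks/thedude force
    # add an arbiter brick to get back to replica 3 (2 data + 1 arbiter)
    gluster volume add-brick thedude replica 3 arbiter 1 arbiterhost:/bricks/thedude-arbiter
    # let self-heal populate the arbiter before relying on it
    gluster volume heal thedude info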
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume create <volname> replica 3 arbiter 1 <list of bricks>`.
It means (or at least is intended to mean) that out of the 3 bricks, 1
brick is the arbiter.
There has been some feedback while implementing arbiter support in
glusterd2 for glusterfs-4.0 that we should change this to `replica 2
arbiter
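For reference, a minimal example of the existing syntax described above, with
placeholder host and brick names:

    # two data bricks plus one arbiter; the last brick in the list becomes the arbiter
    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/myvol server2:/bricks/myvol server3:/bricks/myvol-arbiter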
2018 Jan 26
0
Replacing a third data node with an arbiter one
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote:
> On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote:
> >
> > On 01/24/2018 07:20 PM, Hoggins! wrote:
> >
> > Hello,
> >
> > The subject says it all. I have a replica 3 cluster:
> >
> > gluster> volume info thedude
> >
2018 May 30
2
shard corruption bug
What shard corruption bug? bugzilla url? I'm running into some odd behavior
in my lab with shards and RHEV/KVM data, trying to figure out if it's
related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it
> to settle. No problems. I am now running replica 4
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely
supported in v3.8?
Kind regards,
Mitja
On 25/02/2018 13:55, Jim Kinney wrote:
> gluster volume add-brick volname replica 3 arbiter 1
> brickhost:brickpath/to/new/arbitervol
>
> Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a
> change in command will happen so it won't count the
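After an add-brick like the one above, a quick sanity check (sketch only; the
volume name is a placeholder) is to confirm the arbiter shows up in the brick
list and that self-heal is populating it:

    gluster volume info volname        # verify the brick count now includes the arbiter
    gluster volume heal volname info   # pending entries should drain as the arbiter syncs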
2018 May 04
2
shard corruption bug
On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, is it not possible to extend and rebalance a working cluster with sharded
data?
Can someone confirm this? Maybe those who hit the bug in the past
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left
it to settle. No problems. I am now running replica 4 (preparing to
remove a brick and host to replica 3).
On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote:
> On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote:
> > It stopped being an outstanding
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/
The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link.
On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote:
>What shard corruption bug? bugzilla url? I'm running into some odd
>behavior
>in my lab with shards and RHEV/KVM data, trying to figure out if it's
>related.
>
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles the
recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices, not just the active one.
3.10 is a long term support release. Upgrading to 3.12 or 4 is not a
significant issue once a replacement
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
Hi,
It should be there, see https://review.gluster.org/#/c/14502/
BR,
Martin
> On 25 Feb 2018, at 15:52, Mitja Mihelič <mitja.mihelic at arnes.si> wrote:
>
> I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8?
>
> Kind regards,
> Mitja
>
> On 25/02/2018 13:55, Jim Kinney wrote:
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol
Yes. The replica 3 looks odd. Somewhere around 3.12 (or possibly not until v4) the command will change so it won't count the arbiter as a replica.
On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič" <mitja.mihelic at arnes.si> wrote:
>Hi!
>
>I am using GlusterFS on CentOS7 with
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello,
I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I landed
on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite
well. Are
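For what it's worth, a rough sketch of only the floating-IP piece of such a
corosync/pacemaker setup (illustrative; the resource name, address and netmask
are placeholders, and the NFS export and mount resources are omitted):

    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s

Clients then mount the kernel NFS export through that virtual IP, which
pacemaker moves to the surviving node on failover.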
2017 Oct 24
3
gfid entries in volume heal info that do not heal
Hi Jim,
Can you check whether the same hardlinks are present on both the bricks and
both of them have the link count 2?
If the link count is 2, then
"find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of gfid>/<next two characters of gfid>/<full gfid>"
should give you the file path.
Regards,
Karthik
On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
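To make that layout concrete, a hypothetical example (brick path and gfid are
placeholders): for a gfid starting with "0a1b...", the first directory level is
"0a" and the second is "1b", so the lookup would be something like:

    stat /bricks/brick1/.glusterfs/0a/1b/<full gfid>        # check the link count
    find /bricks/brick1 -samefile /bricks/brick1/.glusterfs/0a/1b/<full gfid> \
        -not -path '*/.glusterfs/*'                         # prints the real file path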
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks!
From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com]
Sent: Monday, October 23, 2017 1:52 AM
To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com>
Cc: gluster-users <Gluster-users at gluster.org>
Subject: Re:
2017 Oct 23
0
gfid entries in volume heal info that do not heal
I'm not so lucky. ALL of mine show 2 links and none have the attr data
that supplies the path to the original.
I have the inode from stat. Looking now to dig out the path/filename
from xfs_db on the specific inodes individually.
Is the hash computed from the filename alone or from <path>/filename, and if
the latter, relative to what: /, the path from the top of the brick, or something else?
On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack
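If the inode number is already known from stat, a simpler alternative to xfs_db
(a sketch; the brick path and inode number are placeholders) is to let find
resolve it:

    find /bricks/brick1 -inum 123456789    # prints every path that links to that inode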
2018 May 09
2
Some more questions
On Wed, 9 May 2018 at 21:22, Jim Kinney <jim.kinney at gmail.com> wrote:
> You can change the replica count. Add a fourth server, add its brick to the
existing volume with gluster volume add-brick vol0 replica 4
newhost:/path/to/brick
This doesn't add space, but only a new replica, increasing the number of
copies
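As a sketch of that distinction (volume, host and brick paths are placeholders),
adding a brick with a higher replica count only adds another copy, and the
reverse operation drops it again:

    gluster volume add-brick vol0 replica 4 newhost:/bricks/vol0           # extra copy, no extra capacity
    gluster volume remove-brick vol0 replica 3 newhost:/bricks/vol0 force  # back to three copies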
2018 May 09
2
Some more questions
On Wed, 9 May 2018 at 21:31, Jim Kinney <jim.kinney at gmail.com> wrote:
> Correct. A new server will NOT add space in this manner. But the original
Q was about rebalancing after adding a 4th server. If you are using
distributed/replication, then yes, a new server will be adding a portion of
its space to add more space to the cluster.
Wait, in a distribute-replicate,
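To actually grow capacity on a replica 3 distribute-replicate volume, bricks are
added in sets of three (one full replica set) and the volume is then rebalanced;
a sketch with placeholder names:

    gluster volume add-brick vol0 \
        newhost1:/bricks/vol0 newhost2:/bricks/vol0 newhost3:/bricks/vol0
    gluster volume rebalance vol0 start
    gluster volume rebalance vol0 status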
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDS that are different. All the different ones are only
on the brick that was live during the outage and concurrent file copy-in.
The brick that was down at that time has no GFIDs that are not also
on the up brick.
As the bricks are 10TB, the find is going to be a long-running process.
I'm running several finds at once with GNU parallel but it will still
take some time.
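For what it's worth, a sketch of how such finds can be run a few at a time with
GNU parallel (the brick path is a placeholder; adjust the job count to taste):

    # list gfid entries that still have a second hardlink, then resolve the real
    # path of each one, four finds at a time
    find /bricks/brick1/.glusterfs -maxdepth 3 -type f -links 2 | \
        parallel -j 4 "find /bricks/brick1 -samefile {} -not -path '*/.glusterfs/*'"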
2018 Feb 25
2
Convert replica 2 to replica 2+1 arbiter
Hi!
I am using GlusterFS on CentOS7 with glusterfs-3.8.15 RPM version.
I currently have a replica 2 running and I would like to get rid of the
split-brain problem before it occurs. This is one of the possible solutions.
Is it possible to add an arbiter to this volume?
I have read in a thread from 2016 that this feature is planned for
version 3.8.
Is the feature available? If so, could you give
2017 Oct 23
0
gfid entries in volume heal info that do not heal
Hi Jim & Matt,
Can you also check for the link count in the stat output of those hardlink
entries in the .glusterfs folder on the bricks?
If the link count is 1 on all the bricks for those entries, then they are
orphaned entries and you can delete those hardlinks.
To be on the safer side have a backup before deleting any of the entries.
Regards,
Karthik
On Fri, Oct 20, 2017 at 3:18 AM, Jim
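Concretely, that check might look like this (a sketch; the brick path and gfid
directory are placeholders):

    stat -c '%h %n' /bricks/brick1/.glusterfs/ab/cd/<gfid>   # first field is the hard-link count
    # only if the count is 1 on every brick is the entry orphaned; keep a backup, then:
    rm /bricks/brick1/.glusterfs/ab/cd/<gfid>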