2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all,
as the title says - i'm looking to change a volume from dist/repl -> dist.
we're currently running 3.2.7. a few questions for you gurus out there:
- is this possible to do on 3.2.7?
- is this possible to do with 3.4.1? (would involve upgrade)
- are there any pitfalls i should be aware of?
many thanks in advance,
regards,
paul
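For reference, on 3.3/3.4 the usual route is remove-brick with a reduced
replica count: drop one brick from each replica pair and the volume
becomes plain distribute. A minimal sketch, with a hypothetical volume
name and brick paths (not verified on 3.2.7):

    # reduce replica 2 -> 1 by removing one brick per replica set
    gluster volume remove-brick myvol replica 1 \
        server2:/data/brick server4:/data/brick force

    # confirm the new layout (Type should now be Distribute)
    gluster volume info myvol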
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > Since arbiter bricks need not be of same size as the data bricks, if you
> > > can configure three more arbiter bricks
> > > based on the guidelines in the doc [1], you can do it live and you will
> > > have the distribution count also unchanged.
> >
> > I can probably find
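The conversion being discussed can be done live with add-brick. A
minimal sketch, assuming a 3 x 2 volume and hypothetical arbiter brick
paths (one arbiter brick is needed per replica subvolume):

    # convert replica 2 to replica 3 arbiter 1 on a 3x2 volume
    gluster volume add-brick myvol replica 3 arbiter 1 \
        arb1:/bricks/arb arb2:/bricks/arb arb3:/bricks/arb

Self-heal then populates the arbiters, which store file names and
metadata only, not data.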
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.
> - Your data on
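The distribution count is the first factor in the brick layout that
gluster volume info prints. A hypothetical excerpt for a 3 x 2 volume:

    Number of Bricks: 3 x 2 = 6

Here 3 is the distribution count (three replica-2 subvolumes that files
are hashed across) and 2 is the replica count. Re-purposing two data
bricks as arbiters would turn this into 2 x (2 + 1), hence the decrease
to 2.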
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
which states that "In a replica 2 volume... If we set the client-quorum
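The option behind this behaviour is cluster.quorum-type, applied per
replica subvolume. A minimal sketch (hypothetical volume name):

    # auto: a majority must be up; in a 2-brick subvolume this means
    # the first brick, otherwise the subvolume turns read-only
    gluster volume set myvol cluster.quorum-type auto

    # fixed: require an explicit count of live bricks instead
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2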
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of same size as the data bricks, if
> you
> > > > can configure three more arbiter bricks
> > > > based on the guidelines in the doc [1], you can do it live and
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> I will try to explain how you can end up in split-brain even with cluster
> wide quorum:
Yep, the explanation made sense. I hadn't considered the possibility of
alternating outages. Thanks!
> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 May 23
0
gluster volume create failed: Host is not in 'Peer in Cluster' state
All,
Running glusterfs-4.0.2-1 on CentOS 7.5.1804
I have 10 servers running in a pool. All show as connected when I do
gluster peer status and gluster pool list.
There is 1 volume running that is distributed on servers 1-5.
I try using a brick on server7 and it always gives me:
volume create: GDATA: failed: Host server7 is not in 'Peer in Cluster'
state
Now that is even ON server7
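Assuming the pool is otherwise healthy, the usual recovery is to
re-probe the stuck peer (server name from the post; commands are
standard gluster CLI):

    # see how each node views server7 (run on several nodes)
    gluster peer status

    # from a good node: drop and re-add the stuck peer
    gluster peer detach server7
    gluster peer probe server7

If nodes disagree about server7's state, comparing
/var/lib/glusterd/peers/ across servers usually shows which node holds
stale peer info.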
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
>
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi,
Yes this is possible. Make sure you have cluster.weighted-rebalance enabled
for the volume and run rebalance with the start force option.
Which version of gluster are you running (we fixed a bug around this a
while ago)?
Regards,
Nithya
On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote:
> We currently have a 3 node gluster setup each has a 100TB brick (total
> 300TB,
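A minimal sketch of the steps described above (hypothetical volume
name):

    # weight DHT by brick size so the new 50TB bricks receive
    # proportionally less data than the 100TB ones
    gluster volume set myvol cluster.weighted-rebalance on

    # move existing data onto the new subvolume
    gluster volume rebalance myvol start force
    gluster volume rebalance myvol status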
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup, each node with a 100TB brick
(total 300TB, usable 100TB due to replica factor 3).
We would like to expand the existing volume by adding another 3 nodes, but
each will only have a 50TB brick. I think this is possible, but will it
affect gluster performance and, if so, by how much? Assuming we run a
rebalance with the force option, will this distribute the existing data
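The add-brick step itself would look something like this sketch
(hypothetical host names; three bricks are needed to form one new
replica 3 subvolume):

    gluster volume add-brick myvol replica 3 \
        node4:/data/brick node5:/data/brick node6:/data/brick

followed by the weighted rebalance described in the reply above.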
2013 Jun 17
1
Ability to change replica count on an active volume
Hi, all
As the title says,
I found in the official document that GlusterFS 3.3 has the ability to
change the replica count:
http://www.gluster.org/community/documentation/index.php/WhatsNew3.3
But I couldn't find any manual on how to do it.
Has this feature been added already, or will it be supported soon?
thanks.
Wang Li
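For what it's worth, later releases express this through add-brick and
remove-brick with an explicit replica count. A minimal sketch of taking
a 1 x 2 volume to replica 3 (hypothetical names):

    # add a brick and raise the replica count in one step
    gluster volume add-brick myvol replica 3 newhost:/data/brick

    # trigger self-heal to copy existing data to the new brick
    gluster volume heal myvol full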
2011 Jan 14
1
mixing tcp/ip and ib/rdma in distributed replicated volume for disaster recovery.
Hi,
we would like to build a gluster storage system that combines our
need for performance with our need for disaster recovery. I saw a
couple of posts indicating that this is possible
(http://gluster.org/pipermail/gluster-users/2010-February/003862.html)
but I am not 100% sure that it is.
Let's assume I have a total of 6 storage servers and bricks and want
to spread them across 2
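One relevant knob is the volume transport, which can be set to both at
create time. A sketch assuming both networks are already configured
(hypothetical names):

    # volume reachable over TCP and RDMA simultaneously
    gluster volume create myvol replica 2 transport tcp,rdma \
        server1:/data/brick server2:/data/brick

Clients on the InfiniBand fabric can then mount via RDMA while the
remote site mounts the same volume over TCP.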
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave,
On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote:
> I've configured 6 bricks as distributed-replicated with replica 2,
> expecting that all active bricks would be usable so long as a quorum of
> at least 4 live bricks is maintained.
>
The client quorum is configured per replica subvolume and not for the
entire volume.
Since you have a
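To illustrate: with 6 bricks at replica 2 the volume is three
independent pairs, roughly

    subvolume 0: brick1  brick2
    subvolume 1: brick3  brick4
    subvolume 2: brick5  brick6

and quorum is checked inside each pair. Four live bricks overall is
therefore not enough if the two dead ones are the same pair; that
subvolume goes offline even though the volume-wide count looks fine.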
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug:
[2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle]
0-gv0-stripe-0: Failed to get stripe-size
[2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk]
0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument)
Is there a fix for this in 3.3.1 or do we need to move to git HEAD to
make this work?
M.
--
2018 Jan 24
1
fault tolerancy in glusterfs distributed volume
I have made a distributed replica 3 volume with 6 nodes, like this:
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2: 10.0.0.3:/brick
Brick3: 10.0.0.1:/brick
Brick4: 10.0.0.5:/brick
Brick5: 10.0.0.6:/brick
Brick6:
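Going by the 2 x 3 layout above, Brick1-Brick3 form one replica set and
Brick4-Brick6 the other, so each set tolerates the loss of one brick
(quorum permitting), and a whole set must fail before any of its files
become unreachable. Pending heals per brick can be watched with (volume
name from the post):

    gluster volume heal testvol info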
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Hi Mauro Tridici,
From the information provided it appears that you have placed 2 bricks of a
subvolume on one host. Please confirm.
The number of hosts that could go down without losing access to data can be
derived based on the brick configuration/distribution. Please let us know
the brick distribution plan.
Regards,
Sunil kumar Acharya
Senior Software Engineer
Red Hat
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
After adding 3 more nodes you will have 6 nodes with 2 HDs on each node.
It depends on how you are going to add the new bricks to the existing volume 'vol'.
Remember that in a given EC subvolume of 4+2, at any point in time 2 bricks can be down.
When you grow 6 x (4+2) to 12 x (4+2) you have to provide the paths of the bricks you want to add.
Suppose you want to add 6
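A sketch of the add-brick step for growing toward 12 x (4+2), assuming
one new brick per host so that no subvolume has two bricks on the same
node (hypothetical paths; bricks must be added in multiples of 6, and
each listed group of 6 becomes one new 4+2 subvolume):

    gluster volume add-brick vol \
        host1:/bricks/b7 host2:/bricks/b7 host3:/bricks/b7 \
        host4:/bricks/b7 host5:/bricks/b7 host6:/bricks/b7

    gluster volume rebalance vol start

The ordering of the brick paths determines which bricks land in the
same subvolume, which is why it matters for fault tolerance.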