search for: replicas

Displaying 20 results from an estimated 1304 matches for "replicas".

2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2, expecting that all active bricks would be usable so long as a quorum of at least 4 live bricks is maintained. However, I have just found http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ which states that "In a replica 2 volume... If we set the client-quorum
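
For reference, client quorum on a replica volume is controlled by the cluster.quorum-type option; a minimal sketch (the volume name testvol is an assumption):

    # Hypothetical volume name; enables client-side quorum enforcement
    gluster volume set testvol cluster.quorum-type auto

With replica 2 and quorum-type auto, the first brick of each pair must be up for writes, which is the behaviour the linked document describes.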
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. > The client quorum is configured per replica sub volume and not for the entire volume. Since you have a
2018 Apr 27
3
How to set up a 4 way gluster file system
Hi, I have 4 servers each with 1TB of storage set as /dev/sdb1, I would like to set these up in a raid 10 which will give me 2TB usable. So mirrored and concatenated? The command I am running is as per the documents but I get a warning error; how do I get this to proceed, please, as the documents do not say. gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
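
For context, a complete form of that command for four bricks (a 2x2 distributed-replicate layout, the gluster analogue of raid 10) might look like the sketch below; the first host/path follows the poster's naming, the rest of the brick list is an assumption. Recent releases warn that replica 2 is prone to split-brain and ask for confirmation:

    # Bricks are paired in list order: (p1,p2) and (p3,p4) form the mirrors
    gluster volume create gv0 replica 2 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
    # Answer 'y' at the split-brain warning prompt to proceed anyway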
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
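
To see what quorum settings a volume is actually running with, something like the following should work on 3.8 and later (volume name assumed):

    gluster volume get testvol cluster.quorum-type
    gluster volume get testvol cluster.quorum-count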
2017 Sep 20
3
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment / correct my upgrade procedure steps on replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is ZFS pool running as raidz2 with SSD
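
The replica 2 to replica 3 step itself typically reduces to one add-brick per replica set; a hedged sketch with assumed volume, host, and brick names:

    # Raise the replica count from 2 to 3 by adding a third brick
    gluster volume add-brick myvol replica 3 node3:/pool/gluster/brick1
    # Watch self-heal populate the new brick
    gluster volume heal myvol info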
2017 Sep 21
1
Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through. ---------- Forwarded message ---------- From: Martin Toth <snowmailer at gmail.com> Date: Thu, Sep 21, 2017 at 9:17 AM Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help] To: gluster-users at gluster.org Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com Hello all fellow GlusterFriends, I would like you to comment /
2017 Sep 21
1
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment / correct my upgrade procedure steps on replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is ZFS pool running as raidz2 with SSD
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? Kind regards, Mitja On 25/02/2018 13:55, Jim Kinney wrote: > gluster volume add-brick volname replica 3 arbiter 1 > brickhost:brickpath/to/new/arbitervol > > Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a > change in command will happen so it won't count the
2018 Feb 25
2
Convert replica 2 to replica 2+1 arbiter
Hi! I am using GlusterFS on CentOS7 with glusterfs-3.8.15 RPM version. I currently have a replica 2 running and I would like to get rid of the split-brain problem before it occurs. This is one of the possible solutions. Is it possible to add an arbiter to this volume? I have read in a thread from 2016 that this feature is planned for version 3.8. Is the feature available? If so, could you give
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a change in command will happen so it won't count the arbiter as a replica. On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič" <mitja.mihelic at arnes.si> wrote: >Hi! > >I am using GlusterFS on CentOS7 with
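
Putting that together, a conversion from replica 2 to replica 2 plus arbiter might look like this sketch (volume name, host, and path are placeholders):

    # The arbiter brick stores metadata only and acts as a tie-breaker
    gluster volume add-brick myvol replica 3 arbiter 1 arbiterhost:/bricks/arbiter/myvol
    # Confirm the arbiter shows up and heals complete
    gluster volume info myvol
    gluster volume heal myvol info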
2005 Nov 04
2
Samba PDC + OpenLDAP replica
...rtual networks (VLANs), and all worked fine. However, I decided that it would be nice (from an administrative point of view) to have all user/client data on the same departmental master OpenLDAP server, which would work as a backend for division-level Samba PDC servers in different VLANs via LDAP replicas (our department contains many subdepartments, or divisions, and most of them have their own VLANs). So, I read the Samba documentation and I understood that it is possible to make such a system, where the Samba server uses an LDAP replica as its backend. First I transferred all user/client data to ma...
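
For OpenLDAP of that era (slurpd-based replication), the master's slapd.conf would carry a replica stanza roughly like the sketch below; hostnames, DNs, and the credential are all hypothetical:

    # Master slapd.conf, OpenLDAP 2.x / slurpd era
    replogfile /var/lib/ldap/replog
    replica uri=ldap://division1-ldap.example.com:389
            binddn="cn=replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret

The slave would then name the same DN in updatedn and point updateref back at the master.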
2017 Sep 22
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Procedure looks good. Remember to back up the Gluster config files before the update: /etc/glusterfs /var/lib/glusterd If you are *not* on the latest 3.7.x, you are unlikely to be able to go back to it because the PPA only keeps the latest version of each major branch, so keep that in mind. With Ubuntu, every time you update, make sure to download and keep a manual copy of the .deb files. Otherwise you
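
The backup step amounts to archiving those two trees; a minimal sketch:

    # Snapshot gluster config and cluster state before upgrading
    tar czf gluster-config-$(date +%F).tar.gz /etc/glusterfs /var/lib/glusterd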
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi, I have 4 nodes, so a quorum would be 3 of 4. The question is, I suppose, why does the documentation give this command as an example without qualifying it? So am I running the wrong command? I want a "raid10". On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi, > > With replica 2 volumes one can easily end up in split-brains if there are
2018 Apr 27
0
How to set up a 4 way gluster file system
Hi, With replica 2 volumes one can easily end up in split-brains if there are frequent disconnects and high IOs going on. If you use replica 3 or arbiter volumes, it will guard you by using the quorum mechanism giving you both consistency and availability. But in replica 2 volumes, quorum does not make sense since it needs both the nodes up to guarantee consistency, which costs availability. If
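
A fresh volume following that advice could be created roughly as below (hosts and paths assumed); the third brick of the set holds only file metadata, so it needs far less space:

    gluster volume create gv0 replica 3 arbiter 1 \
        host1:/bricks/brick1/gv0 host2:/bricks/brick1/gv0 host3:/bricks/arbiter/gv0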
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>` . It means (or at least intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
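
Under the existing syntax, a distributed arbiter volume lists bricks so that every third one becomes the arbiter of its replica set; an assumed six-brick sketch:

    # Two replica sets; h3 and h6 end up as the arbiters
    gluster volume create testvol replica 3 arbiter 1 \
        h1:/b1 h2:/b2 h3:/b3 h4:/b4 h5:/b5 h6:/b6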
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
Hi, It should be there, see https://review.gluster.org/#/c/14502/ BR, Martin > On 25 Feb 2018, at 15:52, Mitja Mihelič <mitja.mihelic at arnes.si> wrote: > > I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? > > Kind regards, > Mitja > > On 25/02/2018 13:55, Jim Kinney wrote:
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, with S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then "remove-brick replica 2 S1". Now I have again a replica 2 gluster between S3 and S2
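
That replacement sequence comes down to roughly the sketch below (brick paths and the slave volume name are assumptions); note that reducing the replica count on remove-brick requires force:

    gluster volume add-brick myvol replica 3 S3:/bricks/brick1/myvol
    gluster volume remove-brick myvol replica 2 S1:/bricks/brick1/myvol force
    # Check which nodes geo-replication now reports as active/passive
    gluster volume geo-replication myvol S5::slavevol status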
2017 Sep 16
0
Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends, I would like you to comment / correct my upgrade procedure steps on replica 2 volume of 3.7.x gluster. Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has. Infrastructure setup: - all clients running on same nodes as servers (FUSE mounts) - under gluster there is ZFS pool running as raidz2 with SSD
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status of the second brick. If only the second brick is up,
2010 Nov 27
1
GlusterFS replica question
Hi, For a small lab environment I want to use GlusterFS with only ONE node. After some time I would like to add the second node as the redundant node (replica). Is it possible in GlusterFS 3.1 without downtime? Cheers PK
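
On later releases this is an online add-brick that raises the replica count (whether 3.1 already allowed changing the replica count this way is less certain); a sketch with assumed host and path names:

    gluster peer probe node2
    # Raise the replica count to 2; self-heal copies existing data to the new brick
    gluster volume add-brick gv0 replica 2 node2:/bricks/brick1/gv0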