similar to: Question about replication

Displaying 20 results from an estimated 20000 matches similar to: "Question about replication"

2012 Feb 28
2
Dovecot clustering with dsync-based replication
This document describes a design for a dsync-replicated Dovecot cluster. This design can be used to build at least two different types of dsync clusters, which are both described here. Ville has also drawn overview pictures of these two setups, see http://www.dovecot.org/img/dsync-director-replication.png and http://www.dovecot.org/img/dsync-director-replication-ssh.png First of all, why dsync
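For context, the per-server configuration that dsync-based replication builds on is small; a minimal sketch, assuming the stock notify/replication plugins and an illustrative peer hostname:

    # dovecot.conf on each of the two servers
    mail_plugins = $mail_plugins notify replication
    plugin {
      mail_replica = tcp:mail2.example.com   # the other server; hostname illustrative
    }

Replication state can then be inspected with `doveadm replicator status '*'`.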
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 Gluster 3.12 setup between S1 and S2 (1 brick per node), geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then a "remove-brick replica 2 S1". Now I again have a replica 2 gluster between S3 and S2
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from a new node, the following commands should be run to generate and distribute the ssh keys: 1. Generate ssh keys from the new node: #gluster system:: execute gsec_create 2. Push the new node's ssh keys to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep. But note that
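Spelled out with the full CLI syntax (placeholders exactly as in the post; the stop/start step is the post's step 3 written out):

    # 1. on the new node: generate the ssh keys
    gluster system:: execute gsec_create
    # 2. on a master node: push the new keys to the slave
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
    # 3. restart the session
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start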
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>`. It means (or at least is intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
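Side by side, with illustrative volume and brick names (the proposed form is an assumption completed from the truncated excerpt):

    # existing syntax: 1 of the 3 listed bricks acts as the arbiter
    gluster volume create testvol replica 3 arbiter 1 h1:/b1 h2:/b2 h3:/b3
    # proposed syntax: 2 data copies plus 1 arbiter, same brick list
    gluster volume create testvol replica 2 arbiter 1 h1:/b1 h2:/b2 h3:/b3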
2017 Jan 16
2
Initial replication halts with "The handle is invalid." (msDS-NC-Replica-Locations corrupted?)
On Sun, 15 Jan 2017 20:14:12 -0500 Adam Tauno Williams via samba <samba at lists.samba.org> wrote: > On Sun, 2017-01-15 at 14:39 -0500, Adam Tauno Williams via samba > wrote: > > Adding a Windows2008RC to a SerNet S4 4.5.3 (forest level 2008R2) > > domain hangs at replication CN=Configuration received 1630 out of > > approximately 1663 objects. > > Only
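The msDS-NC-Replica-Locations values named in the subject live on the crossRef objects under the Partitions container; a hedged way to inspect them on a Samba DC (base DN illustrative):

    ldbsearch -H /var/lib/samba/private/sam.ldb \
      -b 'CN=Partitions,CN=Configuration,DC=example,DC=com' \
      '(objectClass=crossRef)' msDS-NC-Replica-Locations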
2017 Nov 13
1
halo replication not working
Hi all, I have two data centers in two different regions (A and B). I have created a glusterfs volume with replica 3; one replica is in one region and the other two replicas are in the other region. I have enabled the halo replication feature. I mount the volume in data center A with its public IP and mount the volume in data center B with its public IP. When I copy data in data center A, the data is
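For reference, halo replication is driven by a few volume options; a minimal sketch, assuming an illustrative volume name and a 10 ms latency threshold:

    gluster volume set myvol cluster.halo-enabled yes
    gluster volume set myvol cluster.halo-max-latency 10   # ms; bricks beyond this drop out of the halo
    gluster volume set myvol cluster.halo-min-replicas 2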
2017 Aug 25
2
GlusterFS as virtual machine storage
On 8/25/2017 12:56 AM, Gionatan Danti wrote: > > >> WK wrote: >> 2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2 >> node with a VM > > This is true even if I manage locking at application level (via > virlock or sanlock)? We ran Rep2 for years on 3.4. It does work if you are really, really careful. But in a crash on one side, you might
2017 Aug 25
4
GlusterFS as virtual machine storage
> This is true even if I manage locking at application level (via virlock > or sanlock)? Yes. Gluster has its own quorum; you can disable it, but that's just a recipe for disaster. > Also, on a two-node setup it is *guaranteed* for updates to one node to > put offline the whole volume? I think so, but I never took the chance, so who knows. > On the other hand, a 3-way
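The quorum behavior being discussed is controlled by ordinary volume options; volume name illustrative:

    gluster volume get myvol cluster.quorum-type          # none | auto | fixed
    gluster volume set myvol cluster.quorum-type auto     # client-side quorum
    gluster volume set myvol cluster.server-quorum-type server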
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. The client quorum is configured per replica subvolume and not for the entire volume. Since you have a
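To illustrate the per-subvolume point: in a 6-brick replica 2 create command, replica pairs form in argument order, and client quorum is evaluated inside each pair rather than across all six bricks (names illustrative):

    gluster volume create distrep replica 2 \
      h1:/b1 h2:/b2 h3:/b3 h4:/b4 h5:/b5 h6:/b6
    # replica pairs: (h1,h2) (h3,h4) (h5,h6)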
2020 Apr 27
2
Quota calculation with MySQL backend and replication.
> On 27/04/2020 12:35 Reio Remma <reio at mrstuudio.ee> wrote: > > > On 27.04.2020 12:15, Reio Remma wrote: > > Hello! > > > > Over the weekend I converted our Dovecot server from Maildir quota to > > MySQL backed quota and then provisioned a fresh replica server and > > seeded it via Dovecot replication. > > > > This morning most
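The MySQL-backed quota in question is normally wired through Dovecot's dict proxy; a minimal sketch of that configuration (dict name and file path illustrative), plus the command that forces a recount when stored values drift:

    plugin {
      quota = dict:User quota::proxy::quota
    }
    dict {
      quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
    }

    # recompute one user's quota from the actual mailbox contents
    doveadm quota recalc -u someuser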
2017 Aug 25
2
GlusterFS as virtual machine storage
On 23-08-2017 18:51 Gionatan Danti wrote: > On 23-08-2017 18:14 Pavel Szalbot wrote: >> Hi, after many VM crashes during upgrades of Gluster, losing network >> connectivity on one node etc. I would advise running replica 2 with >> arbiter. > > Hi Pavel, this is bad news :( > So, in your case at least, Gluster was not stable? Something as simple > as an
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
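For completeness, the "fixed" quorum mode lets you pin the required brick count instead of privileging the first brick, at the cost of split-brain protection (volume name illustrative):

    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 1   # allows writes with either brick up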
2005 Mar 04
1
[OT] - Why should I answer a Newbie question, there thick!
> -----Original Message----- > From: Ronald Wiplinger [mailto:ronald@elmit.com] > Sometimes it is not the "if" you make a search; often, for > newcomers, it is > "what" to ask for. > If you do not know the specific term, then you need to ask somewhere, > and I think the list is good for that. Sure. So say, "I tried Googling for X, but I didn't
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2, expecting that all active bricks would be usable so long as a quorum of at least 4 live bricks is maintained. However, I have just found http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ which states that "In a replica 2 volume... If we set the client-quorum
2004 Feb 22
2
LDAP replication
Hi all, I know this is not an LDAP list, but I'm setting up a Samba LDAP BDC; I think many of you have experience with this. I set up a replica; I haven't done the following. I followed: 1. http://howto.aphroland.de/HOWTO/LDAP/ReplicationOverSSLConfigureOpenLDAP 2. http://howto.aphroland.de/HOWTO/LDAP/ReplicationOverSSLSlaveServer 3.
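For OpenLDAP of that era the HOWTOs above describe slurpd-style replication; a minimal sketch of the master-side slapd.conf, with all names and credentials illustrative:

    replogfile /var/lib/ldap/replog
    replica uri=ldaps://bdc.example.com:636
            binddn="cn=replica,dc=example,dc=com"
            bindmethod=simple
            credentials=secret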
2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but am getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
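The brick count must equal stripe COUNT times replica COUNT, so four bricks allow stripe 2 replica 2 but not stripe 4 replica 2 (which would need 8). A corrected sketch, noting that combined striped-replicated volumes only exist from glusterfs 3.3 on:

    gluster volume create cloud stripe 2 replica 2 transport tcp \
      nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool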
2005 Mar 04
3
[OT] - Why should I answer a Newbie question, there thick!
> -----Original Message----- > From: Paul Fielding [mailto:paul@fielding.ca] > Frankly, I agree. If you don't like the question, feel it's > lame or dumb, > or don't like that someone hasn't done their research, then > delete the message. Well, sometimes that works. But I've been on a lot of lists where newbies who thought they were being ignored
2011 Dec 01
2
Creating striped replicated volume
We are having trouble creating a stripe 2 replica 2 volume across 4 hosts: user@gluster-fs-host-0:/gfsr$ sudo gluster volume create sr stripe 2 replica 2 glusterfs-host-0:/gfsr glusterfs-host-1:/gfsr glusterfs-host-2:/gfsr glusterfs-host-3:/gfsr wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> We are on glusterfs 3.2.5
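Here the arithmetic is fine (2 x 2 = 4 bricks); the command fails because glusterfs 3.2.5 predates combined striped-replicated volumes, which arrived in 3.3, so the CLI rejects "replica" after "stripe". A quick check before retrying:

    glusterfs --version   # needs >= 3.3 for stripe + replica in one volume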
2011 Aug 24
1
Adding/Removing bricks/changing replica value of a replicated volume (Gluster 3.2.1, OpenSuse 11.3/11.4)
Hi! Until now, I have used Gluster in a 2-server setup (volumes created with replica 2). Upgrading the hardware, it would be helpful to extend the volume to replica 3 to integrate the new machine, adding the respective brick, and later to reduce it back to 2, removing the respective brick, once the old machine is decommissioned and no longer used. But it seems that this requires deleting and
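On releases from 3.3 onward the replica count can be changed in place rather than by recreating the volume; a hedged sketch with illustrative names:

    # grow replica 2 -> replica 3 by adding the new machine's brick
    gluster volume add-brick myvol replica 3 newhost:/brick
    # later, shrink back to replica 2 and drop the old machine's brick
    gluster volume remove-brick myvol replica 2 oldhost:/brick force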
2017 Oct 10
2
samba getting stuck, highwatermark replication issue?
Hi James, Thanks for the quick reply. On 10/09/2017 08:52 PM, lingpanda101 via samba wrote: > You should be able to fix the 'replPropertyMetaData' errors with; > > samba-tool dbcheck --cross-ncs --fix --yes > 'fix_replmetadata_unsorted_attid' Yep, worked great! Fixed all of those replPropertyMetaData errors! :-) > The highwatermark doesn't necessarily
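Once the database errors are fixed, replication can be nudged and verified from the command line; DC and partition names below are illustrative:

    samba-tool drs showrepl
    samba-tool drs replicate DC2 DC1 DC=example,DC=com --full-sync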