Displaying 20 results from an estimated 12240 matches for "replicability".
2011 Feb 19
2
reading simulations
Hi to all the people (again),
I'm running some simulations of a function of my own with the memisc package, but
I have the problem that I don't know how to read the results of such
simulations. My function is:
> Torre<-function(a1,N1,a2,N2)
+ {Etorre<-(a1*N1)/(1+a1*N1)
+ Efuera<-(a2*N2)/(1+a2*N2)
+ if(Etorre>Efuera)Subir=TRUE
+ if (Etorre<Efuera)Subir=FALSE
+
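A minimal sketch of how the function could return its result so repeated runs become readable; this uses base R's replicate rather than any memisc-specific helper, the parameter values are made up for illustration, and the undefined tie case (Etorre == Efuera) is collapsed to FALSE:

Torre <- function(a1, N1, a2, N2) {
  Etorre <- (a1 * N1) / (1 + a1 * N1)   # saturating response for the tower option
  Efuera <- (a2 * N2) / (1 + a2 * N2)   # saturating response for the outside option
  Etorre > Efuera                       # return TRUE when the tower option wins
}

res <- replicate(1000, Torre(0.5, runif(1, 0, 10), 0.3, runif(1, 0, 10)))
table(res)   # how often each outcome occurred

Returning the comparison (instead of assigning Subir and discarding it) is what makes the results of many runs collectable.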
2012 Feb 28
2
Dovecot clustering with dsync-based replication
This document describes a design for a dsync-replicated Dovecot cluster.
This design can be used to build at least two different types of dsync
clusters, which are both described here. Ville has also drawn overview
pictures of these two setups, see
http://www.dovecot.org/img/dsync-director-replication.png and
http://www.dovecot.org/img/dsync-director-replication-ssh.png
First of all, why dsync
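For orientation, a minimal sketch of the per-server replication settings such a cluster builds on (the hostname and vmail user are placeholders, not taken from the design document):

mail_plugins = $mail_plugins notify replication
service replicator {
  process_min_avail = 1
}
dsync_remote_cmd = ssh -l%{login} %{host} doveadm dsync-server -u%u
plugin {
  mail_replica = remote:vmail@host2.example.com
}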
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
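For reference, a filled-in version of step 2 on one brick could look like this (the brick path and file name are placeholders):
getfattr -d -e hex -m . /data/brick1/home/path/to/file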
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Mar 21
3
replicator crashing - oom
I have the following in my log:
Mar 21 14:46:59 bubba dovecot: replicator: Panic: data stack: Out of
memory when allocating 1073741864 bytes
Mar 21 14:46:59 bubba dovecot: replicator: Error: Raw backtrace:
/usr/local/lib/dovecot/libdovecot.so.0(+0x97c90) [0x7f4638a7cc90] ->
/usr/local/lib/dovecot/libdovecot.so.0(+0x97d6e) [0x7f4638a7cd6e] ->
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
2019 Mar 07
2
Removing a mailbox from a dovecot cluster
On Tue, 5 Mar 2019 at 10:08, Gerald Galster via dovecot <dovecot at dovecot.org>
wrote:
>
> you could try to stop replication for the user you are deleting:
> doveadm replicator remove [-a replicator_socket_path] username
>
>
Good idea! But I have a problem. I tried to stop the replication (doveadm
replicator remove <myuser>) for a user on both servers. Verified with
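As a sketch, the removal plus a verification step could look like this on each server (the address is a placeholder):
doveadm replicator remove bob@example.com
doveadm replicator status 'bob@example.com'   # should no longer list the user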
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the geo-replication status command is
run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if a geo-replication session exists.
From the error it looks like node "arbiternode.domain.tld" in the master
cluster is down or unreachable.
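For example, with a placeholder volume name:
gluster volume stop myvolume force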
regards
Aravinda VK
On 08/07/2017 10:01 PM, mabi wrote:
> Hi,
>
2018 May 08
2
replicator: User listing returned failure
> I don't know if it makes a difference, I don't have quotes on my
> mail_plugins:
I don't have quotes either (it's a difference between the config file and the
dovecot -n output).
> Did you check permissions on the replication fifos?
Why? I think the problem is on the slave. How should it work in
automatic mode?
I repeat: run manually from the slave, everything works fine:
>> As I
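Incidentally, a quick permissions check on the notify fifo would be (the path assumes Dovecot's default base_dir):
ls -l /var/run/dovecot/replication-notify-fifo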
2017 Mar 22
2
replicator crashing - oom
Where would I find the core file? I'm not finding anything obvious.
The replicator path is /usr/local/libexec/dovecot/replicator
Daniel
On 3/22/2017 12:52 AM, Aki Tuomi wrote:
> Can you provide us a gdb bt full dump?
>
> gdb /usr/libexec/dovecot/replicator /path/to/core
>
> on some systems, it's /usr/lib/dovecot/replicator
>
> Aki
>
> On 21.03.2017 23:48,
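If the core file is hard to locate, one way to track it down (the coredumpctl commands assume a systemd platform):
cat /proc/sys/kernel/core_pattern   # shows where the kernel writes cores
coredumpctl list replicator         # list matching crashes, on systemd systems
coredumpctl gdb replicator          # open gdb on the newest core, then: bt full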
2016 Jul 31
4
Sieve Script Replication Gliches (Report #2)
Hi,
I've observed some odd behaviour with dsync replication between two
hosts, specifically to do with sieve script replication.
In short, I have two hosts which replicate in a master-master type setup
where almost all of the reads and writes happen to just one of the two
hosts.
They are both running 2.2.devel (9dc6403), which is close to the latest
2.2 git. Pigeonhole is running
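A quick way to compare what each host has stored for a user is Pigeonhole's script listing, run on both hosts (the address is a placeholder):
doveadm sieve list -u bob@example.com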
2017 Aug 01
3
How to delete geo-replication session?
Hi,
I would like to delete a geo-replication session on my GlusterFS 3.8.11 replica 2 volume in order to re-create it. Unfortunately the "delete" command does not work, as you can see below:
$ sudo gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
Staging failed on arbiternode.domain.tld. Error: Geo-replication session between myvolume and
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details, have a quick look at my previous post here:
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail.
Please perform the following steps once a new node is added:
- Run gsec create command again
gluster system:: execute gsec_create
- Run Geo-rep create command with force and run start force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol>
create push-pem force
gluster volume geo-replication <mastervol>
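Putting the steps together with placeholder names, the full sequence would look something like:
gluster system:: execute gsec_create
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
gluster volume geo-replication mastervol slavehost::slavevol start force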
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
Hi...
Started playing with gluster, and the heal function is my "target" for
testing.
Short description of my test
----------------------------
* 4 replicas on single machine
* glusterfs mounted locally
* Create file on glusterfs-mounted directory: date >data.txt
* Append to file on one of the bricks: hostname >>data.txt
* Trigger a self-heal with: stat data.txt
=>
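To confirm whether the self-heal actually ran, on gluster versions that support it the heal status can be queried (the volume name is a placeholder):
gluster volume heal testvol info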
2011 Mar 04
1
swapping database mid replication
The Xapian documentation talks about a situation where you don't want to
swap databases twice while a replication is in progress. It says...
To confuse the replication system, the following needs to happen:
1. Start with two databases, A and B.
2. Start a replication of database A.
3. While the replication is in progress, swap B in place of A (ie, by
moving the files around, such that B
2017 Sep 29
1
2.2.32 'doveadm replicator replicate -f' segfault
Very minor bug; not specifying the user mask with 'doveadm replicator
replicate -f' causes a segfault:
server:~# doveadm replicator replicate -f
Segmentation fault
server:~# doveadm replicator replicate -f '*'
123 users updated
server:~# gdb /usr/bin/doveadm core.2418
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
This GDB was configured as
2019 Dec 17
2
DSync replication error
Please tell me about the dsync replicator.
The following is set to enable the "replication" plugin:
/etc/dovecot/conf.d/10-mail.conf
mail_plugins = notify replication
And I made the following file:
/etc/dovecot/conf.d/30-dsync.conf
service replicator {
  process_min_avail = 1
}
dsync_remote_cmd = ssh -l%{login} %{host} doveadm dsync-server
2018 Feb 21
2
Geo replication snapshot error
Hi all,
I use gluster 3.12 on centos 7.
I am writing a snapshot program for my geo-replicated cluster.
When I started running tests with my application, I found
some very strange behavior in gluster's geo-replication.
I have setup my geo-replication according to the docs:
http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
Both master and slave clusters are
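For context, my understanding of the required order when snapshotting a geo-replicated volume (all names are placeholders) is to pause the session first:
gluster volume geo-replication mastervol slavehost::slavevol pause
gluster snapshot create snap1 mastervol
gluster volume geo-replication mastervol slavehost::slavevol resume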
2011 May 07
1
Gluster "Peer Rejected"
Hello All,
I have 8 servers.
7 of the 8 say that gbe02 is in State: Peer Rejected (Connected).
gbe08 says it is connected to the other 7, but they are all State: Peer Rejected
(Connected).
So it would appear that gbe02 is out of sync with the group.
I triggered a manual self-heal by running the recommended find on a gluster
mount.
I'm stuck... I cannot find ANY docs on this
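For what it's worth, the commonly cited recovery for a rejected peer is roughly the following, run on the rejected node; this is a sketch that assumes the default /var/lib/glusterd layout, so back everything up first:
service glusterd stop
cd /var/lib/glusterd
ls | grep -v glusterd.info | xargs rm -rf   # keep only the node's UUID file
service glusterd start
gluster peer probe gbe01   # re-probe a healthy peer (gbe01 is a placeholder), then restart glusterd once more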