search for: replica2

Displaying 12 results from an estimated 12 matches for "replica2".

2017 Aug 25 | 2 | GlusterFS as virtual machine storage
...ND turn off quorum. You could then run on the single node until you can save/copy those VM images, preferably by migrating off that volume completely. Create a remote pool using SSHFS if you have nothing else available. THEN I would go back and fix the gluster cluster and migrate back into it. Replica2/Replica3 does not matter if you lose your Gluster network switch, but again the Arb or Rep3 setup makes it easier to recover. I suppose the only advantage of Replica2 is that you can use a cross over cable and not worry about losing the switch, but bonding/teaming works well and there are bondi...
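A rough sketch of that rescue path on a two-node replica, assuming a volume called gv0 mounted at /mnt/gv0 and a spare machine reachable as rescuehost (all names are placeholders, and the quorum options should be checked against your Gluster version):

# On the surviving node: relax quorum so the volume stays writable with one brick
gluster volume set gv0 cluster.quorum-type none
gluster volume set gv0 cluster.server-quorum-type none

# Build a temporary remote pool over SSHFS if nothing better is available
mkdir -p /mnt/rescue
sshfs backupuser@rescuehost:/srv/vm-rescue /mnt/rescue

# Copy or migrate the VM images off the degraded volume, then go back,
# repair the cluster and migrate them in again
rsync -av --progress /mnt/gv0/images/ /mnt/rescue/
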
2019 Oct 16 | 0 | BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting
...alid mailbox name 'Foldername-temp-1': Missing namespace prefix 'INBOX.' > I've never fixed this because I haven't figured out how to reproduce it. If it happens with you all the time, could you try: - Get a copy of both replica sides, e.g. under /tmp/replica1 and /tmp/replica2 - Make sure dsync still crashes with them, e.g. doveadm -o mail=maildir:/tmp/replica1 sync maildir:/tmp/replica2 - Delete all mails and dovecot.index* files (but not dovecot.mailbox.log) - Make sure dsync still crashes - Send me the replicas - they should no longer contain anything sensitive A...
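Written out as a shell sequence, that reproduction recipe looks roughly like this; the /tmp paths and the doveadm invocation are the examples from the message, while the source paths and the find expression used to strip mails are assumptions:

# Work on copies of both replica sides
cp -a /path/to/sideA /tmp/replica1
cp -a /path/to/sideB /tmp/replica2

# Confirm dsync still crashes against the copies
doveadm -o mail=maildir:/tmp/replica1 sync maildir:/tmp/replica2

# Delete all mails and dovecot.index* files, but keep dovecot.mailbox.log
find /tmp/replica1 /tmp/replica2 -type f \
    \( -path '*/cur/*' -o -path '*/new/*' -o -name 'dovecot.index*' \) -delete

# Re-check that the crash still reproduces before sending the stripped replicas
doveadm -o mail=maildir:/tmp/replica1 sync maildir:/tmp/replica2
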
2019 Oct 17 | 1 | BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting
...rname-temp-1': Missing > namespace prefix 'INBOX.' > > I've never fixed this because I haven't figured out how to reproduce it. > If it happens with you all the time, could you try: > > - Get a copy of both replica sides, e.g. under /tmp/replica1 and > /tmp/replica2 > - Make sure dsync still crashes with them, e.g. doveadm -o > mail=maildir:/tmp/replica1 sync maildir:/tmp/replica2 > - Delete all mails and dovecot.index* files (but not dovecot.mailbox.log) > - Make sure dsync still crashes > - Send me the replicas - they should no longer cont...
2017 Aug 25 | 0 | GlusterFS as virtual machine storage
...uld then run on the single node until you can > save/copy those VM images, preferably by migrating off that volume > completely. Create a remote pool using SSHFS if you have nothing else > available. THEN I would go back and fix the gluster cluster and > migrate back into it. > > Replica2/Replica3 does not matter if you lose your Gluster network > switch, but again the Arb or Rep3 setup makes it easier to recover. I > suppose the only advantage of Replica2 is that you can use a cross > over cable and not worry about losing the switch, but bonding/teaming > works well and...
2017 Aug 25 | 0 | GlusterFS as virtual machine storage
On 25-08-2017 08:32 Gionatan Danti wrote: > Hi all, > any other advice from those who use (or do not use) Gluster as a replicated > VM backend? > > Thanks. Sorry, I was not seeing the messages because I was not subscribed to the list; I read it from the web. So it seems that Pavel and WK have vastly different experiences with Gluster. Any plausible cause for that difference? > WK
2017 Aug 25 | 2 | GlusterFS as virtual machine storage
On 23-08-2017 18:51 Gionatan Danti wrote: > On 23-08-2017 18:14 Pavel Szalbot wrote: >> Hi, after many VM crashes during upgrades of Gluster, losing network >> connectivity on one node etc. I would advise running replica 2 with >> arbiter. > > Hi Pavel, this is bad news :( > So, in your case at least, Gluster was not stable? Something as simple > as an
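For reference, a replica 2 + arbiter volume is created along these lines; the volume name, hosts and brick paths below are placeholders, not details from the thread:

# Two data bricks plus one arbiter brick (the arbiter holds only metadata)
gluster volume create gv0 replica 3 arbiter 1 \
    node1:/bricks/gv0 node2:/bricks/gv0 arbiter1:/bricks/gv0
gluster volume start gv0
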
2019 Sep 25 | 4 | BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting
Hi all! I have two dovecot servers with dsync replication over tcp. Replication works fine except for one user.

# doveadm replicator status
username                 priority  fast sync  full sync  success sync  failed
customer at example.com  none      00:00:33   07:03:23   03:22:31      y

If I run dsync manually, I get the following error message:

dsync-local(customer at example.com): Debug: brain M: -- Mailbox
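A verbose manual run is a common next step for a single stuck user; the address is the example one from the post, -D turns on doveadm debug logging, and -d assumes the replication peer is configured via the usual mail_replica setting:

# Replication state for the affected account only
doveadm replicator status 'customer@example.com'

# Re-run the sync for that user with debug output
doveadm -D sync -u customer@example.com -d
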
2017 Aug 25 | 0 | GlusterFS as virtual machine storage
On 8/25/2017 2:21 PM, lemonnierk at ulrar.net wrote: >> This concerns me, and it is the reason I would like to avoid sharding. >> How can I recover from such a situation? How can I "decide" which >> (reconstructed) file is the one to keep rather than to delete? >> > No need, on a replica 3 that just doesn't happen. That's the main > advantage of it,
2017 Aug 25 | 2 | GlusterFS as virtual machine storage
> > This concerns me, and it is the reason I would like to avoid sharding. > How can I recover from such a situation? How can I "decide" which > (reconstructed) file is the one to keep rather than to delete? > No need, on a replica 3 that just doesn't happen. That's the main advantage of it, that and the fact that you can perform operations on your servers
2017 Jul 13 | 2 | Replicated volume, one slow brick
I have been trying to figure out how glusterfs-fuse client will handle it when 1 of 3 bricks in a 3-way replica is slower than the others. It looks like a glusterfs-fuse client will send requests to all 3 bricks when accessing a file. But what happens when one of the bricks is not responding in time? We saw an issue when we added external load to the raid volume where the brick was located. The
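One way to confirm which brick is lagging is Gluster's built-in profiling, which reports per-brick FOP latencies; gv0 is a placeholder volume name:

# Collect per-brick latency statistics while reproducing the load
gluster volume profile gv0 start
gluster volume profile gv0 info
gluster volume profile gv0 stop
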
2017 Aug 26 | 2 | GlusterFS as virtual machine storage
On 26-08-2017 01:13 WK wrote: > Big +1 on what Kevin just said. Just avoiding the problem is the > best strategy. Ok, never run Gluster with anything less than a replica2 + arbiter ;) > However, for the record, and if you really, really want to get deep > into the weeds on the subject, then the Gluster people have docs on > Split-Brain recovery. > > https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/ > > and if you Google...
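For the CLI route described in that troubleshooting document, the flow is roughly the following; gv0, the brick and the file path are placeholders:

# List files currently in split-brain
gluster volume heal gv0 info split-brain

# Resolve a file by policy, e.g. keep the copy with the newest mtime...
gluster volume heal gv0 split-brain latest-mtime /images/vm01.qcow2

# ...or explicitly pick one brick's copy as the healing source
gluster volume heal gv0 split-brain source-brick node1:/bricks/gv0 /images/vm01.qcow2
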
2017 Oct 09 | 3 | Peer isolation while healing
Hi everyone, I've been using gluster for a few months now, on a simple two-peer replicated infrastructure, 22TB each. One of the peers was offline for 10 hours last week (RAID resync after a disk crash), and while my gluster server was healing bricks, I could see some write errors on my gluster clients. I couldn't find a way to isolate my healing peer, in the documentation or
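There is no single "isolate this peer" switch that the thread points to, but a mitigation sometimes tried while a brick resyncs is to stop client mounts from triggering heals themselves and to throttle the self-heal daemon; whether these options apply depends on the Gluster release, and gv0 is a placeholder:

# Keep client mounts from performing heals themselves
gluster volume set gv0 cluster.data-self-heal off
gluster volume set gv0 cluster.metadata-self-heal off
gluster volume set gv0 cluster.entry-self-heal off

# Throttle the self-heal daemon so it competes less with client I/O
gluster volume set gv0 cluster.shd-max-threads 1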