similar to: Asterisk database weird behaviour

Displaying 20 results from an estimated 6000 matches similar to: "Asterisk database weird behaviour"

2011 Nov 23
1
Corosync init-script broken on CentOS6
Hello all, I am trying to create a corosync/pacemaker cluster using CentOS 6.0. However, I'm having a great deal of difficulty doing so. Corosync has a valid configuration file and an authkey has been generated. When I run /etc/init.d/corosync I see that only corosync is started. From experience working with corosync/pacemaker before, I know that this is not enough to have a functioning
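On CentOS 6 the corosync init script starts only corosync itself; with the corosync 1.x/pacemaker combination shipped there, pacemaker is either loaded as a corosync plugin or started by its own init script. A minimal sketch of the usual service.d drop-in (path and values are an example, not taken from the thread):

    # /etc/corosync/service.d/pcmk
    service {
        name: pacemaker
        ver: 1      # ver: 1 = start pacemaker via its own init script
                    # ver: 0 = let the corosync plugin spawn the pacemaker daemons
    }

With ver: 1 you still have to run "service pacemaker start" (and chkconfig it on) after corosync comes up.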
2011 Sep 29
1
CentOS 6: corosync and pacemaker won't stop (patch)
Hi, I cannot 'halt' my CentOS 6 servers while running corosync+pacemaker. I believe corosync and pacemaker are not stopped in the correct order at shutdown, which causes the infinite "Waiting for corosync services to unload..." loop. This is my first time with this cluster technology, but apparently pacemaker has to be stopped /before/ corosync. Applying the following
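For context, the stop order comes from the chkconfig/LSB headers in the init scripts: kill scripts run in ascending order of their stop priority, so pacemaker needs a lower stop number than corosync. A sketch with illustrative numbers (not the actual shipped priorities):

    # /etc/init.d/corosync
    # chkconfig: - 20 80
    # /etc/init.d/pacemaker
    # chkconfig: - 21 79

    # Start priorities: 20 < 21, so corosync starts before pacemaker.
    # Stop priorities: K79pacemaker runs before K80corosync, so pacemaker
    # is shut down first and corosync can unload its services cleanly.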
2016 Nov 25
1
Pacemaker bugs?
Hi! I think I stumbled on at least two bugs in the CentOS 7.2 pacemaker package, though I'm not quite sure if or where to report them. I'm using the following package to set up a 2-node active/passive cluster: [root at clnode1 ~]# rpm -q pacemaker pacemaker-1.1.13-10.el7_2.4.x86_64 The installation is up-to-date on both nodes as of the
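For reference, a minimal 2-node active/passive setup with pcs on EL7 might look like the sketch below; the hostnames, the VIP and the disabled fencing are placeholders for illustration only (a production cluster should configure stonith):

    pcs cluster auth clnode1 clnode2 -u hacluster
    pcs cluster setup --name testcluster clnode1 clnode2
    pcs cluster start --all
    pcs property set no-quorum-policy=ignore   # common on 2-node clusters
    pcs property set stonith-enabled=false     # testing only
    pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.122.100 cidr_netmask=24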
2012 Aug 15
1
ocfs2_controld binary
I have been reading loads of threads on different mailing lists about ocfs2_controld, so has anyone ever built the cluster stack (OpenAIS, Pacemaker, Corosync + OCFS2 1.4) from source and got the o2cb agent working with Pacemaker? Got this from messages: /var/log/messages:Aug 14 15:05:20 ip-172-16-2-12 o2cb(resO2CB:0)[4239]: ERROR: Setup problem: couldn't find command:
2016 Feb 13
0
Ocfs2 with corosync and pacemaker on oracle Linux 7
Dear All, I would like your advice on whether anyone has set up ocfs2 on Oracle Linux 7 (free) with corosync and pacemaker. I've searched the net, but all the guides seem to date from around 2012 and are somewhat outdated, or I don't get them. I know that the default ocfs2 cluster stack, o2cb, doesn't support the locking that CTDB needs, so it has to be changed to pacemaker and corosync. Please help. Regards, Min Wai.
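As a heavily hedged sketch of the pacemaker-stack route (device name, cluster name and mount point are placeholders, and details may differ on Oracle Linux 7): the filesystem is labelled for the pcmk stack and the DLM is run under pacemaker, for example:

    # label the filesystem for the pacemaker (pcmk) stack instead of o2cb;
    # the cluster name must match the corosync cluster name
    mkfs.ocfs2 --cluster-stack=pcmk --cluster-name=myclu /dev/sdb1

    # run dlm_controld under pacemaker and mount the filesystem on all nodes
    pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s clone interleave=true
    pcs resource create sharedfs ocf:heartbeat:Filesystem device=/dev/sdb1 directory=/srv/shared fstype=ocfs2 clone interleave=true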
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi, We're setting up fairly large Lustre 2.1.2 filesystems, each with 18 nodes and 159 resources all in one Corosync/Pacemaker cluster as suggested by our vendor. We're getting mixed messages from our vendor and others on how large a Corosync/Pacemaker cluster will work well. 1. Are there Lustre Corosync/Pacemaker clusters out there of this size or larger? 2.
2012 Dec 11
4
Configuring Xen + DRBD + Corosync + Pacemaker
Hi everyone, I need some help setting up my failover configuration. My goal is to have a redundant system using Xen + DRBD + Corosync + Pacemaker. On Xen I will have one virtual machine. When this computer's network goes down, I will do a live migration to the second computer. The first thing I will need is a crossover cable, won't I? Is it really necessary? Ok, I did it. eth0
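On the crossover question: a dedicated back-to-back link is not strictly required, but it is the usual recommendation for carrying DRBD replication and corosync traffic. A minimal DRBD resource sketch for the VM's backing device (hostnames, LV names and the 10.0.0.x addresses on the direct link are placeholders):

    # /etc/drbd.d/xenvm.res
    resource xenvm {
        protocol C;                   # synchronous replication
        on nodea {
            device    /dev/drbd0;
            disk      /dev/vg0/xenvm;
            address   10.0.0.1:7788;  # address on the direct link
            meta-disk internal;
        }
        on nodeb {
            device    /dev/drbd0;
            disk      /dev/vg0/xenvm;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }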
2020 Jan 11
1
Dovecot HA/Resilience
If you just want active/standby, you can simply use corosync/pacemaker as others already suggested and don't use Director. I have a dovecot HA server that uses a floating IP and pacemaker to manage it, and it works quite well. The only really hard part is having HA storage. You can simply use NFS storage shared by both servers (as long as only one has the floating IP, you won't have issues with the
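A sketch of the floating-IP part of such a setup with pcs (the address and resource names are placeholders, not taken from the poster's cluster):

    pcs resource create dovecot_vip ocf:heartbeat:IPaddr2 ip=192.0.2.50 cidr_netmask=24 op monitor interval=10s
    pcs resource create dovecot systemd:dovecot op monitor interval=30s
    pcs constraint colocation add dovecot with dovecot_vip INFINITY
    pcs constraint order dovecot_vip then dovecot

The mail store (NFS or a cluster filesystem) still has to be mounted and consistent on both nodes.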
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc. https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master The .service files are the stuff going into systemd, and they call the test-mounts.sh scripts. The playbook that does the installing is higher up in the directory. > On 05 Jun 2017,
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
----- Original Message ----- From: "hvjunk" <hvjunk at gmail.com> To: "Adam Ru" <ad.ruckel at gmail.com> Cc: gluster-users at gluster.org Sent: Monday, June 5, 2017 9:29:03 PM Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Sorry, got sidetracked with invoicing etc.
2011 Apr 12
0
5.5->5.6 upgrade hiccough
I spoke a bit too soon about the upgrade being flawless. It turns out that one subsystem started failing after the upgrade, although it is not an out-of-the-box one. I've included the description here in case someone runs into something similarly weird (especially with scripts calling MySQL?) and because although I have a work-around, I'm not sure (but can guess) as to the actual cause
2020 Jan 10
0
Dovecot HA/Resilience
Hello, you need to "clone" the first server, change the IP address, mount the same maildir storage, and use some mechanism to share the accounts database. Then you need to put a TCP load-balancer in front of the servers and you are good to go. This is the easiest solution if you already have an appliance in the network that can do LB. For instance if you already have a firewall with
2020 Jan 10
0
Dovecot HA/Resilience
Yes, but it works for small systems if you set IP source address persistence on the LB or, even better, if you set the priority to Active/Standby. I couldn't find a good example with dovecot director and backend on the same server, and adding another two machines seems overkill for small setups. If someone has a working example for this please make it public! Quote from
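For what it's worth, an untested sketch along the lines of the standard director configuration, with both hosts acting as director and backend (the 10.0.0.x addresses are placeholders; running director and backend on the same host may require binding the backend to a separate IP or port):

    director_servers = 10.0.0.1 10.0.0.2
    director_mail_servers = 10.0.0.1 10.0.0.2

    service director {
      unix_listener login/director {
        mode = 0666
      }
      fifo_listener login/proxy-notify {
        mode = 0666
      }
      inet_listener {
        port = 9090     # director ring port
      }
    }
    service imap-login {
      executable = imap-login director
    }
    service pop3-login {
      executable = pop3-login director
    }
    protocol lmtp {
      auth_socket_path = director-userdb
    }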
2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
Hi hvjunk, could you please tell me whether you've had time to check my previous post? Could you please send me the mentioned link to your Gluster Ansible scripts? Thank you, Adam On Sun, May 28, 2017 at 2:47 PM, Adam Ru <ad.ruckel at gmail.com> wrote: > Hi hvjunk (Hi Hendrik), > > "centos-release-gluster" installs "centos-gluster310". I assume it > picks the
2011 Dec 20
1
OCFS2 problems when connectivity lost
Hello, We are having a problem with a 3-node cluster based on Pacemaker/Corosync with 2 primary DRBD+OCFS2 nodes and a quorum node. Nodes run on Debian Squeeze, all packages are from the stable branch except for Corosync (which is from backports for udpu functionality). Each node has a single network card. When the network is up, everything works without any problems, graceful shutdown of
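In a setup like this, the disconnected node has to be cut off before the survivors resume I/O, which means working fencing and a strict quorum policy. A minimal sketch with crmsh (illustrative only; the actual stonith resources depend on the hardware):

    crm configure property stonith-enabled=true
    crm configure property no-quorum-policy=stop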
2018 Jul 07
1
two 2-node clusters or one 4-node cluster?
On Thu, Jul 5, 2018 at 7:10 PM, Digimer <lists at alteeve.ca> wrote: First of all thanks for all your answers, all useful in one way or another. I have yet to dig sufficiently deep into Warren's considerations, but I will do it, I promise! Very interesting arguments. The concerns of Alexander are true in an ideal world, but when your role is to be an IT consultant and you are not responsible for
2017 Jun 01
0
Floating IPv6 in a cluster (as NFS-Ganesha VIP)
Hi all, thank you very much for the support! I filed the bug: https://bugzilla.redhat.com/show_bug.cgi?id=1457724 I'll try to test it again to get some errors / warnings from the log. Best regards, Jan On Wed, May 31, 2017 at 12:25 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote: > On 05/31/2017 07:03 AM, Soumya Koduri wrote: > > +Andrew and Ken > > > > On
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote: > Hi, > > I'm trying to build an active/active virtualization cluster using a Ceph > RBD as backing for each libvirt-managed LXC. I know live migration for LXC > isn't yet possible, but I'd like to build my infrastructure as if it were. > That is, I would like to be sure proper locking is in place for
2020 Jan 10
2
Dovecot HA/Resilience
Also you should probably use dovecot director to ensure the same user's sessions end up on the same server, as accessing the same user on different backends is not supported in this scenario. Aki > On 10/01/2020 19:49 Adrian Minta <adrian.minta at gmail.com> wrote: > > > > Hello, > > you need to "clone" the first server, change the ip address, mount the same
2011 Mar 11
1
Samba in Pacemaker-Cluster: CTDB fails to get recovery lock
I'm currently testing fail-over with a two-node active-active cluster (with node dig and node dag): Both nodes are up, one is manually killed. CTDB on the node that's still alive should perform a recovery and everything should be working again. What's infrequently happening is: After killing the pacemaker process on dag (and dag consequently being fenced), dig's CTDB tries to
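For reference, the recovery lock is configured in CTDB's sysconfig/default file and has to live on storage shared by both nodes (the path below is a placeholder):

    # /etc/sysconfig/ctdb (or /etc/default/ctdb on Debian-based systems)
    CTDB_RECOVERY_LOCK=/clusterfs/ctdb/.recovery_lock

If the surviving node cannot take this lock after the other node is fenced, the recovery cannot complete.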