similar to: 5.5->5.6 upgrade hiccough

Displaying 20 results from an estimated 4000 matches similar to: "5.5->5.6 upgrade hiccough"

2011 Nov 23
1
Corosync init-script broken on CentOS6
Hello all, I am trying to create a corosync/pacemaker cluster using CentOS 6.0. However, I'm having a great deal of difficulty doing so. Corosync has a valid configuration file and an authkey has been generated. When I run /etc/init.d/corosync I see that only corosync is started. From experience working with corosync/pacemaker before, I know that this is not enough to have a functioning
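A minimal sketch of how the two layers are usually brought up by hand on EL6, assuming the stock init scripts and that pacemaker is installed as its own service rather than launched from corosync's service stanza:

  # start the messaging/membership layer first
  service corosync start
  # then start the cluster resource manager on top of it
  service pacemaker start
  # confirm the node and the pacemaker daemons show up
  crm_mon -1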
2011 Sep 29
1
CentOS 6: corosync and pacemaker won't stop (patch)
Hi, I cannot 'halt' my CentOS 6 servers while running corosync+pacemaker. I believe the runlevels used to stop corosync and pacemaker are not in the correct order and create the infinite "Waiting for corosync services to unload..." loop. This is my first time with this cluster technology, but apparently pacemaker has to be stopped /before/ corosync. Applying the following
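To illustrate the ordering issue: on a SysV system the K-priorities decide shutdown order, so pacemaker's kill script has to run before corosync's. The priority numbers below are placeholders, not the values from the poster's patch:

  # list the kill scripts for the halt runlevel; pacemaker must come first
  ls /etc/rc0.d/ | grep -E 'corosync|pacemaker'
  # a workable ordering (lower K-number stops earlier):
  #   K05pacemaker
  #   K20corosync
  # if corosync stops first, it waits forever for the pacemaker
  # services it still hosts to unload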
2015 Nov 10
1
OT: bacula question
On 10.11.2015 at 22:36, Devin Reade <gdr at gno.org> wrote: > > bat is a native GUI, so UNIX only. We use the bat GUI on Windows ... -- LF
2016 Nov 25
1
Pacemaker bugs?
Hi! I think I stumbled on at least two bugs in the CentOS 7.2 pacemaker package, though I'm not quite sure if or where to report them. I'm using the following package to set up a 2-node active/passive cluster: [root at clnode1 ~]# rpm -q pacemaker pacemaker-1.1.13-10.el7_2.4.x86_64 The installation is up-to-date on both nodes as of the
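For context, a minimal sketch of a two-node active/passive cluster on CentOS 7 with pcs; the node names match the poster's, but the resource and IP address are made-up placeholders, not the configuration from the report:

  pcs cluster auth clnode1 clnode2
  pcs cluster setup --name testcluster clnode1 clnode2
  pcs cluster start --all
  # two nodes cannot form a sane quorum majority on their own
  pcs property set no-quorum-policy=ignore
  # one floating IP as the active/passive resource
  pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24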
2016 Feb 13
0
Ocfs2 with corosync and pacemaker on oracle Linux 7
Dear All, I would like your advice on whether anyone has set up ocfs2 on Oracle Linux 7 (free) with corosync and pacemaker. I've searched the net, but all the guides seem to date from 2012 and are somewhat outdated, or I don't get them. I know that o2cb, the default ocfs2 cluster stack, doesn't support the locking that CTDB needs, so it has to be changed to pacemaker and corosync. Please help. Regards, Min Wai.
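A hedged sketch of the direction usually taken on an EL7-family stack: run the DLM under pacemaker and mount the ocfs2 filesystem as a cloned resource. The device path and mountpoint are placeholders, and the availability of these agents on Oracle Linux 7 is an assumption, not something verified here:

  # distributed lock manager, cloned so it runs on every node
  pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s clone interleave=true
  # the shared ocfs2 filesystem (device and mountpoint are placeholders)
  pcs resource create sharedfs ocf:heartbeat:Filesystem device=/dev/sdb1 directory=/srv/shared fstype=ocfs2 clone interleave=true
  # only mount where the DLM is already running
  pcs constraint order start dlm-clone then sharedfs-clone
  pcs constraint colocation add sharedfs-clone with dlm-clone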
2012 Aug 15
1
ocfs2_controld binary
I have been reading loads of threads on different mailing lists about ocfs2_controld, so has anyone ever built the cluster stack (openAIS, pacemaker, corosync + OCFS2 1.4) from source and got the o2cb agent working with pacemaker? Got this from messages: /var/log/messages:Aug 14 15:05:20 ip-172-16-2-12 o2cb(resO2CB:0)[4239]: ERROR: Setup problem: couldn't find command:
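A truncated error like this usually means the o2cb agent cannot find the pacemaker-aware control daemon on its PATH. A quick hedged check; the binary name is the one the agent normally looks for, and the search paths are only guesses for a source build:

  which ocfs2_controld.pcmk || echo "ocfs2_controld.pcmk not in PATH"
  # a source build may have installed it somewhere non-standard
  find /usr/local /usr/sbin -name 'ocfs2_controld*' 2>/dev/null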
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi, We're setting up fairly large Lustre 2.1.2 filesystems, each with 18 nodes and 159 resources all in one Corosync/Pacemaker cluster as suggested by our vendor. We're getting mixed messages on how large a Corosync/Pacemaker cluster will work well, between our vendor and others. 1. Are there Lustre Corosync/Pacemaker clusters out there of this size or larger? 2.
2012 Dec 11
4
Configuring Xen + DRBD + Corosync + Pacemaker
Hi everyone, I need some help setting up my failover configuration. My goal is to have a redundant system using Xen + DRBD + Corosync + Pacemaker. On Xen I will have one virtual machine. When this computer's network goes down, I will do a live migration to the second computer. The first thing I will need is a crossover cable, won't I? Is it really necessary? Ok, I did it. eth0
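A minimal, hedged sketch of the DRBD side of such a setup; the resource name, hostnames, addresses and devices are placeholders, and the dedicated crossover link would normally carry exactly this replication traffic:

  # /etc/drbd.d/vm0.res -- illustrative only
  resource vm0 {
      protocol C;
      on xen1 {
          device    /dev/drbd0;
          disk      /dev/vg0/vm0;
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on xen2 {
          device    /dev/drbd0;
          disk      /dev/vg0/vm0;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }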
2020 Jan 11
1
Dovecot HA/Resilience
If you just want active/standby, you can simply use corosync/pacemaker as others have already suggested and don't use Director. I have a dovecot HA server that uses a floating IP and pacemaker to manage it, and it works quite well. The only really hard part is having HA storage. You can simply use NFS storage shared by both servers (as long as only one has the floating IP, you won't have issues with the
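A hedged sketch of the floating-IP half of such a setup with pcs; the address, names and intervals are placeholders rather than the poster's actual configuration:

  # floating service IP that mail clients connect to
  pcs resource create dovecot-vip ocf:heartbeat:IPaddr2 ip=192.0.2.25 cidr_netmask=24 op monitor interval=10s
  # dovecot itself, managed by the cluster as a systemd service
  pcs resource create dovecot systemd:dovecot op monitor interval=30s
  # keep the service with the IP, and bring the IP up first
  pcs constraint colocation add dovecot with dovecot-vip
  pcs constraint order start dovecot-vip then dovecot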
2015 May 11
1
Bacula backup system
--On Monday, May 11, 2015 02:26:17 PM -0700 John R Pierce <pierce at hogranch.com> wrote: > never met a unix that didn't come with Perl already installed, or as a > base option SunOS-4 :) Didn't have emacs, either, nor an ANSI-C compiler. And the OS came on QIC-150 tape (ie: 150 MB total capacity). Not that that defeats the argument ... Devin
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc. https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master The .service files are the stuff going into systemd, and they call the test-mounts.sh scripts. The playbook doing the installing is higher up in the directory. > On 05 Jun 2017,
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
----- Original Message ----- From: "hvjunk" <hvjunk at gmail.com> To: "Adam Ru" <ad.ruckel at gmail.com> Cc: gluster-users at gluster.org Sent: Monday, June 5, 2017 9:29:03 PM Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Sorry, got sidetracked with invoicing etc.
2011 May 03
0
[Bug 845] Received disconnect from ???: 2: Corrupted MAC on input.
https://bugzilla.mindrot.org/show_bug.cgi?id=845 Devin Reade <gdr at gno.org> changed: CC: (added) gdr at gno.org --- Comment #12 from Devin Reade <gdr at gno.org> 2011-05-03 14:08:48 EST --- [More details for
2016 Mar 10
0
different uuids, but still "Attempt to migrate guest to same host" error
Background: ---------- I'm trying to debug a two-node pacemaker/corosync cluster where I want to be able to do live migration of KVM/qemu VMs. Storage is backed via dual-primary DRBD (yes, fencing is in place). When moving the VM between nodes via 'pcs resource move RES NODENAME', the live migration fails although pacemaker will shut down the VM and restart it on the other node.
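That libvirt error is typically raised when the destination reports the same host identity as the source, so a first hedged check is what each node actually advertises; the remote URI and config path are assumptions about a typical qemu+ssh setup:

  # host UUID as libvirt sees it, on each node
  virsh sysinfo | grep -i uuid
  virsh -c qemu+ssh://othernode/system sysinfo | grep -i uuid
  # libvirt falls back to the SMBIOS/DMI UUID unless one is pinned here
  grep host_uuid /etc/libvirt/libvirtd.conf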
2020 Jan 10
0
Dovecot HA/Resilience
Hello, you need to "clone" the first server, change the IP address, mount the same maildir storage and use some mechanism to share the accounts database. Then you need to put a TCP load-balancer in front of the servers and you are good to go. This is the easiest solution if you already have an appliance in the network that can do LB. For instance if you already have a firewall with
2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
Hi hvjunk, could you please tell me have you had time to check my previous post? Could you please send me mentioned link to your Gluster Ansible scripts? Thank you, Adam On Sun, May 28, 2017 at 2:47 PM, Adam Ru <ad.ruckel at gmail.com> wrote: > Hi hvjunk (Hi Hendrik), > > "centos-release-gluster" installs "centos-gluster310". I assume it > picks the
2011 Dec 20
1
OCFS2 problems when connectivity lost
Hello, We are having a problem with a 3-node cluster based on Pacemaker/Corosync with 2 primary DRBD+OCFS2 nodes and a quorum node. Nodes run on Debian Squeeze, all packages are from the stable branch except for Corosync (which is from backports for udpu functionality). Each node has a single network card. When the network is up, everything works without any problems, graceful shutdown of
2020 Jan 10
0
Dovecot HA/Resilience
Yes, but it works for small systems if you set IP source address persistence on the LB or, even better, if you set the priority to be Active/Standby. I couldn't find a good example with dovecot director and backend on the same server, so adding another two machines seems overkill for small setups. If someone has a working example for this please make it public! Quote from
2011 Mar 11
1
Samba in Pacemaker-Cluster: CTDB fails to get recovery lock
I'm currently testing fail-over with a two-node active-active cluster (with node dig and node dag): Both nodes are up, one is manually killed. CTDB on the node that's still alive should perform a recovery and everything should be working again. What's infrequently happening is: After killing the pacemaker process on dag (and dag consequently being fenced), dig's CTDB tries to
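For reference, a hedged sketch of where the recovery lock normally points: a file on storage that every node reaches through the cluster filesystem, otherwise the surviving node cannot take the lock over. The path below is a placeholder, not the poster's setting:

  # /etc/sysconfig/ctdb (Red Hat family) or /etc/default/ctdb (Debian family)
  CTDB_RECOVERY_LOCK=/clusterfs/ctdb/.recovery.lock
  CTDB_NODES=/etc/ctdb/nodes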
2017 Dec 08
0
GlusterFS, Pacemaker, OCF resource agents on CentOS 7
Hi, Can you please explain for what purpose the pacemaker cluster is used here? Regards, Jiffin On Thursday 07 December 2017 06:59 PM, Tomalak Geret'kal wrote: > > Hi guys > > I'm wondering if anyone here is using the GlusterFS OCF resource > agents with Pacemaker on CentOS 7? > > yum install centos-release-gluster > yum install glusterfs-server
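A hedged sketch of what using those agents can look like once the packages are installed; the agent names come from the glusterfs resource-agents package and the volume name is a made-up placeholder, not the poster's setup:

  # glusterd management daemon, cloned across the storage nodes
  pcs resource create glusterd ocf:glusterfs:glusterd op monitor interval=30s clone
  # a gluster volume started/stopped by the cluster
  pcs resource create gv0 ocf:glusterfs:volume volname=gv0 op monitor interval=30s
  # the volume is only meaningful where glusterd runs
  pcs constraint order start glusterd-clone then gv0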