similar to: Status of STONITH support in the puppetlabs corosync module?

Displaying 20 results from an estimated 300 matches similar to: "Status of STONITH support in the puppetlabs corosync module?"

2017 Feb 05
2
NUT configuration complicated by Stonith/Fencing cabling
Hello List, Any suggestions to solve the following would be most appreciated.
Setup: Active/Passive two-node cluster. Two UPSes (APC Smart-UPS 1500 C) with USB communication cables cross-connected (i.e. UPS-webserver1 monitored by webserver2, and vice versa) to allow for stonith/fencing.
OS: openSUSE Leap 42.2
NUT version: 2.7.1-2.41-x86_64
Fencing agent: external/nut
Problem: When power fails to a
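For readers reconstructing this cross-cabled layout, here is a minimal sketch of the NUT side, with all device names and credentials invented for illustration: each node's upsd serves the peer's UPS over USB, and each node monitors its own UPS through the peer.

    # /etc/ups/ups.conf on webserver2 -- UPS-webserver1 (the UPS powering
    # webserver1) is USB-cabled to webserver2; names are placeholders
    [ups-webserver1]
        driver = usbhid-ups
        port = auto

    # /etc/ups/upsmon.conf on webserver1 -- watch our own UPS via the
    # peer's upsd; 'monuser'/'secret' are placeholder credentials
    MONITOR ups-webserver1@webserver2 1 monuser secret master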
2017 Feb 10
2
NUT configuration complicated by Stonith/Fencing cabling
Roger, Thanks for your reply. As I understand it, for reliable fencing a node cannot be responsible for fencing itself, as it may not be functioning properly. Hence my "cross over" setup. The direct USB connection from Webserver1 to UPS-Webserver2 means that Webserver1 can fence (cut the power to) Webserver2 if the cluster software decides that it is necessary. If my UPSes were able to
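To make the fencing direction concrete: in this layout, Pacemaker's stonith resource for webserver2 talks to the upsd on webserver1, which has USB control of UPS-webserver2. A hedged crm sketch; the external/nut parameter names here are assumptions and should be checked against the agent's metadata (e.g. stonith -t external/nut -n):

    crm(live)configure# primitive st-webserver2 stonith:external/nut \
        params hostname=webserver2 ups=ups-webserver2@webserver1 \
        op monitor interval=3600s
    crm(live)configure# location l-st-webserver2 st-webserver2 -inf: webserver2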
2017 Feb 13
2
NUT configuration complicated by Stonith/Fencing cabling
On 02/13/2017 04:39 PM, Charles Lepple wrote: > On Feb 13, 2017, at 8:08 AM, Tim Richards <tims_tank at hotmail.com> wrote: >> Feb 13 23:11:42 systemd[1] Starting LSB: UPS monitoring software (deprecated, remote/local)... >> Feb 13 23:11:43 usbhid-ups[2093] Startup successful >> Feb 13 23:11:43 upsd[1932] Starting NUT UPS drivers ..done >> Feb 13 23:11:43 upsd[21
2011 Nov 23
1
Corosync init-script broken on CentOS6
Hello all, I am trying to create a corosync/pacemaker cluster using CentOS 6.0. However, I'm having a great deal of difficulty doing so. Corosync has a valid configuration file and an authkey has been generated. When I run /etc/init.d/corosync I see that only corosync is started. From experience working with corosync/pacemaker before, I know that this is not enough to have a functioning
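For context: with corosync 1.x (the CentOS 6 era), the corosync init script starts only corosync itself; Pacemaker either runs as a corosync plugin declared in a service stanza or is started by its own init script. A minimal sketch of the plugin stanza, as commonly documented for that generation (not taken from this thread):

    # /etc/corosync/service.d/pcmk (or inside corosync.conf)
    service {
        # ver: 0 makes corosync spawn Pacemaker itself;
        # ver: 1 expects pacemakerd to be started separately
        name: pacemaker
        ver: 0
    }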
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi, We're setting up fairly large Lustre 2.1.2 filesystems, each with 18 nodes and 159 resources, all in one Corosync/Pacemaker cluster as suggested by our vendor. We're getting mixed messages from our vendor and others on how large a Corosync/Pacemaker cluster will work well. 1. Are there Lustre Corosync/Pacemaker clusters out there of this size or larger? 2.
2017 Feb 10
0
NUT configuration complicated by Stonith/Fencing cabling
On Sun, 5 Feb 2017, Tim Richards wrote: > Setup: Active/Passive Two Node Cluster. Two UPSes (APC Smart-UPS 1500 C) > with USB communication cables cross connected (ie UPS-webserver1 > monitored by webserver2, and vice versa) to allow for stonith/fencing > > OS OpenSuse Leap 42.2 > NUT version 2.7.1-2.41-x86_64 > Fencing agent: external/nut > > Problem: When power
2017 Feb 12
0
NUT configuration complicated by Stonith/Fencing cabling
On Feb 10, 2017, at 5:48 PM, Tim Richards <tims_tank at hotmail.com> wrote: > > I am trying to kill two birds with one stone, that is UPS protection from power failure and cluster node fencing (Stonith) with the UPS ability to cut power to a node. Somebody has done this, as there exists a fencing agent using NUT in the Pacemaker/Corosync (Linux-HA cluster software), I just don't
2012 Jun 23
0
puppetlabs-corosync help using multiple primitive operations
I am setting up an HA iSCSI/NFS target using this document, http://www.linbit.com/fileadmin/tech-guides/ha-iscsi.pdf, and I am unable to find a way to use the puppetlabs-corosync module to emulate this command:
crm(live)configure# primitive p_drbd_coraid23 ocf:linbit:drbd \
    params drbd_resource=coraid23 \
    op monitor interval=29 role=Master \
    op monitor interval=31 role=Slave
crm(live)configure#
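For comparison, a hedged sketch of how this resource might be expressed with the module's cs_primitive type. Support for two monitor operations on one primitive varies by module version (newer releases accept an array of operation hashes), so treat the operations form below as an assumption to verify against your installed version:

    cs_primitive { 'p_drbd_coraid23':
      primitive_class => 'ocf',
      provided_by     => 'linbit',
      primitive_type  => 'drbd',
      parameters      => { 'drbd_resource' => 'coraid23' },
      operations      => [
        { 'monitor' => { 'interval' => '29', 'role' => 'Master' } },
        { 'monitor' => { 'interval' => '31', 'role' => 'Slave' } },
      ],
    }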
2017 Feb 14
0
NUT configuration complicated by Stonith/Fencing cabling
Charles and Manuel, Thanks for the help. Charles's pointer to the IP address that was 'not listening' gave me the hint. I had assigned the listening interface to the crossover-cable link between the two nodes. I changed it to the switch-connected network cards and bingo. The lack of power in the other node was telling the surviving node on reboot that the
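The fix described here boils down to pointing upsd's LISTEN directive at an address the peer can actually reach. A minimal sketch with placeholder addresses:

    # /etc/ups/upsd.conf -- bind to the switch-facing address
    # (placeholder shown), not the crossover link, which is down
    # whenever the peer node is powered off
    LISTEN 192.0.2.10 3493
    LISTEN 127.0.0.1 3493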
2017 Feb 13
0
NUT configuration complicated by Stonith/Fencing cabling
On Feb 13, 2017, at 8:08 AM, Tim Richards <tims_tank at hotmail.com> wrote: > > Feb 13 23:11:42 systemd[1] Starting LSB: UPS monitoring software (deprecated, remote/local)... > Feb 13 23:11:43 usbhid-ups[2093] Startup successful > Feb 13 23:11:43 upsd[1932] Starting NUT UPS drivers ..done > Feb 13 23:11:43 upsd[2104] not listening on 192.168.1.22 port 3493 > Feb 13
2017 Feb 13
2
NUT configuration complicated by Stonith/Fencing cabling
Charles, Thanks for your reply. Indeed you may be right that the NUT fencing agent was written with networked UPSes in mind, as healthy nodes could then use the network to issue "fence" orders to remove unhealthy ones. I will post here if I find more info. The problem with restoring services is that NUT doesn't restart on the node that comes back up. To recap, I pull the
2011 Mar 30
7
XCP XenAPI fencing script (clustering support)
Hi, I think this would be the best forum, let me know if not. I am in the middle of writing fencing scripts for Citrix XenServer virtual machines (specifically for use with Redhat Clustering, but will also work from pacemaker etc) and I noticed that XCP uses the same XenAPI (from what I can tell). Just wondering if someone would be able to test the scripts on XCP and let me know if they work.
2012 Mar 05
12
Cluster xen
Hello, I would like to set up a cluster under Xen or XenServer with two Dell R710 servers. I would like to be able to build the cluster using the combined disk space of the two servers as well as their memory. What are your experiences and configurations? Thanks in advance. Regards, Mat
2016 Apr 22
1
Storage cluster advise, anybody?
On Fri, 22 Apr 2016, Digimer wrote: > Then you would use pacemaker to manage the floating IP, fence > (stonith) a lost node, and promote drbd->mount FS->start nfsd->start > floating IP. My favorite acronym: stonith -- shoot the other node in the head. -- Paul Heinlein heinlein at madboa.com 45°38' N, 122°6' W
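A hedged crm-shell sketch of the chain Digimer describes, with all resource names invented and the underlying primitives (ocf:heartbeat:Filesystem, nfsserver, IPaddr2, ocf:linbit:drbd) assumed rather than shown:

    crm(live)configure# ms ms_drbd p_drbd \
        meta master-max=1 clone-max=2 notify=true
    crm(live)configure# group g_nfs p_fs p_nfsd p_float_ip
    crm(live)configure# colocation c_nfs_on_master inf: g_nfs ms_drbd:Master
    crm(live)configure# order o_promote_then_nfs inf: ms_drbd:promote g_nfs:start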
2005 Aug 08
1
Missing dependencies for HA
CentOS 4.1 and Heartbeat 2.0.0. I'm trying to install the RPMs for heartbeat and heartbeat-stonith and get these failed dependencies:
error: Failed dependencies:
    libcrypto.so.0.9.7 is needed by heartbeat-2.0.0-1.i586
    libnet.so.0 is needed by heartbeat-2.0.0-1.i586
    librpm-4.1.so is needed by heartbeat-2.0.0-1.i586
    librpmdb-4.1.so is needed by
2007 Jun 28
1
Heartbeat for Centos 5- Can't build RPMS or install prebuilt RPMS
I am stuck. This is an x86_64 platform. In the extras repo there are the SRPMs for heartbeat along with the RPMs, and I have downloaded both. But I can't build the RPMs from the SRPM, as it fails compiling something in BUILD/heartbeat-2.0.8/lib/crm/pengine. Additionally, I can't install the RPMs:
rpm -Uvh heartbeat-2.0.8-3.el5.centos.i386.rpm heartbeat-2.0.8-3.el5.centos.x86_64.rpm
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all, sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives, (2) are rack mountable, (3) have all the nice hot-swap stuff, and (4) allow 2 hosts to connect via SAS (4+ lanes per host) and see all available drives as disks, no RAID volume. In a
2014 Jun 15
1
Question about clustering
Hi list, I'm new to clustering, and I'm running a little cluster at home. The cluster runs on workstation hardware under CentOS 6.5. Components: corosync, pacemaker, drbd and pcs. All works well. This cluster has these resources:
1) drbd0
2) drbd1
3) drbd0_fs
4) drbd1_fs
5) pgsql
6) smb + nmb
7) libvirt (lsb)
8) libvirt_guests (lsb)
I have this constraint
2011 May 10
3
DRBD, Xen, HVM and live migration
Hi, I want to combine all the above mentioned technologies. The Linbit pages warn not to use the drbd: VBD with HVM DomUs. This page however: http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en (thank you Jean), simply puts two DRBD devices in dual primary mode and starts Xen DomUs while pointing to the DRBD devices with phy: in the DomU config files.
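To anchor what "dual primary mode" plus "phy:" means in config terms, a minimal sketch under DRBD 8.3 syntax, with the resource and device names invented and the per-host sections abbreviated:

    # /etc/drbd.d/vm1.res -- allow both nodes to be Primary at once
    resource vm1 {
        net { allow-two-primaries; }
        # on <host> { device /dev/drbd0; disk ...; address ...; } sections omitted
    }

    # Xen DomU config: hand the DRBD device to the guest directly
    disk = [ 'phy:/dev/drbd0,xvda,w' ]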
2011 Jan 19
8
Xen on two node DRBD cluster with Pacemaker
Hi all, could somebody point me to what is considered a sound way to offer Xen guests on a two node DRBD cluster in combination with Pacemaker? I prefer block devices over images for the DomUs. I understand that for live migration DRBD 8.3 is needed, but I'm not sure as to what kind of resource agents/technologies are advised (LVM, cLVM, ...) and what kind of DRBD config