similar to: two 2-node clusters or one 4-node cluster?

Displaying 20 results from an estimated 9000 matches similar to: "two 2-node clusters or one 4-node cluster?"

2018 Jul 07
1
two 2-node clusters or one 4-node cluster?
On Thu, Jul 5, 2018 at 7:10 PM, Digimer <lists at alteeve.ca> wrote: First of all, thanks for all your answers, all useful in one way or another. I have yet to dig sufficiently deeply into Warren's considerations, but I will do it, I promise! Very interesting arguments. The concerns of Alexander are true in an ideal world, but when your role is to be an IT consultant and you are not responsible for
2017 Dec 07
4
GlusterFS, Pacemaker, OCF resource agents on CentOS 7
Hi guys, I'm wondering if anyone here is using the GlusterFS OCF resource agents with Pacemaker on CentOS 7? yum install centos-release-gluster yum install glusterfs-server glusterfs-resource-agents The reason I ask is that there seem to be a few problems with them on 3.10, but these problems are so severe that I'm struggling to believe I'm not just doing something wrong. I created
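For context, a minimal sketch of how the agents from glusterfs-resource-agents are typically wired into Pacemaker; the volume name gv0, the agent provider/parameter names, and the constraint are assumptions for illustration, not taken from the thread:

    # install the agents (as in the post above)
    yum install -y centos-release-gluster
    yum install -y glusterfs-server glusterfs-resource-agents

    # clone glusterd on every node, then monitor a volume with the
    # volume agent (volume name "gv0" and parameter "volname" assumed)
    pcs resource create glusterd ocf:glusterfs:glusterd --clone
    pcs resource create gv0 ocf:glusterfs:volume volname=gv0 --clone
    pcs constraint order glusterd-clone then gv0-clone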
2016 Nov 25
1
Pacemaker bugs?
Hi! I think I stumbled on at least two bugs in the CentOS 7.2 pacemaker package, though I'm not quite sure if or where to report them. I'm using the following package to set up a 2-node active/passive cluster: [root at clnode1 ~]# rpm -q pacemaker pacemaker-1.1.13-10.el7_2.4.x86_64 The installation is up-to-date on both nodes as of the
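For reference, a minimal sketch of standing up a 2-node cluster with pcs on CentOS 7; the node name clnode1 comes from the post, while clnode2, the cluster name, and the property values are assumptions:

    # authenticate the nodes and create the cluster (pcs 0.9 syntax)
    pcs cluster auth clnode1 clnode2 -u hacluster
    pcs cluster setup --name mycluster clnode1 clnode2
    pcs cluster start --all

    # common 2-node settings; disabling STONITH is only sensible for testing
    pcs property set no-quorum-policy=ignore
    pcs property set stonith-enabled=false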
2012 Dec 11
4
Configuring Xen + DRBD + Corosync + Pacemaker
Hi everyone, I need some help setting up my failover configuration. My goal is to have a redundant system using Xen + DRBD + Corosync + Pacemaker. On Xen I will have one virtual machine. When this computer loses its network, I will do a live migration to the second computer. The first thing I will need is a crossover cable, won't I? Is it really necessary? OK, I did it. eth0
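A minimal sketch of the DRBD resource that would run over such a dedicated crossover link; the resource name, devices, and 10.x addresses are placeholders, not the poster's values:

    # /etc/drbd.d/vm1.res (hypothetical resource backing the Xen VM)
    resource vm1 {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/vg0/vm1;               # backing LVM volume (assumed)
        meta-disk internal;
        on node1 { address 10.0.0.1:7788; }   # eth0 on the crossover cable
        on node2 { address 10.0.0.2:7788; }
    }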
2017 Dec 08
0
GlusterFS, Pacemaker, OCF resource agents on CentOS 7
Hi, Can you please explain what the Pacemaker cluster is used for here? Regards, Jiffin On Thursday 07 December 2017 06:59 PM, Tomalak Geret'kal wrote: > > Hi guys > > I'm wondering if anyone here is using the GlusterFS OCF resource > agents with Pacemaker on CentOS 7? > > yum install centos-release-gluster > yum install glusterfs-server
2017 Dec 20
2
glusterfs, ganesh, and pcs rules
Hi, I've just created the Gluster setup with NFS-Ganesha again, GlusterFS version 3.8. When I run the command gluster nfs-ganesha enable it returns success. However, looking at the pcs status, I see this: [root at tlxdmz-nfs1 ~]# pcs status Cluster name: ganesha-nfs Stack: corosync Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum Last updated: Wed Dec 20
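For what it's worth, a few commands commonly used to inspect the nfs-ganesha HA setup after enabling it (nothing here is specific to the failure shown in the pcs output):

    gluster nfs-ganesha enable      # as run in the post
    pcs status                      # overall node/resource state
    pcs status resources            # just the resources (VIPs, ganesha services)
    pcs constraint                  # the location/order rules created for ganesha
    showmount -e tlxdmz-nfs1        # confirm the NFS exports are actually served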
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi, We're setting up fairly large Lustre 2.1.2 filesystems, each with 18 nodes and 159 resources all in one Corosync/Pacemaker cluster, as suggested by our vendor. We're getting mixed messages between our vendor and others on how large a Corosync/Pacemaker cluster will work well. 1. Are there Lustre Corosync/Pacemaker clusters out there of this size or larger? 2.
2017 Dec 21
0
glusterfs, ganesh, and pcs rules
Hi, In your ganesha-ha.conf do you have your virtual IP addresses set something like this?: VIP_tlxdmz-nfs1="192.168.22.33" VIP_tlxdmz-nfs2="192.168.22.34" Renaud From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo Sent: 20 December 2017 04:35 To: gluster-users at gluster.org Subject: [Gluster-users]
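A quick way to cross-check that the VIP entries actually became Pacemaker resources; the config path is the usual one for Gluster 3.8 and the resource naming is an assumption that may differ by version:

    grep '^VIP_' /etc/ganesha/ganesha-ha.conf   # VIP_<node>="x.x.x.x" entries
    pcs status resources | grep cluster_ip      # IPaddr resources created by ganesha-ha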
2011 Nov 23
1
Corosync init-script broken on CentOS6
Hello all, I am trying to create a corosync/pacemaker cluster using CentOS 6.0. However, I'm having a great deal of difficulty doing so. Corosync has a valid configuration file and an authkey has been generated. When I run /etc/init.d/corosync I see that only corosync is started. From experience working with corosync/pacemaker before, I know that this is not enough to have a functioning
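For comparison, on CentOS 6 the corosync init script starts only corosync itself; Pacemaker is launched either as a corosync plugin or from its own init script. A minimal sketch of the plugin stanza plus the separate-daemon start order (the file name and ver value are the conventional ones, shown here as an assumption):

    # /etc/corosync/service.d/pcmk
    service {
        name: pacemaker
        ver:  1
    }

    # with ver: 1, pacemaker must be started separately after corosync
    /etc/init.d/corosync start
    /etc/init.d/pacemaker start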
2017 Dec 24
1
glusterfs, ganesh, and pcs rules
I checked, and I have it like this: # Name of the HA cluster created. # must be unique within the subnet HA_NAME="ganesha-nfs" # # The gluster server from which to mount the shared data volume. HA_VOL_SERVER="tlxdmz-nfs1" # # N.B. you may use short names or long names; you may not use IP addrs. # Once you select one, stay with it as it will be mildly unpleasant to # clean up
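For comparison, a hypothetical complete ganesha-ha.conf for the two nodes seen in this thread; HA_CLUSTER_NODES and the VIP addresses are illustrative (the VIPs repeat the example from the earlier reply), not taken from the poster's actual file:

    HA_NAME="ganesha-nfs"
    HA_VOL_SERVER="tlxdmz-nfs1"
    HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
    VIP_tlxdmz-nfs1="192.168.22.33"
    VIP_tlxdmz-nfs2="192.168.22.34"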
2010 Jul 19
2
redundant networked secure file system recommendation
Hi all, We are currently running an NFS-based, server-centric setup. I would like to set up something where I can easily have more than one redundant server, with security/authentication (this part seems a little flaky with NFS, at least it did several years ago), and with the capability to easily add/remove servers as necessary, take redundant servers down for maintenance, etc. Total volume we expect to run
2012 Aug 02
1
XEN HA Cluster with LVM fencing and live migration ? The right way ?
Hi, I am trying to build a rock-solid Xen high-availability cluster. The platform is SLES 11 SP1 running on 2 HP DL585s, both connected through Fibre Channel HBAs to the SAN (HP EVA). Xen is running smoothly and I'm even amazed by the live migration performance (this is the first time I have had the chance to try it in such a nice environment). Xen apart, the SLES heartbeat cluster is
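As a point of reference, a minimal crm-shell sketch of a live-migratable Xen guest as it is usually declared on SLES 11; the VM name, config path, and timeouts are assumptions:

    # ocf:heartbeat:Xen live-migrates when allow-migrate is set and both
    # hosts can reach the same shared storage (the SAN in this case)
    primitive vm1 ocf:heartbeat:Xen \
        params xmfile="/etc/xen/vm/vm1" \
        meta allow-migrate="true" \
        op monitor interval="30s" timeout="60s" \
        op migrate_to timeout="300s" \
        op migrate_from timeout="300s"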
2011 Apr 01
1
Node Recovery locks I/O in two-node OCFS2 cluster (DRBD 8.3.8 / Ubuntu 10.10)
I am running a two-node web cluster on OCFS2 via DRBD Primary/Primary (v8.3.8) and Pacemaker. Everything seems to be working great, except during testing of hard-boot scenarios. Whenever I hard-boot one of the nodes, the other node is successfully fenced and marked "Outdated" * <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown"
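The "Outdated" marking comes from DRBD's fence-peer handler. A minimal sketch of the fencing stanza typically paired with Pacemaker on DRBD 8.3; the handler paths are the ones shipped with the drbd packages, stated here as an assumption:

    # /etc/drbd.d/global_common.conf (relevant sections only)
    disk {
        fencing resource-and-stonith;   # dual-primary needs STONITH-backed fencing
    }
    handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }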
2011 Dec 20
1
OCFS2 problems when connectivity lost
Hello, We are having a problem with a 3-node cluster based on Pacemaker/Corosync with 2 primary DRBD+OCFS2 nodes and a quorum node. Nodes run on Debian Squeeze, all packages are from the stable branch except for Corosync (which is from backports for udpu functionality). Each node has a single network card. When the network is up, everything works without any problems, graceful shutdown of
2011 Jul 14
1
mount.ocfs2: Invalid argument while mounting /dev/mapper/xenconfig_part1 on /etc/xen/vm/. Check 'dmesg' for more information on this error.
Hello, this is my scenario: 1) I've created a Pacemaker cluster with the following OCFS2 packages on openSUSE 11.3 64-bit: ocfs2console-1.8.0-2.1.x86_64 ocfs2-tools-o2cb-1.8.0-2.1.x86_64 ocfs2-tools-1.8.0-2.1.x86_64 2) I've configured the cluster as usual: <resources> <clone id="dlm-clone"> <meta_attributes id="dlm-clone-meta_attributes">
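The usual stack under an OCFS2 mount on openSUSE/SLES is a cloned DLM resource plus a cloned o2cb resource, ordered and colocated. A minimal crm-shell sketch with IDs matching the dlm-clone shown above; everything else is an assumption:

    primitive dlm ocf:pacemaker:controld op monitor interval="60s"
    primitive o2cb ocf:ocfs2:o2cb op monitor interval="60s"
    clone dlm-clone dlm meta interleave="true"
    clone o2cb-clone o2cb meta interleave="true"
    order o2cb-after-dlm inf: dlm-clone o2cb-clone
    colocation o2cb-with-dlm inf: o2cb-clone dlm-clone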
2008 Mar 05
3
cluster with 2 nodes - heartbeat problem fencing
Hi all, this is my first time on this mailing list. I have a problem with OCFS2 on Debian Etch 4.0. When a node goes down or freezes without unmounting the OCFS2 partition, I would like heartbeat not to fence the server that is still working fine (it gets a kernel panic). I'd like to disable either heartbeat or fencing, so we can also work with only one node. Thanks
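Two approaches that are commonly suggested for this situation, sketched below; the threshold value is only an example, and converting to a local mount is sensible only if a single node will ever mount the volume:

    # 1) make o2cb self-fencing less aggressive by raising the heartbeat dead
    #    threshold in /etc/default/o2cb (Debian), then restart o2cb
    O2CB_HEARTBEAT_THRESHOLD=61

    # 2) or, if only one node will ever use the filesystem, convert it to a
    #    local (non-clustered) mount so no heartbeat/fencing is involved
    tunefs.ocfs2 -M local /dev/sdX1    # device name is a placeholder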
2008 Jul 21
5
OCFS processes active after a umount [SEC=UNOFFICIAL]
Hello, I have two OCFS2 file systems mounted at /ocfs_1 and /ocfs_2. I have unmounted both OCFS2 file systems and was then trying to offline and unload OCFS2. The offline command failed with - # ./o2cb offline Stopping O2CB cluster ocfs2: Failed Unable to stop cluster as heartbeat region still active Looking at the processes on this box shows that a number of OCFS2 processes are still active -
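For comparison, the usual shutdown order; the heartbeat region stays active while anything still references the mounts, so the unmounts have to complete first. Mount points are taken from the post, everything else is generic:

    fuser -vm /ocfs_1 /ocfs_2      # see what still holds the mounts open
    umount /ocfs_1
    umount /ocfs_2
    /etc/init.d/o2cb offline       # heartbeat region can stop once nothing is mounted
    /etc/init.d/o2cb unload        # then the modules can be unloaded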
2012 Jan 10
3
Clustering solutions - mail, www, storage.
Hi all. I am currently working for a hosting provider in an environment of 100+ Linux hosts. We have HA solutions for www and mail; for storage we mainly use NFS at the moment. We are also using DRBD, Heartbeat, and Corosync. I am now gathering info to build a cluster with: - two virtualization nodes (active master and passive slave); - two storage nodes (for VM files) used by the mentioned virtualization nodes
2006 Jul 10
1
2 Node cluster crashing
Hi, We have a two-node cluster running SLES 9 SP2, connected directly to an EMC CX300 for storage. We are using OCFS (OCFS2 DLM 0.99.15-SLES) for the voting disk etc., and ASM for data files. The system had been running until last Friday, when the whole cluster went down with the following error messages in the /var/log/messages files: rac1: Jul 7 14:56:23 rac1 kernel:
2013 Sep 19
1
Looking for Asterisk+Pacemaker+Corosync+DRBD example
I'm trying to set up a pair of FreePBX-4.211.64 boxes using Pacemaker, Corosync, and DRBD. All the examples I've found so far use Heartbeat, but Heartbeat is not in the repositories and won't compile from source. Does anyone have a working configuration they can share or a tutorial they can point me to? Also, what does drbdlinks bring to the party? Isn't just linking
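On the drbdlinks question: it simply replaces selected directories with symlinks into the DRBD mount point when the resource becomes active, so configuration and spool data follow the active node. A hypothetical /etc/drbdlinks.conf for an Asterisk pair (paths are illustrative):

    # /etc/drbdlinks.conf
    mountpoint('/drbd')
    link('/etc/asterisk')
    link('/var/lib/asterisk')
    link('/var/spool/asterisk')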