Displaying 20 results from an estimated 2000 matches similar to: "RAID1 over IP?"
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi,
We're setting up fairly large Lustre 2.1.2 filesystems, each with 18
nodes and 159 resources all in one Corosync/Pacemaker cluster, as
suggested by our vendor. We're getting mixed messages on how large a
Corosync/Pacemaker cluster will work well, between our vendor and others.
1. Are there Lustre Corosync/Pacemaker clusters out there of this
size or larger?
2.
2011 Nov 23
1
Corosync init-script broken on CentOS6
Hello all,
I am trying to create a corosync/pacemaker cluster using CentOS 6.0.
However, I'm having a great deal of difficulty doing so.
Corosync has a valid configuration file and an authkey has been generated.
When I run /etc/init.d/corosync I see that only corosync is started.
From experience working with corosync/pacemaker before, I know that
this is not enough to have a functioning
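For context, on EL6 the usual arrangement is to run pacemaker as its own daemon and tell corosync about it via a service.d snippet; a minimal sketch (the paths are the stock EL6 ones, the rest is illustrative):

    # /etc/corosync/service.d/pcmk -- run pacemaker as a separate daemon
    service {
        name: pacemaker
        ver: 1
    }

    # start corosync first, then pacemaker
    /etc/init.d/corosync start
    /etc/init.d/pacemaker start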
2012 Dec 11
4
Configuring Xen + DRBD + Corosync + Pacemaker
Hi everyone,
I need some help setting up my failover configuration.
My goal is to have a redundant system using Xen + DRBD + Corosync +
Pacemaker.
On Xen I will have one virtual machine. When this computer loses its
network, I will do a live migration to the second computer.
The first thing I will need is a crossover cable, won't I? Is it
really necessary? Ok, I did it. eth0
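For the replication link itself, a minimal DRBD 8.x resource sketch over a back-to-back cable; the hostnames, device paths and the 10.0.0.x addresses are illustrative assumptions, not taken from the thread:

    # /etc/drbd.d/vm0.res -- replication over the crossover link
    resource vm0 {
        device    /dev/drbd0;
        disk      /dev/vg0/vm0;
        meta-disk internal;
        on node1 { address 10.0.0.1:7788; }
        on node2 { address 10.0.0.2:7788; }
    }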
2011 Sep 29
1
CentOS 6: corosync and pacemaker won't stop (patch)
Hi,
I cannot 'halt' my CentOS 6 servers while running corosync+pacemaker.
I believe the runlevel scripts stop corosync and pacemaker in the wrong
order, which creates the infinite "Waiting for corosync services to
unload..." loop.
This is my first time with this cluster technology but apparently pacemaker
has to be stopped /before/ corosync.
Applying the following
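The underlying point is the shutdown order; a minimal sketch of the manual workaround on EL6 (this is not the poster's patch, just the order it enforces):

    # pacemaker has to come down before corosync, otherwise corosync hangs
    # in "Waiting for corosync services to unload..."
    service pacemaker stop
    service corosync stop
    # the K-numbers in /etc/rc0.d decide stop order; pacemaker's should be lower
    ls /etc/rc0.d | grep -E 'corosync|pacemaker'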
2011 Jan 19
8
Xen on two node DRBD cluster with Pacemaker
Hi all,
could somebody point me to what is considered a sound way to offer Xen guests
on a two node DRBD cluster in combination with Pacemaker? I prefer block
devices over images for the DomUs. I understand that for live migration DRBD
8.3 is needed, but I'm not sure as to what kind of resource
agents/technologies are advised (LVM,cLVM, ...) and what kind of DRBD config
2016 Nov 25
1
Pacemaker bugs?
Hi!
I think I stumbled on at least two bugs in the CentOS 7.2 pacemaker package,
though I'm not quite sure if or where to report it.
I'm using the following package to set up a 2-node active/passive cluster:
[root@clnode1 ~]# rpm -q pacemaker
pacemaker-1.1.13-10.el7_2.4.x86_64
The installation is up-to-date on both nodes as of the
2020 Jan 10
2
Dovecot HA/Resilience
Also, you should probably use dovecot director to ensure the same user's sessions end up on the same server, as accessing the same user on different backends is not supported in this scenario.
Aki
> On 10/01/2020 19:49 Adrian Minta <adrian.minta at gmail.com> wrote:
>
>
>
> Hello,
>
> you need to "clone" the first server, change the ip address, mount the same
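The director settings Aki refers to live in the director configuration; a minimal sketch with placeholder addresses (a full setup also needs the director service listeners and backend proxying configured):

    # conf.d/10-director.conf -- example addresses only
    director_servers = 192.0.2.11 192.0.2.12        # the director ring
    director_mail_servers = 192.0.2.21 192.0.2.22   # the dovecot backends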
2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
Hi hvjunk,
could you please tell me have you had time to check my previous post?
Could you please send me mentioned link to your Gluster Ansible scripts?
Thank you,
Adam
On Sun, May 28, 2017 at 2:47 PM, Adam Ru <ad.ruckel at gmail.com> wrote:
> Hi hvjunk (Hi Hendrik),
>
> "centos-release-gluster" installs "centos-gluster310". I assume it
> picks the
2011 May 10
3
DRBD, Xen, HVM and live migration
Hi,
I want to combine all the above mentioned technologies.
The Linbit pages warn not to use the drbd: VBD with HVM DomUs.
This page however:
http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en
(thank you Jean), simply puts two DRBD devices in dual primary mode and
starts Xen DomUs while pointing to the DRBD devices with phy: in the
DomU config files.
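In other words, dual-primary DRBD plus a phy: disk line in the DomU config; a minimal sketch with illustrative device names:

    # in the DRBD resource's net section: both nodes may be primary,
    # which is what Xen live migration needs
    net {
        allow-two-primaries;
    }

    # DomU config file: hand the guest the DRBD device directly
    disk = [ 'phy:/dev/drbd0,xvda,w' ]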
2018 Mar 07
4
Kernel NFS on GlusterFS
Hello,
I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I landed
on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite
well. Are
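A minimal sketch of the pacemaker side of such a setup, grouping a floating IP with the kernel NFS server (the names and the address are made up for illustration):

    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.0.2.50 cidr_netmask=24
    pcs resource create nfs_daemon systemd:nfs-server
    # keep the VIP and nfsd together and start them in that order
    pcs resource group add nfs_group nfs_vip nfs_daemon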
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings!
I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it
using the standard defaults over TCP/IP. Everything worked very
nicely using a real, static --mgsnode=a.b.c.x value which was the
actual IP of the MGS/MDS system1 node.
I am now trying to integrate it with Pacemaker-1.1.7. I believe I
have most of the set-up completed with a particular exception. The
"lctl
2012 Aug 15
1
ocfs2_controld binary
I have been reading loads of threads on different mailing lists about ocfs2_controld, so has anyone ever built the cluster stack (openAIS, pacemaker, corosync + OCFS2 1.4) from source and got the o2cb agent working with pacemaker?
Got this from messages:
/var/log/messages:Aug 14 15:05:20 ip-172-16-2-12 o2cb(resO2CB:0)[4239]: ERROR: Setup problem: couldn't find command:
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
2020 Jan 10
3
Dovecot HA/Resilience
Thank you all for the replies....
I have a test environment with the same configuration, but I have been
asked to go with the same setup for HA/resilience in the live environment.
Yes, I have only one live server. It is configured in "Maildir" format. The
data is stored on network/shared storage (definitely not a local disk;
it's a mount point).
I have been asked to create a HA/Resilience for
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc.
https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master <https://bitbucket.org/dismyne/gluster-ansibles/src/6df23803df43/ansible/files/?at=master>
The .service files are what goes into systemd, and they call the test-mounts.sh scripts.
The playbook installing higher up in the directory
> On 05 Jun 2017,
2013 Sep 19
1
Looking for Asterisk+Pacemaker+Corosync+DRBD example
I'm trying to set up a pair of FreePBX-4.211.64 boxes using Pacemaker,
Corosync, and DRBD.
All the examples I've found so far use Heartbeat, but Heartbeat is not in
the repositories and doesn't want to compile from source.
Does anyone have a working configuration they can share or a tutorial they
can point me to?
Also, what does drbdlinks bring to the party? Isn't just linking
2020 Jan 11
1
Dovecot HA/Resilience
If you just want active/standby, you can simply use corosync/pacemaker as others already suggested, and don't use Director.
I have a dovecot HA server that uses a floating IP and pacemaker to manage it, and it works quite well.
The only real hard part is having HA storage.
You can simply use NFS storage shared by both servers (as long as only one has the floating IP, you won't have issues with the
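A minimal sketch of that floating-IP arrangement with pcs; the address and resource names are made up for illustration:

    pcs resource create mail_vip ocf:heartbeat:IPaddr2 ip=192.0.2.25 cidr_netmask=24
    pcs resource create dovecot systemd:dovecot
    # run dovecot on whichever node currently holds the floating IP
    pcs constraint colocation add dovecot with mail_vip INFINITY
    pcs constraint order mail_vip then dovecot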
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
----- Original Message -----
From: "hvjunk" <hvjunk at gmail.com>
To: "Adam Ru" <ad.ruckel at gmail.com>
Cc: gluster-users at gluster.org
Sent: Monday, June 5, 2017 9:29:03 PM
Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot
Sorry, got sidetracked with invoicing etc.
2016 Jun 22
8
KVM HA
Hi,
I have two KVM hosts (CentOS 7) and would like them to operate as High Availability servers,
automatically migrating guests when one of the hosts goes down.
My question is: Is this even possible? All the documentation for HA that I've found appears to not
do this. Am I missing something?
My configuration so far includes:
* SAN Storage Volumes for raw device mappings for guest vms
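It can be done with pacemaker's VirtualDomain resource agent, which supports live migration; a minimal sketch (the VM name, XML path and transport are assumptions):

    pcs resource create vm1 ocf:heartbeat:VirtualDomain \
        config=/etc/pacemaker/vm1.xml hypervisor="qemu:///system" \
        migration_transport=ssh meta allow-migrate=true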
2017 May 29
1
Floating IPv6 in a cluster (as NFS-Ganesha VIP)
Hi all,
I love this project, Gluster and Ganesha are amazing. Thank you for this
great work!
The only thing that I miss is IPv6 support. I know that there are some
challenges and that's OK. For me it's not important whether Gluster servers
use IPv4 or IPv6 to speak to each other and replicate data.
The only thing that I'd like to have is a floating IPv6 for clients when I
use Ganesha (just IPv6,
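For the floating address itself, recent resource-agents let IPaddr2 carry an IPv6 address (older versions need ocf:heartbeat:IPv6addr); a minimal sketch using a documentation prefix, not an address from the post:

    pcs resource create ganesha_vip6 ocf:heartbeat:IPaddr2 \
        ip=2001:db8::10 cidr_netmask=64 nic=eth0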