Displaying 20 results from an estimated 8000 matches similar to: "2 hosts, mirrored VM storage, avoiding conflicting VM instances"
2016 Apr 22
1
Storage cluster advise, anybody?
On Fri, 22 Apr 2016, Digimer wrote:
> Then you would use pacemaker to manage the floating IP, fence
> (stonith) a lost node, and promote drbd->mount FS->start nfsd->start
> floating IP.
My favorite acronym: stonith -- shoot the other node in the head.
--
Paul Heinlein
heinlein at madboa.com
45°38' N, 122°6' W
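The failover order Digimer describes (promote DRBD, mount the filesystem, start nfsd, then bring up the floating IP) maps directly onto pacemaker ordering and colocation constraints. A minimal sketch using recent pcs syntax (older pcs releases use "master" resources instead of promotable clones); all resource names, device paths, and the IP are placeholders:

```shell
# Hypothetical names; adjust the DRBD resource, paths, and IP to your setup.
pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
    promotable notify=true
pcs resource create fs_nfs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/srv/nfs fstype=ext4
pcs resource create nfsd systemd:nfs-server
pcs resource create float_ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24

# A group starts its members in the listed order: FS -> nfsd -> IP.
pcs resource group add nfs_grp fs_nfs nfsd float_ip
# The group may only start after DRBD is promoted, on the promoted node.
pcs constraint order promote drbd_r0-clone then start nfs_grp
pcs constraint colocation add nfs_grp with Promoted drbd_r0-clone
```

With stonith enabled, a node that stops responding is fenced before the survivor promotes DRBD, which is what prevents split-brain in this design.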
2009 Sep 29
0
RAID + DRBD + iSCSI + Multipath
Hello,
We've been using Xen on several servers with direct-attached storage for
a number of years. Now we're looking to buy more Xen servers and
re-purpose two of the older ones as SANs to hold the domU filesystems,
in order to achieve redundancy and live migration ability. I'm looking
for comments on our proposed design.
SAN 1 - existing server w/ 4 GB, 2x
2009 Jul 28
2
DRBD on a xen host: crash on high I/O
Hello,
I have a couple of Dell 2950 III, both of them with CentOS 5.3, Xen,
drbd 8.2 and cluster suite.
Hardware: 32 GB RAM, RAID 5 with 6 SAS disks (one hot spare) on a PERC/6
controller.
I configured DRBD to use the main network interfaces (bnx2 driver), with
bonding and crossover cables to have a direct link.
The normal network traffic uses two different network cards.
There are two DRBD
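A DRBD resource pinned to a dedicated bonded crossover link, as described above, typically looks like the following drbd.conf fragment (8.x-era syntax; hostnames, devices, and addresses are placeholders for illustration):

```
resource r0 {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;   # IP bound to bond0 on the crossover link
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Keeping replication on its own bonded link, separate from client traffic, avoids exactly the I/O contention this thread goes on to discuss.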
2016 Apr 22
0
Storage cluster advise, anybody?
On 22/04/16 03:18 PM, Valeri Galtsev wrote:
> Dear Experts,
>
> I would like to ask everybody: what would you advise to use as a storage
> cluster, or as a distributed filesystem.
>
> I made my own research of what I can do, but I hit a snag with my
> seemingly best choice, so I decided to stay away from it finally, and ask
> clever people what they would use.
>
>
2014 Jun 15
1
Question about clustering
Hi list,
I'm new to clustering, and I'm running a little cluster at home. The
cluster runs on workstation hardware with CentOS 6.5.
Components: corosync, pacemaker, drbd and pcs. All works well.
This cluster has different resources:
1) drbd0
2) drbd1
3) drbd0_fs
4) drbd1_fs
5) pgsql
6) smb + nmb
7) libvirt (lbs)
8) libvirt_guests (lsb)
I've this constraint
2006 Feb 08
1
Heartbeat and mount --bind for NFS v4.
Hi all.
This is probably more of a HA list, or possibly even linux-practices,
question but all hosts concerned are running CentOS and I reckon some
of you guys might have some good suggestions. Feel free to tell me to
piss off. :)
I'm building a new CentOS, DRBD, Heartbeat and NFS HA cluster. We
already have boxes running similar setups on FC2/3 running NFS v2/3
with the Ultra Monkey
2012 Aug 06
0
Problem with mdadm + lvm + drbd + ocfs ( sounds obvious, eh ? :) )
Hi there
First of all, apologies for the lengthy message, but it's been a long weekend.
I'm trying to setup a two node cluster with the following configuration:
OS: Debian 6.0 amd64
ocfs: 1.4.4-3 ( debian package )
drbd: 8.3.7-2.1
lvm2: 2.02.66-5
kernel: 2.6.32-45
mdadm: 3.1.4-1+8efb9d1+squeeze1
layout:
0- 2 36GB SCSI disks in a RAID1 array, with mdadm.
1- 1 lvm2 VG above the RAID1,
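The layered stack this layout describes (mdadm RAID1, then LVM, then DRBD, then OCFS2) can be sketched roughly as follows; all device and resource names are invented for illustration, and each step must be done on both nodes before DRBD connects:

```shell
# Mirror two disks with mdadm, then put an LVM volume group on top.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg_cluster /dev/md0
lvcreate -L 20G -n lv_drbd vg_cluster

# DRBD sits on the logical volume (resource "r0" defined in drbd.conf
# with disk /dev/vg_cluster/lv_drbd).
drbdadm create-md r0 && drbdadm up r0

# Once both nodes are connected and dual-primary is allowed, OCFS2 is
# created on the DRBD device (once, from one node only):
mkfs.ocfs2 -N 2 /dev/drbd0
```

Note that OCFS2 on DRBD requires dual-primary mode and a working cluster stack underneath, which is where setups like this usually run into trouble.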
2016 Apr 22
1
Storage cluster advise, anybody?
Hi Valeri
On Fri, Apr 22, 2016 at 10:24 PM, Digimer <lists at alteeve.ca> wrote:
> On 22/04/16 03:18 PM, Valeri Galtsev wrote:
>> Dear Experts,
>>
>> I would like to ask everybody: what would you advise to use as a storage
>> cluster, or as a distributed filesystem.
>>
>> I made my own research of what I can do, but I hit a snag with my
>>
2005 Nov 30
0
CEEA:2005-1130-2 CentOS 4 x86_64 drbd / heartbeat - enhancement update (Extras Only)
CentOS Errata and Enhancement Advisory 2005:1130-2
CentOS 4 x86_64 drbd / heartbeat - Enhancement Update (Extras Only)
We are pleased to add drbd (with integrated heartbeat) to the CentOS
extras repository.
DRBD is a block device which is designed to build high availability
clusters. This is done by mirroring a whole block device via (a
dedicated) network. You could see it as a network raid-1.
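The "network RAID-1" behaviour is easy to observe once a resource is up; the kernel exposes the replication state directly (resource name "r0" is a placeholder):

```shell
# cs: connection state, ro: roles, ds: disk states on each side.
cat /proc/drbd
# e.g. cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate

drbdadm state r0      # roles for one resource
drbdadm dstate r0     # disk state (UpToDate/UpToDate when in sync)
```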
2005 Nov 30
0
CEEA:2005-1130-2 CentOS 4 i386 drbd / heartbeat - enhancement update (Extras Only)
CentOS Errata and Enhancement Advisory 2005:1130-2
CentOS 4 i386 drbd / heartbeat - Enhancement Update (Extras Only)
We are pleased to add drbd (with integrated heartbeat) to the CentOS
extras repository.
DRBD is a block device which is designed to build high availability
clusters. This is done by mirroring a whole block device via (a
dedicated) network. You could see it as a network raid-1.
2005 Nov 30
0
CentOS-announce Digest, Vol 9, Issue 16
Send CentOS-announce mailing list submissions to
centos-announce at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-request at centos.org
You can reach the person managing the list at
centos-announce-owner at centos.org
When
2013 Aug 09
0
Hyper-V driver API version support
Hello
The "version" function is not supported by the hyperv driver:
$ virsh --connect=hyperv://hypervhost version
Compiled against library: libvirt 1.1.1
Using library: libvirt 1.1.1
Using API: Hyper-V 1.1.1
error: failed to get the hypervisor version
error: this function is not supported by the connection driver:
virConnectGetVersion
But we need this function for the
2017 Feb 05
2
NUT configuration complicated by Stonith/Fencing cabling
Hello List,
Any suggestions to solve the following would be most appreciated.
Setup: Active/Passive Two Node Cluster. Two UPSes (APC Smart-UPS 1500 C) with USB communication cables cross connected (ie UPS-webserver1 monitored by webserver2, and vice versa) to allow for stonith/fencing
OS OpenSuse Leap 42.2
NUT version 2.7.1-2.41-x86_64
Fencing agent: external/nut
Problem: When power fails to a
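In a cross-connected setup like this, each node's NUT instance monitors the *other* node's UPS over the local USB cable. A sketch of the relevant NUT configuration on webserver1 (names and the password are placeholders; NUT 2.7 uses master/slave wording in upsmon.conf):

```
# /etc/ups/ups.conf on webserver1 -- the locally attached UPS feeds webserver2
[ups-webserver2]
    driver = usbhid-ups
    port = auto

# /etc/ups/upsd.users on webserver1
[monuser]
    password = secret
    upsmon master

# /etc/ups/upsmon.conf on webserver1
MONITOR ups-webserver2@localhost 1 monuser secret master
```

The external/nut fencing agent then tells the local upsd to cut power to the peer's UPS, which is why each node must be able to reach the UPS feeding the *other* node, not its own.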
2017 Feb 10
2
NUT configuration complicated by Stonith/Fencing cabling
Roger,
Thanks for your reply.
As I understand it, for reliable fencing a node cannot be responsible for fencing itself, as it may not be functioning properly. Hence my "cross over" setup. The direct USB connection from Webserver1 to UPS-Webserver2 means that Webserver1 can fence (cut the power to) Webserver2 if the cluster software decides that it is necessary. If my UPSes were able to
2012 Nov 26
2
Status of STONITH support in the puppetlabs corosync module?
Greetings -
Hoping to hear from hunner or one of the other maintainers of the
puppetlabs corosync module - there is a note on the git project page that
there is currently no way to configure STONITH. Is this information
current?
If so, has anybody come up with a simple method of managing STONITH with
corosync via puppet?
--
You received this message because you are subscribed to the
2011 May 31
1
How do I diagnose what's going wrong with a Gluster NFS mount?
Hi,
Has anyone even seen this before - an NFS mount through Gluster that gets
the filesystem size wrong and is otherwise garbled and dangerous?
Is there a way within Gluster to fix it, or is the lesson that Gluster's NFS
sometimes can't be relied on? What have the experiences been running an
external NFS daemon with Gluster? Is that fairly straightforward? Might like
to get the
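When a Gluster NFS mount reports a wrong filesystem size or looks garbled, a few standard checks narrow it down. Gluster's built-in NFS server speaks NFSv3 only, so a version or transport mismatch is a common culprit; "myvol" and the paths below are placeholders:

```shell
# Is the volume healthy, and is its NFS export actually online?
gluster volume status myvol
gluster volume info myvol

# Remount forcing NFSv3 over TCP, which gluster-nfs requires:
mount -t nfs -o vers=3,tcp,mountproto=tcp server:/myvol /mnt/gluster

# Sanity-check what the client now sees:
df -h /mnt/gluster
```

If the size is still wrong after a clean v3 mount, comparing `df` on the bricks against the client view helps show whether the volume itself or the NFS translator is misreporting.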
2017 Feb 10
0
NUT configuration complicated by Stonith/Fencing cabling
On Sun, 5 Feb 2017, Tim Richards wrote:
> Setup: Active/Passive Two Node Cluster. Two UPSes (APC Smart-UPS 1500 C)
> with USB communication cables cross connected (ie UPS-webserver1
> monitored by webserver2, and vice versa) to allow for stonith/fencing
>
> OS OpenSuse Leap 42.2
> NUT version 2.7.1-2.41-x86_64
> Fencing agent: external/nut
>
> Problem: When power
2011 Feb 27
1
Recover botched drdb gfs2 setup .
Hi.
The short story... Rush job, never done clustered file systems before,
vlan didn't support multicast. Thus I ended up with drbd working ok
between the two servers but cman / gfs2 not working, resulting in what
was meant to be a drbd primary/primary cluster being a primary/secondary
cluster until the vlan could be fixed with gfs only mounted on the one
server. I got the single server
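Recovering from this state, once the VLAN passes multicast and cman/gfs2 are healthy, is mostly a matter of enabling dual-primary and promoting the second node. A hedged sketch for drbd 8.3-era tooling (resource name "r0" and the mount point are placeholders):

```shell
# In drbd.conf, net section, on both nodes:
#     allow-two-primaries;
# then apply the changed configuration without downtime:
drbdadm adjust r0

# On the node still in Secondary role (requires a clean, connected,
# UpToDate resource -- check /proc/drbd first):
drbdadm primary r0

# With cman quorate and fencing working, GFS2 can now be mounted on
# both nodes simultaneously:
mount -t gfs2 /dev/drbd0 /mnt/shared
```

The critical precondition is working fencing: dual-primary DRBD under GFS2 without stonith risks exactly the split-brain this thread is trying to avoid.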
2017 Feb 13
2
NUT configuration complicated by Stonith/Fencing cabling
Charles,
Thanks for your reply. Indeed you may be right that the NUT fencing agent might be written with networked UPSes in mind, as healthy nodes could use the network to issue "fence" orders to remove unhealthy ones. I will post here if I find more info.
The problem with the resupply of services is that NUT doesn't restart on the node that comes back up. To recap, I pull the
2005 Aug 08
1
Missing dependencies for HA
CentOS 4.1 and Heartbeat 2.0.0.
I'm trying to install the rpm's for heartbeat and heartbeat-stonith and get these failed dependencies.
error: Failed dependencies:
libcrypto.so.0.9.7 is needed by heartbeat-2.0.0-1.i586
libnet.so.0 is needed by heartbeat-2.0.0-1.i586
librpm-4.1.so is needed by heartbeat-2.0.0-1.i586
librpmdb-4.1.so is needed by