Displaying 20 results from an estimated 7000 matches similar to: "as promised description of my XEN HA setup"
2011 Jan 19
8
Xen on two node DRBD cluster with Pacemaker
Hi all,
could somebody point me to what is considered a sound way to offer Xen guests
on a two node DRBD cluster in combination with Pacemaker? I prefer block
devices over images for the DomUs. I understand that for live migration DRBD
8.3 is needed, but I'm not sure as to what kind of resource
agents/technologies are advised (LVM,cLVM, ...) and what kind of DRBD config
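A minimal crm-shell sketch of the usual arrangement, assuming the ocf:linbit:drbd and ocf:heartbeat:Xen resource agents; the resource, volume and config-file names below are placeholders, not a tested config:

# DRBD resource managed as a multi-state (master/slave) resource
primitive p_drbd_vm1 ocf:linbit:drbd \
    params drbd_resource="vm1" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd_vm1 p_drbd_vm1 \
    meta master-max="2" clone-max="2" notify="true"   # master-max=2 only if you want live migration
# the Xen guest itself, allowed to migrate between the nodes
primitive p_xen_vm1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/vm1.cfg" \
    meta allow-migrate="true"
colocation c_vm1_on_drbd inf: p_xen_vm1 ms_drbd_vm1:Master
order o_drbd_before_vm1 inf: ms_drbd_vm1:promote p_xen_vm1:start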
2011 May 10
3
DRBD, Xen, HVM and live migration
Hi,
I want to combine all the above mentioned technologies.
The Linbit pages warn not to use the drbd: VBD with HVM DomUs.
This page however:
http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en
(thank you Jean), simply puts two DRBD devices in dual primary mode and
starts Xen DomUs while pointing to the DRBD devices with phy: in the
DomU config files.
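What that page describes boils down to something like the following, in DRBD 8.3-style syntax; a sketch only, with resource, volume and IP names invented:

# /etc/drbd.d/vm1.res -- dual-primary so both Dom0s can hold the device open during migration
resource vm1 {
  net { allow-two-primaries; }
  on node1 { device /dev/drbd1; disk /dev/vg0/vm1; address 192.168.1.1:7789; meta-disk internal; }
  on node2 { device /dev/drbd1; disk /dev/vg0/vm1; address 192.168.1.2:7789; meta-disk internal; }
}

# and in the DomU config file, point at the DRBD device directly:
disk = [ 'phy:/dev/drbd1,xvda,w' ]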
2006 Jun 07
14
HA Xen on 2 servers!! No NFS, special hardware, DRBD or iSCSI...
I've been brainstorming...
I want to create a 2-node HA active/active cluster (In other words I want to run a handful of
DomUs on one node and a handful on another). In the event of a failure I want all DomUs to fail
over to the other node and start working immediately. I want absolutely no
single-points-of-failure. I want to do it with free software and no special hardware. I want
2010 Jun 08
21
My future plan
My future plan currently looks like this for my VPS hosting solution, so any feedback would be appreciated:
Each Node:
Dell R210 Intel X3430 Quad Core 8GB RAM
Intel PT 1Gbps Server Dual Port NIC using linux "bonding"
Small pair of HDDs for OS (Probably in RAID1)
Each node will run about 10 - 15 customer guests
Storage Server:
Some Intel Quad Core Chip
2GB RAM (Maybe more?)
LSI
2010 Jun 14
49
iSCSI and LVM
Hi Everyone,
I am going to get a storage server which will be connected to my Xen hosts via iSCSI/Ethernet. I wish to use LVM for the DomU disks. The storage server will have a RAID10 array, and 2 Xen hosts will connect to this (Each will have a 50% share of the RAID10 array, space wise).
What is the best way to go about this? Should I:
a) Split the RAID10 array into 2 partitions on the
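One common layout, assuming the array is exported as two iSCSI LUNs, one per host; target names and device paths below are made up:

# on each Xen host: log in to its LUN and build a VG on it
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2010-06.example:storage.host1 -p 192.168.0.10 --login
pvcreate /dev/sdb                         # the LUN as seen by this host
vgcreate vg_domU /dev/sdb
lvcreate -L 20G -n vm1-disk vg_domU
# DomU config then uses:  disk = [ 'phy:/dev/vg_domU/vm1-disk,xvda,w' ]

As long as each host only touches its own LUN, plain (non-clustered) LVM is enough.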
2008 Jan 06
1
DRBD NFS load issues
My NFS setup is a heartbeat setup on two servers running Active/Passive
DRBD. The NFS servers themselves are 1x 2-core Opterons with 8G RAM and
5TB space with 16 drives and a 3ware controller. They're connected to an
HP ProCurve switch with bonded Ethernet. The sync-rates between the two
DRBD nodes seem to safely reach 200Mbps or better. The processors on the
active NFS servers run with a load
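If the resync traffic is part of that load, the DRBD 8.3-style syncer rate is the usual knob; the resource name and rate here are only illustrative:

resource nfsdata {
  syncer {
    rate 33M;   # cap resync bandwidth so it does not starve live NFS traffic
  }
}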
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top of AoE vblades) instead of just plain LVM for Xen-based VMs on Debian 5.0?
Guys,
I have set up my hard disk with 3 partitions:
1- 256MB on /boot;
2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests LAN);
3- 498GB exported with vblade-persist to my network (eth1 for the AoE
protocol).
On dom0 hypervisor01:
vblade-persist setup 0 0 eth1 /dev/sda3
vblade-persist start all
How to create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s?
Including the
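Once /dev/etherd/e0.0 is visible on every dom0, the clustered VG itself is straightforward, assuming clvmd and the cluster stack are already running on all nodes; names are placeholders:

# on one dom0:
pvcreate /dev/etherd/e0.0
vgcreate --clustered y vg_guests /dev/etherd/e0.0
# on any dom0, with clvmd running everywhere:
lvcreate -L 10G -n vm1-disk vg_guests   # metadata locking is coordinated by clvmd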
2008 Jul 31
6
drbd 8 primary/primary and xen migration on RHEL 5
Greetings.
I've reviewed the list archives, particularly the posts from Zakk, on
this subject, and found results similar to his. drbd provides a
block-drbd script, but with full virtualization, at least on RHEL 5,
this does not work; by the time the block script is run, the qemu-dm has
already been started.
Instead I've simply been musing over the possibility of keeping the drbd
2006 Oct 12
5
AoE LVM2 DRBD Xen Setup
Hello everybody,
I am in the process of setting up a really cool xen serverfarm. Backend
storage will be an LVMed AoE-device on top of DRBD.
The goal is to have the backend storage completely redundant.
Picture:
 |RAID|          |RAID|
 |DRBD1| <----> |DRBD2|
        \       /
         |VMAC|
         | AoE |
     |global LVM VG|
       /    |    \
 |Dom0a| |Dom0b| |Dom0c|
    |       |
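In command form, the stack sketched above would be roughly the following; device and VG names are invented, and only the currently active DRBD node should export:

# on the active storage node: export the DRBD device over AoE (shelf 0, slot 1, via eth1)
vblade-persist setup 0 1 eth1 /dev/drbd0
vblade-persist start all
# on each Dom0, the export appears as /dev/etherd/e0.1:
pvcreate /dev/etherd/e0.1
vgcreate vg_global /dev/etherd/e0.1     # use CLVM if more than one Dom0 changes LVM metadata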
2009 Oct 26
6
LVM over Xen + Network
Hi,
We are planning to use LVM over a network of 3 h/w machines (500
GB disk each).
Each hardware machine will have 2-3 domUs.
Can we store these domUs as logical volumes spread across the network of these
3 machines?
Can one DomU exceed the 500 GB (physical drive size) and store, say, 1 TB of
data across the networked Physical Volumes?
Has anyone done this before?
Thanks and regards,
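In principle yes, provided each machine's disk is exported over the network (AoE, iSCSI, nbd) so that all PVs are visible from one place; a single VG can then span the three PVs and an LV can be larger than any one disk. A rough sketch with invented AoE device names:

pvcreate /dev/etherd/e0.0 /dev/etherd/e1.0 /dev/etherd/e2.0
vgcreate vg_span /dev/etherd/e0.0 /dev/etherd/e1.0 /dev/etherd/e2.0
lvcreate -L 1T -n big-domU vg_span      # larger than any single 500 GB drive

Note that without RAID or DRBD underneath, losing any one of the three machines takes the whole spanned LV with it.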
2007 Nov 13
2
lvm over nbd?
I have a system with a large LVM VG partition.
I was wondering if there is a way I could share the partition
using nbd and have the nbd-client access the LVM
as if it were local.
SYSTEM A: /dev/sda3 is an LVM partition and is assigned to
VG volgroup1. I want to share /dev/sda3 via nbd-server
SYSTEM B: receives A's /dev/sda3 as /dev/nbd0. I want to
access it as VG volgroup1.
I am
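A bare sketch of that, assuming volgroup1 is never activated on both systems at once; the hostname and port are invented:

# SYSTEM A: export the raw partition (deactivate volgroup1 here first)
nbd-server 2000 /dev/sda3
# SYSTEM B: attach it and let LVM find the VG on it
nbd-client systema 2000 /dev/nbd0
pvscan
vgchange -ay volgroup1

If LVM does not see the PV on /dev/nbd0, the device filter in /etc/lvm/lvm.conf may need to allow nbd devices.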
2008 Jun 09
1
Stability of Image Files
Hi,
I think I've nearly evaluated all the choices for which Virtualisation
option to go with. Disappointed so far that I can't get my PCI cards
visible in a guest, but never mind.
On the same hardware (2 Gig RAM, AMD Athlon 64 3500 and a Fedora Core 8
i386 install), OpenVZ has a tendency to "freeze" up every now and again
in SSH. KVM/Qemu seem too slow, Xen seems really
2010 Apr 30
5
Mount drbd/gfs logical volume from domU
Hi list,
I set up a drbd/gfs logical volume on 2 Xen Dom0s; it runs primary/primary so both DomUs will be able to write to it at the same time. But I don't know how to mount it from my domUs; I can see it with fdisk -l. The partition is /dev/xvdb1
Should I install gfs on the domUs and mount it on each as a gfs partition?
[root@p3x0501 ~]# fdisk -l
Disk /dev/xvda: 5368 MB, 5368709120
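Roughly, yes: the domUs need the cluster filesystem and lock manager, not just the block device. A sketch assuming GFS2 with DLM locking and a cluster named "xencluster" (names invented; skip mkfs if the filesystem already exists from the Dom0 side):

# once, from any node that sees the device:
mkfs.gfs2 -p lock_dlm -t xencluster:shared -j 2 /dev/xvdb1
# in each domU, with cman/dlm running and joined to the same cluster:
mount -t gfs2 /dev/xvdb1 /mnt/shared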
2010 Feb 27
17
XEN and clustering?
Hi.
I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw
volumes from storage to Xen virtual machines. With Windows, I have a
problem that the nodes don't see the volume as the same one... for example:
clusternode1# clusvcadm -d vm:winxp
clusternode1# dd if=/dev/mapper/winxp of=/node1winxp
clusternode2# dd if=/dev/mapper/winxp of=/node2winxp
clusternode3# dd
2013 Jan 20
10
iscsi on xen
I wonder if someone can point me in the right direction. I have two Dell
servers. I set up iSCSI: I have four 2 TB hard drives and used LVM
to create one big partition and shared it using iSCSI. How do I go about
assigning sections of the iSCSI storage for virtual hard drives?
Should I export the whole 8TB as one iSCSI target and then use LVM to
create smaller virtual disks? Or should I
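Either layout can work; a common pattern is to export one big LUN per storage box and carve per-guest disks with LVM on the Xen side. A sketch with IET on the target; names and the config path (distro-dependent) are illustrative:

# /etc/iet/ietd.conf on the storage server -- one LUN backed by the big LV
Target iqn.2013-01.example.com:xenstore
    Lun 0 Path=/dev/vg_big/export,Type=blockio

# on the Xen host, after iscsiadm discovery/login:
pvcreate /dev/sdc
vgcreate vg_guests /dev/sdc
lvcreate -L 40G -n vm1-disk vg_guests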
2012 Sep 14
2
HA-OCFS2?
Is it possible to create a highly-available OCFS2 cluster (i.e., a storage cluster that mitigates the single point of failure [SPoF] created by storing an OCFS2 volume on a single LUN)?
The OCFS2 Project Page makes this claim...
> OCFS2 is a general-purpose shared-disk cluster file system for Linux capable of
providing both high performance and high availability.
...but without backing-up
2009 Jun 11
6
NAS Storage server question
Hello all,
At our office I have a server running 3 Xen domains: mail server, etc.
I want to make this setup more redundant.
There are a few howtos on the combination of Xen, DRBD, and heartbeat.
That is probably the best way.
Another option I am looking at is a piece of shared storage,
a machine running CentOS with a large software RAID 5 array.
What is the best means of sharing the storage?
2012 Mar 05
12
Cluster xen
Hello,
I would like to set up a cluster with Xen or XenServer on 2
Dell R710 servers.
I would like to build a cluster that uses the combined disk space
of the 2 servers, as well as the memory.
What are your experiences and configurations?
Thanks in advance.
Regards,
Mat
2008 Nov 20
27
lenny amd64 and xen.
I've installed Debian lenny amd64; it is frozen now.
I've installed the kernel for Xen support but it doesn't start.
It says "you need to load kernel first", but I've installed all the packages
concerning Xen, and also the packages related to the kernel.
Perhaps lenny doesn't support Xen anymore?
Any solution?
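That GRUB message usually means the menu entry boots the plain kernel instead of the hypervisor. A lenny-era GRUB legacy entry for Xen looks roughly like this; the exact version strings and root device are guesses:

title   Xen / Debian lenny
root    (hd0,0)
kernel  /boot/xen-3.2-1-amd64.gz
module  /boot/vmlinuz-2.6.26-2-xen-amd64 root=/dev/sda1 ro console=tty0
module  /boot/initrd.img-2.6.26-2-xen-amd64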
2009 Jul 21
2
Best Practices for PV Disk IO?
I was wondering if anyone's compiled a list of places to look to
reduce disk IO latency for Xen PV DomUs. I've gotten reasonably
acceptable performance from my setup (Dom0 as an iSCSI initiator,
providing phy volumes to DomUs), at about 45MB/sec writes and
80MB/sec reads (this is to an IET target running in blockio mode).
As always, reducing latency for small disk operations