Displaying 20 results from an estimated 8000 matches similar to: "cLVM on Debian/Lenny"
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization
installations. To date we have only dealt with a small number of VMs
(~10) on not too big hardware (2x quad Xeons + 16 GB RAM). As I'm the
"storage guy" I find it quite convenient to present one LUN per VM to
the dom0s, which makes live migration possible without a cluster
file system or cLVM
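A minimal sketch of the one-LUN-per-VM approach described above, assuming purely illustrative multipath aliases and domU names:

```shell
# Each VM gets its own LUN on the array; on every dom0 it appears as a
# block device (here via multipath -- the alias "vm01" is hypothetical).
multipath -ll vm01            # verify the LUN is visible on this dom0

# The domU config points straight at the shared block device, so a live
# migration target can open the very same LUN:
#   disk = ['phy:/dev/mapper/vm01,xvda,w']

# Migrate without any cluster filesystem or cLVM, since only one dom0
# ever has the disk open at a time:
xm migrate --live vm01 dom0-b
```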
2007 Mar 22
6
Xen and SAN : snapshot XOR live-migration ?
Please tell me if I am wrong:
Xen needs LVM to perform domU snapshots, and snapshots must be performed
by dom0.
By the way, an LVM volume group should not be used by more than one
kernel at the same time. So if we use SAN storage, a volume group
should be activated on only one server and deactivated on the others.
But if we do that, it should not be possible to perform live migration
of
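The activate-on-one-host-only rule described in this thread can be sketched as follows (the VG name `vg_san` is a made-up example):

```shell
# Before (or as part of) moving a domU from host A to host B:

# On host A: deactivate the volume group so no LV is open here.
vgchange -a n vg_san

# On host B: rescan the SAN metadata, then activate the same VG.
pvscan
vgchange -a y vg_san

# Only now is it safe for host B's kernel to touch the LVs -- which is
# exactly why single-host LVM snapshots and live migration pull in
# opposite directions without cLVM's cluster-wide locking.
```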
2010 Feb 27
17
XEN and clustering?
Hi.
I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw
volumes from storage to the Xen virtual machines. With Windows, I have a
problem that the nodes don't see the volume as the same one... for example:
clusternode1# clusvcadm -d vm:winxp
clusternode1# dd if=/dev/mapper/winxp of=/node1winxp
clusternode2# dd if=/dev/mapper/winxp of=/node2winxp
clusternode3# dd
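One way to make the "do all nodes see the same volume?" check above cheaper than full `dd` dumps is to hash a slice of the device from each node and compare. The sketch below simulates the comparison with ordinary files, since the device paths in the post are site-specific; on real nodes you would read `/dev/mapper/winxp` instead:

```shell
# Simulate two nodes dumping the first MB of the shared volume.
dd if=/dev/zero of=/tmp/node1winxp bs=1M count=1 2>/dev/null
dd if=/dev/zero of=/tmp/node2winxp bs=1M count=1 2>/dev/null

# Hash the dumps; identical hashes mean both nodes read identical data.
sum1=$(md5sum /tmp/node1winxp | awk '{print $1}')
sum2=$(md5sum /tmp/node2winxp | awk '{print $1}')

if [ "$sum1" = "$sum2" ]; then
    echo "volumes identical"
else
    echo "volumes DIFFER -- nodes are not seeing the same LUN"
fi
```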
2010 Mar 08
4
Error with clvm
Hi,
I get this error when I try to start clvm (Debian Lenny):
This is a clvm version with openais
# /etc/init.d/clvm restart
Deactivating VG ::.
Stopping Cluster LVM Daemon: clvm.
Starting Cluster LVM Daemon: clvmCLVMD[86475770]: Mar 8 11:25:27 CLVMD
started
CLVMD[86475770]: Mar 8 11:25:27 Our local node id is -1062730132
CLVMD[86475770]: Mar 8 11:25:27 Add_internal_client, fd = 7
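The negative node id in the log may simply be an address reinterpreted as a signed 32-bit integer, so before digging into clvmd itself it is worth confirming the openais membership is healthy. A sketch (init-script paths as on Debian Lenny, log locations vary):

```shell
# 1. Is the cluster stack actually up before clvmd starts?
/etc/init.d/openais status

# 2. Did the totem membership layer reach a stable configuration?
grep -i "entering OPERATIONAL state" /var/log/syslog

# 3. Only once membership is confirmed, restart clvmd:
/etc/init.d/clvm restart
```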
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on
with our CLVM cluster.
Background:
4 node "cluster"-- machines are Dell blades with Dell M6220/M6348 switches.
Sole purpose of Cluster Suite tools is to use CLVM against an iSCSI storage
array.
Machines are running CentOS 5.8 with the Xen kernels. These blades host
various VMs for a project. The iSCSI
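When debugging a stack like this, two low-level checks often pay off: confirm LVM is really using cluster locking, and run clvmd in the foreground with debug output. A sketch (as on CentOS 5's lvm2-cluster; exact flags and output vary):

```shell
# lvm.conf must select the cluster locking library (type 3) on every node:
grep "locking_type" /etc/lvm/lvm.conf      # expect: locking_type = 3

# Run clvmd in the foreground with debug logging to see where it hangs:
clvmd -d

# Cross-check the cluster view that clvmd depends on:
cman_tool status
cman_tool nodes
```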
2006 Dec 06
5
LVM & volume groups
Can anybody tell me if it makes a difference if domUs have separate LVM
volume groups?
For instance, the Xen User Manual
( http://tx.downloads.xensource.com/downloads/docs/user/#SECTION03330000000000000000) says, when setting up a domU's disks with LVM, to do a
vgcreate vg /dev/sda10
Should each domU have its own volume group, or can all the domUs share
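To the question above: a single shared VG with one LV per domU is the common pattern, while per-domU VGs work but multiply management overhead (and each VG needs its own physical volumes). A sketch of both, with made-up names:

```shell
# Option A (common): one VG, one LV per domU.
vgcreate vg_guests /dev/sda10
lvcreate -L 10G -n domu1-disk vg_guests
lvcreate -L 10G -n domu2-disk vg_guests

# Option B (as in the manual's example): a dedicated VG per domU.
# Note a PV can belong to only one VG, so this needs a separate
# partition or disk per domU:
vgcreate vg_domu1 /dev/sda10
```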
2010 Jun 14
49
iSCSI and LVM
Hi Everyone,
I am going to get a storage server which will be connected to my Xen hosts via iSCSI/Ethernet. I wish to use LVM for the DomU disks. The storage server will have a RAID10 array, and 2 Xen hosts will connect to this (each will have a 50% share of the RAID10 array, space-wise).
What is the best way to go about this? Should I:
a) Split the RAID10 array into 2 partitions on the
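For the 50/50 split described above, one workable layout (device names and IQN hypothetical) is to export two LUNs from the storage server and give each Xen host its own VG, avoiding any need for cluster locking:

```shell
# On each Xen host, after logging in to its own iSCSI target:
iscsiadm -m node -T iqn.2010-06.example:lun-host1 -l

# Turn the per-host LUN into LVM storage for domU disks:
pvcreate /dev/sdb                 # the iSCSI disk as seen on this host
vgcreate vg_xen1 /dev/sdb
lvcreate -L 20G -n guest1-disk vg_xen1
```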
2009 Sep 16
3
Dependency problem between cman and openais with last cman update
Hi. Today I updated my cluster installation to version
cman-2.0.115-1 through yum update. When I start the cman service,
it fails. If I execute cman_tool debug I get the following error:
[CMAN ] CMAN 2.0.115 (built Sep 16 2009 12:28:10) started
aisexec: symbol lookup error:
/usr/libexec/lcrso/service_cman.lcrso: undefined symbol:
2011 Feb 27
1
Recover botched drdb gfs2 setup .
Hi.
The short story... Rush job, never done clustered file systems before,
and the VLAN didn't support multicast. Thus I ended up with DRBD working OK
between the two servers but cman/gfs2 not working, resulting in what
was meant to be a DRBD primary/primary cluster being a primary/secondary
cluster until the VLAN could be fixed, with GFS2 mounted on only one
server. I got the single server
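Once the VLAN passes multicast and cman/gfs2 can form a cluster, promoting the DRBD resource back to dual-primary is roughly the following (resource name `r0` is assumed; verify state before each step):

```shell
# Confirm both sides are connected and in sync first:
drbdadm cstate r0      # expect: Connected
drbdadm dstate r0      # expect: UpToDate/UpToDate

# With allow-two-primaries enabled in the resource config, promote the
# current secondary so both nodes are primary again:
drbdadm adjust r0      # pick up the config change
drbdadm primary r0     # run on the node that is currently secondary

# Only then mount GFS2 on the second node as well.
```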
2014 Feb 12
3
Right way to do SAN-based shared storage?
I'm trying to set up SAN-based shared storage in KVM, key word being
"shared" across multiple KVM servers for a) live migration and b)
clustering purposes. But it's surprisingly sparsely documented. For
starters, what type of pool should I be using?
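To the pool-type question above: for raw SAN LUNs shared across KVM hosts, the usual choices are a `logical` (LVM) pool on top of the LUN, or handing whole LUNs to guests directly. A sketch using virsh, with invented names:

```shell
# Define an LVM-backed storage pool over the shared LUN on each host:
virsh pool-define-as san-pool logical --source-dev /dev/mapper/mpatha \
      --target /dev/san-pool
virsh pool-start san-pool
virsh pool-autostart san-pool

# Volumes carved from the pool become guest disks:
virsh vol-create-as san-pool guest1-disk 20G
```

Note that "shared" is the hard part: LVM metadata changes from two hosts at once still need clvmd (or strict one-host-writes discipline) underneath the pool.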
2009 Mar 12
5
Alternatives to cman+clvmd ?
I currently have a few CentOS 5.2 based Xen clusters at different sites.
These are built around a group of 3 or more Xen nodes (blades) and
some sort of shared storage (FC or iSCSI) carved up by LVM and allocated
to the domUs.
I am "managing" the shared storage (from the dom0 perspective) using
cman+clvmd, so that changes to the LVs (rename/resize/create/delete/etc)
are
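With clvmd running on every dom0 as described above, LV operations propagate cluster-wide automatically; a sketch of the day-to-day flow (VG and LV names invented):

```shell
# On any one node -- clvmd's cluster locking makes the change visible
# everywhere, no rescans needed on the other dom0s:
lvcreate -L 8G -n newguest-disk vg_shared
lvextend -L +4G /dev/vg_shared/oldguest-disk
lvrename vg_shared olddisk newdisk

# Verify from a *different* node that the change is already there:
lvs vg_shared
```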
2009 Oct 26
6
LVM over Xen + Network
Hi,
We are planning to have LVM used over a network of 3 h/w machines (500
GB disk each).
Each hardware machine will have 2-3 domUs.
Can we store these domUs as logical volumes spread across the network of
these 3 machines?
Can one DomU exceed the 500 GB (physical drive size) and store say 1 TB of
data across the networked Physical Volumes?
Has anyone done this before?
Thanks and regards,
2008 Feb 17
17
Exporting a PCI Device
Hi All,
just a quick question I could not figure out. Is there a way to export a
PCI device to multiple VMs (para) while keeping it available to dom0? Xen
version is 3.0.4.
Thanks,
Jan
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
2006 Dec 05
2
Pinging DomU from DomU
Hi,
I am running Xen 3.0.2 in bridged networking on RHEL4. Everything
seems to be working nicely, apart from a worrying issue.
I can ping from Dom0 -> DomUs okay.
I can ping from DomUs -> Dom0 okay.
HOWEVER.
I cannot ping DomU -> DomU.
---
# ping xxx.xxx.xxx.248
Do you want to ping broadcast? Then -b
---
Even when I try with the -b switch, I get no joy.
Thoughts? TIA!
Y.
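A few things worth checking for the DomU-to-DomU case above, since domU<->domU traffic crosses the bridge while dom0<->domU traffic does not take the same path (interface names are the Xen 3.0 defaults and may differ):

```shell
# Are both domU vifs actually attached to the same bridge?
brctl show xenbr0

# Is dom0's firewall forwarding bridged traffic? A DROP policy or a
# physdev rule here can block domU<->domU but not dom0<->domU:
iptables -L FORWARD -v

# If br_netfilter is loaded, bridged frames traverse iptables;
# temporarily letting them through isolates the problem:
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
```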
2011 Jan 19
8
Xen on two node DRBD cluster with Pacemaker
Hi all,
could somebody point me to what is considered a sound way to offer Xen guests
on a two-node DRBD cluster in combination with Pacemaker? I prefer block
devices over images for the DomUs. I understand that for live migration DRBD
8.3 is needed, but I'm not sure as to what kind of resource
agents/technologies are advised (LVM, cLVM, ...) and what kind of DRBD config
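For live migration both DRBD nodes must be primary at once, which in DRBD 8.3 is a per-resource setting; a minimal resource fragment (hostnames, device and disk paths are placeholders):

```
# /etc/drbd.d/xen-vm.res -- sketch, not a complete config
resource xen-vm {
  net {
    allow-two-primaries;          # required for Xen live migration
  }
  startup {
    become-primary-on both;
  }
  on node-a {
    device /dev/drbd0; disk /dev/vg0/xen-vm;
    address 10.0.0.1:7789; meta-disk internal;
  }
  on node-b {
    device /dev/drbd0; disk /dev/vg0/xen-vm;
    address 10.0.0.2:7789; meta-disk internal;
  }
}
```

On the Pacemaker side, the commonly advised pairing is the ocf:linbit:drbd agent (as a master/slave resource) plus ocf:heartbeat:Xen; cLVM only enters the picture if several DomUs share one VG on top of the DRBD device.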
2009 Apr 08
1
ocfs2_controld.cman
If I start ocfs2_controld.cman in parallel on a few nodes, only one of them
starts up, the others exit with one of these errors:
call_section_read at 370: Reading from section "daemon_protocol" on checkpoint "ocfs2:controld" (try 1)
call_section_read at 387: Checkpoint "ocfs2:controld" does not have a section named "daemon_protocol"
call_section_read at
2007 Nov 22
2
2.6.22.9-xen (ubuntu) and Nvidia x86_64-100.14.19 - success
On my recently acquired Core2Duo I've installed Debian Etch amd64.
Then I've pulled linux-source from the Ubuntu archive, since I need newer
RAID and network-card drivers:
http://dk.archive.ubuntu.com/ubuntu/pool/main/l/linux-source-2.6.22/linux-source-2.6.22_2.6.22.orig.tar.gz
http://dk.archive.ubuntu.com/pub/ubuntu/pool/main/l/linux-source-2.6.22/linux-source-2.6.22_2.6.22-14.46.diff.gz
2010 Nov 12
4
Opinion on best way to use network storage
I need the community's opinion on the best way to use my storage SAN to host
xen images. The SAN itself is running iSCSI and NFS. My goal is to keep
all my xen images on the SAN device, and to be able to easily move images
from one host to another as needed, while minimizing storage requirements and
maximizing performance.
What I see are my options:
1) Export a directory through NFS.
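If option 1 is chosen, the export and the corresponding Xen disk line look roughly like this (paths and addresses are invented):

```shell
# On the SAN, /etc/exports -- file-backed images, shared to all hosts:
#   /srv/xen-images  10.0.0.0/24(rw,no_root_squash,sync)
exportfs -ra

# On each Xen host, mount the share and point domU configs at the files:
mount -t nfs san:/srv/xen-images /var/lib/xen/images
#   disk = ['tap:aio:/var/lib/xen/images/vm1.img,xvda,w']
```

Moving an image between hosts then costs nothing, since every host sees the same files; the trade-off is NFS file-backed performance versus iSCSI block devices.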
2007 Feb 14
5
Cookbook/HowTo for using XEN to create a complete DMZ?
Hi Folks,
XEN seems to me to be the ideal partner to create a complete DMZ with
firewall, router, "Bastion Host(s)", etc. within a single PC.
So far, I haven't found any cookbook or how-to (at least for getting started).
Anyone knows of such thing?
Regards
Falko
2009 Jan 27
20
Xen SAN Questions
Hello Everyone,
I recently had a question that got no responses about GFS+DRBD clusters for Xen VM storage, but after some consideration (and a lot of Googling) I have a couple of new questions.
Basically, what we have here are two servers that will each have a RAID-5 array filled with 5 x 320 GB SATA drives. I want to have these as usable file systems on both servers (as they will both be