Displaying 20 results from an estimated 400 matches similar to: "Error with clvm"
2009 Mar 12
5
Alternatives to cman+clvmd ?
I currently have a few CentOS 5.2 based Xen clusters at different sites.
These are built around a group of 3 or more Xen nodes (blades) and
some sort of shared storage (FC or iSCSI) carved up by LVM and allocated
to the domUs.
I am "managing" the shared storage (from the dom0 perspective) using
cman+clvmd, so that changes to the LVs (rename/resize/create/delete/etc)
are
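For reference, the cman+clvmd workflow being described looks roughly like this on CentOS 5 (the VG name and LUN path below are made up):

# on every dom0: switch LVM to cluster locking and start the cluster stack
lvmconf --enable-cluster              # sets locking_type = 3 in /etc/lvm/lvm.conf
service cman start && service clvmd start
# once, on any node: create a clustered VG so LV changes propagate to all dom0s
vgcreate -cy vg_shared /dev/mapper/shared_lun
lvcreate -n domu01 -L 20G vg_shared   # immediately visible on the other nodes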
2014 Feb 12
3
Right way to do SAN-based shared storage?
I'm trying to set up SAN-based shared storage in KVM, key word being
"shared" across multiple KVM servers for a) live migration and b)
clustering purposes. But it's surprisingly sparsely documented. For
starters, what type of pool should I be using?
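One commonly suggested answer is a libvirt "logical" (LVM) pool on top of the shared LUN; a hedged sketch with invented pool and device names, and with the caveat that concurrent LVM metadata changes from several hosts still need clvmd or lvmlockd:

<pool type='logical'>
  <name>san_vg</name>
  <source>
    <device path='/dev/mapper/mpatha'/>
    <name>san_vg</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/san_vg</path>
  </target>
</pool>

virsh pool-define san_vg.xml && virsh pool-start san_vg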
2009 Apr 15
32
cLVM on Debian/Lenny
Hi -
Is there someone around who successfully got cLVM on Debian/Lenny
working? I was wondering if I was the only one facing problems with...
Thanks in anticipation,
--
Olivier Le Cam
Département des Technologies de l'Information et de la Communication
CRDP de l'académie de Versailles
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list.
I am dealing with DRBD (plus GFS, which uses the DLM). The GFS configuration needs a
CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root@alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root@alice ~]# vgcreate
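For reference, the clustered-VG steps being attempted here look roughly like this (VG, LV and cluster names are illustrative):

pvcreate /dev/drbd0
vgcreate -cy vg_drbd /dev/drbd0           # -cy marks the VG as clustered
lvcreate -n gfs_lv -l 100%FREE vg_drbd    # needs clvmd running on both nodes
mkfs.gfs2 -p lock_dlm -t mycluster:gfs_lv -j 2 /dev/vg_drbd/gfs_lv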
2010 Feb 27
17
XEN and clustering?
Hi.
I'm using Xen on a RHEL cluster, and I have strange problems. I gave raw
volumes from storage to Xen virtual machines. With Windows, I have a
problem that the nodes don't see the volume as the same one... for example:
clusternode1# clusvcadm -d vm:winxp
clusternode1# dd if=/dev/mapper/winxp of=/node1winxp
clusternode2# dd if=/dev/mapper/winxp of=/node2winxp
clusternode3# dd
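A quick, hedged way to confirm whether the nodes really read different data is to compare checksums instead of the full dumps (the multipath/dmsetup commands are shown only as likely next steps):

clusternode1# md5sum /dev/mapper/winxp
clusternode2# md5sum /dev/mapper/winxp
# if the sums differ, check that every node maps the same LUN the same way:
clusternode1# multipath -ll
clusternode1# dmsetup table winxp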
2014 Oct 29
2
CentOS 6.5 RHCS fence loops
Hi Guys,
I'm using CentOS 6.5 as a guest on RHEV, with RHCS for a clustered web environment.
The environment:
web1.example.com
web2.example.com
When the cluster gains quorum, web1 gets rebooted by web2. When web2 comes back up,
web2 gets rebooted by web1.
Does anybody know how to solve this "fence loop"?
master_wins="1" is not working properly, and neither is qdisk.
Below is the cluster.conf, I
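For a two-node RHCS cluster, the usual starting point for fence loops is the two_node/expected_votes pair plus a fence-daemon delay; a hedged cluster.conf fragment (not the poster's actual file):

<cman two_node="1" expected_votes="1"/>
<fence_daemon post_join_delay="60" clean_start="0"/>

post_join_delay gives the rebooted node time to rejoin before it can be fenced again.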
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on
with our CLVM cluster.
Background:
4-node "cluster" -- the machines are Dell blades with Dell M6220/M6348 switches.
The sole purpose of the Cluster Suite tools is to use CLVM against an iSCSI storage
array.
Machines are running CentOS 5.8 with the Xen kernels. These blades host
various VMs for a project. The iSCSI
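Typical first steps for a wedged clvmd/openais stack on CentOS 5 look roughly like this (exact flags vary by version):

cman_tool status      # quorum state and membership
cman_tool nodes       # per-node view
group_tool ls         # fence/dlm/clvmd groups; look for entries stuck in a wait state
clvmd -d              # run clvmd in the foreground with debug output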
2005 Feb 15
6
xen-testing and redhat-cluster devel
Hi,
I'm using Xen on a two-node Red Hat cluster (CVS devel version), using LVM
as the storage backend.
The Red Hat cluster is used to synchronize LVM metadata (using clvmd) and as
storage for domain configs and domU kernels (with GFS).
Latest version of redhat cluster works with xen-2.0.4, but not with
xen-2.0-testing.
ccsd failed to start on 2.0-testing. Does anyone know what the problem is?
I
2006 Apr 12
2
bootup error - undefined symbol: lvm_snprintf after yum update
This is an x86_64 system that I just updated from 4.2 to 4.3
(including the csgfs stuff).
When I watch the bootup on the console, I see an error:
lvm.static: symbol lookup error: /usr/lib64/liblvm2clusterlock.so:
undefined symbol: lvm_snprintf
This error comes immediately after the "Activating VGs" line, so it
appears to be triggered by the vgchange command in the clvmd startup
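This symbol error usually means the clustered-locking library (liblvm2clusterlock.so from lvm2-cluster, part of the csgfs channel) was left at a version that no longer matches the updated lvm2; a hedged first check:

rpm -q lvm2 lvm2-cluster    # the two should track the same LVM release
yum update lvm2-cluster     # or reinstall the matching csgfs build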
2007 Jul 09
0
Cannot Start CLVMD on second node of cluster
Hi,
I'm trying to configure Red Hat GFS but have problems starting CLVMD on
the second node of a 3-node cluster. I can start ccsd, cman, and fenced
successfully, and clvmd on the other nodes the first time.
-------------------------
CentOS 4.4
Kernel: 2.6.9-42.0.3.ELsmp
[root@server2 ~]# cman_tool nodes
Node  Votes  Exp  Sts  Name
   1      1    3    M  server3
   2      1    3    M  server4
3
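A hedged checklist for a node where cman and fenced come up but clvmd will not:

cman_tool status                      # the node must be quorate before clvmd starts
grep locking_type /etc/lvm/lvm.conf   # should be 3 (cluster locking) on every node
lvmconf --enable-cluster              # sets locking_type = 3 if it is not
service clvmd start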
2006 Nov 06
1
Segmentation fault on LVM
Hi all,
I have installed RHCS on a CentOS 4.4 server with clvmd. When the server
reboots, it displays a segmentation fault at line 504 of the /etc/rc.d/rc.sysinit
file, here:
if [ -x /sbin/lvm.static ]; then
500 if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure
> /dev/null 2>&1 ; then
501 action $"Setting up Logical Volume Management:"
2011 Nov 30
1
CTDB + Likewise-open : What servername when joining AD?
Hi,
[Context : ubuntu 11.11 64bits, cman, clvmd, gfs2, ctdb, samba,
likewise-open, all running fine except...]
I've set up ctdb to manage public_addresses (to manage one virtual IP,
actually), and I've explicitly told smb.conf that
netbios name = foobar_cluster
I've done that *after* I managed to make samba, likewise-open and AD
work nicely together.
I guess now all the
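For context, a clustered Samba/CTDB setup usually carries the shared identity in smb.conf, and that clustered name (not the per-node hostnames) is what the AD machine account should match; a hedged fragment reusing the poster's placeholder name:

[global]
    clustering = yes
    netbios name = foobar_cluster
    security = ads
    realm = EXAMPLE.COM        # placeholder realm

net ads join -U Administrator  # run once; the machine account follows the clustered name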
2019 Jan 11
1
CentOS 7 as a Fibre Channel SAN Target
For quite some time I've been using FreeNAS to provide services as a NAS over Ethernet and a SAN over Fibre Channel to CentOS 7 servers, each using their own export, not sharing the same one.
It's time for me to replace my hardware, and I have a new R720XD that I'd like to use in the same capacity but configure CentOS 7 as a Fibre Channel target rather than use FreeNAS any further.
I'm doing
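On CentOS 7 the in-kernel LIO target (managed with targetcli) can export block devices over Fibre Channel through the qla2xxx fabric module; a heavily hedged sketch with placeholder WWNs and backing device, assuming the HBA has been switched to target mode (e.g. options qla2xxx qlini_mode=disabled):

yum install targetcli
targetcli /backstores/block create name=lun0 dev=/dev/vg_san/lun0
targetcli /qla2xxx create 21:00:00:24:ff:aa:bb:cc                 # WWPN of the local HBA port
targetcli /qla2xxx/21:00:00:24:ff:aa:bb:cc/luns create /backstores/block/lun0
targetcli /qla2xxx/21:00:00:24:ff:aa:bb:cc/acls create 21:00:00:24:ff:dd:ee:ff   # initiator WWPN
targetcli saveconfig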
2006 Oct 12
5
AoE LVM2 DRBD Xen Setup
Hello everybody,
I am in the process of setting up a really cool Xen server farm. The backend
storage will be an LVM'ed AoE device on top of DRBD.
The goal is to have the backend storage completely redundant.
Picture:
  |RAID|              |RAID|
 |DRBD1|  <------>  |DRBD2|
        \          /
          |VMAC|
          | AoE |
      |global LVM VG|
       /     |     \
|Dom0a|   |Dom0b|   |Dom0c|
   |         |
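A hedged sketch of the storage-head and dom0 sides of such a stack (the resource, shelf/slot and interface names are made up):

# on the active DRBD head: promote the replicated device and export it over AoE
drbdadm primary r0
vbladed 0 1 eth1 /dev/drbd0          # AoE shelf 0, slot 1 on eth1
# on each dom0: discover the AoE device and build the shared VG on it
modprobe aoe && aoe-discover
pvcreate /dev/etherd/e0.1
vgcreate vg_xen /dev/etherd/e0.1     # with several dom0s writing, cLVM locking is needed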
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization
installations. To date we have only dealt with a small number of VMs
(~10) on not-too-big hardware (2x quad Xeons + 16GB RAM). As I'm the
"storage guy", I find it quite convenient to present to the dom0s one
LUN per VM, which makes live migration possible but without the cluster
file system or cLVM
2012 Feb 23
2
lockmanager for use with clvm
Hi,
I am setting up a cluster of KVM hypervisors managed with libvirt.
The storage pool is on iSCSI with clvm. To prevent a VM from being
started on more than one hypervisor, I want to use a lock manager
with libvirt.
I could only find sanlock as a lock manager, but AFAIK sanlock will not
work in my setup as I don't have a shared filesystem. I have dlm running
for clvm. Are there lock manager
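Since dlm and clvmd are already in place, one hedged alternative is to skip a libvirt lock manager and activate each VM's logical volume exclusively on the host that runs it; clvmd then blocks activation anywhere else (LV names invented):

lvchange -aey vg_san/vm01    # exclusive, cluster-wide activation via dlm
# the same command on another hypervisor fails while the lock is held
lvchange -an vg_san/vm01     # release when the VM stops or before migration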
2007 Nov 13
2
lvm over nbd?
I have a system with a large LVM VG partition.
I was wondering if there is a way I could share the partition
using nbd and have the nbd client access the LVM
as if it were local.
SYSTEM A: /dev/sda3 is an LVM partition and is assigned to
VG volgroup1. I want to share /dev/sda3 via nbd-server.
SYSTEM B: receives A's /dev/sda3 as /dev/nbd0. I want to
access it as VG volgroup1.
I am
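A hedged sketch with the classic nbd tools (hostname and port invented), with the caveat that only one host should have volgroup1 active at any time:

# SYSTEM A: export the raw partition; keep volgroup1 deactivated here
nbd-server 2000 /dev/sda3
# SYSTEM B: attach the device and let LVM find the VG on it
nbd-client systema 2000 /dev/nbd0
vgscan
vgchange -ay volgroup1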
2007 Mar 22
6
Xen and SAN : snapshot XOR live-migration ?
Please tell me if I am wrong:
Xen needs LVM to perform domU snapshots, and snapshots must be performed
by dom0.
Also, an LVM volume group should not be used by more than one
kernel at the same time. So if we use SAN storage, a volume group
should be activated on only one server and deactivated on the others.
But if we do that, it should not be possible to perform live migration
of
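With cLVM the usual middle ground is exclusive activation: the clustered VG is visible everywhere, but a given LV is active on one host at a time, which is also what newer LVM releases require before snapshotting a clustered LV (a hedged sketch, illustrative names):

lvchange -aey vg_san/domu01                       # exclusive activation on the current host
lvcreate -s -n domu01_snap -L 5G vg_san/domu01    # snapshot while held exclusively
lvchange -an vg_san/domu01                        # release before live-migrating the domU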
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top of AoE vblades) instead of just plain LVM for Xen based VMs on Debian 5.0?
Guys,
I have set up my hard disk with 3 partitions:
1- 256MB on /boot;
2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests LAN);
3- 498GB exported with vblade-persist to my network (eth1 for the AoE
protocol).
On dom0 hypervisor01:
vblade-persist setup 0 0 eth1 /dev/sda3
vblade-persist start all
How do I create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s?
Including the
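A hedged sketch of the CLVM side on Debian 5.0, assuming the cman/clvm packages are installed and the cluster is already configured on every dom0:

# on every dom0
modprobe aoe && aoe-discover
sed -i 's/locking_type = 1/locking_type = 3/' /etc/lvm/lvm.conf   # cluster locking
/etc/init.d/clvm start
# on one dom0 only; the clustered VG then shows up on the others via clvmd
pvcreate /dev/etherd/e0.0
vgcreate -cy vg_aoe /dev/etherd/e0.0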
2010 Jun 14
49
iSCSI and LVM
Hi Everyone,
I am going to get a storage server which will be connected to my Xen hosts via iSCSI/Ethernet. I wish to use LVM for the DomU disks. The storage server will have a RAID10 array, and 2 Xen hosts will connect to it (each will have a 50% share of the RAID10 array, space-wise).
What is the best way to go about this? Should I:
a) Split the RAID10 array into 2 partitions on the
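One hedged way to keep the two hosts from stepping on each other is to export two separate LUNs and give each host its own non-clustered VG; with tgtd on the storage server that looks roughly like this (IQNs and backing devices invented):

tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-06.local.san:xen1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0p1
tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2010-06.local.san:xen2
tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/md0p2
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL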