Displaying 20 results from an estimated 300 matches similar to: "Conga Luci "Add a Virtual Service" is missing"
2009 Jun 29
0
Conga Ricci Luci "Add a Virtual Service" tab missing or disabled
Hi guys ... I have a problem ...
Here is my setup:
luci
xen0 xen1
[root at xen1 ~]# clustat
Cluster Status for XEN_Cluster @ Mon Jun 29 03:31:23 2009
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
xen0.genx.local
2015 Apr 24
0
Cluster gets stopped
Hi
I am using a two node cluster to achieve high availability.
I am basically testing a scenario wherein, if I shut down one node
(node-1), the other node (node-2) should take over and function like
node-1. Currently what I am observing is that the entire cluster goes
into the "Stopped" state.
Here is my cluster.conf file
************************
<?xml version="1.0"?>
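For a two-node failover setup like the one described, cluster.conf usually needs two-node mode and an ordered failover domain. The sketch below is illustrative only; the node names, domain, and service are hypothetical and not taken from the original posting (which is truncated), and a real configuration also needs working fence devices:

```xml
<?xml version="1.0"?>
<cluster name="ha_cluster" config_version="1">
  <!-- two_node="1" lets a single surviving node stay quorate -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node-1" nodeid="1"/>
    <clusternode name="node-2" nodeid="2"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <!-- ordered="1": prefer node-1; the service fails over to node-2 -->
      <failoverdomain name="ha_domain" ordered="1" restricted="1">
        <failoverdomainnode name="node-1" priority="1"/>
        <failoverdomainnode name="node-2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <service name="ha_service" domain="ha_domain" autostart="1"/>
  </rm>
</cluster>
```

Note that without working fencing, rgmanager can refuse to relocate services after a node loss, which is one common way an entire cluster ends up showing "Stopped".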
2011 Nov 25
0
Failed to start a "virtual machine" service on RHCS in CentOS 6
Hi All,
I have two physical machines as KVM hosts (clusterA.RHCS and clusterB.RHCS) and an iSCSI target set up with GFS.
All I want is an HA cluster which can migrate all the virtual machines on a node to another node when the first node fails into some error state.
So I created a cluster "cluster" using RHCS, added the two hosts into the cluster, and created a fence device.
for every virtual
2010 Sep 15
0
problem with gfs_controld
Hi,
We have two nodes with centos 5.5 x64 and cluster+gfs offering samba and
NFS services.
Recently one node displayed the following messages in log files:
Sep 13 08:19:07 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2
handle 2846d7ad00000000 MSG_PLOCK
Sep 13 08:19:07 NODE1 gfs_controld[3101]: send plock message error -1
Sep 13 08:19:11 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2
2012 Mar 22
0
Problems configuring cluster with luci and virtual machine resources
I have CentOS 6.1 and I'm configuring a cluster with Conga (luci and ricci);
these services are installed and running without problems. I need this type
of configuration for high availability of virtual machines running on
KVM. My problem comes when configuring the "service group": when I click
to add, a new window appears which says "add service to cluster" and I fill
2008 Nov 07
0
Cluster Broken pipe Error & node Reboot
Hi all,
I am running a two-node RHEL3U8 cluster of the below cluster version on
HP servers, connected via a SCSI channel to HP Storage (SAN) for an Oracle
database server.
Kernel & Cluster Version
Kernel-2.4.21-47.EL #1 SMP
redhat-config-cluster-1.0.7-1-noarch
clumanager-1.2.26.1-1-x86_64
Suddenly my active node got rebooted; after analysing the logs, I found it is
throwing the below errors in syslog. I want
2008 Nov 23
1
Cluster fail over database getting stopped
Hi,
I am running a RHEL3u8 two-node cluster, which is running an Oracle 9i
database. I am facing a problem where rebooting the second node causes my
Oracle database to get stopped on the active node 1, which is running my
database. So I checked the probabilities below to find out when the
database gets stopped.
Version
clumanager-1.2.31-1.x86_64.rpm
I stopped both the node.
started first node
when the
2011 Apr 25
0
Need Help with Fence_xvm
Dear Xen Users,
I am new to this technology and I've been reading all over the place about how to set up a cluster infrastructure using Xen Dom0 and Xen DomU domains.
So far I have quite a few parts running (cluster) but I am trying to add a (virtual) fencing method. I have read that I can use virtual fencing via the fence_xvmd/fence_xvm daemon and agents; that will allow me to
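The usual first step for fence_xvmd/fence_xvm is a shared random key that dom0 and every domU cluster node can read. A rough sketch of the conventional setup (paths and the test command are the commonly documented defaults, not taken from this thread; verify against your distribution):

```shell
# Generate a 4 KiB random key (conventionally installed as
# /etc/cluster/fence_xvm.key); writing to a local file here so the
# sketch is self-contained.
dd if=/dev/urandom of=fence_xvm.key bs=4096 count=1

# The same key file must then be copied to /etc/cluster/ on dom0 and on
# every domU cluster node, e.g.:
#   scp fence_xvm.key root@guest:/etc/cluster/fence_xvm.key
# After that, fence_xvmd runs on dom0, and a guest can exercise the path
# with something like: fence_xvm -H <domU-name> -o null
ls -l fence_xvm.key
```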
2010 Mar 24
3
mounting gfs partition hangs
Hi,
I have configured two machines for testing GFS filesystems. They are
attached to an iSCSI device, and the CentOS versions are:
CentOS release 5.4 (Final)
Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009
i686 i686 i386 GNU/Linux
The problem is that if I try to mount a GFS partition, it hangs.
[root at node2 ~]# cman_tool status
Version: 6.2.0
Config Version: 29
Cluster Name:
2009 Feb 13
5
GFS + Restarting iptables
Dear List,
I have one last little problem with setting up a cluster. My GFS
mount will hang as soon as I do an iptables restart on one of the
nodes.
First, let me describe my setup:
- 4 nodes, all running an updated Centos 5.2 installation
- 1 Dell MD3000i ISCSI SAN
- All nodes are connected by Dell's supplied RDAC driver
Everything is running stable when the cluster is started (tested
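A common cause of GFS hangs after an iptables restart is that the restart drops the cluster's own traffic mid-flight. As a hedged sketch, the RHEL 5-era cluster suite is generally documented as using roughly the ports below; a fragment in the style of /etc/sysconfig/iptables (chain name and exact ports should be verified against your installed versions):

```
# openais/cman cluster membership traffic
-A RH-Firewall-1-INPUT -p udp --dport 5404:5405 -j ACCEPT
# dlm (distributed lock manager), used by GFS plock traffic
-A RH-Firewall-1-INPUT -p tcp --dport 21064 -j ACCEPT
# ricci (Conga agent), if Conga is in use
-A RH-Firewall-1-INPUT -p tcp --dport 11111 -j ACCEPT
```

With these rules persisted, an `iptables restart` reloads them atomically instead of leaving the cluster ports closed while GFS is holding locks.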
2009 Jun 21
1
Xen LVM DRBD live migration
Hi guys, I have a few problems with live migration ... and I need some
professional help :)
I have 2 Xen servers running CentOS 5.3 and I want to have a highly
available cluster.
Now let's begin ....
xen0:
[root@xen0 ~]# fdisk -l
Disk /dev/sda: 218.2 GB, 218238025728 bytes
255 heads, 63 sectors/track, 26532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start
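For Xen live migration on top of DRBD, both nodes must briefly hold the resource Primary at the same time during the handover. In DRBD 8.3 this is enabled per resource; a sketch along these lines (the resource name is hypothetical):

```
resource xenvm0 {
  net {
    # Both nodes may be Primary simultaneously -- required for
    # Xen live migration over DRBD.
    allow-two-primaries;
  }
  # Dual-primary setups are also usually paired with explicit
  # split-brain recovery policies (after-sb-* options); see the
  # DRBD 8.3 documentation before enabling this in production.
}
```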
2008 Jan 29
1
for loop help
Hi,
I have written the following code which works fine
step<-5
numSim<-15
N<-double(numSim)
A<-double(numSim)
F<-double(numSim)
M<-double(numSim)
genx<-double(numSim)
for (i in 1:numSim) {
  N[i] <- 20
  PN <- runif(N[i], 0, 1)
  A[i] <- sum(ifelse(PN > 0.2, 1, 0))
  PF <- runif((A[i] * 0.5), 0, 1)
  F[i] <- sum(ifelse(PF > 0.2, 1, 0))
  PM <-
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on
with our CLVM cluster.
Background:
4 node "cluster"-- machines are Dell blades with Dell M6220/M6348 switches.
Sole purpose of Cluster Suite tools is to use CLVM against an iSCSI storage
array.
Machines are running CentOS 5.8 with the Xen kernels. These blades host
various VMs for a project. The iSCSI
2007 Aug 28
1
Conga / luci question
After playing with Conga/luci I'm surprised that it doesn't seem possible to reset and/or remove the
generated cluster configuration. Does somebody know how to achieve this? I did not find any hint in the
RH manuals.
Regards
Joachim Backes <joachim.backes at rhrk.uni-kl.de>
University of Kaiserslautern,Computer Center [RHRK],
Systems and Operations, High Performance Computing,
D-67653
2007 Jul 17
0
conga / luci not working in centos 5
I am trying to create a cluster in CentOS 5 but keep running into bugs. First I
ran into this bug:
"A problem occurred when installing packages: failed to locate/execute
module"
http://bugs.centos.org/view.php?id=1931
and followed the following suggestions in the above thread:
mv /etc/redhat-release /etc/redhat-release.orig
echo "Red Hat Enterprise Linux Server release 5
2007 Jun 12
2
conga, ricci and luci updates missing?
According to this link,
https://rhn.redhat.com/errata/RHBA-2007-0331.html
conga, ricci and luci have updates available.
I can't find these updates on the mirrors I checked.
They are in:
http://mirror.centos.org/centos/5/os/x86_64/CentOS/
but not in:
http://mirror.centos.org/centos/5/updates/x86_64/RPMS/
Anyone have any ideas about this?
TIA
Dave
2011 Dec 11
1
Samba PDC cluster with RHCS
Dear Sir,
I have implemented a Samba PDC. It is working fine. But to make it highly
available, I have been trying to run it in a 2-node cluster. Everything is
running fine, but I am facing a problem which I want to share.
When I shift the PDC to another cluster node, everything shifts fine, but
my existing users can not log in. They can log in again if I rejoin the
machine to the domain. I am explaining
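The classic cause of "machines must rejoin after failover" with a clustered Samba PDC is that the two nodes present different NetBIOS names or domain SIDs, so the machine trust account no longer matches. A sketch of what should be identical on both nodes (all names here are illustrative, not from the original posting):

```
# /etc/samba/smb.conf -- kept identical on both cluster nodes
[global]
   workgroup = EXAMPLE
   ; same NetBIOS name on both nodes, so the PDC identity
   ; follows the clustered service rather than the host
   netbios name = PDC
   domain logons = yes
   security = user
```

The domain SID must also match; Samba's `net getlocalsid` shows it on the first node, and `net setlocalsid <SID>` can set the same value on the second before it ever serves the domain.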
2010 Sep 30
10
using DRBD VBDs with Xen
Hi,
Not totally new to Xen but still very green and meeting some problems.
Feel free to kick me over to the DRBD people if this is not relevant here.
I'll be providing more info upon request, but for now I'll be brief.
Debian/Squeeze running 2.6.32-5-xen-amd64 (2.6.32-21)
Xen hypervisor 4.0.1~rc6-1 and drbd-8.3.8.
One domU configured, with disk and swap image:
root =
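With DRBD's block-drbd helper script installed, a domU config can reference the DRBD resource directly via the `drbd:` VBD type, letting Xen drive the Primary/Secondary transitions during start and migration. A rough sketch (resource and device names are hypothetical):

```
# /etc/xen/domu1.cfg -- DRBD-backed disks via the drbd: VBD type
# (requires DRBD's block-drbd script on the Xen host)
disk = [ 'drbd:res-disk,xvda,w',
         'drbd:res-swap,xvdb,w' ]
root = '/dev/xvda1 ro'
```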
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody,
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but running it on the second node shuts down the process on
the first one. My CTDB configuration implies 2 active-active nodes.
Does CTDB care if the node starts with clean_start="0" or
2014 Oct 29
2
CentOS 6.5 RHCS fence loops
Hi Guys,
I'm using CentOS 6.5 as a guest on RHEV, with RHCS for a clustered web
environment.
The environment:
web1.example.com
web2.example.com
When the cluster becomes quorate, web1 is rebooted by web2. When web2 comes
back up, web2 is rebooted by web1.
Does anybody know how to solve this "fence loop"?
master_wins="1" is not working properly; qdisk also.
Below the cluster.conf, I
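Fence loops in a two-node cluster are often a startup race: each node boots, fails to see its peer before fencing kicks in, and shoots it. Two cluster.conf knobs that commonly help are sketched below (values are illustrative and should be tuned; note that `two_node="1"` is normally used instead of qdiskd, not alongside it):

```xml
<!-- wait longer for the peer to join at startup before fencing it -->
<fence_daemon post_join_delay="60"/>
<!-- two-node mode: a lone node stays quorate instead of looping on fencing;
     drop qdiskd if you enable this -->
<cman two_node="1" expected_votes="1"/>
```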