Displaying 17 results from an estimated 17 matches for "quorate".
2008 Sep 12
1
Regd: Ethernet Channel Bonding Clarification is Needed
...OOT=yes NETMASK=255.255.255.0
GATEWAY=192.168.13.1
IPADDR=192.168.13.110
4) Reboot the system for the changes to take effect.
After I rebooted both servers, each cluster node becomes simplex
and services are started on both nodes.
The cluster output on the primary node:
Member Status: Quorate
Member Name    Status
-----------    ------
primary        Online, Local, rgmanager
secondary      Offline
Service Name Owner (Last) State
------------ -------...
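The bonding configuration these threads describe normally pairs slave ifcfg files with a bond master device; a minimal RHEL-style sketch, where the addresses come from the excerpt above but the device names, bonding mode, and modprobe lines are assumptions:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch; addresses from the excerpt)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.13.110
NETMASK=255.255.255.0
GATEWAY=192.168.13.1

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

# /etc/modprobe.conf (RHEL 4/5 era; mode=1 is active-backup, an assumption here)
alias bond0 bonding
options bond0 mode=1 miimon=100
```

For clustered nodes, active-backup (mode=1) is the usual choice since it needs no switch support; if the bond itself flaps after reboot, the cluster heartbeat is lost and each node can end up running services alone, much as the poster describes.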
2007 Dec 26
0
Regd: Ethernet Channel Bonding issue in Cluster
...OT=yes NETMASK=255.255.255.0
GATEWAY=192.168.13.1
IPADDR=192.168.13.110
4) Reboot the system for the changes to take effect.
After I rebooted both servers, each cluster node becomes simplex
and services are started on both nodes.
The cluster output on the primary node:
Member Status: Quorate
Member Name    Status
-----------    ------
primary        Online, Local, rgmanager
secondary      Offline
Service Name Owner (Last) State...
2007 Dec 27
0
Channel Bonding issue in Cluster Suite Setup
...NETMASK=255.255.255.0
GATEWAY=192.168.13.1
IPADDR=192.168.13.110
4) Reboot the system for the changes to take effect.
After I rebooted both servers, each cluster node becomes simplex
and services are started on both nodes.
The cluster output on the primary node:
Member Status: Quorate
Member Name    Status
-----------    ------
primary        Online, Local, rgmanager
secondary      Offline
Service Name Owner (Last) State
----...
2007 Apr 04
1
Cluster Services
...eed to restart all
cluster related services on all remaining nodes in the cluster after the
node has been removed. This is a four node cluster so removing one node
obviously degrades the cluster to three nodes, but we can have the
remaining cluster members recalculate the number of votes to remain
quorate. It still surprises me that we have to bring down the entire
cluster just because we are deleting a node. Any feedback as to why
this might be?
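On CMAN-based clusters the expected-votes count can usually be adjusted without restarting every node; a hedged sketch, where the value 3 assumes the four-node cluster above with one vote per node:

```xml
<!-- cluster.conf fragment (sketch): pin expected votes after removing a node -->
<cman expected_votes="3"/>
```

At runtime, `cman_tool expected -e 3` asks the remaining members to recalculate quorum against the new expected-votes value, which is usually the answer to "do I really have to bring the whole cluster down?".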
2007 Jan 15
1
RHCS on CentOS4 - 2 node cluster problem
...er -
bootup hangs in this state. Skipping the cluster service starts at boot
time (selective startup via the "confirm" boot parameter) brings the box up.
ccsd starts (by service or by hand, with the -n parameter), but
syslogs that it fails to get cluster infrastructure information, so the
cluster is in an inquorate state. Does anyone experienced with RHCS know
whether I can avoid shutting down node 2 and the Apache service for
which the cluster runs? The documentation (manual and FAQ) is silent on
this. I verified that there is no network / NIC problem. How do I get the
2-node cluster back into quorate stat...
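Two-node RHCS clusters normally rely on the special two-node mode so that a single surviving node stays quorate; a minimal cluster.conf sketch, assuming no quorum disk is in use:

```xml
<!-- cluster.conf fragment (sketch): two-node mode lets a single node hold quorum -->
<cman two_node="1" expected_votes="1"/>
```

Without this, a two-node cluster needs both members up to reach quorum, which matches the inquorate-at-boot symptom the poster describes.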
2008 Sep 12
1
Ethernet Channel Bonding Clarification is Needed
...OOT=yes NETMASK=255.255.255.0
GATEWAY=192.168.13.1
IPADDR=192.168.13.110
4) Reboot the system for the changes to take effect.
After I rebooted both servers, each cluster node becomes simplex
and services are started on both nodes.
The cluster output on the primary node:
Member Status: Quorate
Member Name    Status
-----------    ------
primary        Online, Local, rgmanager
secondary      Offline
Service Name Owner (Last) State
------------ -------...
2011 Dec 20
1
OCFS2 problems when connectivity lost
Hello,
We are having a problem with a 3-node cluster based on
Pacemaker/Corosync with 2 primary DRBD+OCFS2 nodes and a quorum node.
Nodes run on Debian Squeeze, all packages are from the stable branch
except for Corosync (which is from backports for udpu functionality).
Each node has a single network card.
When the network is up, everything works without any problems, graceful
shutdown of
2017 Sep 10
4
Corosync on a home network
...but does not see the others.
For instance I see:
--------------%<----------------------
# corosync-quorumtool
Quorum information
------------------
Date: Sun Sep 10 12:56:56 2017
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 3
Ring ID: 3/28648
Quorate: No
Votequorum information
----------------------
Expected votes: 4
Highest expected: 4
Total votes: 1
Quorum: 3 Activity blocked
Flags:
Membership information
----------------------
Nodeid Votes Name
3 1 192.168.1.52 (local)
----------------%&...
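With `Expected votes: 4` but only one vote present, votequorum blocks activity exactly as the `Quorum: 3 Activity blocked` line shows. A hedged corosync.conf sketch of the relevant section, with values taken from the output above:

```
# corosync.conf fragment (sketch): votequorum section matching the output above
quorum {
    provider: corosync_votequorum
    expected_votes: 4
}
```

`corosync-quorumtool -e <N>` can lower expected votes at runtime to unblock a partition, but doing so deliberately weakens the split-brain protection quorum provides, so it is a last resort when the other nodes truly cannot rejoin.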
2009 Jun 29
0
Conga Ricci Luci "Add a Virtual Service" tab missing or disabled
Hi guys ... I have a problem ...
here is my setup
luci
xen0 xen1
[root at xen1 ~]# clustat
Cluster Status for XEN_Cluster @ Mon Jun 29 03:31:23 2009
Member Status: Quorate
Member Name        ID   Status
-----------        --   ------
xen0.genx.local    1    Online
xen1.genx.local...
2009 Jun 29
0
"Conga" Luci "Add a virtual Service" is missing
...failed over domain ....
I did migration and live migration from a dom0 to another ...
My problem is: I don't have an "Add a virtual Service" tab to add a VM as a
cluster service ...
[root at xen1 ~]# clustat
Cluster Status for XEN_Cluster @ Mon Jun 29 11:29:33 2009
Member Status: Quorate
Member Name        ID   Status
-----------        --   ------
xen0.genx.local    1    Online
xen1.genx.local...
2012 May 11
1
problems with luci on CentOS 6.2
...Luci is unable to
reboot it for example and
if I select the node properties it shows me no status for Cluster
Daemons for this specific node.
All the other nodes are fully manageable from luci.
From the command line everything seems to work fine.
net-cluster @ Sat May 12 00:53:33 2012
Member Status: Quorate
Member Name                ID   Status
-----------                --   ------
virtsrv1n1.mydomain.org 1 Online, Local, rgmanager
virtsrv2n2.mydomain.org 2 Online, rgmanager
virtsrv3n3.mydomain.org 3 Online, rgmanager
virtsrv4n4.mydom...
2009 Jun 05
2
Dovecot + DRBD/GFS mailstore
Hi guys,
I'm looking at the possibility of running a pair of servers with
Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or
other clustered FS) for the mail storage and ext3 for the root drive.
I'm currently using maildrop for delivery and Dovecot imap/pop3 with
the stores over NFS. I'm looking for better performance but still
keeping the HA element I have now with
2014 Oct 29
2
CentOS 6.5 RHCS fence loops
Hi Guys,
I'm using CentOS 6.5 as a guest on RHEV, with RHCS for a clustered web environment.
The environment:
web1.example.com
web2.example.com
When the cluster becomes quorate, web1 is rebooted by web2. When web2 comes
back up, web2 is rebooted by web1.
Does anybody know how to solve this "fence loop"?
master_wins="1" is not working properly, and neither is qdisk.
Below the cluster.conf, I
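Startup fence loops like this are commonly mitigated by lengthening fenced's post-join delay, or by staggering the fence devices so only one side fires first; a hedged cluster.conf sketch, where the values, the device name, and the fence_rhevm credentials are all illustrative assumptions:

```xml
<!-- cluster.conf fragments (sketch): reduce startup fence races -->
<fence_daemon post_join_delay="60"/>
<!-- hypothetical RHEV fencing device; a per-device delay staggers which node fences first -->
<fencedevice agent="fence_rhevm" name="rhevm" ipaddr="rhevm.example.com"
             login="admin@internal" passwd="secret" delay="15"/>
```

A longer `post_join_delay` gives a rebooting node time to rejoin the cluster before its peer decides to fence it again, which is often enough to break the loop.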
2018 Jul 07
1
two 2-node clusters or one 4-node cluster?
...verified it on my virtual 4-node setup based on CentOS 7.4,
where I have:
- modified corosync.conf on all nodes
- pcs cluster stop --all
- pcs cluster start --all
- wait a few minutes for resources to start
- shutdown cl3 and cl4
and this is the situation at the end, with no downtime and with the cluster
quorate:
[root at cl1 ~]# pcs status
Cluster name: clorarhv1
Stack: corosync
Current DC: intracl2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with
quorum
Last updated: Sat Jul 7 15:25:47 2018
Last change: Thu Jul 5 18:09:52 2018 by root via crm_resource on intracl2
4 nodes configured
15 resources co...
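For a 4-node Corosync/Pacemaker cluster that must survive losing two nodes, as tested above, votequorum offers options worth weighing; a hedged corosync.conf sketch (the window value is an assumption):

```
# corosync.conf fragment (sketch): quorum options for a 4-node cluster
quorum {
    provider: corosync_votequorum
    # allow quorum to shrink as nodes leave cleanly (assumes graceful shutdowns)
    last_man_standing: 1
    last_man_standing_window: 10000
    # break a 2-2 split in favour of the partition holding the lowest node id
    auto_tie_breaker: 1
}
```

`last_man_standing` only helps when nodes leave cleanly (as in the `pcs cluster stop` test above); a sudden 2-2 network split still needs `auto_tie_breaker` or a quorum device to keep one side running.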
2009 Feb 13
5
GFS + Restarting iptables
...1 setan 00030004 none
[1 2 3 4]
dlm 1 rgmanager 00040004 none
[1 2 3 4]
gfs 2 setan 00020004 none
[1 2 3 4]
------------------------------------
[root at badjak ~]# clustat
Cluster Status for mars @ Fri Feb 13 17:18:58 2009
Member Status: Quorate
Member Name                ID   Status
-----------                --   ------
gandaria.somedomain.tld    1    Online
goreng.somedomain.tld...
2018 Jul 05
5
two 2-node clusters or one 4-node cluster?
Hello,
I'm planning the migration of two current clusters, based on CentOS 6.x with
Cman/Rgmanager, to CentOS 7.x with Corosync/Pacemaker.
As the clusters and their services are on the same subnet, and there are no
particular security concerns differentiating them, I'm also evaluating the
option to transform the two clusters into a single 4-node one during the
upgrade.
Currently I'm
2012 Mar 07
1
[HELP!]GFS2 in the xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]