Displaying 20 results from an estimated 900 matches similar to: "CentOS 6.5 RHCS fence loops"

2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care if the node starts with clean_start="0" or
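For reference, a minimal sketch of the kind of rgmanager service stanza the poster describes; the service name and script path are placeholders, not the poster's config:

    <rm>
      <!-- hypothetical stanza: starts CTDB via its init script and
           relocates it on failure; rgmanager runs a <service> on one
           node at a time, which matches the shutdown behaviour seen -->
      <service name="ctdb-svc" autostart="1" recovery="relocate">
        <script name="ctdb" file="/etc/init.d/ctdb"/>
      </service>
    </rm>

In an active-active CTDB setup, CTDB is normally started on every node by its own init script rather than as a relocatable cluster service, which would explain why starting it on the second node stopped it on the first.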
2010 Mar 24
3
mounting gfs partition hangs
Hi, I have configured two machines for testing GFS filesystems. They are attached to an iSCSI device, and the CentOS version is: CentOS release 5.4 (Final), Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux. The problem is that if I try to mount a GFS partition, it hangs.
[root at node2 ~]# cman_tool status
Version: 6.2.0
Config Version: 29
Cluster Name:
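When a GFS mount hangs like this on CentOS 5, a common first step is to check whether the fence/join groups are stuck. A hedged checklist using the stock cman/gfs tooling (run on either node):

    cman_tool nodes      # both nodes should show status M (member)
    group_tool ls        # fence/dlm/gfs groups stuck in "join" or "FAIL"?
    dmesg | tail -20     # DLM and GFS messages often name the stuck lockspace

A mount that blocks forever frequently means the fence domain never completed a pending fence operation, so GFS refuses to proceed.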
2014 Jun 12
1
Xen PV domU reported as Xen-HVM
Hello, I am running two dom0s, one on CentOS 5 with Xen 4.1.2 (from Gitco) and the other on CentOS 6 with Xen 4.2.4 (from Xen4CentOS). I host one LVM-based domU on each, built from the same template (CentOS 6 PV) with the same Xen config (see below). However, the domU on Xen 4.1 reports itself as Xen PV, while the domU on Xen4CentOS reports itself as Xen HVM. == First domU == virt-what and cPanel
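For comparison, a minimal PV guest config sketch (names and paths are hypothetical, not the poster's file). What makes a guest PV is booting through a PV bootloader or an explicit kernel line, with no builder = 'hvm':

    name       = "centos6-pv"
    memory     = 1024
    vcpus      = 1
    bootloader = "/usr/bin/pygrub"                  # PV boot path
    disk       = ["phy:/dev/vg0/centos6-pv,xvda,w"] # LVM-backed disk
    vif        = ["bridge=xenbr0"]

Tools like virt-what infer PV vs HVM from what the hypervisor exposes to the guest, so presumably the same config file can be reported differently if the two toolstacks build the domain differently.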
2011 Feb 27
1
Recover botched drbd gfs2 setup.
Hi. The short story... Rush job, never done clustered file systems before, and the VLAN didn't support multicast. Thus I ended up with DRBD working OK between the two servers but cman/GFS2 not working, so what was meant to be a DRBD primary/primary cluster ran as primary/secondary, with GFS mounted on only one server, until the VLAN could be fixed. I got the single server
2007 Aug 14
3
NFS / DNS problem
Hi all, today we had a strange problem that took down our website; we understand what happened but not why, so I am hoping someone has seen this before. We have our web servers (web1, web2, web3 ... web10) mounting an NFS share (/export/data) from server nfs1. On the web server side we use autofs in the format nfs-dedicated:/export/data, where nfs-dedicated is an alias in our
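A hedged sketch of the autofs pieces described above; the map name and mount point are examples, not the poster's files:

    # /etc/auto.master: point autofs at a map file for /data
    /data    /etc/auto.data

    # /etc/auto.data: key, options, then server:/path; "nfs-dedicated"
    # is the DNS alias for nfs1, as in the setup above
    web    -rw,hard,intr    nfs-dedicated:/export/data

The DNS alias matters here: if the alias changes or fails to resolve, autofs cannot complete the mount, and every web server loses the share at its next mount attempt.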
2011 Nov 25
0
Failed to start a "virtual machine" service on RHCS in CentOS 6
Hi All: I have two physical machines as KVM hosts (clusterA.RHCS and clusterB.RHCS) and an iSCSI target set up with GFS. All I want is an HA cluster that can migrate all the virtual machines from one node to another when the first node fails into some error status. So I created a cluster "cluster" using RHCS, added the two hosts into the cluster, and created a fence device. For every virtual
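A hedged sketch of how a VM is usually declared to rgmanager in cluster.conf (all names are placeholders, not the poster's config):

    <rm>
      <failoverdomains>
        <failoverdomain name="kvm-fod" ordered="0" restricted="0">
          <failoverdomainnode name="clusterA.RHCS"/>
          <failoverdomainnode name="clusterB.RHCS"/>
        </failoverdomain>
      </failoverdomains>
      <!-- the vm resource points at the libvirt XML directory and
           allows live migration between the two hosts -->
      <vm name="guest1" domain="kvm-fod" migrate="live"
          recovery="relocate" path="/etc/libvirt/qemu/"/>
    </rm>

A "virtual machine" service that refuses to start is often a fencing issue rather than a VM issue: rgmanager will not start a VM anywhere until it can successfully fence the node that last owned it.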
2011 Dec 11
1
Samba PDC cluster with RHCS
Dear Sir, I have implemented a Samba PDC. It's working fine. But to make it highly available, I have been trying to run it in a 2-node cluster. Everything is running fine, but I am facing a problem which I want to share. When I shift the PDC to the other cluster node, everything shifts fine, but my existing users cannot log in. They can log in again only if I rejoin the machine to the domain. I am explaining
2012 Mar 07
1
[HELP!] GFS2 in Xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2011 Apr 25
0
Need Help with Fence_xvm
Dear Xen Users, I am new to this technology and I've been reading all over the place about how to set up a cluster infrastructure using Xen dom0 and domU domains. So far I have quite a few parts running (the cluster), but I am trying to add a (virtual) fencing method. I have read that I can use virtual fencing with the fence_xvmd/fence_xvm daemon and agent; that will allow me to
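A hedged smoke test for the fence_xvmd/fence_xvm pair (the guest name is an example): fence_xvmd runs on the dom0, the fence_xvm agent runs inside a domU, and both must share the same key file.

    # on the dom0: run the listener in the foreground with debugging
    fence_xvmd -fdd

    # in a domU, with /etc/cluster/fence_xvm.key copied from the dom0:
    fence_xvm -H testvm -o reboot    # should reboot the guest "testvm"

If the agent times out, multicast connectivity between the dom0 and the domU bridge is the usual suspect.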
2018 Jun 13
2
iproute2 problems
Yes, I am sure, but I added another broader rule: nsasia at db1:~$ sudo ufw allow from any port 655 proto udp (same result for the debug example). regards Robert >>> Rafael Wolf <rfwolf at gmail.com> 13-Jun-18 5:32 PM >>> Telnet will only do TCP, not UDP, which is what tinc works over. Are you sure UDP 655 is open? On Wed, Jun 13, 2018, 3:51 AM Robert Horgan <robert
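Since telnet cannot probe UDP, a hedged way to test port 655 with netcat (the hostname is an example, not from the thread):

    # on the server: temporarily stop tinc, then listen on UDP 655
    # (traditional netcat wants: nc -u -l -p 655)
    nc -u -l 655

    # on the client: type a line; it should appear on the server
    nc -u db1.example.com 655

    # alternatively, confirm tinc itself is bound to UDP 655
    sudo ss -ulpn | grep 655

Note that the nc listener cannot bind while tinc still holds the port, hence the ss check as an alternative.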
2015 Apr 24
0
Cluster gets stopped
Hi, I am using a two-node cluster to achieve high availability. I am basically testing a scenario wherein, if I shut down one node (node-1), the other node (node-2) should start functioning like node-1. Currently what I am observing is that the entire cluster goes into the "Stopped" state. Here is my cluster.conf file ************************ <?xml version="1.0"?>
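A hedged skeleton of a two-node cluster.conf (node names and everything else are placeholders, not the poster's file). The two_node="1" expected_votes="1" pair is what lets the surviving node keep quorum when its peer is shut down, instead of the whole cluster dropping to "Stopped":

    <?xml version="1.0"?>
    <cluster name="ha" config_version="1">
      <!-- without this line, a 2-node cluster loses quorum the
           moment one node goes down -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node-1" nodeid="1"/>
        <clusternode name="node-2" nodeid="2"/>
      </clusternodes>
    </cluster>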
2010 Sep 15
0
problem with gfs_controld
Hi, We have two nodes with CentOS 5.5 x64 and cluster+GFS offering Samba and NFS services. Recently one node displayed the following messages in its log files:
Sep 13 08:19:07 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2 handle 2846d7ad00000000 MSG_PLOCK
Sep 13 08:19:07 NODE1 gfs_controld[3101]: send plock message error -1
Sep 13 08:19:11 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2
2009 Jun 29
0
Conga Ricci Luci "Add a Virtual Service" tab missing or disabled
Hi guys ... I have a problem ... here is my setup: luci, xen0, xen1
[root at xen1 ~]# clustat
Cluster Status for XEN_Cluster @ Mon Jun 29 03:31:23 2009
Member Status: Quorate
Member Name                             ID   Status
------ ----                             ---- ------
xen0.genx.local
2009 Jun 29
0
"Conga" Luci "Add a virtual Service" is missing
Hi there. I have spent 2 days trying to resolve this issue ... I have 2 Xen servers in a cluster ... and I installed Luci on another server, separate from the 2 Xen servers. I created a cluster ... I created a failover domain ... I did migration and live migration from one dom0 to another ... My problem is ... I don't have an "Add a virtual Service" tab to add a VM as a cluster service ...
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
Hi! I am having the same issue, but I am running Ubuntu 16.04. It does not mount during boot, but works if I mount it manually. I am running the Gluster server on the same machines (3 machines). Here is the /etc/fstab file:
/dev/sdb1 /data/gluster ext4 defaults 0 0
web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
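A hedged variant of the Gluster line above that is often suggested for boot-order races on systemd machines: let systemd mount the volume on first access instead of at boot (x-systemd.automount is a standard systemd fstab option; everything else is unchanged from the line above):

    web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0

This sidesteps the common failure mode where the glusterfs mount runs before glusterd and the network are fully up.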
2013 May 29
1
Strange behaviour with hard and soft links
How is this possible? > [root at lucatest ~]# ls -lid /var/log /var/log/ispconfig /var/log/ispconfig/httpd /var/log/ispconfig/httpd/prova.it /var/log/ispconfig/httpd/prova.it/test /var/www /var/www/clients /var/www/clients/client1 /var/www/clients/client1/web3 /var/www/clients/client1/web3/log /var/www/clients/client1/web3/log/test > 706 drwxr-xr-x. 15 root root 4096 29 mag 08:44
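What ls -lid is illustrating: several of those paths share inode 706, which is what hard links (or bind mounts of the same directory) look like. A hedged demo with throwaway paths, not the poster's:

    $ touch /tmp/a && ln /tmp/a /tmp/b   # hard link: same inode
    $ ln -s /tmp/a /tmp/c                # soft link: its own inode
    $ ls -li /tmp/a /tmp/b /tmp/c
    # /tmp/a and /tmp/b print the same inode number and link count 2;
    # the symlink /tmp/c gets a different inode

Directories normally cannot be hard-linked, so identical directory inodes seen via two paths usually point to a bind mount.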
2017 Sep 01
2
ganesha error ?
Hi, I got these errors 3 times since I started testing gluster with nfs-ganesha. The clients are PHP apps, and when this happens the clients get strange PHP session errors. Below, the first error happened only once, but the other errors happen every time a client tries to create a new session file. To make the PHP apps work again, I had to restart the client. Do you have an idea of what's happening here?
2017 Sep 02
0
ganesha error ?
On 09/02/2017 02:09 AM, Renaud Fortier wrote: > Hi, > > I got these errors 3 times since I'm testing gluster with nfs-ganesha. > The clients are PHP apps, and when this happens the clients get strange PHP > session errors. Below, the first error happened only once, but the other errors > happen every time a client tries to create a new session file. To make > the PHP apps work again, I had
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva <thunderlight1 at gmail.com> wrote: > Hi! > I am having the same issue, but I am running Ubuntu 16.04. > It does not mount during boot, but works if I mount it manually. I am > running the Gluster server on the same machines (3 machines). > Here is the /etc/fstab file: > > /dev/sdb1 /data/gluster ext4 defaults 0 0 > >
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on with our CLVM cluster. Background: a 4-node "cluster"; the machines are Dell blades with Dell M6220/M6348 switches. The sole purpose of the Cluster Suite tools is to use CLVM against an iSCSI storage array. The machines are running CentOS 5.8 with the Xen kernels. These blades host various VMs for a project. The iSCSI
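Hedged first checks for a wedged CLVM stack on CentOS 5 (stock RHCS tooling; none of this is specific to the poster's blades):

    grep locking_type /etc/lvm/lvm.conf   # must be 3 for clustered locking
    service clvmd status                  # is clvmd actually running?
    cman_tool status                      # quorate? expected votes sane?
    group_tool ls                         # openais/cpg groups stuck in join?

LVM commands that hang cluster-wide usually mean clvmd on some node cannot obtain its cluster lock, which circles back to openais membership rather than to LVM itself.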