
Displaying 20 results from an estimated 1000 matches similar to: "Unable to set the o2cb heartbeat to global"

2008 Mar 31
4
SuSe Hangs when /etc/init.d/o2cb online
Hello, I have a DELL MD3000i and a couple of servers that I want to connect to the array. I have set up the main server with CentOS 5. - 2.6.18-53.el5 x86_64 - ocfs2-tools-1.2.7-1.el5 - ocfs2console-1.2.7-1.el5 - ocfs2-2.6.18-53.el5-1.2.8-2.el5 # Kernel Module And two OpenSuSE 10.3 servers - 2.6.22.5-31-default x86_64 - ocfs2-tools-1.2.6-18 - ocfs2console-1.2.6-18 The module has been already
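For reference, the usual o2cb bring-up on either distribution is roughly the sketch below; a minimal outline only, assuming the default cluster name "ocfs2", which may differ from the poster's setup:

    # interactive setup: loads the modules, mounts configfs, sets the boot cluster
    /etc/init.d/o2cb configure
    # bring the named cluster online and confirm its state
    /etc/init.d/o2cb online ocfs2
    /etc/init.d/o2cb status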
2007 Jul 07
2
Adding new nodes to OCFS2?
I looked around and found an older post which no longer seems applicable. I have a cluster of 2 nodes right now, which has 3 OCFS2 file systems. All the file systems were formatted with 4 node slots. I added the two new nodes (by hand, with ocfs2console and o2cb_ctl), so my /etc/ocfs2/cluster.conf looks right: node: ip_port = 7777 ip_address = 192.168.201.1 number = 0
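A rough sketch of adding a node by hand with o2cb_ctl, using a hypothetical third node (the name, number and IP below are illustrative, extrapolated from the excerpt); increasing the on-disk node-slot count is a separate step done with tunefs.ocfs2:

    # add the node to the live cluster and to /etc/ocfs2/cluster.conf (run on every node)
    o2cb_ctl -C -i -n node3 -t node -a number=2 -a ip_address=192.168.201.3 -a ip_port=7777 -a cluster=ocfs2
    # only needed if the volume runs out of node slots (older tools want it unmounted)
    tunefs.ocfs2 -N 6 /dev/sdX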
2011 Jun 27
1
multiple cluster doesn't work
We're trying to set up 3 PRDM partitions (VMware) across 2 nodes. As long as only one is configured in cluster.conf, there's no problem. As soon as we try to use 2 or more we get issues. It looks the same as bug 636: http://oss.oracle.com/bugzilla/show_bug.cgi?id=636 I posted my cluster.conf and command-line results there. I'm including them here in the hope that someone on this
2008 Sep 10
4
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted".
Hi, I am trying to configure a two-node cluster on SLES10SP2 using user-level heartbeat. Here is my configuration. ocfs2-tools-1.4.0-0.3 **user level heartbeat** -> lsmod | grep ocfs
ocfs2_user_heartbeat 20992 1
ocfs2_dlmfs 37776 1
ocfs2_dlm 204456 1 ocfs2_dlmfs
ocfs2_nodemanager 223384 6 ocfs2_user_heartbeat,ocfs2_dlmfs,ocfs2_dlm
configfs 44700 3 ocfs2_user_heartbeat,ocfs2_nodemanager
2008 Sep 18
2
o2hb_do_disk_heartbeat:982:ERROR
Hi everyone; I have a problem on my 10-node cluster with ocfs2 1.2.9; the OS is RHEL 4.7 AS. Nine nodes can start the o2cb service and mount the SAN disks on startup, but one node cannot. My cluster configuration is: node: ip_port = 7777 ip_address = 192.168.5.1 number = 0 name = fa01 cluster = ocfs2 node: ip_port =
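For reference, an un-flattened stanza from that cluster.conf would look like the sketch below (values taken from the excerpt, the other nine node stanzas omitted). The file is picky about layout: stanza headers start in column 0, attributes are indented (o2cb_ctl writes a single tab), and node_count must match the number of node stanzas on every machine:

    node:
            ip_port = 7777
            ip_address = 192.168.5.1
            number = 0
            name = fa01
            cluster = ocfs2

    cluster:
            node_count = 10
            name = ocfs2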
2008 Mar 05
3
cluster with 2 nodes - heartbeat problem fencing
Hi all, this is my first time on this mailing list. I have a problem with OCFS2 on Debian Etch 4.0. When a node goes down or freezes without unmounting the OCFS2 partition, I would like the heartbeat not to fence the node that is still working correctly (kernel panic). I would like to disable either the heartbeat or fencing, so that we can keep working with only one node. Thanks
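As far as I know, self-fencing on heartbeat loss cannot be switched off entirely in the o2cb stack, but the timeouts can be relaxed so that a briefly frozen node is not fenced right away. A hedged sketch of the knobs in /etc/default/o2cb (Debian) or /etc/sysconfig/o2cb; the values are illustrative, not recommendations:

    O2CB_HEARTBEAT_THRESHOLD=31    # disk heartbeat: roughly (31-1)*2 = 60 seconds before fencing
    O2CB_IDLE_TIMEOUT_MS=30000     # network idle timeout between nodes
    O2CB_KEEPALIVE_DELAY_MS=2000
    O2CB_RECONNECT_DELAY_MS=2000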
2005 Oct 12
2
Unable to access cluster service
Hello, I'm running Ubuntu Breezy with the OCFS2 modules in the standard kernel. I installed ocfs2console and ocfs2-tools and formatted a partition with ocfs2, but I can't add any node or mount the device (with ocfs2console) because I get "Unable to access cluster service". I can't find the cause of this, nor a solution. root@lenaeja:~# /etc/init.d/o2cb status
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd. Hi. On a drbd Primary node, when attempting to mount our cluster partition with: sudo mount -t ocfs2 /dev/drbd1 /cluster we get: mount.ocfs2: Unable to access cluster service while trying to join the group. We then call: sudo dpkg-reconfigure ocfs2-tools Setting cluster stack "o2cb": OK Starting O2CB cluster ocfs2: OK And all is well: Aug 22 13:48:23 uc1 kernel: [
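"Unable to access cluster service while trying to join the group" at mount time usually just means the o2cb stack was not online yet, which is also why dpkg-reconfigure fixes it. On Debian/Ubuntu the boot-time behaviour is controlled by /etc/default/o2cb; a minimal sketch with variable names as shipped by ocfs2-tools:

    O2CB_ENABLED=true          # start the cluster stack at boot
    O2CB_BOOTCLUSTER=ocfs2     # cluster to bring online; must match the name in cluster.conf

With DRBD in the picture, ordering also matters: the resource has to be Primary before the ocfs2 mount is attempted.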
2009 Oct 20
1
Fencing OCFS
Hi people, I installed OCFS in my virtual machines, with CentOS 5.3 and Xen. But when I turn off machine1, OCFS starts fencing machine2. I read the docs on oracle.com but could not solve the problem. Can someone help? My conf and my package versions: cluster.conf node: ip_port = 7777 ip_address = 192.168.1.35 number = 0 name = x1 cluster
2010 Sep 03
1
Servers reboot - may be OCFS2 related
Hello. What we have: 2x Debian 5.0 x64 - 2.6.32-20~bpo50+1 from backports, DRBD + OCFS2 1.4.1-1. Both nodes reboot every day during my tests. Under heavy load it takes 1-3 hours before a reboot; when idle, about 20 hours. I'm not sure whether it is OCFS2 related, but if I don't mount the OCFS2 partition I don't get these reboots. Attached is a screenshot of the system console showing the error. Nothing special in the logs.
2006 Sep 20
1
Error mounting ocfs2 partition...
Hi, I'm new to this list and to the ocfs2 system and to clustering in general (3 strikes?--ack!). I have some problems with my first attempt at configuring an ocfs2 system and need some perspective. My setup is a Compaq ProLiant CL380 2-node cluster with a shared storage array. Both nodes are currently running SLES 10. However, I am just trying to get one node working. I have read the
2006 Mar 23
4
initial cluster setup
Hello list! Just pulled down ocfs2 this morning and installed it on two RHEL 4 2.6.9-34 systems. I have followed the manual, but am not seeing some things it describes and am not sure what the problem is. I guess my first question is - do I need to have a partition formatted as ocfs2 on all nodes in the cluster for this to work? I have formatted a partition via ocfs2console on node1, and
2007 Sep 06
1
60% full and writes fail..
I have a setup with lots of small files (Maildir), in 4 different volumes, and for some reason the volumes are full when they reach 60% usage (as reported by df). This was of course a bit of a surprise for me .. lots of failed writes, bounced messages and very angry customers. Has anybody on this list seen this before (not the angry customers ;-) ? Regards, =paulv # echo "ls
2009 Jan 14
1
Transport endpoint is not connected while mounting....
Does anyone have any idea what to try next? Here are the steps I have taken and the problem (I wanted to post my question on the first line before I explained the problem and what I have tried): ---------- Node 0 has the file system mounted just fine and works great. When trying to mount on Node 1: `mount.ocfs2 /dev/mapper/data /cluster/data` I get this error after about 30 seconds:
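"Transport endpoint is not connected" at mount time usually points at the node-to-node TCP connection rather than the storage. A hedged checklist, with <node0-ip> as a placeholder and 7777 assumed to be the configured ip_port:

    # on node 1: is the o2cb stack online, and is cluster.conf identical to node 0's copy?
    /etc/init.d/o2cb status
    md5sum /etc/ocfs2/cluster.conf     # compare the output on both nodes
    # can node 1 reach node 0's cluster port?
    nc -z <node0-ip> 7777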
2008 Sep 24
1
help needed with ocfs2 on centos5.2
Hi, I was interested in a simple configuration with 2 machines sharing a drbd drive in primary-primary mode. I am done with drbd, but I am a newbie to ocfs, so I wanted some help. I have 2 machines with centos5.2, kernel 2.6.18-8.el5. I downloaded the following packages: ocfs2-2.6.18-92.el5-1.4.1-1.el5.x86_64.rpm ocfs2-tools-1.4.1-1.el5.x86_64.rpm ocfs2console-1.4.1-1.el5.x86_64.rpm (download
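For the dual-primary DRBD case, the OCFS2 side boils down to formatting the DRBD device once and mounting it on both primaries with the cluster stack online. A rough sketch, with the device path, label and mount point as assumptions:

    # on one node only: format with two node slots
    mkfs.ocfs2 -b 4K -C 32K -N 2 -L drbd-ocfs2 /dev/drbd0
    # on both nodes: bring the cluster online, then mount
    /etc/init.d/o2cb online ocfs2
    mount -t ocfs2 /dev/drbd0 /mnt/shared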
2008 Aug 21
5
VM node won't talk to host
I am trying to mount the same partition from a KVM ubuntu 8.04.1 virtual machine and on an ubuntu 8.04.1 host server. I am able to mount the partition just fine on two ubuntu host servers, and they both talk to each other. The logs on both servers show the other machine mounting and unmounting the drive. However, when I mount the drive in the KVM VM I get no communication to the host
2008 Oct 03
2
OCFS2 with Loop device
Hi there, I'm trying to set up OCFS2 with loop device /dev/loop0. I have 4 servers running SLES10 SP2, internal IPs: 192.168.55.1, .2, .3 and .6. My cluster.conf: -------------------------------------------- node: ip_port = 7777 ip_address = 192.168.55.1 number = 0 name = www cluster = cawww node: ip_port = 7777 ip_address = 192.168.55.2
2007 Nov 29
1
Troubles with two node
Hi all, I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over fibre channel to shared storage (HP MSA1500 + HP ProLiant MSA20). The cluster has two nodes (web-ha1 and web-ha2); sometimes (once or twice a month) OCFS2 stops working on both systems. On the first node I get no errors in the log files, and after a forced shutdown of the first node, on the second I can see
2008 Jun 27
8
Boot from OCFS2
Dear List, I'm thinking about using Xen in production in our datacenter; I'm still testing around with it. Now I have some questions, just for basic understanding. We have, for example, this environment: 2 Nodes, 1 SCSI pool server (connected via SCSI to both nodes). Now I want to build a "cluster", so I would like to make this: Node 1 -> Primary -| | --> domU
2010 Mar 18
1
OCFS2 works like standalone
I have installed OCFS2 on two SuSE 10 nodes. Everything seems to work superbly at first sight, but /dev/sda ocfs2 on rac1 is not being shared over the network (port 7777) with rac0. On both nodes I have 500MB /dev/sda disks that are mounted (and are ocfs2), but they do not share their content with each other (files and folders). So when I create a file on one node I expect to
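When each node only sees its own files, the two machines are almost always mounting two different local disks rather than one shared LUN: OCFS2 coordinates access to a single shared device, it does not replicate data between separate disks. A quick way to check, as a sketch:

    # the UUID printed must be identical on both nodes for the volume to really be shared
    mounted.ocfs2 -d /dev/sda
    # lists which cluster nodes currently have the volume mounted
    mounted.ocfs2 -f /dev/sda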