Displaying 20 results from an estimated 400 matches similar to: "o2hb_do_disk_heartbeat:982:ERROR"
2008 Sep 18
0
Ocfs2-users Digest, Vol 57, Issue 14
I think I might have misunderstood where it is failing. Has this file
been added to the DB on the web site, or does it fail when you try to
configure this?
Carle Simmonds
Infrastructure Consultant
Technology Services
Experian UK Ltd
__________________________________________________
Tel: +44 (0)115 941 0888 (main switchboard)
Mobile: +44 (0)7813 854834
E-Mail: carle.simmonds at uk.experian.com
2006 May 26
1
Another node is heartbeating in our slot!
All,
We are having some problems getting OCFS2 to run; we are using kernel
2.6.15 with OCFS2 1.2.1. Compiling the OCFS2 sources went fine and all
modules load perfectly.
However, we can only mount the OCFS2 volume on one machine at a time;
when we try to mount the volume on the two other machines we get an error
stating that another node is heartbeating in our slot. When we mount the
volume
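For reference, a minimal /etc/ocfs2/cluster.conf for a three-node setup like this (the hostnames, addresses and cluster name below are made-up placeholders) looks roughly as follows; one common cause of "another node is heartbeating in our slot" is that the file is not identical on every machine, so two machines end up heartbeating with the same node number:

node:
        ip_port = 7777
        ip_address = 192.168.0.1
        number = 0
        name = node1
        cluster = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.0.2
        number = 1
        name = node2
        cluster = mycluster

cluster:
        node_count = 3
        name = mycluster

(plus a third node: stanza with number = 2). Each name should match that machine's hostname, and the file must be copied unchanged to every node before o2cb is started.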
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed...
URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html
2007 Mar 16
2
re: o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1" another node is heartbeating in our slot!
Folks,
I'm trying to wrap my head around something that happened in our environment.
Basically, we noticed the error in /var/log/messages with no other errors.
"Mar 16 13:38:02 dbo3 kernel: (3712,3):o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1": another node is
heartbeating in our slot!"
Usually there are a
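When chasing an error like this, a quick way to see what the device itself reports (assuming ocfs2-tools is installed) is:

# mounted.ocfs2 -d /dev/sdb1   # quick detect: device, UUID and label
# mounted.ocfs2 -f /dev/sdb1   # full detect: which nodes currently have it mounted

Comparing that node list against /etc/ocfs2/cluster.conf on each machine can show whether an unexpected machine is writing to the same heartbeat slot.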
2010 Oct 20
1
OCFS2 + iscsi: another node is heartbeating in our slot (over scst)
Hi,
I'm building a cluster containing two nodes with a separate common storage
server.
On the storage server I have a volume with an ocfs2 filesystem, which is
shared via an iSCSI target.
When a node is connected to the target I can mount the volume locally on
the node and use it.
Unfortunately, on the storage server ocfs2 logged this to dmesg:
Oct 19 22:21:02 storage kernel: [ 1510.424144]
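One simple sanity check for a layout like this is to confirm that the storage server and the initiator node really share one cluster.conf, and therefore have different node numbers; for example (node1 is a placeholder hostname):

# md5sum /etc/ocfs2/cluster.conf              # on the storage server
# ssh node1 md5sum /etc/ocfs2/cluster.conf    # on the initiator node

If the checksums differ, both machines may believe they are node 0 and will fight over the same heartbeat slot on the shared volume.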
2008 Mar 05
0
ocfs2 and another node is heartbeating in our slot
Hello,
I have one cluster with drbd8 + ocfs2.
If I mount the ocfs2 partition on node1 it works, but when I mount the
partition on node2 I see this in /var/log/messages:
-Mar 5 18:10:04 suse4 kernel: (2857,0):o2hb_do_disk_heartbeat:665 ERROR:
Device "drbd1": another node is heartbeating in our slot!
-Mar 5 18:10:04 suse4 kernel: WARNING: at include/asm/dma-mapping.h:44
dma_map_sg()
-Mar 5
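With ocfs2 on top of drbd8, two quick checks (generic advice, not a diagnosis of this particular report) are that the resource is Connected and Primary/Primary on both nodes, and that the two machines do not share a node number in cluster.conf:

# cat /proc/drbd                         # look for cs:Connected ro:Primary/Primary
# grep number /etc/ocfs2/cluster.conf    # the numbers must differ between suse4 and the other node

If both machines heartbeat as the same node number, each will see the other's heartbeat in what it believes is its own slot.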
2008 Oct 22
2
Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings,
Last night I manually unpresented and deleted a LUN (a SAN snapshot)
that was presented to one node in a four-node RAC environment running
OCFS2 v1.4.1-1. The system then rebooted with the following error:
Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR:
Heartbeat write timeout to device dm-24 after 120000 milliseconds
Oct 21 16:45:34 ausracdb03 kernel:
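If the LUN removal only stalled I/O temporarily, one knob that is sometimes raised on RHEL-style installs (the default value and file location vary by version, so treat this as a sketch) is the o2cb heartbeat dead threshold:

# grep HEARTBEAT /etc/sysconfig/o2cb
O2CB_HEARTBEAT_THRESHOLD=31    # roughly (threshold - 1) * 2 seconds before a node is declared dead

Raising it lets the heartbeat survive longer storage stalls at the cost of slower failure detection; the value has to match on all nodes and needs an o2cb restart to take effect.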
2007 Sep 06
1
60% full and writes fail..
I have a setup with lots of small files (Maildir), in 4 different
volumes, and for some
reason the volumes are full when they reach 60% usage (as reported by
df).
This was of course a bit of a surprise for me .. lots of failed
writes, bounced messages
and very angry customers.
Has anybody on this list seen this before (not the angry customers ;-) ?
Regards,
=paulv
# echo "ls
2007 Aug 22
1
mount.ocfs2: Value too large ...
Hello,
I have two servers, both connected to an external array, each by its own SAS connection. I need these servers to work simultaneously with the data on the array, and I think that ocfs2 is suitable for this purpose.
One server is a P4 Xeon (Gentoo Linux, i386, 2.6.22-r2) and the second is an Opteron (Gentoo Linux, x86_64, 2.6.22-r2). The servers are connected by Ethernet; the adapters are both Intel
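For what it's worth, a minimal sketch of what such a two-server volume usually ends up looking like once the cluster stack is configured on both machines (device name, label and mount point below are assumptions):

# mkfs.ocfs2 -N 2 -L sasdata /dev/sdb1   # run once, on one server; 2 node slots
# mount -t ocfs2 /dev/sdb1 /mnt/shared   # run on both servers

The mixed i386/x86_64 pair should use matching ocfs2-tools versions and identical /etc/ocfs2/cluster.conf files.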
2007 Jul 07
2
Adding new nodes to OCFS2?
I looked around and found an older post which no longer seems applicable. I
have a cluster of 2 nodes right now, which has 3 OCFS2 file systems. All
the file systems were formatted with 4 node slots. I added the two new
nodes (by hand, with ocfs2console and o2cb_ctl), so my
/etc/ocfs2/cluster.conf looks right:
node:
ip_port = 7777
ip_address = 192.168.201.1
number = 0
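For reference, the o2cb_ctl form generally quoted for adding a node to a cluster that is already online looks like this (the node name, number, address and cluster name here are placeholders, and as far as I know it has to be run on every node):

# o2cb_ctl -C -i -n node3 -t node -a number=2 \
      -a ip_address=192.168.201.3 -a ip_port=7777 -a cluster=ocfs2

That updates the live cluster state as well as /etc/ocfs2/cluster.conf, after which the extra node slots formatted into the filesystems can actually be used.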
2008 Oct 03
2
OCFS2 with Loop device
Hi there,
I'm trying to set up OCFS2 with the loop device /dev/loop0.
I have 4 servers running SLES10 SP2.
Internal IPs: 192.168.55.1, .2, .3 and .6
my cluster.conf:
--------------------------------------------
node:
ip_port = 7777
ip_address = 192.168.55.1
number = 0
name = www
cluster = cawww
node:
ip_port = 7777
ip_address = 192.168.55.2
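A side note on the loop-device part (generic losetup usage, not specific to this cluster): /dev/loop0 only maps a local file, so unless the backing file itself sits on storage all four servers can see, each server gets its own private block device and OCFS2 cannot coordinate between them.

# losetup /dev/loop0 /srv/ocfs2.img   # /srv/ocfs2.img is a placeholder path
# losetup -a                          # verify what the loop device is bound to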
2008 Aug 21
5
VM node won't talk to host
I am trying to mount the same partition from a KVM Ubuntu 8.04.1 virtual
machine and on an Ubuntu 8.04.1 host server.
I am able to mount the partition just fine on two Ubuntu host servers; they
both talk to each other. The logs on both servers show the other machine
mounting and unmounting the drive.
However, when I mount the drive in the KVM VM I get no communication to the
host
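When a VM can mount the volume but never shows up in the hosts' logs, one quick check (the hostname below is a placeholder) is whether the o2net port from cluster.conf, 7777 by default, is reachable in both directions between the VM and the hosts, since KVM networking or iptables can silently drop it:

# nc -z host1 7777 && echo "o2net port reachable"
# iptables -L -n | grep 7777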
2007 Nov 29
1
Troubles with two nodes
Hi all,
I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over fibre
channel to shared storage (HP MSA1500 + HP ProLiant MSA20).
The cluster has two nodes (web-ha1 and web-ha2); sometimes (1 or 2 times
a month) OCFS2 stops working on both systems. On the first node I'm
getting no errors in the log files, and after a forced shutdown of the first
node, on the second I can see
2008 Apr 23
2
Transport endpoint is not connected
Hi,
I want to share a partition of an external disk between 3 machines. It
is located on a Sun StoreTek 3510 and bound (via multipathing) to all 3
machines.
One machine (say mach1) is the node I used to create the filesystem and
set up /etc/ocfs2/cluster.conf. I copied the file to the other 2
machines. All use the same /etc/multipath.conf[1], too. All start
/etc/init.d/ocfs2 at boot, all are
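For "Transport endpoint is not connected" the usual first step (standard init-script usage on installs like this) is to confirm that the o2cb stack is actually online, with the same cluster name, on all three machines:

# /etc/init.d/o2cb status
# /etc/init.d/ocfs2 status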
2009 Oct 20
1
Fencing OCFS
Hi people, I installed OCFS2 in my virtual machines, with CentOS 5.3 and Xen.
But when I turn off machine1, OCFS2 starts fencing
off machine2. I read the docs on oracle.com, but could not solve the
problem. Can someone help?
My conf and my package versions:
cluster.conf
node:
ip_port = 7777
ip_address = 192.168.1.35
number = 0
name = x1
cluster
2010 Sep 03
1
Servers reboot - may be OCFS2 related
Hello.
What we have:
2x Debian 5.0 x64 - 2.6.32-20~bpo50+1 from backports
DRBD + OCFS2 1.4.1-1
Both nodes reboot every day during my tests. Under heavy load it takes
1-3 hours until a reboot; when idle, about 20 hours.
I'm not sure whether it is OCFS2 related, but if I don't mount the OCFS2
partition I don't get these reboots.
Attached is a screenshot of the system console showing the error.
Nothing special in the logs.
2011 Jun 27
1
multiple cluster doesn't work
We're trying to set up 3 PRDM partitions (VMware) across 2 nodes. As long as
only one is configured in cluster.conf, there's no problem. As soon as we
try to use 2 or more, we get issues.
It looks the same as bug 636:
http://oss.oracle.com/bugzilla/show_bug.cgi?id=636
I posted my cluster.conf and command line results there. I'm including them
here in the hopes that someone on this
2008 Sep 24
1
help needed with ocfs2 on centos5.2
Hi,
I was interested in a simple configuration with 2 machines sharing a drbd
drive in primary-primary mode.
I am done with drbd, but I am a newbie to ocfs2, so I wanted some help.
I have 2 machines with CentOS 5.2, kernel 2.6.18-8.el5. I downloaded the
following packages:
ocfs2-2.6.18-92.el5-1.4.1-1.el5.x86_64.rpm
ocfs2-tools-1.4.1-1.el5.x86_64.rpm
ocfs2console-1.4.1-1.el5.x86_64.rpm
(download
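Assuming drbd is already Primary/Primary on both machines, the remaining steps are roughly as follows (device, label and mount point are placeholders, and /etc/ocfs2/cluster.conf must already be identical on both nodes):

# service o2cb configure                   # answer the prompts, enable the stack on both nodes
# mkfs.ocfs2 -N 2 -L drbddata /dev/drbd0   # once, on one node only
# mount -t ocfs2 /dev/drbd0 /data          # on both nodes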
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd
Hi
On a drbd Primary node, when attempting to mount our cluster partition:
sudo mount -t ocfs2 /dev/drbd1 /cluster
we get:
mount.ocfs2: Unable to access cluster service while trying to join the
group
We then call:
sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [
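On Debian/Ubuntu that dpkg-reconfigure step mostly rewrites /etc/default/o2cb, so to keep the fix across reboots it is worth checking that the stack is enabled at boot (variable names as found in that file on such installs; treat this as a sketch):

# grep -E 'O2CB_ENABLED|O2CB_BOOTCLUSTER' /etc/default/o2cb
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2

Otherwise o2cb comes up disabled after a reboot and mount.ocfs2 fails again with "Unable to access cluster service".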
2011 Dec 06
2
OCFS2 showing "No space left on device" on a device with free space
Hi,
I am getting the error "No space left on device" on a device with free
space, which is an ocfs2 filesystem.
Additional information is below:
[root at sai93 staging]# debugfs.ocfs2 -n -R "stats" /dev/sdb1 | grep -i
"Cluster Size"
Block Size Bits: 12 Cluster Size Bits: 15
[root at sai93 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release
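Two quick data points usually worth collecting for "No space left on device" with apparently free space (the mount point below is a placeholder): the regular and inode df output, plus the full stats dump rather than just the cluster-size line:

# df -h /staging ; df -i /staging
# debugfs.ocfs2 -n -R "stats" /dev/sdb1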