Displaying 20 results from an estimated 2000 matches similar to: "Ocfs2-users Digest, Vol 57, Issue 14"
2008 Sep 18 (2 replies): o2hb_do_disk_heartbeat:982:ERROR
Hi everyone;
I have a problem on my 10-node cluster with ocfs2 1.2.9; the OS is RHEL 4.7 AS.
Nine nodes can start the o2cb service and mount the SAN disks at startup, but one node cannot. My cluster configuration is:
node:
ip_port = 7777
ip_address = 192.168.5.1
number = 0
name = fa01
cluster = ocfs2
node:
ip_port =
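For reference, a complete /etc/ocfs2/cluster.conf is usually one node: stanza per member plus a single cluster: stanza, and the file must be identical on every node; a minimal sketch along the lines of the snippet above (values copied from it or illustrative):

    node:
            ip_port = 7777
            ip_address = 192.168.5.1
            number = 0
            name = fa01
            cluster = ocfs2

    cluster:
            node_count = 10
            name = ocfs2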
2006 May 26 (1 reply): Another node is heartbeating in our slot!
All,
We are having some problems getting OCFS2 to run; we are using kernel
2.6.15 with OCFS2 1.2.1. Compiling the OCFS2 sources went fine and all
modules load perfectly.
However, we can only mount the OCFS2 volume on one machine at a time:
when we try to mount the volume on the two other machines we get an error
stating that another node is heartbeating in our slot. When we mount the
volume
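A common first sanity check for this message is that every machine carries an identical cluster.conf, each with its own node number, and then asking the tools which nodes are actually heartbeating on the volume; roughly:

    # identical file on all three machines, unique "number =" per node
    grep -A5 '^node:' /etc/ocfs2/cluster.conf
    # full detect: list ocfs2 volumes and the nodes heartbeating on them
    mounted.ocfs2 -f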
2011 Mar 03 (1 reply): OCFS2 1.4 + DRBD + iSCSI problem with DLM
2008 Mar 05 (0 replies): ocfs2 and another node is heartbeating in our slot
Hello,
I have a cluster with drbd8 + ocfs2.
If I mount the ocfs2 partition on node1 it works, but when I mount the partition on
node2 I see this in /var/log/messages:
-Mar 5 18:10:04 suse4 kernel: (2857,0):o2hb_do_disk_heartbeat:665 ERROR: Device "drbd1": another node is heartbeating in our slot!
-Mar 5 18:10:04 suse4 kernel: WARNING: at include/asm/dma-mapping.h:44 dma_map_sg()
-Mar 5
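With drbd8 under ocfs2 it is also worth confirming that the resource is connected and in dual-primary on both machines before the second mount; a quick check, assuming the resource is named r0:

    drbdadm role r0      # expect Primary/Primary for a mount on both nodes
    drbdadm cstate r0    # expect Connected
    cat /proc/drbd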
2011 Feb 28 (2 replies): ocfs2 crash with bugs reports (dlmmaster.c)
Hi,
After the problem described in http://oss.oracle.com/pipermail/ocfs2-users/2010-December/004854.html
we upgraded the kernel and ocfs2-tools on every node.
The present versions are:
kernel 2.6.32-bpo.5-amd64 (from Debian lenny-backports)
ocfs2-tools 1.4.4-3 (from Debian squeeze)
We didn't notice any problems in the logs until last Friday, when the whole
ocfs2 cluster crashed.
We know
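When chasing a crash that follows an upgrade like this, it can be worth double-checking that the tools and the kernel module actually running are the ones that were installed; for example:

    uname -r                                     # running kernel
    dpkg -l | grep -i ocfs2                      # installed ocfs2-tools version
    modinfo ocfs2 | grep -iE 'version|vermagic'  # module the kernel will load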
2007 Jul 07 (2 replies): Adding new nodes to OCFS2?
I looked around and found an older post which seems not applicable anymore. I
have a cluster of 2 nodes right now, with 3 OCFS2 file systems. All
the file systems were formatted with 4 node slots. I added the two new
nodes (by hand, with ocfs2console and o2cb_ctl), so my
/etc/ocfs2/cluster.conf looks right:
node:
ip_port = 7777
ip_address = 192.168.201.1
number = 0
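For reference, a new node can also be pushed into a running cluster with o2cb_ctl; a sketch with made-up values for one of the added nodes (cluster name and addresses are illustrative):

    # add the node to /etc/ocfs2/cluster.conf and (-i) to the live cluster;
    # typically run on every node so the configuration files stay identical
    o2cb_ctl -C -i -n newnode3 -t node \
        -a number=2 -a ip_address=192.168.201.3 -a ip_port=7777 -a cluster=ocfs2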
2008 Oct 03 (2 replies): OCFS2 with Loop device
Hi there,
I'm trying to set up OCFS2 on the loop device /dev/loop0.
I have 4 servers running SLES10 SP2.
Internal IPs: 192.168.55.1, .2, .3 and .6.
my cluster.conf:
--------------------------------------------
node:
ip_port = 7777
ip_address = 192.168.55.1
number = 0
name = www
cluster = cawww
node:
ip_port = 7777
ip_address = 192.168.55.2
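For what it's worth, backing /dev/loop0 and putting ocfs2 on it usually looks roughly like this (file name, label, sizes and mount point are only illustrative); note that the backing file has to live on storage that all four servers genuinely share, otherwise each node only sees its own data:

    losetup /dev/loop0 /srv/shared/ocfs2.img            # attach the image to loop0
    mkfs.ocfs2 -b 4K -C 32K -N 4 -L cawww /dev/loop0    # 4 node slots, label matching the cluster
    mount -t ocfs2 /dev/loop0 /mnt/www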
2007 Aug 22 (1 reply): mount.ocfs2: Value too large ...
Hello,
I have two servers, both connected to an external array, each by its own SAS connection. I need these servers to work simultaneously with the data on the array, and I think that ocfs2 is suitable for this purpose.
One server is a P4 Xeon (Gentoo Linux, i386, 2.6.22-r2) and the second is an Opteron (Gentoo Linux, x86_64, 2.6.22-r2). The servers are connected by Ethernet; both adapters are Intel
2009 Oct 20 (1 reply): Fencing OCFS
Hi people, I installed OCFS2 on my virtual machines, with CentOS 5.3 and Xen.
But when I turn off machine1, OCFS2 starts fencing
machine2. I read the documentation on oracle.com, but could not solve the
problem. Can someone help?
My configuration and package versions:
cluster.conf
node:
ip_port = 7777
ip_address = 192.168.1.35
number = 0
name = x1
cluster
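Self-fencing when the peer disappears is driven by the o2cb heartbeat and network timeouts, which on CentOS live in /etc/sysconfig/o2cb; a sketch with commonly raised values (illustrative; each heartbeat-threshold unit is roughly two seconds of missed disk heartbeats, so 31 is about a minute):

    # /etc/sysconfig/o2cb  (set via "service o2cb configure" or edit, then restart o2cb)
    O2CB_ENABLED=true
    O2CB_BOOTCLUSTER=ocfs2
    O2CB_HEARTBEAT_THRESHOLD=31     # ~60 s before a node gives up and fences itself
    O2CB_IDLE_TIMEOUT_MS=30000      # network idle timeout between nodes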
2008 Apr 23 (2 replies): Transport endpoint is not connected
Hi,
I want to share a partition of an external disk between 3 machines. It
is located on a Sun StoreTek 3510 and bound (via multipathing) to all 3
machines.
One machine (say mach1) is the node I used to create the filesystem and
set up /etc/ocfs2/cluster.conf. I copied the file to the other 2
machines. All use the same /etc/multipath.conf[1], too. All start
/etc/init.d/ocfs2 at boot, all are
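"Transport endpoint is not connected" at mount time generally concerns the o2net interconnect rather than the multipathed storage, so a reachability test of the cluster port is a reasonable first step; a sketch (port 7777 as in a default cluster.conf; mach2 and mach3 are made-up names for the other two machines):

    # run from mach1, and likewise from the others
    nc -z mach2 7777 && echo mach2 ok
    nc -z mach3 7777 && echo mach3 ok
    iptables -L -n | grep 7777    # check nothing local is dropping the port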
2010 Sep 03 (1 reply): Servers reboot - may be OCFS2 related
Hello.
What we have:
2x Debian 5.0 x64 - 2.6.32-20~bpo50+1 from backports
DRBD + OCFS2 1.4.1-1
Both nodes reboot every day during my tests. Under heavy load it takes
1-3 hours until a reboot; when idle, about 20 hours.
I'm not sure whether it is OCFS2 related, but if I don't mount the OCFS2
partition I don't get these reboots.
Attached is a screenshot of the system console showing the error.
There is nothing special in the logs.
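When a box reboots without leaving anything useful in the on-disk logs, netconsole is a cheap way to catch the last kernel messages on another machine; a sketch, with the interface, target IP, MAC and port all placeholders:

    # on the crashing node: send kernel messages out eth0 to 192.168.0.10:6666
    modprobe netconsole netconsole=@/eth0,6666@192.168.0.10/00:11:22:33:44:55
    # on the receiving machine:
    nc -u -l -p 6666 | tee node-console.log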
2008 Sep 24 (1 reply): help needed with ocfs2 on centos5.2
Hi,
I am interested in a simple configuration with 2 machines sharing a drbd
device in primary-primary mode.
I am done with drbd, but I am a newbie to ocfs2, so I wanted some help.
I have 2 machines with CentOS 5.2, kernel 2.6.18-8.el5. I downloaded the
following packages:
ocfs2-2.6.18-92.el5-1.4.1-1.el5.x86_64.rpm
ocfs2-tools-1.4.1-1.el5.x86_64.rpm
ocfs2console-1.4.1-1.el5.x86_64.rpm
(download
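For a first ocfs2-on-drbd setup on EL5 the rough order is: install an ocfs2 module rpm built for the running kernel (the one above targets 2.6.18-92.el5, so check uname -r), configure and start o2cb, then format the drbd device once and mount it on both nodes. A sketch with illustrative names:

    rpm -Uvh ocfs2-tools-1.4.1-1.el5.x86_64.rpm ocfs2console-1.4.1-1.el5.x86_64.rpm \
             ocfs2-2.6.18-92.el5-1.4.1-1.el5.x86_64.rpm
    service o2cb configure                      # enable on boot, name the cluster
    mkfs.ocfs2 -N 2 -L drbddata /dev/drbd0      # on one node only
    mount -t ocfs2 /dev/drbd0 /data             # on both nodes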
2010 Mar 18 (1 reply): OCFS2 works like standalone
I have installed OCFS2 on two SuSE 10 nodes.
At first sight everything seems to work superbly.
But
/dev/sda (ocfs2) on rac1 is not shared over the network (port 7777) with rac0.
On both nodes I have 500 MB /dev/sda disks that are mounted (and are ocfs2),
but they do not share their content with each other (the files and folders in
them). So when I create a file on one node I expect to
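Worth noting: ocfs2 is a shared-disk filesystem, not a replicating one, so both nodes have to mount the very same block device (SAN LUN, iSCSI LUN, drbd device, ...); two independent local 500 MB /dev/sda disks will never exchange content. A quick way to see whether the two mounts really are the same volume (run on both nodes and compare):

    mounted.ocfs2 -d    # identical UUID and label on both nodes => same volume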
2007 Sep 06 (1 reply): 60% full and writes fail..
I have a setup with lots of small files (Maildir) in 4 different
volumes, and for some reason the volumes are full when they reach 60% usage
(as reported by df).
This was of course a bit of a surprise for me... lots of failed
writes, bounced messages and very angry customers.
Has anybody on this list seen this before (not the angry customers ;-))?
Regards,
=paulv
# echo "ls
2014 Aug 22 (2 replies): ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd
Hi
On a drbd Primary node, when attempting to mount our cluster partition:
sudo mount -t ocfs2 /dev/drbd1 /cluster
we get:
mount.ocfs2: Unable to access cluster service while trying to join the
group
We then call:
sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [
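On Debian/Ubuntu, dpkg-reconfigure ocfs2-tools essentially rewrites /etc/default/o2cb; making sure that file enables the stack at boot avoids the manual step after every reboot. Roughly:

    # /etc/default/o2cb
    O2CB_ENABLED=true
    O2CB_BOOTCLUSTER=ocfs2
    # then: service o2cb restart  (and service ocfs2 restart for fstab-listed mounts)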
2005 Jun 28 (1 reply): Dual NICs for OCFS2
To avoid a single point of failure, we are configuring an Oracle cluster
with two private NICs per node. They are directly attached via crossover
cables to their peers on the other server (two-node cluster).
How can we tell OCFS2 that it has a second card available?
Here is our cluster.conf:
node:
ip_port = 7777
ip_address = 10.10.2.1
number = 0
name = neo
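The classic o2cb stack only takes a single ip_address per node, so redundancy is usually provided below it by bonding the two NICs and putting the bond's address into cluster.conf; a RHEL-style sketch (interface names and options are illustrative):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.10.2.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BONDING_OPTS="mode=active-backup miimon=100"
    # each physical NIC's ifcfg-ethN then sets MASTER=bond0 and SLAVE=yes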
2012 Aug 06 (0 replies): Problem with mdadm + lvm + drbd + ocfs ( sounds obvious, eh ? :) )
Hi there
First of all, apologies for the lengthy message, but it's been a long weekend.
I'm trying to set up a two-node cluster with the following configuration:
OS: Debian 6.0 amd64
ocfs: 1.4.4-3 ( debian package )
drbd: 8.3.7-2.1
lvm2: 2.02.66-5
kernel: 2.6.32-45
mdadm: 3.1.4-1+8efb9d1+squeeze1
layout:
0 - two 36 GB SCSI disks in a RAID1 array, with mdadm.
1 - one lvm2 VG on top of the RAID1,
2009 Mar 12 (1 reply): DLM Problem?
Hello,
I have an active, load-balanced web cluster with 2 SLES10 SP2 nodes, both
running ocfs2 against an iSCSI target.
The ocfs2 volume is mounted on both nodes.
Everything works fine, except that sometimes the load on both systems goes as
high as 200; then both systems freeze, and only a reboot regains control.
I noticed that this behaviour does not occur when the cluster is NOT
balanced. When the load on both
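When both nodes wedge like this, a task dump usually shows whether processes are stuck in the DLM or waiting on I/O; if magic-sysrq is enabled it can still be triggered from the console while the load is pegged, for example:

    echo 1 > /proc/sys/kernel/sysrq     # allow sysrq triggers
    echo t > /proc/sysrq-trigger        # dump every task's stack to dmesg/console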
2010 Oct 20 (1 reply): OCFS2 + iscsi: another node is heartbeating in our slot (over scst)
Hi,
I'm building a cluster containing two nodes plus a separate, common storage
server.
On the storage server I have a volume with an ocfs2 filesystem, which is
shared via an iSCSI target.
When a node connects to the target I can mount the volume locally on the node
and use it.
Unfortunately, on the storage server ocfs2 logs this to dmesg:
Oct 19 22:21:02 storage kernel: [ 1510.424144]
2011 Dec 06 (2 replies): OCFS2 showing "No space left on device" on a device with free space
Hi,
I am getting the error "No space left on device" on a device that has free
space and is an ocfs2 filesystem.
Additional information is below:
[root@sai93 staging]# debugfs.ocfs2 -n -R "stats" /dev/sdb1 | grep -i "Cluster Size"
	Block Size Bits: 12   Cluster Size Bits: 15
[root@sai93 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release
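For the record, those bit counts translate to 2^12 = 4096-byte blocks and 2^15 = 32768-byte (32 KB) clusters, so space on this volume is handed out in 32 KB units.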