Displaying 20 results from an estimated 300 matches similar to: "Private Interconnect and self fencing"
2006 May 10
1
Floating Point Exception
I have a Fedora Core server running:
Fedora Core release 4 (Stentz)
kernel version: 2.6.15-1.1833_FC4smp
( I have also tried kernel version: 2.6.16-1.2108_FC4smp)
I compiled the ocfs2 and ocfs2-tools using the following steps:
# MODULES:
tar zxvpf ocfs2-1.2.1.tar.gz
cd ocfs2-1.2.1
./configure
make
make install
# TOOLS:
tar zxf ocfs2-tools-1.2.1.tar.gz
cd ocfs2-tools-1.2.1
./configure
2006 Jun 09
1
RHEL 4 U2 / OCFS 1.2.1 weekly crash?
Hello,
I have two nodes running the 2.6.9-22.0.2.ELsmp kernel and the OCFS2
1.2.1 RPMs. About once a week, one of the nodes crashes itself (self-
fencing) and I get a full vmcore on my netdump server. The netdump log
file shows the shared filesystem LUN (/dev/dm-6) did not respond within
12000ms. I have not changed the default heartbeat values
in /etc/sysconfig/o2cb. There was no other IO
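For reference, the fencing window these timeout reports revolve around is derived from the disk heartbeat threshold in /etc/sysconfig/o2cb. A minimal sketch, assuming an OCFS2 1.2-era configuration (the variable names are the stock o2cb ones; the value shown is the old default, not a recommendation):

```shell
# The o2hb disk heartbeat fires every 2 seconds, and a node self-fences
# after (O2CB_HEARTBEAT_THRESHOLD - 1) * 2 seconds without a successful
# write -- the OCFS2 1.2-era default of 7 yields exactly the 12000 ms
# window mentioned in the report above. Raising the threshold widens
# the window for slow or shared storage.
O2CB_ENABLED=true
O2CB_HEARTBEAT_THRESHOLD=7
# Fencing window in seconds implied by the threshold:
echo $(( (O2CB_HEARTBEAT_THRESHOLD - 1) * 2 ))   # prints 12
```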
2006 Apr 18
1
Self-fencing issues (RHEL4)
Hi.
I'm running RHEL4 on my test system: Adaptec FireWire controllers, a
Maxtor OneTouch III shared disk (see the details below), and a
100 Mb/s dedicated interconnect. It panics under no load about every
20 minutes (error message from netconsole attached).
Any clues?
Yegor
---
[root@rac1 ~]# cat /proc/fs/ocfs2/version
OCFS2 1.2.0 Tue Mar 7 15:51:20 PST 2006 (build
2006 Nov 03
2
Newbie questions -- is OCFS2 what I even want?
Dear Sirs and Madams,
I run a small visual effects production company, Hammerhead Productions.
We'd like to have an easily extensible, inexpensive, relatively
high-performance storage network built from open-source components.
I was hoping that OCFS2 would be that system.
I have a half-dozen 2 TB fileservers I'd like the rest of the network to see
as a single 12 TB disk, with the aggregate
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared
disk running either OCFS2 or GFS. The environment is CentOS 5.3 with
DRBD82 (we also tried DRBD83 from testing).
Setting up a single primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, mounting it on only one node (ext3), and
running bonnie++ works.
When setting up ocfs2 on the /dev/drbd0
2010 Sep 25
5
unpredictable Xen crash with NetBSD 5.0.2 (XEN3PAE_DOMU)
Dear all:
I'm sorry for cross-posting.
I'm trying to set up an aoe-vblade server on NetBSD 5.0.2 (domU)
and to run a stress test from a Linux box with:
for i in {65536}; do dd if=/dev/zero of=/dev/etherd/e?.? bs=4K;done
Two Xen dom0 configurations I use:
1. 32-bit SuSE Enterprise Linux 11 SP1, 2.6.32.12-0.7-xen with 32-bit
Xen 4.0.0_21091_04-0.2.6
2. 64-bit Gentoo 2.6.32-xen-r1 with 64-bit Xen 4.0.0
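Incidentally, the quoted stress loop may not do what was intended: in bash, `{65536}` is not a sequence expansion, so the loop body runs exactly once. A corrected sketch, writing to a temporary file here instead of a real /dev/etherd device (an assumption for illustration only):

```shell
# {1..4} is a bash brace-range expansion; the original {65536} expands
# to the literal word "{65536}", so that loop ran its body only once.
target=$(mktemp)   # stand-in for /dev/etherd/e0.0
for i in {1..4}; do
    # each pass writes 256 * 4 KiB = 1 MiB over the start of the target
    dd if=/dev/zero of="$target" bs=4K count=256 2>/dev/null
done
wc -c < "$target"   # prints 1048576
```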
2010 Jan 18
1
Getting Closer (was: Fencing options)
One more follow on,
The combination of kernel.panic=60 and kernel.printk=7 4 1 7 seems to
have netted the culprit:
E01-netconsole.log:Jan 18 09:45:10 E01 (10,0):o2hb_write_timeout:137
ERROR: Heartbeat write timeout to device dm-12 after 60000
milliseconds
E01-netconsole.log:Jan 18 09:45:10 E01
(10,0):o2hb_stop_all_regions:1517 ERROR: stopping heartbeat on all
active regions.
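For anyone reproducing this debugging setup, the two settings mentioned can be persisted as a sysctl fragment (a sketch using the exact values quoted above):

```
# /etc/sysctl.conf fragment -- apply with `sysctl -p`.
kernel.panic = 60        # wait 60 s after a panic before rebooting,
                         # long enough for netconsole to ship the oops
kernel.printk = 7 4 1 7  # raise the console log level to KERN_DEBUG
```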
2009 Jul 09
4
Issues with file.info?
Are there any tricks associated with file.info?
I just tried it on a directory and it returned NA for all fields for all files. I tried it on a different folder with different files, and it still returned NA.
I tried it on a specific file and it returned all the proper info correctly.
Just wondering if there are any tricks I've overlooked.
2008 Aug 21
5
VM node won't talk to host
I am trying to mount the same partition from a KVM Ubuntu 8.04.1 virtual
machine and on an Ubuntu 8.04.1 host server.
I am able to mount the partition just fine on two Ubuntu host servers; they
both talk to each other. The logs on both servers show the other machine
mounting and unmounting the drive.
However, when I mount the drive in the KVM VM, I get no communication with the
host
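One thing worth checking in a setup like this: o2net traffic between nodes uses the IP addresses and TCP port declared in /etc/ocfs2/cluster.conf, which must be identical on every node, and a KVM guest on a NATed bridge may not be reachable on that port from the host. A sketch of the relevant stanzas (node names and addresses are made-up examples):

```
node:
        ip_port = 7777
        ip_address = 192.168.1.10
        number = 0
        name = host1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.20
        number = 1
        name = vm1
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
```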
2005 Jul 22
10
AOE (Ata over ethernet) troubles on xen 2.0.6
I understand that all work is going into xen3, but I wanted to note
that aoe (drivers/block/aoe) is giving me trouble on xen 2.0.6 (so we
can keep an eye on xen3).
Specifically, I can't see or export AoE devices. As quick background
on AoE: it is not IP (not routable, etc.) but works with broadcasts and
packets to MAC addresses (see http://www.coraid.com).
(for anyone who
2011 Feb 28
2
ocfs2 crash with bugs reports (dlmmaster.c)
Hi,
After the problem described in http://oss.oracle.com/pipermail/ocfs2-users/2010-
December/004854.html, we upgraded the kernel and ocfs2-tools on every node.
The present versions are:
kernel 2.6.32-bpo.5-amd64 (from debian lenny-backports)
ocfs2-tools 1.4.4-3 (from debian squeeze)
We didn't notice any problems in the logs until last Friday, when the whole
ocfs2 cluster crashed.
We know
2006 Apr 14
1
[RFC: 2.6 patch] fs/ocfs2/: remove unused exports
This patch removes the following unused EXPORT_SYMBOL_GPL's:
- cluster/heartbeat.c: o2hb_check_node_heartbeating_from_callback
- cluster/heartbeat.c: o2hb_stop_all_regions
- cluster/nodemanager.c: o2nm_get_node_by_num
- cluster/nodemanager.c: o2nm_configured_node_map
- cluster/nodemanager.c: o2nm_get_node_by_ip
- cluster/nodemanager.c: o2nm_node_put
- cluster/nodemanager.c: o2nm_node_get
-
2010 Apr 04
2
CentOS and Xen 3.0.3
Hi
I installed CentOS 5.4 and am trying to use Xen.
But when I deploy a VM with virt-manager and specify 2 or more
VCPUs, the VM starts with just one CPU.
On this VM I installed Windows 2003 Server.
This the file of VM:
name = "Avision"
uuid = "625139bf-1ad3-86fc-45e6-e8efbb9db643"
maxmem = 2048
memory = 2048
vcpus = 4
cpus = "0,1,2,3"
builder =
2010 Dec 07
1
Two-node cluster often hanging in o2hb/jbd2
Hi,
I'm pretty new to ocfs2 and a bit stuck. I have two Debian/Squeeze
(testing) machines accessing an ocfs2 filesystem over aoe. The
filesystem sits on an lvm2 volume, but I guess that is irrelevant.
Even when mostly idle, everything accessing the cluster sometimes hangs
for about 20 seconds. This happens rather frequently, say every 5
minutes, but the interval seems irregular while the
2008 Oct 22
2
Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings,
Last night I manually unpresented and deleted a LUN (a SAN snapshot)
that was presented to one node in a four node RAC environment running
OCFS2 v1.4.1-1. The system then rebooted with the following error:
Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR:
Heartbeat write timeout to device dm-24 after 120000 milliseconds
Oct 21 16:45:34 ausracdb03 kernel:
2009 Feb 25
2
1/2 OFF-TOPIC: How to use CLVM (on top of AoE vblades) instead of plain LVM for Xen-based VMs on Debian 5.0?
Guys,
I have setup my hard disc with 3 partitions:
1- 256MB on /boot;
2- 2GB on / for my dom0 (Debian 5.0) (eth0 default bridge for guests LAN);
3- 498GB exported with vblade-persist to my network (eth1 for the AoE
protocol).
On dom0 hypervisor01:
vblade-persist setup 0 0 eth1 /dev/sda3
vblade-persist start all
How do I create a CLVM VG with /dev/etherd/e0.0 on each of my dom0s?
Including the
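A hypothetical sketch of the CLVM side of this question, assuming the cluster stack (cman/clvmd) is already running on every dom0 and the exported device shows up as /dev/etherd/e0.0 (the VG and LV names are invented examples):

```
# Run once, from any one dom0:
pvcreate /dev/etherd/e0.0
vgcreate -c y vg_xen /dev/etherd/e0.0   # -c y marks the VG as clustered
lvcreate -L 10G -n vm01_disk vg_xen

# On the other dom0s, refresh and activate the shared metadata:
vgscan
vgchange -a y vg_xen
```

With a clustered VG, clvmd coordinates metadata changes across nodes, which is what plain LVM cannot do safely on a shared AoE device.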
2010 Oct 08
23
O2CB global heartbeat - hopefully final drop!
All,
This is hopefully the final drop of the patches for adding global heartbeat
to the o2cb stack.
The diff from the previous set is here:
http://oss.oracle.com/~smushran/global-hb-diff-2010-10-07
Implemented most of the suggestions provided by Joel and Wengang.
The most important one was to activate the feature only at the end.
Also, I got a mostly clean run with checkpatch.pl.
Sunil
2006 Jul 10
1
2 Node cluster crashing
Hi,
We have a two-node cluster running SLES 9 SP2, connected directly to an
EMC CX300 for storage.
We are using OCFS (OCFS2 DLM 0.99.15-SLES) for the voting disk etc., and
ASM for data files.
The system has been running until last Friday when the whole cluster
went down with the following error messages in the /var/log/messages
files :
rac1:
Jul 7 14:56:23 rac1 kernel:
2006 Mar 14
1
problems with ocfs2