Displaying 20 results from an estimated 300 matches similar to: "strange fencing behavior"
2012 Sep 11
2
Sort or Permutate
Dear all,
I have a structure whose first column contains file names and whose second column contains a number that is a "rating" of the file in the first column.
A small subset looks like this:
small
[,1]
[1,]
2003 Oct 19
1
jail + devfs + snp problem (FreeBSD 5.1-RELEASE-p10)
shell# /sbin/devfs rule -s 2 delset
shell# /sbin/devfs rule -s 2 add hide
shell# /sbin/devfs rule -s 2 add path random unhide
shell# /sbin/devfs rule -s 2 add path urandom unhide
shell# /sbin/devfs rule -s 2 add path zero unhide
shell# /sbin/devfs rule -s 2 add path pty\* unhide
shell# /sbin/devfs rule -s 2 add path pty\* unhide
shell# /sbin/devfs rule -s 2 add path tty\* unhide
shell#
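Defining the ruleset only stores it; it still has to be applied to the jail's devfs mount. A minimal sketch, where the jail path is a hypothetical example:

```
# Apply ruleset 2 to the jail's devfs mount
# (/jail/www/dev is a made-up path; use your jail's actual dev directory)
shell# /sbin/devfs -m /jail/www/dev rule -s 2 applyset
```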
2013 Nov 01
1
How to break out of the endless loop in the recovery thread? Thanks a lot.
Hi everyone,
I have one OCFS2 issue.
The OS is Ubuntu, running Linux kernel 3.2.50.
There are three nodes in the OCFS2 cluster, and all of them use an iSCSI SAN (HP 4330) as storage.
When the storage restarted, two nodes were fenced and rebooted because their heartbeat writes to the storage failed.
But the last node did not restart, and it keeps writing error messages into syslog as below:
2006 Apr 18
1
Self-fencing issues (RHEL4)
Hi.
I'm running RHEL4 for my test system, Adaptec Firewire controllers,
Maxtor One Touch III shared disk (see the details below),
100Mb/s dedicated interconnect. It panics with no load about every
20 minutes (error message from netconsole attached).
Any clues?
Yegor
---
[root at rac1 ~]# cat /proc/fs/ocfs2/version
OCFS2 1.2.0 Tue Mar 7 15:51:20 PST 2006 (build
2009 Mar 31
4
About multiple hosts with same hostname
Hello all
I have a somewhat annoying problem with OpenSSH. Now, granted, it's
certainly not a bug. I'm just wondering what the best course of action is.
At work, we have multiple customers with machines named "fw0", "fs0",
etc. This is all good, since it conforms to a standard naming scheme, so
it's easier to administer.
However, when we go to our
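One common way to disambiguate identical short hostnames across customers is per-host aliases in ~/.ssh/config; a minimal sketch, where the customer prefixes and domains are hypothetical:

```
# ~/.ssh/config -- customer names and domains here are made up
Host acme-fw0
    HostName fw0.acme.example.net
    HostKeyAlias acme-fw0      # keeps known_hosts entries distinct

Host globex-fw0
    HostName fw0.globex.example.net
    HostKeyAlias globex-fw0
```

`ssh acme-fw0` then reaches the right machine, and HostKeyAlias stops the colliding names from tripping host-key checks against each other.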
2007 Nov 29
1
Troubles with two node
Hi all,
I'm running OCFS2 on two systems with OpenSUSE 10.2, connected over fibre
channel to shared storage (HP MSA1500 + HP PROLIANT MSA20).
The cluster has two nodes (web-ha1 and web-ha2); sometimes (once or twice
a month) OCFS2 stops working on both systems. On the first node I get
no errors in the log files, and after a forced shutdown of the first
node, on the second I can see
2015 Oct 15
3
CentOS7 - Serial Console and Flow Control
Hello List,
I'm ironing out details to upgrade a few systems to CentOS7.
My servers have BMC with Serial over LAN support. In C5 and C6, I
determined how to have BIOS/POST, kernel, and serial console access. I'm
reading up on the method to accomplish the pieces with C7.
Presently SoL output works, so I see BIOS/POST messages and the GRUB boot
list.
My changes to enable serial
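On C7 the serial console is configured through the GRUB2 defaults plus a console= kernel argument; systemd then spawns a serial getty on that port automatically. A sketch, assuming SoL is on ttyS1 at 115200 (the unit and speed are assumptions; match them to your BMC's SoL settings):

```
# /etc/default/grub -- unit/speed below are assumptions for SoL on ttyS1
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=1 --word=8 --parity=no --stop=1"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS1,115200n8"
```

Then regenerate the config with grub2-mkconfig -o /boot/grub2/grub.cfg and reboot.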
2006 Jul 10
1
2 Node cluster crashing
Hi,
We have a two node cluster running SLES 9 SP2 connecting directly to an
EMC CX300 for storage.
We are using OCFS(OCFS2 DLM 0.99.15-SLES) for the voting disk etc, and
ASM for data files.
The system has been running until last Friday when the whole cluster
went down with the following error messages in the /var/log/messages
files :
rac1:
Jul 7 14:56:23 rac1 kernel:
2009 Feb 04
1
Strange dmesg messages
Hi list,
Something went wrong this morning and we had a node ( #0 ) reboot.
Something blocked NFS access from both nodes; one rebooted, and on the
other we restarted nfsd, which brought it back.
Looking at node #0 - the one that rebooted - everything in the logs seems
normal, but looking at the other node's dmesg we saw these messages:
First, o2net detected that node #0 was dead: (It
2011 Apr 01
1
Node Recovery locks I/O in two-node OCFS2 cluster (DRBD 8.3.8 / Ubuntu 10.10)
I am running a two-node web cluster on OCFS2 via DRBD Primary/Primary
(v8.3.8) and Pacemaker. Everything seems to be working great, except during
testing of hard-boot scenarios.
Whenever I hard-boot one of the nodes, the other node is successfully fenced
and marked "Outdated"
* <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown"
2009 May 12
2
add error check for ocfs2_read_locked_inode() call
After upgrading from 2.6.28.10 to 2.6.29.3 I saw the following new errors
in kernel log:
May 12 14:46:41 falcon-cl5
May 12 14:46:41 falcon-cl5 (6757,7):ocfs2_read_locked_inode:466 ERROR:
status = -22
Only one node has volumes mounted in the cluster:
/dev/sde on /home/apache/users/D1 type ocfs2
(rw,_netdev,noatime,heartbeat=local)
/dev/sdd on /home/apache/users/D2 type ocfs2
2013 Dec 03
0
Problem booting guest with more than 8 disks
Hello All,
On my host machine, I'm using kvm, libvirt, ceph and ubuntu versions as
follows:
>> QEMU emulator version 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5), Copyright (c)
2003-2008 Fabrice Bellard
>> root at kitt:~# virsh --version: 1.1.1
>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>> VERSION="13.10, Saucy Salamander"
>> Linux kitt
2011 Nov 28
1
POP3/IMAP crash signal 10
Hi,
I'm building a Postfix/Dovecot mail server and while I am able to send/receive emails using telnet, after establishing a connection to Dovecot via a client (Mail Live, Thunderbird etc) the following appears in the logs:
Nov 28 14:11:02 mailserver dovecot: [ID 583609 mail.info] pop3-login: Login: user=<user at domain.com>, method=PLAIN, rip=xxx.xxx.xxx.xxx, lip=xxx.xxx.xxx.xxx,
2009 Mar 04
2
[PATCH 1/1] Patch to recover orphans in offline slots during recovery and mount
During recovery, a node recovers orphans in its slot and the dead node(s). But
if the dead nodes were holding orphans in offline slots, those will be left
unrecovered.
If the dead node is the last one to die, is holding orphans in other slots,
and is the first one to mount, then it only recovers its own slot, which
leaves orphans in offline slots.
This patch queues complete_recovery
This patch queues complete_recovery
2006 Sep 21
0
ocfs2 reboot
Hi ,
I'm new to this mailing list, but I am getting several errors using ocfs2.
We had ocfs2 1.2.1 and both nodes of the cluster rebooted, so we upgraded
to ocfs2 1.2.3.
Again we had a reboot of one node of the cluster.
/var/log/messages shows:
o2net_idle_timer:1309 here are some times that might help debug the
situation: (tmr 1158758358.807993 now 1158758368.805980 dr 1158758358.807964adv
2006 Mar 14
1
problems with ocfs2
2001 Oct 18
2
Incorrect return types for snprintf() and vsnprintf()
Both of these functions use strlen() to compute their return value.
Cheers,
Scott Rankin
*** /openbsd-compat/bsd-snprintf.c.orig Thu Oct 18 13:57:51 2001
--- /openbsd-compat/bsd-snprintf.c Thu Oct 18 13:58:26 2001
***************
*** 632,638 ****
#endif /* !defined(HAVE_SNPRINTF) || !defined(HAVE_VSNPRINTF) */
#ifndef HAVE_VSNPRINTF
! int
vsnprintf(char *str, size_t count, const char *fmt,
2007 Mar 08
4
ocfs2 cluster becomes unresponsive
We are running OCFS2 on SLES9 machines using a FC SAN. Without warning, both nodes become unresponsive: we cannot access either machine via ssh or a terminal (it hangs after the username is entered). However, the machines still respond to pings. This continues until one node is rebooted, at which time the second node resumes normal operation.
I am not entirely sure that this is an OCFS2 problem at all
2008 Oct 22
2
Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings,
Last night I manually unpresented and deleted a LUN (a SAN snapshot)
that was presented to one node in a four node RAC environment running
OCFS2 v1.4.1-1. The system then rebooted with the following error:
Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR:
Heartbeat write timeout to device dm-24 after 120000 milliseconds
Oct 21 16:45:34 ausracdb03 kernel:
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything:
TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604 246266859
TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
6074335 30371669 285493670
TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc
debugfs.ocfs2 1.6.4
5239722 26198604