similar to: OCFS2 + iscsi: another node is heartbeating in our slot (over scst)

Displaying 20 results from an estimated 100 matches similar to: "OCFS2 + iscsi: another node is heartbeating in our slot (over scst)"

2008 Oct 22
2
Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings, Last night I manually unpresented and deleted a LUN (a SAN snapshot) that was presented to one node in a four node RAC environment running OCFS2 v1.4.1-1. The system then rebooted with the following error: Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR: Heartbeat write timeout to device dm-24 after 120000 milliseconds Oct 21 16:45:34 ausracdb03 kernel:
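For context on the 120000 ms figure in that log line: o2hb's disk heartbeat timeout is (O2CB_HEARTBEAT_THRESHOLD - 1) * 2000 ms, so 120000 ms corresponds to a threshold of 61 (the default of 31 gives 60 s). A minimal sketch of where that knob lives, assuming the stock o2cb init scripts; the values shown are illustrative, not a recommendation:

# /etc/sysconfig/o2cb (Debian-style systems use /etc/default/o2cb instead)
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
O2CB_HEARTBEAT_THRESHOLD=61    # (61 - 1) * 2000 ms = 120000 ms, as in the log above

# Re-run the interactive configuration (or restart o2cb) for it to take effect:
service o2cb configure
service o2cb status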
2006 May 26
1
Another node is heartbeating in our slot!
All, We are having some problems getting OCFS2 to run; we are using kernel 2.6.15 with OCFS2 1.2.1. Compiling the OCFS2 sources went fine and all modules load perfectly. However, we can only mount the OCFS2 volume on one machine at a time; when we try to mount the volume on the two other machines we get an error stating that another node is heartbeating in our slot. When we mount the volume
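A frequent cause of this symptom on a fresh setup is an /etc/ocfs2/cluster.conf that is not identical on every node, or that gives two nodes the same node number. A minimal cluster.conf sketch for a three-node cluster; the host names and addresses are placeholders:

cluster:
        node_count = 3
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.101
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.102
        number = 1
        name = node2
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.103
        number = 2
        name = node3
        cluster = ocfs2

Each node's "name" must match its hostname, and the same file has to be copied to all machines before o2cb is started.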
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
2008 Sep 18
0
Ocfs2-users Digest, Vol 57, Issue 14
I think I might have misunderstood where it is failing: has this file been added to the DB on the web site, or does it fail when you try to configure this? Carle Simmonds, Infrastructure Consultant, Technology Services, Experian UK Ltd. Tel: +44 (0)115 941 0888 (main switchboard) Mobile: +44 (0)7813 854834 E-Mail: carle.simmonds at uk.experian.com
2008 Sep 18
2
o2hb_do_disk_heartbeat:982:ERROR
Hi everyone; I have a problem on my 10-node cluster with OCFS2 1.2.9, and the OS is RHEL 4.7 AS. Nine nodes can start the o2cb service and mount the SAN disks on startup; however, one node cannot. My cluster configuration is:

node:
        ip_port = 7777
        ip_address = 192.168.5.1
        number = 0
        name = fa01
        cluster = ocfs2
node:
        ip_port =
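When a single node out of an otherwise healthy cluster refuses to start o2cb, the usual first checks are that its copy of cluster.conf matches the other nodes and that its hostname matches its "name =" entry. A quick sketch of those checks on the failing node, assuming the stock ocfs2-tools locations:

md5sum /etc/ocfs2/cluster.conf    # compare against the checksum on the 9 working nodes
uname -n                          # must equal the "name =" value for this node
service o2cb status               # is the cluster stack loaded and online?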
2010 Apr 18
0
heartbeating in the wrong slot
Our cluster has recently been logging messages like this: Apr 18 08:05:39 lv2 kernel: (26235,1):o2hb_do_disk_heartbeat:776 ERROR: Device "blk_shared": another node is heartbeating in our slot! where blk_shared is the name of the shared disk that we have ocfs2 mounted on. Can someone tell me what this means?
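As for what the message means: each node writes its disk heartbeat into a slot indexed by its node number, so "another node is heartbeating in our slot" generally indicates two nodes configured with the same node number, or two separate clusters pointed at the same shared device. A hedged way to see who is actually heartbeating on the disk, using the ocfs2-tools utility (the device path is an example):

# Full-detect mode reads the heartbeat area and lists the nodes found there:
mounted.ocfs2 -f /dev/blk_shared

# Then confirm every node has a unique "number =" in cluster.conf:
grep -E "name|number" /etc/ocfs2/cluster.conf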
2008 Mar 05
0
ocfs2 and another node is heartbeating in our slot
Hello, I have one cluster with drbd8+ocfs2. If I mount the ocfs2 partition on node1 it works, but when I mount the partition on node2 I see this in /var/log/messages: Mar 5 18:10:04 suse4 kernel: (2857,0):o2hb_do_disk_heartbeat:665 ERROR: Device "drbd1": another node is heartbeating in our slot! Mar 5 18:10:04 suse4 kernel: WARNING: at include/asm/dma-mapping.h:44 dma_map_sg() Mar 5
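One commonly cited prerequisite for mounting OCFS2 on both DRBD nodes at once is running the DRBD resource in dual-primary mode; if only one node can be Primary, the second mount cannot work. A minimal drbd.conf fragment for DRBD 8.x, with the resource name as a placeholder:

resource r0 {
        startup {
                become-primary-on both;     # promote both nodes at startup
        }
        net {
                allow-two-primaries;        # required for a shared-disk filesystem such as OCFS2
        }
}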
2007 Mar 16
2
re: o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1" another node is heartbeating in our slot!
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Folks, I'm trying to wrap my head around something that happened in our environment. Basically, we noticed the error in /var/log/messages with no other errors. "Mar 16 13:38:02 dbo3 kernel: (3712,3):o2hb_do_disk_heartbeat:963 ERROR: Device "sdb1": another node is heartbeating in our slot!" Usually there are a
2010 Oct 08
23
O2CB global heartbeat - hopefully final drop!
All, This is hopefully the final drop of the patches for adding global heartbeat to the o2cb stack. The diff from the previous set is here: http://oss.oracle.com/~smushran/global-hb-diff-2010-10-07 Implemented most of the suggestions provided by Joel and Wengang. The most important one was to activate the feature only at the end. Also got a mostly clean run with checkpatch.pl. Sunil
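For readers who later want to try the feature, ocfs2-tools 1.8 grew an o2cb command for configuring global heartbeat regions; a rough sketch, where the cluster name, node details and device are examples rather than anything taken from the patch series itself:

o2cb add-cluster mycluster
o2cb add-node mycluster node1 --ip 10.0.0.1
o2cb add-node mycluster node2 --ip 10.0.0.2
o2cb add-heartbeat mycluster /dev/sdb1
o2cb heartbeat-mode mycluster global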
2010 Dec 07
1
Two-node cluster often hanging in o2hb/jdb2
Hi, I'm pretty new to ocfs2 and a bit stuck. I have two Debian/Squeeze (testing) machines accessing an ocfs2 filesystem over aoe. The filesystem sits on an lvm2 volume, but I guess that is irrelevant. Even when mostly idle, everything accessing the cluster sometimes hangs for about 20 seconds. This happens rather frequently, say every 5 minutes, but the interval seems irregular while the
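When a cluster stalls like this, it helps to capture what the o2hb and jbd2 threads are actually blocked on during a hang; a generic sketch using sysrq, assuming it can be run as root while the stall is happening:

echo 1 > /proc/sys/kernel/sysrq       # enable sysrq if it is not already
echo w > /proc/sysrq-trigger          # dump all blocked (uninterruptible) tasks
dmesg | tail -n 60                    # look for the o2hb-*, jbd2/* and dlm threads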
2006 Nov 03
2
Newbie questions -- is OCFS2 what I even want?
Dear Sirs and Madams, I run a small visual effects production company, Hammerhead Productions. We'd like to have an easily extensible, inexpensive, relatively high-performance storage network using open-source components. I was hoping that OCFS2 would be that system. I have a half-dozen 2 TB fileservers I'd like the rest of the network to see as a single 12 TB disk, with the aggregate
2008 Sep 11
4
Some more debug stuff
Added two debugfs entries... one to dump o2hb livenodes and the other to dump osb.

$ cat /sys/kernel/debug/ocfs2/BC4F4550BEA74F92BDCC746AAD2EC0BF/fs_state
Device   => Id: 8,65  Uuid: BC4F4550BEA74F92BDCC746AAD2EC0BF  Gen: 0xA02024F2  Label: sunil-xattr
Volume   => State: 1  Flags: 0x0
Sizes    => Block: 4096  Cluster: 4096
Features => Compat: 0x1  Incompat: 0x350  ROcompat: 0x1
2005 Mar 30
0
New vorbis music http://pan.zipcon.net
April-1 NEWS from http://pan.zipcon.net New postings: Arthur_Grossman_Live: Saint-Saens Bassoon-piano Sonata (opus 168, 1921) with Joseph Levine, piano William McColl, clarinet and Joseph Levine, piano play the: Grand Duo Concertant for clarinet and piano by Carl Maria Von Weber, opus 48 (1816) Felix Skowronek and Marshall Winslow play Reicha's Lento from the Grand
2023 Jun 06
0
[bug report] ocfs2/cluster: Pin/unpin o2hb regions
[ This is ancient code. - dan ]

fs/fs_context.c
    168  {
    169          int ret;
    170
    171          struct fs_parameter param = {
    172                  .key = key,
    173                  .type = fs_value_is_flag,
    174                  .size = v_size,
    175          };
    176
    177          if (value) {
--> 178                  param.string =
2011 Sep 09
17
High Number of VMs
Hi, I'm curious about how you guys deal with big virtualization installations. To date we have only dealt with a small number of VMs (~10) on not-too-big hardware (2x quad Xeons + 16GB RAM). As I'm the "storage guy" I find it quite convenient to present to the dom0s one LUN per VM, which makes live migration possible but without the cluster file system or cLVM
2011 Mar 12
1
What iSCSI is used in Centos 5 and RHEL6?
I was reading up on iSCSI in preparation and became aware that there are different iSCSI software/drivers/whatever-is-the-correct-term available, e.g. IET http://iscsitarget.sourceforge.net/ SCST http://scst.sourceforge.net/ STGT http://stgt.berlios.de/ LIO http://linux-iscsi.org/ Based on what I can see, it seems to be STGT, because the site provides a link to a Red Hat advisory which seems to
2013 Mar 26
3
iSCSI connection corrupts Xen block devices.
Hi, hope the week has started out well for everyone. This report may be in the FWIW department since there may be a fundamental reason why this doesn't work. We elected to report this to the Xen community since we thought any behavior which corrupted disk images needed to at least be reported. We are maintaining the Xen-SAN release which provides hotplug functionality to allow Xen
2013 Sep 18
3
multipathing XCP 1.6
Hello, I hope to be in the right list. My question is the following: I can't seem to get multipathing configured on XCP 1.6. In XenCenter I was able to enable "Multipath"; however, in the LUN under the General tab I see "multipath 1 of 1 active". In the CLI I run the command xe --m node session and it lists only one session. Can someone help me please? Thanks in
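Not XCP-specific, but on a plain open-iscsi setup "1 of 1 active" plus a single session usually just means only one target portal has been logged into, so multipath has nothing to aggregate. A hedged sketch of the underlying open-iscsi commands (XCP's storage manager drives open-iscsi itself, so treat this as illustration only; addresses and the IQN are placeholders):

iscsiadm -m discovery -t sendtargets -p 10.0.0.10   # list the portals the target advertises
iscsiadm -m session                                  # currently one session, per the report above
iscsiadm -m node -T iqn.2001-05.com.example:tgt1 -p 10.0.0.11 --login
multipath -ll                                        # should now show two paths to the LUN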
2008 May 05
7
iscsi conn error: Xen related?
Hello all, I am getting some severe iSCSI connection loss on my dom0 (Gentoo 2.6.20-xen-r6, Xen 3.1.1), happening several times a day. The open-iscsi version is 2.0.865.12. The iSCSI target is the open-e DSS product. Here is a snip of my messages log file: May 5 16:52:50 ying connection226:0: iscsi: detected conn error (1011) May 5 16:52:51 ying iscsid: connect failed (111) May 5 16:52:51 ying iscsid:
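Error 1011 is a generic open-iscsi connection failure, and threads like this usually end up tuning the NOP-out probes and replacement timeout in iscsid.conf. A sketch of the relevant keys; the values are common starting points, not the fix reached in this particular thread:

# /etc/iscsi/iscsid.conf
node.conn[0].timeo.noop_out_interval = 5        # seconds between NOP-out pings
node.conn[0].timeo.noop_out_timeout = 5         # seconds to wait for a NOP-in reply
node.session.timeo.replacement_timeout = 120    # seconds before failing I/O back up the stack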
2013 Jul 25
0
FNIC nested PVM
I'm trying to do nested XEN to give some of my colleagues a play area to work in, and it seems to work - but not quite. I can build the nested environment (using OVM, and I have to step back to OVM 3.2.2 to allow me to do PCI passthrough), but as soon as I start up a VM with a phys device passed through, the first layer loses all connectivity to the SAN. Setup: * 2 UCS
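For reference, PCI passthrough in an xm/xl guest config is just a pci list, and once a device is handed to a guest it is no longer usable by the layer that gave it away, which may be related to the connectivity loss described above. A sketch, with the BDF address as a placeholder:

# Guest config fragment (xm/xl syntax); 0000:0b:00.0 is a placeholder BDF
pci = ['0000:0b:00.0']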