Daniel Keisling
2008-Oct-22 16:22 UTC
[Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
Greetings,

Last night I manually unpresented and deleted a LUN (a SAN snapshot) that was presented to one node in a four-node RAC environment running OCFS2 v1.4.1-1. The system then rebooted with the following error:

Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_write_timeout:166 ERROR: Heartbeat write timeout to device dm-24 after 120000 milliseconds
Oct 21 16:45:34 ausracdb03 kernel: (27,1):o2hb_stop_all_regions:1873 ERROR: stopping heartbeat on all active regions.

I'm assuming that dm-24 was the LUN that was deleted. Looking back in the syslog, I see many of these errors from the time the snapshot was taken until the reboot:

Oct 21 16:42:54 ausracdb03 kernel: (6624,2):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-24": another node is heartbeating in our slot!

The errors stopped when the node came back up. However, after another snapshot was taken, the errors are back, and I'm afraid a node will reboot again when the LUN snapshot gets unpresented. Here are the steps that generate the errors.

After unmounting and deleting the LUN that contains the snapshot, I receive:

Oct 22 03:15:43 ausracdb03 multipathd: dm-28: umount map (uevent)
Oct 22 03:15:44 ausracdb03 kernel: ocfs2_hb_ctl[7721]: segfault at 0000000000000000 rip 0000000000428fa0 rsp 00007fff88a7efb8 error 4
Oct 22 03:15:44 ausracdb03 kernel: ocfs2: Unmounting device (253,28) on (node 2)

The kernel then senses that all SCSI paths to the device are gone, and multipathd marks all paths as down, which seems like correct behavior.

After creating and presenting a new snapshot, multipath sees the paths reappear, which also seems like normal behavior:

Oct 22 03:16:06 ausracdb03 multipathd: sdcj: tur checker reports path is up
Oct 22 03:16:06 ausracdb03 multipathd: 69:112: reinstated
Oct 22 03:16:06 ausracdb03 multipathd: mpath0: queue_if_no_path enabled
Oct 22 03:16:06 ausracdb03 multipathd: mpath0: Recovered to normal mode
Oct 22 03:16:06 ausracdb03 multipathd: mpath0: remaining active paths: 1
Oct 22 03:16:06 ausracdb03 multipathd: dm-27: add map (uevent)
Oct 22 03:16:06 ausracdb03 multipathd: dm-27: devmap already registered

However, I then get this message:

Oct 22 03:16:06 ausracdb03 kernel: (13210,2):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:06 ausracdb03 kernel: (8605,4):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!

I'm assuming dm-28 is the old snapshot, as there is now no dm-28 in the multipath map (multipath -ll | grep dm-28). The new snapshot has the device map name "dm-29".

I then mount the snapshot LUN (after changing the UUID and label):

Oct 22 03:16:30 ausracdb03 kernel: (9861,1):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:30 ausracdb03 kernel: ocfs2_dlm: Nodes in domain ("BCF5F59FF88A4BE0A75BC1491A021664"): 2
Oct 22 03:16:30 ausracdb03 kernel: (9860,1):ocfs2_find_slot:249 slot 0 is already allocated to this node!
Oct 22 03:16:30 ausracdb03 kernel: (9860,1):ocfs2_check_volume:1745 File system was not unmounted cleanly, recovering volume.
Oct 22 03:16:30 ausracdb03 kernel: kjournald starting. Commit interval 5 seconds
Oct 22 03:16:30 ausracdb03 kernel: ocfs2: Mounting device (253,28) on (node 2, slot 0) with ordered data mode.
Oct 22 03:16:30 ausracdb03 kernel: (9939,1):ocfs2_replay_journal:1076 Recovering node 0 from slot 3 on device (253,28)
Oct 22 03:16:32 ausracdb03 kernel: (9861,2):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:34 ausracdb03 kernel: (9861,2):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:36 ausracdb03 kernel: kjournald starting. Commit interval 5 seconds
Oct 22 03:16:36 ausracdb03 kernel: (9939,1):ocfs2_replay_journal:1076 Recovering node 1 from slot 2 on device (253,28)
Oct 22 03:16:36 ausracdb03 kernel: (9861,3):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:38 ausracdb03 kernel: (9861,3):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:40 ausracdb03 kernel: (9861,1):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:41 ausracdb03 kernel: kjournald starting. Commit interval 5 seconds
Oct 22 03:16:41 ausracdb03 kernel: (9939,1):ocfs2_replay_journal:1076 Recovering node 3 from slot 1 on device (253,28)
Oct 22 03:16:42 ausracdb03 kernel: (9861,3):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:44 ausracdb03 kernel: (9861,1):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:46 ausracdb03 kernel: (9861,3):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-28": another node is heartbeating in our slot!
Oct 22 03:16:47 ausracdb03 kernel: kjournald starting. Commit interval 5 seconds

We upgraded to v1.4.1-1 on Sunday, up from 1.2.8, and never received these errors under v1.2.8. Once again, these snapshot LUNs are presented to only one node in a four-node cluster.

How do I prevent this behavior? Should I be flushing the multipath mapping ("multipath -F", and perhaps restarting multipathd) after deleting the LUN? How do I tell OCFS2 to stop looking at the old device for the heartbeat? How do I tell OCFS2 to ignore read/write timeouts on LUNs that are unmounted and unpresented so that it won't fence itself?

Any insight would be greatly appreciated.

TIA,

Daniel
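For reference, the re-presentation steps described above correspond roughly to the sketch below. The map name "mpath_snap", mount point "/snapdb", and label "SNAP1" are placeholders, and the exact tunefs.ocfs2 options should be double-checked against the installed ocfs2-tools before use:

# Before deleting the old snapshot LUN: unmount it and flush its multipath map
$ umount /snapdb
$ multipath -f mpath_snap      # or "multipath -F" to flush all unused maps

# After presenting the new snapshot: give the copy a new UUID and label so it
# is not mistaken for the source volume, then mount it on this node only
$ tunefs.ocfs2 -U -L SNAP1 /dev/mapper/mpath_snap
$ mount -t ocfs2 /dev/mapper/mpath_snap /snapdb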
Sunil Mushran
2008-Oct-23 00:51 UTC
[Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
Are you mounting the snapshotted LUN on more than one node? If not, then use tunefs.ocfs2 to also make it a local mount. That is, do it at the same time you are changing the label and UUID. This will avoid the problem, as the fs will not start the heartbeat for local mounts.

However, this just avoids the issue. To resolve it I'll need more info. For starters, walk through the process and upload the messages file. Indicate the device you are snapshotting, etc. In your description you mention assuming you were snapshotting a particular device. Don't assume... because then I don't know what to make of it.

The ocfs2 heartbeat is designed to start on mount and stop on umount. But it may not work out that way. One handy command to use is:

$ ocfs2_hb_ctl -I -d /dev/dm-X

This will tell you the number of hb references on the device. If it is zero, then o2hb is not heartbeating on that device.

I see an ocfs2_hb_ctl segfault. Is that consistent? If so, it could indicate that the heartbeat stop was not successful and that the above command would return 1 hb reference. If so, then that's your likely problem.

Sunil

Daniel Keisling wrote:
> [...]
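A quick way to run the check Sunil suggests across every device-mapper node is a small loop like the one below (just a sketch; stale dm-N entries and device names will differ per system). A non-zero reference count on a device that is no longer mounted would point at a heartbeat thread that was never stopped:

# Print the o2hb reference count for each /dev/dm-* node
$ for d in /dev/dm-* ; do
      echo "== ${d} ==" ;
      ocfs2_hb_ctl -I -d ${d} ;
  done ;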
Daniel Keisling
2008-Dec-04 18:15 UTC
[Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
I've restarted the box and the heartbeat threads and messages are now gone. I've taken six snapshots and unmounted the filesystems several times, and the segmentation faults do not occur.

Thank you so much for looking into this, finding the problem, and getting me a fix. I look forward to the 1.4.2 release.

Daniel

> -----Original Message-----
> From: Sunil Mushran [mailto:sunil.mushran at oracle.com]
> Sent: Thursday, December 04, 2008 11:45 AM
> To: Daniel Keisling
> Cc: Joel Becker
> Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
>
> These could be hb threads that were not killed when you umounted those
> volumes. Have you restarted the box since you cleaned out those devices?
>
> Daniel Keisling wrote:
>> Sunil,
>>
>> I edited /dev/sdo and /dev/sdr and the rest of the corrupted devices
>> disappeared, so there are no more corrupted OCFS2 filesystems when doing
>> a 'mounted.ocfs2 -f'. However, the 'heartbeating in our slot' error
>> messages are still coming. The devices in question are not in the
>> device-mapper maps and are not mounted, but do appear in mounted.ocfs2.
>> Do I need to do the same procedure and wipe out the signature?
>>
>> Dec 4 10:29:35 ausracdbd01 kernel: (26064,2):o2hb_do_disk_heartbeat:770 ERROR: Device "dm-43": another node is heartbeating in our slot!
>>
>> [root at ausracdbd01 ~]# multipath -ll | grep dm-43
>> [root at ausracdbd01 ~]#
>>
>> [root at ausracdbd01 ~]# mounted.ocfs2 -f | grep dm-43
>> /dev/dm-43  ocfs2  ausracdbd01
>>
>> [root at ausracdbd01 ~]# mounted.ocfs2 -d | grep dm-43
>> /dev/dm-43  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>>
>> [root at ausracdbd01 ~]# mounted.ocfs2 -d | grep ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdw1   ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdat1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdbq1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdcn1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sddk1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdeh1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdfe1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/sdgb1  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>> /dev/dm-43  ocfs2  ce7c5099-145f-457b-9644-923202450f31
>>
>> Daniel
>>
>>> -----Original Message-----
>>> From: Sunil Mushran [mailto:sunil.mushran at oracle.com]
>>> Sent: Wednesday, December 03, 2008 3:01 PM
>>> To: Daniel Keisling
>>> Cc: Joel Becker; Sunil Mushran
>>> Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
>>>
>>> OK... so now we know what the problem is. Filed a bugzilla for this:
>>> http://oss.oracle.com/bugzilla/show_bug.cgi?id=1053
>>>
>>> Instead of waiting for the fix, it may be quicker if you fix this by
>>> hand. Do you have a binary editor? While we could script this, it will
>>> be safer if you _fix_ this manually.
>>>
>>> Say you had bvi. The steps for a 4K blocksize fs would be:
>>>
>>> $ bvi -b 8192 -s 512 /dev/sdo
>>>
>>> You will see the OCFSV2 signature at the very start. Edit 4F (O) to
>>> 00 (.), or to anything other than 'O'. In short, we want to clobber
>>> the signature. This needs to be repeated for each volume below. If you
>>> don't see the signature, abort; it means the blocksize is less than 4K,
>>> say 2K. In that case, the command becomes "bvi -b 4096 -s 512 DEVICE".
>>> You will know it is fixed when "mounted.ocfs2 -d" does not show any
>>> of these volumes.
>>>
>>> Sunil
>>>
>>> Daniel Keisling wrote:
>>>> [root at ausracdbd01 ~]# debugfs.ocfs2 -R "stat //heartbeat" /dev/sdo
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>>
>>>> [root at ausracdbd01 ~]# mount -t debugfs debugfs /debug
>>>> [root at ausracdbd01 ~]# debugfs.ocfs2 -R "stat //heartbeat" /dev/sdo
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>>
>>>> [root at ausracdbd01 ~]# for d in o r al ao bi bl cf ci dc df dz ec ew ez ft fw ; do
>>>>     echo Device /dev/sd${d} ;
>>>>     debugfs.ocfs2 -R "stat //heartbeat" /dev/sd${d} ;
>>>> done ;
>>>> Device /dev/sdo
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdr
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdal
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdao
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdbi
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdbl
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdcf
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdci
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sddc
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sddf
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sddz
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdec
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdew
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdez
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdft
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> Device /dev/sdfw
>>>> stat: OCFS2 directory corrupted '//heartbeat'
>>>> [root at ausracdbd01 ~]#
>>>>
>>>>> -----Original Message-----
>>>>> From: Sunil Mushran [mailto:sunil.mushran at oracle.com]
>>>>> Sent: Wednesday, December 03, 2008 1:07 PM
>>>>> To: Daniel Keisling
>>>>> Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
>>>>>
>>>>> I think I know what the issue is.
>>>>>
>>>>> Can you run the following on your box?
>>>>> $ debugfs.ocfs2 -R "stat //heartbeat" /dev/sdo
>>>>>
>>>>> Email me the output.
>>>>>
>>>>> While we are at it, why don't you run this script, as it may save
>>>>> us a roundtrip.
>>>>>
>>>>> $ for d in o r al ao bi bl cf ci dc df dz ec ew ez ft fw ; do
>>>>>     echo Device /dev/sd${d} ;
>>>>>     debugfs.ocfs2 -R "stat //heartbeat" /dev/sd${d} ;
>>>>> done ;
>>>>>
>>>>> All this does is dump the inode of the heartbeat inode file. I
>>>>> suspect these devices. Meaning no writing... only reading.
>>>>>
>>>>> Sunil
>>>>>
>>>>> Daniel Keisling wrote:
>>>>>> Yes, please do. I have development time on the machine for the
>>>>>> next couple of days.
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Sunil Mushran [mailto:sunil.mushran at oracle.com]
>>>>>>> Sent: Tuesday, December 02, 2008 8:16 PM
>>>>>>> To: Daniel Keisling
>>>>>>> Cc: ocfs2-users at oss.oracle.com
>>>>>>> Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
>>>>>>>
>>>>>>> Yes. Your diagnosis is correct.
>>>>>>>
>>>>>>> The ocfs2_hb_ctl segfault is not making any sense. The coredump has
>>>>>>> not been helpful. I may have to send you a debug build. strace also
>>>>>>> led me down a blind alley.
>>>>>>>
>>>>>>> Let me know if you will be willing to copy a debug build of the
>>>>>>> ocfs2_hb_ctl util. The coredump from that should help us nail down
>>>>>>> this issue.
>>>>>>>
>>>>>>> Sunil
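For anyone following the manual fix Sunil describes in the quoted messages above, the same one-byte signature clobber can also be done with dd instead of bvi. This is only a sketch under the same assumptions (a 4K blocksize filesystem, so the superblock begins at byte offset 8192; use 4096 for a 2K blocksize, and /dev/sdo is just the example device from the thread). It is destructive and must only be aimed at the stale snapshot devices, never at live volumes:

# Confirm the stale signature is where expected (should print "OCFSV2")
$ dd if=/dev/sdo bs=1 skip=8192 count=6 2>/dev/null ; echo

# Zero the first byte of the signature ('O' = 0x4F), the same edit bvi does by hand
$ dd if=/dev/zero of=/dev/sdo bs=1 count=1 seek=8192 conv=notrunc

# The device should no longer appear in the detect listing
$ mounted.ocfs2 -d | grep sdo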