After the clone, you probably want to run tunefs.ocfs2 -U to reset the UUID.
This is one of the steps we do when cloning volumes for database refreshes.
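Roughly like this, as a sketch only (the /dev/sdb1 device and the mount point are placeholders taken from your logs, and tunefs.ocfs2 needs the volume unmounted on every node first):

  # umount <mountpoint>          (on each node that has the cloned volume mounted)
  # tunefs.ocfs2 -U /dev/sdb1    (writes a freshly generated UUID to the cloned volume)
  # mounted.ocfs2 -d             (lists OCFS2 devices with their UUIDs, so you can confirm the copy no longer matches the original)

Then remount.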
From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of brad hancock
Sent: Wednesday, November 24, 2010 12:35 PM
To: ocfs2-users at oss.oracle.com
Subject: [Ocfs2-users] heartbeat and slot issues.
I set up a host with an OCFS2 partition on a SAN and then cloned that host to
another and renamed it. Both machines mount their OCFS2 partitions but give the
following errors.
Host that was cloned:
(1888,0):o2hb_do_disk_heartbeat:762 ERROR: Device "sdb1": another node is heartbeating in our slot!
[345413.242260] sd 1:0:0:0: reservation conflict
[345413.242270] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[345413.242274] end_request: I/O error, dev sdb, sector 1735
[345413.242536] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[345413.242788] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[345413.243159] sd 1:0:0:0: reservation conflict
[345413.243163] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[345413.243166] end_request: I/O error, dev sdb, sector 1735
[345413.243401] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[345413.243639] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[448460.370132] sd 1:0:0:0: reservation conflict
[448460.370145] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[448460.370149] end_request: I/O error, dev sdb, sector 1735
[448460.370395] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[448460.370638] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
Clone:
sd 1:0:0:0: reservation conflict
[17643.588011] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[17643.588011] end_request: I/O error, dev sdb, sector 1735
[17643.588011] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[17643.588011] (1859,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[17643.588011] sd 1:0:0:0: reservation conflict
This didn't seem to be a problem, but I'm noticing the hosts are no longer
seeing the same data. I unmounted the drives and remounted them, and they were
the same again.
Thanks for any guidance,
cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 10.x.x.248
        number = 0
        name = smes01
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.x.x.249
        number = 1
        name = smes02
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
The cluster.conf is the same on both hosts.