I am queasy about recommending such a setup to anyone. It is one thing to
handle a workload; the real problem is handling user/admin errors. You are
essentially running a local volume manager that is unaware of the other node,
so any reconfiguration that is not coordinated between the nodes can lead to
corruption. Below that you have drbd, which is a fine block device replication
solution. But I would personally choose iscsi, which is an excellent low-cost
shared device and does not limit you to 2 nodes. The iscsi target in SLES is
known to be good. Why not just use that, and use drbd to replicate the device
(like EMC SRDF) for still higher availability?
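If you go that route, the rough shape would be something like the following.
This is only a sketch, assuming the IET-based target (ietd.conf) on the
storage box and open-iscsi on the cluster nodes; the IQN, the portal address
and the /dev/drbd0 backing device are placeholders for whatever your setup
actually uses.

On the target, export the replicated device:

Target iqn.2010-01.com.example:ocfs2-store
        Lun 0 Path=/dev/drbd0,Type=blockio

On each ocfs2 node, discover and log in:

# iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# iscsiadm -m node -T iqn.2010-01.com.example:ocfs2-store -p 192.168.1.10 --login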
Having said that, the 2 you are seeing is not because of the number of nodes
but because of the hb thread. umount is supposed to stop that hb thread.
Maybe that is not happening.
# ps aux | grep o2hb
You should see one when the volume is mounted and none once it is unmounted.
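If the thread is still there after the umount, ocfs2_hb_ctl should let you
inspect the stale heartbeat reference and, as a workaround, stop it by hand.
A sketch, using your snapshot device as the example:

# ocfs2_hb_ctl -I -d /dev/vg00/lv00snap
# ocfs2_hb_ctl -K -d /dev/vg00/lv00snap

The first prints the heartbeat reference count for the device, the second
kills the heartbeat region on it; after that lvremove should no longer see
the volume as open.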
Sunil
Armin Wied wrote:
> Hello group!
>
> I'm pretty new to ocfs2 and clustered file systems in general.
> I was able to set up a 2 node cluster (CentOS 5.4) with ocfs2 1.4.4 on DRBD
> on top of a LVM volume.
>
> Everything works like a charm and is rock solid, even under heavy load
> conditions, so I'm really happy with it.
>
> However, there remains one little problem: I'd like to do backups with
> snapshots. Creating the snapshot volume, mounting, copying and dismounting
> works as expected. But I can't delete the snapshot volume after it was
> mounted once.
>
> What I do is:
>
> lvcreate -L5G -s -n lv00snap /dev/vg00/lv00
> tunefs.ocfs2 -y --cloned-volume /dev/vg00/lv00snap
> mount -t ocfs2 /dev/vg00/lv00snap /mnt/backup
>
> (copy stuff)
>
> umount /mnt/backup
> lvremove -f /dev/vg00/lv00snap
>
> lvremove fails, saying that the volume is open. Checking with lvdisplay, it
> tells me "# open" is 1.
> And that's the funny thing: After creating the snapshot volume, # open is 0,
> which is not a surprise. After mounting the volume, # open is 2 - which is
> the same as for the other ocfs2 volume and makes sense to me, as there are
> 2 nodes. But after unmounting the snapshot volume, the number decreases to
> 1, not to 0, so LVM considers the volume still open.
>
> I also tried mounting read-only and/or adding "--fs-features=local" to
> tunefs.ocfs2, without success. At the moment I have to reboot the node to
> be able to remove the snapshot.
>
> So what am I doing wrong?
>
> Thanks a lot for any hint!
>
> Armin