I've made and successfully tested a ZFS VCS agent that replaces the typical
usage of VCS with VxVM/VxFS. It replaces the DiskGroup, Volume, Mount and even
NFS VCS resources. I thought I'd post the instructions I wrote up below for
anyone who may be interested. If anyone has any recommendations for
improvement, I'd love to hear them.
RCA
--
UNIX Administrator, BAE Systems Land & Armaments
desk 763-572-6684 / mobile 612-419-9362
++++++++++++++++++++++++++++++++++++
How to Use ZFS with Symantec's VERITAS Cluster Server (VCS) 4.1
VCS is typically used with Symantec Storage Foundation, which includes VERITAS
Volume Manager and VERITAS Filesystem, in conjunction with the VCS DiskGroup,
Volume, and Mount resource types. This will show you how to use ZFS on Solaris
10 with VCS by creating a custom agent. The agent will export a ZFS pool
(residing on shared storage) from one node, then import it on whichever node
the service group it's associated with is moved to. This solution marries the
power of ZFS with the high availability one achieves with VCS.
Creating a ZFS custom agent replaces the functionality provided by the
DiskGroup, Volume, Mount and NFS resources, since all equivalent attributes in
ZFS are stored within the pool itself and persist across zpool exports/imports.
I. Create the Types.cf File for the Agent
This file defines the name and variables of the agent. It will be imported
later, but create it now. Create a file called ZFSTypes.cf with these contents:
--------------------------------------------------------
type ZFS (
        static str ArgList[] = { ZPoolName }
        str ZPoolName
)
--------------------------------------------------------
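For reference, once this type is loaded, a resource of the new type in main.cf
comes out looking something like this (the resource name zfspool01 and pool
name tank are made up for illustration):
--------------------------------------------------------
ZFS zfspool01 (
        ZPoolName = tank
        )
--------------------------------------------------------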
II. Create ZFS Agent Files
Make a directory on each cluster node for the agent:
mkdir /opt/VRTSvcs/bin/ZFS
On each node, make a symlink to the ScriptAgent binary in the new directory:
cd /opt/VRTSvcs/bin/ZFS; ln -s ../ScriptAgent ZFSAgent
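Since the directory and link must exist on every node, a loop like the
following saves some typing (node1/node2 are placeholders for your cluster
node names; this assumes root ssh access between nodes):
--------------------------------------------------------
# Create the agent directory and ScriptAgent symlink on each node
for node in node1 node2; do
    ssh $node "mkdir -p /opt/VRTSvcs/bin/ZFS && \
        ln -s ../ScriptAgent /opt/VRTSvcs/bin/ZFS/ZFSAgent"
done
--------------------------------------------------------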
Create the online, offline and monitor scripts in /opt/VRTSvcs/bin/ZFS. These
represent script entry points for the agent. The contents of each file are:
online:
--------------------------------------------------------
#!/bin/sh
#
# Online entry point for ZFS script agent for VCS 4.1
#
# The name of the ZFS resource
RESNAME=$1
# This is the ZFS pool name
ZPOOL=$2
# My hostname
HOSTNAME=`uname -n`
# Source the VCS agent logging functions
. /opt/VRTSvcs/bin/ag_i18n_inc.sh
VCSAG_SET_ENVS $RESNAME
# Import the ZFS pool
zpool import $ZPOOL
if [ $? -eq 0 ]; then
    VCSAG_LOG_MSG "I" "SUCCESSFULLY imported ZFS pool $ZPOOL on $HOSTNAME" 372
    exit 0
else
    VCSAG_LOG_MSG "C" "PROBLEMS importing ZFS pool $ZPOOL on $HOSTNAME" 373
    exit 1
fi
--------------------------------------------------------
offline:
--------------------------------------------------------
#!/bin/sh
#
#
# Offline entry point for ZFS script agent for VCS 4.1
#
# The name of the ZFS resource
RESNAME=$1
# This is the ZFS pool name
ZPOOL=$2
# My hostname
HOSTNAME=`uname -n`
# Source the VCS agent logging functions
. /opt/VRTSvcs/bin/ag_i18n_inc.sh
VCSAG_SET_ENVS $RESNAME
# Export the ZFS pool before being imported on another node
zpool export $ZPOOL
if [ $? -eq 0 ]; then
    VCSAG_LOG_MSG "I" "SUCCESSFULLY exported ZFS pool $ZPOOL from $HOSTNAME" 374
    exit 0
else
    VCSAG_LOG_MSG "C" "PROBLEMS exporting ZFS pool $ZPOOL from $HOSTNAME" 374
    exit 1
fi
--------------------------------------------------------
monitor:
--------------------------------------------------------
#!/bin/sh
#
# Monitor entry point for ZFS script agent for VCS 4.1
#
# The name of the ZFS resource
RESNAME=$1
# This is the ZFS pool name
ZPOOL=$2
# My hostname
HOSTNAME=`uname -n`
# Source the VCS agent logging functions
. /opt/VRTSvcs/bin/ag_i18n_inc.sh
VCSAG_SET_ENVS $RESNAME
# Capture the "errors:" line from the status output of $ZPOOL
ZERR=`zpool status $ZPOOL 2>/dev/null | grep '^errors:'`
# zpool status exits non-zero if the pool is not imported on this node
zpool status $ZPOOL >/dev/null 2>&1
if [ $? -eq 0 ]; then
    if [ "$ZERR" != "errors: No known data errors" ]; then
        # This exit code tells VCS the resource is
        # online, but with lower confidence
        VCSAG_LOG_MSG "I" "There may be a PROBLEM with ZFS pool $ZPOOL on $HOSTNAME" 369
        exit 101
    fi
    # This exit code tells VCS the resource is
    # online with high confidence
    #
    # Uncomment for debugging
    #VCSAG_LOG_MSG "I" "ZFS pool $ZPOOL on $HOSTNAME is UP and HEALTHY" 370
    exit 110
else
    # VCS interprets this exit code as offline
    VCSAG_LOG_MSG "W" "PROBLEMS with ZFS pool $ZPOOL on $HOSTNAME. Is it failing over?" 371
    exit 100
fi
--------------------------------------------------------
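One step that is easy to miss: the entry point scripts must be executable on
every node, or the agent will not be able to run them:
--------------------------------------------------------
chmod 755 /opt/VRTSvcs/bin/ZFS/online \
    /opt/VRTSvcs/bin/ZFS/offline \
    /opt/VRTSvcs/bin/ZFS/monitor
--------------------------------------------------------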
III. Import the ZFSTypes.cf File
Open the VCS GUI (hagui) and go to File -> Import Types..., then select your
ZFSTypes.cf file.
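If you would rather skip the GUI, the types file can also be brought in by
including it from main.cf while VCS is down; a rough sketch, using the default
VCS configuration paths (verify them against your install):
--------------------------------------------------------
# Stop VCS on all nodes but leave applications running
hastop -all -force
# Copy the types file into the configuration directory
cp ZFSTypes.cf /etc/VRTSvcs/conf/config/
# Then add this line near the other includes at the top of
# /etc/VRTSvcs/conf/config/main.cf:
#     include "ZFSTypes.cf"
# and start VCS again on each node with hastart
--------------------------------------------------------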
IV. Use the ZFS Agent
Now that you have done these steps, you should see a daemon for the ZFS agent
in the process table before you try to use it:
ps -ef | grep Agent
root 9635 1 0 Feb 07 ? 3:01 /opt/VRTSvcs/bin/ZFS/ZFSAgent
-type ZFS
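If the daemon is not running, you should be able to start it by hand with
haagent (double-check the syntax against your VCS version):
--------------------------------------------------------
# Start the ZFS agent on this node and confirm its state
haagent -start ZFS -sys `uname -n`
haagent -display ZFS
--------------------------------------------------------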
It will now show up as a resource type in the cluster and you can use it when
creating or modifying service groups. Before using it, create the ZFS pool on
disk that is shared via a SAN to all the nodes in the cluster, and test
manually exporting it with 'zpool export <pool name>' and importing it onto
the other nodes using 'zpool import <pool name>'. You can see which pools are
available for import with 'zpool import' (no arguments). When using the
resource, the only value needed is the ZFS pool name for the ZPoolName
attribute that was defined in the ZFSTypes.cf file.
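For the command-line inclined, adding the resource to an existing service
group would look something like this (the group name app_sg, resource name
zfspool01 and pool name tank are made up for illustration; substitute your
own):
--------------------------------------------------------
# Open the cluster configuration for writing
haconf -makerw
# Add a ZFS resource to the service group
hares -add zfspool01 ZFS app_sg
# Point it at the shared pool and enable it
hares -modify zfspool01 ZPoolName tank
hares -modify zfspool01 Enabled 1
# Save and close the configuration
haconf -dump -makero
--------------------------------------------------------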