Tom Hibbert
2005-Jun-14 22:16 UTC
[Xen-devel] Debian, Xen and DRBD: Enabling true server redundancy
Hello again Xenophiles,
I have a strong case for Xen clustering, and I'm knee deep in the dead
with builds at the moment, but I thought I'd post a bit about where I'm
at.
I've noticed a few people on the list having problems with Xen and DRBD,
so I thought I'd post an approximate walkthrough of the steps I've been
taking to bring it up. This guide is heavily Sarge-oriented, and may or
may not be of any use to anyone. The main reason for documenting it is
actually so I don't forget again the next time I do it. I've also gone to
great pains to do this all the right way (the Debian way), and shortly
you'll see my Sarge packages for xen-2.0.6 and kernel-source-2.4.30. For
now, use Adam's 2.0.5 packages.
This guide assumes you already have two Xen Dom0 machines running. You
may or may not have a dedicated network interface for Heartbeat/DRBD; it
is not required.
1. Build and install the drbd module
# apt-get install drbd0.7-module-source module-assistant
module-assistant is a very handy tool that works well with both vanilla
and Debianised kernel sources. Using it eliminates the need to re-patch
the kernel sources and rebuild.
# ARCH=xen module-assistant --kernel-dir=/usr/src/kernels/kernel-source-2.6.10 build drbd0.7-module
Obviously, replace the --kernel-dir directive with the path to your xen0
kernel source tree.
Once module-assistant has completed its machinations, install the
resulting deb on both machines:
# dpkg -i /usr/src/drbd0.7-module-*.deb
# update-modules
... and just to be sure it's worked:
# modprobe drbd
Note that drbd can only be configured as a module (for reasons
unfathomable to me).
Finally install the drbd admin utilities:
# apt-get install drbd0.7-utils
2. Configure the drbd
First, make sure both nodes have entries in /etc/hosts that match the
output of hostname. You must be able to resolve the remote node by its
hostname.
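For the two example nodes used in the drbd.conf below, the /etc/hosts
entries might look like this (hostnames and addresses taken from that
config; adjust to your own):

```
172.10.10.1    uplink-xen-1
172.10.10.2    uplink-xen-2
```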
Edit the drbd.conf and add resource stanzas for all block devices you
need to replicate.
# nano /etc/drbd.conf
resource "r1" {
    protocol C;

    startup {
        wfc-timeout 60;
        degr-wfc-timeout 60;
    }

    disk {
        on-io-error detach;
    }

    net {
        # I have left these in in case I need to use them later
        # timeout         60;
        # connect-int     10;
        # ping-int        10;
        # max-buffers     2048;
        # max-epoch-size  2048;
    }

    syncer {
        rate 100M;
        group 1;  # sync concurrently with r0
    }

    on uplink-xen-1 {
        device    /dev/drbd1;
        disk      /dev/md1;
        address   172.10.10.1:7789;
        meta-disk internal;
    }

    on uplink-xen-2 {
        device    /dev/drbd1;
        disk      /dev/md1;
        address   172.10.10.2:7789;
        meta-disk internal;
    }
}
Just so we're clear: the device declaration is the drbd device, and the
disk declaration is the backing block device that will store the
replicated data. "meta-disk internal" means that drbd uses a region near
the end of the device to store its metadata; you can use an external
device or file here, but internal reduces complexity somewhat.
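For reference, the external variant would look something like the
fragment below. This is my illustration, not from the original post:
/dev/sdc1 is a hypothetical spare partition, and the [0] index selects
the first metadata slot on it; check drbd.conf(5) for the exact syntax of
your drbd version.

```
on uplink-xen-1 {
    device    /dev/drbd1;
    disk      /dev/md1;
    address   172.10.10.1:7789;
    meta-disk /dev/sdc1[0];   # hypothetical dedicated metadata partition
}
```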
NOTE: when configuring replication using an existing filesystem, i.e. one
that won't be freshly created after drbd is brought up, you will probably
need to run e2resize on it first to shrink it clear of the internal
metadata region at the end of the device, otherwise you may see "attempt
to access beyond end of device" errors.
Copy the drbd.conf file to both nodes and start drbd. Make sure the
referenced disks are not mounted before drbd is started, or Bad Things
Will Happen(tm).
# /etc/init.d/drbd start
drbd will come up on both nodes in "secondary" mode.
Make your "primary" node the primary for all drbd devices:
# drbdsetup /dev/drbdX primary --do-what-I-say
You can check the drbd status with:
# cat /proc/drbd
You may wish to wait for replication to complete before moving on to the
next step.
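One way to script that wait is to poll /proc/drbd until no device reports
a resync in progress. This is a hypothetical helper, not from the
original post; it assumes drbd 0.7's /proc/drbd format, where an active
resync shows a connection state of SyncSource or SyncTarget:

```shell
#!/bin/bash
# Block until no drbd device is resyncing, by polling the status file.
wait_for_sync() {
    local status_file="${1:-/proc/drbd}"
    # Nothing to wait for if drbd is not loaded on this machine.
    [ -r "$status_file" ] || return 0
    while grep -Eq 'cs:Sync(Source|Target)' "$status_file"; do
        sleep 5
    done
}

wait_for_sync /proc/drbd && echo "all drbd devices consistent"
```

Run it on the primary after bringing drbd up; it returns as soon as
/proc/drbd shows only Connected devices.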
3. Installing heartbeat
# apt-get install heartbeat
# nano /etc/heartbeat/ha.cf
deadtime 60
warntime 30
initdead 120
bcast eth0
auto_failback off
node host1
node host2
logfacility local0
# nano /etc/heartbeat/haresources
host1 drbddisk::r0 drbddisk::r1 xendomains::domU
I created a simple "xendomains" script for (re)starting xen domains from
heartbeat.
/etc/ha.d/resource.d/xendomains
#!/bin/bash

XM="/usr/sbin/xm"
CONFPATH="/etc/xen/"

RES="$1"
CMD="$2"

case "$CMD" in
    start)
        $XM create -f $CONFPATH$RES
        ;;
    stop)
        exec $XM destroy $RES
        ;;
    status)
        $XM list | awk '{print $1}' | grep $RES > /dev/null
        if [ $? -eq 0 ]
        then
            echo running
        else
            echo stopped
        fi
        ;;
    *)
        echo "Usage: xendomains [filename] {start|stop|status}"
        exit 1
        ;;
esac

exit 0
There are a few more files that need to be edited:
# nano /etc/ha.d/authkeys
auth 1
1 crc
# chmod 600 /etc/ha.d/authkeys
The builtin drbddisk resource handler had some problems, so I modified
it slightly.
# nano /etc/ha.d/resource.d/drbddisk
#!/bin/bash
#
# This script is intended to be used as a resource script by heartbeat
#
# Jan 2003 by Philipp Reisner.
#
###
DEFAULTFILE="/etc/default/drbd"
DRBDADM="/sbin/drbdadm"
if [ -f $DEFAULTFILE ]; then
. $DEFAULTFILE
fi
if [ "$#" -eq 2 ]; then
RES="$1"
CMD="$2"
else
RES="all"
CMD="$1"
fi
case "$CMD" in
start)
# try several times, in case heartbeat deadtime
# was smaller than drbd ping time
try=6
while true; do
$DRBDADM primary $RES && break
let "--try" || exit 20
sleep 1
done
;;
stop)
# exec, so the exit code of drbdadm propagates
exec $DRBDADM secondary $RES
;;
status)
if [ "$RES" = "all" ]; then
echo "A resource name is required for status inquiries."
exit 10
fi
ST=$( $DRBDADM state $RES 2> /dev/null )
ST=${ST%/*}
if [ "$ST" = "Primary" ]; then
echo "running"
else
echo "stopped"
fi
;;
*)
echo "Usage: drbddisk [resource] {start|stop|status}"
exit 1
;;
esac
exit 0
Test the heartbeat resource scripts to ensure they are able to bring
up/down both the drbddisk and the xendomain.
# /etc/ha.d/resource.d/drbddisk r0 start
# /etc/ha.d/resource.d/drbddisk r0 stop
# /etc/ha.d/resource.d/xendomains xenu start
# /etc/ha.d/resource.d/xendomains xenu stop
Bring up heartbeat on both machines:
# /etc/init.d/heartbeat start
Check status on the primary node
# cat /proc/drbd
version: 0.7.10 (api:77/proto:74)
SVN Revision: 1743 build by phil@mescal, 2005-01-31 12:22:07
0: cs:Connected st:Primary/Secondary ld:Consistent
ns:2547816 nr:1052796 dw:3600612 dr:313968 al:1986 bm:436 lo:0 pe:0
ua:0 ap:0
1: cs:Connected st:Primary/Secondary ld:Consistent
ns:320 nr:8 dw:328 dr:240 al:1 bm:1 lo:0 pe:0 ua:0 ap:0
# xm list
Name Id Mem(MB) CPU State Time(s) Console
Domain-0 0 123 0 r---- 1361.9
uplink 4 863 1 -b--- 56.6 9604
Congratulations, you are a winner!
Please give me some feedback on this documentation style, i.e. does it
work? It comes from an auto-documentation system I'm fiddling with, using
Plone. It's not quite ready for primetime, but I think you can take a
guess at how it works. The object is to provide a stage between build
prototyping by hand and build scripting. Eventually, semantic processing
should be able to generate a generic script to do any work recorded by
hand. Primarily, this is to document my impending psychonautical journey
into the black hole that is building a five-nines Xen cluster. So let me
know if you are able to follow it.
Tom
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Michael Paesold
2005-Jun-15 07:04 UTC
Re: [Xen-devel] Debian, Xen and DRBD: Enabling true server redundancy
Tom Hibbert wrote:

> Hello again Xenophiles,
>
> I have a strong case for Xen clustering, and I'm knee deep in the dead
> with builds at the moment, but I thought I'd post a bit about where
> I'm at.

I am one of those trying to use Xen with drbd. Right now I'm still
struggling with Xen and CentOS 3, so if anyone has good/bad experience
with building a stable kernel for that distribution I would appreciate
any hints. What Debian version do you recommend? Especially with regards
to /lib/tls? Is it possible to disable tls in Debian 3.1 without ill
effects?

A comment on your setup: for /etc/ha.d/resource.d/xendomains, the stop
command is rather brutal, isn't it? You do an "exec xm destroy $RES",
which is virtually pulling the plug, as I understand it. What about
creating another drbd disk that will be mounted in dom0 on the primary,
and then doing "xm save $RES /saved-vms/$RES"? The start command could
then look into /saved-vms and, if a file exists for the domain, do xm
restore, otherwise xm create.

I have created a block-drbd script that allows Xen to automatically make
a drbd device primary when starting a VM (attached). It has some
limitations, i.e. the name of the drbd resource must match the device
node. The file goes into /etc/xen/scripts. Then modify xend-config.sxp
and add:

# Setup script for drbd-backed block devices
(block-drbd block-drbd)

Now you can use it in the domain configuration, e.g.:

disk = [ 'drbd:drbd0,hda1,w' ]

Xend will now automatically do "drbdadm primary drbd0" before start, and
fail if that does not work. It will "drbdadm secondary drbd0" after
shutdown or destroy. Comes in quite handy.

Please note: this works for me, but I don't know if this is correctly
done. Provided as-is. So please comment.

Best Regards,
Michael Paesold
Helmut Wollmersdorfer
2005-Jun-15 09:01 UTC
[Xen-devel] Re: Debian, Xen and DRBD: Enabling true server redundancy
Tom Hibbert wrote:

> # modprobe drbd
> Note that drbd can only be configured as a module (for reasons
> unfathomable to me).

Some people compile it into the kernel, which should work. If not with a
XENized one, maybe somebody on the drbd lists has a solution.

> The builtin drbddisk resource handler had some problems, so I modified
> it slightly.

If you report this to the developers of DRBD, that would be fine.

> Please give me some feedback on this documentation style.

First, great thanks for your doc. Some people do not like this 'on my
$distro I did' style. IMHO it is very valuable for others, and has a high
cost/benefit ratio. Ideally an author of such a Mini-HOWTO avoids the
common mistakes, which potentially lead to confusion:

- don't use 'now' or 'current version'; use explicit dates like '15th
Jun 2005' and full version numbers like 'kernel-source-2.6.8-15 from
Debian Sarge'
- try to list all preconditions which differ from a plain default
installation
- the history listing of the console commands should be complete, i.e.
contain even such unimportant things as 'cd ..'. Sometimes it is better
to write 'compile the kernel with patches and install it' than to give
an _incomplete_ history.
- use full paths of files and directories
- use copies of the command line where anybody can see the current host,
current user and current directory, e.g. 'helmut@node1:~$ cat
/proc/drbd'. Especially on clusters the node/host is important.
- describe diagnostics like you did with e.g. 'cat /proc/drbd'

BTW: as I understand it, you mount a DRBD device for the whole '/' of
the guest. I will try the similar idea with www.linux-vserver.org
instead of XEN.

Helmut Wollmersdorfer
Nils Toedtmann
2005-Jun-15 10:50 UTC
Re: [Xen-devel] Debian, Xen and DRBD: Enabling true server redundancy
On Wednesday, 15 Jun 2005 at 10:16 +1200, Tom Hibbert wrote:

> I have a strong case for Xen clustering, and I'm knee deep in the dead
> with builds at the moment, but I thought I'd post a bit about where I'm
> at.
> I've noticed a few people on the list having problems with Xen and
> DRBD, so I thought I'd post an approximate walkthrough of the steps
> I've been taking to bring it up.

A meta-question: why did you choose DRBD instead of RAID1-on-(G)NBD
(more precisely: DRBD-on-local-RAID instead of
RAID1-on-GNBD-on-local-RAID)? Does DRBD stay usable (mountable) while
resyncing after a disconnect?

> [...]
> Please give me some feedback on this documentation style.

Thanx for this nice writeup!

/nils.