Dave Pratt
2007-Nov-02 19:14 UTC
[zfs-discuss] ZFS Jumpstart integration and the amazing invisible zpool.cache
I've been wrestling with implementing some ZFS mounts for /var and /usr in a Jumpstart setup. I know that Jumpstart doesn't "know" anything about ZFS, in that you can't define ZFS volumes or pools in the profile. I've gone ahead and let the JS do a base install into a single UFS slice and then attempted to create the zpool and ZFS volumes in the finish script and ufsdump|ufsrestore the data from the /usr and /var partitions into the new ZFS volumes. The problem is that there doesn't seem to be a way to ensure that the zpool is imported into the freshly built system on the first reboot.

I see in the archives here from a few weeks ago that someone asked a similar question, and it was suggested that as part of the finish script the install environment's /etc/zfs/zpool.cache could be copied to /a/etc/zfs/zpool.cache, but it has been my experience through some serious testing that when creating and managing ZFS pools and volumes in the Jumpstart scripts, no zpool.cache file is created at all. Even including "find / -name zpool.cache" in the finish script returns no hits on that file name.

Now, I'm aware that the zpool.cache file isn't really intended to be used for administrative tasks, as its format and existence aren't well documented or solidified as part of the ZFS management framework going forward. I would, however, REALLY like to know why this file is created in every other situation when managing ZFS pools/vols, but not in this one. I would be equally curious to know whether it is possible to force the creation of this file or, as a last option, to at least have zpool statically linked in the default Solaris distribution so that I can put a method and the toolchain necessary for importing pools into the early part of the SMF boot sequence.

Thanks in advance for any insight as to how to work this out.
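For the record, the workaround suggested in that earlier thread would boil down to something like this fragment in the finish script (just a sketch; /a is the Jumpstart target root, and the source path assumes the install miniroot ever writes the cache, which in my testing it never does):

# Hypothetical finish-script fragment per the archive suggestion: copy
# the miniroot's zpool.cache into the target root so the pool is picked
# up on first boot. The "if" never fires here -- the file never exists.
if [ -f /etc/zfs/zpool.cache ]; then
        cp -p /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache
else
        echo "no zpool.cache in the install environment to copy" >&2
fi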
Tomas Ögren
2007-Nov-03 12:55 UTC
[zfs-discuss] ZFS Jumpstart integration and the amazing invisible zpool.cache
On 02 November, 2007 - Dave Pratt sent me these 2,0K bytes:

> I've been wrestling with implementing some ZFS mounts for /var and
> /usr in a Jumpstart setup. I know that Jumpstart doesn't "know" anything
> about ZFS, in that you can't define ZFS volumes or pools in the profile.
> I've gone ahead and let the JS do a base install into a single UFS slice
> and then attempted to create the zpool and ZFS volumes in the finish
> script and ufsdump|ufsrestore the data from the /usr and /var partitions
> into the new ZFS volumes. The problem is that there doesn't seem to be a
> way to ensure that the zpool is imported into the freshly built system
> on the first reboot.

Ugly hack I've been doing to create ZFS thingies under jumpstart/sparc, but it works..

---8<--- profile entry ---8<---

filesys c1t1d0s7 free /makezfs logging

or

filesys c1t1d0s7 free /makezfsmirror1 logging
filesys c1t2d0s7 free /makezfsmirror2 logging

---8<--- run first in client_end_script ---8<---

#!/bin/sh

echo ZFS-stuff

dozfs=0
dozfsmirror=0
if [ -d /a/makezfs ]; then
  dozfs=1
fi
if [ -d /a/makezfsmirror1 ]; then
  dozfs=1
  dozfsmirror=1
fi
test $dozfs = 1 || exit 0

if [ $dozfsmirror = 1 ]; then
  umount /a/makezfsmirror1
  umount /a/makezfsmirror2
  disk1=`grep /makezfsmirror1 /a/etc/vfstab|awk '{print $1}'`
  disk2=`grep /makezfsmirror2 /a/etc/vfstab|awk '{print $1}'`
else
  umount /a/makezfs
  disk1=`grep /makezfs /a/etc/vfstab|awk '{print $1}'`
fi

perl -p -i.bak -e 's,.*/makezfs.*,#,' /a/etc/vfstab

# do it twice due to bug, see
# http://bugs.opensolaris.org/view_bug.do?bug_id=6566433
zpool create -f -R /a -m /data data $disk1 || zpool create -f -R /a -m /data data $disk1

if [ "x$disk2" != "x" ]; then
  zpool attach data $disk1 $disk2
fi

zfs set compression=on data
zfs set mountpoint=none data
zfs create data/lap
zfs create data/scratch
zfs create data/postfixspool
zfs set mountpoint=/lap data/lap
zfs set mountpoint=/scratch data/scratch
mkdir -p /a/var/spool/postfix
zfs set mountpoint=/var/spool/postfix data/postfixspool
zfs set reservation=256M data/postfixspool

echo ZFS-stuff done

---8<--- run last in client_end_script ---8<---

#!/bin/sh

zpool list | grep -w data > /dev/null || exit 0

echo /sbin/zpool export data
/sbin/zpool export data
echo /sbin/mount -F lofs /devices /a/devices
/sbin/mount -F lofs /devices /a/devices
echo chroot /a /sbin/zpool import data
chroot /a /sbin/zpool import data

The final step is the trick ;)

/Tomas

--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
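PS. Why the chroot does the job: the import runs with the process rooted at /a, so the absolute cache path /etc/zfs/zpool.cache appears to resolve inside the installed system rather than the miniroot. If you want to convince yourself it took, a quick optional check right after the import (using the pool name "data" from the scripts above):

# The chroot'ed import should have left the cache inside the target
# root; this file is what gets the pool imported on the first real boot.
ls -l /a/etc/zfs/zpool.cache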
Dave Pratt
2007-Nov-05 23:47 UTC
[zfs-discuss] ZFS Jumpstart integration and the amazing invisible zpool.cache
> ---8<--- run last in client_end_script ---8<---
>
> #!/bin/sh
>
> zpool list | grep -w data > /dev/null || exit 0
>
> echo /sbin/zpool export data
> /sbin/zpool export data
> echo /sbin/mount -F lofs /devices /a/devices
> /sbin/mount -F lofs /devices /a/devices
> echo chroot /a /sbin/zpool import data
> chroot /a /sbin/zpool import data
>
> The final step is the trick ;)
>
> /Tomas

Tomas, thank you a million times over for this suggestion. I had a few little hangups getting this implemented, but here is the script-fu that accomplished it. Some of it is still a bit kludgy for my tastes, but I expect (Sun, are you listening?) that ZFS root and ZFS targets will be supported natively in Jumpstart soon enough. The shuffle of data after the ufsdump/ufsrestore is a little different if your initial Jumpstart profile puts /var and /usr on separate partitions. I think that dump/restore in the current CVS repo for OpenSolaris might have different behavior.

# Define some useful variables
DISK1=`/bin/echo ${SI_DISKLIST}|/bin/awk -F, '{print $1}'`
DISK2=`/bin/echo ${SI_DISKLIST}|/bin/awk -F, '{print $2}'`

# create the base zfs mirror device pool
echo "rebuild device nodes"
devfsadm
echo "rebuilding device nodes on target root"
devfsadm -r /a
echo "Destroy existing base zpool"
zpool destroy -f base
echo "Create new base zpool as a mirror of slice 3 from both disks"
zpool create -m none base mirror ${DISK1}s3 ${DISK2}s3
echo "Create base/var zfs vol"
zfs create -o mountpoint=legacy -o atime=off base/var
echo "Create base/usr zfs vol"
zfs create -o mountpoint=legacy -o atime=off base/usr
echo "Create base/spool zfs vol"
zfs create -o mountpoint=legacy -o atime=off base/spool

echo "Creating and setting perms for /a/var.z"
mkdir /a/var.z
chmod 755 /a/var.z
chown 0:0 /a/var.z
echo "Creating and setting perms for /a/usr.z"
mkdir /a/usr.z
chmod 755 /a/usr.z
chown 0:0 /a/usr.z

echo "Adding lines to vfstab for zfs mounts"
echo "base/var - /var zfs - yes -">>/a/etc/vfstab
echo "base/usr - /usr zfs - yes -">>/a/etc/vfstab
echo "base/spool - /var/spool zfs - yes -">>/a/etc/vfstab

echo "mounting var.z and usr.z"
mount -F zfs base/var /a/var.z
mount -F zfs base/usr /a/usr.z
echo "dumping /a/usr to /a/usr.z"
(cd /a/usr.z;ufsdump 0f - /a/usr|ufsrestore rf -;mv ./usr/* ./;rmdir ./usr)
echo "dumping /a/var to /a/var.z"
(cd /a/var.z;ufsdump 0f - /a/var|ufsrestore rf -;mv ./var/* ./;rmdir ./var)

echo "export base zpool"
/sbin/zpool export base
echo "loop /devices to /a/devices"
/sbin/mount -F lofs /devices /a/devices
echo "import base zpool in /a chroot"
chroot /a /sbin/zpool import base

mv /a/var /a/var.local
mv /a/usr /a/usr.local
echo "Creating and setting perms for /a/var"
mkdir /a/var
chmod 755 /a/var
chown 0:0 /a/var
echo "Creating and setting perms for /a/usr"
mkdir /a/usr
chmod 755 /a/usr
chown 0:0 /a/usr
echo "unmounting /a/var.z and /a/usr.z"
umount /a/var.z
umount /a/usr.z

echo "importing zpool base again for the final time"
zpool import -f base
echo "mounting /a/usr and /a/var for sane shutdown"
mount -F zfs base/var /a/var
mount -F zfs base/usr /a/usr
echo "move spool contents"
mv /a/var/spool /a/var/spool.old
echo "create new mount point"
mkdir /a/var/spool
chmod 755 /a/var/spool
chown 0:3 /a/var/spool
echo "mounting new spool zfs vol"
mount -F zfs base/spool /a/var/spool
echo "moving spool contents into new mount"
mv /a/var/spool.old/* /a/var/spool/
echo "Finished!"
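One loose end worth noting: the pre-ZFS copies of /var and /usr are left behind as /var.local and /usr.local on the new system (that's the mv to *.local above). Once the first boot has been verified, reclaiming the space is something like this, run on the booted client rather than in the finish script (purely optional):

# Hypothetical post-first-boot cleanup: /var.local and /usr.local are
# the old UFS copies the finish script moved aside. Remove them only
# after confirming the ZFS mounts came up cleanly.
rm -rf /var.local /usr.local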