Alas, I have some notes on the subject of migration from UFS to ZFS
with split filesystems (separate /usr, /var and some /var/* subdirs), but
they are an unpublished document in Russian ;) Here I will outline the
main points, and will probably omit some others :(
Hope this helps anyway...
Splitting off /usr and /var/* subdirs into separate datasets has met with
varying success (it worked in Solaris 10 and OpenSolaris SXCE, but failed
in OpenIndiana) and may cause issues during the first reboots after OS
upgrades and after some repair reboots (system tools don't expect
such a layout); separating /var as a single dataset, however, is supported.
Paths like /export and /opt are not part of the "system root", so
these can be implemented any way you want, including storage
on a separate "data pool".
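For illustration, a minimal sketch of that approach - the pool name "dpool"
and the slice c0t1d0s1 are just hypothetical placeholders (and on a live
system you would add an altroot like -R /a so it does not clash with the
UFS /export still mounted there):
# zpool create -f dpool c0t1d0s1
# zfs create -o mountpoint=/export dpool/export
# zfs create -o compression=on dpool/export/home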
With the ZFS root in place, you can either create a swap volume
inside the ZFS root pool, or use a dedicated partition for swapping,
or do both. With a dedicated partition you might control where
on disk it is located (faster/slower tracks), but that space is then
dedicated to swapping whether or not it is needed. With volumes you
can relatively easily resize the swap area.
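For example, to grow a swap zvol later (the 4g figure is arbitrary, and this
assumes the old swap area can be released at that moment):
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=4g rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap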
/tmp is usually implemented as a "tmpfs" filesystem and as such it
is stored in virtual memory, which is spread between RAM and
swap areas, and its contents are lost on reboot - but you don't
really care much about that implementation detail. In your vfstab
file you just have this line:
# grep tmp /etc/vfstab
swap - /tmp tmpfs - yes -
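If you worry about /tmp eating too much virtual memory, tmpfs also accepts
a size cap in the mount options column - something like this (the 2048m
figure is arbitrary):
swap - /tmp tmpfs - yes size=2048m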
In short, you might not want to involve LU in this at all: after a successful
migration has been tested, you're likely to kill the UFS partition and use
it as part of the ZFS root pool mirror. After that you would want to start
the LU history from scratch, naming this ZFS-rooted copy of your
installation the initial boot environment, and later LUpgrade it to newer
releases.
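Roughly, after booting from the ZFS root that might look like the sketch
below - the BE names and the media path are hypothetical:
# lucreate -c s10u8-zfs -n s10u9 -p rpool
# luupgrade -u -n s10u9 -s /cdrom/sol_10_u9
# luactivate s10u9 && init 6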
Data migration itself is rather simple: you create the ZFS pool named "rpool"
in an available slice (e.g. c0t1d0s0), and in that rpool you create and mount
the needed hierarchy of filesystem datasets (compatible with LU/beadm
expectations). Then you copy over all the file data from UFS into your new
hierarchy (ufsdump/ufsrestore or Sun cpio preferred - to keep the ACL
data), then enable booting of the ZFS root (zpool set bootfs=), and test if it
works ;)
# format
... (create the slice #0 on c0t1d0 of appropriate size - see below)
# zpool create -f -R /a rpool c0t1d0s0
# zfs create -o mountpoint=legacy rpool/ROOT
# zfs create -o mountpoint=/ rpool/ROOT/sol10u8
# zfs create -o compression=on rpool/ROOT/sol10u8/var
# zfs create -o compression=on rpool/ROOT/sol10u8/opt
# zfs create rpool/export
# zfs create -o compression=on rpool/export/home
# zpool set bootfs=rpool/ROOT/sol10u8 rpool
# zpool set failmode=continue rpool
Optionally create the swap and dump areas, e.g.
# zfs create -V2g rpool/dump
# zfs create -V2g rpool/swap
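Once you are booted from the ZFS root you would point the system at them,
e.g. (the vfstab entry for the swap zvol appears in the listing further below):
# dumpadm -d /dev/zvol/dsk/rpool/dump
# swap -a /dev/zvol/dsk/rpool/swap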
If all goes well (and I didn't make any typos) you should have the
hierarchy mounted under /a. Check with "df -k" to be sure...
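You can also ask ZFS directly to list the datasets and their mountpoints,
for example:
# zfs list -r -o name,mountpoint,used rpool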
One way to copy - with ufsdump:
# cd /a && ( ufsdump 0f - / | ufsrestore -rf - )
# cd /a/var && ( ufsdump 0f - /var | ufsrestore -rf - )
# cd /a/opt && ( ufsdump 0f - /opt | ufsrestore -rf - )
# cd /a/export/home && ( ufsdump 0f - /export/home | ufsrestore -rf - )
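If the source filesystems are live and busy, you might prefer to dump from
a UFS snapshot so the copy is consistent; a rough sketch for the root
filesystem (the backing-store path is an arbitrary example and must live on
another filesystem with enough free space; fssnap prints the snapshot
device it creates, e.g. /dev/fssnap/0):
# fssnap -F ufs -o backing-store=/export/home/rootsnap /
# cd /a && ( ufsdump 0f - /dev/rfssnap/0 | ufsrestore -rf - )
# fssnap -d /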
Another way - with Sun cpio:
# cd /a
# mkdir -p tmp proc devices var/run system/contract system/object etc/dfs etc/svc/volatile
# touch etc/mnttab etc/dfs/sharetab
# cd / && ( /usr/bin/find . var opt export/home -xdev -depth -print | /usr/bin/cpio -Ppvdm /a )
Review the /a/etc/vfstab file: you probably need to comment out the explicit
mountpoints for your new datasets, including root. It might end up looking
like this:
# cat /etc/vfstab
#device                  device   mount             FS      fsck  mount   mount
#to mount                to fsck  point             type    pass  at boot options
#
/devices                 -        /devices          devfs   -     no      -
/proc                    -        /proc             proc    -     no      -
ctfs                     -        /system/contract  ctfs    -     no      -
objfs                    -        /system/object    objfs   -     no      -
sharefs                  -        /etc/dfs/sharetab sharefs -     no      -
fd                       -        /dev/fd           fd      -     no      -
swap                     -        /tmp              tmpfs   -     yes     -
/dev/zvol/dsk/rpool/swap -        -                 swap    -     no      -
Finally, install the right bootloader for the current OS.
* In case of GRUB:
# /a/sbin/installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c0t1d0s0
# mkdir -p /a/rpool/boot/grub
# cp /boot/grub/menu.lst /a/rpool/boot/grub
Review and update the GRUB menu file as needed. Note that the disk
which GRUB is currently booting from is always hd0, regardless of whether
the BIOS sees it as the first or second disk.
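A minimal hand-written entry for the ZFS root might look roughly like the
sketch below (the title is arbitrary, the kernel/module paths should match
what your existing menu.lst already uses, and the -B $ZFS-BOOTFS option
is what points the kernel at the dataset named by the pool's bootfs
property):
title Solaris 10 ZFS
root (hd0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive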
* For SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
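On SPARC you may also need to point the OBP at the new disk; e.g. check
which /devices path the slice maps to and review the current boot-device
setting (adjusting it, or booting the new disk by hand from the ok prompt,
depends on your hardware's device aliases):
# ls -l /dev/dsk/c0t1d0s0
# eeprom boot-device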
==
Prepare to reboot:
# touch /a/reconfigure
# bootadm update-archive -R /a
# zpool export rpool
# init 6
Good luck :)
==
Other comments:
As I see, in your original system you have these slices:
c0t0d0s0 = /
c0t0d0s3 = /var
c0t0d0s4 = /opt
c0t0d0s5 = /export/home
I believe your swap is currently on the second disk, and it has to be disabled
if you plan to use that disk for the ZFS root. Also note that slice #2 by standard
convention addresses the whole disk, so if you really have swap on both
c0t1d0s2 and c0t1d0s3, where s2 is the whole disk and s3 is a part of it,
this may be a source of problems.
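To see and release what is currently active (the s3 device below just follows
the slices mentioned above; also remove the matching lines from /etc/vfstab):
# swap -l
# swap -d /dev/dsk/c0t1d0s3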
If you plan to use the whole disk as the root pool, just make a slice #0
(see format command) which also addresses the whole second disk.
If you plan to separate the root pool and data pools, estimate how much
root space you'd need (sum up /, /var and maybe /opt, add the swap size and
optionally a dump size for kernel dumps - approx 50% to 100% of
RAM size), and make slice #0 of the appropriate size (starting at sector
0 of the Solaris partition).
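A rough worked example with made-up numbers: if the machine has 4 GB of
RAM and / + /var + /opt currently hold about 5 GB, then one BE (~5 GB)
plus swap (2 GB) plus dump (2-4 GB) plus room for at least one more BE
(~5 GB) suggests a root-pool slice somewhere around 15-20 GB. You can
check the inputs with:
# prtconf | grep Memory
# df -k / /var /opt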
I've rarely seen root pools on server installations (no X11) needing
more than 4-5 GB for one boot environment without swap/dump.
If you upgrade the OS, you should allow about as much free space for each
new BE.
The rest of the data, such as /export, can live in a separate
data pool stored in another slice, starting at some sector just after
the root-pool slice #0. Separating data may be good in order to keep
potential problems (failed writes during a reset) from touching your
rpool and preventing the system from booting even in repair mode.
Also, the rpool has a rather small and simple hierarchy, while data
pools tend to be complex and require more processing time to
import and maintain during the OS lifetime.
Another incentive may be that root pools are limited to
single slices or mirrors, while data pools may enjoy raidz or raid10
configurations - if you have more than 2 disks.
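For example, with some hypothetical extra disks (reusing the "dpool" name
from the sketch above):
# zpool create dpool raidz c0t2d0 c0t3d0 c0t4d0
or, raid10-style with four disks:
# zpool create dpool mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0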
Also note that for Solaris 10 you should have at least Update 6 to get
ZFS root support (it may even have appeared in sol10u4, but with sol10u6
it has certainly worked).
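You can check what you are running with:
# cat /etc/release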
----- Original Message -----
From: Bill Palin <billp52018 at swissmail.net>
Date: Tuesday, May 31, 2011 18:02
Subject: [zfs-discuss] not sure how to make filesystems
To: zfs-discuss at opensolaris.org
> I'm migrating some filesystems from UFS to ZFS and I'm not sure
> how to create a couple of them.
>
> I want to migrate /, /var, /opt, /export/home and also want swap
> and /tmp. I don't care about any of the others.
>
> The first disk, and the one with the UFS filesystems, is c0t0d0
> and the 2nd disk is c0t1d0.
>
> I've been told that /tmp is supposed to be part of swap.
> So far I have:
>
> lucreate -m /:/dev/dsk/c0t0d0s0:ufs -m
> /var:/dev/dsk/c0t0d0s3:ufs -m /export/home:/dev/dsk/c0t0d0s5:ufs
> -m /opt:/dev/dsk/c0t0d0s4:ufs -m -:/dev/dsk/c0t1d0s2:swap -m
> /tmp:/dev/dsk/c0t1d0s3:swap -n zfsBE -p rootpool
>
> And then set quotas for them. Is this right?
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
+============================================================+
|                                                            |
| Jim Klimov                                                 |
| CTO, JSC COS&HT                                            |
|                                                            |
| +7-903-7705859 (cellular)   mailto:jimklimov at cos.ru      |
|   CC:admin at cos.ru,jimklimov at gmail.com                  |
+============================================================+
| () ascii ribbon campaign - against html mail               |
| /\                        - against microsoft attachments  |
+============================================================+