Hello everyone, I wanted to play with ZFS a bit before I start using it on
servers at my workplace, so I set it up on my Solaris 10 U2 box.
I used to have all my disks mounted as UFS and everything was fine. My
/etc/vfstab looked like this:
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t0d0s1 - - swap - no -
/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no logging
/dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /usr ufs 1 no logging
/dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /var ufs 1 no logging
/dev/dsk/c1t0d0s7 /dev/rdsk/c1t0d0s7 /d/d1 ufs 2 yes logging
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
/dev/dsk/c1t1d0s7 /dev/rdsk/c1t1d0s7 /d/d2 ufs 2 yes logging
/d/d2/downloads - /d/d2/web/htdocs/downloads lofs 2 yes -
/d/d1/home/cw/pics - /d/d2/web/htdocs/pics lofs 2 yes -
So I decided to move the /d/d2 drive to ZFS: I created my pool, created a ZFS
filesystem, copied the contents of /d/d2 over to it, mounted it under /d/d2,
and then removed the old entry from the vfstab file.
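For reference, this is roughly the sequence I used (the pool name dpool, the
disk, and the temporary mountpoint are placeholders; I'm writing the exact
commands from memory):

  zpool create dpool c2t0d0                  # dpool and the disk are placeholders
  zfs create dpool/d2
  zfs set mountpoint=/d/d2.new dpool/d2      # park it somewhere temporary for the copy
  cd /d/d2 && find . -print | cpio -pdm /d/d2.new
  zfs set mountpoint=/d/d2 dpool/d2          # final mountpoint, after unmounting the old UFS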
OK, so now the line that says:
/dev/dsk/c1t1d0s7 /dev/rdsk/c1t1d0s7 /d/d2 ufs 2 yes logging
is commented out in my vfstab file. I rebooted my system just to get
everything started the way I wanted (I had brought all the webservers and
everything else down for the duration of the copy so that nothing was
accessing the /d/d2 drive).
So my system boots up and I cannot log in. Apparently my service
svc:/system/filesystem/local:default went into maintenance mode... somehow the
system could not mount these two entries from vfstab:
/d/d2/downloads - /d/d2/web/htdocs/downloads lofs 2 yes -
/d/d1/home/cw/pics - /d/d2/web/htdocs/pics lofs 2 yes -
I could not log in and do anything; I had to log in through the console, take
my service svc:/system/filesystem/local:default out of maintenance mode, and
clear the maintenance state. Then all my services started coming up and the
system was no longer in single-user mode...
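For the record, this is more or less what I ran on the console to recover:

  svcs -x                                            # showed filesystem/local:default in maintenance
  svcadm clear svc:/system/filesystem/local:default  # clear the maintenance state
  svcs svc:/system/filesystem/local:default          # it came back online and services started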
That sucks a bit: how can I mount both UFS drives, then mount ZFS, and only
then get the lofs mountpoints mounted afterwards?
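The only workaround I can think of (untested, so correct me if I am wrong) is
giving the dataset a legacy mountpoint so it goes back into vfstab and gets
ordered along with the UFS and lofs entries:

  zfs set mountpoint=legacy dpool/d2   # dpool/d2 is my placeholder dataset from above
  # then in /etc/vfstab, before the two lofs lines:
  dpool/d2  -  /d/d2  zfs  -  yes  -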
Also, if certain disks did not mount, I used to be able to go to /etc/vfstab
and see what was going on. Now that ZFS does not use vfstab, how can I know
what was or was not mounted before the system went down? Sometimes drives go
bad, and sometimes certain disks, such as backup disks, are deliberately
commented out in vfstab. With ZFS it is all controlled through the command
line, so what if I do not want to mount something at boot time? How can I
distinguish what is supposed to be mounted at boot and what is not using zfs
list? Is there a config file where I can just comment out a few lines and
mount them at times other than boot?
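What I would hope for is something like this (guessing from the zfs(1M) man
page, so these may not behave the way I expect; dpool/backup is a hypothetical
dataset I would not want mounted at boot):

  zfs list -o name,mountpoint,mounted     # the read-only 'mounted' property shows yes/no
  zfs set mountpoint=legacy dpool/backup  # take it out of ZFS's automatic mounting
  mount -F zfs dpool/backup /backup       # mount it by hand later; its vfstab line stays commented out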
Thanks for any suggestions... and sorry if this is the wrong group for such a
question, since it is not about OpenSolaris but about ZFS on Solaris 10
Update 2.
Chris