Hi,

concerning this issue I didn't find anything in the bug database, so I thought I'd report it here...

When running Live Upgrade on a system with ZFS, LU creates directories for all ZFS file systems in the ABE. This causes svc:/system/filesystem/local to go into maintenance state when booting the ABE, because the zpool won't be imported due to the existing directory structure in its mount point.

I observed this behavior on a Solaris 10 system with Live Upgrade 11.10.

Tom
Thomas Maier-Komor wrote:
> When running Live Upgrade on a system with ZFS, LU creates directories for all ZFS file systems in the ABE. This causes svc:/system/filesystem/local to go into maintenance state when booting the ABE, because the zpool won't be imported due to the existing directory structure in its mount point.

Last time I reported this (an upgrade to build 41) I was told the only solution was to remove the unwanted mount points before booting the new BE.

Ian
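For reference, a rough sketch of that workaround, assuming the new BE is named newBE and is mounted under /mnt (the BE name and the leftover mount-point paths are placeholders, not taken from this thread):

# lumount newBE /mnt
  <mounts the alternate BE so its copy of the directory tree can be inspected>
# rmdir /mnt/export/home /mnt/scratch
  <rmdir only removes empty directories, so anything LU actually copied is left untouched>
# luumount newBE
  <unmount the alternate BE again before activating or booting it>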
You can use the -x option (once for each ZFS file system) to prevent lucreate from creating new copies of them in the new BE.

lori

Ian Collins wrote:
> Last time I reported this (an upgrade to build 41) I was told the only
> solution was to remove the unwanted mount points before booting the new BE.
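A rough sketch of what that invocation might look like; the BE names, root slice, and excluded mount points here are placeholders rather than anything reported in this thread:

# lucreate -c currentBE -n newBE \
      -m /:/dev/dsk/c0t0d0s0:ufs \
      -x /scratch -x /export/home
  <one -x per ZFS mount point that should not be copied into the new BE>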
Hi Lori,

I did a Live Upgrade from NV b41 to b47 and I still ran into this problem on one of my ZFS mounts. Both mounts failed to mount in the new BE because directories were created for the mount points, but only one of the mounts actually had its data copied into the BE. I checked /etc/default/lu and I do have the fix for

  6335531 Liveupgrade should not copy zfs file systems into new BEs

which was putback to build 27. Here's my configuration:

# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
scratch              3.07G  39.5G  3.07G  /scratch
twosyncs             52.1G   176G  24.5K  /twosyncs
twosyncs/home        52.1G   176G  25.5K  /export/home
twosyncs/home/haik   52.1G   176G  51.4G  /export/home/haik

The data in /scratch was copied into a /scratch directory in the new BE. /export/home/haik wasn't copied into the new BE, but directories were created in the new BE, preventing it from mounting on boot.

With the fix for 6335531, do we still need to use the -x option?

Thanks,
Haik
I believe I am experiencing a similar, but more severe issue and I do not know how to resolve it. I used Live Upgrade from s10u2 to NV b46 (via the Solaris Express release). My second disk is ZFS with the file system fitz. I did a 'zpool export fitz'.

Rebooting with init 6 into the new environment, NV b46, I get the following error:

cannot mount '/fitz': directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.

'zfs list' shows nothing.

There is already a /fitz directory filled with the zpool fitz files, though nothing is mounted there. Since the filesystem/local service won't start, I cannot start X, which is critical to using the computer. I now see that there was no real need to export the pool fitz and that I should have just imported it once in the new BE. How can I now solve this issue? (BTW, attempting to boot back into s10u2, the original BE, results in a kernel panic, so I cannot go back.)

thanks,
aric

--On September 21, 2006 10:01:28 AM -0700 Haik Aftandilian <haik.aftandilian at sun.com> sent:
> I did a liveupgrade from NV b41 to b47 and I still ran into this
> problem on one of my ZFS mounts. Both mounts failed to mount in the
> new BE because directories were created for the mount points, but
> only one of the mounts actually had its data copied into the BE.
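For context, re-importing an exported pool from the new BE would normally just be the following (a sketch using the pool name from this message; it assumes the pool's mount point is not blocked by a leftover directory):

# zpool import
  <with no arguments, lists exported pools visible on attached devices>
# zpool import fitz
  <imports the pool; its file systems then mount at their configured mount points>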
> I believe I am experiencing a similar, but more severe issue and I do
> not know how to resolve it. [...] There is already a /fitz directory
> filled with the zpool fitz files. Since filesystem/local svc won't
> start, I cannot start X, which is critical to using the computer.

Aric,

It sounds like you can resolve this issue by simply booting into the new BE, deleting the /fitz directory, and then rebooting back into the new BE. I say this because, from your message, it sounds like the data from your ZFS file system in /fitz was copied to /fitz in the new BE (instead of just being mounted in the new BE). BEFORE DELETING ANYTHING, please make sure /fitz is not a ZFS mount but just a plain directory, and therefore just a copy of what is in your zpool. Be careful, I don't want you to lose any data.

Also, what does "zpool list" report?

Lastly, ZFS people might be interested in the panic message you get when you boot back into Solaris 10.

Haik
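One way to do that check before deleting anything (a sketch; these are generic status queries, not specific to this setup beyond the /fitz path already mentioned):

# df -n /fitz
  <prints the type of the file system containing /fitz; ufs here would mean /fitz is just a directory on the root file system, not a ZFS mount>
# zfs list -o name,mounted,mountpoint
  <shows, per dataset, whether ZFS currently has it mounted and where it is configured to mount>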
Apologies for any confusion, but I am now able to give more output regarding the zpool fitz.

unknown# zfs list
--> returns a list of the zfs file system fitz and related snapshots

unknown# zpool status
  pool: fitz
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE
        fitz        ONLINE
          c2d0s7    ONLINE

errors: No known data errors

unknown# zpool upgrade -v
This system is currently running ZFS version 3.
The following versions are supported:
......

unknown# zfs mount
--> lists the zfs pool as mounted, as it should be, at /fitz, but 'zfs unmount fitz' returns 'cannot unmount 'fitz': not currently mounted'

zpool import --> no pools available to import
zpool import -d /fitz --> no pools available to import

thanks,
aric
> It sounds like you can resolve this issue by simply booting into the new BE
> and deleting the /fitz directory and then rebooting and going back into the
> new BE. [...] BEFORE DELETING ANYTHING, please make sure /fitz is not a zfs
> mount and just a plain directory and therefore just a copy of what is in
> your zpool. Be careful, I don't want you to lose any data.

Haik,

Thank you very much. 'zpool list' yields:

NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
fitz    74.5G  22.9G  51.6G  30%  ONLINE  -

How do I confirm that /fitz is not currently a zfs mountpoint? 'zfs mount' yields:

fitz/home            /fitz/home
fitz/home/aorchid    /fitz/home/aorchid
fitz/music           /fitz/music
fitz/pg              /fitz/pg
fitz/pictures        /fitz/pictures

'ls -la /fitz' yields:

total 85
drwxr-xr-x   7 root  sys   512 Sep 20 10:41 .
drwxr-xr-x  30 root  root  512 Sep 21 18:28 ..    --> this is when I ran 'zpool export fitz'
drwxr-xr-x   3 root  sys     3 Jul 25 12:22 home
etc...

/etc/vfstab does not have /fitz, and 'umount /fitz' returns:

umount: warning: /fitz not in mnttab
umount: /fitz not mounted

> Lastly, ZFS people might be interested in the panic
> message you get when you boot back into Solaris 10.

They are all related to the NVIDIA driver, gfxp, from what I remember from two weeks ago. I am on an Ultra 20.

thanks,
aric
> Thank you very much. 'zpool list' yields:
>
> NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> fitz    74.5G  22.9G  51.6G  30%  ONLINE  -
>
> How do I confirm that /fitz is not currently a zfs mountpoint?

Ah, OK. It's good that you didn't delete /fitz. This is what I recommend that you do:

# zfs unmount -a
# zfs mount
  <This should produce no output, since now all zfs file systems are unmounted>
# find /fitz
  <This should produce no files, only empty directories>
  <At this point, as long as there is nothing important in /fitz, you can go ahead and delete it>
# rm -r /fitz
  <Or just delete everything inside /fitz>
# zfs mount -a
  <Now /fitz should be all set. When you reboot you should not see the /fitz file system mount error>

Someone else please chime in if this looks wrong.

Hope that helps.

Haik
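If the cleanup works, a couple of quick checks after the next reboot (a sketch; these are just generic SMF and ZFS status queries, not steps prescribed above):

# svcs svc:/system/filesystem/local:default
  <should now report the service online instead of maintenance>
# zfs mount
  <the fitz file systems, including /fitz itself, should be listed again>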