Recently some changes were made to how a root pool is opened for root filesystem
mounting. Previously the root pool had to be present in zpool.cache. Now it is
automatically discovered by probing the available GEOM providers.

The new scheme is believed to be more flexible. For example, it allows you to
prepare a new root pool on one system, export it, and then boot from it on a new
system without any extra/magical steps with zpool.cache. It can also be convenient
after zpool split and in some other situations.

The change was introduced via multiple commits; the latest relevant revision in
head is r243502. The changes are partially MFC-ed, and the remaining parts are
scheduled to be MFC-ed soon.

I have received a report that the change caused a problem with booting on at least
one system. The problem has been identified as an issue in the local environment
and has been fixed. Please read on to see if you might be affected when you
upgrade, so that you can avoid any unnecessary surprises.

You might be affected if all of the following are true: you ever had a pool with
the same name as your current root pool; you still have disks connected to your
system that belonged to that pool (in whole or via some partitions); and that pool
was never properly destroyed using zpool destroy, but merely abandoned (its disks
re-purposed, re-partitioned or reused).

If all of the above are true, then I recommend that you run 'zdb -l <disk>' for
all suspect disks and their partitions (or just all disks and partitions). If this
command reports at least one valid ZFS label for a disk or a partition that does
not belong to any current pool, then the problem may affect you.

The best course is to remove the offending labels.

If you are affected, please follow up to this email.

--
Andriy Gapon
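For convenience, here is a minimal sketch of how that scan could be scripted on a
stock FreeBSD system. It assumes only the standard sysctl(8), gpart(8) and zdb(8)
tools; the loops are just one way to enumerate the providers:

#!/bin/sh
# Sketch only: dump the ZFS labels of every whole disk and every gpart partition.
# Anything that shows valid labels but is not part of a currently imported pool
# is a candidate for cleanup.

# Whole disks as reported by the kernel.
for d in $(sysctl -n kern.disks); do
    echo "=== /dev/${d} ==="
    zdb -l /dev/${d}
done

# Partition providers known to gpart (first column of "gpart status", header skipped).
for p in $(gpart status | awk 'NR > 1 { print $1 }'); do
    echo "=== /dev/${p} ==="
    zdb -l /dev/${p}
done

And a hedged sketch of removing stale labels by zeroing the four label areas (two
256 KiB labels at the front of the provider and two at the back). The provider
name below is purely a placeholder, the arithmetic assumes the provider size is a
multiple of 1 KiB, and the operation is destructive, so triple-check the target
first. If your zpool(8) has a labelclear subcommand, that is the cleaner option.

# Hypothetical provider that zdb -l flagged; replace with the real one.
prov=/dev/ada3p2

# Media size in bytes (third field of diskinfo's default output).
size=$(diskinfo ${prov} | awk '{ print $3 }')

# Zero the two front labels (first 512 KiB)...
dd if=/dev/zero of=${prov} bs=1k count=512
# ...and the two back labels (last 512 KiB).
dd if=/dev/zero of=${prov} bs=1k count=512 oseek=$((size / 1024 - 512))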
> Previously the root pool had to be present in zpool.cache. Now it is
> automatically discovered by probing the available GEOM providers.
> [...]
> The best course is to remove the offending labels.

Great!!!! In a diskless environment /boot is read-only, and the zpool.cache issue
has been bothering me for a long time; there was no way (and I tried) to re-route
it.

thanks,
danny
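(For anyone who has not run into this: the step that a read-only /boot rules out
is recording the pool in the boot-time cache file, which would otherwise look
roughly like the line below; "zroot" is only a placeholder pool name, not taken
from this thread:

zpool import -o cachefile=/boot/zfs/zpool.cache zroot

With the root pool discovered automatically, that step is no longer needed at all.)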
On Wed, Nov 28, 2012 at 8:35 PM, Andriy Gapon <avg at freebsd.org> wrote:
> The change was introduced via multiple commits; the latest relevant revision in
> head is r243502. The changes are partially MFC-ed, and the remaining parts are
> scheduled to be MFC-ed soon.
> [...]

Hello,

What is the status of the MFC process to 9-STABLE? I'm on 9-STABLE r244407;
should I be able to boot from this ZFS pool without zpool.cache?

zpool status
  pool: zwhitezone
 state: ONLINE
  scan: scrub repaired 0 in 0h53m with 0 errors on Sat Dec 15 23:41:09 2012
config:

        NAME               STATE     READ WRITE CKSUM
        zwhitezone         ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            label/wzdisk0  ONLINE       0     0     0
            label/wzdisk1  ONLINE       0     0     0
          mirror-1         ONLINE       0     0     0
            label/wzdisk2  ONLINE       0     0     0
            label/wzdisk3  ONLINE       0     0     0

errors: No known data errors
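(Independent of the cache file, booting from a ZFS pool still needs the usual
loader-side pieces: a ZFS-aware boot chain (gptzfsboot/zfsloader) on the boot
disks, and a way to tell the kernel which dataset to mount. A rough sketch of the
latter in /boot/loader.conf follows; the root dataset name is only an example,
not taken from this thread:

zfs_load="YES"
vfs.root.mountfrom="zfs:zwhitezone/root"

Alternatively, the pool's bootfs property can name the root dataset, e.g.
"zpool set bootfs=zwhitezone/root zwhitezone".)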
I have MFCed the following change, so please double-check if you might be
affected. Preferably before upgrading :-)

on 28/11/2012 20:35 Andriy Gapon said the following:
> Recently some changes were made to how a root pool is opened for root filesystem
> mounting. Previously the root pool had to be present in zpool.cache. Now it is
> automatically discovered by probing the available GEOM providers.
> [...]
> If all of the above are true, then I recommend that you run 'zdb -l <disk>' for
> all suspect disks and their partitions (or just all disks and partitions). If
> this command reports at least one valid ZFS label for a disk or a partition that
> does not belong to any current pool, then the problem may affect you.
>
> The best course is to remove the offending labels.
>
> If you are affected, please follow up to this email.

--
Andriy Gapon
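As a quick, non-authoritative check for whether the MFC has landed in a given
stable checkout, one can search its commit log for the head revision mentioned
above. This assumes an svn working copy at /usr/src and relies on the usual
"MFC rNNNNNN" commit-message convention, so an empty result is not conclusive:

svn log --limit 500 /usr/src | grep -B 3 'r243502'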