I have a customer who described this issue to me in general terms. I'd like to know how to replicate it, what the best practice is to avoid the issue, and how to fix it in an accepted manner. After they apply a kernel patch and reboot, they may get messages informing them that the pool version is down-rev'd. If they act on the message and upgrade the pool version, and later have to boot the failsafe archive, the boot fails because that kernel does not support the newer pool version. What would be a way to fix this, and should we even allow this situation to happen?

Thanks
--
This message posted from opensolaris.org
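A rough way to reproduce this on a test box, assuming a ZFS root pool named rpool and a patch level whose failsafe miniroot only understands an older pool version (both are assumptions, not details from the original report):

    # Compare the pool's on-disk version with what the running kernel supports
    zpool upgrade -v
    zpool status rpool      # after patching, warns the pool uses an older on-disk format

    # Acting on that warning bumps the pool to the newest version the
    # patched kernel supports
    zpool upgrade rpool

    # Then boot the failsafe archive (the "Solaris failsafe" GRUB entry on x86,
    # or "boot -F failsafe" at the OBP ok prompt on SPARC). If that miniroot's
    # kernel predates the new pool version, it can no longer import rpool.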
Hi Shawn,

I think this can happen if you apply patch 141445-09. It should not happen in the future.

I believe the workaround is this:

1. Boot the system from the correct media.
2. Install the boot blocks on the root pool disk(s).
3. Upgrade the pool.

Thanks,

Cindy
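For reference, a command-level sketch of those three steps; the disk name c0t0d0s0 and the SPARC boot syntax are placeholders, so adjust them for the actual hardware:

    # 1. Boot from media whose kernel supports the new pool version
    ok boot cdrom -s

    # 2. Reinstall the boot blocks on each root pool disk
    #    SPARC:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0
    #    x86:
    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

    # 3. Upgrade the pool only after the boot blocks are current
    zpool upgrade rpool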