Rich
2006-Sep-13 17:40 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
Hi zfs-discuss,

I was running Solaris 11, b42 on x86, and I tried upgrading to b44. I didn't have space on the root for live_upgrade, so I booted from disc to upgrade, but it failed on every attempt, so I ended up blowing away / and doing a clean b44 install.

Now the zpool that was attached to that system won't stop thinking that it's mounted on another system, regardless of what I try.

On boot, the system thinks the pool is mounted elsewhere and won't mount it unless I log in and run zpool import -f. I tried zpool export followed by import, and that required no -f, but on reboot, lo, the problem returned.

I even tried destroying and reimporting the pool, which led to this hilarious sequence:

# zpool import
no pools available to import
# zpool import -D
  pool: moonside
    id: 8290331144559232496
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
        The pool was destroyed, but can be imported using the '-Df' flags.
config:

        moonside    ONLINE
          raidz1    ONLINE
            c2t0d0  ONLINE
            c2t1d0  ONLINE
            c2t2d0  ONLINE
            c2t3d0  ONLINE
            c2t4d0  ONLINE
            c2t5d0  ONLINE
            c2t6d0  ONLINE

# zpool import -D moonside
cannot import 'moonside': pool may be in use from other system
use '-f' to import anyway
#

This is either a bug or a missing feature (the ability to make a filesystem stop thinking it's mounted somewhere else) - anybody have any ideas?

Thanks,
- Rich
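In outline, the cycle described above looks like this (a reconstruction of the steps, not a verbatim transcript):

    # export and re-import; at this point no -f is required
    zpool export moonside
    zpool import moonside

    # ...reboot...

    # after the reboot the pool is again flagged as in use on another
    # system, and only a forced import brings it back
    zpool import -f moonside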
Eric Schrock
2006-Sep-13 17:46 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
Can you send the output of 'zdb -l /dev/dsk/c2t0d0s0'? So you do the 'zpool import -f' and all is well, but then when you reboot, it doesn't show up and you must import it again? Can you send the output of 'zdb -C' both before and after you do the import?

Thanks,

- Eric

On Wed, Sep 13, 2006 at 01:40:13PM -0400, Rich wrote:
> Now the zpool that was attached to that system won't stop thinking that
> it's mounted on another system, regardless of what I try.
>
> On boot, the system thinks the pool is mounted elsewhere and won't mount
> it unless I log in and run zpool import -f. I tried zpool export followed
> by import, and that required no -f, but on reboot, lo, the problem
> returned.
>
> [...]

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
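One way to capture everything being asked for in a single pass is a short loop along these lines (a sketch; the device names are the raidz members listed earlier in the thread):

    # dump the on-disk labels of every raidz member (zdb reads the s0 slice)
    for d in c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0; do
        echo "==== $d ===="
        zdb -l /dev/dsk/${d}s0
    done > /tmp/zdb-labels.txt 2>&1

    # cached configuration before and after the forced import
    zdb -C > /tmp/zdb-C.before 2>&1
    zpool import -f moonside
    zdb -C > /tmp/zdb-C.after 2>&1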
Rich
2006-Sep-13 19:28 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
I do the 'zpool import -f moonside', and all is well until I reboot, at which point I must zpool import -f again.

Below is the output of zdb -l /dev/dsk/c2t0d0s0:

--------------------------------------------
LABEL 0
--------------------------------------------
    version=3
    name='moonside'
    state=0
    txg=1644418
    pool_guid=8290331144559232496
    top_guid=12835093579979239393
    guid=7480231448190751824
    vdev_tree
        type='raidz'
        id=0
        guid=12835093579979239393
        nparity=1
        metaslab_array=13
        metaslab_shift=30
        ashift=9
        asize=127371575296
        children[0]
                type='disk'
                id=0
                guid=7480231448190751824
                path='/dev/dsk/c2t0d0s0'
                devid='id1,sd@x00609487340a923a/a'
                whole_disk=1
                DTL=23
        children[1]
                type='disk'
                id=1
                guid=2626377814825345466
                path='/dev/dsk/c2t1d0s0'
                devid='id1,sd@x006094877400b454/a'
                whole_disk=1
                DTL=22
        children[2]
                type='disk'
                id=2
                guid=16932309055791750053
                path='/dev/dsk/c2t2d0s0'
                devid='id1,sd@x006094877400fef2/a'
                whole_disk=1
                DTL=21
        children[3]
                type='disk'
                id=3
                guid=18145699204085538208
                path='/dev/dsk/c2t3d0s0'
                devid='id1,sd@x00609487340a2aea/a'
                whole_disk=1
                DTL=20
        children[4]
                type='disk'
                id=4
                guid=2046828747707454119
                path='/dev/dsk/c2t4d0s0'
                devid='id1,sd@x00609487340afb80/a'
                whole_disk=1
                DTL=19
        children[5]
                type='disk'
                id=5
                guid=5851407888580937378
                path='/dev/dsk/c2t5d0s0'
                devid='id1,sd@x00609487341314fc/a'
                whole_disk=1
                DTL=18
        children[6]
                type='disk'
                id=6
                guid=10476478316210434659
                path='/dev/dsk/c2t6d0s0'
                devid='id1,sd@x0060948734131391/a'
                whole_disk=1
                DTL=17
--------------------------------------------
LABEL 1 through LABEL 3
--------------------------------------------
[identical to LABEL 0]

Below is the (lack of) 'zdb -C' output:

# zdb -C
# zpool import
  pool: moonside
    id: 8290331144559232496
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
config:
        [snip]
# zpool import -f moonside
# zdb -C
#

On 9/13/06, Eric Schrock <eric.schrock at sun.com> wrote:
> Can you send the output of 'zdb -l /dev/dsk/c2t0d0s0'? So you do the
> 'zpool import -f' and all is well, but then when you reboot, it doesn't
> show up and you must import it again? Can you send the output of 'zdb -C'
> both before and after you do the import?
>
> Thanks,
>
> - Eric
>
> [...]
Rich
2006-Sep-20 04:33 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
Hi all,

Has anyone else had any thoughts on this, or should I just sit patiently and hope the problem is resolved somehow, because I'm missing something obvious or should file something somewhere that I don't know about?

Thanks,
- Rich

On 9/13/06, Rich <rincebrain at gmail.com> wrote:
> I do the 'zpool import -f moonside', and all is well until I reboot, at
> which point I must zpool import -f again.
>
> [zdb -l labels and zdb -C output quoted in full; see the previous message]

--
Friends don't let friends use Internet Explorer or Outlook.
Choose something better.
www.mozilla.org
www.getfirefox.com
www.getthunderbird.com
Eric Schrock
2006-Sep-20 04:53 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
On Wed, Sep 20, 2006 at 12:33:50AM -0400, Rich wrote:
> Hi all,
> Has anyone else had any thoughts on this, or should I just sit patiently
> and hope the problem is resolved somehow, because I'm missing something
> obvious or should file something somewhere that I don't know about?
>
> Thanks,
> - Rich

Rich -

Sorry about this. It's definitely my fault; I promised to take a look at this when I had a chance, and I haven't yet done so. Unfortunately, tomorrow I'm out of town - I'll try to take a closer look at your situation tonight or on Thursday if I have a chance.

You are on the right alias for getting the fastest resolution possible, trust me. Unfortunately, we're all bogged down with a dozen different issues and sometimes we forget what's come up on various aliases. If someone else from the ZFS team can take a look at this, I'd appreciate it[1].

- Eric

[1] I also realized that I forgot to test the weird zones "lots o' datasets" failure that had been previously reported. Any help from other folks within Sun would be appreciated.

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Eric Schrock
2006-Sep-22 01:41 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
On Wed, Sep 20, 2006 at 12:33:50AM -0400, Rich wrote:
> > Below is the (lack of) 'zdb -C' output:
> > # zdb -C
> > # zpool import
> >   pool: moonside
> >     id: 8290331144559232496
> >  state: ONLINE
> > action: The pool can be imported using its name or numeric identifier.
> >         The pool may be active on another system, but can be imported
> >         using the '-f' flag.
> > config:
> > [snip]
> > # zpool import -f moonside
> > # zdb -C
> > #

Hmmm, that's seriously busted. This indicates that it wasn't able to write out the /etc/zfs/zpool.cache file. Can you do an 'ls -l' of this file and the containing directory, both before and after you do the import?

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
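Concretely, the check being asked for looks like this (a sketch; /etc/zfs/zpool.cache is the standard location of the cache file):

    # before the import: do the directory and the cache file exist at all?
    ls -ld /etc/zfs
    ls -l /etc/zfs/zpool.cache

    # force the import, which should (re)write the cache file
    zpool import -f moonside

    # after the import: the file should exist and carry a fresh mtime
    ls -ld /etc/zfs
    ls -l /etc/zfs/zpool.cache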
Rich
2006-Sep-22 07:36 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
...huh.

So /etc/zfs doesn't exist. At all.

Creating /etc/zfs using mkdir, then importing the pool with zpool import -f, then rebooting, the behavior vanishes, so...yay.

Problem solved, I guess, but shouldn't ZFS be smarter about creating its own config directory?

- Rich

On 9/21/06, Eric Schrock <eric.schrock at sun.com> wrote:
> Hmmm, that's seriously busted. This indicates that it wasn't able to
> write out the /etc/zfs/zpool.cache file. Can you do an 'ls -l' of this
> file and the containing directory, both before and after you do the
> import?
>
> - Eric
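For anyone who hits the same symptom, the workaround boils down to the following (a sketch of the steps described above):

    # recreate the missing directory so the kernel can write out
    # /etc/zfs/zpool.cache, then force one last import
    mkdir -p /etc/zfs
    zpool import -f moonside

    # sanity check: the cache file should now exist
    ls -l /etc/zfs/zpool.cache

    # after a reboot the pool should come up on its own, with no -f needed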
Eric Schrock
2006-Sep-22 15:37 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
On Fri, Sep 22, 2006 at 03:36:36AM -0400, Rich wrote:
> ...huh.
>
> So /etc/zfs doesn't exist. At all.
>
> Creating /etc/zfs using mkdir, then importing the pool with zpool
> import -f, then rebooting, the behavior vanishes, so...yay.
>
> Problem solved, I guess, but shouldn't ZFS be smarter about creating
> its own config directory?

That seems a reasonable RFE, but I wonder how you got into this situation in the first place. What is the history of the OS on this system? Nevada? Solaris 10? Upgraded? Patched? I assume that you don't tend to go around removing random /etc directories on purpose, so I want to make sure that our software didn't screw up somehow.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Rich
2006-Sep-22 19:04 UTC
[zfs-discuss] zpool always thinks it's mounted on another system
The history is quite simple:

1) Installed nv_b32 or thereabouts on a zeroed drive. Created this ZFS pool for the first time.
2) Non-live upgraded to nv_b42 when it came out, and ran zpool upgrade on the pool in question, taking it from v2 to v3.
3) Tried to non-live upgrade to nv_b44; the upgrade failed every time, so I just blew away my existing partition scheme and installed nv_b44 cleanly.
4) Problem begins.

I can't think of any sane reason I could have blown away that directory accidentally, so I don't know.

- Rich

On 9/22/06, Eric Schrock <eric.schrock at sun.com> wrote:
> That seems a reasonable RFE, but I wonder how you got into this
> situation in the first place. What is the history of the OS on this
> system? Nevada? Solaris 10? Upgraded? Patched? I assume that you
> don't tend to go around removing random /etc directories on purpose, so
> I want to make sure that our software didn't screw up somehow.
>
> - Eric

--
Friends don't let friends use Internet Explorer or Outlook.
Choose something better.
www.mozilla.org
www.getfirefox.com
www.getthunderbird.com