Ragnar Sundblad
2009-Dec-02 00:57 UTC
[zfs-discuss] zpool import - device names not always updated?
It seems that device names aren't always updated when importing
pools if devices have moved. I am not sure if this is only a
cosmetic issue or if it could actually be a real problem -
could it lead to the device not being found at a later import?

/ragge

(This is on snv_127.)

I ran the following script:
------------
#!/bin/bash

set -e
set -x

zfs create -V 1G rpool/vol1
zfs create -V 1G rpool/vol2
zpool create pool mirror /dev/zvol/dsk/rpool/vol1 /dev/zvol/dsk/rpool/vol2
zpool status pool
zpool export pool
zfs create rpool/subvol1
zfs create rpool/subvol2
zfs rename rpool/vol1 rpool/subvol1/vol1
zfs rename rpool/vol2 rpool/subvol2/vol2
zpool import -d /dev/zvol/dsk/rpool/subvol1
sleep 1
zpool import -d /dev/zvol/dsk/rpool/subvol2
sleep 1
zpool import -d /dev/zvol/dsk/rpool/subvol1 pool
zpool status pool
------------

And got the output below. I have annotated it with ### remarks.

------------
# bash zfs-test.bash
+ zfs create -V 1G rpool/vol1
+ zfs create -V 1G rpool/vol2
+ zpool create pool mirror /dev/zvol/dsk/rpool/vol1 /dev/zvol/dsk/rpool/vol2
+ zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME                          STATE     READ WRITE CKSUM
        pool                          ONLINE       0     0     0
          mirror-0                    ONLINE       0     0     0
            /dev/zvol/dsk/rpool/vol1  ONLINE       0     0     0
            /dev/zvol/dsk/rpool/vol2  ONLINE       0     0     0

errors: No known data errors
+ zpool export pool
+ zfs create rpool/subvol1
+ zfs create rpool/subvol2
+ zfs rename rpool/vol1 rpool/subvol1/vol1
+ zfs rename rpool/vol2 rpool/subvol2/vol2
+ zpool import -d /dev/zvol/dsk/rpool/subvol1
  pool: pool
    id: 13941781561414544058
 state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-2Q
config:

        pool                                  DEGRADED
          mirror-0                            DEGRADED
            /dev/zvol/dsk/rpool/subvol1/vol1  ONLINE
            /dev/zvol/dsk/rpool/vol2          UNAVAIL  cannot open
### Note that it can't find vol2 - which is expected.
+ sleep 1
### The sleep here seems to be necessary for vol1 to magically be
### found in the next zpool import.
+ zpool import -d /dev/zvol/dsk/rpool/subvol2
  pool: pool
    id: 13941781561414544058
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        pool                                  ONLINE
          mirror-0                            ONLINE
            /dev/zvol/dsk/rpool/vol1          ONLINE
            /dev/zvol/dsk/rpool/subvol2/vol2  ONLINE
### Note that it says vol1 is ONLINE, under its old path, though it
### actually has moved.
+ sleep 1
+ zpool import -d /dev/zvol/dsk/rpool/subvol1 pool
+ zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME                                  STATE     READ WRITE CKSUM
        pool                                  ONLINE       0     0     0
          mirror-0                            ONLINE       0     0     0
            /dev/zvol/dsk/rpool/subvol1/vol1  ONLINE       0     0     0
            /dev/zvol/dsk/rpool/vol2          ONLINE       0     0     0

errors: No known data errors
### Note that vol2 has its old path shown!

----------------------------

### Interestingly, if you then
+ zpool export pool
+ zpool import -d /dev/zvol/dsk/rpool/subvol2 pool
### vol2's path gets updated too:
+ zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME                                  STATE     READ WRITE CKSUM
        pool                                  ONLINE       0     0     0
          mirror-0                            ONLINE       0     0     0
            /dev/zvol/dsk/rpool/subvol1/vol1  ONLINE       0     0     0
            /dev/zvol/dsk/rpool/subvol2/vol2  ONLINE       0     0     0

errors: No known data errors
----------------------------
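The file-based retest later in this thread suggests a possible workaround: pass every -d directory to a single import, so no leaf vdev has to be found under a stale path. A minimal, untested sketch against the zvol layout from the script above:

--------
# Untested sketch: export, then import with all -d directories at once,
# so both halves of the mirror are found and their stored paths updated.
zpool export pool
zpool import -d /dev/zvol/dsk/rpool/subvol1 \
             -d /dev/zvol/dsk/rpool/subvol2 pool
zpool status pool
--------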
Cindy Swearingen
2009-Dec-03 17:35 UTC
[zfs-discuss] zpool import - device names not always updated?
Hi Ragnar,

A bug might exist, but you are building a pool on ZFS volumes
created in another pool. This configuration is not supported
and possible deadlocks can occur.

If you can reproduce this without building a pool on another
pool, for example by using files to create the pool, then
please let me know.

Thanks,

Cindy

On 12/01/09 17:57, Ragnar Sundblad wrote:
> It seems that device names aren't always updated when importing
> pools if devices have moved. I am not sure if this is only a
> cosmetic issue or if it could actually be a real problem -
> could it lead to the device not being found at a later import?
> [...]
Ragnar Sundblad
2009-Dec-03 22:26 UTC
[zfs-discuss] zpool import - device names not always updated?
Thank you Cindy for your reply!

On 3 dec 2009, at 18.35, Cindy Swearingen wrote:

> A bug might exist, but you are building a pool on ZFS volumes
> created in another pool. This configuration is not supported
> and possible deadlocks can occur.

I had absolutely no idea that ZFS volumes weren't supported
as ZFS containers. Where can I find information about what
is and what isn't supported for ZFS volumes?

> If you can reproduce this without building a pool on another
> pool, for example by using files to create the pool, then
> please let me know.

I retried it with files instead, and it then worked exactly
as expected. (Also, it no longer magically remembered the
locations of volumes found earlier in other directories,
with or without the sleeps.)

I don't know if it is of interest to anyone, but I'll
include the reworked file-based test below.

/ragge

--------
#!/bin/bash

set -e
set -x
mkdir /d
mkfile 1g /d/f1
mkfile 1g /d/f2
zpool create pool mirror /d/f1 /d/f2
zpool status pool
zpool export pool
mkdir /d/subdir1
mkdir /d/subdir2
mv /d/f1 /d/subdir1/
mv /d/f2 /d/subdir2/
zpool import -d /d/subdir1
zpool import -d /d/subdir2
zpool import -d /d/subdir1 -d /d/subdir2 pool
zpool status pool
# cleanup - remove the "# DELETEME_" part
# DELETEME_zpool destroy pool
# DELETEME_rm -rf /d
--------
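To see which path actually ends up recorded on disk, the vdev labels can be dumped directly. A minimal sketch, assuming the file layout from the test above; the "path" field in each label is where ZFS stores the device name:

--------
# Untested sketch: dump the vdev labels of one mirror half and look
# for the recorded device path of each leaf vdev.
zdb -l /d/subdir1/f1 | grep path
--------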
Cindy Swearingen
2009-Dec-04 17:47 UTC
[zfs-discuss] zpool import - device names not always updated?
Hi--

The problem with your test below was creating a pool by using the
components from another pool. This configuration is not supported.

We don't have a lot of specific information about using volumes,
other than using them as iSCSI and COMSTAR devices.

You might review our ZFS Best Practices Guide, here, for guidelines
on creating ZFS storage pools:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Cindy

On 12/03/09 15:26, Ragnar Sundblad wrote:
> I had absolutely no idea that ZFS volumes weren't supported
> as ZFS containers. Where can I find information about what
> is and what isn't supported for ZFS volumes?
> [...]
> I retried it with files instead, and it then worked exactly
> as expected.
> [...]
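For reference, the iSCSI/COMSTAR use of a volume that Cindy mentions looks roughly like the sketch below. This is an outline only, assuming the stmf and iscsi/target services are available; rpool/iscsivol is a made-up name and the LU GUID is a placeholder taken from sbdadm's output:

--------
# Hypothetical COMSTAR export of a zvol (rpool/iscsivol and the GUID
# are placeholders).
svcadm enable stmf
zfs create -V 10G rpool/iscsivol
sbdadm create-lu /dev/zvol/rdsk/rpool/iscsivol   # prints the LU GUID
stmfadm add-view <GUID-from-sbdadm>              # expose the LU to hosts
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target                              # create an iSCSI target
--------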
Ragnar Sundblad
2009-Dec-05 22:17 UTC
[zfs-discuss] zpool import - device names not always updated?
On 4 dec 2009, at 18.47, Cindy Swearingen wrote:

> Hi--
>
> The problem with your test below was creating a pool by using the
> components from another pool. This configuration is not supported.
>
> We don't have a lot of specific information about using volumes,
> other than using them as iSCSI and COMSTAR devices.
>
> You might review our ZFS Best Practices Guide, here, for guidelines
> on creating ZFS storage pools:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> Cindy

Thank you!

To me it was news that ZFS volume dsk/rdsk devices weren't
supported as containers for ZFS file systems. A bit surprising,
too. The ZFS Best Practices Guide doesn't say very much about
this. I am even more surprised that a file in ZFS *is* supported
for ZFS file systems when volume devices aren't; that wasn't
too obvious to me.

Can Cindy or someone else please comment on what is and what
isn't supported among the following, which we currently use or
plan to use:

- UFS in a ZFS volume, mounted locally?

- a ZFS volume, iSCSI exported (soon to be COMSTAR), locally
  imported again, and with a ZFS pool in it locally
  imported/mounted?

Thanks!

/ragge
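For reference, the first case in the list above (UFS in a ZFS volume, mounted locally) would look roughly like this. A minimal sketch with made-up names; it illustrates the mechanics only and says nothing about whether the configuration is supported:

--------
# Hypothetical UFS-on-zvol setup (rpool/ufsvol is a placeholder name).
zfs create -V 1G rpool/ufsvol
echo y | newfs /dev/zvol/rdsk/rpool/ufsvol   # newfs uses the raw device
mount /dev/zvol/dsk/rpool/ufsvol /mnt        # mount uses the block device
--------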