Hector De Jesus
2008-Jan-31 18:09 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
Hello Sun gurus. I do not know if this is supported: I have created a zpool consisting of SAN resources and created a ZFS file system on it. Using third-party software I have taken snapshots of all the LUNs in the zpool. My question is, in a recovery situation, is there a way for me to mount the snapshots and import the copied pool while the original is still active? Right now all I am able to do is export the original; then I can import the snapshots and access the zpool and file system. I'm looking for a no-downtime solution. Can this be done?
Tim
2008-Jan-31 19:03 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
On 1/31/08, Hector De Jesus <hec1979 at gmail.com> wrote:
> My question is, in a recovery situation, is there a way for me to mount
> the snapshots and import the copied pool while the original is still
> active? Right now all I am able to do is export the original; then I can
> import the snapshots and access the zpool and file system. I'm looking
> for a no-downtime solution. Can this be done?

Interesting thought. I guess what you'd have to have is an import flag that allows you to "import as", so you could:

  zpool import --import-as yourpool.backup yourpool

I definitely see why you'd want to do it. I haven't a clue if you can :)

--Tim
Dave Lowenstein
2008-Jan-31 20:05 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
Nope, doesn't work.

Try presenting one of those LUN snapshots to your host, run cfgadm -al, then run zpool import:

  # zpool import
  no pools available to import

It would make my life so much simpler if you could do something like this:

  zpool import --import-as yourpool.backup yourpool

Tim wrote:
> Interesting thought. I guess what you'd have to have is an import flag
> that allows you to "import as", so you could:
>
>   zpool import --import-as yourpool.backup yourpool
>
> I definitely see why you'd want to do it. I haven't a clue if you can :)
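
For reference, a minimal sketch of the sequence being described here; the pool name u001 is taken from later in this thread, and the steps assume the array's snapshot LUNs have been presented back to the same host:

  # cfgadm -al         (rescan and confirm the snapshot LUNs are attached)
  # format             (confirm Solaris actually sees the new LUNs)
  # zpool import       (list pools found on not-yet-imported devices)
  # zpool import u001  (import by name; nothing to import is found in this case)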
Darren J Moffat
2008-Feb-04 14:35 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
Dave Lowenstein wrote:
> Nope, doesn't work.
>
> Try presenting one of those lun snapshots to your host, run cfgadm -al,
> then run zpool import.
>
> # zpool import
> no pools available to import

Does format(1M) see the LUNs? If format(1M) can't see them, it is unlikely that ZFS will either.

> It would make my life so much simpler if you could do something like
> this: zpool import --import-as yourpool.backup yourpool

From zpool(1M):

     zpool import [-o mntopts] [-o property=value] ... [-d dir |
         -c cachefile] [-D] [-f] [-R root] pool | id [newpool]

         Imports a specific pool. A pool can be identified by its
         name or the numeric identifier. If newpool is specified,
         the pool is imported using the name newpool. Otherwise,
         it is imported with the same name as its exported name.

  # zpool import foopool barpool

--
Darren J Moffat
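
As an aside, a sketch of how that rename-on-import reads when the copy does show up as an importable pool; the numeric identifier and the new name below are placeholders for illustration, the useful detail being that the id form disambiguates two pools that carry the same name:

  # zpool import                           (note the numeric id listed for the copy)
  # zpool import <numeric-id> u001_backup  (import that copy under a new name)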
Dave Lowenstein
2008-Feb-04 19:15 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
Try it, it doesn't work. Format sees both, but you can't import a clone of pool "u001" if pool "u001" is already imported, even by giving it a new name.

Darren J Moffat wrote:
> Does format(1M) see the LUNs? If format(1M) can't see them, it is
> unlikely that ZFS will either.
>
> # zpool import foopool barpool
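
Concretely, the failing attempt looks something like the following; the pool name comes from this thread, the rest is a sketch, and output is omitted since, as described above, the cloned LUNs never show up as an importable pool in the first place:

  # zpool status u001              (original pool, still imported and active)
  # cfgadm -al                     (snapshot LUNs of the same devices now presented)
  # zpool import                   (reports no pools available to import)
  # zpool import u001 u001_backup  (rename on import; fails, no such pool is offered)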
Jim Dunham
2008-Feb-05 01:46 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
Darren J Moffat wrote:
> zpool import [-o mntopts] [-o property=value] ... [-d dir |
>     -c cachefile] [-D] [-f] [-R root] pool | id [newpool]
>
>     Imports a specific pool. A pool can be identified by its
>     name or the numeric identifier. If newpool is specified,
>     the pool is imported using the name newpool. Otherwise,
>     it is imported with the same name as its exported name.

Given that the pool is a snapshot of one or more vdevs in an existing ZFS storage pool, not only is the "name" identical, so is the "numeric identifier". It turns out that when using "zpool import ...", duplicates are suppressed, even if those duplicates are entirely separate vdevs containing block-based snapshots, physical copies, remote mirrors or iSCSI targets.

The steps to reproduce this behavior on a single node, using files and standard Solaris utilities, are as follows:

# mkfile 500m /var/tmp/pool_file
# zpool create pool /var/tmp/pool_file
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        pool                  ONLINE       0     0     0
          /var/tmp/pool_file  ONLINE       0     0     0

errors: No known data errors

# zpool export pool
# dd if=/var/tmp/pool_file of=/var/tmp/pool_snapshot
{ wait, wait, wait, ... more on this later ... }
1024000+0 records in
1024000+0 records out

# zpool import -d /var/tmp
  pool: pool
    id: 14424098069460077054
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        pool                    ONLINE
          /var/tmp/pool_file    ONLINE

Question: What happened to the other ZFS storage pool, called pool_snapshot?

Answer: Its presence is suppressed by zpool import. If one were to move /var/tmp/pool_file into some other directory, /var/tmp/pool_snapshot would now appear.

# mv /var/tmp/pool_file /var/pool_file
# zpool import -d /var/tmp
  pool: pool
    id: 14424098069460077054
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        pool                        ONLINE
          /var/tmp/pool_snapshot    ONLINE

At this point, if one were to go ahead with the import of pool (which would work) and then move /var/pool_file back to /var/tmp/pool_file, its presence would now be suppressed. Conversely, if the move were done first and a zpool import then attempted, again only one of the storage pools would exist at any given time.

Clearly there is some explicit suppressing of duplicate storage pools going on here. Browsing the ZFS code looking for an answer, the logic surrounding vdev_inuse() seems to cause this behavior, expected or not:

http://cvs.opensolaris.org/source/search?q=vdev_inuse&project=%2Fonnv

========

As mentioned earlier, the { wait, wait, wait, ... } can be eliminated by using Availability Suite Point-in-Time Copy, by itself or in combination with Availability Suite Remote Copy or the iSCSI Target, all of which are present in OpenSolaris today, and all of which are much faster than the dd utility.
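
For what it is worth, a rough sketch of what replacing the dd step with a Point-in-Time Copy set might look like; the volume paths below are hypothetical placeholders, Availability Suite operates on volumes rather than plain files, and the exact options should be checked against iiadm(1M):

# iiadm -e ind /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t2d0s0
                                 (master, shadow and bitmap volumes; "ind"
                                  requests an independent, full copy)
# iiadm -w /dev/rdsk/c1t1d0s0    (wait for the background copy to finish)
# iiadm -i all                   (show the status of configured copy sets)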
As one who supports both Availability Suite and the iSCSI Target, not suppressing duplicate pool names and pool identifiers, in combination with a rename on import ("zpool import -new <name> ..."), would provide a means to support various copies, or nearly identical copies, of a ZFS storage pool on the same Solaris host.

While browsing the ZFS source code, I noticed that "usr/src/cmd/ztest/ztest.c" includes ztest_spa_rename(), a ZFS test which renames a ZFS storage pool to a different name, tests the pool under its new name, and then renames it back. I wonder why this functionality was not exposed as part of zpool support?

- Jim

Jim Dunham
Storage Platform Software Group
Sun Microsystems, Inc.
work: 781.442.4042
cell: 603-724-3972
http://blogs.sun.com/avs
http://www.opensolaris.org/os/project/avs/
http://www.opensolaris.org/os/project/iscsitgt/
http://www.opensolaris.org/os/community/storage/
eric kustarz
2008-Feb-06 15:30 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
> While browsing the ZFS source code, I noticed that "usr/src/cmd/ztest/ztest.c"
> includes ztest_spa_rename(), a ZFS test which renames a ZFS storage pool to a
> different name, tests the pool under its new name, and then renames it back.
> I wonder why this functionality was not exposed as part of zpool support?

See 6280547 "want to rename pools". It just hasn't been high on the priority list.

eric
Robert Milkowski
2008-Feb-11 09:06 UTC
[zfs-discuss] mounting a copy of a zfs pool / file system while original is still active
Hello Dave,

Monday, February 4, 2008, 7:15:54 PM, you wrote:

DL> Try it, it doesn't work.
DL> Format sees both but you can't import a clone of pool "u001" if pool
DL> "u001" is already imported, even by giving it a new name.

I guess that because you have an exact copy, you also have the same ZFS labels, which is confusing ZFS.

--
Best regards,
Robert                          mailto:milek at task.gda.pl
                                http://milek.blogspot.com
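
To see the duplicated identity being described here, one can dump the ZFS labels of the original device and of its clone; the device paths below are placeholders, and the point is only that both labels report the same pool name and pool_guid, which is why the clone is treated as a duplicate rather than as a separate pool:

  # zdb -l /dev/rdsk/c1t0d0s0      (a device backing the imported pool u001)
  # zdb -l /dev/rdsk/c2t0d0s0      (the array-level clone of that device)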