Hi Derek,
Here's the latest email I've received from the zfs-discuss alias.
------------- Begin Forwarded Message -------------
Date: Mon, 18 Sep 2006 23:55:27 -0400
From: Jonathan Edwards <Jonathan.Edwards@sun.com>
Subject: Re: [zfs-discuss] ZFS and HDS ShadowImage
To: Eric Schrock <eric.schrock@sun.com>
Cc: zfs-discuss@opensolaris.org, Torrey McMahon <Torrey.McMahon@sun.com>,
Joerg Haederli <Hans-Joerg.Haederli@sun.com>, j.haederli@sun.com
On Sep 18, 2006, at 23:16, Eric Schrock wrote:
>
>> Here's an example: I've three LUNs in a ZFS pool offered from my
>> HW raid array. I take a snapshot onto three other LUNs. A day later
>> I turn the host off. I go to the array and offer all six LUNs to the
>> host: the three that were in use in the pool as well as the three
>> snapshot LUNs I took the day before. The host comes up and
>> automagically adds all the LUNs with correct /dev/dsk entries.
>>
>> What happens?
>
> ZFS will use the existing pool as defined in the cache file, which in
> this case will still contain the correct devices. The new mirrored LUNs
> will not be used. They will not show as available pools to import
> because the pool GUID is in use. A reasonable bug is to report this
> inconsistency (ostensibly part of a pool but not present in the current
> config), though there are some tricky edge conditions. A more
> complicated RFE would be to detect this as a self-consistent version of
> the same pool, and have a way to change the GUID on import.
>
> If you export the pool before you poweroff the host, and then want to
> import one of the two pools, the version with the most recent uberblock
> will "win". If they both have the same uberblock (i.e. are really the
> identical mirror), the results are non-deterministic. Depending on the
> order in which devices are discovered, you may end up with one pool or
> the other, or some combination of both.
ah .. there we go - so we have an interaction between the uberblock
date and prioritization on the import .. very keen. The
non-deterministic case is well known in other self-describing pools or
diskgroups (e.g. vxdg), and is where RFE/bug 6385531 against Leadville
came from: to provide more options for sites that lack flexibility on
the SAN and presentation ports to mask out replicated disks.
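To make that concrete, here is roughly how I'd expect the sequence to
read from the CLI (the pool name below is made up for illustration,
not taken from the thread):

    # 'tank' (hypothetical) is the pool built on the three primary LUNs
    zpool status tank

    # while tank is imported via the cache file, the ShadowImage copies
    # of its LUNs should not show up here as an importable pool, since
    # the pool GUID is already in use
    zpool import

    # after an export both sets of LUNs carry the same GUID; on import
    # the set with the newer uberblock should win, and truly identical
    # copies make the result depend on device discovery order
    zpool export tank
    zpool import tank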
I guess there are a couple of corner cases that you may have already
considered that would be good to explain:
1) If the zpool was imported when the split was done, can the
secondary pool be imported by another host if the /dev/dsk entries
are different? I'm assuming that you could simply use the -f
option .. would the guid change?
2) If the guid does indeed change could this zpool then be imported
back on the first host at the same time by specifying the secondary
guid instead of the pool name?
3) Can the same zpool be mounted on two separate hosts at the same
time .. in other words what happens when a second host tries to
import -f a zpool that's already mounted by the first host? (see
the sketch below)
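For (2) and (3), a rough sketch of what I'd expect to try; the GUID
and names here are made up, and whether a distinct guid exists at all
depends on the answer to (1):

    # list exported/foreign pools visible to this host, with their GUIDs
    zpool import

    # import a specific copy by its numeric GUID under a new name, so it
    # can sit alongside the original pool on the same host
    zpool import 1234567890123456789 tank_copy

    # for (3): forcing the import of a pool that another host still has
    # imported is not coordinated by ZFS and can corrupt the pool
    zpool import -f tank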
Jonathan
------------- End Forwarded Message -------------