Consider the following scenario involving various failures.
We have a zpool composed of a simple mirror of two devices D0 and D1
(these may be local disks, slices, LUNs on a SAN, or whatever). For the
sake of this scenario, it's probably most intuitive to think of them as
LUNs on a SAN. Initially, all is well and both halves of the mirror are
in sync; the data on D0 is fully consistent with that on D1.
Then there is a failure such that D1 becomes disconnected. ZFS
continues writing to D0. If D1 were later reconnected, it would be
resilvered normally and all would be well.
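(For concreteness, roughly the sequence I have in mind; the pool
name "tank" and the device names are just placeholders:

    # create the mirrored pool (D0 = c1t0d0, D1 = c1t1d0)
    zpool create tank mirror c1t0d0 c1t1d0

    # after D1 drops out and comes back, bringing it back online
    # kicks off a resilver automatically
    zpool online tank c1t1d0

    # watch the resilver progress
    zpool status tank
)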
But suppose instead there is a crash, and when the system reboots it is
connected only to D1, and D0 is not available. Does ZFS have any way to
know that the data on D1 (while self-consistent) is stale and should not
be used?
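(One way to poke at this by hand might be to dump the vdev labels
on each device and compare the transaction group numbers; this is
just a sketch, and the exact zdb output varies by release:

    # print the labels on each disk; the txg field shows how
    # recent each device's view of the pool is
    zdb -l /dev/dsk/c1t0d0s0
    zdb -l /dev/dsk/c1t1d0s0

But what I'd really like to know is whether ZFS itself checks
something like this at import time.)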
The specific case of interest is not necessarily a single-server
environment (although thinking of just one server simplifies the
scenario without reducing it too far), but a cluster where ZFS is used
as a fail-over file system and connectivity issues are more likely to
arise. SVM does have a means of detecting this scenario and will
refuse to use the stale submirror.
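(With SVM, the state database replicas track which submirror is
current, and the stale half gets flagged; e.g., assuming a mirror
metadevice named d10:

    # a stale/errored submirror shows up in the state output
    # (typically as needing maintenance)
    metastat d10
)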
Thanks.
--
--Ed