With ZFS, we can set copies=[1,2,3] to configure how many copies of each data block are stored. With copies=2 or more, in theory, an entire disk can have read errors and the ZFS filesystem still works. The unfortunate part is that the redundancy lives in the filesystem, not in the pool's vdevs as with raidz or mirroring. So if a disk goes missing, the zpool (a stripe) is missing a vdev and goes offline. If a single disk in a raidz vdev is missing, the pool becomes degraded but is still usable. Now, with non-redundant stripes, the disk can't be replaced, but with copies=2 all the data is still there if a disk dies. Is there not a way to force the zpool online, or to prevent it from offlining itself? One of the key benefits of the metadata copies is that if a single block fails, the filesystem is still navigable, so you can grab whatever data is recoverable.
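For reference, here is roughly the setup I mean; a minimal sketch, where the pool and device names (tank, c0t0d0, c0t1d0) are only placeholders:

    # two-disk stripe: no vdev-level redundancy
    zpool create tank c0t0d0 c0t1d0

    # keep two copies of every data block written to tank/data from now on
    zfs set copies=2 tank/data

    # verify the setting
    zfs get copies tank/data

Note that copies=2 only applies to blocks written after the property is set; blocks written earlier keep however many copies they were written with.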
On Fri, 5 Dec 2008, Mike Brancato wrote:

> With ZFS, we can set copies=[1,2,3] to configure how many copies
> of each data block are stored. With copies=2 or more, in theory, an
> entire disk can have read errors and the ZFS filesystem still works.

So you are saying that if we use copies of 2 or more, and we have only one disk drive which does not spin up, then we should be ok? My understanding is that the copies function is purely statistical: if some drives are overloaded and therefore not selected as the next round-robin device, it is possible that the several copies end up on the same drive. The copies functionality is intended to help with media failure, not whole-drive failure.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
In theory, with 2 80GB drives, you would always have a copy somewhere else. But on a single drive, no. I guess I'm thinking of the optimal situation. With multiple drives, copies are spread across the vdevs. It would work better if we could require that, with copies=2 or more, at least one copy land on a different vdev.
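To see where the copies actually land, one rough way (a sketch only; the file name, object number, and exact output format here are illustrative) is to dump a file's block pointers with zdb:

    # get the file's object number (its inode number)
    ls -i /tank/data/file.txt

    # dump its block pointers; each block lists one DVA per copy
    zdb -ddddd tank/data <object-number>

    # a block pointer like DVA[0]=<0:...> DVA[1]=<1:...> means the two
    # copies live on vdev 0 and vdev 1; DVA[0]=<0:...> DVA[1]=<0:...>
    # would mean both copies ended up on the same vdev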
Mike Brancato wrote:

> With ZFS, we can set copies=[1,2,3] to configure how many copies of
> each data block are stored. With copies=2 or more, in theory, an
> entire disk can have read errors and the ZFS filesystem still works.

No, this is not a completely true statement.

> The unfortunate part is that the redundancy lives in the filesystem,
> not in the pool's vdevs as with raidz or mirroring. So if a disk goes
> missing, the zpool (a stripe) is missing a vdev and goes offline. If a
> single disk in a raidz vdev is missing, the pool becomes degraded but
> is still usable. Now, with non-redundant stripes, the disk can't be
> replaced, but with copies=2 all the data is still there if a disk
> dies. Is there not a way to force the zpool online, or to prevent it
> from offlining itself?

No. If you want this feature with copies>1, then consider mirroring.

> One of the key benefits of the metadata copies is that if a single
> block fails, the filesystem is still navigable, so you can grab
> whatever data is recoverable.

Yes. But you cannot guarantee that the metadata copies are on different vdevs.
 -- richard
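As a rough sketch of the mirrored alternative (device names are placeholders): with a mirror the redundancy sits at the vdev level, so the pool survives a whole missing disk instead of going offline:

    # two-way mirror: the pool reports DEGRADED, but stays online and
    # usable, if one disk dies
    zpool create tank mirror c0t0d0 c0t1d0

    # a failed disk can then be replaced and resilvered
    zpool replace tank c0t0d0 c0t2d0

An existing single-disk vdev can also be upgraded in place with "zpool attach tank c0t0d0 c0t1d0", which turns it into a two-way mirror.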