Posted for my friend Marko:

I've been reading up on ZFS with the idea of building a home NAS. My ideal home NAS would have:

- high performance via striping
- fault tolerance through selective use of the multiple copies attribute
- cheap, with the most efficient space utilization possible (not raidz, not mirroring)
- scalability

I was hoping to start with four 1TB disks in a single striped pool, with only some filesystems set to copies=2 (see the sketch at the end of this post). I would be able to survive a single disk failure for the data that lives on the copies=2 filesystems, trusting that there was enough free space across multiple disks that the copies=2 writes were not placed on the same physical disk. I could grow the pool just by adding single disks. At some point I would switch to copies=3 to improve my chances of surviving two disk failures. The block checksums would be useful for early detection of failing disks.

The major snag I discovered is that if a striped pool loses a disk, I can still read and write the surviving data, but I cannot reboot and remount the partial stripe, even with -f. For example, if I lost some of my "single copies" data, I'd still like to access the good data, pop in a new (potentially larger) disk, re-"cp" the important data to rebuild the multiple copies, and not have to rebuild the entire pool structure.

So the feature request would be for ZFS to allow selective disk removal from striped pools, with the resultant data loss, but with any data that survived, either by chance (living on the remaining disks) or by policy (multiple copies), still accessible.

Is there some underlying reason in ZFS that precludes this functionality? If the filesystem partially survives while a striped pool member disk fails and the box is still up, why not after a reboot?
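
For concreteness, here is a rough sketch of the setup I have in mind (the "tank" pool name and the cXtXdX device names are just placeholders):

  # Striped (non-redundant) pool across four 1TB disks
  zpool create tank c0t0d0 c0t1d0 c0t2d0 c0t3d0

  # Ordinary data keeps the default single copy
  zfs create tank/scratch

  # Important data gets two copies per block
  zfs create tank/important
  zfs set copies=2 tank/important

  # Grow the stripe later by adding a single disk
  zpool add tank c0t4d0

  # Eventually bump the important data to three copies
  zfs set copies=3 tank/important

As I understand it, the copies property only applies to data written after it is set, which is why re-copying the important data would be part of the recovery step.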