> Two questions about plans for the future:
> - Another thread mentioned that vdev removal with data migration to
> other vdevs is in the works. Once that is there, are there any plans
> to auto-migrate data away from vdevs that have lost redundancy?
Yes, and we have something in the works for allocation policy as well.
ZFS always allocates new blocks when writing, so if we discover during
a write that the vdev we selected is offline or degraded, we can
just pick another vdev. We can keep doing this as long as there are
any healthy vdevs available.
And here's the kicker: if the device outage is transient, then when
the device comes back online, there will be *nothing* to resilver!
> - At this point I believe that dynamic striping write placement is
> largely (completely?) based on relative fullness of the vdevs.
Right.
> Are there plans to use disk performance feedback to influence it?
Yes. Ideally, you want to distribute the I/O such that it all completes
at the same time -- that means every disk is working as hard as it can.
But this will have to be balanced against fullness, health, etc.
There's a lot of fertile ground here. The good news is that all of
these allocation decisions are pure policy: none of it affects the
on-disk format, so we can change it all we want as newer and better
ideas come along. We might even have different block pickers for
different workloads (this is what the references to "picker_init"
and "picker_fini" in metaslab.c are referring to). As always,
though, we'll try to keep this logic automatic and feedback-driven
so that it does the right thing without user intervention.
Jeff