I think I can realistically add admin/user manageable per-dataset keys to
phase 1 without impacting the schedule, particularly if we add a separate
key management command (see my other email from today).
This means that each dataset would have a keytype property that is one
of the following:
typedef enum {
        ZFS_KEY_DSL_WRAP_POOL = 0,  /* Dataset key wrapped by pool key */
        ZFS_KEY_DSL,                /* Dataset specific key */
        ZFS_KEY_DSL_EPHEMERAL,      /* Ephemeral dataset key */
        ZFS_KEY_POOL                /* Single pool key */
} zfs_cryptokey_type_t;
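To show how I expect the keytype property to be consumed, here is a
quick sketch of the dispatch (the dsl_key_t type and the helper
functions are placeholders I made up for illustration, not code that
exists today):

#include <errno.h>

/* Placeholder for however the in-memory dataset key ends up represented. */
typedef struct dsl_key { int dk_dummy; } dsl_key_t;

/* Stub helpers; the real ones would live in the keystore/kcf glue. */
static int unwrap_with_pool_key(dsl_key_t *dk) { (void) dk; return (0); }
static int load_dataset_key(dsl_key_t *dk) { (void) dk; return (0); }
static int generate_ephemeral_key(dsl_key_t *dk) { (void) dk; return (0); }
static int load_pool_key(dsl_key_t *dk) { (void) dk; return (0); }

static int
zfs_crypto_key_setup(zfs_cryptokey_type_t keytype, dsl_key_t *dk)
{
        switch (keytype) {
        case ZFS_KEY_DSL_WRAP_POOL:
                /* Per-dataset key, stored wrapped; unwrap with the pool key. */
                return (unwrap_with_pool_key(dk));
        case ZFS_KEY_DSL:
                /* Per-dataset key provided by the admin/user. */
                return (load_dataset_key(dk));
        case ZFS_KEY_DSL_EPHEMERAL:
                /* Generated fresh at mount time, never persisted. */
                return (generate_ephemeral_key(dk));
        case ZFS_KEY_POOL:
                /* Single key shared by every dataset in the pool. */
                return (load_pool_key(dk));
        }
        return (EINVAL);
}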
At first ZFS_KEY_POOL and ZFS_KEY_DSL_WRAP_POOL appear similar. The
latter has the advantage that we can change the value of the wrapping
key after a dataset is destroyed, making that dataset's data
inaccessible.
I can't see any advantage to ZFS_KEY_POOL other than during development,
since it allows me to develop other parts without needing to implement
the wrapping of the dataset keys. ZFS_KEY_POOL also has a significant
disadvantage: unlike ZFS_KEY_DSL_WRAP_POOL, it means that all datasets
need to use the same keylength.
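To make that keylength point concrete, here is a very rough sketch of a
per-dataset wrapped key record (not an on-disk format proposal; the
struct and field names are made up for illustration):

#include <stdint.h>

/*
 * Sketch only.  With ZFS_KEY_DSL_WRAP_POOL each dataset carries its own
 * wrapped key blob and key length, so one dataset can use a 128-bit key
 * while another uses 256 bits.  Rekeying the pool wrapping key (bumping
 * dwk_wrap_keyver and rewrapping the live datasets) leaves any blob we
 * choose not to rewrap, e.g. that of a destroyed dataset, permanently
 * undecryptable.
 */
typedef struct dsl_wrapped_key {
        uint64_t dwk_keylen;        /* dataset key length in bits */
        uint64_t dwk_wrap_keyver;   /* pool wrapping key version used */
        uint8_t  dwk_wrapped[64];   /* dataset key wrapped by pool key */
} dsl_wrapped_key_t;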
I don't think I'd integrate ZFS_KEY_POOL support unless someone can
justify why it is useful given that we have ZFS_KEY_DSL_WRAP_POOL. One
possible case is to integrate with ZFS_KEY_POOL and ZFS_KEY_DSL first,
then add ZFS_KEY_DSL_WRAP_POOL as a phase 1.1 before we do phase 2; the
reason for considering that is the dependency on kcf support for wrap
and unwrap in a usable API.
Note also that I'm considering reintroducing the ephemeral key type -
if we are going to have a keytype property this one is easy to do and
has some uses even if it isn't usable for system swap.
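As a rough illustration of what I mean by ephemeral (a sketch only; the
random_get_bytes() call just stands in for whatever randomness
interface we end up using, and the function name is mine, not real
code):

#include <sys/types.h>
#include <sys/random.h>

/*
 * The ephemeral key is generated when the dataset is mounted and only
 * ever held in memory; it is never written to the pool or a keystore,
 * so once the dataset is unmounted (or the system reboots) the data is
 * unrecoverable.
 */
static int
setup_ephemeral_key(uint8_t *key, size_t keylen)
{
        return (random_get_bytes(key, keylen));
}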
--
Darren J Moffat