Robert Milkowski
2009-Dec-03 18:10 UTC
[zfs-discuss] L2ARC re-uses new device if it is in the same "place"
Hi,
milek@r600:/rpool/tmp# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          /rpool/tmp/f1  ONLINE       0     0     0

errors: No known data errors
Let's add a cache device:
milek@r600:/rpool/tmp# zfs create -V 100m rpool/tmp/ssd2
milek@r600:/rpool/tmp# zpool add test cache /dev/zvol/dsk/rpool/tmp/ssd2
milek@r600:/rpool/tmp# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME                            STATE     READ WRITE CKSUM
        test                            ONLINE       0     0     0
          /rpool/tmp/f1                 ONLINE       0     0     0
        cache
          /dev/zvol/dsk/rpool/tmp/ssd2  ONLINE       0     0     0

errors: No known data errors
milek@r600:/rpool/tmp#
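(For completeness, the pool itself was created on a plain file vdev before the transcript starts. The exact original commands are not shown above, so the following is only an assumed reproduction of the starting point:)

```shell
# Assumed setup, not shown in the transcript: create a file-backed
# vdev and build the test pool on it. The file name f1 matches the
# zpool status output above; the 200m size is a guess.
mkfile 200m /rpool/tmp/f1
zpool create test /rpool/tmp/f1
zpool status test
```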
Now let's export the pool, re-create the zvol, and then import the pool again:
milek@r600:/rpool/tmp# zpool export test
milek@r600:/rpool/tmp# zfs destroy rpool/tmp/ssd2
milek@r600:/rpool/tmp# zfs create -V 100m rpool/tmp/ssd2
milek@r600:/rpool/tmp# zpool import -d /rpool/tmp/ test
milek@r600:/rpool/tmp# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME                            STATE     READ WRITE CKSUM
        test                            ONLINE       0     0     0
          /rpool/tmp/f1                 ONLINE       0     0     0
        cache
          /dev/zvol/dsk/rpool/tmp/ssd2  ONLINE       0     0     0

errors: No known data errors
milek@r600:/rpool/tmp#
No complaint here...
I'm not entirely sure that it should behave this way - in some
circumstances it could be risky.
For example, what if a zvol/SSD/disk which is used on one server as a cache
device has the same path on another server, and then a pool is imported
there? Would l2arc just blindly start using it as a cache device and
overwrite some other data?
Shouldn't l2arc devices have a label/signature, or at least use the UUID of
a disk, so that during import it can be checked whether it is the same
device? Or maybe they do, and there is some other issue here with
re-creating the zvol...
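One way to poke at this is to dump the on-disk labels of the freshly re-created zvol with zdb; if the labels come back empty or failed, the pool evidently accepted the device without any valid label on it. A sketch, using the same paths as in the transcript above:

```shell
# Dump the four vdev labels of the re-created cache zvol. On a brand-new
# zvol one would expect "failed to unpack label" (or blank labels),
# which would suggest the import did not verify a label before reusing
# the device as L2ARC.
zdb -l /dev/zvol/dsk/rpool/tmp/ssd2
```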
btw: x86, snv_127
--
Robert Milkowski
http://milek.blogspot.com