Hi,

This has already been the source of a lot of interesting discussions, but so far I haven't found the ultimate conclusion. From some discussion on this list in February, I learned that an entry in ZFS's deduplication table takes (in practice) half a KiB of memory. At the moment my data look like this (output of zdb -D):

DDT-sha256-zap-duplicate: 3299796 entries, size 350 on disk, 163 in core
DDT-sha256-zap-unique: 9727611 entries, size 333 on disk, 151 in core

dedup = 1.73, compress = 1.20, copies = 1.00, dedup * compress / copies = 2.07

So the DDT contains a total of 13,027,407 entries, which at 512 bytes per entry means it is about 6,670,032,384 bytes big. Suppose our data grow by a factor of 12; the table would then take about 80 GB. So it would be best to buy a 128 GB SSD as L2ARC cache. Correct?

Thanks for enlightening me,

-- 
Frank Van Damme
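
P.S. In case it helps, here is a minimal sketch of the arithmetic in Python. The 512 bytes per entry and the factor-of-12 growth are my own assumptions from the reasoning above, not numbers reported by zdb:

# Rough DDT memory estimate, assuming ~512 bytes of RAM per DDT entry
# (the "half a KiB" figure from the earlier discussion on this list).
BYTES_PER_ENTRY = 512

duplicate_entries = 3_299_796   # from DDT-sha256-zap-duplicate
unique_entries    = 9_727_611   # from DDT-sha256-zap-unique
total_entries = duplicate_entries + unique_entries   # 13,027,407

current_bytes = total_entries * BYTES_PER_ENTRY      # ~6.67e9 bytes
growth_factor = 12                                   # assumed data growth
future_bytes  = current_bytes * growth_factor        # ~8.0e10 bytes

print(f"current DDT size: {current_bytes / 2**30:.1f} GiB")
print(f"after 12x growth: {future_bytes / 2**30:.1f} GiB "
      f"({future_bytes / 1e9:.0f} GB)")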