Hello,

Does anyone here have experience or thoughts regarding the use of ZFS on thin devices?

Since ZFS is copy-on-write it will not play nicely with this feature: it spreads its blocks all over the space it has been given, and it currently has no way to tell the storage array which blocks have been freed, since SCSI UNMAP/TRIM is not implemented in ZFS (though TRIM support was added for SATA in b146).

Reclaiming disk space also seems a bit problematic, since all data is spread across the disks, including metadata, so even if you write the whole pool full of zeroes it will be mixed with non-zero data in the form of metadata. The vendor I am looking at requires 768K of zeroes to do a reclaim.

I have done some initial quick tests to see whether updates that do not increase the size of the data on disk end up with ZFS reusing the blocks, rather than spreading out over new blocks all the time, but it seems to keep claiming new blocks. (This was with S10U9; it may have changed with zpool recover. I know ZFS in the past was supposed to reuse blocks to take advantage of the fastest parts of the disks?)

There is an RFE for this, but I would like to know if someone has had experience with this in its current state:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913905

Regards,
Henrik
http://sparcv9.blogspot.com
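As a rough sketch of the "write the pool full of zeroes" reclaim approach mentioned above (the mountpoint and file name are hypothetical, and it assumes the array reclaims regions that are entirely zero):

    #!/usr/bin/env python3
    # Sketch only: fill free space in a ZFS dataset with zeros so a
    # thin-provisioned array that detects zeroed regions can reclaim them.
    # Mountpoint and chunk size are illustrative, not taken from the thread.

    import os

    MOUNTPOINT = "/tank/scratch"     # hypothetical ZFS dataset mountpoint
    CHUNK = 768 * 1024               # EMC's quoted 768 KB reclaim granularity
    ZEROS = b"\0" * CHUNK

    path = os.path.join(MOUNTPOINT, "zerofill.tmp")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        while True:
            os.write(fd, ZEROS)      # keep writing zeros until the pool is full
    except OSError:                  # ENOSPC: no free space left to zero
        pass
    finally:
        os.close(fd)

    os.remove(path)                  # give the blocks back to ZFS
    # Caveat, as noted above: ZFS metadata is copy-on-write too and ends up
    # interleaved with these zeroed data blocks, so many 768 KB regions on the
    # thin device will not be completely zero and cannot be reclaimed.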
Hi Henrik,

Yes, I have the following concerns, and as I haven't done any practical tests this is only "in theory":

1. Reclaiming thin devices will not work: EMC has a 768 Kbyte minimum reclaim limit (the 768 Kbytes need to be all zeros), and HDS has 42 MB I believe; for IBM and 3PAR I don't know. The COW will eventually leave footprints of ~128 Kbytes, and as that is less than e.g. 768 Kbytes it will be difficult to reclaim that space.

2. ZFS doesn't have the possibility to "tell the array" to reclaim its blocks, and even if ZFS were able to do that in the future, how will 8-128 Kbyte blocks harmonize with the 768 Kbytes for EMC and 42 MB for HDS? (See the rough calculation after this message.)

3. I have concerns about the COW behaviour: will this create a "random read" situation for the spindles "in my 450 TB array"?

4. COW has no performance advantage once the I/O arrives at the storage array, as writes are always written to the cache (if it's not full).

5. We are an EMC V-MAX customer that has decided to only use thin devices, one reason being that we plan to use FAST VP in the future with SSD, FC and SATA disks installed in the array. My belief is that in a ZFS + thin device environment this feature will not be able to interact with the (future) FAST VP algorithm, as the COW will spread its blocks in a non-predictable way.

6. The cost savings will not be sufficient when comparing ZFS, free of charge, with thin-aware VxFS in our environment, as the over-allocation in the file systems is significant.

7. The possibility to over-provision a server, when thin devices work 100%, can be administratively positive, as the organization doesn't need to spend time on provisioning "as often" in a thin-aware configuration, since the real disk is not consumed.

8. We don't use this, but I would like to have a "backup mount server" dealing with our major large Oracle databases; I believe that snapshots of a ZFS file system can't be exported outside the server.

Ps: I do understand that ZFS will make the administration easier from a Solaris perspective.

/Laban
-- 
This message posted from opensolaris.org
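To put the granularity mismatch in point 2 in numbers, here is a back-of-the-envelope calculation. The reclaim sizes are the ones quoted in this thread; the assumption that the free or zeroed ZFS blocks must be contiguous and aligned within one reclaim unit is mine:

    # Quick arithmetic on ZFS block sizes vs. array reclaim granularity.
    RECLAIM_UNIT = {"EMC V-MAX": 768 * 1024, "HDS": 42 * 1024 * 1024}
    ZFS_BLOCK_SIZES = [8 * 1024, 128 * 1024]   # the 8-128 KB range mentioned above

    for array, unit in RECLAIM_UNIT.items():
        for bs in ZFS_BLOCK_SIZES:
            blocks = unit // bs
            print("%-9s: %5d contiguous %3d KB ZFS blocks must be free or zero "
                  "per %d KB reclaim unit" % (array, blocks, bs // 1024, unit // 1024))

That works out to 6 contiguous 128 KB blocks (or 96 at 8 KB) per EMC reclaim unit, and 336 contiguous 128 KB blocks (or 5376 at 8 KB) per HDS unit; with COW scattering new writes across the pool, getting that many adjacent blocks to line up inside one reclaim unit seems unlikely, which is the heart of the concern.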