Hi,

It would be of great help to understand how the actually allocated disk space keeps getting reduced, as shown below (I understand that different filesystem/volume management implementations use different logic to set aside space reserved for metadata, etc.). If the following can be explained it would really help.

I allocated 80 x 25G LUNs to a Solaris server (CLARiiON LUNs), i.e. 80 * 25 * 1024 * 1024 * 1024 = 2147483648000 bytes. On the server, df -k reports:

root# df -k
Filesystem            kbytes       used       avail  capacity  Mounted on
data              2054062080  990583929  1063466706       49%  /data

I see a loss of (2147483648000 - 2054062080*1024) = 44124078080 bytes here.

These 80 LUNs were put into a ZFS pool, and the following sizes are reported:

root# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
data   1.94T   945G  1.02T   47%  ONLINE  -

root# zfs list data
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   945G  1014G   945G  /sasuser

As you can see, 2TB of LUNs is reported as 1.94T. I have seen an article on dsl_pool.c which shows that ZFS takes 1/64th of the disk size as a reserve for better allocation; however, even that does not explain this much loss of space in the reporting.

Warm Regards
Sharad
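P.S. As a quick sanity check of the numbers above, here is a minimal back-of-the-envelope sketch in Python. It is not authoritative about ZFS internals: it simply assumes GiB-based LUN sizes and the 1/64 reservation mentioned for dsl_pool.c, and compares them with the df -k figures quoted above.

# Back-of-the-envelope check of the sizes reported above.
# Assumptions: LUN sizes are GiB-based, and ZFS holds back 1/64 of the
# pool size as described for dsl_pool.c (both are assumptions, not facts
# about this particular array or pool).

LUNS = 80
LUN_GIB = 25

raw_bytes = LUNS * LUN_GIB * 1024**3     # 2,147,483,648,000 bytes presented to the host
df_kbytes = 2054062080                   # "kbytes" column from df -k
df_bytes = df_kbytes * 1024              # 2,103,359,569,920 bytes visible to the filesystem

missing = raw_bytes - df_bytes           # 44,124,078,080 bytes (~41 GiB) unaccounted for
reservation = raw_bytes // 64            # 33,554,432,000 bytes (~31 GiB) if ZFS keeps 1/64

print(f"raw        : {raw_bytes:>16,d} bytes")
print(f"df -k      : {df_bytes:>16,d} bytes")
print(f"missing    : {missing:>16,d} bytes ({missing / 1024**3:.1f} GiB)")
print(f"1/64 resrv : {reservation:>16,d} bytes ({reservation / 1024**3:.1f} GiB)")
print(f"remainder  : {missing - reservation:>16,d} bytes beyond the 1/64 reservation")

On those assumptions the 1/64 reservation accounts for roughly 31 GiB of the ~41 GiB gap, leaving about 10 GiB still unexplained, which is the part I am hoping someone can clarify.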