Hi, I'm probably missing some point here, ... I installed the latest GnuSolaris dist and was playing around with ZFS, and found a weird behavior. I created a RAID-Z pool with 3 disks, which looks like this:

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
data                   27.8G   4.67G   23.1G    16%  ONLINE     -

# zpool iostat -v data
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        4.67G  23.1G      0      0      0  59.1K
  raidz     4.67G  23.1G      0      0      0  59.1K
    c0d1        -      -      0      0      2  29.6K
    c1d0        -      -      0      0      3  29.6K
    c1d1        -      -      0      0      3  29.6K
----------  -----  -----  -----  -----  -----  -----

I made a file on the ZFS filesystem, ... and it looks like this:

# df -k
data                 28870549 4893944 23976605    17%    /data
# ls -lk
-rw-r--r--   1 root     root     3262336 Nov 21 06:12 myfile

The two different sizes correlate by a factor of 2/3 (N-1 RAID). df -k also shows the capacity of all 3 disks as if they were striped.

Is this a bug in GnuSolaris, a feature of ZFS, or did I miss a point in the documentation?
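(For anyone checking the arithmetic: a minimal Python sketch, using only the numbers from the output above, that compares the ls -lk file size against the df -k space used. The ratio comes out to roughly 2/3, i.e. (N-1)/N for a 3-disk raidz, as the poster infers.)

# Quick check using the numbers posted above: the file's logical size
# from ls -lk versus the space charged against the pool in df -k.
logical_kb = 3262336      # myfile, as shown by ls -lk
charged_kb = 4893944      # "used" column of df -k for /data
ndisks = 3                # 3-disk raidz

observed = logical_kb / charged_kb
expected = (ndisks - 1) / ndisks   # data fraction of a single-parity raidz

print(f"observed ratio: {observed:.4f}")   # ~0.6666
print(f"(N-1)/N:        {expected:.4f}")   # 0.6667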
It's not you, it's us.

Internally, ZFS represents the space in any vdev in something called a space map. For single disks, or for simple mirrors, the space in the space map is identical to the actual usable space. For RAID-Z, however, the space map represents the total data + parity space. I'll explain the reasons for that in an upcoming blog entry. But the net result is that what you observe as an end user today isn't quite right. We need to scale the number by (N-1)/N, exactly as you inferred.

There's an open bug on this which we'll fix in the next couple of weeks.

Sorry for the confusion,

Jeff
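(For illustration only, not the actual ZFS code: a small Python sketch of the (N-1)/N scaling Jeff describes, applied to the 27.8G raw figure that zpool list reports for this 3-disk raidz.)

# Illustrative sketch of the scaling described above -- not ZFS source code.
# RAID-Z space maps count data + parity, so the user-visible figure for an
# N-disk single-parity raidz would be scaled by (N-1)/N.

def raidz_usable(raw_bytes: int, ndisks: int) -> int:
    """Approximate usable space of an N-disk single-parity raidz vdev."""
    assert ndisks >= 2
    return raw_bytes * (ndisks - 1) // ndisks

raw = int(27.8 * 2**30)                     # 27.8G raw, as in zpool list
usable = raidz_usable(raw, 3)
print(f"usable ~ {usable / 2**30:.1f}G")    # ~18.5G rather than 27.8G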