Already making use of it, thank you!

http://www.justinconover.com/blog/?p=17

I took 6 x 250 GB disks and tried raidz2, raidz, and no redundancy.

# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
# df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    915G    49K   915G     1%    /zfs
# zpool destroy -f zfs

Plain old raidz (RAID-5-ish):

# zpool create zfs raidz c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
# df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    1.1T    41K   1.1T     1%    /zfs
# zpool destroy -f zfs

Or no raidz at all:

# zpool create zfs c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
# df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    1.3T     1K   1.3T     1%    /zfs

So even though I'm losing about 400 GB of space, this is a backup server for my wife, a professional photographer, and losing her photos would be worse than losing space ;)

This message posted from opensolaris.org
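A rough back-of-the-envelope check of those numbers (my own figures, assuming each 250 GB drive shows up as roughly 232 GiB once df counts in binary units):

  raidz2 (2 parity disks):  4 x ~232 GiB ~= 930 GiB  -> df shows 915G
  raidz  (1 parity disk):   5 x ~232 GiB ~= 1.14 TiB -> df shows 1.1T
  plain stripe:             6 x ~232 GiB ~= 1.36 TiB -> df shows 1.3T

So the ~400 GB given up to raidz2 is simply the two parity disks' worth of capacity, in exchange for being able to survive any two disk failures.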
Is this a bug?

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs         14.2G  1.35T      0     62      0  5.46M
  raidz2    14.2G  1.35T      0     62      0  5.46M
    c0d0        -      -      0     60      0  1.37M
    c1d0        -      -      0     58      0  1.37M
    c2d0        -      -      0     60      0  1.37M
    c3d0        -      -      0     58      0  1.37M
    c7d0        -      -      0     58      0  1.37M
    c8d0        -      -      0     49      0  1.37M
----------  -----  -----  -----  -----  -----  -----

This shows 1.35T of space.

# df -h /export/home/amy/
Filesystem             size   used  avail capacity  Mounted on
zfs/home/amy           915G   9.7G   905G     2%    /export/home/amy

6 x 250 GB would give about 1.3T, but since it's raidz2 only 4/6 of that is usable, right? So zpool iostat seems to be reporting the wrong number.

This message posted from opensolaris.org
> Is this a bug?
>
> [zpool iostat output snipped]
>
> This shows 1.35T of space ... 6 x 250 GB would give about 1.3T, but since
> it's raidz2 only 4/6 of that is usable, right? So zpool iostat seems to be
> reporting the wrong number.

I believe that is a known issue: "6308817 discrepancy between zfs list and zpool usage stats". I saw the same behaviour with plain raidz. See this post:

http://www.opensolaris.org/jive/thread.jspa?messageID=42716

This message posted from opensolaris.org
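In other words (my own summary of that bug, stated loosely), the two tools are counting different things: zpool iostat and zpool list report the raw space across all six disks, about 6 x 232 GiB ~= 1.36 TiB, while zfs list and df report what is usable after raidz2 parity, roughly 4/6 of that, which lines up with the 915G above. A quick way to see the two views side by side (just a sketch, using the pool name from the posts above):

# zpool list zfs      # raw pool size, parity included (the ~1.35T figure)
# zfs list zfs        # usable space after parity, matching what df shows (~915G)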