I'm seeing odd behaviour when I create a ZFS raidz pool using three
disks. The output of "zpool list" shows the pool size as the size
of the three disks combined (as if it were a RAID-0 volume). This
isn't expected behaviour, is it? When I create a mirrored volume in ZFS,
everything is as one would expect: the pool is the size of a single drive.
My setup:
Compaq DL380
Host OS: CentOS 4.3 (x86)
VMware Server Guest OS: Solaris Nevada Build 39
Host Memory: 4GB
Guest Memory: 1.5GB
Disks: 3 x 300GB Seagate SATA II drives (with ~25GB carved out of each for the
Host OS)
--
The commands I ran:
[b]bash-3.00# zpool create sata raidz c2t0d0 c2t1d0 c2t2d0
bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
sata                    838G    108K    838G     0%  ONLINE     -
bash-3.00# zpool status
pool: sata
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        sata        ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
errors: No known data errors
bash-3.00# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
sata                   145K   825G    49K  /sata
bash-3.00# zpool destroy -f sata
bash-3.00# zpool create sata mirror c2t0d0 c2t1d0
bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
sata                    278G   52.5K    278G     0%  ONLINE     -
bash-3.00# zpool status
pool: sata
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        sata        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
errors: No known data errors
bash-3.00# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
sata                  75.5K   274G  24.5K  /sata[/b]
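
Just to spell out what I was expecting (rough numbers, assuming single-parity raidz and ignoring label/metadata overhead): the mirror behaves as I'd expect, roughly one disk's worth of space, but for the raidz pool I'd expect only about two thirds of the raw 838G to be usable, not the 825G that zfs list shows above.

[b]echo "279.45 * (3 - 1)" | bc -l   # 3-disk raidz: one disk's worth of parity -> roughly 558.9 GiB usable
echo "279.45 * 1" | bc -l         # 2-disk mirror: both disks hold the same data -> roughly 279.45 GiB usable[/b]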
--
[b]bash-3.00# cat /tmp/disks.out
format> disk
selecting c2t0d0
Current partition table (original):
Total disk sectors available: 586055949 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34     279.45GB           586055949
  1 unassigned    wm                 0            0                   0
  2 unassigned    wm                 0            0                   0
  3 unassigned    wm                 0            0                   0
  4 unassigned    wm                 0            0                   0
  5 unassigned    wm                 0            0                   0
  6 unassigned    wm                 0            0                   0
  8   reserved    wm         586055950       8.00MB           586072333
format> disk
selecting c2t1d0
Current partition table (original):
Total disk sectors available: 586055949 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34     279.45GB           586055949
  1 unassigned    wm                 0            0                   0
  2 unassigned    wm                 0            0                   0
  3 unassigned    wm                 0            0                   0
  4 unassigned    wm                 0            0                   0
  5 unassigned    wm                 0            0                   0
  6 unassigned    wm                 0            0                   0
  8   reserved    wm         586055950       8.00MB           586072333
format> disk
selecting c2t2d0
Current partition table (original):
Total disk sectors available: 586055949 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34     279.45GB           586055949
  1 unassigned    wm                 0            0                   0
  2 unassigned    wm                 0            0                   0
  3 unassigned    wm                 0            0                   0
  4 unassigned    wm                 0            0                   0
  5 unassigned    wm                 0            0                   0
  6 unassigned    wm                 0            0                   0
  8   reserved    wm         586055950       8.00MB           586072333
[/b]
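
Working from those partition tables (my own arithmetic, assuming 512-byte sectors), slice 0 on each disk is about 279.45 GiB, so the 838G that zpool list reports for the raidz pool is simply all three slices added together, and the 278G for the mirror is roughly one slice:

[b]echo "586055949 * 512 / 1024^3" | bc -l       # size of slice 0 in GiB, roughly 279.45
echo "3 * 586055949 * 512 / 1024^3" | bc -l   # three slices together, roughly 838 GiB of raw space[/b]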
On Sun, Jun 11, 2006 at 05:58:20PM -0700, Nathanael Burton wrote:
> I'm seeing odd behaviour when I create a ZFS raidz pool using three
> disks. The output of "zpool list" shows the pool size as the size
> of the three disks combined (as if it were a RAID-0 volume). This
> isn't expected behaviour, is it? When I create a mirrored volume in
> ZFS, everything is as one would expect: the pool is the size of a
> single drive.

You're hitting 6288488 "du reports misleading size on RAID-Z". This bug
was fixed in build 42. Note that you must create your pool under build
42 or later for the fix to take effect. Upgrading an existing
raidz-based pool to build 42 will not change the behavior.

--matt
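
Roughly, the recreate looks like this (a sketch only, assuming the same pool and device names as in your transcript; destroying the pool discards any data on it):

[b]cat /etc/release                                # confirm the running build is 42 or later
zpool destroy sata                              # the fix only applies to pools created under the fixed bits
zpool create sata raidz c2t0d0 c2t1d0 c2t2d0
zfs list sata                                   # AVAIL should now reflect usable space after parity[/b]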
Nathanael Burton
2006-Jun-13 01:05 UTC
[zfs-discuss] Re: ZFS + Raid-Z pool size incorrect?
Thanks for the help! So I BFU'd to the following:

bash-3.00# uname -a
SunOS mathrock-opensolaris 5.11 opensol-20060605 i86pc i386 i86pc

I blew away all my old ZFS pools and created a new raidz pool with my
three disks. The file system now correctly reports the right size, and
df/du report the right size, but "zpool list" still shows the size of
the three disks combined. Is it still supposed to show that? I would
think you would want the pool size to show you the total and available
capacities of the pool, not the size of the disks added together. While
it's not a performance problem or a particularly serious bug, it's
definitely "bugging" me!

Thanks,

Nate

> You're hitting 6288488 "du reports misleading size on RAID-Z". This bug
> was fixed in build 42. Note that you must create your pool under build
> 42 or later for the fix to take effect. Upgrading an existing
> raidz-based pool to build 42 will not change the behavior.
>
> --matt
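
PS: to be explicit about the comparison I'm making (the comments are just my understanding of what each command is counting):

[b]zpool list sata                       # SIZE/AVAIL here still include the parity, i.e. all three disks
zfs list -o name,used,avail sata      # AVAIL here is the usable space left after parity[/b]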
On Mon, Jun 12, 2006 at 06:05:06PM -0700, Nathanael Burton wrote:
> I blew away all my old ZFS pools and created a new raidz pool with my
> three disks. The file system now correctly reports the right size,
> and df/du report the right size, but "zpool list" still shows the size
> of the three disks combined. Is it still supposed to show that? I
> would think you would want the pool size to show you the total and
> available capacities of the pool, not the size of the disks added
> together.

Yep, this is a different known bug (which has always existed, but the
difference was previously less severe):

6308817 discrepancy between zfs list and zpool iostat usage stats

--matt
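
As a rule of thumb while the two views disagree: for an N-disk single-parity raidz, the usable figure should be roughly (N-1)/N of the raw figure zpool list reports. A throwaway shell helper (hypothetical, purely for the arithmetic) makes that easy to eyeball:

[b]raidz_usable() { echo "$1 * ($2 - 1) / $2" | bc -l; }   # usage: raidz_usable <raw size> <number of disks>
raidz_usable 838 3                                      # prints roughly 558.7, about what zfs list should show for this pool[/b]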