Johan Andersson
2009-Feb-07 11:05 UTC
[zfs-discuss] zpool list vs zfs list, size differs...
Hi,

New to OpenSolaris and ZFS...
Wondering about a size difference I see on my newly installed
OpenSolaris system, a homebuilt AMD Phenom system with SATA3 disks...

[code]
johan at krynn:~$ zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   696G  7.67G   688G   1%  ONLINE  -
zpool  2.72T   135K  2.72T   0%  ONLINE  -

johan at krynn:~$ zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    11.5G   674G    72K  /rpool
rpool/ROOT               3.78G   674G    18K  legacy
rpool/ROOT/opensolaris   3.78G   674G  3.65G  /
rpool/dump               3.87G   674G  3.87G  -
rpool/export             18.7M   674G    19K  /export
rpool/export/home        18.7M   674G    50K  /export/home
rpool/export/home/admin  18.6M   674G  18.6M  /export/home/admin
rpool/swap               3.87G   677G    16K  -
zpool                    94.3K  2.00T  26.9K  /zpool

johan at krynn:~$ zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0s0  ONLINE       0     0     0
            c4d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: zpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0
            c4d1    ONLINE       0     0     0
            c6d0    ONLINE       0     0     0
            c6d1    ONLINE       0     0     0

errors: No known data errors
[/code]

The disks are all 750GB SATA3 disks, so why does zpool list the raidz
zpool as 2.72TB while zfs lists the /zpool filesystem as 2.0TB?
Is this a limit of my server in some way, or something I can "tune" up?

/Johan A
On 07 February, 2009 - Johan Andersson sent me these 1,5K bytes:

> Hi,
>
> New to OpenSolaris and ZFS...
> Wondering about a size difference I see on my newly installed
> OpenSolaris system, a homebuilt AMD Phenom system with SATA3 disks...
>
> [code]
> johan at krynn:~$ zpool list
> NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> rpool   696G  7.67G   688G   1%  ONLINE  -
> zpool  2.72T   135K  2.72T   0%  ONLINE  -

The pool has disks that can hold ...
4*750000000000/1024/1024/1024/1024 =~ 2.72TB

> johan at krynn:~$ zfs list
> NAME                      USED  AVAIL  REFER  MOUNTPOINT
> rpool                    11.5G   674G    72K  /rpool
> rpool/ROOT               3.78G   674G    18K  legacy
> rpool/ROOT/opensolaris   3.78G   674G  3.65G  /
> rpool/dump               3.87G   674G  3.87G  -
> rpool/export             18.7M   674G    19K  /export
> rpool/export/home        18.7M   674G    50K  /export/home
> rpool/export/home/admin  18.6M   674G  18.6M  /export/home/admin
> rpool/swap               3.87G   677G    16K  -
> zpool                    94.3K  2.00T  26.9K  /zpool

In that pool, due to raidz, you can store about ...
3*750000000000/1024/1024/1024/1024 =~ 2TB

> The disks are all 750GB SATA3 disks, so why does zpool list the raidz
> zpool as 2.72TB while zfs lists the /zpool filesystem as 2.0TB?
> Is this a limit of my server in some way, or something I can "tune" up?

Space worth about 1x750GB is lost to parity with raidz..

/Tomas
-- 
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
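Tomas's arithmetic above can be reproduced in a few lines. A minimal sketch (the 750GB drive size and the 4-disk raidz1 layout are taken from the thread; the exact figures zpool/zfs print differ slightly due to rounding and pool metadata overhead):

```python
# Raw vs. usable capacity of a 4-disk raidz1 of "750GB" drives.
# Drive vendors quote decimal bytes; zpool/zfs report binary units
# (the "T" in their output is TiB = 1024^4 bytes).
DISK_BYTES = 750_000_000_000   # one 750GB drive, decimal bytes
TIB = 1024 ** 4

disks = 4
parity = 1                     # raidz1 spends one disk's worth on parity

raw_tib = disks * DISK_BYTES / TIB              # what `zpool list` counts
usable_tib = (disks - parity) * DISK_BYTES / TIB  # what `zfs list` counts

print(f"raw:    {raw_tib:.2f} TiB")     # ~2.73 TiB (zpool shows 2.72T)
print(f"usable: {usable_tib:.2f} TiB")  # ~2.05 TiB (zfs shows ~2.00T)
```

The remaining gap between the ~2.05 TiB here and the 2.00T that `zfs list` reports goes to pool metadata and internal reservations.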
Johan Andersson
2009-Feb-07 12:30 UTC
[zfs-discuss] zpool list vs zfs list, size differs...
Tomas Ögren wrote:
> On 07 February, 2009 - Johan Andersson sent me these 1,5K bytes:
>
>> Hi,
>>
>> New to OpenSolaris and ZFS...
>> Wondering about a size difference I see on my newly installed
>> OpenSolaris system, a homebuilt AMD Phenom system with SATA3 disks...
>>
>> [code]
>> johan at krynn:~$ zpool list
>> NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
>> rpool   696G  7.67G   688G   1%  ONLINE  -
>> zpool  2.72T   135K  2.72T   0%  ONLINE  -
>
> The pool has disks that can hold ...
> 4*750000000000/1024/1024/1024/1024 =~ 2.72TB
>
>> johan at krynn:~$ zfs list
>> NAME                      USED  AVAIL  REFER  MOUNTPOINT
>> rpool                    11.5G   674G    72K  /rpool
>> rpool/ROOT               3.78G   674G    18K  legacy
>> rpool/ROOT/opensolaris   3.78G   674G  3.65G  /
>> rpool/dump               3.87G   674G  3.87G  -
>> rpool/export             18.7M   674G    19K  /export
>> rpool/export/home        18.7M   674G    50K  /export/home
>> rpool/export/home/admin  18.6M   674G  18.6M  /export/home/admin
>> rpool/swap               3.87G   677G    16K  -
>> zpool                    94.3K  2.00T  26.9K  /zpool
>
> In that pool, due to raidz, you can store about ...
> 3*750000000000/1024/1024/1024/1024 =~ 2TB
>
>> The disks are all 750GB SATA3 disks, so why does zpool list the raidz
>> zpool as 2.72TB while zfs lists the /zpool filesystem as 2.0TB?
>> Is this a limit of my server in some way, or something I can "tune" up?
>
> Space worth about 1x750GB is lost to parity with raidz..
>
> /Tomas

Thanks, I didn't realize that the pool counted in the parity data...
I should have, though... if I had bothered to calc it. *duh*

/Johan A