I've an interesting situation. I've created two pools, one named "Data" and
another named "raid5". Check the details here:

bash-3.00# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
Data   10.7T  9.82T   892G  91%  ONLINE  -
raid5  10.9T   145K  10.9T   0%  ONLINE  -

As you see, the sizes are approximately the same. If I run the df command,
it reports:

bash-3.00# df -h /Data
Filesystem  size  used  avail  capacity  Mounted on
Data         11T  108M   154G        1%  /Data
bash-3.00# df -h /raid5
Filesystem  size  used  avail  capacity  Mounted on
raid5       8.9T   40K   8.9T        1%  /raid5

You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has
10.9 TB in zpool but only 8.9 TB according to df. That's a difference of
2 TB. Where did it go?

Any explanation would be fine.

Regards,

Lars-Gunnar Persson
On 09 March, 2009 - Lars-Gunnar Persson sent me these 1,1K bytes:

> I've an interesting situation. I've created two pools, one named
> "Data" and another named "raid5". Check the details here:
>
> bash-3.00# zpool list
> NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> Data   10.7T  9.82T   892G  91%  ONLINE  -
> raid5  10.9T   145K  10.9T   0%  ONLINE  -
>
> As you see, the sizes are approximately the same. If I run the df
> command, it reports:
>
> bash-3.00# df -h /Data
> Filesystem  size  used  avail  capacity  Mounted on
> Data         11T  108M   154G        1%  /Data
> bash-3.00# df -h /raid5
> Filesystem  size  used  avail  capacity  Mounted on
> raid5       8.9T   40K   8.9T        1%  /raid5
>
> You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has
> 10.9 TB in zpool but only 8.9 TB according to df. That's a difference
> of 2 TB. Where did it go?

To your raid5 (raidz) parity.

Check 'zpool status' to see how your two pools differ. zpool list shows
the raw disk space you have; zfs/df shows how much you can actually
store there.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
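A quick way to see the distinction Tomas is drawing is to put the pool-level
and dataset-level views side by side. A minimal sketch, assuming the "raid5"
pool name from the original post; the exact figures will differ:

# Raw pool capacity, parity included (what zpool list reports):
zpool list raid5

# Usable capacity as the filesystem sees it, parity excluded
# (the same view that df gives):
zfs list -o name,used,avail raid5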
Here is what zpool status reports:
bash-3.00# zpool status
pool: Data
state: ONLINE
scrub: none requested
config:
NAME                     STATE     READ WRITE CKSUM
Data                     ONLINE       0     0     0
  c4t5000402001FC442Cd0  ONLINE       0     0     0
errors: No known data errors
pool: raid5
state: ONLINE
scrub: none requested
config:
NAME                                       STATE     READ WRITE CKSUM
raid5                                      ONLINE       0     0     0
  raidz1                                   ONLINE       0     0     0
    c7t6000402001FC442C609DCA2200000000d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCA4A00000000d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCAA200000000d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCABF00000000d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCADB00000000d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCAF800000000d0  ONLINE       0     0     0
errors: No known data errors
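With six disks in a single raidz1 vdev, roughly one disk's worth of capacity
goes to parity, which is where the "missing" 2 TB went. A rough
back-of-the-envelope check, assuming the sizes reported above (metadata
overhead and rounding account for the rest of the gap):

# n-disk raidz1: usable space is roughly raw * (n-1)/n
echo "10.9 * 5 / 6" | bc -l    # ~9.08T, close to the 8.9T that df reports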
On 9 March 2009, at 14:29, Tomas Ögren wrote:
> On 09 March, 2009 - Lars-Gunnar Persson sent me these 1,1K bytes:
>
>> I've an interesting situation. I've created two pools, one named
>> "Data" and another named "raid5". Check the details here:
>>
>> bash-3.00# zpool list
>> NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
>> Data   10.7T  9.82T   892G  91%  ONLINE  -
>> raid5  10.9T   145K  10.9T   0%  ONLINE  -
>>
>> As you see, the sizes are approximately the same. If I run the df
>> command, it reports:
>>
>> bash-3.00# df -h /Data
>> Filesystem  size  used  avail  capacity  Mounted on
>> Data         11T  108M   154G        1%  /Data
>> bash-3.00# df -h /raid5
>> Filesystem  size  used  avail  capacity  Mounted on
>> raid5       8.9T   40K   8.9T        1%  /raid5
>>
>> You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has
>> 10.9 TB in zpool but only 8.9 TB according to df. That's a difference
>> of 2 TB. Where did it go?
>
> To your raid5 (raidz) parity.
>
> Check 'zpool status' to see how your two pools differ. zpool list shows
> the raw disk space you have; zfs/df shows how much you can actually
> store there.
>
> /Tomas
> --
> Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
> |- Student at Computing Science, University of Umeå
> `- Sysadmin at {cs,acc}.umu.se
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
.----------------------------------------------------------------------------.
|Lars-Gunnar Persson                                                          |
|Head of IT                                                                   |
|                                                                             |
|Nansen senteret for miljø og fjernmåling                                     |
|Address : Thormøhlensgate 47, 5006 Bergen                                    |
|Direct  : 55 20 58 31, switchboard: 55 20 58 00, fax: 55 20 58 01            |
|Internet: http://www.nersc.no, e-mail: lars-gunnar.persson at nersc.no        |
'----------------------------------------------------------------------------'
This was enlightening! Thanks a lot and sorry for the noise.

Lars-Gunnar Persson

On 9 March 2009, at 14:27, Tim wrote:

> On Mon, Mar 9, 2009 at 7:07 AM, Lars-Gunnar Persson
> <lars-gunnar.persson at nersc.no> wrote:
>
>> I've an interesting situation. I've created two pools, one named
>> "Data" and another named "raid5". Check the details here:
>>
>> bash-3.00# zpool list
>> NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
>> Data   10.7T  9.82T   892G  91%  ONLINE  -
>> raid5  10.9T   145K  10.9T   0%  ONLINE  -
>>
>> As you see, the sizes are approximately the same. If I run the df
>> command, it reports:
>>
>> bash-3.00# df -h /Data
>> Filesystem  size  used  avail  capacity  Mounted on
>> Data         11T  108M   154G        1%  /Data
>> bash-3.00# df -h /raid5
>> Filesystem  size  used  avail  capacity  Mounted on
>> raid5       8.9T   40K   8.9T        1%  /raid5
>>
>> You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has
>> 10.9 TB in zpool but only 8.9 TB according to df. That's a difference
>> of 2 TB. Where did it go?
>>
>> Any explanation would be fine.
>>
>> Regards,
>>
>> Lars-Gunnar Persson
>
> Parity drives. zpool list shows the total size including parity drives;
> df shows what is usable after subtracting the parity drives.
>
> --Tim