Rodrigo E. De León Plicet
2009-May-31 15:29 UTC
[zfs-discuss] ZFS, difference in reported available space?
Hi.
Just to report the following.
Thanks for your time.
Regards.
****************************************
Forwarded conversation
Subject: Difference in reported available space?
------------------------
From: Rodrigo E. De León Plicet <rdeleonp at gmail.com>
Date: Wed, May 27, 2009 at 1:00 PM
To: zfs-fuse at googlegroups.com
Sorry if the following is a dumb question.
Related to the attached file, I just want to understand why, if 'zpool
list' reports 191MB available for coolpool, 'df -h|grep cool' only
shows 159MB available for coolpool?
Thanks for your time.
Regards.
----------
From: Rodrigo E. De León Plicet <rdeleonp at gmail.com>
Date: Sat, May 30, 2009 at 8:51 PM
To: zfs-fuse at googlegroups.com
Anyone?
----------
From: Fajar A. Nugraha <fajar at fajar.net>
Date: Sun, May 31, 2009 at 5:34 AM
To: zfs-fuse at googlegroups.com
Possibly upstream bug.
http://markmail.org/message/zmygvaarfvseipzx
Better ask the Sun folks, as it still happens on the latest OpenSolaris
(2009.06) as well.
--
Fajar
****************************************
-------------- next part --------------
root at localhost:/# uname -a
Linux newage 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009 i686 GNU/Linux
root at localhost:/# zpool upgrade -v
This system is currently running ZFS pool version 13.
The following versions are supported:
VER DESCRIPTION
--- --------------------------------------------------------
1 Initial ZFS version
2 Ditto blocks (replicated metadata)
3 Hot spares and double parity RAID-Z
4 zpool history
5 Compression using the gzip algorithm
6 bootfs pool property
7 Separate intent log devices
8 Delegated administration
9 refquota and refreservation properties
10 Cache devices
11 Improved scrub performance
12 Snapshot properties
13 snapused property
For more information on a particular version, including supported releases, see:
http://www.opensolaris.org/os/community/zfs/version/N
Where 'N' is the version number.
root at localhost:/# for i in 1 2 3 4; do dd if=/dev/zero of=disk$i bs=1024k count=100; done
root at localhost:/# du -h disk*
101M disk1
101M disk2
101M disk3
101M disk4
root at localhost:/# zpool create coolpool mirror /disk1 /disk2
root at localhost:/# zpool add coolpool mirror /disk3 /disk4
root at localhost:/# zpool status
pool: coolpool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        coolpool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk3  ONLINE       0     0     0
            /disk4  ONLINE       0     0     0
errors: No known data errors
root at localhost:/# zpool list
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
coolpool   191M    78K   191M   0%  ONLINE  -
root at localhost:/# df -h|grep cool
coolpool 159M 18K 159M 1% /coolpool
On Sun, May 31, 2009 at 10:29 AM, Rodrigo E. De León Plicet <rdeleonp at gmail.com> wrote:
> Related to the attached file, I just want to understand why, if 'zpool
> list' reports 191MB available for coolpool, 'df -h|grep cool' only
> shows 159MB available for coolpool?

That's not a bug, and the behavior isn't changing. zpool list shows the
size of the entire pool, including parity disks. You cannot write user
data to parity disks/devices. I suppose you could complain that the
zpool list command should account for user data as well as parity data
being subtracted from the entire pool, but regardless, you shouldn't be
using zpool list to track your data usage, as it doesn't "hide" parity
space the way the standard userland utilities do.

df shows the space available minus the parity device(s).

--Tim
Rodrigo E. De León Plicet
2009-Jun-07 21:53 UTC
[zfs-discuss] ZFS, difference in reported available space?
On Sun, May 31, 2009 at 12:57 PM, Tim <tim at tcsac.net> wrote:
> On Sun, May 31, 2009 at 10:29 AM, Rodrigo E. De León Plicet
> <rdeleonp at gmail.com> wrote:
>>
>> Related to the attached file, I just want to understand why, if 'zpool
>> list' reports 191MB available for coolpool, 'df -h|grep cool' only
>> shows 159MB available for coolpool?
>
> That's not a bug, and the behavior isn't changing. zpool list shows the
> size of the entire pool, including parity disks. You cannot write user
> data to parity disks/devices. I suppose you could complain that the
> zpool list command should account for user data as well as parity data
> being subtracted from the entire pool, but regardless, you shouldn't be
> using zpool list to track your data usage as it doesn't "hide" parity
> space like the standard userland utilities.
>
> df shows the space available minus the parity device(s).

But, in this case, I'm using mirroring. Wouldn't the ZFS parity
overhead apply to raidz/raidz2 only?

I confess I'm not 100% clear on the concepts...

Thanks for your time.

Regards.
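[For context, the raw capacity in the attached session works out roughly as below. This is a back-of-the-envelope sketch only; attributing the remaining ~9MB gap between the ~200MB computed here and the 191MB that zpool list reports to vdev labels and pool metadata is an assumption, not something stated in the thread.]

```shell
# Four 100MB file-backed vdevs arranged as two two-way mirrors:
# mirroring halves raw capacity, so ~200MB of pool space remains.
disks=4
disk_mb=100
raw_mb=$((disks * disk_mb))      # total raw space across all vdevs
usable_mb=$((raw_mb / 2))        # two-way mirror keeps one copy's worth
echo "raw=${raw_mb}MB mirrored~${usable_mb}MB"
```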
Rodrigo E. De León Plicet
2009-Jun-07 22:36 UTC
[zfs-discuss] ZFS, difference in reported available space?
Tim,

Nevermind. I should have Read The Fine Manual (tm) from the start.

Says zpool(1M):

-------------------------------------------------------------------
(...)

zpool list (...)

(...) This command reports actual physical space available to the
storage pool. The physical space can be different from the total
amount of space that any contained datasets can actually use. The
amount of space used in a raidz configuration depends on the
characteristics of the data being written. In addition, ZFS reserves
some space for internal accounting that the zfs(1M) command takes
into account, but the zpool command does not. For non-full pools of a
reasonable size, these effects should be invisible. For small pools,
or pools that are close to being completely full, these discrepancies
may become more noticeable. (...)

-------------------------------------------------------------------

Thanks for your time.

Regards.
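[Plugging the session's numbers into the man page's explanation gives a rough sense of how large the discrepancy is on a small pool. This is a back-of-the-envelope calculation, not an exact accounting of ZFS's internal reservation; the man page only says the effect is more noticeable on small or near-full pools.]

```shell
# Numbers from the attached session: zpool list showed 191M total,
# df showed 159M usable for /coolpool.
pool_mb=191
fs_mb=159
overhead_mb=$((pool_mb - fs_mb))           # space df never sees
pct=$((overhead_mb * 100 / pool_mb))       # integer percentage of the pool
echo "${overhead_mb}MB (~${pct}%) of the pool is invisible to df"
```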